TWI756681B - Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same - Google Patents


Info

Publication number
TWI756681B
TWI756681B (application TW109115444A)
Authority
TW
Taiwan
Prior art keywords
artificial intelligence
medical
evaluation
facial
module
Prior art date
Application number
TW109115444A
Other languages
Chinese (zh)
Other versions
TW202044279A (en)
Inventor
李至偉
Original Assignee
李至偉
Priority date
Filing date
Publication date
Application filed by 李至偉
Publication of TW202044279A
Application granted
Publication of TWI756681B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Abstract

The present invention discloses an artificial intelligence assisted evaluation method applied to aesthetic medicine, performed by an artificial intelligence identification and analysis module. A real-time facial expression evaluation result for a subject is input into the module, which then draws on at least one of a medical knowledge rule module and a historical database of aesthetic medical auxiliary evaluation results to execute an artificial intelligence identification and analysis procedure. The module generates and outputs an aesthetic medical auxiliary evaluation result, so that a personalized cosmetic therapeutic effect can be achieved by performing an aesthetic medical procedure based on that result.

Description

Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same

The present invention relates to an artificial intelligence (AI) assisted evaluation method, and more particularly to an AI-assisted evaluation method that is based on facial expressions and applied to aesthetic medicine, as well as a system using the method.

Aesthetic medicine, particularly minimally invasive facial cosmetic procedures, has become widely popular and accepted among people of all ages.

Conventionally, facial cosmetic treatments rely mainly on the physician's own professional skill and knowledge, applied through general or standardized procedures, sometimes supplemented by the physician's individual judgment drawn from clinical experience. Because these conventional approaches usually lack in-depth, customized consideration of each patient, there is often a gap, larger or smaller, between the actual post-treatment result and the outcome expected beforehand.

Worse still, some actual treatment outcomes are degraded by a physician's erroneous individual judgment, producing inferior post-operative results and, in turn, more medical disputes and deficiencies in care.

Accordingly, how to provide more and better assistive methods and tools for personalized aesthetic medical needs is a technical problem the aesthetic medicine industry currently seeks to solve.

The main objective of the present invention is to provide an artificial intelligence (AI) assisted evaluation method for aesthetic medicine based on facial expressions, and a system using the method, that address personalized aesthetic medical needs and thereby remedy the deficiencies of the prior art described above.

The implementation concept of this application is to first use a facial expression evaluation module to produce a personalized real-time facial expression evaluation result, and then, based on that result, to execute an artificial intelligence identification and analysis procedure that combines the functional medical anatomy rules and dynamic medical anatomy rules of a medical knowledge rule module with a historical database of aesthetic medical auxiliary evaluation results, so as to produce an aesthetic medical auxiliary evaluation result and provide exclusive, personalized aesthetic medical recommendations.

To achieve the above objective, in a first preferred embodiment the present invention provides an artificial intelligence assisted evaluation method applied to aesthetic medicine, comprising at least the following steps: providing a real-time facial expression evaluation result of a subject; inputting the real-time facial expression evaluation result into an artificial intelligence identification and analysis module; and selecting at least one of a medical knowledge rule module and a historical database of aesthetic medical auxiliary evaluation results to execute an artificial intelligence identification and analysis procedure, and generating and outputting an aesthetic medical auxiliary evaluation result.
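The three steps of the first embodiment can be sketched as a minimal pipeline. All names and return values below are illustrative assumptions, not identifiers or data from the patent:

```python
# Minimal sketch of the claimed method: evaluate the facial expression,
# feed the result to an AI analysis step, and produce an auxiliary
# evaluation. Everything here is a hypothetical stand-in.

def evaluate_expression(face_image):
    """Stand-in for the facial expression evaluation module."""
    # A real system would run face detection and action-unit coding here.
    return {"static": {"AU1": 0.2, "AU12": 0.7}, "dynamic": {"AU12": 0.3}}

def ai_identify_and_analyze(expression_result, knowledge_rules=None, history_db=None):
    """Stand-in for the AI identification and analysis procedure.

    At least one of `knowledge_rules` and `history_db` must be supplied,
    mirroring the "at least one of" language in the claim.
    """
    if knowledge_rules is None and history_db is None:
        raise ValueError("select at least one of rules module or history database")
    # Placeholder auxiliary evaluation result (treatment sites in
    # preferred order, plus a filler type and dose).
    return {"treatment_sites": ["glabella", "crow's feet"],
            "filler": {"type": "hyaluronic acid", "dose_ml": 1.0}}

result = ai_identify_and_analyze(evaluate_expression(None),
                                 knowledge_rules={"functional": [], "dynamic": []})
print(result["treatment_sites"][0])
```

The guard clause makes the claim's "at least one of" selection explicit: calling the procedure with neither source raises an error rather than analyzing without context.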

Preferably, the method further comprises the step of feeding back and storing the aesthetic medical auxiliary evaluation result into at least one of the medical knowledge rule module and the historical database of aesthetic medical auxiliary evaluation results.

Preferably, the output aesthetic medical auxiliary evaluation result at least includes a combination and preferred order of evaluated treatment sites for the subject, or such a combination and preferred order together with a type and dose of injectable filler.

Preferably, the medical knowledge rule module further includes a functional medical anatomy rule and a dynamic medical anatomy rule.

Preferably, the real-time facial expression evaluation result includes a static expression evaluation result, or the static expression evaluation result together with a dynamic expression evaluation result.

Preferably, providing the real-time facial expression evaluation result further comprises the following steps: dividing a face into a plurality of facial action units according to the medical knowledge rule module; forming a plurality of emotion index combinations according to at least one of the static expression evaluation result and the dynamic expression evaluation result; and forming the real-time facial expression evaluation result according to the proportional result of each emotion index combination.

Preferably, the emotion index combinations are at least one of a positive emotion index combination and a negative emotion index combination.

Preferably, the negative emotion index combination is one or a combination of a sadness index, an anger index, a worry index, a surprise index, a fear index, a contempt index, and a disgust index.

Preferably, the positive emotion index combination is one or a combination of a happiness index, a joy index, a satisfaction index, a feeling-moved index, a positivity index, and a relaxation index.
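As a data-structure sketch, the two index families can be kept as plain sets, and the "proportional result" of each index obtained by normalizing per-index scores. The index names mirror the lists above; the scores are made up:

```python
# Illustrative grouping of the emotion indices into negative and positive
# families, and a simple normalization into proportional results.

NEGATIVE = {"sadness", "anger", "worry", "surprise", "fear", "contempt", "disgust"}
POSITIVE = {"happiness", "joy", "satisfaction", "moved", "positivity", "relaxation"}

def proportions(scores):
    """Turn raw per-index scores into each index's proportional result."""
    total = sum(scores.values())
    return {name: round(score / total, 3) for name, score in scores.items()}

raw = {"happiness": 3.0, "worry": 1.0, "relaxation": 1.0}  # hypothetical scores
props = proportions(raw)
print(props["happiness"])  # → 0.6
```

Keeping the families as sets also makes it trivial to report an aggregate positive-versus-negative split by summing the proportions within each set.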

Preferably, the historical database of aesthetic medical auxiliary evaluation results includes a plurality of aesthetic medical auxiliary evaluation results, which the artificial intelligence identification and analysis module uses to execute an artificial intelligence deep learning/training procedure according to at least one artificial intelligence deep learning/training algorithm.

Preferably, the artificial intelligence deep learning/training procedure comprises the following steps: providing the aesthetic medical auxiliary evaluation results, wherein each such result at least includes basic data of a historical subject, a facial expression evaluation result, personal facial features, a functional medical anatomy rule and a dynamic medical anatomy rule from the medical knowledge rule module, a combination and preferred order of evaluated treatment sites, and a type and dose of injectable filler; and inputting these aesthetic medical auxiliary evaluation results into the artificial intelligence identification and analysis module.
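The fields enumerated for each historical record map naturally onto a typed training example. The dataclass below is a hypothetical sketch of such a record, with field names chosen for illustration rather than taken from the patent:

```python
# Hypothetical schema for one historical training record; the fields
# mirror the enumeration in the claim: basic data, expression result,
# facial features, anatomy rules, treatment-site ordering, filler info.
from dataclasses import dataclass, field

@dataclass
class HistoricalEvaluation:
    gender: str
    age: int
    expression_result: dict            # facial expression evaluation result
    facial_features: dict              # wrinkle / contour / skin features
    functional_rules: list = field(default_factory=list)
    dynamic_rules: list = field(default_factory=list)
    treatment_sites: list = field(default_factory=list)  # preferred order
    filler_type: str = ""
    filler_dose_ml: float = 0.0

record = HistoricalEvaluation(
    gender="F", age=42,
    expression_result={"worry": 0.5},
    facial_features={"static_wrinkles": ["glabellar lines"]},
    treatment_sites=["glabella", "forehead"],
    filler_type="hyaluronic acid", filler_dose_ml=0.8,
)
print(record.treatment_sites[0])
```

A list of such records is then what "inputting these aesthetic medical auxiliary evaluation results into the module" would amount to in a concrete training loop.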

Preferably, the personal facial features include a static wrinkle feature of a habitual expression, a static contour line feature, or a skin texture feature.

Preferably, the at least one artificial intelligence deep learning/training algorithm is at least one of an artificial neural network algorithm and a deep learning algorithm.

In a second preferred embodiment, the present invention provides an electronic device using the aforementioned artificial intelligence assisted evaluation method, the electronic device at least comprising: a facial expression evaluation module for providing the real-time facial expression evaluation result; the artificial intelligence identification and analysis module, which includes the artificial intelligence identification and analysis procedure that receives the real-time facial expression evaluation result and generates the aesthetic medical auxiliary evaluation result; and an input/output module that outputs the aesthetic medical auxiliary evaluation result, wherein the artificial intelligence identification and analysis module receives at least one personal facial feature from at least one of the facial expression evaluation module and the input/output module.

Preferably, the electronic device is connected to at least one of the historical database of aesthetic medical auxiliary evaluation results and the medical knowledge rule module via at least one of a wireless transmission method and a wired transmission method.

Preferably, the electronic device is a handheld smart mobile device, a personal computer (PC), or a stand-alone smart device.

In a third preferred embodiment, the present invention provides an artificial intelligence assisted evaluation system for aesthetic medicine, at least comprising: a facial expression evaluation module that provides a real-time facial expression evaluation result of a subject; and an artificial intelligence identification and analysis module connected to the facial expression evaluation module, wherein the artificial intelligence identification and analysis module receives the real-time facial expression evaluation result and, according to at least one of a connected medical knowledge rule module and a connected historical database of aesthetic medical auxiliary evaluation results, executes an artificial intelligence identification and analysis procedure and adaptively generates and outputs an aesthetic medical auxiliary evaluation result.

Preferably, the artificial intelligence identification and analysis module feeds back and stores the aesthetic medical auxiliary evaluation result into at least one of the medical knowledge rule module and the historical database of aesthetic medical auxiliary evaluation results.

Preferably, the aesthetic medical auxiliary evaluation result at least includes a combination and preferred order of evaluated treatment sites for the subject, or such a combination and preferred order together with a type and dose of injectable filler.

Preferably, the medical knowledge rule module further includes a functional medical anatomy rule and a dynamic medical anatomy rule.

Preferably, the real-time facial expression evaluation result includes a static expression evaluation result, or the static expression evaluation result together with a dynamic expression evaluation result.

Preferably, the facial expression evaluation module includes: a facial image capture unit for performing an image capture operation to obtain a real-time facial image; and a facial action coding unit that, according to the real-time facial image and the medical knowledge rule module, divides a real-time facial action presented in the image into a plurality of facial action units, wherein the static expression evaluation result and the dynamic expression evaluation result are formed according to the change between a detection result of each facial action unit and another detection result of at least one other facial action unit.

Preferably, at least one of the facial expression evaluation module and the artificial intelligence identification and analysis module further includes an emotion analysis and facial recognition unit that forms a plurality of emotion index combinations according to at least one of the static expression evaluation result and the dynamic expression evaluation result, wherein the real-time facial expression evaluation result is formed according to the proportional result of each of the emotion index combinations.

Preferably, the emotion index combinations are at least one of a positive emotion index combination and a negative emotion index combination.

Preferably, the negative emotion index combination is one or a combination of a sadness index, an anger index, a worry index, a surprise index, a fear index, a contempt index, and a disgust index.

Preferably, the positive emotion index combination is one or a combination of a happiness index, a joy index, a satisfaction index, a feeling-moved index, a positivity index, and a relaxation index.

Preferably, the historical database of aesthetic medical auxiliary evaluation results includes a plurality of aesthetic medical auxiliary evaluation results, which the artificial intelligence identification and analysis module uses to execute an artificial intelligence deep learning/training procedure according to at least one artificial intelligence deep learning/training algorithm.

Preferably, each aesthetic medical auxiliary evaluation result at least includes one or a combination of: basic data of a historical subject, a facial expression evaluation result, personal facial features, a functional medical anatomy rule and a dynamic medical anatomy rule from the medical knowledge rule module, a combination and preferred order of evaluated treatment sites, and a type and dose of injectable filler.

Preferably, the personal facial features include a static wrinkle feature of a habitual expression, a static contour line feature, or a skin texture feature.

Preferably, the personal facial features are provided to the artificial intelligence identification and analysis module from at least one of the facial expression evaluation module and an input/output module.

Preferably, the artificial intelligence deep learning/training procedure consists of inputting the aesthetic medical auxiliary evaluation results into the artificial intelligence identification and analysis module.

Preferably, the at least one artificial intelligence deep learning/training algorithm is at least one of an artificial neural network algorithm and a deep learning algorithm.

Preferably, the facial expression evaluation module, an input/output module, and the artificial intelligence identification and analysis module are assembled to form an electronic device, wherein the electronic device is a handheld smart mobile device, a personal computer (PC), or a stand-alone smart device.

Preferably, the electronic device is connected to at least one of the historical database of aesthetic medical auxiliary evaluation results and the medical knowledge rule module via at least one of a wireless transmission method and a wired transmission method.

100: artificial intelligence assisted evaluation system
110: facial expression evaluation module
111: facial action coding unit
112: emotion analysis and facial recognition unit
120: artificial intelligence identification and analysis module
121: artificial intelligence identification and analysis procedure
130: medical knowledge rule module
131: functional medical anatomy rules
132: dynamic medical anatomy rules
140: historical database of aesthetic medical auxiliary evaluation results
141: aesthetic medical auxiliary evaluation results
150: input/output module
160: electronic device
220: artificial intelligence identification and analysis module
221: artificial intelligence deep learning/training procedure
230: medical knowledge rule module
231: functional medical anatomy rules
232: dynamic medical anatomy rules
240: historical database of aesthetic medical auxiliary evaluation results
241: aesthetic medical auxiliary evaluation results
A: real-time facial expression evaluation result
A': facial expression evaluation result
A1: static expression evaluation result
A2: dynamic expression evaluation result
A31-A33, A3n: emotion index combinations
AU1-AUn: facial action units of the static expression evaluation result
AU1'-AUn': facial action units of the dynamic expression evaluation result
B: basic data
B1: gender
B2: age
C, C1-Cn: combination and preferred order of evaluated treatment sites
P: personal facial features
P1, P1': static wrinkle features of habitual expressions
P2, P2': static contour line features
P3, P3': skin texture features
D: type of injectable filler
U: dose of injectable filler
R1, R11-R14: medical rules of the functional medical anatomy rules
R2, R21-R24: medical rules of the dynamic medical anatomy rules
S11-S14, S111-S113: process steps of the artificial intelligence assisted evaluation method
S31-S32: process steps of the artificial intelligence deep learning/training procedure

FIG. 1 is a conceptual diagram of a preferred implementation of the artificial intelligence assisted evaluation system of the inventive concept.

FIG. 2 is a flowchart of a preferred implementation of the artificial intelligence assisted evaluation method of the inventive concept.

FIG. 3 is a block diagram of a preferred implementation of the facial expression evaluation module of FIG. 1.

FIG. 4 is a flowchart of a preferred implementation of the facial expression evaluation module of FIG. 3.

FIG. 5 is a conceptual diagram of a preferred implementation of the facial action coding unit of FIG. 3.

FIG. 6A is a conceptual diagram of a specific implementation in which the facial action coding unit of FIG. 3 produces static and dynamic expression evaluation results.

FIG. 6B is a conceptual diagram of a specific implementation in which the emotion analysis and facial recognition unit of this application quantitatively analyzes the activation intensity of the facial muscle groups of each facial action unit under different emotional expressions.

FIG. 7A is a conceptual diagram of a preferred implementation of the artificial intelligence deep learning/training architecture of the artificial intelligence identification and analysis module of the inventive concept.

FIG. 7B is a conceptual diagram of a preferred implementation of the historical database of aesthetic medical auxiliary evaluation results used by the artificial intelligence identification and analysis module of FIG. 7A.

FIG. 8 is a flowchart of a preferred implementation of the artificial intelligence deep learning/training procedure of FIG. 7A.

FIGS. 9A to 9F are conceptual diagrams of a first preferred implementation applying the artificial intelligence assisted evaluation method and system of the inventive concept.

FIG. 10 is a conceptual diagram of a second preferred implementation applying the artificial intelligence assisted evaluation method and system of the inventive concept.

FIG. 11 is a conceptual diagram of a third preferred implementation applying the artificial intelligence assisted evaluation method and system of the inventive concept.

The following embodiments are provided for detailed description; they serve only as illustrative examples and do not limit the scope of protection of the present invention. In addition, the drawings of the embodiments omit elements that are unnecessary or achievable with ordinary techniques, so as to clearly show the technical features of the present invention. The preferred embodiments of the present invention are further described below with reference to the drawings.

Please refer to FIG. 1 and FIG. 2, which are respectively a conceptual diagram of a preferred implementation of the artificial intelligence assisted evaluation system of the inventive concept, and a flowchart of a preferred implementation of the artificial intelligence assisted evaluation method.

As shown in FIG. 1, the artificial intelligence assisted evaluation system 100 of this application, which is based on facial expressions and applied to aesthetic medicine, includes: a facial expression evaluation module 110, an artificial intelligence identification and analysis module 120, a medical knowledge rule module 130, a historical database 140 of aesthetic medical auxiliary evaluation results, and an input/output module 150.

The facial expression evaluation module 110 at least includes a facial action coding unit 111, an emotion analysis and facial recognition unit 112, and a facial image capture unit 113. The artificial intelligence identification and analysis module 120 at least includes an artificial intelligence identification and analysis procedure 121. The medical knowledge rule module 130 at least includes functional medical anatomy rules 131 and dynamic medical anatomy rules 132. The historical database 140 at least includes a plurality of aesthetic medical auxiliary evaluation results 1-N.

Furthermore, the input/output module 150 receives input of, or outputs, various kinds of information, for example: receiving the subject's basic data B1-B2 and/or personal facial features P1-P3 and passing them to the artificial intelligence identification and analysis module 120; or outputting the combination and preferred order C1-Cn of evaluated treatment sites, and/or the type D and dose U of injectable filler, received from the artificial intelligence identification and analysis module 120.

The facial expression evaluation module 110, the artificial intelligence identification and analysis module 120, and the input/output module 150 can be assembled into an electronic device 160, which may be a handheld smart mobile device, a personal computer (PC), or a stand-alone smart device. For example, the electronic device 160 may be a tablet computer, a smart mobile device, a notebook computer, a desktop computer, a stand-alone smart device, or a stand-alone smart module, where the smart device or smart module can be assembled into, or separated from, a medical apparatus (not shown).

Moreover, the electronic device 160 is connected to the historical database 140 and/or the medical knowledge rule module 130 via wireless and/or wired transmission. For example, the historical database 140 and/or the medical knowledge rule module 130 may be stored on a cloud storage platform, and the electronic device 160 may connect to that platform via various local/wide area networks (not shown).

Next, please refer to FIG. 2, which shows the artificial intelligence assisted evaluation method S10 applied in the aforementioned system 100, read together with FIG. 1. The method S10 includes the following steps. Step S11: provide the subject's real-time facial expression evaluation result A, together with the subject's basic data B1-B2 and/or personal facial features P1-P3.

其中，即時人臉表情評估結果A可包括由人臉表情評估模組110所提供之靜態表情評估結果A1、動態表情評估結果A2以及依據靜態表情評估結果A1與動態表情評估結果A2中之至少一者而產生之多個情緒指標組合A31-A3n。 Here, the real-time facial expression evaluation result A may include the static expression evaluation result A1 and the dynamic expression evaluation result A2 provided by the facial expression evaluation module 110, as well as a plurality of emotion index combinations A31-A3n generated from at least one of the static expression evaluation result A1 and the dynamic expression evaluation result A2.

另外，受測者之基本資料B1-B2及/或個人臉部特徵P1-P3，除了可由輸出輸入模組150提供之作法外，於其他較佳實施例中，也可改由人臉表情評估模組110直接提供受測者之另一個人臉部特徵P1'-P3'；且前述做法可以擇一或分工進行，其皆不脫離本案之發明構想。 In addition, besides being supplied by the input/output module 150, the subject's basic data B1-B2 and/or personal facial features P1-P3 may, in other preferred embodiments, instead be supplied directly by the facial expression evaluation module 110 as another set of personal facial features P1'-P3'; either approach, or a division of labor between the two, remains within the inventive concept of this application.

其次，執行步驟S12：輸入前述即時人臉表情評估結果A，以及受測者之基本資料B1-B2及/或個人臉部特徵P1-P3至人工智慧辨識分析模組120中。 Next, step S12 is executed: the aforementioned real-time facial expression evaluation result A, together with the subject's basic data B1-B2 and/or personal facial features P1-P3, is input into the artificial intelligence recognition and analysis module 120.

接著，執行步驟S13：人工智慧辨識分析模組120可選擇搭配醫療知識規則模組130及美容醫療輔助評估結果歷史資料庫140中之至少一者，以執行人工智慧辨識分析程序121，與產生並輸出一美容醫療輔助評估結果；其中，人工智慧辨識分析模組120可以無線傳輸方式或有線傳輸方式，連接於醫療知識規則模組130及美容醫療輔助評估結果歷史資料庫140。 Next, step S13 is executed: the artificial intelligence recognition and analysis module 120 may optionally work with at least one of the medical knowledge rule module 130 and the aesthetic medical assisted evaluation result history database 140 to execute the artificial intelligence recognition and analysis program 121, and to generate and output an aesthetic medical assisted evaluation result; the module 120 can be connected to the medical knowledge rule module 130 and the history database 140 by wireless or wired transmission.

再者，人工智慧辨識分析模組120可依據醫療知識規則模組130中之功能性醫學解剖規則131以及動態性醫學解剖規則132，及/或美容醫療輔助評估結果歷史資料庫140中之多個美容醫療輔助評估結果1-N，而適應地產生對應該受測者之美容醫療輔助評估結果；其中，美容醫療輔助評估結果至少包括該受測者之評估治療部位結果的組合與優選順序C1-Cn。 Furthermore, the artificial intelligence recognition and analysis module 120 can, according to the functional medical anatomy rules 131 and dynamic medical anatomy rules 132 in the medical knowledge rule module 130, and/or the plural aesthetic medical assisted evaluation results 1-N in the history database 140, adaptively generate an aesthetic medical assisted evaluation result corresponding to the subject; this result at least includes a combination and preferred order C1-Cn of evaluated treatment sites for the subject.
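The patent does not specify how the module 120 orders the evaluated treatment sites into the combination and preferred order C1-Cn; the following Python sketch shows one plausible, purely illustrative rule, where the function name, the per-site score dictionary and the positive-score cutoff are all assumptions rather than part of the disclosure:

```python
def rank_treatment_sites(site_scores, top_n=None):
    """Order candidate treatment sites (facial action units) by a
    hypothetical per-site contribution score, highest first, keeping
    only sites with a positive score; the result plays the role of
    the combination and preferred order C1-Cn in the text."""
    ranked = sorted(site_scores.items(), key=lambda kv: kv[1], reverse=True)
    sites = [site for site, score in ranked if score > 0]
    return sites[:top_n] if top_n else sites
```

For example, `rank_treatment_sites({"AU15": 0.6, "AU4": 0.9, "AU1": 0.0})` would return `["AU4", "AU15"]`, i.e. AU4 as C1 and AU15 as C2 under these assumed scores.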

當然，前述美容醫療輔助評估結果還可再包括對應於該評估治療部位結果的組合與優選順序C1-Cn，而產生關於注射填充劑種類D與填充劑劑量U之輔助評估結果。 Of course, the aforementioned aesthetic medical assisted evaluation result may further include, corresponding to the combination and preferred order C1-Cn of evaluated treatment sites, assisted evaluation results regarding the injection filler type D and the filler dose U.

此外，人工智慧辨識分析模組120可藉由圖1所示之輸出輸入模組150，輸出或顯示評估治療部位結果的組合與優選順序C1-Cn及/或注射填充劑的種類D與填充劑的劑量U。 In addition, the artificial intelligence recognition and analysis module 120 can, via the input/output module 150 shown in FIG. 1, output or display the combination and preferred order C1-Cn of evaluated treatment sites and/or the injection filler type D and filler dose U.

另外，關於本案中步驟S13之另一較佳實施做法(圖未示出)，本案人工智慧辨識分析模組120亦可因應即時人臉表情評估結果A，並依據已內建的各種醫療知識規則及/或美容醫療輔助評估結果，而直接執行前述人工智慧辨識分析程序121，並產生另一美容醫療輔助評估結果；前述另一較佳作法之實施特點在於，無需額外搭配外掛的前述醫療知識規則模組130及前述美容醫療輔助評估結果歷史資料庫140中之至少一者。 In addition, in another preferred implementation of step S13 (not shown), the artificial intelligence recognition and analysis module 120 may respond to the real-time facial expression evaluation result A and directly execute the aforementioned artificial intelligence recognition and analysis program 121 according to various built-in medical knowledge rules and/or aesthetic medical assisted evaluation results, thereby producing another aesthetic medical assisted evaluation result; the feature of this alternative is that no externally attached medical knowledge rule module 130 or aesthetic medical assisted evaluation result history database 140 is required.

本案之一更佳作法，係更包括執行步驟S14：反饋並儲存美容醫療輔助評估結果至醫療知識規則模組130及美容醫療輔助評估結果歷史資料庫140中之至少一者；其中，人工智慧辨識分析模組120可儲存每一受測者的美容醫療輔助評估結果於美容醫療輔助評估結果歷史資料庫140中，以利於人工智慧辨識分析模組120依據至少一人工智慧深度學習/訓練演算法而執行一人工智慧深度學習/訓練程序，且此部分將於後述補充說明，在此先不贅述。 A further preferred practice of this application additionally executes step S14: feed back and store the aesthetic medical assisted evaluation result into at least one of the medical knowledge rule module 130 and the history database 140; the artificial intelligence recognition and analysis module 120 can store each subject's aesthetic medical assisted evaluation result in the history database 140, so that the module 120 can execute an artificial intelligence deep learning/training program according to at least one artificial intelligence deep learning/training algorithm; this part is supplemented later and is not repeated here.

當然，前述步驟S14於其他實施例中亦可以選擇不予實施，且此皆屬於本案的各種均等變更實施態樣。 Of course, step S14 may also be omitted in other embodiments, and such variants all belong to equivalent implementations of this application.

更詳細來說，關於前述步驟S11中之人臉表情評估模組110如何提供即時人臉表情評估結果A，還可更包括下列多個流程步驟。請再參閱圖3及圖4，其分別為本案發明概念中之人臉表情評估模組的較佳實施概念圖，以及人臉表情評估模組的較佳實施流程圖。 In more detail, how the facial expression evaluation module 110 in step S11 provides the real-time facial expression evaluation result A involves several further steps. Please refer again to FIG. 3 and FIG. 4, which are, respectively, a conceptual diagram and a flowchart of a preferred implementation of the facial expression evaluation module in the inventive concept of this application.

如圖3所示，本案之人臉表情評估模組110包括：臉部動作編碼單元111與情緒分析及臉部辨識單元112、臉部影像擷取單元113；其中，臉部影像擷取單元113於進行一影像擷取動作(例如，以一攝像裝置(圖未示出)進行拍攝)後，即可取得一即時臉部影像，並予以輸出至臉部動作編碼單元111；之後，臉部動作編碼單元111依據前述即時臉部影像與醫療知識規則模組130，而將影像中所呈現的一即時臉部動作予以細分為多個臉部動作單元AU1-AUn(請配合參閱後述之圖5之相關說明)。 As shown in FIG. 3, the facial expression evaluation module 110 of this application includes: a facial action coding unit 111, an emotion analysis and face recognition unit 112, and a facial image capturing unit 113. After the facial image capturing unit 113 performs an image capture action (for example, shooting with a camera device, not shown), a real-time facial image is obtained and output to the facial action coding unit 111; the facial action coding unit 111 then, according to the real-time facial image and the medical knowledge rule module 130, subdivides the real-time facial action presented in the image into a plurality of facial action units AU1-AUn (see the description of FIG. 5 below).

至於情緒分析及臉部辨識單元112，其則包括多個情緒指標組合A31-A3n，且可為屬於正面情緒指標組合，抑或為屬於負面情緒指標組合；其中，前述正面情緒指標組合係指包括，例如：開心指標、快樂指標、滿足指標、感動指標、積極指標及放鬆指標中之一者或及其組合，而前述負面情緒指標組合係指包括，例如：悲傷指標、生氣指標、擔憂指標、驚訝指標、害怕指標、鄙視指標及討厭指標中之一者或及其組合。 As for the emotion analysis and face recognition unit 112, it covers a plurality of emotion index combinations A31-A3n, each belonging either to a positive emotion index combination or to a negative emotion index combination; the positive combination includes, for example, one or a combination of a happiness index, a joy index, a satisfaction index, a feeling-moved index, a positivity index and a relaxation index, while the negative combination includes, for example, one or a combination of a sadness index, an anger index, a worry index, a surprise index, a fear index, a contempt index and a disgust index.

接著，請再搭配參閱圖4，執行步驟S11更包括下列步驟，執行步驟S111：運用人臉表情評估模組110，依據醫療知識規則模組130，而使一臉部肌肉動作區分為多個臉部動作單元AU1-AUn；其中，臉部動作編碼單元111可依據醫療知識規則模組130中之功能性醫學解剖規則131以及動態性醫學解剖規則132而用以區分各種臉部肌肉動作，且前述多個醫學解剖規則，例如可為：各個肌肉群與鄰接之肌肉群之間的連動關係，及/或各個肌肉群的功能等。 Next, referring also to FIG. 4, step S11 further includes the following. Step S111 is executed: use the facial expression evaluation module 110, according to the medical knowledge rule module 130, to divide a facial muscle action into a plurality of facial action units AU1-AUn; the facial action coding unit 111 can distinguish the various facial muscle actions according to the functional medical anatomy rules 131 and the dynamic medical anatomy rules 132 in the medical knowledge rule module 130, where these medical anatomy rules may be, for example, the linkage relationship between each muscle group and its neighboring muscle groups, and/or the function of each muscle group.

舉例來說，請搭配參閱圖5，係為前述之臉部動作編碼單元111的較佳實施概念示意圖。如圖5所示，本案之臉部動作編碼單元111可將眉頭處之皺眉肌定義為臉部動作單元AU4(Face Action Unit 4)，抑或將可能使口角下垂之降口角肌(depressor anguli oris)及頦肌(mentalis)分別定義為臉部動作單元AU15(Face Action Unit 15)及臉部動作單元AU17(Face Action Unit 17)。 For example, please refer to FIG. 5, a conceptual diagram of a preferred implementation of the aforementioned facial action coding unit 111. As shown in FIG. 5, the facial action coding unit 111 of this application can define the corrugator muscle at the brow as Face Action Unit 4 (AU4), or define the depressor anguli oris, which may cause the corners of the mouth to droop, and the mentalis as Face Action Unit 15 (AU15) and Face Action Unit 17 (AU17), respectively.
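As an illustration only, the AU-to-muscle coding named in this paragraph can be held in a small lookup table; the structure below is an assumption for clarity (restricted to the units named here), not the patent's actual implementation:

```python
# Minimal sketch of the facial-action-unit coding described above.
# AU numbers follow the FACS convention; only the muscle assignments
# named in this paragraph are included, so this is not a full FACS table.
FACE_ACTION_UNITS = {
    "AU4": "corrugator supercilii (brow-furrowing muscle)",
    "AU15": "depressor anguli oris (pulls the mouth corners down)",
    "AU17": "mentalis (chin raiser)",
}

def muscles_for(au_codes):
    """Return the muscle description for each known AU code,
    skipping codes outside this illustrative table."""
    return [FACE_ACTION_UNITS[c] for c in au_codes if c in FACE_ACTION_UNITS]
```

A caller could then translate a detected code list such as `["AU4", "AU15"]` into the corresponding muscle groups for display.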

其次，接續執行步驟S112：依據該靜態表情評估結果A1以及該動態表情評估結果A2中之至少一者，而形成複數情緒指標組合A31-A3n；其中，臉部動作編碼單元111依據每一臉部動作單元與另一臉部動作單元之間因應人臉情緒變化而連動或作動的組合與肌肉作動的強度，形成該靜態表情評估結果A1以及該動態表情評估結果A2。 Next, step S112 is executed: a plurality of emotion index combinations A31-A3n are formed according to at least one of the static expression evaluation result A1 and the dynamic expression evaluation result A2; the facial action coding unit 111 forms the static expression evaluation result A1 and the dynamic expression evaluation result A2 from the combinations in which each facial action unit moves or acts in concert with another facial action unit in response to changes in facial emotion, together with the intensity of the muscle actions.

請再以圖3搭配參閱圖6A，其中圖6A係為以本案人臉表情評估模組110中的臉部動作編碼單元111產生前述靜態表情評估結果A1與前述動態表情評估結果A2之具體實施概念示意圖；其中，本案之人臉表情評估模組110先行偵測受測者在靜態與動態中所表現出的人臉表情變化(如左圖的靜態變化至右圖的動態)，之後，可例如依據所偵測到的臉部動作單元AU1、臉部動作單元AU4以及臉部動作單元AU15，搭配組成的該靜態表情評估結果A1以及該動態表情評估結果A2。 Please refer to FIG. 6A together with FIG. 3; FIG. 6A is a conceptual diagram of how the facial action coding unit 111 in the facial expression evaluation module 110 produces the static expression evaluation result A1 and the dynamic expression evaluation result A2. The facial expression evaluation module 110 first detects the changes in the subject's facial expression in both static and dynamic conditions (from the static state in the left image to the dynamic state in the right image), and then, for example, composes the static expression evaluation result A1 and the dynamic expression evaluation result A2 from the detected facial action units AU1, AU4 and AU15.

其中，如圖6A的左側圖所示者，前述靜態表情評估結果A1可為受測者無任何情緒時之多個臉部動作單元AU1-AUn的各個靜態參數值，抑或使受測者於臉部放鬆的狀態下錄製一段短片，此舉可察覺及分析出受測者之多個臉部肌肉群是否會不自覺地施力或作動，相當於肌肉群之間的連動性跟動態性。 As shown on the left of FIG. 6A, the static expression evaluation result A1 may consist of the static parameter values of the facial action units AU1-AUn when the subject shows no emotion; alternatively, a short clip may be recorded with the subject's face relaxed, from which it can be detected and analyzed whether any facial muscle groups exert force or move involuntarily, which corresponds to the linkage and dynamics between the muscle groups.

而前述動態表情評估結果A2，則如圖6A的右側所示者，係為受測者依據不同情緒而呈現出不同人臉表情，例如其可為：悲傷情緒；當然，前述動態表情評估結果A2還可以包括更多其他生氣表情、大笑表情...等不同的情緒表情。 The dynamic expression evaluation result A2, shown on the right of FIG. 6A, captures the different facial expressions the subject presents under different emotions, for example sadness; of course, the dynamic expression evaluation result A2 may also include further emotional expressions such as angry expressions, laughing expressions and so on.

接著，再請以圖3搭配參閱圖6B，其中圖6B係為以本案中的情緒分析及臉部辨識單元112進一步定量分析每一臉部動作單元的臉部肌肉群於不同情緒表情時的作動強度，以提供更精準的動態參數值之具體實施概念示意圖。亦即，本案之情緒分析及臉部辨識單元112可依據該靜態表情評估結果A1以及該動態表情評估結果A2中之至少一者而形成前述多個情緒指標組合A31-A3n。 Next, please refer to FIG. 6B together with FIG. 3; FIG. 6B is a conceptual diagram of how the emotion analysis and face recognition unit 112 of this application further quantitatively analyzes the action intensity of the facial muscle group of each facial action unit under different emotional expressions, so as to provide more precise dynamic parameter values. That is, the emotion analysis and face recognition unit 112 can form the aforementioned emotion index combinations A31-A3n according to at least one of the static expression evaluation result A1 and the dynamic expression evaluation result A2.

本案之人臉表情評估模組110，係可例如將人臉情緒表情區分為7大類定義，其中除了中性(Neutral)表情之外，還包括：開心(Happy)、悲傷(Sad)、生氣(Angry)、驚訝(Surprise)、恐懼(Fear/Scared)、噁心(Disgusted)等表情定義，此舉相當於本案前述之多種情緒指標組合A31-A3n。 The facial expression evaluation module 110 of this application may, for example, classify facial emotional expressions into seven major categories: besides the Neutral expression, these include Happy, Sad, Angry, Surprise, Fear/Scared and Disgusted, which corresponds to the aforementioned emotion index combinations A31-A3n.

其中，本案之臉部動作編碼單元111定義悲傷情緒表情的特徵可為眉毛下滑(Eyebrow down)、皺眉紋(Frown line)、八字眉(Clock time eyebrow 8:20)、倒U(Reverse U)、眉頭上揚(Brow head up)、黑眼圈(Dark eyes)、下唇突出(Lower lip protruding)、唇緊閉(Lip tight)、嘴角下垂(Lip corners droop)、下巴突出(Chin bulges)等等多個臉部動作單元的編碼，而上述悲傷情緒表情的特徵，可作為情緒分析及臉部辨識單元112判斷悲傷指標，以及區別悲傷指標與其他情緒指標的依據(圖未示出)。 Here, the facial action coding unit 111 of this application may define the features of a sad emotional expression as the codes of multiple facial action units such as eyebrow down, frown line, clock-time eyebrow 8:20, reverse U, brow head up, dark eyes, lower lip protruding, lip tight, lip corners drooping and chin bulges; these features of the sad expression serve as the basis on which the emotion analysis and face recognition unit 112 determines the sadness index and distinguishes it from the other emotion indexes (not shown).

也就是說，本案可運用多個臉部動作單元AU1-AUn，並藉由識別臉部各項肌肉群所綜合表現出來跟情緒相關的情緒表情特徵，而綜合分析臉部表情所要呈現或傳達的真正情緒。如此一來，情緒分析及臉部辨識單元112便能藉由靜態表情評估結果A1與動態表情評估結果A2之間的微表情，從而客觀地辨識及分析出人臉表情的細微變化，進而更精確地評估多個臉部動作單元AU1-AUn之間的作動化而予以量化。 In other words, this application can use the facial action units AU1-AUn and, by identifying the emotion-related expression features jointly exhibited by the various facial muscle groups, comprehensively analyze the true emotion a facial expression presents or conveys. In this way, the emotion analysis and face recognition unit 112 can use the micro-expressions between the static expression evaluation result A1 and the dynamic expression evaluation result A2 to objectively recognize and analyze subtle changes in facial expression, and thereby more precisely evaluate and quantify the actions among the facial action units AU1-AUn.
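A hedged sketch of the quantification idea above: comparing per-AU intensities between the static (relaxed) result and a dynamic (expressive) result isolates the muscle groups driving a micro-expression. The function and its dictionary keys are illustrative assumptions, not the patent's actual algorithm:

```python
def micro_expression_deltas(static_aus, dynamic_aus):
    """Per-AU intensity change between the relaxed (static) frame and an
    expressive (dynamic) frame; large positive deltas flag the muscle
    groups that drive the displayed emotion. Missing AUs in the dynamic
    frame are treated as zero intensity."""
    return {au: round(dynamic_aus.get(au, 0.0) - base, 2)
            for au, base in static_aus.items()}
```

For instance, an AU1 baseline of 0.1 rising to 0.6 under a sad expression yields a delta of 0.5 for AU1, marking it as a contributor to the sadness index.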

當然，於不同較佳實施例中，情緒分析及臉部辨識單元可被組裝結合或安裝載入於人臉表情評估模組及人工智慧辨識分析模組中之至少一者，以使其可依據靜態表情評估結果以及動態表情評估結果中之至少一者，而形成上述多個情緒指標組合。 Of course, in different preferred embodiments, the emotion analysis and face recognition unit can be assembled into, or installed and loaded in, at least one of the facial expression evaluation module and the artificial intelligence recognition and analysis module, so that it can form the above emotion index combinations according to at least one of the static expression evaluation result and the dynamic expression evaluation result.

最後，執行步驟S113：依據每一該情緒指標組合之比例結果而形成該即時人臉表情評估結果A；其中，即時人臉表情評估結果A包括多個情緒指標組合A31-A3n之比例結果，例如：生氣指標為14.1%、悲傷指標為35.2%...等等。 Finally, step S113 is executed: the real-time facial expression evaluation result A is formed from the proportion of each emotion index combination; that is, the result A includes the proportional results of the emotion index combinations A31-A3n, for example an anger index of 14.1%, a sadness index of 35.2%, and so on.
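The percentage breakdown in step S113 amounts to normalizing per-emotion scores so they sum to 100%. A minimal illustrative sketch follows; the raw-score inputs are assumed, since the patent does not define how the underlying scores are produced:

```python
def emotion_proportions(raw_scores):
    """Normalize raw per-emotion scores into the percentage breakdown
    that makes up the real-time facial expression evaluation result A."""
    total = sum(raw_scores.values())
    if total == 0:
        return {emotion: 0.0 for emotion in raw_scores}
    return {emotion: round(100.0 * score / total, 1)
            for emotion, score in raw_scores.items()}
```

With assumed raw scores of 0.352 sadness, 0.141 anger, 0.177 fear and 0.330 neutral, this reproduces the sample figures quoted in the text (35.2% sadness, 14.1% anger).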

接續說明，用以辨識分析即時人臉表情評估結果A之人工智慧辨識分析模組120，如何執行人工智慧深度學習/訓練。請參閱圖7A及圖7B，其分別係為人工智慧辨識分析模組之人工智慧深度學習/訓練架構的較佳實施概念圖，以及運用於人工智慧辨識分析模組之美容醫療輔助評估結果歷史資料庫的較佳實施概念圖。 The description now turns to how the artificial intelligence recognition and analysis module 120, which recognizes and analyzes the real-time facial expression evaluation result A, performs artificial intelligence deep learning/training. Please refer to FIG. 7A and FIG. 7B, which are, respectively, a conceptual diagram of a preferred implementation of the module's deep learning/training framework, and a conceptual diagram of a preferred implementation of the aesthetic medical assisted evaluation result history database used by the module.

如圖7A及圖7B所示，人工智慧辨識分析模組之人工智慧深度學習/訓練架構包括：人工智慧辨識分析模組220、醫療知識規則模組230以及美容醫療輔助評估結果歷史資料庫240；其中，人工智慧辨識分析模組220包括人工智慧深度學習/訓練程序221，醫療知識規則模組230包括功能性醫學解剖規則231以及動態性醫學解剖規則232，而美容醫療輔助評估結果歷史資料庫240包括多個美容醫療輔助評估結果241。 As shown in FIG. 7A and FIG. 7B, the artificial intelligence deep learning/training framework of the artificial intelligence recognition and analysis module includes: an artificial intelligence recognition and analysis module 220, a medical knowledge rule module 230 and an aesthetic medical assisted evaluation result history database 240; the module 220 includes an artificial intelligence deep learning/training program 221, the medical knowledge rule module 230 includes functional medical anatomy rules 231 and dynamic medical anatomy rules 232, and the history database 240 includes a plurality of aesthetic medical assisted evaluation results 241.

其中，每一美容醫療輔助評估結果至少包括歷來各受測者名稱、基本資料B、受測時之人臉表情評估結果A'、個人臉部特徵P、功能性醫學解剖規則231以及動態性醫學解剖規則232中的多個醫療規則R1,R2、評估治療部位結果的組合與優選順序之優選組合C1,C2以及注射填充劑的種類D與填充劑的劑量U等等。 Each aesthetic medical assisted evaluation result at least includes the name of each past subject, basic data B, the facial expression evaluation result A' at the time of testing, personal facial features P, the medical rules R1, R2 among the functional medical anatomy rules 231 and the dynamic medical anatomy rules 232, the preferred combination C1, C2 of the combination and preferred order of evaluated treatment sites, the injection filler type D and the filler dose U, and so on.

於實際應用時，基本資料B可包括性別B1及年齡B2；人臉表情評估結果A'至少包括：靜態表情評估結果A1、動態表情評估結果A2以及多個情緒指標組合A31-A33；其中，靜態表情評估結果A1可為受測者無任何情緒時之多個臉部動作單元AU1-AUn的各個靜態參數值；動態表情評估結果A2可為受測者依據不同情緒而產生之多個臉部動作單元AU1'-AUn'的各個動態參數值，情緒指標組合A31-A33可例如包括：負面情緒指標組合之害怕指標A31、生氣指標A32及鄙視指標A33，抑或可包括正面情緒指標組合之開心指標A31、感動指標A32及滿足指標A33。 In practice, the basic data B may include gender B1 and age B2; the facial expression evaluation result A' at least includes the static expression evaluation result A1, the dynamic expression evaluation result A2 and a plurality of emotion index combinations A31-A33; the static expression evaluation result A1 may be the static parameter values of the facial action units AU1-AUn when the subject shows no emotion, and the dynamic expression evaluation result A2 may be the dynamic parameter values of the facial action units AU1'-AUn' produced by the subject under different emotions. The emotion index combinations A31-A33 may include, for example, the fear index A31, anger index A32 and contempt index A33 of a negative emotion index combination, or the happiness index A31, feeling-moved index A32 and satisfaction index A33 of a positive emotion index combination.

再者，個人臉部特徵P可包括習慣表情之靜態紋路特徵P1、靜態輪廓線特徵P2或膚質特徵P3；其中，個人臉部特徵P1-P3可由人臉表情評估模組110及輸入輸出模組150中之至少一者而提供。 Furthermore, the personal facial features P may include static wrinkle features P1 of habitual expressions, static contour features P2 or skin texture features P3; the personal facial features P1-P3 may be provided by at least one of the facial expression evaluation module 110 and the input/output module 150.

至於功能性醫學解剖規則231之醫療規則R1，例如包括各個臉部肌肉群因應不同情緒表情的拉伸程度規則、張力程度規則R11-R14；又，動態性醫學解剖規則232之醫療規則R2，例如包括各個臉部肌肉群之間因應不同情緒表情的連動規則、收縮規則R21-R24。 As for the medical rules R1 of the functional medical anatomy rules 231, these include, for example, rules R11-R14 on the degree of stretch and tension of each facial muscle group under different emotional expressions; the medical rules R2 of the dynamic medical anatomy rules 232 include, for example, rules R21-R24 on the linkage and contraction between facial muscle groups under different emotional expressions.

最後，優選組合C1,C2可例如為受測者欲治療部位之多個臉部動作單元AU1-AUn中的一者或及其組合；注射填充劑的種類D可包括：水凝膠劑類W、肉毒桿菌素劑類X、透明質酸劑類Y以及膠原蛋白劑類Z。其中，水凝膠劑類W、透明質酸劑類Y以及膠原蛋白劑類Z，除了可減少人臉表情的靜態紋路，從而減少負面情緒指標組合(悲傷指標、生氣指標等)之外，還能增加正面情緒指標組合(開心指標、滿足指標等)。 Finally, the preferred combination C1, C2 may be, for example, one or a combination of the facial action units AU1-AUn at the site the subject wishes to treat; the injection filler types D may include hydrogel agents W, botulinum toxin agents X, hyaluronic acid agents Y and collagen agents Z. Among them, the hydrogel agents W, hyaluronic acid agents Y and collagen agents Z can not only reduce the static wrinkles of facial expressions and thereby reduce the negative emotion index combination (sadness index, anger index, etc.), but can also increase the positive emotion index combination (happiness index, satisfaction index, etc.).

當然，於本實施例中，前述之美容醫療輔助評估結果1-N可依據實際美容醫療的治療標的需求而調整，不應以本例為限制。且，前述之美容醫療輔助評估結果1-N的內容可為熟悉本技藝之人士進行各種均等的變更或設計，可依據受測者的美容醫療實際需求進而調整設計為適應的實施態樣。 Of course, in this embodiment the aforementioned aesthetic medical assisted evaluation results 1-N can be adjusted according to the needs of the actual aesthetic medical treatment target, and should not be limited to this example. Moreover, those skilled in the art may make various equivalent changes or designs to the content of the evaluation results 1-N, adapting the design to the subject's actual aesthetic medical needs.

接續說明，人工智慧辨識分析模組220如何依據上述之人工智慧深度學習/訓練架構，而執行人工智慧深度學習/訓練程序221。請參閱圖8，係為前述之人工智慧深度學習/訓練程序的較佳實施流程圖。 The description continues with how the artificial intelligence recognition and analysis module 220 executes the artificial intelligence deep learning/training program 221 based on the framework above. Please refer to FIG. 8, a flowchart of a preferred implementation of the aforementioned deep learning/training program.

如圖8所示，人工智慧深度學習/訓練程序221之流程步驟包括，執行步驟S31：提供該些美容醫療輔助評估結果241；其中，每一該美容醫療輔助評估結果至少包括：歷史受測者相關之基本資料B、人臉表情評估結果A'、個人臉部特徵P、該醫療知識規則模組230中之功能性醫學解剖規則231以及動態性醫學解剖規則232、評估治療部位結果的組合與優選順序以及注射填充劑的種類D與填充劑的劑量U；其中，該些美容醫療輔助評估結果241之內容為前述之圖7B所示者，當然靜態表情評估結果A1與動態表情評估結果A2可依據實際使用需求而選擇是否記錄於其中。 As shown in FIG. 8, the steps of the artificial intelligence deep learning/training program 221 include executing step S31: provide the aesthetic medical assisted evaluation results 241, each of which at least includes the basic data B of a past subject, the facial expression evaluation result A', the personal facial features P, the functional medical anatomy rules 231 and dynamic medical anatomy rules 232 of the medical knowledge rule module 230, the combination and preferred order of evaluated treatment sites, and the injection filler type D and filler dose U; the content of these evaluation results 241 is as shown in FIG. 7B above, and whether the static expression evaluation result A1 and the dynamic expression evaluation result A2 are recorded therein may of course be chosen according to actual usage needs.

之後，執行步驟S32：輸入該些美容醫療輔助評估結果241至該人工智慧辨識分析模組220，以供其執行人工智慧深度學習/訓練程序221；其中，人工智慧辨識分析模組220依據至少一人工智慧深度學習/訓練演算法，而執行人工智慧深度學習/訓練程序221。而人工智慧深度學習/訓練演算法係為機器學習演算法、人工神經網路演算法、模糊邏輯演算法、深度學習演算法或及其組合，其中本例係以人工神經網路演算法及深度學習演算法中之至少一者為佳。 Afterwards, step S32 is executed: input the aesthetic medical assisted evaluation results 241 into the artificial intelligence recognition and analysis module 220 so that it executes the artificial intelligence deep learning/training program 221; the module 220 executes the program 221 according to at least one artificial intelligence deep learning/training algorithm, which may be a machine learning algorithm, an artificial neural network algorithm, a fuzzy logic algorithm, a deep learning algorithm, or a combination thereof; in this example at least one of an artificial neural network algorithm and a deep learning algorithm is preferred.
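As a rough, purely illustrative stand-in for the deep learning/training program 221 (the patent names neural-network and deep-learning algorithms but gives no architecture), the sketch below trains a single logistic unit by gradient descent to map an emotion-index vector to a treat/no-treat label; every function name, sample value and hyperparameter here is an assumption:

```python
import math

def train_logistic(samples, labels, epochs=3000, lr=1.0):
    """Toy single-unit trainer standing in for program 221: gradient
    descent on a logistic neuron that maps an emotion-index vector
    (e.g. [sadness, anger] proportions) to a treat/no-treat label.
    A real implementation would use a deeper network and far more data."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            grad = p - y                      # dLoss/dz for log-loss
            weights = [w - lr * grad * xi for w, xi in zip(weights, x)]
            bias -= lr * grad
    return weights, bias

def predict(weights, bias, x):
    """1 = treatment suggested, 0 = not suggested."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0
```

With a handful of assumed training pairs (high sadness labeled 1, low sadness labeled 0), the learned unit separates new emotion vectors along the same boundary, which is the essential feedback loop step S14 enables at much larger scale.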

再則,將以後述多個較佳實施概念說明,如何藉由本案之人工智慧輔助評估方法及其系統,而進行美容醫療行為。其中,下列該些較佳實施例之主要治療標的需求為改善受測者1-3之臉部表情的負面情緒指標組合,且藉由減少或改善因應負面表情情緒而造成的負面情緒指標組合,從而提升個人魅力與人際關係。 Furthermore, a number of preferred implementation concepts will be described later, how to use the artificial intelligence-assisted evaluation method and system in this case to perform cosmetic medical behaviors. Among them, the main treatment target requirements of the following preferred embodiments are to improve the negative emotion index combination of the facial expressions of the subjects 1-3, and by reducing or improving the negative emotion index combination caused by the negative expression emotion, This enhances personal charisma and interpersonal relationships.

舉例來說,該些較佳實施例之主要治療目的為減少或改善不自覺的皺眉或嘴角下垂的人臉表情,以減少常給人生氣或嚴肅的感覺,甚至減少負面的微表情,且還可於其他較佳實施例中,進一步於正面情緒指標組合的部分予以加強,而達到更優質、精確且個人化美感的美容醫療效果。 For example, the main therapeutic purpose of these preferred embodiments is to reduce or improve involuntary frowning or drooping facial expressions, so as to reduce the feeling of being angry or serious, and even reduce negative micro-expressions, and also In other preferred embodiments, the part of the combination of positive emotion indicators can be further strengthened, so as to achieve a higher-quality, more accurate and personalized aesthetic beauty medical effect.

請參閱圖9A至圖9F,其係為應用本案發明概念中之人工智慧輔助評估方法及其系統的第一較佳實施概念示意圖。 Please refer to FIG. 9A to FIG. 9F , which are schematic diagrams of a first preferred implementation concept of applying the artificial intelligence-assisted evaluation method and system thereof in the inventive concept of the present application.

以圖9A至圖9F的內容,並搭配圖1至圖4、圖7B所示為例,藉由人臉表情評估模組110而偵測受測者1之多個臉部動作單元AU1-AUn,且依據每一臉部動作單元AU1-AUn之一偵測結果與另一臉部動作單元AU1-AUn之另一偵測結果之間的變化,而提供即時人臉表情評估結果A;其中,即時人臉表情評估結果A係依據靜態表情評估結果A1及動態表情評估結果A2而產生之每一情緒指標組合的比例結果而形成。 Taking the contents of FIGS. 9A to 9F in combination with FIGS. 1 to 4 and FIG. 7B as an example, the facial expression evaluation module 110 detects a plurality of facial action units AU1-AUn of the subject 1 , and according to the change between a detection result of each facial action unit AU1-AUn and another detection result of another facial action unit AU1-AUn, a real-time facial expression evaluation result A is provided; wherein, The real-time facial expression evaluation result A is formed according to the ratio result of each emotion index combination generated according to the static expression evaluation result A1 and the dynamic expression evaluation result A2.

如圖9A所示，受測者可能會不自覺地皺眉，或是因為老化關係而導致嘴角下垂，將該些臉部動作單元AU1-AUn所造成的微表情都記錄下來，並綜合形成靜態表情評估結果A1。 As shown in FIG. 9A, the subject may frown involuntarily, or the corners of the mouth may droop due to aging; the micro-expressions caused by these facial action units AU1-AUn are all recorded and combined into the static expression evaluation result A1.

此外,如圖9B所示,動態表情評估結果A2係為受測者1依據不同情緒而呈現出不同人臉表情,例如:生氣表情、大笑表情...等。 In addition, as shown in FIG. 9B , the dynamic expression evaluation result A2 is that the subject 1 presents different facial expressions according to different emotions, such as angry expressions, laughing expressions, etc.

接著，情緒分析及臉部辨識單元112進一步定量分析每一臉部動作單元的臉部肌肉群於不同情緒表情時的作動強度(包括前述的靜態表情評估結果A1與動態表情評估結果A2)，而提供更精準的動態參數值，以作為情緒指標組合A31-A33、評估治療部位結果的組合與優選順序C1-C2，以及注射填充劑的種類D與填充劑的劑量U的治療參考。 Next, the emotion analysis and face recognition unit 112 further quantitatively analyzes the action intensity of the facial muscle group of each facial action unit under different emotional expressions (including the aforementioned static expression evaluation result A1 and dynamic expression evaluation result A2), providing more precise dynamic parameter values as a treatment reference for the emotion index combinations A31-A33, the combination and preferred order C1-C2 of evaluated treatment sites, and the injection filler type D and filler dose U.

當然，於其他較佳實施例中，情緒分析及臉部辨識單元可為一流程步驟，其被內建並安裝載入於人工智慧辨識分析模組中，抑或是人工智慧辨識分析程序中之部分程序(圖未示出)。 Of course, in other preferred embodiments, the emotion analysis and face recognition unit may be a process step that is built in and loaded into the artificial intelligence recognition and analysis module, or a part of the artificial intelligence recognition and analysis program (not shown).

再則，如圖9C及圖9D所示，受測者1之多個情緒指標組合A31-A33分別為悲傷指標為35.2%、生氣指標為14.1%、害怕指標為17.7%，以及依據該些情緒指標組合A31-A33所對應之每一臉部動作單元AU1-AUn的相關資訊。 Furthermore, as shown in FIG. 9C and FIG. 9D, the emotion index combinations A31-A33 of subject 1 are a sadness index of 35.2%, an anger index of 14.1% and a fear index of 17.7%, together with the related information of each facial action unit AU1-AUn corresponding to these emotion index combinations A31-A33.

另一方面，人臉表情評估模組110還可搭配人臉三維(3D)模擬單元與膚質檢測單元，而進一步提供個人臉部特徵P之習慣表情的靜態紋路特徵P1、靜態輪廓線特徵P2或膚質特徵P3等(圖未示出)。 On the other hand, the facial expression evaluation module 110 can also work with a three-dimensional (3D) face simulation unit and a skin texture detection unit to further provide the personal facial features P, such as static wrinkle features P1 of habitual expressions, static contour features P2 or skin texture features P3 (not shown).

接續，如圖9E所示，輸入即時人臉表情評估結果A至人工智慧辨識分析模組120中，抑或人工智慧辨識分析模組120主動接收即時人臉表情評估結果A，並選擇是否搭配醫療知識規則模組130及美容醫療輔助評估結果歷史資料庫140中之至少一者，以執行人工智慧辨識分析程序121。之後，產生並輸出受測者1之美容醫療輔助評估結果，其中美容醫療輔助評估結果至少包括評估治療部位結果的組合與優選順序C1-C2，以及注射填充劑的種類D與填充劑的劑量U。 Then, as shown in FIG. 9E, the real-time facial expression evaluation result A is input into the artificial intelligence recognition and analysis module 120, or the module 120 actively receives the result A, and it is chosen whether to work with at least one of the medical knowledge rule module 130 and the aesthetic medical assisted evaluation result history database 140 to execute the artificial intelligence recognition and analysis program 121. Afterwards, the aesthetic medical assisted evaluation result of subject 1 is generated and output, which at least includes the combination and preferred order C1-C2 of evaluated treatment sites, as well as the injection filler type D and filler dose U.

In this embodiment, the auxiliary evaluation result recommends administering Botulinum Toxin 8 s.U. to the muscle group related to facial action unit AU1 (the inner frontalis), and Botulinum Toxin DAO 4 s.U. and Mentalis 4 s.U. to the muscle groups related to facial action unit AU15 (the depressor anguli oris) and facial action unit AU17 (the mentalis), respectively.

As a result, referring to FIG. 9F, a comparison of subject 1's facial expression evaluation results A' before treatment, one week after, and three weeks after shows that the sadness index of subject 1's face dropped from 35.2% to 0% after one week of treatment; in addition, the anger index of subject 1's face dropped from 14.1% before treatment to 7.8% after one week, and three weeks after the botulinum toxin treatment the anger index fell all the way to 0%.
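The before/after comparison in FIG. 9F amounts to tabulating each emotion index per timepoint and taking differences. The figures below come from the text; the table layout and helper are illustrative assumptions.

```python
# Emotion indices (%) of subject 1 at each evaluation timepoint (from FIG. 9F)
timeline = {
    "pre":    {"sadness": 35.2, "anger": 14.1},
    "week_1": {"sadness": 0.0,  "anger": 7.8},
    "week_3": {"sadness": 0.0,  "anger": 0.0},
}

def improvement(before: dict, after: dict) -> dict:
    """Absolute drop of each emotion index, in percentage points."""
    return {k: round(before[k] - after[k], 1) for k in before}

print(improvement(timeline["pre"], timeline["week_1"]))
# {'sadness': 35.2, 'anger': 6.3}
```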

On the other hand, for the aesthetic medicine auxiliary evaluation of the sadness index, the implementation of this application can also give another treatment reference guideline: for example, when the sadness index accounts for more than 10% of the total emotion index combination (the total expression), and facial action unit AU1, facial action unit AU4, and facial action unit AU15 of the facial action coding unit are all enhanced (i.e., their proportions increase), it is recommended that Botulinum Toxin type A be injected at the corresponding muscle group positions. Of course, by drawing on more case data (aesthetic medicine auxiliary evaluation results) and running the artificial intelligence deep learning/training procedure repeatedly, this application can clearly propose even better treatment recommendations and is not limited to the auxiliary evaluation results described above.
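The sadness-index guideline above can be written as an executable rule. The 10% threshold and the AU list come from the text; the function name and data shapes are assumptions for illustration.

```python
def sadness_rule(emotion_indices: dict, au_change: dict) -> bool:
    """True when sadness exceeds 10% of the total expression and AU1, AU4,
    and AU15 are all enhanced (positive change in their proportions)."""
    total = sum(emotion_indices.values())
    sadness_share = emotion_indices.get("sadness", 0.0) / total
    enhanced = all(au_change.get(au, 0.0) > 0 for au in ("AU1", "AU4", "AU15"))
    return sadness_share > 0.10 and enhanced

indices = {"sadness": 35.2, "anger": 14.1, "fear": 17.7, "neutral": 33.0}
change = {"AU1": +0.12, "AU4": +0.05, "AU15": +0.08}  # enhancement per AU
if sadness_rule(indices, change):
    print("Recommend Botulinum Toxin type A at the AU1/AU4/AU15 muscle groups")
```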

Furthermore, please refer to FIG. 10, which is a schematic diagram of a second preferred embodiment of the artificial intelligence-assisted evaluation method and system according to the inventive concept of this application.

As shown in FIG. 10, the facial expression evaluation module 110 detects a plurality of facial action units AU1-AUn of subject 2 and provides a real-time facial expression evaluation result A. From it, among subject 2's plural emotion index combinations, the neutral index is the highest at 26.3% and the sadness index is the second highest at 13.9%. Meanwhile, in the aesthetic medicine auxiliary evaluation result, the first recommended treatment site is facial action unit AU1, the main contributor to the sadness index, which is classified as the Inner Brow Raiser.

On this basis, combining the aforementioned real-time facial expression evaluation result A with the medical knowledge rule module 130 and the aesthetic medicine auxiliary evaluation result history database 140, the functional medical anatomy rules and the dynamic medical anatomy rules can point to the location of the muscle group that is highly correlated and moves in concert with facial action unit AU1. An aesthetic medicine auxiliary evaluation result for personalized treatment can then be provided, for example: a treatment reference recommendation to administer Botulinum Toxin 8 s.U. to the muscle group related to facial action unit AU1 (the inner frontalis).

In this way, comparing subject 2's facial expression evaluation results A' before treatment, one week after, and three weeks after shows that the sadness index of subject 2's face dropped from 13.9% to 8.4% after one week of treatment, and three weeks after the botulinum toxin treatment the sadness index fell all the way to 0%. In short, the aesthetic medicine treatment outcome obtained through the artificial intelligence-assisted evaluation method and system of this application is indeed very significant.

Next, please refer to FIG. 11, which is a schematic diagram of a third preferred embodiment of the artificial intelligence-assisted evaluation method and system according to the inventive concept of this application.

As shown in FIG. 11, the facial expression evaluation module 110 detects a plurality of facial action units AU1-AUn of subject 3 and provides a real-time facial expression evaluation result A. From it, an angry human expression, regardless of gender or age, is roughly related to the muscle groups of the aforementioned facial action units AU15 and AU17. Comparing subject 3's facial expression evaluation results A' before treatment, one week after, and three weeks after shows that the anger index of subject 3's face improved markedly after three weeks of treatment.

That is to say, for the aesthetic medicine auxiliary evaluation of the anger index, the implementation of this application can give another treatment reference guideline, for example: when the anger index exceeds 10%, and facial action unit AU15 and facial action unit AU17 of the facial action coding unit are both enhanced (i.e., their proportions increase), it is recommended that Botulinum Toxin type A be injected at the corresponding muscle groups (the depressor anguli oris and the mentalis).
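The two guideline paths (sadness and anger) can be kept as a small rule table and checked uniformly. The emotions, thresholds, and AU lists paraphrase the text; the table structure and function are illustrative assumptions.

```python
# Treatment reference guidelines paraphrased from the description
GUIDELINES = [
    {"emotion": "sadness", "threshold": 0.10,
     "aus": ("AU1", "AU4", "AU15"),
     "advice": "Botulinum Toxin type A at the corresponding muscle groups"},
    {"emotion": "anger", "threshold": 0.10,
     "aus": ("AU15", "AU17"),
     "advice": "Botulinum Toxin type A at depressor anguli oris and mentalis"},
]

def matching_advice(share: dict, au_change: dict) -> list:
    """Return the advice of every guideline whose emotion share exceeds its
    threshold and whose listed action units are all enhanced."""
    hits = []
    for g in GUIDELINES:
        over = share.get(g["emotion"], 0.0) > g["threshold"]
        enhanced = all(au_change.get(au, 0.0) > 0 for au in g["aus"])
        if over and enhanced:
            hits.append(g["advice"])
    return hits

share = {"anger": 0.14, "sadness": 0.05}       # emotion shares of total expression
change = {"AU15": +0.06, "AU17": +0.04}        # enhancement per action unit
print(matching_advice(share, change))
# ['Botulinum Toxin type A at depressor anguli oris and mentalis']
```

Keeping the rules as data rather than code is one way the rule set could grow as more case data and training iterations refine the recommendations.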

However, comparing the above preferred embodiments against the current conventional practice, in which aesthetic medicine treatment recommendations rest solely on a physician's personal judgment, it becomes clear that the conventional practice is indeed easily limited by the physician's personal experience and stereotypes. It cannot take every factor into account objectively, and it carries the major defect of missing the differences in each person's micro-expressions.

In detail, when the treatment target is reducing the anger index, physicians often use botulinum toxin to reduce the action of the corrugator muscle, overlooking that the muscle actions of each person's angry expression actually differ slightly: in a few people the corners of the mouth may pull down at the same time, or the chin muscle may contract and rise, and some people show slight movement on the inner side of the brow-raising muscles. Only part of these muscle movements may be visible to the naked eye, so the movements can be too subtle to detect, creating blind spots and misconceptions in treatment and in turn producing counterproductive medical outcomes and unnecessary medical disputes.

For example, for subject 1 in FIG. 9A to FIG. 9F, if the conventional aesthetic medicine treatment recommendation based on a physician's personal judgment were followed instead, the main treatment sites would be concentrated on facial action unit AU15 and facial action unit AU17; the physician would miss facial action unit AU1 of the anger index, so the cosmetic treatment outcome would be poor (not shown).

Next, for subject 2 in FIG. 10, the conventional recommendation based on a physician's personal judgment would usually make facial action unit AU2 the main treatment site, on the grounds that subject 2's orbicularis oculi causes the eyes to droop and hence produces the sadness index. However, after the cosmetic procedure, comparing subject 2's facial expression evaluation results A' before treatment, one week after, and three weeks after shows that the sadness index of subject 2's face dropped from 6.8% to 5.1% after one week of treatment but returned to 6.7% three weeks after the botulinum toxin treatment. The reason is that the physician misjudged subject 2's treatment site, so the cosmetic treatment was ineffective and the sadness index could not be effectively improved (not shown).

Finally, for subject 3 in FIG. 11, if the conventional recommendation based on a physician's personal judgment were followed instead, the main treatment would usually be injecting a total of 4 units of abobotulinumtoxinA at facial action unit AU17. However, after the cosmetic procedure, comparing subject 3's facial expression evaluation results A' before treatment, one week after, and three weeks after shows that the anger index of subject 3's face dropped from 10.9% to 5.9% after one week of treatment but rose to 13.9% three weeks after the botulinum toxin treatment. The reason is that the physician not only missed one of subject 3's treatment sites (facial action unit AU15) but also injected an insufficient dose of abobotulinumtoxinA at facial action unit AU17, so the subject's facial expression not only failed to improve but the anger index even rose, the opposite of the intended effect (not shown).

Accordingly, in the artificial intelligence-assisted evaluation method and system for aesthetic medicine of this application, the facial expression evaluation module 110 provides the subject's real-time facial expression evaluation result A, and the artificial intelligence recognition and analysis module then optionally combines the plural medical rules in the medical knowledge rule module with the aesthetic medicine auxiliary evaluation result history database to execute the artificial intelligence recognition and analysis procedure, thereby generating and outputting an aesthetic medicine auxiliary evaluation result that includes at least the combination and preferred order of the subject's evaluated treatment sites and/or the type and dose of injectable filler. In this way, this application can not only accurately analyze and evaluate the correct and complete treatment sites (facial action units AU1-AUn) but also accurately provide the type and dose of injectable filler, thereby achieving a personalized aesthetic treatment outcome.

In addition, this application can also support cosmetic procedures that strengthen the positive emotion index combination, or provide preventive cosmetic treatment recommendations for facial sites affected by aging; for example, facial muscle laxity can cause the corners of the mouth to droop and thus produce a facial appearance that registers as an anger index.

On the other hand, the method and system of this application can be applied to various fields of aesthetic medicine and aesthetics, can serve as a basis for judging treatment effects before and after cosmetic surgery, and can also be applied to medical education for training physicians to further their studies or correct blind spots and misconceptions in previous treatments.

The above descriptions are only preferred embodiments of the present invention and are not intended to limit the scope of the patent application of the present invention; all other equivalent changes or modifications made without departing from the spirit disclosed by the present invention shall be included within the scope of the patent application of this case.

S10, S11-S14: Process steps

Claims (30)

An artificial intelligence-assisted evaluation method applied to aesthetic medicine, comprising at least the following steps: providing a real-time facial expression evaluation result of a subject; inputting the real-time facial expression evaluation result into an artificial intelligence recognition and analysis module; and selecting at least one of a medical knowledge rule module and an aesthetic medicine auxiliary evaluation result history database to execute an artificial intelligence recognition and analysis procedure, and generating and outputting an aesthetic medicine auxiliary evaluation result; wherein the real-time facial expression evaluation result comprises a static expression evaluation result, or the static expression evaluation result and a dynamic expression evaluation result, and providing the real-time facial expression evaluation result further comprises the following steps: dividing a face into a plurality of facial action units according to the medical knowledge rule module; forming a plurality of emotion index combinations according to at least one of the static expression evaluation result and the dynamic expression evaluation result; and forming the real-time facial expression evaluation result according to the proportional result of each emotion index combination.
The artificial intelligence-assisted evaluation method according to claim 1, further comprising the step of: feeding back and storing the aesthetic medicine auxiliary evaluation result into at least one of the medical knowledge rule module and the aesthetic medicine auxiliary evaluation result history database.

The artificial intelligence-assisted evaluation method according to claim 1, wherein the output aesthetic medicine auxiliary evaluation result comprises at least: a combination and preferred order of the subject's evaluated treatment site results, or the combination and preferred order of the evaluated treatment site results together with an injectable filler type and dose.

The artificial intelligence-assisted evaluation method according to claim 1, wherein the medical knowledge rule module further comprises a functional medical anatomy rule and a dynamic medical anatomy rule.

The artificial intelligence-assisted evaluation method according to claim 1, wherein the emotion index combinations are at least one of a positive emotion index combination and a negative emotion index combination.
The artificial intelligence-assisted evaluation method according to claim 5, wherein the negative emotion index combination is one or a combination of a sadness index, an anger index, a worry index, a surprise index, a fear index, a contempt index, and a disgust index.

The artificial intelligence-assisted evaluation method according to claim 5, wherein the positive emotion index combination is one or a combination of a happiness index, a joy index, a contentment index, a being-moved index, a positivity index, and a relaxation index.

The artificial intelligence-assisted evaluation method according to claim 1, wherein the aesthetic medicine auxiliary evaluation result history database comprises a plurality of aesthetic medicine auxiliary evaluation results for the artificial intelligence recognition and analysis module to execute an artificial intelligence deep learning/training procedure according to at least one artificial intelligence deep learning/training algorithm.
The artificial intelligence-assisted evaluation method according to claim 8, wherein the artificial intelligence deep learning/training procedure comprises the following steps: providing the aesthetic medicine auxiliary evaluation results, wherein each aesthetic medicine auxiliary evaluation result comprises at least: basic data related to a historical subject, a facial expression evaluation result, a personal facial feature, a functional medical anatomy rule and a dynamic medical anatomy rule of the medical knowledge rule module, a combination and preferred order of evaluated treatment site results, and an injectable filler type and dose; and inputting the aesthetic medicine auxiliary evaluation results into the artificial intelligence recognition and analysis module.

The artificial intelligence-assisted evaluation method according to claim 9, wherein the personal facial feature comprises a static wrinkle feature of a habitual expression, a static contour feature, or a skin quality feature.

The artificial intelligence-assisted evaluation method according to claim 10, wherein the at least one artificial intelligence deep learning/training algorithm is at least one of an artificial neural network algorithm and a deep learning algorithm.
An electronic device using the artificial intelligence-assisted evaluation method according to claim 1, the electronic device comprising at least: a facial expression evaluation module for providing the real-time facial expression evaluation result; the artificial intelligence recognition and analysis module, comprising the artificial intelligence recognition and analysis procedure, which receives the real-time facial expression evaluation result and generates the aesthetic medicine auxiliary evaluation result; and an input/output module for outputting the aesthetic medicine auxiliary evaluation result; wherein the artificial intelligence recognition and analysis module receives at least one personal facial feature from at least one of the facial expression evaluation module and the input/output module.

The electronic device according to claim 12, which is connected to at least one of the aesthetic medicine auxiliary evaluation result history database and the medical knowledge rule module by at least one of wireless transmission and wired transmission.

The electronic device according to claim 12, which is a handheld smart mobile device, a personal computer, or an independently operating smart device.
An artificial intelligence-assisted evaluation system for aesthetic medicine, comprising at least: a facial expression evaluation module providing a real-time facial expression evaluation result of a subject; and an artificial intelligence recognition and analysis module connected to the facial expression evaluation module; wherein the artificial intelligence recognition and analysis module receives the real-time facial expression evaluation result and, according to at least one of a connected medical knowledge rule module and a connected aesthetic medicine auxiliary evaluation result history database, executes an artificial intelligence recognition and analysis procedure and adaptively generates and outputs an aesthetic medicine auxiliary evaluation result; and wherein the real-time facial expression evaluation result comprises a static expression evaluation result, or the static expression evaluation result and a dynamic expression evaluation result, and at least one of the facial expression evaluation module and the artificial intelligence recognition and analysis module further comprises: an emotion analysis and face recognition unit forming a plurality of emotion index combinations according to at least one of the static expression evaluation result and the dynamic expression evaluation result; wherein the real-time facial expression evaluation result is formed according to the proportional result of each of the emotion index combinations.
The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the artificial intelligence recognition and analysis module feeds back and stores the aesthetic medicine auxiliary evaluation result into at least one of the medical knowledge rule module and the aesthetic medicine auxiliary evaluation result history database.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the aesthetic medicine auxiliary evaluation result comprises at least a combination and preferred order of the subject's evaluated treatment site results, or the combination and preferred order of the evaluated treatment site results together with an injectable filler type and dose.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the medical knowledge rule module further comprises a functional medical anatomy rule and a dynamic medical anatomy rule.
The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the facial expression evaluation module comprises: a facial image capture unit for performing an image capture action to obtain a real-time facial image; and a facial action coding unit which, according to the real-time facial image and the medical knowledge rule module, divides a real-time facial action presented in the image into a plurality of facial action units; wherein the static expression evaluation result and the dynamic expression evaluation result are formed according to the change between a detection result of each facial action unit and another detection result of at least one other facial action unit.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the emotion index combinations are at least one of a positive emotion index combination and a negative emotion index combination.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the negative emotion index combination is one or a combination of a sadness index, an anger index, a worry index, a surprise index, a fear index, a contempt index, and a disgust index.
The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the positive emotion index combination is one or a combination of a happiness index, a joy index, a contentment index, a being-moved index, a positivity index, and a relaxation index.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the aesthetic medicine auxiliary evaluation result history database comprises a plurality of aesthetic medicine auxiliary evaluation results for the artificial intelligence recognition and analysis module to execute an artificial intelligence deep learning/training procedure according to at least one artificial intelligence deep learning/training algorithm.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 23, wherein each aesthetic medicine auxiliary evaluation result comprises at least one or a combination of: basic data related to a historical subject, a facial expression evaluation result, a personal facial feature, a functional medical anatomy rule and a dynamic medical anatomy rule of the medical knowledge rule module, a combination and preferred order of evaluated treatment site results, and an injectable filler type and dose.
The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 24, wherein the personal facial feature comprises a static wrinkle feature of a habitual expression, a static contour feature, or a skin quality feature.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 24, wherein the personal facial feature is provided to the artificial intelligence recognition and analysis module from at least one of the facial expression evaluation module and an input/output module.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 23, wherein the artificial intelligence deep learning/training procedure is inputting the aesthetic medicine auxiliary evaluation results into the artificial intelligence recognition and analysis module.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 23, wherein the at least one artificial intelligence deep learning/training algorithm is at least one of an artificial neural network algorithm and a deep learning algorithm.
The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 15, wherein the facial expression evaluation module, an input/output module, and the artificial intelligence recognition and analysis module are assembled to form an electronic device; wherein the electronic device is a handheld smart mobile device, a personal computer, or an independently operating smart device.

The artificial intelligence-assisted evaluation system for aesthetic medicine according to claim 29, wherein the electronic device is connected to at least one of the aesthetic medicine auxiliary evaluation result history database and the medical knowledge rule module by at least one of wireless transmission and wired transmission.
TW109115444A 2019-05-09 2020-05-08 Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same TWI756681B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962845355P 2019-05-09 2019-05-09
US62/845355 2019-05-09

Publications (2)

Publication Number Publication Date
TW202044279A TW202044279A (en) 2020-12-01
TWI756681B true TWI756681B (en) 2022-03-01

Family

ID=73237854

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109115444A TWI756681B (en) 2019-05-09 2020-05-08 Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same

Country Status (2)

Country Link
CN (1) CN111914871A (en)
TW (1) TWI756681B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200823689A (en) * 2006-11-21 2008-06-01 Jing-Jing Fang Method of three-dimensional digital human model construction from two photos and obtaining anthropometry information
WO2014114991A1 (en) * 2013-01-23 2014-07-31 Amato Aldo Procedure for dental aesthetic analysis of the smile area and for facilitating the identification of dental aesthetic treatments
CN107007257A (en) * 2017-03-17 2017-08-04 深圳大学 The automatic measure grading method and apparatus of the unnatural degree of face
CN107993280A (en) * 2017-11-30 2018-05-04 广州星天空信息科技有限公司 Beauty method and system based on threedimensional model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI457872B (en) * 2011-11-15 2014-10-21 Univ Nat Taiwan Normal Testing system and method having face expressions recognizing auxiliary
TWI546095B (en) * 2013-12-13 2016-08-21 Medical treatment system and device
ES2633152B1 (en) * 2017-02-27 2018-05-03 Universitat De Les Illes Balears METHOD AND SYSTEM FOR THE RECOGNITION OF THE STATE OF MOOD THROUGH IMAGE ANALYSIS

Also Published As

Publication number Publication date
CN111914871A (en) 2020-11-10
TW202044279A (en) 2020-12-01

Similar Documents

Publication Publication Date Title
Grafsgaard et al. Automatically recognizing facial expression: Predicting engagement and frustration
Suen et al. TensorFlow-based automatic personality recognition used in asynchronous video interviews
Krumhuber et al. FACSGen 2.0 animation software: generating three-dimensional FACS-valid facial expressions for emotion research.
Gordon et al. Training facial expression production in children on the autism spectrum
Buettner Robust user identification based on facial action units unaffected by users' emotions
Beyan et al. Moving as a leader: Detecting emergent leadership in small groups using body pose
Tsai et al. Toward Development and Evaluation of Pain Level-Rating Scale for Emergency Triage based on Vocal Characteristics and Facial Expressions.
Varni et al. Computational study of primitive emotional contagion in dyadic interactions
TW202238618A (en) Artificial intelligence assisted evaluation method applied to aesthetic medicine and assisted evaluation system using the same
Malatesta et al. Towards modeling embodied conversational agent character profiles using appraisal theory predictions in expression synthesis
Bowling et al. Emotion expression modulates perception of animacy from faces
Torres-Carrión et al. Facial emotion analysis in down's syndrome children in classroom
Lyakso et al. Facial expression: psychophysiological study
TWI756681B (en) Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same
Hakim et al. Computational analysis of emotion dynamics
Vashishth et al. Exploring the Role of Computer Vision in Human Emotion Recognition: A Systematic Review and Meta-Analysis
Gutstein et al. Hand-eye coordination: automating the annotation of physician-patient interactions
Gutstein et al. Optical flow, positioning, and eye coordination: automating the annotation of physician-patient interactions
Khanna et al. Rule based system for recognizing emotions using multimodal approach
Schimmel et al. MP-BGAAD: multi-person board game affect analysis dataset
Gutstein Information extraction from primary care visits to support patient-provider interactions
Ahmadi et al. Applications of machine learning in facial cosmetic surgeries: a scoping review
US20230260625A1 (en) Method And System for Recommending Injectables for Cosmetic Treatments
Dorante et al. Facial Expression after Face Transplant: An International Face Transplant Cohort Comparison
CN114494601B (en) Three-dimensional face retrieval orthodontic correction and curative effect simulation system based on face image