TW200828993A - Imaging apparatus and method, and facial expression evaluating device and program - Google Patents


Info

Publication number
TW200828993A
TW200828993A TW096128018A TW96128018A
Authority
TW
Taiwan
Prior art keywords
expression
face
image
unit
evaluation
Prior art date
Application number
TW096128018A
Other languages
Chinese (zh)
Other versions
TWI343208B (en)
Inventor
Kaname Ogawa
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp
Publication of TW200828993A
Application granted
Publication of TWI343208B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/175 Static expression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2132 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on discrimination criteria, e.g. discriminant analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Power Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

To capture an image that gives a high degree of satisfaction to the person being photographed or to the photographer. During the period before an image signal obtained by imaging is recorded on a recording medium, a face detector 31 detects a person's face from the image signal, and a facial expression evaluator 41 evaluates the expression of the detected face, calculating a facial expression evaluation value that indicates how close the expression is to one of two different expressions, such as a smiling face and a normal expression. A notification control unit 42 generates notification information according to the calculated evaluation value and controls its reporting to the person being photographed via, for example, a display 16, an LED light emitter 21, or a voice output section 22. Further, when the evaluation value exceeds a prescribed threshold, the facial expression evaluator 41 requests a recording operation control unit 43 to record the image signal on the recording medium.

Description

200828993 九、發明說明 【發明所屬之技術領域】 本發明,係爲有關於使用固態攝像元件來攝像畫像之 攝像裝置及其方法,以及對所攝像之臉孔的表情作評價之 表情評價裝置及用以進行該處理的程式。 【先前技術】 在攝像裝置中,從壓下快門鍵之操作起直到經過一定 時間後,自動地按下快門,也就是所謂的自拍計時功能, 不僅僅是在銀鹽攝像機中,而亦被一般地搭載於數位攝像 機中。但是,由於藉由此自拍計時功能而按下快門的時機 係爲事先被決定者,因此在快門被按下時,被攝影者係並 不一定能夠保證有擺出良好之表情,而有多會攝像出無法 令人滿足之照片的問題。 另一方面,在近年,以畫像訊號作爲基礎而進行數位 演算處理之畫像處理技術係急速的進步,作爲其之一例, 係有從畫像中而檢測出人的臉孔之技術。例如,係揭示有 :將臉孔畫像中之兩畫素間的亮度差作爲特徵量並學習, 再根據該特徵量而計算出代表輸入畫像中之特定區域是否 爲臉孔之推測値,再從該1以上之推測値中,對區域內之 畫像進行其是否爲臉孔之最終的判別之臉孔檢測技術(例 如,參考專利文獻1 )。 此種臉孔檢測技術,其開發係已進步到可搭載於數位 相機等之藉由固態攝像元件來進行攝像之數位方式的攝像 -4- 200828993 裝置中的水準,而在最近,係更進而注目有對所檢測出之 臉孔的表情作判別之技術。例如,可以考慮有:從連續之 複數枚的所攝像之畫像訊號中,在每一攝像畫像中對被攝 影者之臉孔的表情等作評價,並成爲可以從此些之評價資 訊中’選擇適合的畫像(例如,參考專利文獻2 )。 〔專利文獻 1〕日本特開2005-157679號公報(段落 號碼〔0040〕〜〔005 2〕、圖 1 ) 〔專利文獻2〕日本特開2004_4659 1號公報(段落號 碼〔0063〕〜〔007 1〕、圖 3 ) 【發明內容】 〔發明所欲解決之課題〕 但是,在最近,由於數位方式之攝像裝置的製造商間 之競爭係變得更爲激烈,因此係強烈地要求有將此種攝像 裝置高功能化,而提高其商品價値。又,如同上述所示之 自拍計時功能的問題點一般,由於所攝像之畫像,對於攝 影者或是被攝影者而言,係並不一定是能夠滿足者,因此 ,此種用以提高滿足度之對攝像動作作補助的功能,可以 說是用以提高商品價値之非常重要者。特別是,雖然係期 望能使用不斷發展之畫像處理技術來實現此種功能,但是 在攝像動作中即時地對該攝像動作進行補助一般的功能係 仍未被實現。 本發明,係爲有鑑於此種課題而進行者,其目的,係 在於提供一種:能夠攝像出對於被攝影者或者是攝影者而 -5- 200828993 言滿足度更高之畫像的攝像裝置以及其方法。 又’本發明之其他目的,係在於提供一種:能夠攝像 出對於被攝影者或者是攝影者而言滿足度更高之畫像的表 情評價裝置以及用以進行該處理之程式。 〔用以解決課題之手段〕 在本發明中,爲了解決上述課題,係提供有一種攝像 裝置’係爲使用固態攝像元件來將畫像做攝像之攝像裝置 ,其特徵爲,具備有:臉孔檢測部,其係在經由攝像所得 之畫像訊號被記錄至記錄媒體爲止的期間中,從該畫像訊 號中檢測出人物的臉孔;和表情評價部,其係對所檢測出 之臉孔的表情作評價,並計算出代表此表情在特定之表情 與其之外的表情之間,有多接近前述特定之表情的程度之 表情評價値;和報告部,其係將因應於前述計算出之表情 評價値的報告資訊,報告給被攝影者。 在此種攝像裝置中,臉孔檢測部,係在經由攝像所得 之畫像訊號被記錄到記錄媒體爲止之間的期間中,從該畫 像訊號中檢測出人物之臉孔,而表情評價部,係對藉由臉 孔檢測部所檢測出之臉孔的表情作評價,並計算出代表此 表情在特定之表情與其之外的表情之間,有多接近前述特 定之表情的程度之表情評價値。報告部,係將因應於前述 計算出之表情評價値的報告資訊,報告給被攝影者。 〔發明之效果〕 -6 - 200828993 若藉由本發明之攝像裝置,則係在將所攝像之畫像訊 號記錄到記錄媒體爲止之間的期間中,從所攝像之畫像中 檢測出人物之臉孔,並對該臉孔的表情作評價,而計算出 代表此表情在特定之表情與其之外的表情之間,有多接近 前述特定之表情的程度之表情評價値。而後,將因應於該 表情評價値的報告資訊,報告給被攝影者。故而,對於被 攝影者,由於能使其認知自己的表情是否適合於攝像,並 藉由該結果,而促使其擺出更佳的表情,因此成爲能夠將 對於被攝影者或是攝影者而言滿足度係爲高之畫像確實地 記錄在記錄媒體中。 【實施方式】 以下,參考圖面,針對本發明之實施形態作詳細說明 〔第1實施形態〕 圖1,係爲展示本發明之第1實施形態的攝像裝置之 重要部分構成的區塊圖。 於圖1所示之攝像裝置,係爲作爲數位相機或是數位 攝影機等而實現者。此攝像裝置,係具備有:光學區塊1 1 、驅動裝置1 1 a、攝像元件1 
2、時機驅動器(T G ) 1 2 a、 類比前端(AFE )電路1 3、攝像機訊號處理電路1 4、圖形 處理電路1 5、顯示器1 6、畫像編碼器1 7、記錄裝置1 8、 微電腦 19、輸入部 20、LED (Light Emitting Diode)發 200828993 光部21、以及聲音輸出部22。 光學區塊11,係具備有:用以將從被攝體而來之光集 光至攝像元件1 2的透鏡、用以使透鏡移動而進行對焦或 是縮放的驅動機構、快門機構、光圈機構等。驅動裝置 1 1 a,係根據從微電腦1 9而來之控制訊號,而控制光學區 塊1 1內之各機構的驅動。 攝像元件12,例如,係爲CCD ( Charge Coupled Device)型、CMOS ( Complementary Metal Oxide Semiconductor)型等之固態攝像元件,並根據從 TG12a 所輸出之時機訊號而被驅動,而將從被攝體而來之射入光 變換爲電性訊號。T G 1 2 a,係在微電腦1 9之控制下,輸出 時機訊號。 AFE電路1 3,係對於從攝像元件1 2所輸出之畫像訊 號,以藉由 CDS (Correlated Double Sampling)處理而將 S/N ( Signal/Noise )比保持爲良好的方式而進行取樣保持 處理,並進而藉由AGC ( Auto Gain Control)處理而對增 益作控制,並進行A/D變換,而輸出數位畫像資料。 攝像機訊號處理電路1 4,係對於從AFE電路1 3而來 之畫像訊號,施加用以進行 AF ( Auto Focus )、AE ( Auto Exposure)、各種畫質修正處理之檢波處理,或是施 加因應於以檢波資訊爲基礎而從微電腦1 9所輸出之訊號 的畫質修正處理。另外,如後述一般,在本實施形態中, 此攝像機訊號處理電路1 4,係具備有臉孔檢測功能以及將 該臉孔區域之資料切出的功能。 -8- 200828993 畫像處理電路1 5,係將從攝像機訊號處理電路1 4所 輸出之畫像資料,變換爲用以顯示於顯示器1 6上之訊號 ,並供給至顯示器1 6。又,因應於從微電腦1 9而來之要 求,將後述之表情分數等之資訊合成在畫像上。顯示器1 6 ,例如係由LCD (Liquid Crystal Display)等所成,並根 據從圖形處理電路1 5而來之畫像訊號,來顯示畫像。 畫像編碼器1 7,係對從攝像機訊號處理電路1 4所輸 出之畫像資料進行壓縮編碼,並將編碼資料輸出至記錄裝 置1 8。具體而言,係將在畫像訊號處理電路1 4所處理之 一圖框份的畫像資料,根據 JPEG (Joint Photographic Experts Group)等之編碼方式而壓縮編碼,再輸出靜止畫 像之編碼資料。另外,不僅是靜止畫像,亦可爲可對動畫 像之資料進行壓縮編碼者。 記錄裝置1 8,係爲將從畫像編碼器1 7而來之編碼資 料作爲畫像資料而記錄的裝置,例如,可作爲磁帶、光碟 片等之可搬型記錄媒體的驅動裝置、或者是HDD ( Hard Disk Drive)等而實現之。 微電腦 19,係具備有 CPU ( Central Processing Unit )、或是 ROM ( Read Only Memory ) 、RAM ( Random A c c e s s M e m o r y )等之記憶體,並藉由實行被儲存在記憶 體中之程式,來對此攝像裝置統籌地作控制。 輸入部20,係將因應於使用者所致之對於各種之輸入 開關的操作輸入之控制訊號,輸出至微電腦1 9。作爲此輸 入開關,例如,係被設置有快門釋放鍵、用以進行各種選 200828993 單選擇或是動作模式之設定的十字鍵等。 LED發光部21,係根據從微電腦19而來之控制訊號 ,而使被設置在攝像裝置之外裝面的LED點燈。作爲此 LED,例如,係可考慮有展示自拍計時功能係在動作中者 等。 聲音輸出部22,係根據從微電腦1 9而來之控制訊號 ,而輸出例如動作確認音等之聲音。另外,當具備有聲音 資料之編碼器/解碼器的情況時,則亦可輸出將此聲音資 料作再生時之再生聲音。 在此攝像裝置中,經由攝像元件12而受光並被光電 變換之訊號,係依序被供給至AFE電路1 3,並在被施加 CDS處理或AGC處理之後,被變換爲數位畫像資料。攝 像機訊號處理電路1 4,係對從AFE電路1 3所被供給之畫 像資料進行畫質修正處理,處理後之畫像,係被供給至圖 形處理電路1 5,並被變換爲顯示用之畫像訊號。藉由此, 在顯示器1 6中,現在攝像中之畫像(攝像機所視之畫像 )係被顯示於顯示器1 6,而攝影者係對此畫像作視認,而 成爲可對視角作視認。 又,在此狀態下,若是藉由輸入部20之快門釋放鍵 的被壓下等,而對微電腦1 
9指示有畫像之記錄,則從攝 像機訊號處理電路1 4而來之畫像資料,係被供給至畫像 編碼器1 7,並被施加壓縮編碼處理,而被記錄在記錄裝置 1 8中。在靜止畫像之記錄時,從攝像機訊號處理電路i 4 ’一圖框份之畫像資料係被供給至畫像編碼器1 7中,而 -10- 200828993 在動畫像之記錄時,被處理後之畫像資料,係連續地被供 給至畫像編碼器1 7中。 接下來,針對此攝像裝置所具備之攝像動作模式作說 明。此攝像裝置,係具備有:在攝像靜止畫像時,從攝像 畫像中檢測出被攝影者之臉孔,並對其表情作評價,而將 表示評價之程度的資訊報告給被攝影者之模式;和因應於 該評價之程度而自動地壓下快門,並將靜止畫像記錄在記 錄裝置1 8中的模式。以下,將前者之模式稱爲「表情評 價模式」,並將後者之模式稱爲「表情回應記錄模式」。 在表情評價模式中,當從攝像畫像而檢測出有臉孔的 情況時,係對該臉孔之表情作評價,並將因應於評價之資 訊報告給被攝影者,而達成促使被攝影者擺出更適合於攝 像之表情的功能。例如,作爲表情,對於是否爲笑臉之程 度作評價。又,在表情回應記錄模式中,當該評價値超過 特定之値時,則判斷被攝影者之臉孔係成爲了適合於攝像 之表情,並自動地將靜止畫像作記錄。藉由此,以成爲能 夠記錄對被攝影者而言滿足度爲更高的畫像之方式而進行 輔助。另外,在此雖係假定爲具備有表情評價模式與表情 回應記錄模式之2個的模式者,但是,亦可僅具備表情回 應記錄模式。 圖2,係爲展示爲了實現表情評價模式以及表情回應 記錄模式,而在攝像裝置攝像裝置中所具備之功能的區塊 圖。 如圖2所示一般,此攝像裝置,作爲用以實現上述之 -11 - 200828993 各攝像動作模式的功能,係具備有:臉孔檢測部3 1、臉孔 畫像產生部3 2、表情評價部4 1、報告控制部4 2、以及記 錄動作控制部4 3。在本實施形態中,臉孔檢測部3 1以及 臉孔畫像產生部3 2,係經由攝像機訊號處理電路1 4內之 硬體而被實現,表情評價部4 1、報告控制部42以及記錄 動作控制部43,係作爲經由微電腦1 9所實行之軟體之功 能而被實現。但是,此些之功能的各個,係亦可經由硬體 或是軟體之任一者而被實現。又,微電腦1 9,係將使用於 表情評價部4 1所致之表情演算中的判別軸資訊44,預先 保持於其內部之例如ROM等的記憶體中。如後述所示一 般,判別軸資訊44,係包含有:對以從相關於2個的表情 之多數的臉孔之樣本資料中經由主成分分析而得到的訊號 成分作爲基礎,並進行線形判別分析所得到的表情之判別 軸作展示之向量的係數資訊等。 於此,使用以下之圖3以及圖4,對在圖2中所示之 各功能的動作作說明。首先,圖3,係爲展示在表情評價 模式中之動作的槪要之圖。 在表情評價模式中,首先,臉孔檢測部3 1,係根據藉 由以攝像元件1 2所致之攝像而得到並傳達至攝像機訊號 處理電路1 4內之畫像資料,而從該畫像中檢測出被攝影 者之臉孔(步驟S 1 )。而後,將展示所檢測出之臉孔的 區域之檢測資訊輸出至臉孔畫像產生部3 2中。另外,當 如同本實施形態一般,藉由將因應於表情評價値之資訊顯 示於顯示器1 6而進行報告的情況時,從臉孔檢測部3 1而 -12- 200828993 來之臉孔的檢測資訊,係亦被供給至微電腦1 9之報告控 制部4 2中。 作爲臉孔之檢測手法,雖係可適用周知的手法,但是 舉例而言,係可使用記載於專利文獻i中之手法。在此手 法中,首先,係對臉孔畫像中之2畫素間的亮度差作學習 ,並作爲特徵量而預先作保持。而後,如圖3之步驟S 1 欄所示一般,對於輸入畫像,將一定之大小的視窗W 1依 序地嵌入,並以特徵量爲基礎,來推測在視窗W 1中之畫 像內是否包含有臉孔,而輸出推定値。此時,藉由將輸入 畫像依序縮小並進行相同之處理,而能夠使用一定大小之 視窗W 1來進行推定。而後,從藉由此些之動作所得到之 推定値,而最終地判別出存在有臉孔的區域。 接下來,臉孔產生部3 2,係從輸入畫像中,切出所檢 測出之臉孔的區域Af之資料(步驟S2 )。而後,將所切 出之畫像資料,變換爲一定尺寸之畫像資料並進行正規化 ,而供給至表情評價部4 1中(步驟S3 )。 於此,在本實施形態中,作爲從臉孔檢測部3 1所輸 出之臉孔的檢測資訊之例,假設爲係被輸出有將臉孔之周 圍作包圍之長方形的檢測框(例如左端之座標。以下,稱 爲臉孔之位置資訊。),和該檢測框之尺寸(例如,水平 、垂直方向之各畫素數。以下,稱爲臉孔之尺寸資訊。) 者。此時,臉孔畫像產生部3 2,係對將被作爲臉孔之檢測 對象的畫像資料作暫時之記億的記憶體(RAM )進行存取 ,並僅讀入對應於從臉孔檢測部3 1而來之臉孔的位置資 -13- 200828993 訊以及尺寸資訊的區域之資料。 
又,所切出之畫像資料,係藉由作爲一定尺寸(解析 度)之畫像資料而進行解析度變換一事而被正規化。此正 規化後之畫像尺寸,係成爲當在表情評價部4 1中對臉孔 之表情作評價時成爲處理單位之尺寸。在本實施形態中’ 作爲例子,例如係設爲48畫素x48畫素之尺寸。 另外,作爲臉孔畫像產生部3 2所具備之上述一般的 畫像切出功能以及解析度變換功能,亦可從先前之攝像機 訊號處理電路1 4中爲了進行檢波或是輸出畫像之產生等 而一般所具有之同樣的功能來流用。 接下來,表情評價部4 1,係根據從臉孔畫像產生部 3 2而來之臉孔的正規化畫像資料,以及預先被記憶之判別 軸資訊44,而進行對該臉孔之表情的程度之評價的演算, 並輸出表情評價値(步驟S4 )。此表情評價値,係爲展 示其係接近2個的表情中之何者之表情的程度者。例如, 作爲2個的表情,適用「笑臉」與「通常時之表情」,並 當表情評價値越高時,評價其之非爲「通常時之表情」而 係爲「笑臉」的程度係爲強。另外,針對此表情評價値之 算出手法,係於後再述。 接下來,報告控制部42,係將因應於從表情評價部 4 1所輸出之表情評價値的資訊,對於被攝影者而進行報告 (步驟S5 )。例如,因應於表情評價値之資訊,係透過 圖形處理電路1 5,而朝向被攝影者側並顯示於顯示器1 6 上°此時’亦可根據從臉孔檢測部3 1所供給之臉孔的位 -14- 200828993 置以及尺寸資訊’而進行將評價對象之臉孔在顯示器1 6 之中作特定一般的顯示。又,亦可利用LED發光部21, 而藉由例如其之亮度的變化或是點滅速度之變化、顏色之 變化等,而在表情評價値中對差異作報告。或者是,亦可 透過聲音輸出部2 2,並藉由輸出因應於表情評價値之相異 的聲音,來作報告。 在以下之說明中,表情評價部4 1,例如係爲對臉孔之 表情係爲笑臉或是無表情之程度作評價者。又,在本實施 形態中,特別,係藉由將因應於該表情評價値之資訊,顯 示在將顯示面朝向被攝影者側之顯示器1 6上,而報告給 被攝影者。在圖3中,係展示有:將表示身爲因應於表情 評價値的値之「笑臉分數」的棒狀圖,顯示在顯示器i 6 上之例。 圖4 ’係爲用以說明展示笑臉分數之棒狀圖的動作之 圖。 如此圖4所示一般,表情評價値,係當臉孔之表情之 成爲笑臉的程度越強時變得越高,而當成爲通常時之表情 的程度越高時,則變爲越低。又,被顯示於棒狀圖之笑臉 分數’係與表情評價値成比例而連續地又或是階段性地變 動。此棒狀圖,係被顯示在朝向被攝影者側之顯示器i 6 中’被攝影者係藉由在攝像時對此棒狀圖作即時的視認, 而可以辨識出自己的表情是否成爲了適合於攝像之笑臉。 其結果’棒狀圖,係對於被攝影者而促使其擺出適合於攝 像之表情’並達成以能夠攝像更好之畫像的方式,而對攝 -15- 200828993 像動作進行輔助之功能。另外,如後述所示一般,對於表 情評價値低之被攝影者,亦可顯示有促使其擺出笑臉一般 之具體的文字資訊等。 於此,當被設定爲「表情回應記錄模式」的情況時, 表情評價部4 1,係在表情評價値超過了特定之臨界値的情 況時,以自動地按下快門,亦即是將攝像畫像作記錄的方 式,來作控制。在圖2中,記錄動作控制部4 3,係爲對攝 像畫像資料之記錄動作進行控制之區塊,在通常之攝像動 作中,係在檢測出輸入部20的快門釋放鍵被壓下一事時 ,對攝像裝置內之各部,以使其進行在記錄時所適當之攝 像動作(例如曝光動作或是訊號處理動作)的方式而作控 制,經由此,將攝像畫像資料在畫像編碼器1 7中作編碼 ,並將編碼化之資料記錄在記錄裝置1 8中。而後,表情 評價部4 1,係當表情評價値超過了特定之臨界値時,對此 記錄動作控制部4 3,以使其實行畫像資料之記錄動作的方 式而作要求。 藉由此’當從攝像畫像中被檢測出臉孔,且該臉孔之 表情係成爲了適合於攝像之狀態時(於此,係爲成爲笑臉 之程度變強時),則成爲將此時之攝像畫像自動地作記錄 。故而,相較於先前之自拍計時功能(亦即是,在將快門 釋放鍵按下起之一定的時間後,將攝像畫像作記錄之功能 )’成爲能夠將被攝影者係以良好的表情而被照下時之畫 像確實地作攝像,而能夠提高被攝影者或是攝影者之滿足 度。 -16- 200828993 接下來’針對在顯示器16中之笑臉分數的具體顯示 畫面之例作說明。 圖5’係爲展示使用有棒狀圖之笑臉分數的畫面顯示 例之圖。 在此圖5中,作爲攝像裝置,係想定爲數位影像攝像 機100。在數位影像攝像機100中,係於攝像機本體部 1 ο 1之側面,設置有畫角確認用之顯示器1 6。在此種構成 之數位影像攝像機100中,一般而言,顯示器16之顯示 面的角度或方向係成爲可變,而成爲可如同圖5 —般來將 顯示面朝向被設置有攝像透鏡102之方向,亦即是朝向被 
攝影者之方向。在表情評價模式以及表情回應記錄模式中 ,係在此種將顯示器1 6之顯示面朝向被攝影者側的狀態 下而被使用,並與被攝像之被攝體的畫像一同地將因應於 表情評價値之資訊作顯示。 在圖5之畫面表示例中,係將笑臉分數顯示部202合 成於包含有臉孔201之攝像畫像上而作顯示。於笑臉畫像 顯示部202中,在將因應於表情評價値之笑臉分數作爲棒 狀圖203而作顯示的同時,係將該笑臉分數作爲數値而顯 示在數値顯示部204中。又,在表情回應記錄模式中,係 顯示有表示當自動地將攝像畫像作記錄時之笑臉分數的邊 界之邊界顯示圖符(icon) 205。於此例中,係進而在邊 界顯示圖符205上,將臨界値藉由數値而作表示。 又,在圖5之例中,係與笑臉分數顯示部202同時地 ,而在對應於該笑臉分數之臉孔201的周圍,顯示有臉孔 -17- 200828993 表示框206’而將身爲笑臉之評價對象的臉; 辨識的顯示。進而,在該臉孔表示框206的 將因應於表情評價値而相異之文字作顯示 2 07,並經由文字,而當代表笑臉之度數越 攝影者催促其擺出越強之笑臉。 又,在表情回應記錄模式中,係亦可將 記錄時之表情評價値的臨界値,設爲可由使 設定,而成爲可以自由地決定欲得到當被攝 種程度之笑臉時的畫像。在圖5之例中,例 壓下被設置在輸入部20之左右方向的方向 ,而成爲在能夠使表情評價値之臨界値變化 邊界表示圖符205朝左右移動,而使得使用 對應於表情評價値之臨界値的笑臉分數。此 被設定爲表情回應記錄模式的情況時,自動 的方向鍵作爲用以設定表情評價値之臨界値 應,藉由此,能夠提升使用者之操作性。 另外,表情評價値之變更,係並不限定 ,例如,亦可從經由選單畫面所選擇之專用 進行。又,亦可設定專用的操作鍵來進行。 1 6係爲觸控式面板的情況時,則亦可例如藉 顯示於顯示器1 6上之鍵的畫像,來對臨界 ,亦可藉由在使手指接觸於圖5之邊界表示 態下朝左右移動,而對臨界値作變更。 又,當從攝像畫面中檢測出有複數之臉 FL 201作易於 近旁,設置有 的文字顯示部 低時,對於被 當攝像畫像被 用者來任意作 影者係成爲何 如藉由使用者 鍵(未圖示) 的同時,亦將 者能夠辨識出 時,亦可在當 地將左右方向 的鍵並附加對 於上述之手法 的設定畫面來 又,當顯示器 由使手指接觸 値作變更。又 圖符2 0 5的狀 孔的情況時, -18- 200828993 亦可對此些之臉孔各別計算出表情評價値,並將因應於此 些値的資訊顯示在顯示器1 6上。圖6,係爲展示在檢測出 複數之臉孔的情況時,因應於表情評價値之資訊的第1畫 面顯示例。 在圖6中,作爲一例,係展示有被檢測出了 2個的臉 孔2 1 1以及2 1 2之例。在臉孔2 1 1以及2 1 2之各個中,係 在臉孔區域之周圍顯示有臉孔表示框2 1 3以及2 1 4的同時 ,亦在其近旁設置有文字顯示部2 1 5以及2 1 6。臉孔表示 框2 1 3以及2 1 4,係被設定爲因應於對於各別之臉孔2 1 1 以及2 1 2的表情評價値,而使其線的種類作變化,在文字 顯示部2 1 5以及2 1 6中,係被顯示有因應於表情評價値之 相異的文字。 在圖6之例中,對於臉孔2 1 1,雖係評價爲其具備有 充分強度的笑臉,但是對於臉孔2 1 2,則係評價爲其之笑 臉的程度係爲不足。例如,係顯示有:對於臉孔2 1 1,其 表情評價値雖係達到了自動記錄之臨界値,但是對於臉孔 2 1 2,其表情評價値則係爲較該臨界値爲略低的狀態。此 時,在將對於臉孔2 1 1之臉孔表示框2 1 3以實線來表示, 並將對於臉孔2 1 2之臉孔表示框2 1 4以虛線來表示,以將 此種評價狀態之相異報告給被攝影者的同時,在文字表示 部2 1 6中,係更進而表示有催促其加強笑臉的文字資訊。 另外,在此例中,雖係藉由臉孔表示框2 1 3以及2 1 4之線 種來表示表情評價値之不同,但是,做爲其他例子例如亦 可藉由臉孔表示框之亮度的不同或是顏色的不同、粗細的 -19- 200828993 不同,來報告表情評價値之差異。 圖7,係爲展示在檢測出複數之臉孔的情況時,因應 於表情評價値之資訊的第2畫面顯示例。 在圖7之例中,亦和圖5相同,被檢測出有2個的臉 孔2 1 1以及2 1 2,對於臉孔2 1 1,係評價爲其具備有充分 強度的笑臉,對於臉孔2 1 2,則係評價爲其之笑臉的程度 係爲不足。又,在圖7之例中,藉由於臉孔21 1以及21 2 之各區域的近旁展示因應於表情評價値之記號2 1 7以及 2 1 8,而將表情評價値之相異報告給被攝影者。 如以上一般,藉由將因應於表情評價値之資訊使用顯 示器來作報告,可以將因應於表情評價値之笑臉分數,以 棒狀圖或是數値來作表示,或者,因應於表情評價値而對 
臉孔表示框之線種或顏色、明亮度等作變化,或是在臉孔 之近旁因應於表情評價値而表示催促其擺出笑臉的文字等 ,使用各種的方法,來將因應於表情評價値之資訊易於理 解地報告給被攝影者。特別是,在數位影像攝像機的情況 時,由於可以使用於先前技術起即有設置之可變換顯示面 方向的顯示器來作報告,因此不會導致因對攝像機之基本 構成作變更所致的大幅之開發、製造成本的上升,便成爲 可將對使用者而言滿足度係爲高之畫像確實地作記錄。 另外,在以上敘述中,雖係將搭載有可變換顯示面方 向之顯示器的數位影像攝像機作爲例子而列舉,但是,就 算在將畫角確認用之顯示器設置於與攝像透鏡相反之面的 數位相機等之中,只要該顯示器之顯示面方向係爲可變, -20- 200828993 而能夠將該顯示面朝向攝影者側,則即可顯示如上述一般 之顯示畫像,而將因應於表情評價値之資訊報告給被攝影 者。 接下來,針對在此攝像裝置中所使用之表情評價的手 法作說明。 圖8,係爲針對爲了進行表情評價而事先所應產生的 資訊,和該資訊之產生流程,作槪念性的展示之圖。 在本實施形態中,作爲表情之評價手法,係使用所謂 的「費雪(fisher )之線形判別分析」之手法。在此手法 中,首先,係事先準備有多數之分別具備有2個的表情之 臉孔的樣本畫像,並根據此些之樣本畫像的資料,而考慮 2個的表情間之2群問題(2-class problem),並藉由線 形判別分析(LDA: Linear Discriminant Analysis),來 事先形成將此些之2個的表情充分作判別之判別軸Ad。 而後,在進行表情評價時,係藉由求取出所輸入之臉孔畫 像的資料與判別軸Ad之內積,而計算出表情評價値。 如圖8所示,在本實施形態中,係使用笑臉之樣本畫 像Ps與通常之表情的樣本畫像Pn。此些之樣本畫像Ps以 及Pn,例如係作爲被正規化爲48畫素X48畫素之一定尺 寸的畫像而被準備。而後,將此些之樣本畫像的資料,作 爲(4 8x48 )維之向量資料而作處理,並進行LDA處理。 但是,此向量空間,係成爲具備有(48 x48 )根之座標軸 的維度非常大的空間。於此,在LDA處理之前,先對於 此些之向量資料,進行主成分分析(PCA : Principal -21 - 200828993[Technical Field] The present invention relates to an image pickup apparatus and a method thereof for photographing an image using a solid-state image sensor, and an expression evaluation apparatus for evaluating an expression of a face of the imaged image, and The program to perform this processing. [Prior Art] In the camera, the shutter is automatically pressed from the operation of pressing the shutter button until after a certain period of time, which is called the self-timer function, not only in the silver salt camera, but also in general. It is mounted on a digital camera. However, since the timing of pressing the shutter by the self-timer function is determined in advance, when the shutter is pressed, the photographer is not necessarily guaranteed to have a good expression, and how many will be Take pictures of unsatisfactory photos. 
On the other hand, in recent years, the image processing technology for performing digital arithmetic processing based on the image signal has been rapidly progressing, and as an example thereof, there is a technique of detecting a person's face from an image. For example, it is revealed that the luminance difference between two pixels in the face image is used as the feature amount and learned, and based on the feature amount, a speculation indicating whether or not a specific region in the input image is a face is calculated, and then In the above-mentioned one or more, the face detection technique for determining whether or not the face is the final face of the face is used (for example, refer to Patent Document 1). The development of the face detection technology has progressed to the level of the digital camera -4-200828993 that can be mounted on a digital camera by a solid-state imaging device, and more recently, the system has become more focused. There is a technique for discriminating the expression of the detected face. For example, it is conceivable to evaluate the expression of the face of the photographer in each of the image capturing images from a plurality of successively imaged image signals, and to select from among the evaluation information. (for example, refer to Patent Document 2). [Patent Document 1] Japanese Laid-Open Patent Publication No. 2005-157679 (paragraph No. [0040] to [005 2], FIG. 1) [Patent Document 2] Japanese Patent Laid-Open No. 2004_4659 No. 1 (paragraph No. [0063] to [007 1] 〕, FIG. 3) [Summary of the Invention] [Problems to be Solved by the Invention] However, recently, since the competition among manufacturers of digital-type imaging devices has become more intense, it is strongly required to The camera device is highly functional and increases its commercial price. 
Moreover, as with the problem of the self-timer function described above, generally, since the image taken is not necessarily satisfactory to the photographer or the photographer, this is used to improve the satisfaction. The function of subsidizing the camera action can be said to be very important to increase the price of the product. In particular, although it is expected that such a function can be realized by using an evolving image processing technique, the general function of assisting the image capturing operation in the imaging operation is still not realized. The present invention has been made in view of such a problem, and an object of the present invention is to provide an image pickup apparatus capable of capturing an image having a higher degree of satisfaction with a photographer or a photographer. method. Further, another object of the present invention is to provide a document evaluation device capable of capturing an image that is more satisfying to a photographer or a photographer, and a program for performing the process. [Means for Solving the Problems] In order to solve the above problems, an image pickup apparatus is provided as an image pickup apparatus that images an image using a solid-state image sensor, and is characterized in that: face detection is provided. a portion that detects a face of the person from the image signal while the image signal obtained by the imaging is recorded on the recording medium; and an expression evaluation unit that performs an expression on the detected face Evaluate and calculate the expression evaluation of how much the expression is close to the specific expression between the specific expression and the expression other than the specific expression; and the report department, which will evaluate the expression according to the above calculation値Report information to the photographer. 
In the imaging device, the face detecting unit detects the face of the person from the image signal while the image signal obtained by the imaging is recorded on the recording medium, and the expression evaluation unit is The expression of the face detected by the face detecting unit is evaluated, and an expression evaluation indicating how much the expression is close to the specific expression between the specific expression and the expression other than the specific expression is calculated. The report department reports the report information based on the above-mentioned calculated expression evaluation to the photographer. [Effects of the Invention] -6 - 200828993 According to the imaging device of the present invention, the face of the person is detected from the imaged image during the period between the recording of the image signal being recorded and the recording medium. And the expression of the face is evaluated, and an expression evaluation indicating how much the expression is close to the specific expression between the specific expression and the expression other than the specific expression is calculated. Then, the report information corresponding to the expression evaluation is reported to the photographer. Therefore, for the photographer, it is possible for the photographer or the photographer to be able to recognize whether his or her expression is suitable for imaging and to promote a better expression by the result. A portrait with a high degree of satisfaction is recorded in the recording medium. [Embodiment] The embodiment of the present invention will be described in detail with reference to the drawings. FIG. 1 is a block diagram showing an essential part of an imaging device according to a first embodiment of the present invention. The image pickup device shown in Fig. 1 is realized as a digital camera or a digital camera. 
The imaging device includes an optical block 1 1 , a driving device 1 1 a, an imaging element 1 2, a timing driver (TG) 1 2 a, an analog front end (AFE) circuit 13 , and a camera signal processing circuit 14 . The graphics processing circuit 15 , the display 16 , the image encoder 17 , the recording device 18 , the microcomputer 19 , the input unit 20 , the LED (Light Emitting Diode), the 200828993 optical unit 21 , and the sound output unit 22 . The optical block 11 includes a lens for collecting light from the subject to the imaging element 12, a driving mechanism for moving the lens to focus or zoom, a shutter mechanism, and a diaphragm mechanism. Wait. The driving device 1 1 a controls the driving of each mechanism in the optical block 11 based on the control signals from the microcomputer 19. The image pickup device 12 is, for example, a solid-state image pickup device such as a CCD (Charge Coupled Device) type or a CMOS (Complementary Metal Oxide Semiconductor) type, and is driven based on a timing signal output from the TG 12a, and will be driven from the object. The incoming light is converted into an electrical signal. T G 1 2 a, which outputs the timing signal under the control of the microcomputer 19. The AFE circuit 13 performs sampling and holding processing on the image signal output from the image sensor 12 by maintaining the S/N (Signal/Noise) ratio by CDS (Correlated Double Sampling) processing. Further, the gain is controlled by AGC (Auto Gain Control) processing, and A/D conversion is performed to output digital image data. The camera signal processing circuit 14 applies a detection process for performing AF (Auto Focus), AE (Auto Exposure), various image quality correction processes, or an application for the image signal from the AFE circuit 13 The image quality correction processing of the signal output from the microcomputer 19 based on the detection information. 
Further, as will be described later, in the present embodiment, the camera signal processing circuit 14 has a face detection function and a function of cutting out the data of the face area. -8- 200828993 The image processing circuit 15 converts the image data output from the camera signal processing circuit 14 into a signal for display on the display 16 and supplies it to the display 16. Further, in response to the request from the microcomputer, information such as expression scores and the like described later are combined on the image. The display 16 is formed, for example, by an LCD (Liquid Crystal Display) or the like, and displays an image based on an image signal from the graphic processing circuit 15. The image encoder 17 compresses and encodes the image data output from the camera signal processing circuit 14 and outputs the encoded data to the recording device 18. Specifically, the image data of one frame processed by the image signal processing circuit 14 is compression-encoded according to a coding method such as JPEG (Joint Photographic Experts Group), and the encoded data of the still image is output. In addition, it is not only a still image but also a compression coder for the image of the animation image. The recording device 18 is a device that records encoded data from the image encoder 17 as image data. For example, it can be used as a drive device for a portable recording medium such as a magnetic tape or a compact disc, or HDD (Hard). Disk Drive) and so on. The microcomputer 19 is provided with a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random A Ccess Memory), and the like, and executes a program stored in the memory. This camera unit is controlled in a coordinated manner. The input unit 20 outputs a control signal for inputting an operation input to various input switches by the user to the microcomputer 19. 
As the input switch, for example, a shutter release button, a cross key for performing various selections of the 200828993 single selection or an operation mode setting, and the like are provided. The LED light-emitting unit 21 lights the LEDs mounted on the outside of the image pickup device based on the control signals from the microcomputer 19. As the LED, for example, it is conceivable that the self-timer function is displayed during the operation. The sound output unit 22 outputs a sound such as a motion confirmation sound based on a control signal from the microcomputer 19. Further, when an encoder/decoder having sound data is provided, the reproduced sound when the sound data is reproduced can be output. In this imaging apparatus, signals that are received by the imaging element 12 and photoelectrically converted are sequentially supplied to the AFE circuit 13 and converted into digital image data after being subjected to CDS processing or AGC processing. The camera signal processing circuit 14 performs image quality correction processing on the image data supplied from the AFE circuit 13 and the processed image is supplied to the graphics processing circuit 15 and converted into an image signal for display. . As a result, in the display unit 16, the image currently being imaged (the image viewed by the camera) is displayed on the display unit 16, and the photographer visually recognizes the image and visually recognizes the angle of view. In this state, if the shutter release button of the input unit 20 is depressed or the like, and the microcomputer 19 is instructed to record the image, the image data from the camera signal processing circuit 14 is It is supplied to the image encoder 17 and is subjected to compression encoding processing, and is recorded in the recording device 18. 
When the still image is recorded, the image data from the camera signal processing circuit i 4 'frame is supplied to the image encoder 17 , and -10- 200828993 is the image processed after the moving image is recorded. The data is continuously supplied to the image encoder 17. Next, the imaging operation mode of the imaging device will be described. In the imaging device, when the still image is captured, the face of the photographer is detected from the captured image, and the expression is evaluated, and the information indicating the degree of the evaluation is reported to the subject. And a mode in which the shutter is automatically depressed in accordance with the degree of the evaluation, and the still image is recorded in the recording device 18. Hereinafter, the former mode is referred to as "expression evaluation mode", and the latter mode is referred to as "expression response recording mode". In the expression evaluation mode, when a face is detected from the image of the camera, the expression of the face is evaluated, and the information corresponding to the evaluation is reported to the photographer, thereby achieving the gesture of the photographer. A function that is more suitable for the expression of the camera. For example, as an expression, it is evaluated as to whether or not it is a smile. Further, in the expression response recording mode, when the evaluation 値 exceeds a certain threshold, it is judged that the face of the photographer becomes an expression suitable for imaging, and the still portrait is automatically recorded. By this, it is possible to assist in recording an image having a higher degree of satisfaction with the photographer. In addition, although it is assumed that there are two modes including the expression evaluation mode and the expression response recording mode, only the expression response recording mode may be provided. Fig. 
2 is a block diagram showing the functions the imaging apparatus provides in order to realize the expression evaluation mode and the expression response recording mode. As shown in Fig. 2, the apparatus includes, as functions for realizing these imaging operation modes, a face detection unit 31, a face image generation unit 32, an expression evaluation unit 41, a report control unit 42, and a recording operation control unit 43. In the present embodiment, the face detection unit 31 and the face image generation unit 32 are realized by hardware within the camera signal processing circuit 14, while the expression evaluation unit 41, the report control unit 42, and the recording operation control unit 43 are realized as functions of software executed by the microcomputer 19; each of these functions may, however, be implemented by either hardware or software. The microcomputer 19 also holds the discriminant axis information 44 used in the calculations of the expression evaluation unit 41 in an internal memory such as a ROM. As described later, the discriminant axis information 44 is coefficient information for a vector representing a discriminant axis for expressions, obtained by performing linear discriminant analysis on signal components obtained by principal component analysis from a large number of face sample images corresponding to two expressions. The operation of each function shown in Fig. 2 will now be described with reference to Figs. 3 and 4. Fig. 3 is a schematic diagram of the operation in the expression evaluation mode. In this mode, the face detection unit 31 first detects the subject's face from the image data obtained by imaging with the imaging element 12 and passed to the camera signal processing circuit 14 (step S1).
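For reference, the division of labor among the blocks of Fig. 2 can be sketched as a simple per-frame pipeline. This is an illustrative sketch only: the function names, signatures, and data types are all invented here and do not appear in the patent.

```python
from dataclasses import dataclass

@dataclass
class FaceRegion:
    x: int
    y: int
    w: int
    h: int  # face position and size information

def process_frame(frame, detect, generate, evaluate, notify):
    """One pass of the Fig. 2 pipeline: detect faces, cut out and
    normalize each face image, evaluate its expression, and report
    the result. Each callback stands in for one functional block."""
    scores = []
    for region in detect(frame):          # face detection unit 31
        face = generate(frame, region)    # face image generation unit 32
        score = evaluate(face)            # expression evaluation unit 41
        notify(region, score)             # report control unit 42
        scores.append(score)
    return scores
```

The returned list of per-face scores is what the recording operation control would inspect in the expression response recording mode.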
Then, detection information indicating the region of the detected face is output to the face image generation unit 32. When, as in the present embodiment, reporting is done by displaying information corresponding to the expression evaluation on the display 16, the face detection information from the face detection unit 31 is also supplied to the report control unit 42 of the microcomputer 19. As the face detection method, any well-known technique can be applied; for example, the method described in Patent Document 1 can be used. In that method, luminance differences between pairs of pixels in face images are first learned and held in advance as feature amounts. Then, as shown in step S1 of Fig. 3, a window W1 of a fixed size is applied to the input image position by position, and based on the feature amounts an estimate of whether a face is contained in the window W1 is computed and output. By successively reducing the input image and repeating the same processing, estimation can be performed with a window W1 of a single fixed size. The region where a face exists is finally determined from the estimates obtained by these operations. Next, the face image generation unit 32 cuts the data of the detected face region Af out of the input image (step S2). The cut-out image data is then converted into image data of a fixed size, normalized, and supplied to the expression evaluation unit 41 (step S3).
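The sliding-window scan over a shrinking image pyramid described above can be sketched as follows. This is a toy sketch, not the patented detector: the learned pixel-difference classifier is abstracted as an `estimate(x, y, size)` callback (coordinates in the original image), and all numeric defaults are assumptions.

```python
def scan_faces(image_w, image_h, estimate, win=24, step=4, scale=0.8, thresh=0.5):
    """Slide a fixed-size window W1 over successively reduced versions of
    the input image; `estimate` stands in for the learned classifier and
    returns a face-likeness score. Detections are reported in original
    image coordinates as (x, y, size, score)."""
    hits, factor = [], 1.0
    while min(image_w, image_h) * factor >= win:
        sw, sh = int(image_w * factor), int(image_h * factor)
        for y in range(0, sh - win + 1, step):
            for x in range(0, sw - win + 1, step):
                size = int(win / factor)           # window size in the original image
                ox, oy = int(x / factor), int(y / factor)
                score = estimate(ox, oy, size)
                if score > thresh:
                    hits.append((ox, oy, size, score))
        factor *= scale  # reducing the image plays the role of enlarging the window
    return hits
```

Overlapping hits at different scales would then be merged into the final face region, as the text describes.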
In the present embodiment, as an example of the face detection information output from the face detection unit 31, it is assumed that the coordinates of a rectangular detection frame surrounding the face (for example, its left-end position; hereinafter called the face position information) and the size of that detection frame (for example, the numbers of pixels in the horizontal and vertical directions; hereinafter called the face size information) are output. In this case, the face image generation unit 32 accesses the memory (RAM) in which the image data subjected to face detection is temporarily stored, and reads out only the data of the region given by the position and size information from the face detection unit 31. The cut-out image data is normalized by resolution conversion into image data of a fixed size (resolution). This normalized image size is the processing-unit size used when the expression evaluation unit 41 evaluates the facial expression; in the present embodiment it is, for example, 48 × 48 pixels. The image cut-out and resolution conversion functions of the face image generation unit 32 can also be shared with equivalent functions that conventional camera signal processing circuits already provide for detection or for output images. Next, the expression evaluation unit 41 computes an expression evaluation for the face based on the normalized face image data from the face image generation unit 32 and the previously stored discriminant axis information 44, and outputs an expression evaluation value (step S4). This expression evaluation value expresses the degree to which the expression is judged closer to one of the two expressions than to the other.
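The cut-out and normalization performed by the face image generation unit 32 can be sketched as a small helper. This is an assumption-laden illustration: the image is modeled as a list of pixel rows, nearest-neighbour resampling is used for the resolution conversion, and only the 48 × 48 output size comes from the text.

```python
def normalize_face(image, x, y, w, h, out=48):
    """Cut the detected face region (position x, y and size w, h) out of
    the frame and resample it to a fixed out-by-out size with nearest
    neighbour, as unit 32 does before handing the patch to the
    expression evaluation unit 41."""
    face = [row[x:x + w] for row in image[y:y + h]]
    return [[face[r * h // out][c * w // out] for c in range(out)]
            for r in range(out)]
```

A real implementation would use the signal-processing circuit's resizer, but the mapping from detection frame to fixed-size evaluation patch is the same.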
For example, when "smiling face" and "normal expression" are used as the two expressions, a higher expression evaluation value means that the expression is judged to be closer to a "smiling face" than to a "normal expression". The method of computing this expression evaluation value is described later. Next, the report control unit 42 reports information based on the expression evaluation value output from the expression evaluation unit 41 to the subject (step S5). For example, the information corresponding to the expression evaluation value is shown, through the graphics processing circuit 15, on the display 16 turned toward the subject; at this time, based on the face position and size information supplied from the face detection unit 31, a display identifying the face being evaluated may also be shown on the display 16. Alternatively, the LED light-emitting unit 21 may report differences in the expression evaluation value by, for example, changes in brightness, blinking speed, or color, or the sound output unit 22 may output sounds corresponding to the expression evaluation value. In the following description, the expression evaluation unit 41 is treated, for example, as evaluating the degree to which the facial expression is a smile rather than expressionless. In the present embodiment, in particular, information corresponding to the expression evaluation value is shown on the display 16 facing the subject and thereby reported to the subject. Fig. 3 shows a bar graph representing a "smile score", corresponding to the expression evaluation value, displayed on the display 16. Fig. 4 is a diagram explaining the operation of the smile-score bar graph. As shown in Fig.
4, the expression evaluation value generally becomes higher the more the face is smiling, and lower the closer the face is to its normal expression. The smile score shown on the bar graph changes continuously or stepwise in proportion to the expression evaluation value. Since the bar graph appears on the display 16 facing the subject, the subject can glance at it during imaging and recognize in real time whether his or her own expression is a smile suited to being photographed. The bar graph thus assists the subject in assuming an expression suitable for photography, and assists the imaging operation so that a better image can be captured. As described later, a person whose expression evaluation value is low may also be shown specific text or the like prompting a smile. When the "expression response recording mode" is set, the expression evaluation unit 41 performs control such that, when the expression evaluation value exceeds a specific threshold, the shutter is released automatically, that is, the captured image is recorded. In Fig. 2, the recording operation control unit 43 is the block that controls the recording operation for captured image data: in normal imaging operation, when it detects that the shutter release button of the input unit 20 has been depressed, it controls each part of the imaging apparatus so that the imaging operations for recording (for example, the exposure and signal-processing operations) are performed, whereby the captured image data is encoded by the image encoder 17 and the encoded data is recorded in the recording device 18.
When the expression evaluation value exceeds the specific threshold, the expression evaluation unit 41 requests the recording operation control unit 43 to perform the image-data recording operation. As a result, when a face is detected in the camera-through image and that face reaches a state suited to being photographed (here, when the degree of smiling becomes strong), the captured image is recorded automatically. Compared with the conventional self-timer function (that is, the function of recording the image a fixed time after the shutter release button is pressed), this makes it far more certain that an image is captured while the subject wears a good expression, so the satisfaction of the subject and the photographer can be improved. Next, a concrete example of the smile-score display on the display 16 will be described. Fig. 5 shows an example of a screen display using a smile score with a bar graph. In Fig. 5, the imaging apparatus is assumed to be a digital video camera 100. In the digital video camera 100, a display 16 for checking the angle of view is provided on the side of the camera body 101. In such a configuration, the angle of the display surface of the display 16 is usually variable, and as shown in Fig. 5 the display surface can be turned toward the side on which the imaging lens 102 is provided, that is, toward the subject. In the expression evaluation mode and the expression response recording mode, the display 16 is used with its display surface facing the subject, and information based on the expression evaluation value is displayed superimposed on the camera-through image of the subject. In the screen example of Fig. 5, a smile score display section 202 is composited onto the image containing the face 201.
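The automatic-recording decision of the expression response recording mode can be sketched as a small helper (hypothetical names; the "every detected face must exceed the threshold" criterion follows the flowchart described later in the text, and the default threshold is an assumption):

```python
def expression_response_record(scores, record, threshold=0.0):
    """In expression response recording mode, request recording of the
    captured frame only when at least one face was detected and every
    detected face's expression evaluation value exceeds the threshold."""
    if scores and all(s > threshold for s in scores):
        record()  # hand off to the recording operation control unit 43
        return True
    return False
```

As the text later notes, the all-faces criterion could equally be relaxed to a fixed proportion or number of faces.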
In the smile score display section 202, the smile score corresponding to the expression evaluation value is displayed both as a bar graph 203 and as a number on a numeric display section 204. In the expression response recording mode, a boundary display icon 205 indicating the threshold smile score at which the captured image is recorded automatically is also displayed; in this example the threshold is additionally shown as a number on the boundary display icon 205. In the example of Fig. 5, together with the smile score display section 202, a face display frame 206 is shown around the face 201 so that the face being evaluated can be identified, and near the face display frame 206 text 207 that differs according to the expression evaluation value is displayed, indicating the degree of smiling and urging the subject to smile more strongly. In the expression response recording mode, the threshold of the expression evaluation value used for automatic recording may also be made freely settable, so that the user can decide how strong a smile triggers capture. In the example of Fig. 5, pressing the left and right direction keys of the input unit 20 moves the boundary display icon 205 left and right, changing the threshold of the expression evaluation value. Having the direction keys serve as threshold-setting keys when the expression response recording mode is set improves operability for the user.
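The left/right threshold adjustment described above can be sketched as follows (a hypothetical handler; the step size and the clamping range of the score are assumptions, since the patent does not state them):

```python
def adjust_threshold(threshold, key, step=0.05, lo=0.0, hi=1.0):
    """Move the auto-record threshold with the left/right direction keys,
    clamped to the assumed score range, mirroring how the boundary
    display icon 205 is moved along the bar graph."""
    if key == "right":
        threshold = min(hi, threshold + step)
    elif key == "left":
        threshold = max(lo, threshold - step)
    return threshold
```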
The method of changing the expression evaluation threshold is not limited to this; for example, it may be changed from a dedicated setting screen or menu, or dedicated operation keys may be provided. When the display 16 is a touch panel, the threshold may be changed by, for example, touching the boundary display icon 205 of Fig. 5 with a finger and sliding it left or right. Further, when a plurality of faces is detected in the imaging screen, an expression evaluation value may be computed for each of those faces, and information corresponding to each value may be shown on the display 16. Fig. 6 shows a first example of a screen displaying information based on the expression evaluations when a plurality of faces is detected. In Fig. 6, as an example, two faces 211 and 212 have been detected. For each of the faces 211 and 212, a face display frame 213 or 214 is shown around the face region, and a text display section 215 or 216 is provided near it. The face display frames 213 and 214 change their line type according to the expression evaluation value for the respective faces 211 and 212, and the text display sections 215 and 216 show text that differs according to the expression evaluation value. In the example of Fig.
6, the face 211 is evaluated as smiling with sufficient strength, while for the face 212 the degree of smiling is evaluated as insufficient: for example, the expression evaluation value for the face 211 has reached the automatic-recording threshold, while that for the face 212 is slightly below it. In this case, the face display frame 213 for the face 211 is drawn as a solid line and the face display frame 214 for the face 212 as a broken line, reporting the difference in evaluation status to the photographer, and the text display section 216 additionally shows text urging that subject to smile more strongly. Although the level of the expression evaluation value is indicated here by the line type of the face display frames 213 and 214, as other examples the difference may be reported by the brightness, color, or line thickness of the frames. Fig. 7 shows a second example of a screen displaying information based on the expression evaluations when a plurality of faces is detected. In the example of Fig. 7, as in Fig. 6, two faces 211 and 212 are detected; the face 211 is evaluated as smiling with sufficient strength, while the degree of smiling of the face 212 is evaluated as insufficient. In the example of Fig. 7, the evaluation of each of the faces 211 and 212 is reported to the photographer by symbols 217 and 218 that change according to the expression evaluation value.
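The per-face feedback of Fig. 6 can be sketched as a tiny mapping from score to display attributes (the frame styles follow the text; the message wording is purely illustrative):

```python
def face_feedback(score, threshold):
    """Choose the face display frame's line type and any urging text the
    way Fig. 6 does: a solid frame once the evaluation value reaches the
    auto-record threshold, otherwise a dashed frame plus a prompt."""
    if score >= threshold:
        return {"frame": "solid", "text": ""}
    return {"frame": "dashed", "text": "Please smile more!"}
```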
As described above, when the display is used for reporting information based on the expression evaluation value, various methods can be used to convey that information to the subject clearly and intelligibly: expressing the smile score corresponding to the evaluation value as a bar graph or a number, changing the line type, color, or brightness of the face display frame according to the evaluation value, or displaying, near the face, text urging a smile according to the evaluation value. In particular, in the case of a digital video camera, the reporting can use the display whose display-surface direction has conventionally been changeable, so images with high user satisfaction can be recorded reliably without large changes to the camera's basic configuration and thus without increased development and manufacturing cost. Although the above description took as its example a digital video camera with a display whose display-surface direction can be changed, in a digital still camera in which the angle-of-view display is provided on the side opposite the imaging lens, if the display-surface direction is variable the display surface can likewise be turned toward the subject and used to show displays such as those described above, reporting the information based on the expression evaluation value to the subject. Next, the expression evaluation method used in this imaging apparatus will be described. Fig. 8 conceptually shows the information that must be generated in advance for the expression evaluation, and the flow by which that information is generated. In the present embodiment, so-called Fisher linear discriminant analysis is used as the expression evaluation method.
In this method, a large number of sample face images having each of two expressions is first prepared, and, treating the discrimination between the two expressions as a two-class problem, a discriminant axis Ad that separates the two expressions well is formed in advance from the sample image data by linear discriminant analysis (LDA). In the expression evaluation, the expression evaluation value is then computed as the inner product of the input face image and this discriminant axis Ad. As shown in Fig. 8, the present embodiment uses smiling-face sample images Ps and normal-expression sample images Pn. The sample images Ps and Pn are prepared as images normalized to a fixed size of 48 × 48 pixels, and their data are handled as (48 × 48)-dimensional vector data for the LDA processing. However, this vector space is a very large space with 48 × 48 coordinate axes. Therefore, before the LDA processing, principal component analysis (PCA: Principal

Component Analysis) is applied, transforming the vector data into data in a low-dimensional space that efficiently represents only the features of the face (dimensional compression).
In this PCA processing, axes are first sought that maximize the variance (scatter) among the M input N-dimensional sample images (for example, M = 300, N = 48 × 48). Such axes are obtained as solutions (eigenvectors) of an eigenvalue problem on the covariance matrix of the image group, and by retaining only the vector components with large coefficients as principal components, the data can be compressed to N′ dimensions (N ≫ N′) having only vector components well suited to representing the features of a face. It is known, for example, that setting N′ ≈ 40 maintains sufficient accuracy for discriminating facial expressions. Moreover, among the principal components obtained by the PCA processing, by removing several components in order starting from those with the largest coefficients, the dimensionality can be reduced further and the subsequent processing load lightened while the accuracy of expression discrimination is preserved. Fig. 9 illustrates the mask processing applied to the sample images input to the PCA processing. When, as shown in Fig. 9(A), a sample image P in which the face 221 is captured in a rectangular region of a specific size is used for the PCA processing as is, the background of the face 221, the hair, and the like may prevent appropriate principal components from being selected. Therefore, as shown in Fig. 9(B), the region other than the face 221 is covered with a mask 222, converting the image as far as possible into a sample image P1 in which only the face region remains; performing the PCA processing with the facial information density raised in this way allows more accurate dimensional compression. Furthermore, within the face region the mouth changes greatly with the expression and often becomes a disturbing factor.
Therefore, as shown in Fig. 9(C), using an image P2 in which the mouth region is also covered by a mask 223 for the PCA processing further improves the accuracy of the dimensional compression. Returning to Fig. 8: the dimension-compressed sample images of smiling faces and of normal expressions become vector data in a partial space (the PCA space Spca) whose coordinate axes efficiently represent the features of a face. Fig. 8 schematically shows the smiling-face sample images Ps and the normal-expression sample images Pn projected into this PCA space Spca. As the figure suggests, sample images with the same expression lie relatively close together in the PCA space Spca. The sample image groups of the two expressions are therefore treated as two clusters CLs and CLn, and the projection axis that best separates these clusters, the discriminant axis Ad, is formed by the LDA processing. Such a discriminant axis is called the "Fisher projection axis". In the LDA processing, in general, a discriminant axis is sought that maximizes the between-class scatter relative to the within-class scatter of the data projected onto the N′-dimensional eigenvectors. That is, the eigenvector corresponding to the largest eigenvalue of the within-class and between-class covariance matrices is obtained and taken as the vector on the discriminant axis Ad (the Fisher vector). The relationship between the covariance matrices and the eigenvalues and eigenvectors is shown in equations (1) and (2):

[Expression 1]
  R_B μ = λ R_W μ  …(1)
  μ̂ = μ_max  …(2)

(R_W: within-class covariance matrix; R_B: between-class covariance matrix; λ: eigenvalue; the Fisher vector μ̂ is the eigenvector μ_max corresponding to the largest eigenvalue λ_max.)
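As a concrete illustration, the PCA compression and the Fisher discriminant axis described above can be sketched as follows. This is a minimal NumPy sketch under assumptions, not the patented implementation: the sample counts and dimensions are tiny, the masking is omitted, a small ridge term is added for numerical stability, and the two-class Fisher axis is computed in its equivalent closed form R_W⁻¹(m_s − m_n) rather than by explicitly solving the generalized eigenproblem of equation (1).

```python
import numpy as np

def fit_discriminant_axis(smile_imgs, normal_imgs, n_components=40):
    """Learn a PCA basis and the Fisher discriminant axis Ad from sample
    images given as rows of flattened pixel vectors (two classes)."""
    X = np.vstack([smile_imgs, normal_imgs]).astype(float)
    mean = X.mean(axis=0)
    # PCA: keep the n_components eigenvectors with the largest eigenvalues
    cov = np.cov((X - mean).T)
    evals, evecs = np.linalg.eigh(cov)
    basis = evecs[:, np.argsort(evals)[::-1][:n_components]]  # (N, N')
    S = (smile_imgs - mean) @ basis       # smile samples in PCA space
    Nn = (normal_imgs - mean) @ basis     # normal samples in PCA space
    # Two-class Fisher LDA, closed form: Ad proportional to Rw^-1 (m_s - m_n)
    Rw = np.cov(S.T) * (len(S) - 1) + np.cov(Nn.T) * (len(Nn) - 1)
    Rw += 1e-6 * np.eye(basis.shape[1])   # ridge to keep Rw invertible
    axis = np.linalg.solve(Rw, S.mean(axis=0) - Nn.mean(axis=0))
    return mean, basis, axis / np.linalg.norm(axis)

def evaluate(face_vec, mean, basis, axis):
    """Expression evaluation value: projection of the PCA-processed input
    face vector onto the discriminant axis (the idea of equation (5-1))."""
    return float(((face_vec - mean) @ basis) @ axis)
```

With the axis normalized, higher projections correspond to faces on the smiling-face side of the axis, matching the sign convention introduced with Fig. 11.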
Here, for the inverse-matrix, eigenvalue, and eigenvector calculations on the left side of equation (2), LU (lower-upper) decomposition, QR decomposition (Q: orthogonal matrix, R: upper triangular matrix), or Gaussian elimination can be used. The expression evaluation unit 41 holds information on the discriminant axis Ad obtained as described above, such as the coefficients of the components of the Fisher vector, in advance in a ROM or the like as the discriminant axis information 44. Fig. 10 conceptually shows the relationship between the discriminant axis in the pixel space and in the PCA space and an input face image. The basic procedure of expression discrimination using the discriminant axis Ad is first to apply PCA processing to the image data of the face detected from the captured image and extract its principal components. The expression of that face image is then evaluated, as shown in the PCA space Spca of Fig. 10, as the projection component onto the discriminant axis Ad of the PCA-processed face image vector (the input face image vector). That is, the expression evaluation value Eexp can be computed as the inner product of the input face image vector and the Fisher vector (see equation (5-1)).

[Expression 2]
  P_in,pxl ≈ 1.45 μ1 + 0.86 μ2 + 0.64 μ3 + … + 0.05 μN′ + C  …(3)
  A_d,pxl ≈ 0.98 μ1 + 0.45 μ2 + 0.38 μ3 + … + 0.09 μN′ + C  …(4)
  E_exp = P_in,pca · A_d  …(5-1)
       ≈ P_in,pxl · A_d,pxl  …(5-2)

(P_in,pxl: input face image vector in the pixel space; A_d,pxl: Fisher vector in the pixel space; μ1, …, μN′: vectors of the principal components;

E_exp: expression evaluation value; P_in,pca: input face image vector after PCA processing.)

The information of the Fisher vector can, however, also be converted into information in the pixel space Spxl (the space held by the original image data before PCA processing). Equations (3) and (4) express the input face image vector and the Fisher vector, respectively, as vectors in the pixel space Spxl, and Fig. 10 conceptually shows their relationship. As equations (3) and (4) and Fig. 10 indicate, the vector components other than the principal components μ1 to μN′ obtained by the PCA processing are approximated by a constant C, the average of all input images. The inner product calculation of equation (5-1) can therefore be expressed equivalently as an inner product of vectors in the pixel space Spxl, as in equation (5-2). In equation (5-2), the subtraction result between the Fisher vector components in the pixel space Spxl and the constant C can be computed beforehand, so the expression evaluation unit 41 holds this subtraction result and the constant C in advance as the discriminant axis information 44. When a vector of a face image detected from the captured image is given, the unit then performs the inner product calculation of equation (5-2) without applying PCA processing to that vector. The evaluation of one face by equation (5-2) requires at most 48 × 48 subtractions, multiplications, and additions each, and in practice only the coefficients corresponding to the roughly 40 principal components μ1 to μN′ are involved. Compared with computing the inner product of vectors in the PCA space, the amount of calculation is thus reduced greatly without lowering the accuracy of the expression evaluation, and the expression evaluation value E_exp can be computed easily and in real time, in step with the angle of view before the captured image is recorded.

With this calculation method, high-accuracy expression evaluation can be performed at a far lower processing load than, for example, techniques that evaluate the expression by matching the detected face image against many face-image templates. Template matching usually requires further extracting parts such as the eyes and the mouth from the detected face image and matching each part. In the method of this embodiment, by contrast, once the data of the detected face image has been normalized to a fixed size, the face image is replaced by vector information and can be applied to the inner product calculation as is (or with only part of it masked), and that calculation, as shown above, is a simple computation of around 40 dimensions' worth of subtractions, multiplications, and additions.

Fig. 11 shows a calculation example for outputting the expression evaluation value as a number. In the present embodiment, as one example, the averages of the respective distributions in the PCA space of the smiling-face and normal-expression sample face images are obtained from the PCA results of the sample images, and the projection points of these averages onto the discriminant axis Ad are determined. The midpoint of the two average projection points is then taken as the reference for converting the expression evaluation value E_exp into a number: as shown in Fig. 11, the distance between the projection point of the input face image onto the discriminant axis and this midpoint is taken as the expression evaluation value E_exp, with the side on which the smiling-face samples are distributed taken as positive. How close the detected face is to a smiling face or to a normal expression can thereby be output as a continuous value, and the higher the expression evaluation value E_exp, the stronger the smile is evaluated to be.

Next, the processing procedure of the imaging apparatus in the expression response recording mode is summarized with a flowchart. Fig. 12 is a flowchart showing the flow of this processing.

[Step S11] The face detection unit 31 detects faces from the captured image data and outputs the position information and size information of all detected faces to the face image generation unit 32 and the report control unit 42.

[Step S12] Based on the face position and size information from the face detection unit 31, the face image generation unit 32 cuts the data of each detected face region out of the captured image data.

[Step S13] The face image generation unit 32 normalizes the data of each cut-out face region to a specific number of pixels (here 48 × 48 pixels), further masks the regions unnecessary for expression detection, and outputs the processed images to the expression evaluation unit 41.

[Step S14] The expression evaluation unit 41 reads the discriminant axis information 44 and, from the information supplied by the face image generation unit 32, computes the inner product of the vector components obtained from one face image and the vector components of the discriminant axis, yielding an expression evaluation value. The computed value is, for example, stored temporarily in a RAM or the like.

[Step S15] The expression evaluation unit 41 judges whether the expression evaluation processing has finished for all detected faces. If it has not, step S14 is executed again for another face; if it has finished, step S16 is executed.

[Step S16] The report control unit 42, using the
expression evaluation values computed through step S15 and the position and size information of the corresponding faces, outputs expression information such as smile scores and display frames to the graphics processing circuit 15, where it is composited onto the image shown on the display 16.

[Step S17] The expression evaluation unit 41 judges whether the expression evaluation values computed in step S14 for all faces exceed the threshold. If any value does not exceed the threshold, it instructs the camera signal processing circuit 14 to return to step S11 and perform face detection, whereby the next round of face image detection and expression evaluation begins. When all expression evaluation values exceed the threshold, step S18 is executed.

[Step S18] The expression evaluation unit 41 requests the recording operation control unit 43 to record the captured image data in the recording device 18. Signal processing for recording is thereby applied to the captured image, and the processed, encoded image data is recorded in the recording device 18.

Through the above processing, an expression evaluation value is computed for every detected face and information corresponding to those values is reported to the subjects as display information, prompting them to assume expressions suited to being photographed; and since the captured image data is recorded automatically once every subject wears such an expression, images with which the subjects and the photographer are highly satisfied can be recorded reliably.

The judgment criterion in step S17 is only one example; control need not record the image data only when all expression evaluation values exceed the threshold. For example, the image data may be recorded when the expression evaluation values of a certain proportion of the detected faces exceed the threshold, or when those of a certain number of faces do, which also prevents expression evaluation from being applied to unnecessary faces captured by chance.

In the expression response recording mode above, the captured image is simply recorded automatically when the expression evaluation value exceeds the specific threshold. As another example, however, the apparatus may be configured so that when the photographer presses the shutter release button, the subjects' expressions are evaluated after a fixed time and the captured image is recorded automatically once they become expressions suitable for imaging. In that case, the microcomputer 19 need only start timing when it detects depression of the shutter release button and begin the processing of Fig. 12 after the fixed time has elapsed. With such processing, the photographer who pressed the shutter release button can also move reliably into the imaging range, improving operability.

In the description above, the two expressions "smiling face" and "normal expression" were defined and the degree to which a face approaches a smiling face was discriminated; however, the discrimination may instead be made between "smiling face" and expressions other than a smile (called "non-smiling"). The "non-smiling" expression may include a plurality of non-smile expressions such as a serious face, a crying face, and an angry face. In that case, the "non-smiling" set is obtained from the averages of sample face images corresponding to these plural expressions, and the discriminant axis for the LDA processing is computed on the basis of this set and the "smiling face" set.

Furthermore, the expression evaluation value need not represent the degree of closeness to a single expression such as "smiling face". For example, a specific plurality of expressions such as "smiling face" and "serious face" may be regarded as suitable for imaging, and the value may represent how close the captured face is to that group of expressions. In this case too, the set of "expressions suitable for imaging" is obtained from the averages of sample face images corresponding to those expressions, and the discriminant axis for the LDA processing is computed from this set and the set of "expressions unsuitable for imaging".

[Second Embodiment]

Fig. 13 shows the appearance of an imaging apparatus according to a second embodiment of the present invention. In this embodiment, information based on the expression evaluation value is reported to the subject using part of the LED light-emitting unit 21 in the configuration of Fig. 1. The imaging apparatus 110 shown in Fig. 13 has, on the surface carrying the imaging lens 111, the flash light-emitting unit 112, and so on, a dedicated LED light-emitting unit 21a for reporting information based on the expression evaluation value. In the LED light-emitting unit 21a, a plurality of LEDs 21b to 21f is arranged on one line, and information based on the expression evaluation value (here, the smile score) is conveyed to the subject by the number of LEDs lit from one end. With this configuration, even an imaging apparatus such as a digital still camera that has no display whose display surface can be turned around can report the information based on the expression evaluation value to the subject and assist the imaging operation so that suitable images are recorded; and because small light-emitting devices such as LEDs are used, any increase in the size of the apparatus body is kept to a minimum.

Moreover, the LED at the far end (LED 21f in the figure) may be made to indicate the smile score at which the captured image is recorded automatically, lighting in a color or brightness different from the other LEDs. The smile score at which automatic recording takes place can thereby be reported clearly to the subjects, who can also recognize that automatic recording has been performed.

[Third Embodiment]
Fig. 14 shows the appearance of an imaging apparatus according to a third embodiment of the present invention. The imaging apparatus 120 shown in Fig. 14 has, on the surface carrying the imaging lens 121, the flash light-emitting unit 122, and so on, an LED light-emitting unit 21g having only a single LED. The LED light-emitting unit 21g can report the smile score to the subject according to the expression evaluation value by, for example, changing the blinking speed of the LED, or its brightness or color: for instance, as the expression evaluation value grows, the LED color may be changed gradually in the order red, green, blue, or the LED may be made gradually brighter. Using only one LED in this way restrains enlargement of the apparatus body even more markedly.

When the imaging apparatus 120 has a conventional self-timer function, the LED used during self-timer operation may double as the expression evaluation indicator. For example, during self-timer operation the blinking of the LED is made gradually faster as time passes from the depression of the shutter release button until recording, and in the expression evaluation mode or the expression response recording mode, the higher the expression evaluation value, the faster the LED blinks. With this configuration, the information based on the expression evaluation value can be reported to the subject without changing the basic configuration or appearance of a conventional imaging apparatus. The light-emitting unit that is shared need not be that of the self-timer; for example, the photometry light-emitting unit used for exposure control may be shared instead, although in that case it must emit visible light at least during expression evaluation.

[Fourth Embodiment]

Fig. 15 shows the appearance of an imaging apparatus according to a fourth embodiment of the present invention. In the embodiments above, the information based on the expression evaluation value is reported visually. In this embodiment, by contrast, it is reported by sound, using the sound output unit 22 shown in Fig. 2. In the imaging apparatus 130 shown in Fig. 15, a speaker 22a is provided on the side carrying the imaging lens 131, and different sounds are reproduced and output according to the expression evaluation value. As the output sound, similarly to the text information shown in Figs. 5 and 6, the lower the degree of smiling, the more strongly the subject is urged by voice to smile; for this, the imaging apparatus 130 need only hold sound data in advance, associated stepwise with the expression evaluation value. Methods such as changing the pitch of the sound or its output interval, or outputting different melodies, according to the expression evaluation value may also be used, and the sound report may be combined with visual information.

[Fifth Embodiment]

Fig. 16 shows the appearance of a PC (personal computer) according to a fifth embodiment of the present invention. The expression evaluation function, the function of reporting information based on the expression evaluation value, and the function of automatically recording an image according to the expression evaluation value in the embodiments above can also be realized on various computers such as the PC 140 shown in Fig. 16. Fig. 16 shows, as an example, a notebook PC 140 in which a display 141 made of an LCD, a keyboard 142, and the main body are integrated. In this PC 140, an imaging unit 143 is provided integrally at, for example, the top edge of the display 141 so that the user operating the PC 140 can be imaged. The imaging unit 143 may instead be connected externally through a communication interface such as USB (Universal Serial Bus).

On such a computer, these functions are realized by executing on the computer a program describing their processing contents. The program can be recorded on a computer-readable recording medium, such as a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory. To distribute the program, portable recording media such as optical discs on which it is recorded can be sold, or the program can be stored in the storage device of a server computer and transferred to other computers over a network. A computer executing the program stores, for example, the program recorded on the portable medium or transferred from the server computer in its own storage device, then reads the program from its own storage device and carries out processing according to it. The computer may also read the program directly from the portable recording medium and carry out the corresponding processing, or carry out processing according to the received program each time the program is transferred from the server computer.

[Brief Description of the Drawings]

[Fig. 1] A block diagram showing the configuration of the important parts of an imaging apparatus according to a first embodiment of the present invention.
[Fig. 2] A block diagram showing the functions provided in the imaging apparatus to realize the expression evaluation mode and the expression response recording mode.
[Fig. 3] A diagram outlining the operation in the expression evaluation mode.
[Fig. 4] A diagram explaining the operation of the bar graph showing the smile score.
[Fig. 5] A diagram showing an example of a screen display using a smile score with a bar graph.
[Fig. 6] A diagram showing, when a plurality of faces is detected, a first example of a screen display of information corresponding to
表情評價値之資訊的第1畫面顯示例之圖。 〔圖7〕展示在檢測出複數之臉孔的情況時,因應於 表情評價値之資訊的第2畫面顯示例之圖。 〔圖8〕針對爲了進行表情評價而事先所應產生的資 訊,和該資訊之產生流程,作槪念性的展示之圖。 〔圖9〕用以針對在PCA處理時所輸入之樣本畫像的 遮罩處理作說明之圖。 〔圖10〕將在像素(pixel )空間以及PCA空間中之 判別軸以及臉孔之輸入畫像的關係作槪念性展示的圖。 〔圖1 1〕展示將表情評價値作爲數値而輸出之情況下 的算出例之圖。 -35- 200828993 〔圖1 2〕展示在表情回應記錄模式中之攝像裝置的處 理之流程的流程圖。 〔圖1 3〕展示本發明之第2實施形態的攝像裝置之外 觀的圖。 〔圖1 4〕展示本發明之第3實施形態的攝像裝置之外 觀的圖。 〔圖1 5〕展示本發明之第4實施形態的攝像裝置之外 觀的圖。 〔圖1 6〕展示本發明之第5實施形態的PC之外觀的 圖。 【主要元件符號說明】 1 1 :光學區塊 1 1 a :驅動裝置 1 2 :攝像元件 12a :時機驅動器(TG) 1 3 :類比前端(AFE )電路 1 4 :攝像機訊號處理電路 1 5 :圖形處理電路 1 6 :顯不器 1 7 :畫像編碼器 1 8 :記錄裝置 1 9 :微電腦 2 0 :輸入部 -36- 200828993 21 : LED發光部 22 :聲音輸出部 3 1 :臉孔檢測部 3 2 :臉孔畫像產生部 4 1 :表情評價部 42 :報告控制部 43 :記錄動作控制部 44 :判別軸資訊 -37-Eexp : Expression evaluation 値 · · Input face image vector after PCA processing However, the information of Fisher's vector can also be converted into information in Spxl (held by the original image data before PCA processing; $). Equations (3) and (4) are based on the image space and the Fisher vector, respectively, in the pixel space Spxl*5, and the relationship is shown as a phantom, as shown in Figure (1) and (4). As shown in Fig. 10, in general, except for the vector component other than the principal component AN obtained, the department inputs the average 値 of the image and approximates it by a fixed number C. Therefore, the general inner product calculus shown in the equation can be calculated as equation (5-2) - the inner product of the vector above the pixel space Spxl is equivalent to the equation (5-2), since the system can be calculated in advance In il in pixel space: dimension space 〃 enter face painting: vector and represent 1 〇. As the formula (3 PCA processing "as all, as in the equation (5 · 1 , as the representation of the Fisher's space component Spxl -25- 200828993 in the Fisher vector component and the fixed number C subtraction results, therefore, expression evaluation The portion 4 1 holds the subtraction result and the fixed number C as the discriminating axis information 44. 
Then, when the vector of a face image detected from the captured image is given, the expression evaluation value can be obtained by carrying out the inner product calculation of equation (5-2) on that vector, without applying PCA processing to it. The evaluation calculation of equation (5-2) for one face requires at most (48 × 48) subtractions, multiplications, and additions, and in practice amounts to computing coefficients corresponding to only about forty principal components μ1 to μN. Compared with computing the inner product of vectors in the PCA space S_pca, the amount of calculation is therefore greatly reduced without any loss of accuracy in the expression evaluation, and the expression evaluation value E_exp can be calculated instantly, keeping pace with the captured image signal before it is recorded. Even compared with evaluating the detected face image by matching it against a large number of face-image templates, the processing load of this calculation is far lower. High-precision expression evaluation by template matching usually requires extracting features such as the eyes and the mouth from the detected face image and processing each of them separately. In the method of this embodiment, by contrast, the data of the detected face image is normalized to a fixed size, partially masked if necessary, and treated simply as a vector; in that state, the evaluation is completed by a single inner product, a simple sequence of subtractions, multiplications, and additions.

Fig. 11 shows a calculation example for outputting the expression evaluation value as a number. In this embodiment, as one example, the averages of the distributions in the PCA space of the smiling-face images and of the normal-expression face images among the sample images are obtained from the results of the PCA processing, and the projection points of these averages onto the discriminant axis A_d are determined. The midpoint between the two average projection points is then used as the reference for converting the expression evaluation value E_exp into a number. That is, as shown in Fig. 11, the distance between the projection point of the input face image on the discriminant axis and that midpoint is taken as the expression evaluation value E_exp, with the side on which the smile sample images are distributed treated as positive. The closeness of the detected face to a smile or to the normal expression is thereby output as a continuous number, and the higher the expression evaluation value E_exp, the stronger the smile is judged to be.

Next, the processing sequence of the imaging apparatus in the expression-response recording mode is summarized with reference to a flowchart. Fig. 12 is a flowchart showing the flow of that processing.

[Step S11] The face detection unit 31 detects faces from the data of the captured image and outputs the position and size information of all detected faces to the face image generation unit 32 and the report control unit 42.

[Step S12] The face image generation unit 32 cuts out the data of each detected face region from the data of the captured image, based on the position and size information received from the face detection unit 31.
[Step S13] The face image generation unit 32 normalizes the data of each cut-out face region to a specific number of pixels (here, 48 × 48 pixels), further applies mask processing to the regions unnecessary for expression detection, and outputs the processed images to the expression evaluation unit 41.

[Step S14] The expression evaluation unit 41 reads the discriminant axis information 44, computes the inner product of the vector obtained from one face image supplied by the face image generation unit 32 and the vector of the discriminant axis, and thereby calculates the expression evaluation value. The calculated value is stored temporarily in, for example, RAM.

[Step S15] The expression evaluation unit 41 determines whether expression evaluation has been completed for all detected faces. If it has not, step S14 is performed again for the remaining faces; if it has, step S16 is executed.

[Step S16] The report control unit 42, based on the expression evaluation values calculated up to step S15 and the position and size information of the corresponding faces, outputs expression information such as the smile scores and indicator frames to the graphics processing circuit 15, where it is composited and shown on the display 16.

[Step S17] The expression evaluation unit 41 judges whether the expression evaluation values calculated in step S14 exceed the threshold for all faces. If any value does not exceed the threshold, the unit instructs the camera signal processing circuit 14 to return to step S11 and perform face detection again, whereupon the next round of face-image detection and expression evaluation begins. When all expression evaluation values exceed the threshold, step S18 is executed.

[Step S18] The expression evaluation unit 41 requests the recording control unit 43 to record the captured image data in the recording device 18. Signal processing for recording is thereby applied to the captured image, and the encoded image data is recorded in the recording device 18.

Through the above processing, expression evaluation values are calculated for all detected faces, and information reflecting those values is reported to the subjects as displayed information, prompting them to assume expressions suitable for being photographed. Because the captured image data is then recorded automatically once every subject shows such an expression, images that are highly satisfying to the subjects and the photographer can be recorded reliably.

The decision criterion of step S17 is only one example; it is not essential to record the image data only when all expression evaluation values exceed the threshold. For example, the image data may be recorded when the values of a certain proportion of the detected faces exceed the threshold. Alternatively, it may be recorded when the values of a certain number of faces exceed the threshold, which prevents expression evaluation from affecting the outcome for unnecessary faces captured by chance.
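The per-face scoring of steps S13–S14 and the recording criteria discussed for step S17 can be sketched as follows. This is a minimal illustration, not the patent's implementation: `axis_px` and `c` stand in for the precomputed pixel-space discriminant axis and the constant C of equation (5-2), the random "faces" stand in for normalized 48 × 48 face images, and all names and thresholds are illustrative.

```python
import numpy as np

FACE_SIZE = 48 * 48          # normalized face image, flattened to a vector
rng = np.random.default_rng(7)

# Stand-ins for quantities computed offline (discriminant axis information 44):
axis_px = rng.standard_normal(FACE_SIZE)   # discriminant axis in pixel space
c = 0.5                                    # the fixed constant C

def expression_score(face_vec):
    """E_exp for one normalized face: a single pixel-space inner product."""
    return float(axis_px @ face_vec) - c

def should_record(scores, threshold, mode="all", ratio=0.5, count=2):
    """Recording criteria discussed for step S17: all faces above the
    threshold, a proportion of them, or a fixed number of them."""
    above = [s > threshold for s in scores]
    if mode == "all":
        return all(above)
    if mode == "ratio":
        return sum(above) >= ratio * len(above)
    if mode == "count":
        return sum(above) >= count
    raise ValueError(mode)

# One evaluation round over three detected faces:
faces = [rng.standard_normal(FACE_SIZE) for _ in range(3)]
scores = [expression_score(f) for f in faces]
record = should_record(scores, threshold=0.0, mode="all")
```

Swapping `mode` between `"all"`, `"ratio"`, and `"count"` reproduces the three alternatives above without changing the scoring path.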
In the expression-response recording mode described above, the captured image is recorded automatically simply when the expression evaluation value exceeds a specific threshold. As another example, however, the apparatus may be configured so that, when the photographer presses the shutter-release key, the subjects' expressions are evaluated after a fixed time and the captured image is recorded automatically once the expressions become suitable for imaging. In that case, for example, the microcomputer 19 need only start a timer when it detects the press of the shutter-release key and begin the processing of Fig. 12 after the fixed time has elapsed. With this processing, the photographer who pressed the shutter-release key can also move into the imaging range reliably, which improves operability.

In the description above, the two expressions "smile" and "normal expression" are defined and the degree to which a face approaches a smile is discriminated. The discrimination may instead be made between "smile" and expressions other than a smile (called "non-smile"). The "non-smile" expression may include a plurality of expressions that are not smiles, such as serious faces, crying faces, and angry faces. In that case, the "non-smile" set is obtained from the averages of the sample face images corresponding to these plural expressions, and the discriminant axis for the LDA processing is calculated on the basis of this set and the "smile" set.

Furthermore, the expression evaluation value need not represent closeness to a single expression such as "smile." For example, specific plural expressions such as "smile" and "serious face" may be regarded as suitable for imaging, and the value may indicate how close the captured face is to that group of expressions. In that case as well, a set of "expressions suitable for imaging" is obtained from the averages of the corresponding sample face images, and the discriminant axis for the LDA processing is calculated from this set and a set of "expressions unsuitable for imaging."

[Second Embodiment]

Fig. 13 shows the appearance of an imaging apparatus according to a second embodiment of the present invention.

In this embodiment, information reflecting the expression evaluation value is reported to the subjects using part of the LED light-emitting unit 21 in the configuration of Fig. 1. The imaging apparatus 110 shown in Fig. 13 has, on the face carrying the imaging lens 111 and the flash light-emitting unit 112, a dedicated LED light-emitting unit 21a for reporting that information. In the LED light-emitting unit 21a, a plurality of LEDs 21b to 21f are arranged on a single line, and the information reflecting the expression evaluation value (here, the smile score) is conveyed to the subject side by the number of LEDs lit from one end. With this configuration, even an imaging apparatus such as a digital camera without a display whose face can be turned toward the subject can report the information to the subjects and assist the imaging operation so that a suitable image is recorded. In addition, using small light-emitting devices such as LEDs keeps any enlargement of the apparatus body to a minimum.
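The score-to-LED-count mapping of the unit 21a can be sketched as below. This is a hypothetical linear quantization over five LEDs (21b–21f); it assumes the smile score has been normalized to a known range, and the range bounds and LED count are illustrative rather than taken from the patent.

```python
def leds_lit(score, smin=0.0, smax=1.0, n_leds=5):
    """Map a smile score onto how many of the n_leds in the row are lit."""
    if smax <= smin:
        raise ValueError("empty score range")
    frac = (score - smin) / (smax - smin)
    frac = min(max(frac, 0.0), 1.0)     # clamp out-of-range scores
    return round(frac * n_leds)

# A score climbing toward the recording threshold lights LEDs one by one:
assert leds_lit(0.0) == 0
assert leds_lit(1.0) == 5
```

Clamping keeps the display stable if the evaluation value briefly leaves its nominal range between frames.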
Further, in the LED light-emitting unit 21a, the endmost LED on the far side (LED 21f in the figure) may be assigned to indicate the smile score at which the image is recorded automatically, and may be lit in a color or brightness different from the other LEDs. This clearly reports to the subjects the smile score at which automatic recording takes place, and also lets them recognize that automatic recording has been performed.

[Third Embodiment]

Fig. 14 shows the appearance of an imaging apparatus according to a third embodiment of the present invention.

The imaging apparatus 120 shown in Fig. 14 has, on the face carrying the imaging lens 121 and the flash light-emitting unit 122, an LED light-emitting unit 21g with only a single LED. This LED light-emitting unit 21g can report the smile score to the subjects by, for example, changing the blink rate of the LED, or changing its brightness or color, in accordance with the expression evaluation value. For example, control may be performed so that as the expression evaluation value increases, the color of the LED changes gradually in the order red, green, blue, or the LED gradually brightens. Using only a single LED in this way prevents enlargement of the apparatus body even more effectively.

When the imaging apparatus 120 also has a conventional self-timer function, the LED used by that function can double as the LED for expression evaluation. For example, during self-timer operation, the LED blinks faster and faster as time elapses after the shutter-release key is pressed, until recording takes place; in the expression evaluation mode or the expression-response recording mode, the higher the expression evaluation value, the faster the LED blinks.
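A sketch of this single-LED behavior: the blink interval shrinks (blinking speeds up) as the expression evaluation value rises, and the same curve can serve the self-timer by feeding it elapsed time instead of a score. The score range [0, 1] and the interval bounds are assumptions for illustration.

```python
def blink_interval_ms(score, slow_ms=800.0, fast_ms=100.0):
    """Higher score -> shorter interval -> faster blinking (single LED 21g)."""
    s = min(max(score, 0.0), 1.0)       # clamp the score to [0, 1]
    return slow_ms + s * (fast_ms - slow_ms)

assert blink_interval_ms(0.0) == 800.0   # weak smile: slow blinking
assert blink_interval_ms(1.0) == 100.0   # strong smile: fast blinking
```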
With such a configuration, information reflecting the expression evaluation value can be reported to the subjects without changing the basic construction or appearance of a conventional imaging apparatus. The light-emitting unit that is shared in this way is not limited to the self-timer lamp; for example, the photometry light-emitting unit used for exposure control may be shared instead. In that case, however, the unit must emit visible light at least during expression evaluation.

[Fourth Embodiment]

Fig. 15 shows the appearance of an imaging apparatus according to a fourth embodiment of the present invention.

In each of the embodiments above, information reflecting the expression evaluation value is reported visually. In this embodiment, by contrast, the sound output unit 22 shown in Fig. 2 is used and the information is reported by sound. The imaging apparatus 130 shown in Fig. 15 has a speaker 22a on the side carrying the imaging lens 131, and reproduces and outputs different sounds in accordance with the expression evaluation value. As with the text information shown in Fig. 5 or Fig. 6, for example, the output sound may urge the subjects more strongly to smile the lower the degree of smiling is. For this purpose, the imaging apparatus 130 need only hold in advance the sound data to be reproduced, associated stepwise with the expression evaluation value. Methods of changing the pitch or the output interval of the sound, or of outputting different melodies, in accordance with the expression evaluation value may also be used, and reporting by sound may be combined with visual information.

[Fifth Embodiment]

Fig.
16 shows the appearance of a PC (personal computer) according to a fifth embodiment of the present invention.

The expression evaluation function of the embodiments above, the function of reporting information reflecting the expression evaluation value, and the function of automatically recording images in accordance with that value can also be implemented on various computers, such as the PC 140 shown in Fig. 16. Fig. 16 shows, as one example, a notebook PC 140 in which a display 141 made of an LCD, a keyboard 142, and the main body are formed integrally. In this PC 140, an imaging unit 143 is provided integrally at, for example, the upper edge of the display 141, so that the user operating the PC 140 can be imaged. The imaging unit 143 may instead be external, connected through a communication interface such as USB (Universal Serial Bus).

On such a computer, these functions are realized by executing a program that describes their processing contents. The program can be recorded on a computer-readable recording medium, such as a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory. To distribute the program, portable recording media such as optical discs on which it is recorded can be sold; the program can also be stored in the storage device of a server computer and transferred over a network to other computers. A computer that executes the program stores the program, recorded on a portable recording medium or transferred from the server computer, in its own storage device, then reads it from that device and performs processing according to it. The computer may also read the program directly from the portable recording medium and execute the corresponding processing, or may execute processing sequentially according to the program each time part of it is transferred from the server computer.

[Brief Description of the Drawings]

Fig. 1 is a block diagram showing the essential configuration of an imaging apparatus according to a first embodiment of the present invention.
Fig. 2 is a block diagram showing the functions provided in the imaging apparatus to realize the expression evaluation mode and the expression-response recording mode.
Fig. 3 is a diagram outlining the operation in the expression evaluation mode.
Fig. 4 is a diagram for explaining the operation of the bar graph showing the smile score.
Fig. 5 is a diagram showing an example of a screen display using the smile score with the bar graph.
Fig. 6 is a diagram showing a first example of a screen display of information reflecting the expression evaluation values when a plurality of faces are detected.
Fig. 7 is a diagram showing a second example of a screen display of information reflecting the expression evaluation values when a plurality of faces are detected.
Fig. 8 is a diagram conceptually showing the information to be generated in advance for expression evaluation and the flow by which it is generated.
Fig. 9 is a diagram for explaining the mask processing applied to the sample images input for the PCA processing.
Fig. 10 is a diagram conceptually showing the relationship between the discriminant axis and an input face image in the pixel space and the PCA space.
Fig. 11 is a diagram showing a calculation example for outputting the expression evaluation value as a number.
Fig. 12 is a flowchart showing the flow of processing of the imaging apparatus in the expression-response recording mode.
Fig. 13 is a diagram showing the appearance of an imaging apparatus according to a second embodiment of the present invention.
Fig. 14 is a diagram showing the appearance of an imaging apparatus according to a third embodiment of the present invention.
Fig. 15 is a diagram showing the appearance of an imaging apparatus according to a fourth embodiment of the present invention.
Fig. 16 is a diagram showing the appearance of a PC according to a fifth embodiment of the present invention.

[Description of Reference Numerals]

11: optical block; 11a: driving device; 12: image sensor; 12a: timing generator (TG); 13: analog front end (AFE) circuit; 14: camera signal processing circuit; 15: graphics processing circuit; 16: display; 17: image encoder; 18: recording device; 19: microcomputer; 20: input unit; 21: LED light-emitting unit; 22: sound output unit; 31: face detection unit; 32: face image generation unit; 41: expression evaluation unit; 42: report control unit; 43: recording operation control unit; 44: discriminant axis information

Claims (1)

200828993 十、申請專利範圍 1 一種攝像裝置,係爲使用固態攝像元 做攝像之攝像裝置,其特徵爲,具備有: 臉孔檢測部,其係在經由攝像所得之畫像 至記錄媒體爲止的期間中,從該畫像訊號中檢 臉孔;和 表情評價部,其係對所檢測出之臉孔的表 並計算出代表此表情在特定之表情與其之外的 有多接近前述特定之表情的程度之表情評價値 報告部,其係將反映了所計算出之前述表 報告資’報告給被攝影者。 2 .如申請專利範圍第1項所記載之攝像 ,前述表情評價部,係在以被判斷爲包含有前 情的複數臉孔資料爲基礎之第1臉孔集合,和 包含有前述特定之表情以外的表情的複數臉孔 之第2臉孔集合之間,計算出代表其之接近前 集合的程度之前述表情評價値。 3.如申請專利範圍第1項所記載之攝像 ,前述表情評價部,作爲前述特定之表情,係 並將所檢測出之臉孔之接近笑臉的程度,作爲 價値而計算出。 4 .如申請專利範圍第3項所記載之攝像 ,前述表情評價部,作爲前述特定之表情以外 適用通常之表情。 件來將畫像 訊號被記錄 測出人物的 情作評價, 表情之間, ;和 情評價値的 裝置,其中 述特定之表 以被判斷爲 資料爲基礎 述第1臉孔 裝置,其中 適用笑臉, 前述表情評 裝置,其中 的表情,係 -38 - 200828993 5 .如申請專利範圍第3項所記載之攝像裝置,其中 ’前述表情評價部,作爲前述特定之表情以外的表情,係 適用笑臉以外之表情。 6·如申請專利範圍第1項所記載之攝像裝置,其中 ,係更進而具備有畫像記錄控制部,其係當前述表情評價 値超過特疋之臨界値時》將經由攝像所得到之畫像訊號自 動記錄至記錄媒體中。 7 ·如申請專利範圍第6項所記載之攝像裝置,其中 ,係可將前述臨界値因應於使用者之輸入操作而作變更。 8 ·如申請專利範圍第6項所記載之攝像裝置,其中 ,前述報告部,係將因應於前述表情評價値而變化之前述 報告資訊,和將前述臨界値之位置對應於前述報告資訊之 變化而作展示的臨界値資訊,作爲視覺性的資訊而作報告 〇 9 ·如申請專利範圍第8項所記載之攝像裝置,其中 ,前述報告部,係將以棒之長度來表示前述表情評價値之 大小的棒狀顯示畫像,顯示於顯示面係朝向被攝影者側的 顯示裝置上,同時,將表示前述臨界値之位置的位置表示 畫像顯示於前述棒狀表示畫像上, 在因應於使用者所致之輸入操作而對前述臨界値作變 更的同時,因應於前述臨界値之變更,在前述棒狀顯示畫 像上之前述位置表示畫像的位置亦會變化。 10.如申請專利範圍第6項所記載之攝像裝置,其中 ,前述臉孔檢測部係檢測出複數之臉孔,當前述表情評價 -39- 200828993 部針對所檢測出之複數的臉孔而分別計算出前述表情評價 値時,前述畫像記錄控制部,係當相關於所檢測出之所有 的臉孔之前述表情評價値超過前述臨界値時,將畫像訊號 自動作記錄。 11. 如申請專利範圍第6項所記載之攝像裝置,其中 ,前述臉孔檢測部係檢測出複數之臉孔,當前述表情評價 部針對所檢測出之複數的臉孔而分別計算出前述表情評價 値時,前述畫像記錄控制部,係當在所檢測出之臉孔中, 相關於特定數量又或是特定比例之臉孔的表情評價値超過 前述臨界値時,將畫像訊號自動作記錄。 12. 
如申請專利範圍第6項所記載之攝像裝置,其中 ,前述畫像記錄控制部,若是檢測出使用者所致之畫像記 錄操作,則在一定時間後,從前述表情評價部取得前述表 情評價値,並當該表情評價値超過特定之臨界値時,將經 由攝像所得到之畫像訊號自動記錄至記錄媒體中。 1 3 .如申請專利範圍第1項所記載之攝像裝置,其中 ,前述報告部,係作爲前述報告資訊,而對於被攝影者側 ,報告將前述表情評價値之大小以視覺性來展示的資訊。 14·如申請專利範圍第13項所記載之攝像裝置,其 中,前述報告部,係在顯示面爲朝向被攝影者側之顯示裝 置上,將因應於前述表情評價値之前述報告資訊,與攝像 中之畫像一同作顯不。 1 5 ·如申請專利範圍第1 4項所記載之攝像裝置,其 中,前述報告部,係作爲前述報告資訊,而將以棒之長度 40- 200828993 來表示前述表情評價値之大小的棒狀顯示畫像,顯示於前 述顯示裝置上。 1 6 ·如申請專利範圍第1 4項所記載之攝像裝置,其 中,前述報告部,作爲前述報告資訊,係在被顯示於前述 顯不裝置之被攝影者的臉孔周圍以框架體來作表示,並因 應於前述表情評價値,而使前述框架體之顏色、亮度、線 條種類之至少一者作變化。 17·如申請專利範圍第14項所記載之攝像裝置,其 中,前述報告部,係作爲前述報告資訊,而因應於前述表 情評價値之大小,作爲相異之文字資訊並顯示於前述顯示 裝置上。 1 8 .如申請專利範圍第1 4項所記載之攝像裝置,其 中,前述表情評價部,當藉由前述臉孔檢測部而檢測出了 複數之臉孔的情況時,係對於每一被檢測出之臉孔,分別 計算出前述表情評價値, 前述報告部,係將因應於每一臉孔之前述表情評價値 的前述報告資訊,分別與被顯示於前述顯示裝置上之被攝 影者的臉孔之各個附加對應並作顯示。 19·如申請專利範圍第1 3項所記載之攝像裝置,其 中,前述報告部,係由點燈面被朝向被攝影者側之複數的 點燈部所成,並將前述表情評價値之大小,以前述點燈部 之點燈數來作顯示。 20·如申請專利範圍第13項所記載之攝像裝置,其 中,前述報告部,係由點燈面被朝向被攝影者側之1個 -41 - 200828993 的點燈部所成,並將前述表情評價値之大小,以 部之顏色的變化、又或是亮度之變化、又或是點 變化來作顯示。 2 1 ·如申請專利範圍第20項所記載之攝像 中,前述點燈部,係被兼用於將前述表情評價値 表示之功能,與其他的功能。 22. 如申請專利範圍第20項所記載之攝像 中,係更進而具備有: 自拍計時(self timer)記錄控制部,其係以 快門釋放(shutter release)鍵之壓下後起一定 將經由攝像所得到之畫像訊號編碼化,並自動記 媒體中的方式來作控制;和 自拍計時用點燈部,其係在前述自拍計時記 所致之控制時間中,在從前述快門釋放鍵之壓下 前述記錄媒體之畫像訊號的記錄爲止之間,對被 告自拍計時器係爲在動作中一事, 用以顯示前述表情評價値之大小的前述點燈 兼用爲前述自拍計時器用點燈部。 23. 如申請專利範圍第1項所記載之攝像裝 ,前述報告部,係作爲前述報告資訊,而輸出因 表情評價値之大小而相異的報告聲音。 24. 如申請專利範圍第1項所記載之攝像裝 ,前述表情評價部,係將根據分別對應於前述特 以及其之外之表情的臉孔之樣本畫像的集合’而 前述點燈 滅速度之 裝置,其 之大小作 裝置,其 從檢測出 時間後, 錄至記錄 錄控制部 起直到對 攝影者報 部,係被 置,其中 應於前述 置,其中 定之表情 進行線形 -42- 200828993 判別分析所得到之判別軸的資訊預先作保持,並從基於經 由前述臉孔檢測部所檢測出之臉孔的畫像訊號之向量的對 於前述判別軸上之向量的投影成分之大小,來計算出前述 表情評價値。 25. 如申請專利範圍第24項所記載之攝像裝置,其 中’目U述判別軸’係將把基於目I[述樣本畫像之向量的資料 藉由主成分分析而作維度壓縮後的資料作爲基礎,而計算 出。 26. 如申請專利範圍第25項所記載之攝像裝置,其 中,前述表情評價部,係將前述判別軸上之向量的資料, 作爲持有在維度壓縮前之維度的向量之資料而預先作保持 ,並藉由演算出該向量與根據經由前述臉孔檢測部所檢測 出之臉孔的畫像訊號之向量間的內積,來計算出前述表情 評價値。 27. 
如申請專利範圍第24項所記載之攝像裝置,其 中,前述表情評價部,係將經由前述臉孔檢測部所檢測出 之臉孔的畫像訊號,正規化爲在前述判別軸之算出時所使 用的前述樣本畫像之畫像尺寸,並使用正規化後之畫像訊 號來進行前述表情評價値之算出。 2 8 ·如申請專利範圍第2 7項所記載之攝像裝置,其 中,前述表情評價部,係對於前述正規化後之畫像訊號, 施加將對於表情判別不會造成實質上影響之畫像區域的訊 號作遮蓋之遮罩處理,並使用前述遮罩處理後之畫像訊號 ’來進行前述表情評價値之算出。 -43- 200828993 29· —種攝像方法,係爲用以使用固態攝像元件來將 畫像做攝像之攝像方法,其特徵爲: 在經由攝像所得之畫像訊號被記錄至記錄媒體爲止的 期間中’臉孔檢測部,係從該畫像訊號中檢測出人物的臉 孔; 表情評價部,係對經由前述臉孔檢測部所檢測出之臉 孔的表情作評價,並計算出代表此表情在特定之表情與其 之外的表情之間,有多接近前述特定之表情的程度之表情 評價値; 報告部’係將因應於藉由前述表情評價部所計算出之 前述表情評價値的報告資訊,報告給被攝影者。 3 0· —種表情評價裝置,係爲對使用固態攝像元件所 攝像之臉孔的表情作評價之表情評價裝置,其特徵爲,具 備有: 臉孔檢測部,其係在經由攝像所得之畫像訊號被記錄 至記錄媒體爲止的期間中,從該畫像訊號中檢測出人物的 臉孔;和 表情評價部,其係對所檢測出之臉孔的表情作評價, 並計算出代表此表情在特定之表情與其之外的表情之間, 有多接近前述特定之表情的程度之表情評價値;和 報告資訊產生部,其係產生因應於前述計算出之表情 評價値而變化之用以對被攝影者作報告的報告資訊。 3 1 . —種表情評價程式,係爲用以使電腦實行對使用 固態攝像元件所攝像之臉孔的表情作評價處理之表情評價 -44- 200828993 程式,其特徵爲 臉孔檢測部 至記錄媒體爲止 臉孔;和 表情評價部 並計算出代表此 有多接近前述特 報告資訊產 評價値而變化之 ,使前述電腦作爲下述之各部而起作用: ,其係在經由攝像所得之畫像訊號被記錄 的期間中,從該畫像訊號中檢測出人物的 ,其係對所檢測出之臉孔的表情作評價, 表情在特定之表情與其之外的表情之間, 定之表情的程度之表情評價値;和 生部,其係產生因應於前述計算出之表情 用以對被攝影者作報告的報告資訊。 -45-200828993 X. Patent Application No. 1 An image pickup apparatus is an image pickup apparatus that performs image pickup using a solid-state image sensor, and is characterized in that: a face detection unit is provided in a period from the image obtained by the image capture to the recording medium And detecting the face hole from the image signal; and the expression evaluation unit, which is for the table of the detected face and calculates the degree to which the expression is close to the specific expression in addition to the specific expression and the expression. The expression evaluation 値 report department will reflect the calculated report of the previous report 'reported to the photographer. 2. The imaging evaluation unit according to the first aspect of the patent application, wherein the expression evaluation unit is a first face set based on a plurality of face data determined to include the affair, and includes the specific expression. 
The expression evaluation 値 representing the degree of the proximity of the front set is calculated between the second face sets of the plural faces of the other expressions. 3. The imaging evaluation unit according to the first aspect of the patent application, wherein the expression evaluation unit calculates the degree of the face close to the smile as the specific expression as the price. 4. In the imaging described in the third paragraph of the patent application, the expression evaluation unit applies a normal expression as the specific expression. The device for recording the character image is recorded, and the device is evaluated, and the device is evaluated, and the specific table is described as the first face device, wherein the smile face is applied. The above-mentioned expression evaluation device is an image pickup device described in claim 3, wherein the expression evaluation unit described above is applied to an expression other than the specific expression. expression. 6. The imaging device according to the first aspect of the invention, further comprising an image recording control unit, wherein the image signal obtained by the image is obtained when the expression evaluation exceeds a critical threshold of the feature Automatically record to the recording media. 7. The image pickup apparatus according to claim 6, wherein the threshold is changed in response to an input operation by a user. The imaging device according to claim 6, wherein the report unit changes the report information that changes in response to the expression evaluation, and changes the position of the threshold to the change of the report information. The information on the criticality of the display is reported as visual information. The image pickup apparatus described in claim 8 of the patent application, wherein the report section indicates the expression evaluation by the length of the stick. 
The bar-shaped display image of the size is displayed on the display device on the side of the display side of the display surface, and the position display image indicating the position of the critical point is displayed on the bar-shaped display image, in response to the user. At the same time as the input operation, the change of the threshold is changed, and the position of the image is also changed at the position on the bar-shaped display image in response to the change of the threshold. 10. The imaging device according to claim 6, wherein the face detecting unit detects a plurality of faces, and the expression evaluation-39-200828993 is separately for the detected plurality of faces. When the expression evaluation 値 is calculated, the image recording control unit automatically records the image signal when the expression evaluation of all the detected faces exceeds the threshold 値. 11. The imaging device according to claim 6, wherein the face detecting unit detects a plurality of faces, and the expression evaluating unit calculates the expressions for the detected plurality of faces. In the case of evaluation, the image recording control unit automatically records the image signal when the expression evaluation of the face corresponding to a specific number or a specific ratio exceeds the threshold value in the detected face. 12. The imaging device according to the sixth aspect of the invention, wherein the image recording control unit obtains the expression evaluation from the expression evaluation unit after detecting the image recording operation by the user.値, and when the expression evaluation 値 exceeds a certain critical threshold, the image signal obtained by the imaging is automatically recorded in the recording medium. 
In the imaging device according to the first aspect of the invention, the report unit is configured to report information indicating that the size of the expression evaluation is visually displayed on the subject side as the report information. . The imaging device according to claim 13, wherein the report unit is configured to display the information on the display side facing the subject side, and to report the report information in response to the expression evaluation. The portraits in the middle show together. In the imaging device according to the above-mentioned report, the report unit displays the bar display of the size of the expression evaluation value by the length of the bar 40-200828993 as the report information. The image is displayed on the aforementioned display device. The imaging device according to claim 14, wherein the report unit is configured as a frame body around the face of the subject displayed on the display device as the report information. It is indicated that at least one of the color, the brightness, and the type of the line of the frame body is changed in accordance with the expression evaluation. The imaging device according to claim 14, wherein the report unit is used as the report information, and is displayed on the display device as disparate text information in response to the size of the expression evaluation file. . The imaging device according to claim 14, wherein the expression evaluation unit detects each of the plurality of faces when the face detection unit detects the plurality of faces. In the face, the expression evaluation is calculated separately, and the report unit evaluates the report information of the expression corresponding to each face of the face, respectively, with the face of the photographer displayed on the display device. Each of the holes is additionally associated and displayed. 
19. The imaging device according to claim 13, wherein the report unit comprises a lighting unit having a plurality of lamps facing the subject, and indicates the magnitude of the expression evaluation value by the number of lamps that are lit.

20. The imaging device according to claim 13, wherein the report unit comprises a lighting unit having a single lamp facing the subject, and indicates the magnitude of the expression evaluation value by a change in the color of the lamp, a change in its brightness, or a change in its blinking interval.

21. The imaging device according to claim 20, wherein the lighting unit is also used for functions other than indicating the expression evaluation value.

22. The imaging device according to claim 20, further comprising: a self-timer recording control unit that, after detecting that a shutter release button has been pressed, encodes the image signal obtained by imaging and records it on the recording medium once a set time has elapsed; and a self-timer lighting unit that is lit from when the shutter release button is pressed until the image signal is recorded on the recording medium under the control of the self-timer recording control unit, wherein the self-timer lighting unit is used as the aforementioned lighting unit.

23. The imaging device according to claim 1, wherein the report unit outputs, as the report information, a report sound that differs according to the magnitude of the expression evaluation value.

24. The imaging device according to claim 1, wherein the expression evaluation unit holds in advance information on a discriminant axis obtained by linear discriminant analysis based on a set of sample images of faces showing the specific expression and of faces showing expressions other than the specific expression, and calculates the expression evaluation value from the magnitude of the projection component, onto the discriminant axis, of a vector based on the image signal of the face detected by the face detecting unit.

25. The imaging device according to claim 24, wherein the discriminant axis is calculated on the basis of data obtained by dimensionally compressing the vectors based on the sample images by principal component analysis.

26. The imaging device according to claim 25, wherein the expression evaluation unit holds the data of the vector on the discriminant axis as data of a vector having the dimensionality before dimensional compression, and calculates the expression evaluation value by computing the inner product between that vector and the vector based on the image signal of the face detected by the face detecting unit.

27. The imaging device according to claim 24, wherein the expression evaluation unit normalizes the image signal of the face detected by the face detection unit to the image size of the sample images used when the discriminant axis was calculated, and calculates the expression evaluation value using the normalized image signal.

28. The imaging device according to claim 27, wherein the expression evaluation unit applies mask processing to regions of the normalized image signal that do not substantially affect the expression determination, and calculates the expression evaluation value using the image signal after the mask processing.
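Claims 24 to 26 describe scoring as a projection onto a discriminant axis learned from two labeled sample sets, with the axis stored at full (pre-PCA) dimensionality so that evaluating a face costs a single inner product. A sketch under the assumption that the linear discriminant analysis is a two-class Fisher discriminant; the patent does not fix the exact variant, and all names here are illustrative:

```python
import numpy as np

def fisher_axis(smile_vecs, other_vecs):
    """Two-class Fisher discriminant axis in the (possibly PCA-compressed)
    feature space: w proportional to Sw^{-1} (m1 - m2), returned unit-length."""
    m1, m2 = smile_vecs.mean(axis=0), other_vecs.mean(axis=0)
    # Within-class scatter, regularized slightly for numerical stability.
    sw = np.cov(smile_vecs.T) + np.cov(other_vecs.T)
    w = np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)

def expression_score(face_vec, axis):
    """Evaluation value = projection component of the face vector on the
    discriminant axis, i.e. one inner product per face (cf. claim 26)."""
    return float(face_vec @ axis)
```

Holding the axis back-projected to the original dimensionality, as claim 26 specifies, avoids running the PCA compression on every captured face at evaluation time; claims 27 and 28 then say the face image is first resized to the sample-image size and masked down to expression-relevant regions before being vectorized and scored.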
29. An imaging method for imaging a subject using a solid-state imaging device, wherein: during the period in which an image signal obtained by imaging is recorded on a recording medium, a face detecting unit detects the face of a person from the image signal; an expression evaluation unit evaluates the expression of the face detected by the face detecting unit and calculates an expression evaluation value representing, between a specific expression and expressions other than the specific expression, how close the expression is to the specific expression; and a report unit reports, to the subject, report information based on the expression evaluation value calculated by the expression evaluation unit.

30. An expression evaluation device that evaluates the expression of a face imaged by a solid-state imaging device, comprising: a face detection unit that detects the face of a person from an image signal obtained by imaging during the period in which the image signal is recorded on a recording medium; an expression evaluation unit that evaluates the expression of the detected face and calculates an expression evaluation value representing, between a specific expression and expressions other than the specific expression, how close the expression is to the specific expression; and a report information generation unit that generates report information for reporting to the subject in accordance with the calculated expression evaluation value.

31. An expression evaluation program for evaluating the expression of a face imaged by a solid-state imaging device, the program causing a computer to function as: a face detection unit that detects the face of a person from an image signal obtained by imaging during the period in which the image signal is recorded on a recording medium; an expression evaluation unit that evaluates the expression of the detected face and calculates an expression evaluation value representing, between a specific expression and expressions other than the specific expression, how close the expression is to the specific expression; and a report information generation unit that generates report information for reporting to the subject in accordance with the calculated expression evaluation value.
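Taken together, claims 29 to 31 define one per-frame loop: detect, evaluate, report toward the subject, and conditionally record. A schematic of that flow, with hypothetical callables standing in for the patent's functional units:

```python
def monitoring_loop(frames, detect_faces, score_expression, report, record, threshold):
    """Per-frame flow of the claimed method: detect faces in the live image
    signal, score each face's closeness to the target expression, report the
    scores back toward the subject, and record automatically once every face
    passes the threshold.

    detect_faces, score_expression, report, and record are hypothetical
    callables; none of these names come from the patent itself.
    """
    for frame in frames:
        faces = detect_faces(frame)
        if not faces:
            continue  # nothing to evaluate in this frame
        scores = [score_expression(f) for f in faces]
        report(scores)  # e.g. bar display, LED lamps, or sound toward the subject
        if all(s > threshold for s in scores):
            record(frame)
```

The report step is what distinguishes the claimed scheme from a plain smile shutter: the subject receives live feedback on the evaluation value and can adjust their expression before the recording condition is met.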
TW096128018A 2006-08-02 2007-07-31 Imaging apparatus and method, and facial expression evaluating device and program TW200828993A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006211000A JP4197019B2 (en) 2006-08-02 2006-08-02 Imaging apparatus and facial expression evaluation apparatus

Publications (2)

Publication Number Publication Date
TW200828993A true TW200828993A (en) 2008-07-01
TWI343208B TWI343208B (en) 2011-06-01

Family

ID=39050846

Family Applications (1)

Application Number Title Priority Date Filing Date
TW096128018A TW200828993A (en) 2006-08-02 2007-07-31 Imaging apparatus and method, and facial expression evaluating device and program

Country Status (5)

Country Link
US (6) US8416996B2 (en)
JP (1) JP4197019B2 (en)
KR (1) KR101401165B1 (en)
CN (4) CN101163199B (en)
TW (1) TW200828993A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI399674B (en) * 2008-08-28 2013-06-21 Japan Display West Inc An image recognition method, an image recognition device, and an image input / output device

Families Citing this family (115)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7945653B2 (en) * 2006-10-11 2011-05-17 Facebook, Inc. Tagging digital media
JP4197019B2 (en) * 2006-08-02 2008-12-17 ソニー株式会社 Imaging apparatus and facial expression evaluation apparatus
US8977631B2 (en) * 2007-04-16 2015-03-10 Ebay Inc. Visualization of reputation ratings
JP4865631B2 (en) * 2007-05-11 2012-02-01 オリンパス株式会社 Imaging device
JP4453721B2 (en) * 2007-06-13 2010-04-21 ソニー株式会社 Image photographing apparatus, image photographing method, and computer program
US8041076B1 (en) * 2007-08-09 2011-10-18 Adobe Systems Incorporated Generation and usage of attractiveness scores
JP5144422B2 (en) * 2007-09-28 2013-02-13 富士フイルム株式会社 Imaging apparatus and imaging method
JP5169139B2 (en) * 2007-10-25 2013-03-27 株式会社ニコン Camera and image recording program
JP2012165407A (en) * 2007-12-28 2012-08-30 Casio Comput Co Ltd Imaging apparatus and program
US8750578B2 (en) * 2008-01-29 2014-06-10 DigitalOptics Corporation Europe Limited Detecting facial expressions in digital images
JP5040734B2 (en) * 2008-03-05 2012-10-03 ソニー株式会社 Image processing apparatus, image recording method, and program
JP2009223524A (en) * 2008-03-14 2009-10-01 Seiko Epson Corp Image processor, image processing method, and computer program for image processing
JP4508257B2 (en) 2008-03-19 2010-07-21 ソニー株式会社 Composition determination apparatus, composition determination method, and program
JP2009237611A (en) * 2008-03-25 2009-10-15 Seiko Epson Corp Detection of face area from target image
JP5493284B2 (en) * 2008-03-31 2014-05-14 カシオ計算機株式会社 Imaging apparatus, imaging method, and program
JP5004876B2 (en) * 2008-06-03 2012-08-22 キヤノン株式会社 Imaging device
US8477207B2 (en) 2008-06-06 2013-07-02 Sony Corporation Image capturing apparatus, image capturing method, and computer program
JP5251547B2 (en) 2008-06-06 2013-07-31 ソニー株式会社 Image photographing apparatus, image photographing method, and computer program
US20090324022A1 (en) * 2008-06-25 2009-12-31 Sony Ericsson Mobile Communications Ab Method and Apparatus for Tagging Images and Providing Notifications When Images are Tagged
JP5136245B2 (en) * 2008-07-04 2013-02-06 カシオ計算機株式会社 Image composition apparatus, image composition program, and image composition method
JP5482654B2 (en) * 2008-07-17 2014-05-07 日本電気株式会社 Imaging apparatus, imaging method, and program
JP5072757B2 (en) * 2008-07-24 2012-11-14 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP5195120B2 (en) * 2008-07-25 2013-05-08 株式会社ニコン Digital camera
JP5386880B2 (en) * 2008-08-04 2014-01-15 日本電気株式会社 Imaging device, mobile phone terminal, imaging method, program, and recording medium
KR20100027700A (en) * 2008-09-03 2010-03-11 삼성디지털이미징 주식회사 Photographing method and apparatus
JP4720880B2 (en) 2008-09-04 2011-07-13 ソニー株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
JP4702418B2 (en) * 2008-09-09 2011-06-15 カシオ計算機株式会社 Imaging apparatus, image region existence determination method and program
JP4561914B2 (en) * 2008-09-22 2010-10-13 ソニー株式会社 Operation input device, operation input method, program
US20110199499A1 (en) * 2008-10-14 2011-08-18 Hiroto Tomita Face recognition apparatus and face recognition method
KR20100056280A (en) * 2008-11-19 2010-05-27 삼성전자주식회사 Apparatus for processing digital image and method for controlling thereof
JP4927808B2 (en) 2008-11-26 2012-05-09 京セラ株式会社 Equipment with camera
JP4659088B2 (en) * 2008-12-22 2011-03-30 京セラ株式会社 Mobile device with camera
JP5407338B2 (en) * 2009-01-07 2014-02-05 カシオ計算機株式会社 Imaging apparatus and computer program
JP4770929B2 (en) 2009-01-14 2011-09-14 ソニー株式会社 Imaging apparatus, imaging method, and imaging program.
JP5293223B2 (en) * 2009-01-27 2013-09-18 株式会社ニコン Digital camera
JP2010176224A (en) * 2009-01-27 2010-08-12 Nikon Corp Image processor and digital camera
JP5359318B2 (en) * 2009-01-29 2013-12-04 株式会社ニコン Digital camera
US8125557B2 (en) * 2009-02-08 2012-02-28 Mediatek Inc. Image evaluation method, image capturing method and digital camera thereof for evaluating and capturing images according to composition of the images
JP5304294B2 (en) * 2009-02-10 2013-10-02 株式会社ニコン Electronic still camera
KR101559583B1 (en) * 2009-02-16 2015-10-12 엘지전자 주식회사 Method for processing image data and portable electronic device having camera thereof
JP5272797B2 (en) * 2009-02-24 2013-08-28 株式会社ニコン Digital camera
US20120071785A1 (en) * 2009-02-27 2012-03-22 Forbes David L Methods and systems for assessing psychological characteristics
US9558499B2 (en) * 2009-02-27 2017-01-31 The Forbes Consulting Group, Llc Methods and systems for assessing psychological characteristics
AU2010217803A1 (en) * 2009-02-27 2011-09-22 Forbes Consulting Group, Llc Methods and systems for assessing psychological characteristics
US20100225773A1 (en) * 2009-03-09 2010-09-09 Apple Inc. Systems and methods for centering a photograph without viewing a preview of the photograph
JP2010226558A (en) * 2009-03-25 2010-10-07 Sony Corp Apparatus, method, and program for processing image
KR101665130B1 (en) * 2009-07-15 2016-10-25 삼성전자주식회사 Apparatus and method for generating image including a plurality of persons
KR101661211B1 (en) * 2009-08-05 2016-10-10 삼성전자주식회사 Apparatus and method for improving face recognition ratio
JP5431083B2 (en) * 2009-09-16 2014-03-05 オリンパスイメージング株式会社 Image capturing apparatus and method for controlling image capturing apparatus
CN102088539B (en) * 2009-12-08 2015-06-03 浪潮乐金数字移动通信有限公司 Method and system for evaluating pre-shot picture quality
US10356465B2 (en) * 2010-01-06 2019-07-16 Sony Corporation Video system demonstration
JP5234833B2 (en) * 2010-01-19 2013-07-10 日本電信電話株式会社 Facial expression classifier creation apparatus, facial expression classifier creation method, facial expression recognition apparatus, facial expression recognition method, and programs thereof
JP4873086B2 (en) * 2010-02-23 2012-02-08 カシオ計算機株式会社 Imaging apparatus and program
US9767470B2 (en) 2010-02-26 2017-09-19 Forbes Consulting Group, Llc Emotional survey
TWI447658B (en) * 2010-03-24 2014-08-01 Ind Tech Res Inst Facial expression capturing method and apparatus therewith
JP5577793B2 (en) 2010-03-30 2014-08-27 ソニー株式会社 Image processing apparatus and method, and program
US8503722B2 (en) * 2010-04-01 2013-08-06 Broadcom Corporation Method and system for determining how to handle processing of an image based on motion
JP2011234002A (en) * 2010-04-26 2011-11-17 Kyocera Corp Imaging device and terminal device
JP5264831B2 (en) * 2010-06-21 2013-08-14 シャープ株式会社 Image processing apparatus, image reading apparatus, image forming apparatus, image processing method, computer program, and recording medium
JP2012010162A (en) * 2010-06-25 2012-01-12 Kyocera Corp Camera device
JP5577900B2 (en) * 2010-07-05 2014-08-27 ソニー株式会社 Imaging control apparatus, imaging control method, and program
KR101755598B1 (en) * 2010-10-27 2017-07-07 삼성전자주식회사 Digital photographing apparatus and control method thereof
US20120105663A1 (en) * 2010-11-01 2012-05-03 Cazier Robert P Haptic Feedback Response
JP5762730B2 (en) * 2010-12-09 2015-08-12 パナソニック株式会社 Human detection device and human detection method
US9848106B2 (en) * 2010-12-21 2017-12-19 Microsoft Technology Licensing, Llc Intelligent gameplay photo capture
JP5779938B2 (en) * 2011-03-29 2015-09-16 ソニー株式会社 Playlist creation device, playlist creation method, and playlist creation program
US20120281874A1 (en) * 2011-05-05 2012-11-08 Lure Yuan-Ming F Method, material, and apparatus to improve acquisition of human frontal face images using image template
JP2013021434A (en) * 2011-07-08 2013-01-31 Nec Saitama Ltd Imaging apparatus, imaging control method, and program
JP2013128183A (en) * 2011-12-16 2013-06-27 Samsung Electronics Co Ltd Imaging apparatus and imaging method
US9036069B2 (en) * 2012-02-06 2015-05-19 Qualcomm Incorporated Method and apparatus for unattended image capture
KR101231469B1 (en) * 2012-02-23 2013-02-07 인텔 코오퍼레이션 Method, apparatusfor supporting image processing, and computer-readable recording medium for executing the method
CN103813076B (en) * 2012-11-12 2018-03-27 联想(北京)有限公司 The method and electronic equipment of information processing
US20140153900A1 (en) * 2012-12-05 2014-06-05 Samsung Electronics Co., Ltd. Video processing apparatus and method
US9008429B2 (en) * 2013-02-01 2015-04-14 Xerox Corporation Label-embedding for text recognition
WO2014127333A1 (en) * 2013-02-15 2014-08-21 Emotient Facial expression training using feedback from automatic facial expression recognition
CN105050673B (en) * 2013-04-02 2019-01-04 日本电气方案创新株式会社 Facial expression scoring apparatus, dancing scoring apparatus, Caraok device and game device
JP5711296B2 (en) * 2013-05-07 2015-04-30 オリンパス株式会社 Imaging method and imaging apparatus
CN104346601B (en) * 2013-07-26 2018-09-18 佳能株式会社 Object identifying method and equipment
CN105917305B (en) * 2013-08-02 2020-06-26 埃莫蒂安特公司 Filtering and shutter shooting based on image emotion content
US9443307B2 (en) * 2013-09-13 2016-09-13 Intel Corporation Processing of images of a subject individual
JP6180285B2 (en) * 2013-11-06 2017-08-16 キヤノン株式会社 Imaging apparatus, imaging method, and program
JP6481866B2 (en) * 2014-01-28 2019-03-13 ソニー株式会社 Information processing apparatus, imaging apparatus, information processing method, and program
USD760261S1 (en) * 2014-06-27 2016-06-28 Opower, Inc. Display screen of a communications terminal with graphical user interface
US9571725B2 (en) * 2014-09-02 2017-02-14 Htc Corporation Electronic device and image capture method thereof
JP6428066B2 (en) * 2014-09-05 2018-11-28 オムロン株式会社 Scoring device and scoring method
CN105488516A (en) * 2014-10-08 2016-04-13 中兴通讯股份有限公司 Image processing method and apparatus
US9473687B2 (en) 2014-12-23 2016-10-18 Ebay Inc. Modifying image parameters using wearable device input
US9715622B2 (en) 2014-12-30 2017-07-25 Cognizant Technology Solutions India Pvt. Ltd. System and method for predicting neurological disorders
US9626594B2 (en) 2015-01-21 2017-04-18 Xerox Corporation Method and system to perform text-to-image queries with wildcards
TWI658315B (en) * 2015-01-26 2019-05-01 鴻海精密工業股份有限公司 System and method for taking pictures
US11049119B2 (en) 2015-06-19 2021-06-29 Wild Blue Technologies. Inc. Apparatus and method for dispensing a product in response to detection of a selected facial expression
US10607063B2 (en) * 2015-07-28 2020-03-31 Sony Corporation Information processing system, information processing method, and recording medium for evaluating a target based on observers
US20170053190A1 (en) * 2015-08-20 2017-02-23 Elwha Llc Detecting and classifying people observing a person
US10534955B2 (en) * 2016-01-22 2020-01-14 Dreamworks Animation L.L.C. Facial capture analysis and training system
JP6778006B2 (en) * 2016-03-31 2020-10-28 株式会社 資生堂 Information processing equipment, programs and information processing systems
EP3229429B1 (en) * 2016-04-08 2021-03-03 Institut Mines-Télécom Methods and devices for symbols detection in multi antenna systems
USD859452S1 (en) * 2016-07-18 2019-09-10 Emojot, Inc. Display screen for media players with graphical user interface
US10282599B2 (en) * 2016-07-20 2019-05-07 International Business Machines Corporation Video sentiment analysis tool for video messaging
JP6697356B2 (en) * 2016-09-13 2020-05-20 Kddi株式会社 Device, program and method for identifying state of specific object among predetermined objects
JP6587995B2 (en) * 2016-09-16 2019-10-09 富士フイルム株式会社 Image display control system, image display control method, and image display control program
CN106419336B (en) * 2016-09-29 2017-11-28 浙江农林大学 LED mood display and excitation intelligent mirror and use method thereof
DE112016007416T5 (en) * 2016-11-07 2019-07-25 Motorola Solutions, Inc. Guardian system in a network to improve situational awareness in an incident
WO2018084726A1 (en) * 2016-11-07 2018-05-11 Motorola Solutions, Inc. Guardian system in a network to improve situational awareness of a crowd at an incident
CN110268703A (en) * 2017-03-15 2019-09-20 深圳市大疆创新科技有限公司 Imaging method and imaging control apparatus
CN107301390A (en) * 2017-06-16 2017-10-27 广东欧珀移动通信有限公司 Shoot reminding method, device and terminal device
JP6821528B2 (en) 2017-09-05 2021-01-27 本田技研工業株式会社 Evaluation device, evaluation method, noise reduction device, and program
US10475222B2 (en) 2017-09-05 2019-11-12 Adobe Inc. Automatic creation of a group shot image from a short video clip using intelligent select and merge
US11048745B2 (en) 2018-06-22 2021-06-29 International Business Machines Corporation Cognitively identifying favorable photograph qualities
US10972656B2 (en) 2018-06-22 2021-04-06 International Business Machines Corporation Cognitively coaching a subject of a photograph
CN112585940B (en) * 2018-10-08 2023-04-04 谷歌有限责任公司 System and method for providing feedback for artificial intelligence based image capture devices
KR20200040625A (en) 2018-10-10 2020-04-20 삼성전자주식회사 An electronic device which is processing user's utterance and control method thereof
JP7401968B2 (en) * 2018-12-07 2023-12-20 ルネサスエレクトロニクス株式会社 Shooting control device, shooting system, and shooting control method
JP7156242B2 (en) * 2019-10-18 2022-10-19 トヨタ自動車株式会社 Information processing device, program and control method
US11388334B2 (en) 2020-05-22 2022-07-12 Qualcomm Incorporated Automatic camera guidance and settings adjustment
CN112843731B (en) * 2020-12-31 2023-05-12 上海米哈游天命科技有限公司 Shooting method, shooting device, shooting equipment and storage medium

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659624A (en) * 1995-09-01 1997-08-19 Fazzari; Rodney J. High speed mass flow food sorting appartus for optically inspecting and sorting bulk food products
JP2001043345A (en) * 1999-07-28 2001-02-16 Mitsubishi Electric Corp Expression recognition device, dosing control system using the same, awaking level evaluation system and restoration evaluation system
TW546874B (en) 2000-10-11 2003-08-11 Sony Corp Robot apparatus and its control method with a function of preventing an image from being stolen
JP2003219218A (en) 2002-01-23 2003-07-31 Fuji Photo Film Co Ltd Digital camera
JP4139942B2 (en) 2002-02-01 2008-08-27 富士フイルム株式会社 Camera with light emitter
EP1359536A3 (en) 2002-04-27 2005-03-23 Samsung Electronics Co., Ltd. Face recognition method and apparatus using component-based face descriptor
JP2004046591A (en) 2002-07-12 2004-02-12 Konica Minolta Holdings Inc Picture evaluation device
JP4292837B2 (en) 2002-07-16 2009-07-08 日本電気株式会社 Pattern feature extraction method and apparatus
KR20050085583A (en) * 2002-12-13 2005-08-29 코닌클리케 필립스 일렉트로닉스 엔.브이. Expression invariant face recognition
JP2004294498A (en) 2003-03-25 2004-10-21 Fuji Photo Film Co Ltd Automatic photographing system
EP2955662B1 (en) 2003-07-18 2018-04-04 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method
JP4517633B2 (en) 2003-11-25 2010-08-04 ソニー株式会社 Object detection apparatus and method
KR100543707B1 (en) 2003-12-04 2006-01-20 삼성전자주식회사 Face recognition method and apparatus using PCA learning per subgroup
JP4254529B2 (en) * 2003-12-25 2009-04-15 カシオ計算機株式会社 Imaging equipment with monitor
JP2005242567A (en) * 2004-02-25 2005-09-08 Canon Inc Movement evaluation device and method
WO2006040761A2 (en) 2004-10-15 2006-04-20 Oren Halpern A system and a method for improving the captured images of digital still cameras
CN1328908C (en) * 2004-11-15 2007-07-25 北京中星微电子有限公司 A video communication method
KR100718124B1 (en) * 2005-02-04 2007-05-15 삼성전자주식회사 Method and apparatus for displaying the motion of camera
US7667736B2 (en) * 2005-02-11 2010-02-23 Hewlett-Packard Development Company, L.P. Optimized string table loading during imaging device initialization
DE602006009191D1 (en) 2005-07-26 2009-10-29 Canon Kk Imaging device and method
JP4197019B2 (en) 2006-08-02 2008-12-17 ソニー株式会社 Imaging apparatus and facial expression evaluation apparatus


Also Published As

Publication number Publication date
US8416999B2 (en) 2013-04-09
US20110216217A1 (en) 2011-09-08
CN101867713B (en) 2013-03-13
CN101163199A (en) 2008-04-16
US8416996B2 (en) 2013-04-09
US8406485B2 (en) 2013-03-26
CN101877766B (en) 2013-03-13
US8238618B2 (en) 2012-08-07
JP2008042319A (en) 2008-02-21
US8260041B2 (en) 2012-09-04
US20110216216A1 (en) 2011-09-08
US20110216942A1 (en) 2011-09-08
JP4197019B2 (en) 2008-12-17
US20080037841A1 (en) 2008-02-14
KR101401165B1 (en) 2014-05-29
CN101867712B (en) 2013-03-13
US8260012B2 (en) 2012-09-04
CN101163199B (en) 2012-06-13
CN101867712A (en) 2010-10-20
CN101877766A (en) 2010-11-03
TWI343208B (en) 2011-06-01
CN101867713A (en) 2010-10-20
KR20080012231A (en) 2008-02-11
US20110216218A1 (en) 2011-09-08
US20110216943A1 (en) 2011-09-08

Similar Documents

Publication Publication Date Title
TWI343208B (en)
JP4315234B2 (en) Imaging apparatus and facial expression evaluation apparatus
KR101795601B1 (en) Apparatus and method for processing image, and computer-readable storage medium
JP5617603B2 (en) Display control apparatus, display control method, and program
US20120242818A1 (en) Method for operating electronic device and electronic device using the same
JP5169139B2 (en) Camera and image recording program
JP4431547B2 (en) Image display control device, control method therefor, and control program therefor
CN101753822A (en) Imaging apparatus and image processing method used in imaging device
JP2007310813A (en) Image retrieving device and camera
KR20100027700A (en) Photographing method and apparatus
US20100079613A1 (en) Image capturing apparatus, image capturing method, and computer program
JP5030022B2 (en) Imaging apparatus and program thereof
US9648195B2 (en) Image capturing apparatus and image capturing method
JP2012257112A (en) Imaging apparatus and program
JP5272773B2 (en) Image processing apparatus, image processing method, and program
JP5109779B2 (en) Imaging device
JP5093031B2 (en) Imaging apparatus and program
JP2010081562A (en) Imaging device, method, and program
JP4254467B2 (en) Digital camera
JP2008167028A (en) Imaging apparatus
JP2006222900A (en) Imaging device and control program therefor
JP2011197999A (en) Image processing apparatus and image processing method
JP2005117531A (en) Digital camera
JP2010187108A (en) Image layout apparatus, and program
JP2009302835A (en) Image processor and image processing method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees