TWI545947B - Display device with image capture and analysis module - Google Patents

Display device with image capture and analysis module

Info

Publication number
TWI545947B
TWI545947B TW101112362A
Authority
TW
Taiwan
Prior art keywords
user
display
face
image
component
Prior art date
Application number
TW101112362A
Other languages
Chinese (zh)
Other versions
TW201306573A (en)
Inventor
圖馬索 帕利堤
亞維納許 艾普路瑞
哈瑞 查卡瑞維路胡拉
賽利亞 莫瑞利斯
蘇瑪特 梅瑞
Original Assignee
南昌歐菲光電技術有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/082,568 external-priority patent/US8913005B2/en
Priority claimed from US13/220,612 external-priority patent/US20130050395A1/en
Priority claimed from US13/294,977 external-priority patent/US20130057553A1/en
Priority claimed from US13/294,964 external-priority patent/US20130057573A1/en
Application filed by 南昌歐菲光電技術有限公司 filed Critical 南昌歐菲光電技術有限公司
Publication of TW201306573A publication Critical patent/TW201306573A/en
Application granted granted Critical
Publication of TWI545947B publication Critical patent/TWI545947B/en

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)

Description

具有影像擷取及分析模組之顯示裝置 Display device with image capture and analysis module

本發明係關於顯示裝置,且特定而言係關於具有影像擷取及分析組件之顯示器。 The present invention relates to display devices, and in particular to displays having image capture and analysis components.

This application claims the benefit of priority to the following U.S. patent applications: (1) filed August 29, 2011, Serial No. 13/220,612; (2) filed November 11, 2011, Serial No. 13/294,977; (3) filed September 2, 2011, Serial No. 61/530,872; (4) filed November 11, 2011, Serial No. 13/294,964; (5) filed September 2, 2011, Serial No. 61/530,867; and (6) filed April 8, 2011, Serial No. 13/082,568.

People and businesses have long expected to get more from their handheld mobile smart phone devices. Audio-only conference calls have been available and widely used on mobile phones for many years. Video conferencing on mobile devices is still in its infancy. It is desirable to have a mobile device that assists a user, while on the move, in understanding, interacting with, planning or reviewing the details of an activity and other social and professional requirements. It is also desirable to have a mobile device that can provide a mobile video conferencing experience of higher quality than those currently available. Figure 1 illustrates an example of a mobile video conferencing environment. The mobile smart phone display of Figure 1 contains the facial images of two participants in a video conference. A significant portion of the display contains the background image of each participant's location, which is not needed to conduct the video conference as desired. In particular, the participants may not even want background information to be transmitted to the other people in the call. U.S. Patent Applications 12/883,183, 12/883,191 and 12/883,192, assigned to the same assignee and incorporated herein by reference, advantageously address this problem.

With the right tools, high-quality video conferencing will advantageously become available to anyone on the move carrying a smart phone device. The inventors of the present application have recognized that there is also a particular need for a high-quality mobile video conferencing solution for no-light, low-light and/or uneven lighting conditions, and for situations in which the person holding the mobile device, and/or the vehicle, is in motion relative to the background.

A mobile video conferencing participant is very likely to be in an environment with low or uneven lighting, because if the participant had the opportunity to use a pre-set video conferencing environment for the call, he or she would probably not choose to use a smart phone for it. The display illustrated in Figure 2 contains, at the lower left corner, a face that is clearly neither uniformly nor sufficiently illuminated. Uneven and low-light conditions can cause undesirable effects in the facial images of users displayed in a video conference, because it is usually small details, such as a person's smile or communicative facial expression, that make video conferencing so desirable, yet such small details are often difficult to resolve under low or uneven light. Accordingly, embodiments that provide improved rendering of participants' faces displayed in a mobile video conference under such low or uneven lighting conditions are set forth below.

A mobile video conferencing participant may also be walking, driving or otherwise moving during a call, again because if the participant were in a static environment, such as a conference room, office or computer desk, or even a seated position with a laptop computer, with specially pre-arranged lighting, a comfortable seat, and a webcam fixed in place or on a desk, he or she would probably not use a smart phone for the call. Although the participant tries to hold the phone still relative to his or her face, the background will typically be moving rapidly. Accordingly, embodiments are set forth herein that make efficient use of the limited computing resources of a mobile smart phone environment by focusing on the participant and reducing or eliminating the processing and/or transmission of the background image.

Electronic display devices are commonly used as televisions or with computers to display two-dimensional images to a user. In the computing context, the electronic display device provides visual interaction with the computer's operating system.

In most cases, a user provides input to a computer by using an external input device, most commonly a combination of a keyboard and a mouse or trackball. Recently, however, touch screen devices (e.g., capacitive or resistive touch screens) built into electronic displays have become popular as an alternative means for providing input to a computing device or television display.

Electronic displays have evolved from large, heavy cathode ray tube (CRT) monitors to lighter and thinner liquid crystal displays (LCDs) and organic light emitting diode (OLED) displays. Many displays now incorporate additional features, such as cameras and Universal Serial Bus (USB) ports, to improve the computing or television experience.

A computer user typically spends much of his or her time interacting with a computer. For example, an office worker may spend hours in front of a display driven by a desktop or other computer. If the user uses the computer in an ergonomically unsuitable manner, such as viewing the display from a non-optimal position and/or under other adverse conditions that could be corrected through the user's behavior, the user's health can be adversely affected. Various techniques have been proposed for ensuring ergonomically suitable computer use, but there remains room for improvement.

The present invention provides a handheld, camera-enabled video conferencing device that includes a housing configured to be held in a user's hand. A processor and memory are contained within the housing. The memory has embedded therein code for programming the processor, including a video conferencing component, a face detection component, a face recognition component and an associated image processing component. The memory further contains face data associated with one or more specific user identities. The device also includes a display built into the housing and configured to be viewable by a user during a video conference. A camera is also built into the housing and configured to capture images of the user while the user is viewing the display. The camera includes an infrared (IR) light source and an IR-sensitive image sensor for capturing images of the user under low-light or uneven-light conditions (or both), so that the face detection component can detect the user's face. The face recognition component is configured to associate a specific identity of a user with a detected face. The image processing component is configured to replace face data of the detected face with face data stored in the memory in accordance with the specific identity of the user, in order to enhance an image of the detected face captured under low-light or uneven-light conditions (or both) and transmit it to a remote video conference participant.

該面部資料可包含色度資料或照度資料或兩者。該面部偵測組件或面部辨識組件或兩者可包含經訓練以在低光或不均勻光狀況(或兩者)下分別偵測面部或辨識面部之分類器。該IR光源可包含耦合至外殼且經安置以在一視訊會議期間照明使用者之面部之一或多個IR LED。該記憶體可含有一面部追蹤組件以追蹤所偵測面部以讓裝置在視訊會議期間能傳輸使用者面部之近似連續視訊影像。 The facial data may include chromaticity data or illuminance data or both. The face detection component or face recognition component or both may include a classifier trained to detect a face or recognize a face under low light or uneven light conditions (or both). The IR light source can include one or more IR LEDs coupled to the housing and disposed to illuminate a user's face during a video conference. The memory can include a face tracking component to track the detected face to enable the device to transmit an approximately continuous video image of the user's face during the video conference.

The memory can contain a component to estimate a distance to the user's face and to control an output power of the IR light source based on the estimated distance. The distance estimate can be determined using autofocus data and/or can be based on a detected size of the user's face. The memory can contain a component to determine a position of the user's face relative to the device and to control a direction in which the IR light source illuminates the user's face.
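For illustration only, the following Python sketch shows one way such distance-dependent IR output control could be arranged. It is not the disclosed implementation: the pinhole-style distance estimate from the detected face width, the reference distance and the power limits are all assumed values introduced for the example.

```python
# Illustrative sketch: estimate subject distance from a detected face size and
# scale the IR LED drive power accordingly (all constants are assumed values).

ASSUMED_FACE_WIDTH_CM = 15.0      # typical adult face width (assumption)
ASSUMED_FOCAL_LENGTH_PX = 600.0   # camera focal length in pixels (assumption)
REFERENCE_DISTANCE_CM = 40.0      # distance needing roughly half power (assumption)
MAX_POWER_MW = 100.0              # maximum LED output (assumption)

def estimate_distance_cm(face_width_px: float) -> float:
    """Pinhole-camera estimate: distance ~ real_width * focal_length / pixel_width."""
    return ASSUMED_FACE_WIDTH_CM * ASSUMED_FOCAL_LENGTH_PX / face_width_px

def ir_led_power_mw(face_width_px: float) -> float:
    """Scale output power with the square of the estimated distance, clamped to
    the LED's maximum, so a nearby face is not over-illuminated."""
    distance = estimate_distance_cm(face_width_px)
    power = MAX_POWER_MW * (distance / REFERENCE_DISTANCE_CM) ** 2
    return min(power, MAX_POWER_MW)

if __name__ == "__main__":
    for width_px in (300.0, 220.0, 150.0):   # a closer face appears wider in pixels
        print(width_px, "px ->", round(ir_led_power_mw(width_px), 1), "mW")
```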

本發明提供另一種手持式具備相機能力之視訊會議裝置,其包含經組態以固持於一使用者之一或兩個手中之一外殼,及含納於該外殼內之一處理器及記憶體。該記憶體其中嵌入有用於程式化該處理器之碼,包含視訊會議組件及前景/背景分割組件或其組合。一顯示器係內建於該外殼中且經組態以可由一使用者在一視訊會議期間觀看。一相機係內建於該外殼中且經組態以擷取使用者在觀看該顯示器時之影像。一通信介面將音訊/視覺資料傳輸至一遠端視訊會議參與者。該前景/背景分割組件經組態以藉由辨別前景對背景資料之不同運動向量來提取不具有背景資料之使用者身份資料。 The present invention provides another handheld camera-capable video conferencing device that includes a housing configured to be held in one or both of a user's hands, and a processor and memory contained within the housing . The memory embeds a code for programming the processor, including a video conferencing component and a foreground/background segmentation component or a combination thereof. A display is built into the housing and is configured to be viewable by a user during a video conference. A camera is built into the housing and configured to capture images of the user while viewing the display. A communication interface transmits audio/visual data to a remote video conferencing participant. The foreground/background segmentation component is configured to extract user identity data without background material by identifying different motion vectors of the foreground versus background material.

使用者身份資料可包含面部資料。前景/背景分割組件可經校準以匹配特定使用者身份資料作為前景資料。 User identity data can include facial information. The foreground/background segmentation component can be calibrated to match specific user identity data as foreground material.

The present invention provides another handheld, camera-enabled video conferencing device that includes a housing configured to be held in one or both of a user's hands, and a processor and memory contained within the housing, the memory having embedded therein code for programming the processor, including a video conferencing component and a foreground/background segmentation component, or a combination thereof. A display is built into the housing and configured to be viewable by a user during a video conference. A camera is built into the housing and configured to capture images of the user while the user is viewing the display. A communication interface transmits audio/visual data to a remote video conference participant. The foreground/background segmentation component is configured to extract user identity data without background data by matching detected face data as foreground data.

The camera can include an infrared (IR) light source and an IR-sensitive image sensor for capturing images of the user under low-light or uneven-light conditions (or both), so that a face detection component can detect the user's face. The image processing component can replace face data of the detected face with face data stored in the memory in accordance with the specific identity of the user, in order to enhance an image of the detected face captured under low-light or uneven-light conditions (or both) and transmit it to a remote video conference participant.

該記憶體可含有一面部追蹤組件以追蹤所偵測面部以讓裝置在視訊會議期間能傳輸該使用者之面部之近似連續視訊影像。 The memory can include a face tracking component to track the detected face to enable the device to transmit an approximately continuous video image of the user's face during the video conference.

The specific user identity data can include an image of a detected face. That data can include a neck, part of a torso or a shirt, or one or both arms, or portions or combinations thereof.

該記憶體可含有與一或多個特定使用者身份相關聯之面部資料,以使得基於匹配記憶體中之面部資料來提取該特定使用者身份資料。 The memory can contain facial information associated with one or more particular user identities such that the particular user identity data is extracted based on the facial data in the matching memory.

The present invention provides a method of dynamically changing a display parameter, comprising: detecting a user parameter of a user positioned in front of an electronic display, and automatically adjusting a user preference on the display, or displaying an indicator, based on the detected user parameter.

In certain embodiments, the user parameter is an age of the user.

In another embodiment, the amount of displayable content is increased when the user is older. In another embodiment, the amount of displayable content is reduced when the user is a child or a minor.

In certain embodiments, privacy settings are increased when the user is a child or a minor. In other embodiments, privacy settings are reduced when the user is an adult or an elderly person.

在某些實施例中,該使用者參數係自使用者至電子顯示器之一距離。在一項實施例中,在該距離小於一最佳距離時顯示一距離指示符。 In some embodiments, the user parameter is a distance from the user to the electronic display. In one embodiment, a distance indicator is displayed when the distance is less than an optimal distance.

在一項實施例中,該使用者參數係使用者定位於顯示器前面的一時間。在某些實施例中,在該時間大於一預定時間限制時顯示一時間指示符。 In one embodiment, the user parameter is a time when the user is positioned in front of the display. In some embodiments, a time indicator is displayed when the time is greater than a predetermined time limit.

在一項實施例中,該使用者參數係一頭部角度。在某些實施例中,在該頭部角度不適合時顯示一人體工學指示符。 In one embodiment, the user parameter is a head angle. In some embodiments, an ergonomic indicator is displayed when the head angle is not suitable.

In certain embodiments, the user parameter is an ambient light level. In other embodiments, the user parameters are an ambient light level and a pupil closure percentage.

In one embodiment, if the ambient light level is low and the pupil closure percentage is high, the display is automatically brightened. In another embodiment, if the ambient light level is low and the pupil closure percentage is low, the display is automatically dimmed. In an alternative embodiment, if the ambient light level is high and the pupil closure percentage is high, the display is automatically brightened. In yet another embodiment, if the ambient light level is high and the pupil closure percentage is low, the display is automatically dimmed.
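The four cases above can be written as an explicit decision table, as in the following sketch. The thresholds for "low" ambient light and "high" pupil closure, and the adjustment step, are assumptions made only for the example.

```python
# Illustrative sketch of the four brightness rules above (assumed thresholds).

AMBIENT_LOW_LUX = 50.0      # below this, ambient light is treated as "low" (assumption)
PUPIL_CLOSED_HIGH = 0.5     # above this fraction, closure is treated as "high" (assumption)

def adjust_brightness(current: float, ambient_lux: float, pupil_closure: float,
                      step: float = 0.1) -> float:
    """Return a new display brightness in [0.0, 1.0] following the four embodiments."""
    ambient = "low" if ambient_lux < AMBIENT_LOW_LUX else "high"
    closure = "high" if pupil_closure > PUPIL_CLOSED_HIGH else "low"
    # (ambient, closure) -> +1 brighten / -1 dim, mirroring the four cases above.
    action = {("low", "high"): +1, ("low", "low"): -1,
              ("high", "high"): +1, ("high", "low"): -1}[(ambient, closure)]
    return max(0.0, min(1.0, current + action * step))

print(adjust_brightness(0.6, ambient_lux=20.0, pupil_closure=0.7))   # -> 0.7 (brighten)
print(adjust_brightness(0.6, ambient_lux=300.0, pupil_closure=0.2))  # -> 0.5 (dim)
```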

As in the method of claim 13, the display is automatically dimmed or brightened based on the detected ambient light level.

在某些實施例中,該使用者參數係一未知使用者。 In some embodiments, the user parameter is an unknown user.

In one embodiment, the display is dimmed or turned off when an unknown user is detected. In another embodiment, the display is locked and a security indicator is shown on the display when an unknown user is detected. In certain embodiments, the security indicator informs the unknown user that access to the display has been denied.

在某些實施例中,偵測步驟包括藉助安置於電子顯示器上或附近之一感測器來偵測使用者參數。在一項實施例中,該感測器包括一相機。 In some embodiments, the detecting step includes detecting user parameters by means of a sensor disposed on or near the electronic display. In one embodiment, the sensor includes a camera.

在某些實施例中,該電子顯示器包括一電腦監視器。在其他實施例中,該電子顯示器包括一蜂巢式電話。 In some embodiments, the electronic display includes a computer monitor. In other embodiments, the electronic display includes a cellular telephone.

在一項實施例中,自動調整步驟包括藉助一控制器來處理使用者參數並基於所偵測之使用者參數來自動調整顯示器上之使用者偏好或顯示指示符。 In one embodiment, the automatic adjustment step includes processing the user parameters with a controller and automatically adjusting user preferences or display indicators on the display based on the detected user parameters.

亦提供一種電子顯示器,其包括:感測器,其經組態以偵測定位於該顯示器前面的一使用者之一使用者參數;一螢幕,其經組態以將文字 或影像顯示給使用者;及一處理器,其經組態以基於所偵測之使用者參數來調整一使用者偏好或顯示一指示符。 There is also provided an electronic display comprising: a sensor configured to detect a user parameter of a user positioned in front of the display; a screen configured to place text Or the image is displayed to the user; and a processor configured to adjust a user preference or display an indicator based on the detected user parameter.

在一項實施例中,該使用者參數係年齡。 In one embodiment, the user parameter is age.

在另一實施例中,該使用者參數係自使用者至電子顯示器之一距離。 In another embodiment, the user parameter is a distance from the user to the electronic display.

在某些實施例中,該使用者參數係使用者之一頭部角度。 In some embodiments, the user parameter is one of the user's head angles.

在一項實施例中,該使用者參數係一未知使用者。 In one embodiment, the user parameter is an unknown user.

在某些實施例中,該使用者參數係一周圍光位準。 In some embodiments, the user parameter is a ambient light level.

在某些實施例中,該感測器包括一相機。 In some embodiments, the sensor includes a camera.

在一項實施例中,該電子顯示器包括一電腦監視器。 In one embodiment, the electronic display includes a computer monitor.

在另一實施例中,該電子顯示器包括一蜂巢式電話。 In another embodiment, the electronic display includes a cellular telephone.

在某些實施例中,該電子顯示器包括一平板電腦。 In some embodiments, the electronic display includes a tablet.

The present invention provides a method of dynamically adjusting a display parameter, comprising: determining, by means of a sensor, whether a user's face is positioned in front of an electronic display; if the user's face is not positioned in front of the electronic display, monitoring, by means of the sensor, for the user's face for a predetermined period of time; and if the user's face is not positioned in front of the electronic display during the predetermined period of time, initiating a power saving routine on the electronic display.
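A minimal sketch of this presence-based power-saving flow is given below. The two-stage dim-then-power-down sequence, the 30-second default timeout and the callback names are assumptions for illustration, not details from the disclosure.

```python
# Illustrative sketch: dim, then power down, the display if no face is detected
# for a predetermined period (face_present, dim_display and power_down are
# assumed callbacks supplied by the surrounding system).
import time

def monitor_presence(face_present, dim_display, power_down,
                     timeout_s: float = 30.0, poll_s: float = 1.0) -> None:
    last_seen = time.monotonic()
    dimmed = False
    while True:
        if face_present():
            last_seen = time.monotonic()
            dimmed = False
        else:
            absent = time.monotonic() - last_seen
            if absent >= timeout_s and not dimmed:
                dim_display()          # first stage of the power saving routine
                dimmed = True
            if absent >= 2 * timeout_s:
                power_down()           # second stage: power the display down
                return
        time.sleep(poll_s)

if __name__ == "__main__":
    # Toy run with no face ever present and very short timeouts.
    monitor_presence(face_present=lambda: False,
                     dim_display=lambda: print("dim"),
                     power_down=lambda: print("power down"),
                     timeout_s=0.2, poll_s=0.05)
```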

在某些實施例中,該電力節省常式包括調暗顯示器。 In some embodiments, the power saving routine includes dimming the display.

在其他實施例中,該電力節省常式包括使顯示器斷電。 In other embodiments, the power saving routine includes powering down the display.

In certain embodiments, after the display is powered down, the method includes intermittently powering on the sensor of the electronic display to monitor for the user's body.

Certain embodiments of the method further include: determining, by means of the sensor, whether the user's eyes are gazing toward the electronic display; if the user's eyes are not gazing toward the electronic display, monitoring, by means of the sensor, during a predetermined period of time, for the user's eyes gazing toward the display; and initiating the power saving routine on the electronic display if the user's eyes are not gazing toward the electronic display during the predetermined period of time.

在某些實施例中,該預定時間段係使用者可調整的。 In some embodiments, the predetermined period of time is user adjustable.

在其他實施例中,一調暗百分比係使用者可調整的。 In other embodiments, a darkening percentage is user adjustable.

The present invention provides an electronic display comprising: a sensor configured to detect the face of a user positioned in front of the display; and a processor configured to implement a power saving routine if the user's face is not positioned in front of the display during a predetermined period of time.

在一項實施例中,該感測器包括一相機。 In one embodiment, the sensor includes a camera.

在其他實施例中,該電子顯示器包括一電腦監視器。 In other embodiments, the electronic display includes a computer monitor.

在某些實施例中,該電子顯示器包括一蜂巢式電話。 In some embodiments, the electronic display includes a cellular telephone.

The present invention provides a method of dynamically changing a display parameter, comprising: detecting a user parameter of a user positioned in front of an electronic display, and automatically adjusting a font size of text on the display based on the detected user parameter.

In certain embodiments, the user parameter is an age of the user.

In one embodiment, the font size is increased when the user is older. In another embodiment, the font size is reduced when the user is a child or a minor.

在某些實施例中,該使用者參數係自使用者至電子顯示器之一距離。 In some embodiments, the user parameter is a distance from the user to the electronic display.

在一項實施例中,在該距離大於一最佳距離時增加字型大小。在另一實施例中,在該距離小於一最佳距離時減小字型大小。 In one embodiment, the font size is increased when the distance is greater than an optimal distance. In another embodiment, the font size is reduced when the distance is less than an optimal distance.

在某些實施例中,在自使用者至電子顯示器之距離改變時字型大小即時地動態改變。 In some embodiments, the font size changes dynamically on the fly as the distance from the user to the electronic display changes.

在其他實施例中,字型大小在自使用者至電子顯示器之距離變小時即時地減小,其中字型大小在自使用者至電子顯示器之距離變大時即時地增加。 In other embodiments, the font size decreases instantaneously as the distance from the user to the electronic display becomes smaller, wherein the font size increases instantaneously as the distance from the user to the electronic display becomes greater.
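A minimal sketch of this distance-driven font sizing follows. It assumes a simple linear mapping around an assumed optimal viewing distance, base size and clamping limits, which are not values taken from the disclosure.

```python
# Illustrative sketch: font size grows as the user moves away from the display
# and shrinks as the user moves closer (all constants are assumed values).
OPTIMAL_DISTANCE_CM = 60.0
BASE_FONT_PT = 12.0
MIN_FONT_PT, MAX_FONT_PT = 8.0, 36.0

def font_size_for_distance(distance_cm: float) -> float:
    size = BASE_FONT_PT * (distance_cm / OPTIMAL_DISTANCE_CM)
    return max(MIN_FONT_PT, min(MAX_FONT_PT, size))

for d in (30.0, 60.0, 120.0):
    print(f"{d:.0f} cm -> {font_size_for_distance(d):.1f} pt")
```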

在某些實施例中,該偵測步驟包括藉助安置於電子顯示器上或附近之一感測器來偵測使用者參數。在一項實施例中,該感測器包括一相機。 In some embodiments, the detecting step includes detecting user parameters by means of a sensor disposed on or near the electronic display. In one embodiment, the sensor includes a camera.

在某些實施例中,該電子顯示器包括一電腦監視器。在其他實施例中,該顯示器包括一蜂巢式電話。 In some embodiments, the electronic display includes a computer monitor. In other embodiments, the display includes a cellular telephone.

在一項實施例中,自動調整步驟包括:藉助一控制器來處理使用者參數,並基於所偵測之使用者參數來藉助該控制器自動調整顯示器上之文字之字型大小。 In one embodiment, the automatic adjustment step includes processing the user parameters by means of a controller and automatically adjusting the font size of the text on the display by means of the controller based on the detected user parameters.

The present invention also provides a method of dynamically changing a display parameter, comprising: detecting a user parameter of a user positioned in front of an electronic display, and automatically adjusting an icon size on the display based on the detected user parameter.

在某些實施例中,該使用者參數係使用者之一年齡。 In some embodiments, the user parameter is one of the ages of the user.

In one embodiment, the icon size is increased when the user is older. In another embodiment, the icon size is reduced when the user is a child or a minor.

在某些實施例中,該使用者參數係自使用者至電子顯示器之一距離。 In some embodiments, the user parameter is a distance from the user to the electronic display.

In one embodiment, the icon size is increased when the distance is greater than an optimal distance. In other embodiments, the icon size is reduced when the distance is less than an optimal distance.

在某些實施例中,該偵測步驟包括藉助安置於該電子顯示器上或附近之一感測器來偵測該使用者參數。在一項實施例中,該感測器包括一相機。 In some embodiments, the detecting step includes detecting the user parameter by means of a sensor disposed on or adjacent to the electronic display. In one embodiment, the sensor includes a camera.

在某些實施例中,該電子顯示器包括一電腦監視器。在其他實施例中,該顯示器包括一蜂巢式電話。 In some embodiments, the electronic display includes a computer monitor. In other embodiments, the display includes a cellular telephone.

在某些實施例中,該自動調整步驟包括藉助一控制器來處理使用者參數,並基於所偵測之使用者參數來藉助該控制器自動調整顯示器上之文字之字型大小。 In some embodiments, the automatic adjustment step includes processing the user parameters with a controller and automatically adjusting the font size of the text on the display by the controller based on the detected user parameters.

In other embodiments, the font size changes dynamically, in real time, as the distance from the user to the electronic display changes.

在一項實施例中,字型大小在自使用者至電子顯示器之距離變小時即時地減小,其中圖示大小在自使用者至電子顯示器之距離變大時即時地增加。 In one embodiment, the font size decreases instantaneously as the distance from the user to the electronic display becomes smaller, wherein the graphical size increases instantaneously as the distance from the user to the electronic display becomes greater.

本發明提供一種電子顯示器,其包括:一感測器,其經組態以判定定位於該顯示器前面的一使用者之一使用者參數;一螢幕,其經組態以將文字或影像顯示給使用者;及一處理器,其經組態以基於所判定使用者參數來調整文字或影像之一大小。 The present invention provides an electronic display comprising: a sensor configured to determine a user parameter of a user positioned in front of the display; a screen configured to display text or images to a user; and a processor configured to adjust a size of the text or image based on the determined user parameter.

在某些實施例中,使用者參數係年齡。 In some embodiments, the user parameter is age.

在另一實施例中,使用者參數係自使用者至電子顯示器之一距離。 In another embodiment, the user parameter is one distance from the user to the electronic display.

在某些實施例中,該感測器包括一相機。 In some embodiments, the sensor includes a camera.

在另一實施例中,該電子顯示器包括一電腦監視器。在一額外實施例中,該電子顯示器包括一蜂巢式電話。在某些實施例中,該電子顯示器包括一平板電腦。 In another embodiment, the electronic display includes a computer monitor. In an additional embodiment, the electronic display includes a cellular telephone. In some embodiments, the electronic display includes a tablet.

可使用具有一人體工學感測器之一顯示裝置,該人體工學感測器包括介接至處理硬體以獲得及分析繪示該顯示裝置之一使用者之影像資料之一成像裝置。該人體工學感測器可預組態有指示該顯示裝置之人體工學使用之資料,以使得可在最小或無使用者校準或設置之情況下分析使用者之影像。替代地,該人體工學感測器可提供影像資料以分析用於提供即時回饋,諸如在使用者之行為超出顯示裝置之一人體工學使用範圍時之警告或建議。在某些實施方案中,該人體工學感測器係與顯示裝置整合,但在其他實施方案中,可使用一單獨元件或預存成像裝置。 A display device having an ergonomic sensor can be used, the ergonomic sensor including an imaging device that interfaces to the processing hardware to obtain and analyze image data of a user of the display device. The ergonomic sensor can be pre-configured with information indicative of the ergonomic use of the display device such that the user's image can be analyzed with minimal or no user calibration or settings. Alternatively, the ergonomic sensor can provide imaging data for analysis to provide immediate feedback, such as warnings or suggestions when the user's behavior exceeds one of the ergonomic uses of the display device. In some embodiments, the ergonomic sensor is integrated with the display device, but in other embodiments, a separate component or pre-stored imaging device can be used.
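The kind of pre-configured ergonomic check described above can be illustrated with the short sketch below. The specific limits (minimum distance, head yaw/pitch, continuous-use time) and the warning texts are placeholder assumptions, not limits taken from the disclosure.

```python
# Illustrative sketch: compare observed user posture against assumed ergonomic limits.
from dataclasses import dataclass

@dataclass
class Observation:
    distance_cm: float       # estimated distance from user to display
    yaw_deg: float           # head rotation left/right relative to the display
    pitch_deg: float         # head tilt up/down relative to the display
    minutes_in_front: float  # continuous time detected in front of the display

LIMITS = {"min_distance_cm": 50.0, "max_yaw_deg": 20.0,
          "max_pitch_deg": 20.0, "max_minutes": 60.0}   # assumed placeholder values

def ergonomic_warnings(obs: Observation) -> list[str]:
    warnings = []
    if obs.distance_cm < LIMITS["min_distance_cm"]:
        warnings.append("Move farther from the display.")
    if abs(obs.yaw_deg) > LIMITS["max_yaw_deg"] or abs(obs.pitch_deg) > LIMITS["max_pitch_deg"]:
        warnings.append("Adjust your head or display angle.")
    if obs.minutes_in_front > LIMITS["max_minutes"]:
        warnings.append("Take a break from the display.")
    return warnings

print(ergonomic_warnings(Observation(40.0, 25.0, 5.0, 75.0)))
```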

This discussion of examples is not intended to limit the subject matter, but rather to provide a brief overview. Additional examples are set forth in the detailed description below. Objects and advantages of the subject matter can be determined based on a review of the description of an implementation and/or of practice in accordance with one or more teachings herein.

900‧‧‧顯示器 900‧‧‧ display

902‧‧‧螢幕 902‧‧‧ screen

904‧‧‧感測器 904‧‧‧ sensor

906‧‧‧使用者 906‧‧‧Users

1000‧‧‧顯示器 1000‧‧‧ display

1008‧‧‧提示或符號 1008‧‧‧ Tips or symbols

1108‧‧‧提示 1108‧‧‧ Tips

1210‧‧‧使用者計時器指示符或符號 1210‧‧‧User timer indicator or symbol

1306‧‧‧使用者 1306‧‧‧Users

1312‧‧‧人體工學指示符或符號 1312‧‧‧ Ergonomic indicators or symbols

1400‧‧‧顯示器 1400‧‧‧ display

1402‧‧‧螢幕 1402‧‧‧ screen

1406‧‧‧使用者 1406‧‧‧Users

1414‧‧‧第二使用者 1414‧‧‧ second user

1700‧‧‧顯示器 1700‧‧‧ display

1706‧‧‧使用者 1706‧‧‧Users

1800‧‧‧顯示器 1800‧‧‧ display

1802‧‧‧螢幕 1802‧‧‧ screen

1804‧‧‧感測器 1804‧‧‧ sensor

1806‧‧‧使用者 1806‧‧‧Users

1906‧‧‧使用者 1906‧‧‧Users

1908‧‧‧字型 1908‧‧‧ font

2006‧‧‧使用者 2006‧‧‧Users

2010‧‧‧系統圖示 2010‧‧‧System icon

2100‧‧‧顯示器 2100‧‧‧ display

2106‧‧‧使用者 2106‧‧‧Users

2108‧‧‧字型 2108‧‧‧ font

2110‧‧‧圖示 2110‧‧‧ icon

2112‧‧‧距離 2112‧‧‧Distance

2202‧‧‧人體工學感測器模組 2202‧‧‧ Ergonomic sensor module

2204‧‧‧顯示器 2204‧‧‧ display

2204A‧‧‧結構元件 2204A‧‧‧Structural components

2205‧‧‧電腦系統/電腦 2205‧‧‧Computer System/Computer

2206‧‧‧使用者 2206‧‧‧Users

2208‧‧‧感測器/影像感測裝置 2208‧‧‧Sensor/Image Sensing Device

2210‧‧‧處理元件 2210‧‧‧Processing components

2212‧‧‧輸入/輸出介面 2212‧‧‧Input/Output Interface

2214‧‧‧記憶體 2214‧‧‧ memory

2216‧‧‧人體工學分析常式 2216‧‧‧ Ergonomic analysis routine

2217‧‧‧人體工學分析常式 2217‧‧‧ Ergonomic analysis routine

2218‧‧‧回饋訊息/警告訊息 2218‧‧‧Feedback/Warning Message

2220‧‧‧記憶體 2220‧‧‧ memory

2302‧‧‧人體工學感測器模組 2302‧‧‧ Ergonomic sensor module

2304‧‧‧顯示器 2304‧‧‧Display

2402‧‧‧人體工學感測器模組 2402‧‧‧ Ergonomic sensor module

2404‧‧‧顯示器 2404‧‧‧Display

2405‧‧‧電腦 2405‧‧‧ computer

Figure 1 illustrates an example of a mobile video conferencing environment.

圖2圖解說明包含在低光狀況(例如,30勒克司)下拍攝之一面部之一影像之一實例。 Figure 2 illustrates an example of one of the images of one of the faces taken under low light conditions (e.g., 30 lux).

圖3圖解說明具有一或多個紅外光發射器(例如,一圈紅外光發射器)之一手持式裝置之一實例。 3 illustrates an example of a handheld device having one or more infrared light emitters (eg, a circle of infrared light emitters).

Figure 4 illustrates a face detected by means of a single infrared light emitter at 40 cm, without any external visible light source (that is, without the infrared light emitter the face would be completely dark, at 0 lux).

Figure 5 illustrates an example of an image taken under a low-light condition (e.g., 30 lux), in which a patch of skin tone from a calibration image taken at a higher light level has been painted in.

圖6圖解說明在比圖5之影像高的光位準處、且理想地在一最佳光位準處拍攝之一校準影像之一實例。 Figure 6 illustrates an example of one of the calibration images taken at a higher light level than the image of Figure 5, and ideally at an optimal light level.

圖7A至圖7B圖解說明使用一手持式相機拍攝之兩個影像之一序列,該兩個影像具有不同背景但在兩者中具有由類似輪廓形狀指示之類似面部位置。 7A-7B illustrate a sequence of two images taken using a handheld camera having different backgrounds but having similar facial positions indicated by similar contour shapes in both.

圖8A至圖8B圖解說明圖7A至圖7B之兩個影像之序列,但這次具有指示一實例性行動視訊會議環境中之背景對前景面部物件之運動方向及量值之運動向量箭頭。 Figures 8A-8B illustrate a sequence of two images of Figures 7A-7B, but this time with motion vector arrows indicating the direction and magnitude of motion of the background to the foreground facial objects in an exemplary mobile video conferencing environment.

圖9係在一顯示器之視野中之一使用者之一圖解。 Figure 9 is an illustration of one of the users in the field of view of a display.

圖10係在一顯示器之視野中之一兒童使用者之一圖解。 Figure 10 is an illustration of one of the children's users in the field of view of a display.

圖11A至圖11B係在一顯示器之視野中之不同使用者之圖解。 11A-11B are illustrations of different users in the field of view of a display.

圖12係在具有一使用者計時器之一顯示器之視野中之一使用者之一圖解。 Figure 12 is an illustration of one of the users in the field of view of a display having a user timer.

圖13圖解說明對在一顯示器之視野中之一使用者之一人體工學指示符。 Figure 13 illustrates an ergonomic indicator of one of the users in the field of view of a display.

圖14係當在一顯示器之視野中偵測到兩個使用者時之一隱私設定之一圖解。 Figure 14 is an illustration of one of the privacy settings when two users are detected in the field of view of a display.

圖15圖解說明在一使用者未被一顯示器辨識出時之一指示符。 Figure 15 illustrates one of the indicators when a user is not recognized by a display.

圖16圖解說明僅照明對應於一使用者之凝視之顯示器之一區段之一顯示器。 Figure 16 illustrates a display that illuminates only one of the sections of the display corresponding to a user's gaze.

圖17圖解說明對在顯示器之視野中之一使用者之一距離指示符。 Figure 17 illustrates one of the distance indicators for a user in the field of view of the display.

圖18係在一顯示器之視野中之一使用者之一圖解說明。 Figure 18 is an illustration of one of the users in the field of view of a display.

圖19A至圖19B圖解說明基於使用者之年齡來調整一顯示器上之文字之大小。 19A-19B illustrate adjusting the size of text on a display based on the age of the user.

圖20A至圖20B圖解說明基於使用者之年齡來調整在一顯示器上之圖示之大小。 20A-20B illustrate the size of the graphic on a display based on the age of the user.

圖21A至圖21B圖解說明基於使用者距顯示器之距離來調整在一顯示器上之文字及/或圖示之大小。 21A-21B illustrate the adjustment of the size of text and/or icons on a display based on the distance of the user from the display.

圖22係展示一例示性人體工學感測器模組之一方塊圖。 Figure 22 is a block diagram showing an exemplary ergonomic sensor module.

圖23係展示整合至一顯示器中之一人體工學感測器模組之一實例之一圖式。 Figure 23 is a diagram showing one example of an ergonomic sensor module integrated into a display.

圖24係展示在一顯示器外部使用之一人體工學感測器模組之一實例之一圖式。 Figure 24 is a diagram showing one example of an ergonomic sensor module used external to a display.

圖25係展示當在使用一人體工學感測器模組時進行的一例示性處理方法中之步驟之一流程圖。 Figure 25 is a flow chart showing one of the steps in an exemplary processing method performed when an ergonomic sensor module is used.

圖26係展示相對於一顯示器之一使用者之偏擺角度之一實例之一圖式。 Figure 26 is a diagram showing one example of a yaw angle with respect to a user of a display.

圖27係展示相對於一顯示器之一使用者之俯仰角度之一實例之一圖式。 Figure 27 is a diagram showing one example of a pitch angle relative to a user of a display.

Good or natural lighting conditions for capturing digital images make an object appear to be illuminated uniformly from all directions, with neither too much nor too little light. Poor lighting conditions can include low-light, uneven-light and no-light conditions. Uneven light includes light directed from an angle such that an object, such as a face, is somewhat brighter on one side than on the other (e.g., left-right, top-bottom, along a diagonal, etc.), or simply light that leaves one or more shadows somewhere on the object. Figure 2 illustrates an example of an image containing a face captured under a low-light condition (e.g., 30 lux). The face in the image illustrated in Figure 2 is both dimly and unevenly illuminated, that is, one side of the face appears much darker than the other. The regions including the forehead, neck, one ear, the tip of the nose and one cheek, although dark, are still somewhat discernible, while regions such as the eyes, mouth and chin, torso or shirt, hair, and one of the two ears are approximately completely dark.
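As a rough illustration of how such conditions might be classified, the following sketch compares the overall brightness of a face region and the brightness of its two halves. This is an assumed heuristic for illustration only, with assumed thresholds, and is not the method disclosed herein.

```python
# Illustrative sketch: classify a face region as low-light and/or unevenly lit
# from its luminance statistics (thresholds are assumed values).
import numpy as np

LOW_LIGHT_MEAN = 60.0   # assumed 8-bit luminance threshold for "low light"
UNEVEN_RATIO = 1.5      # assumed brighter-half / darker-half ratio threshold

def lighting_condition(face_luma: np.ndarray) -> str:
    """face_luma: 2-D array of 8-bit luminance values covering the face region."""
    mean_all = float(face_luma.mean())
    w = face_luma.shape[1]
    left = float(face_luma[:, : w // 2].mean())
    right = float(face_luma[:, w // 2:].mean())
    ratio = max(left, right) / max(min(left, right), 1.0)
    if mean_all < LOW_LIGHT_MEAN and ratio > UNEVEN_RATIO:
        return "low and uneven"
    if mean_all < LOW_LIGHT_MEAN:
        return "low"
    if ratio > UNEVEN_RATIO:
        return "uneven"
    return "acceptable"

# Toy example: a face patch whose left half is much darker than its right half.
demo = np.hstack([np.full((100, 50), 20), np.full((100, 50), 80)]).astype(np.uint8)
print(lighting_condition(demo))   # -> "low and uneven"
```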

In general, low-light conditions mean that an object such as a face may or may not be detected, that object/face tracking, where present, may have difficulty locking on, and that in any case the image data contains less information than desired. For example, in low-light conditions, such as in Figure 2, only certain regions of an object may be discernible while other regions are not. In another example, one or more parameters may be insufficiently determinable, for example illuminance, color, focus, tone reproduction or white balance information, or facial feature information such as whether a participant is smiling or blinking or is partially occluded or otherwise in shadow. Some descriptions of poor lighting conditions, and certain solutions for dealing with them, can be found in US20080219517 and US20110102553, which are assigned to the same assignee and incorporated by reference.

Under no-light conditions, an object (e.g., a face) cannot even be discerned or detected. In the absence of any visible light, a person cannot visually discern any region or parameter of the object. The present invention provides the applicant's advantageous device, which includes an infrared light source and sensor, described below with reference to Figures 3 to 4 in accordance with certain embodiments, to enhance mobile video conferencing when lighting conditions are less than optimal.

Images captured under low or uneven lighting will typically have shadowed/dark regions mixed with brighter regions, and generally will not look as pleasing as images captured under normal or optimal light. In fact, unlike images captured in a professional studio, most pictures that people take with smart phones and handheld consumer digital cameras are taken in a variety of locations with less than optimal lighting. In certain embodiments, a calibration image previously captured by the user (e.g., under better light, such as at a normal or even optimal light level) can advantageously be stored and then used to enhance, reconstruct or even replace certain image regions, features, parameters or properties, such as skin tone, occluded or shadowed features, color balance, white balance, exposure and so on, that are not sufficiently discernible or desirable within the captured images of a video stream subject to repeatedly poor lighting.

Certain information from a current original image (e.g., face size, eye and lip movement, focus, tone, color, orientation, or relative or overall exposure) can be closely replicated, using skin tone (e.g., see U.S. Patents 7,844,076, 7,460,695 and 7,315,631, which are incorporated by reference) and/or one or more other properties from the calibration image, to provide an appearance as close as possible to a natural face. US 20080219581, US 20110102638 and US 20090303343 are assigned to the same assignee and are hereby incorporated by reference to provide further solutions, which can be combined with certain embodiments expressly set forth herein, for working with and enhancing such low-light images. In certain embodiments, the background can also be replaced with an artificial background, with a background extracted from an image taken under better lighting conditions, or with a blurred or arbitrary background.

The various embodiments described herein involve using information and/or data from previously stored images to improve the skin tone and/or other properties of objects, such as facial images, viewed at either end of a video conference by any of the conference participants, wherever they may be located. Efficient use of the handheld device's resources is also described, for example by transmitting only audio and foreground face data, with or without surrounding data, and in particular, in accordance with embodiments described herein, without the background data that is distinguished from the foreground data.

In certain embodiments, mobile video conferencing is advantageously enhanced, particularly with regard to images captured under low-light and non-uniform-light conditions. In certain embodiments, an array (e.g., a circle) of well-positioned infrared (IR) light emitting diodes (LEDs) emitting IR light can be used so that IR reflected from the face improves detection of a user's face under low-light/no-light conditions. Figure 3 illustrates an example of a handheld device including one or more infrared light emitters (e.g., a circle of infrared light emitters). In other embodiments, only a single IR emitter is used, or two IR emitters are arranged left-right or top-bottom, one on each of two sides of the device, or four IR emitters are provided, including one on each side of the device. Various configurations are possible, including placing opposing IR emitters on any of the six sides of the device, and they can be fixed or movable relative to the device.

These IR LEDs can have a controlled current, and thus an output power that is controlled in dependence on parameters such as the distance of the face from the handheld device. This feature can also advantageously reduce the power usage of existing flash-based cameras. In further embodiments, the IR emitter is initially focused in a search mode, and then maintains constant focus on the face once the face is being tracked by the device's face tracker module.

Illuminating the face with one or more infrared LEDs can provide improved detection of a face at a short distance from a device equipped with an IR sensor that captures the IR light reflected from the face or other target object. Figure 4 illustrates a face detected by means of a single infrared light emitter at 40 cm, without an external visible light source (that is, without the infrared light emitter the face would be completely dark, at 0 lux).

Skin tone can be reconstructed using calibration images previously captured by the user (following specific instructions) at an optimal light level. Information from the current original image, such as face size and eye and lip movement, can be closely replicated, using the skin tone from one or more of the calibration images, for example, to provide an appearance as close as possible to a natural face. Figure 5 illustrates an example of an image captured under a low-light condition (e.g., 30 lux), painted with a patch of skin tone from a calibration image captured at a higher light level. Similarly, the background can also be replaced with an artificial background and/or with an image captured under better lighting conditions. Figure 6 illustrates an example of a calibration image captured at a light level higher than that of the image of Figure 5, and ideally at an optimal or normal light level.
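One very simplified way to use a well-lit calibration image in this manner is to shift the color of the skin pixels in the low-light frame toward the calibration image's mean skin color, as in the sketch below. This is an assumed illustration only, not the disclosed algorithm; it also assumes the frame and calibration image are already aligned so a single skin mask applies to both.

```python
# Illustrative sketch: pull the skin color of a low-light frame toward the mean
# skin color of a stored, well-lit calibration image (assumed simplification).
import numpy as np

def transfer_skin_tone(frame: np.ndarray, calib: np.ndarray,
                       skin_mask: np.ndarray, strength: float = 0.8) -> np.ndarray:
    """frame, calib: HxWx3 arrays with values in [0, 255]; skin_mask: HxW boolean."""
    out = frame.astype(np.float32)
    frame_mean = out[skin_mask].mean(axis=0)                     # mean RGB of skin in frame
    calib_mean = calib[skin_mask].astype(np.float32).mean(axis=0)  # mean RGB of skin in calibration
    out[skin_mask] += strength * (calib_mean - frame_mean)
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage: a dark frame, a brighter calibration image, and a full-face mask.
frame = np.full((4, 4, 3), 40, dtype=np.uint8)
calib = np.full((4, 4, 3), 180, dtype=np.uint8)
mask = np.ones((4, 4), dtype=bool)
print(transfer_skin_tone(frame, calib, mask)[0, 0])   # shifted from 40 toward 152
```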

When a mobile video conferencing participant uses a handheld device, the motion of the background relative to the camera lens/sensor is usually greater or faster than that of the foreground. The foreground can include the participant's face, with or without any surrounding regions such as hair, neck, torso, shirt, arms, hat, scarf or other surrounding objects or regions; see US 20110081052 and US 20070269108, which are incorporated herein by reference. Participants usually try to keep their faces still relative to the camera lens/sensor. To the extent the participant succeeds in this effort, the participant and the camera lens/sensor can be static, or can move roughly together or together on average, while the background can be static or, alternatively, can move relatively quickly relative to the camera lens/sensor.

By distinguishing objects that move quickly relative to the camera from objects that move noticeably more slowly, a device according to certain embodiments is able to segment foreground objects and regions from background objects and regions in an image captured by the device. According to certain embodiments, by transmitting only the foreground to one or more other video conference participants, the device is more resource-efficient, and it need not otherwise process the image data of the blurred, moving background but can simply discard it. Alternatively, the blurred background image can be transmitted without further processing, since it may be desirable to transmit only a blurred background in order, for example, to maintain privacy (see U.S. Patent Application Serial No. 12/883,192 of the same assignee, which is incorporated herein by reference) and/or to avoid spending processing resources on the background data.

Figures 7A-7B illustrate a sequence of two images taken with a handheld camera, the two images having different backgrounds but similar face positions in both, as indicated by the similar outline shapes. Since it may not be desirable to transmit the background information, or for one, some or even all of the conference participants to view it, the device can advantageously forgo, among other image processing that might otherwise be expected to provide viewable images, the resource-intensive computations that would otherwise be involved in continuously providing the changing background image (including deblurring, color and white balance enhancement, and focus enhancement). Instead, those image enhancements are better spent on the image data that is actually of interest, for example the conference participant's face.

Figures 8A-8B illustrate the sequence of the two images of Figures 7A-7B, but this time with motion vector arrows indicating the direction and magnitude of motion of the background relative to the foreground face object in an example mobile video conferencing environment. The camera is able to use the motion vector information to distinguish the foreground from the background.

In a mobile video conferencing environment, the user typically keeps the mobile device as steady as possible and aimed at his or her face. However, since both the device and the user may be moving in this environment, the background will usually change rapidly from frame to frame. The foreground (e.g., the face) will therefore generally be relatively stable compared to the background, except when the person is stationary, in which case the background and the foreground will both be approximately equally stable. Segmenting the changeable background data from the foreground data greatly enhances the device's efficiency in a mobile video conferencing environment.

By using this understanding of the difference in motion between foreground and background, by distinguishing the stability of the foreground relative to the background, and optionally by also using specific information about the user (such as face recognition, blemish correction, skin tone, eye color, face beautification, or other user-selected or automatic image processing specific to the user, since a mobile device is primarily a single-user device), background-versus-foreground discrimination algorithms are provided in accordance with embodiments that are efficient and suited to the needs of mobile video conferencing. The following, which set forth various examples of some of these techniques, are incorporated herein by reference: US 20100026831, US 20090080796, US 20110064329, US 20110013043, US 20090179998, US 20100066822, US 20100321537, US 20110002506, US 20090185753, US 20100141786, US 20080219517, US 20070201726, US 20110134287, US 20100053368, US 20100054592, US 20080317378, and US Serial No. 12/959,151, filed December 2, 2010 by the same assignee as US 20090189997. Advantageously, the motion vectors of different objects/pixels in the image are used to help decide, for example, whether an object belongs to the background or to the foreground, so that resources can be allocated efficiently.
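A minimal sketch of this kind of motion-vector-based labeling is given below. It assumes per-block motion vectors are already available and uses a simple midpoint threshold between the slowest and fastest blocks; both the block representation and the thresholding rule are assumptions for illustration, not the disclosed algorithm.

```python
# Illustrative sketch: label blocks as foreground when their motion between two
# frames is small compared with the fast-moving background blocks.
import numpy as np

def segment_by_motion(motion_vectors: np.ndarray) -> np.ndarray:
    """motion_vectors: (rows, cols, 2) per-block (dx, dy) vectors between frames.
    Returns a boolean (rows, cols) mask that is True for foreground blocks."""
    magnitude = np.linalg.norm(motion_vectors, axis=2)
    # Split blocks at the midpoint between the slowest and fastest block:
    # slow-moving blocks are treated as the (stable) foreground, fast-moving
    # blocks as the (rapidly changing) background.
    threshold = 0.5 * (magnitude.min() + magnitude.max())
    return magnitude <= threshold

# Toy demo: a 4x4 grid of blocks; the central 2x2 "face" blocks are nearly
# static while the surrounding background blocks drift quickly.
mv = np.full((4, 4, 2), 8.0)
mv[1:3, 1:3] = 0.5
print(segment_by_motion(mv).astype(int))
```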

In certain embodiments, a candidate for the foreground region can be used to speed up face recognition, face tracking or face detection by focusing only on that candidate region. In addition, since a typical mobile device is used primarily by a single user, previously captured calibration images can be used to speed up face recognition, face detection and image enhancement, and to strengthen background separation, for example the separation of a foreground face from non-face data.

In certain embodiments, once the foreground (e.g., the extent of the face) is detected, the background can be replaced with a user's background preference. This provides, in the context of a mobile video conferencing application, an efficient implementation of background replacement using an effective foreground/background separation method.

Since the relatively unimportant background information will usually change more quickly than the familiar face of the user, the background information would otherwise consume more bandwidth for transmission to the other end of the mobile video conference. Advantageously, efficient use of bandwidth for mobile video conferencing is provided herein by detecting and transmitting only the compressed face and/or other foreground information.

Once the foreground is detected and transmitted, the background can be replaced with a preference of the user at the receiving end or with other automatically selected data. An efficient implementation of background replacement is provided as a result of this advantageous separation method. In addition, improved compression performance is provided because the facial skin tone is kept substantially constant even when the lighting conditions change. This improves the bandwidth efficiency of mobile video conferencing. In certain embodiments, the distinction between the background and the face or other foreground is based on analyzing differences between the motion vectors of objects in the images captured in the mobile video conferencing environment. Other foreground/background segmentation techniques can be used instead of, or in combination with, this technique, as described in several of the patent applications of the same assignee incorporated herein by reference.
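A minimal end-to-end sketch of the idea of sending only the foreground and compositing it over a receiver-chosen background is shown below. The payload structure and the simple rectangular crop are assumptions for illustration; a real system would add compression and a more precise foreground mask.

```python
# Illustrative sketch: the sender crops only the detected face/foreground
# rectangle; the receiver pastes it over a locally chosen background.
import numpy as np

def sender_payload(frame: np.ndarray, face_box: tuple) -> dict:
    """Crop the detected face/foreground rectangle; only this region is transmitted."""
    x, y, w, h = face_box
    return {"box": face_box, "pixels": frame[y:y + h, x:x + w].copy()}

def receiver_compose(payload: dict, background: np.ndarray) -> np.ndarray:
    """Composite the received foreground over a background chosen at the receiver."""
    x, y, w, h = payload["box"]
    out = background.copy()
    out[y:y + h, x:x + w] = payload["pixels"]
    return out

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:80, 60:100] = 200                              # stand-in for the face region
payload = sender_payload(frame, (60, 40, 40, 40))
shown = receiver_compose(payload, np.full((120, 160, 3), 30, dtype=np.uint8))
print(payload["pixels"].nbytes, "bytes sent vs", frame.nbytes, "for the full frame")
```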

U.S. Patents 7,953,287 and 7,469,071 are incorporated by reference and contain descriptions of embodiments involving foreground/background segmentation and deliberate background blurring. U.S. Patents 7,868,922, 7,912,285, 7,957,597, 7,796,822, 7,796,816, 7,680,342, 7,606,417 and 7,692,696, and US 20070269108, are incorporated by reference and describe foreground/background segmentation techniques. U.S. Patents 7,317,815 and 7,564,994 relate to face tools and facial image workflows and are also incorporated by reference. U.S. Patents 7,697,778 and 7,773,118, US 20090167893, US 20090080796, US 20110050919, US 20070296833, US 20080309769 and US 20090179999, and USSN 12/941,983, are incorporated by reference and contain descriptions of embodiments related to motion and/or low-light or uneven-light compensation in digital images. US 20080205712 and US 20090003661 are incorporated by reference and contain descriptions related to separating directional lighting variability in statistical face modeling based on texture space decomposition, and US 20080219517 is incorporated by reference and contains descriptions of embodiments related to illumination detection using classifier chains.

[FIGS. 9 to 17]

The present invention provides techniques and methods for adjusting user preference settings based on parameters or conditions detected by an electronic display system or monitor. In some embodiments, the display system can detect and/or determine a user's age. In another embodiment, the display system can detect and/or determine a distance between the user and the display. In yet another embodiment, the display system can detect and/or determine ambient light or the amount of light on a user's face, either alone or in combination with the age or distance conditions detected above. In some embodiments, the display system can recognize a user's face, and can additionally recognize a user's gaze or determine the user's pupil diameter.

Any number of user preferences or display settings can be dynamically adjusted based on the parameters or conditions detected or determined by the display. For example, in one embodiment, the displayable content or user privacy settings can be adjusted based on the user's detected age. In another embodiment, the types of content or file types that can be displayed can be restricted based on the user's detected age. In some embodiments, particular users are individually recognized, and the displayable content and/or privacy settings can be individually adapted to the particular individual recognized by the display. In some embodiments, a user timer can determine when a predetermined time limit has been exceeded and indicate to the user that use of the display should stop. In addition, the display can alert the user when the user is seated or tilting the head in a way that can lead to injury, pain, or discomfort. In some embodiments, the brightness of the screen can be adjusted automatically based on the user's detected age, the user's pupil diameter, the ambient light around the user or on the user's face, the distance between the user and the display, or any logical combination of the foregoing conditions.

FIG. 9 illustrates a display 900, such as a computer monitor, a television display, a cellular telephone display, a tablet display, or a laptop display, having a screen 902 and a plurality of sensors 904. The sensors can include, for example, an imaging sensor (such as a camera including a CCD or CMOS sensor), a flash or other form of illumination, and/or any other sensor configured to detect or image objects (such as ultrasonic, infrared (IR), thermal, or ambient light sensors). The sensors can be disposed on or incorporated within the display, or alternatively can be separate from the display. Any number of sensors can be included in the display. In some embodiments, a combination of sensors can be used. For example, in one embodiment, a camera, a flash, and an infrared sensor can all be included in a display. It should be appreciated that any combination of sensors, or any number of sensors, can be included on or near the display. As shown in FIG. 9, a user 906 is shown positioned in front of the display 900, within the detection range or field of view of the sensors 904.

Various embodiments involve a camera mounted on or near a display, the display being coupled to a processor programmed to detect, track, and/or recognize a face or partial face, a facial region (such as one or both eyes, or a mouth region), or a facial expression (such as a smile or a blink). In some embodiments, the processor is integrated within the display or disposed on it. In other embodiments, the processor is separate from the display. The processor can include memory and software configured to receive signals from the sensors and process them. Certain embodiments include sensing characteristics of a user with the sensors and determining face-related parameters, such as orientation, pose, tilt, hue, color balance, white balance, relative or overall exposure, face size or the size of facial regions (including the eyes, or eye regions such as the pupil, iris, sclera, or eyelid), a focus condition, and/or a distance between the camera or display and the face. In this regard, the following are incorporated by reference herein and disclose alternative embodiments and features that can be combined with the embodiments or features described herein: U.S. patent application Ser. No. 13/035,907, filed February 25, 2011, Ser. No. 12/883,183, filed September 16, 2010, and Ser. No. 12/944,701, filed November 11, 2010, each assigned to the same assignee, as well as U.S. Patents 7,853,043, 7,844,135, 7,715,597, 7,620,218, 7,587,068, 7,565,030, 7,564,994, 7,558,408, 7,555,148, 7,551,755, 7,460,695, 7,460,694, 7,403,643, 7,317,815, 7,315,631 and 7,269,292.

A number of techniques can be used to determine the age of a user seated in front of a display or monitor. In one embodiment, the user's age can be determined based on the user's eye size, the user's iris size, and/or the user's pupil size.

Depending on the sensors included in the display, an image of the user or other data can be acquired by the display containing those sensors. Metadata about the acquired data (including the distance to the user or object, the aperture, the CCD or CMOS size, the focal length of the lens, and the depth of field) can be recorded with the image or recorded at the time the image is acquired. Based on this information, the display can determine a potential size range for the eye, iris, pupil, or red-eye region (if a flash is used).

In this case, the variability is not only between different individuals but is also age-based. Fortunately, in the case of the eye, eye size remains relatively constant as a person grows from infancy to adulthood; this is the reason for the striking "big eyes" effect seen in babies and young children. The average infant eyeball measures approximately 19.5 millimeters from front to back and, as noted above, grows to an average of 24 millimeters over the person's lifetime. Based on this data, in the case of eye detection, the size of an object in the field of view that may be a pupil (the pupil being part of the iris) is bounded, allowing for a certain variability: 9 mm ≤ iris size ≤ 13 mm.

Thus, by detecting or determining a user's eye size relative to other facial features by means of the sensors 904, the user's age can be calculated. Further details regarding methods and processes for determining a user's age based on eye, iris, or pupil size can be found in U.S. Patent No. 7,630,006 to DeLuca et al.
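A rough sketch of an eye-geometry age guess along these lines is shown below. The 9-13 mm iris bound from the passage above is used only as a plausibility check; the face-width ratio, its threshold, and the helper names are assumptions made for illustration, not values from this description.

```python
def estimate_physical_size_mm(size_px, distance_mm, focal_length_px):
    """Back-project an image measurement (pixels) to a physical size (mm)
    using the pinhole camera model: size_mm = size_px * distance / focal_length."""
    return size_px * distance_mm / focal_length_px

def classify_age_from_eye_geometry(iris_px, face_width_px, distance_mm,
                                   focal_length_px, child_ratio=0.095):
    """Rough age-category guess from eye geometry.

    Two cues are combined:
    * the physical iris diameter, roughly 9-13 mm in humans, is used as a
      sanity check that the detected region really is an eye;
    * the ratio of iris diameter to face width, which is larger for infants
      and young children ("big eyes" effect), is thresholded to guess the
      category.  The threshold is an illustrative assumption.
    """
    iris_mm = estimate_physical_size_mm(iris_px, distance_mm, focal_length_px)
    if not 9.0 <= iris_mm <= 13.0:
        return "not-an-eye"            # detection is probably not an iris
    ratio = iris_px / float(face_width_px)
    return "child" if ratio >= child_ratio else "adult"
```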

In another embodiment, people's faces can be detected and classified according to the age of the subject (see, e.g., U.S. Patent No. 5,781,650 to Lobo et al.). A variety of image processing techniques can be combined with anthropometric data about facial features to determine an estimate of the age category of a particular facial image. In a preferred embodiment, anthropometric data within a digital image is used to verify facial features and/or eye regions. The reverse approach can also be taken, and it can involve probabilistic inference, also known as Bayesian statistics.

In addition to determining the user's age, the display can also determine or detect the user's distance from the display, the user's gaze (or, more specifically, where and in what direction the user is looking), the user's posture or amount of head tilt, and lighting levels, including ambient light and the amount of light on the user's face. Details on how to determine the user's distance from the display, the user's gaze, head tilt or direction, and lighting levels are also found in U.S. Patent No. 7,630,006 to DeLuca et al. and U.S. application Ser. No. 13/035,907.

Distance can readily be determined using an IR sensor or an ultrasonic sensor. In other embodiments, an image of the user can be captured with a camera, and the user's distance can be determined by comparing the relative size of the detected face with the size of detected features on the face (such as the eyes, nose, lips, etc.). In another embodiment, the relative spacing of features on the face can be compared with the detected size of the face to determine the user's distance from the sensor. In yet another embodiment, the focal length of the camera can be used to determine the user's distance from the display, or alternatively the focal length can be combined with detected characteristics of the user (such as face size or the relative size of facial features) to determine the user's distance from the display.
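Where the interpupillary distance in the image and the camera's focal length are available, a pinhole-camera back-projection gives one simple distance estimate; the 63 mm average interpupillary distance below is an assumed population value, not a figure from this description.

```python
AVERAGE_IPD_MM = 63.0   # assumed average adult interpupillary distance

def estimate_distance_mm(ipd_px, focal_length_px, ipd_mm=AVERAGE_IPD_MM):
    """Estimate camera-to-face distance with the pinhole model:
    distance = focal_length * real_size / image_size."""
    return focal_length_px * ipd_mm / ipd_px

# Example: with a focal length of 1000 px and an interpupillary distance of
# 70 px in the image, the face is roughly 900 mm (0.9 m) from the camera.
```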

In some embodiments, determining the user's gaze can include acquiring and detecting a digital image that includes at least part of a face, including one or both eyes. At least one of the eyes can be analyzed, and the degree to which an eyelid covers the eyeball can be determined. Based on the determined degree of eyelid coverage, an approximate direction of the eye's vertical gaze can be determined. The analysis of at least one of the eyes can further include determining an approximate direction of horizontal gaze. In some embodiments, the technique includes initiating a further action, or initiating a different action (or both), based at least in part on the determined approximate direction of horizontal gaze. The analysis of the one or both eyes can include spectrally analyzing a reflection of light from the one or both eyes. It can include analyzing the amount of sclera visible on at least one side of the iris. In other embodiments, it can include computing a ratio of the amounts of sclera visible on opposite sides of the iris.
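The two gaze cues described above can be sketched as follows. The thresholds and the sign convention (which side of the iris corresponds to which gaze direction) are assumptions that would need calibration against the actual camera geometry.

```python
def horizontal_gaze(sclera_left_px, sclera_right_px, dead_zone=0.15):
    """Approximate horizontal gaze from the ratio of sclera visible on the two
    sides of the iris.  The mapping of 'left'/'right' to world directions
    depends on the image coordinate convention and is assumed here."""
    total = sclera_left_px + sclera_right_px
    if total == 0:
        return "unknown"
    balance = (sclera_left_px - sclera_right_px) / total   # -1 .. +1
    if balance > dead_zone:
        return "right"
    if balance < -dead_zone:
        return "left"
    return "center"

def vertical_gaze(eyelid_coverage, raised=0.25, lowered=0.55):
    """Approximate vertical gaze from the fraction of the eyeball covered by
    the upper eyelid: a heavily covered eyeball suggests a downward gaze."""
    if eyelid_coverage >= lowered:
        return "down"
    if eyelid_coverage <= raised:
        return "up"
    return "level"
```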

In some embodiments, the digital image can be analyzed to determine an angular offset of the face from a normal state, and the approximate direction of the eye's vertical gaze can be determined based partly on the angular offset and partly on the degree of eyelid coverage of the eyeball.

Certain embodiments include extracting one or more relevant features of the face, which are typically highly detectable. Such features can include the eyes and lips, or the nose, eyebrows, eyelids, eye features such as the pupils, irises and/or sclera, hair, forehead, chin, ears, and so on. For example, the combination of the two eyes and the center of the lips forms a triangle that can be detected, not only to determine the orientation of the face (e.g., head tilt) but also the rotation of the face relative to a facial shot. The orientation of the detectable features can be used to determine the angular offset of the face from the normal state. Other highly detectable parts of the image (such as the nostrils, eyebrows, hairline, bridge of the nose, and neck) can be marked as physical extensions of the face.
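The eye-eye-mouth triangle lends itself to a simple geometric sketch. The roll computation below is standard; the yaw hint from the mouth's offset relative to the eye midline is an assumption made for illustration rather than the method claimed here.

```python
import math

def head_roll_degrees(left_eye, right_eye):
    """In-plane head tilt (roll): the angle of the line joining the two eye
    centres relative to the image horizontal.  0 means the head is level."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def head_yaw_hint(left_eye, right_eye, mouth):
    """Crude yaw hint from the eye-eye-mouth triangle: if the mouth centre is
    noticeably off the midline between the eyes, the face is probably rotated
    toward one side.  Returns a signed value in [-1, 1]."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_span = abs(right_eye[0] - left_eye[0]) or 1.0
    return max(-1.0, min(1.0, (mouth[0] - mid_x) / eye_span))
```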

Ambient light can be determined with an ambient light sensor or a camera. In other embodiments, ambient light can be determined based on the relative size of a user's pupil compared with the user's eye size or other facial features.

Using these settings or parameters detected by the display (including age, eye, pupil and iris size, distance from the display, gaze, head tilt, and/or ambient lighting), any number of user preference settings can be dynamically adjusted or changed to suit the particular user and setting.

In one embodiment, the displayable content and privacy settings can be changed automatically based on a user's detected age. Referring to FIG. 10, when a child or minor is detected in front of the display 1000, a prompt or symbol 1008 can be displayed to indicate that a child or minor has been detected and that appropriate displayable-content and privacy settings have been enabled for the display. In one embodiment, if a child or minor is detected in front of the display, preset privacy and filtering options (i.e., options programmed or selected by an adult or administrator) can be enabled to control the type of content shown on the display 1000. For example, web browser filtering can be strengthened to prevent a young user from encountering material or content that a parent or administrator considers age-inappropriate (e.g., pornography, foul language, violence, etc.).

The determination of which age groups correspond to a "child," a "minor," an "adult," or an "elderly" person can be preprogrammed or selected by an administrator. However, in some embodiments, a child can be a person younger than 15, a minor can be a person from 15 to 17, an adult can be a person from 18 to 65, and an elderly person can be a person older than 65.
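The cutoffs above map directly to a small classification helper, shown below. The policy table paired with it is a hypothetical example of administrator-selected content and privacy settings, not a table defined in this description.

```python
def age_group(age_years, cutoffs=(15, 18, 65)):
    """Map an estimated age to the categories used in this description.
    Default cutoffs follow the example above (child < 15, minor 15-17,
    adult 18-65, elderly > 65); an administrator could override them."""
    child_max, adult_min, adult_max = cutoffs
    if age_years < child_max:
        return "child"
    if age_years < adult_min:
        return "minor"
    if age_years <= adult_max:
        return "adult"
    return "elderly"

CONTENT_POLICY = {   # hypothetical administrator-chosen policy per age group
    "child":   {"web_filter": "strict",   "private_files": False},
    "minor":   {"web_filter": "moderate", "private_files": False},
    "adult":   {"web_filter": "off",      "private_files": True},
    "elderly": {"web_filter": "off",      "private_files": True},
}
```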

In addition, content already stored on a computer attached to the display can be treated as non-displayable depending on the detected user's age or classification. For example, if the user in front of the display is determined to be too young, private financial files, photos, videos, or other sensitive documents or data can automatically become inaccessible or non-displayable. As described above, the age-limit cutoff used to determine whether data is inaccessible or non-displayable can be preprogrammed or selected by an administrator.

In addition to changing the displayable content and/or privacy settings based on the user's detected age, in some embodiments the display can detect or recognize particular individual users and adjust the displayable content, privacy settings, and/or personal settings based on the individual user detected. Referring to FIG. 11A, a first user (e.g., User 1) is recognized by the display and, as indicated by prompt 1108, that user's individual user preferences, displayable content, and privacy settings are automatically loaded on the display. Similarly, in FIG. 11B, a second user (e.g., User 2) is recognized by the display and, as indicated by prompt 1108, that user's individual user preferences, displayable content, and privacy settings are automatically loaded on the display. Since these settings can be customized by the user or by someone else (e.g., a parent or administrator), it should be understood that User 1's settings can differ from User 2's. For example, an administrator can change the displayable-content and privacy settings for all potential users of the system, and can enter a photo of each user or other recognizable characteristics of each potential user. When a user is positioned in front of the display, the display can capture an image of the user, compare it with the system's known users, and automatically adjust the displayable content and privacy settings based on the detected user.

FIG. 12 is an illustration of a user in the field of view of a display having a user timer. In this embodiment, the display can detect that a user has been positioned in front of the display for a predetermined time limit and, by means of a prompt or symbol 1208, indicate to the user that the predetermined time limit in front of the display has been exceeded. This can be used, for example, to limit the amount of time a user spends in front of the display, or to encourage frequent breaks (e.g., for exercise, reducing eye strain, etc.). In some embodiments, the predetermined time limit can vary depending on the age of the user detected by the display. For example, a parent may want to limit the amount of time a child spends in front of the display. In this example, if the display detects that the user is a child, an indicator or symbol can be displayed after the predetermined time limit to advise the user to stop using the display. In another embodiment, a user timer indicator or symbol 1210 can advise the user to take a short break, such as to comply with local, state, or federal regulations requiring employee breaks after a certain amount of time. In some embodiments, the display can be turned off automatically once the predetermined time limit is reached. In addition, the display can remain off for a programmed duration to prevent further use (e.g., until the next day, or until a preset period of time has elapsed before the display can be used again).
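A minimal presence-timer sketch is shown below, assuming the sensor reports face detection at regular intervals; the per-age-group limits and the warn-then-lock policy are illustrative assumptions, not values from this description.

```python
import time

class PresenceTimer:
    """Track continuous presence in front of the display and decide when to
    warn or lock.  The per-group limits are illustrative values only."""

    LIMITS_S = {"child": 30 * 60, "minor": 60 * 60, "adult": 2 * 60 * 60}

    def __init__(self, group="adult"):
        self.limit = self.LIMITS_S.get(group, self.LIMITS_S["adult"])
        self.session_start = None

    def update(self, face_detected, now=None):
        """Call once per sensor sample; returns 'ok', 'warn', or 'lock'."""
        now = now if now is not None else time.monotonic()
        if not face_detected:
            self.session_start = None          # user left: reset the session
            return "ok"
        if self.session_start is None:
            self.session_start = now
        elapsed = now - self.session_start
        if elapsed > self.limit * 1.25:
            return "lock"                      # e.g. turn the display off
        if elapsed > self.limit:
            return "warn"                      # e.g. show symbol 1208/1210
        return "ok"
```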

In addition to determining whether a user has been in front of the display for more than a predetermined time limit, the display can also determine whether the user is seated improperly or is tilting his or her head in a way that can lead to injury or discomfort. For example, referring to FIG. 13, the display can detect or determine that the user 1306 is gazing at the display with poor posture or with a tilted head, which can potentially cause pain, cramps, or other discomfort. An improper head tilt can be determined as a head tilted at an angle away from a normal, upright head posture. In this instance, the display can show an ergonomic indicator or symbol 1312 to notify the user, or indicate to the user, to correct the improper posture or head tilt. This feature can correct an otherwise unnoticed improper posture or head tilt, preventing future pain, discomfort, or injury.

The face detection, eye detection, distance determination, and age determination described above can further be combined with light detection (e.g., ambient light detection or the illumination level of the user's face) to change or adjust additional user preference settings. In one embodiment, the display brightness can be changed based on the amount of ambient light detected by the display. In addition, it can be determined that an older user requires a brighter screen than a younger user, so the screen brightness can be adjusted automatically depending on the user's detected age. In another embodiment, ambient light is detected based on the detected brightness of the face. In yet another embodiment, the display can detect a pupil-closure percentage and combine it with the illumination level on the face and/or the ambient background light level to determine the brightness level of the screen.

For example, if a light is shining on a user's face, the user's pupils will be more constricted and the user will need a brighter screen. In this example, the screen can be brightened automatically based on the illumination level of the user's face and/or the user's pupil size. On the other hand, if there is strong ambient light in the user's background rather than on the user's face, the user's pupils will be more open, but the screen may be sufficiently illuminated by the background light and no adjustment may be needed. In yet another scenario, where both the user's face and the user's background are dark or have low ambient light, a bright screen may likewise be needed, and the brightness of the display can be automatically increased or adjusted to compensate.
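One hedged way to turn the three scenarios above into a brightness rule is sketched below; the thresholds, output levels, and the age adjustment are assumptions chosen for illustration rather than values taken from this description.

```python
def screen_brightness(face_luminance, background_luminance, pupil_openness,
                      age_years, normal=0.6, bright=0.9):
    """Map the scenarios discussed above to a 0..1 brightness value.

    face_luminance / background_luminance: mean luminance (0..1) of the face
    region and of the rest of the scene.  pupil_openness: 0 (constricted)
    to 1 (dilated)."""
    level = normal
    if face_luminance > 0.6 and pupil_openness < 0.4:
        # light shining on the face, pupils constricted -> raise brightness
        level = bright
    elif background_luminance > 0.6 and face_luminance <= 0.6:
        # bright surroundings already illuminate the screen -> leave as-is
        level = normal
    elif face_luminance < 0.2 and background_luminance < 0.2:
        # face and background both dark -> compensate with a brighter screen
        level = bright
    if age_years >= 65:
        level = min(1.0, level + 0.1)   # older users may prefer a brighter screen
    return level
```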

In yet another embodiment, user or screen privacy can be adjusted when an additional person enters the sensors' field of view, or when an unrecognized user enters the field of view. In a first embodiment, as shown in FIG. 14, the screen 1402 of the display 1400 can be turned off when a second user 1414 enters the field of view together with the user 1406. Similarly, referring to FIG. 15, if a user 1506 is not recognized by the display, an indicator or symbol 1516 can be displayed to indicate to that user that he or she has not been recognized. In this embodiment, the display can be programmed to turn off automatically after displaying the indicator 1516, or alternatively a lock screen can be displayed until a recognized user enters the field of view, or until the unknown user 1506 is granted access to the system.
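A minimal sketch of how the privacy reactions of FIGS. 14 and 15 could be driven from a face recognizer's output follows; the function name, the return labels, and the specific policy choices (blank on a second face, indicator then lock for an unrecognized face) are illustrative assumptions.

```python
def privacy_action(faces, known_user_ids):
    """Decide the screen's privacy reaction from the set of detected faces.

    `faces` is a list of recogniser results, one per detected face: a user id,
    or None when the face was not recognised."""
    if len(faces) == 0:
        return "no_change"
    if len(faces) > 1:
        return "blank_screen"        # a second person entered the field of view (FIG. 14)
    face = faces[0]
    if face is None or face not in known_user_ids:
        return "show_unrecognised_indicator_then_lock"   # FIG. 15 behaviour
    return "no_change"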

In yet a further embodiment, the display can follow the user's gaze and illuminate only the screen section 1518 corresponding to the user's gaze, as shown in FIG. 16. The display can also self-calibrate based on the user's eye movement across the screen while reading multiple lines of text, and illuminate the appropriate section of the screen based on the user's reading speed.

In yet another embodiment, shown in FIG. 17, the display 1700 can alert a user 1706 by means of an indicator or icon 1722 when the user sits too close to the display (i.e., when the display detects that the user is positioned closer to the display than an optimal viewing distance).

In some embodiments, the display can automatically adjust its brightness, or turn itself fully on or off, based on detected user settings in order to save power.

The system can also include features for power saving based on the detected user characteristics described above. The power-saving process can include multiple steps. For example, if the display does not recognize a face and/or two eyes in front of the display for a predetermined period, the display can initiate a power-saving protocol. In one embodiment, a first level of power saving can be initiated. For example, a first level of power saving can be that, when no user is detected in front of the display for a predetermined period, the display is dimmed by a set percentage. If the display continues not to detect the user's face and/or eyes for an additional period, the display can be powered off entirely. This process can have multiple intermediate power-level steps, and those intermediate steps can be configured by a system administrator based on individual power-saving goals.
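A sketch of the staged protocol is shown below, assuming face detection runs periodically; the stage timings and dim levels are example values of the kind an administrator would configure, not figures from this description.

```python
import time

class PowerSaver:
    """Staged power saving driven by face/eye detection.  Stage timings and
    dim levels are illustrative; the description leaves them configurable."""

    STAGES = [          # (seconds without a detected face, screen brightness 0..1)
        (60,  0.6),     # after 1 minute, dim to 60 %
        (300, 0.3),     # after 5 minutes, dim to 30 %
        (600, 0.0),     # after 10 minutes, power the display off
    ]

    def __init__(self):
        self.last_seen = time.monotonic()

    def update(self, face_detected, now=None):
        """Return the brightness the display should use right now."""
        now = now if now is not None else time.monotonic()
        if face_detected:
            self.last_seen = now
            return 1.0
        absent = now - self.last_seen
        brightness = 1.0
        for limit, level in self.STAGES:
            if absent >= limit:
                brightness = level
        return brightness
```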

In another embodiment, the entire sensor system, together with the display system's processor and the display, can be turned off when the display enters a power-saving mode. The sensors and processor can be configured to turn on occasionally (e.g., briefly, after a predetermined period has elapsed), scan for a face and/or eyes, and return to the off state if no user and/or eyes are detected. This can be extremely advantageous in a software-only implementation in which the software runs on a low-powered processor.

[FIGS. 18 to 21]

The present invention provides techniques and methods for adjusting user preference settings based on parameters or conditions detected by a display system or monitoring device. In some embodiments, the display system can detect and/or determine a user's age. In another embodiment, the display system can detect and/or determine a distance between the user and the display. In yet another embodiment, the display system can detect and/or determine ambient light or the amount of light on a user's face, either alone or in combination with the detected age or distance conditions described above. In some embodiments, the display system can recognize a user's face, and can additionally recognize a user's gaze or determine the user's pupil diameter.

Any number of user preferences or display settings can be dynamically adjusted based on the parameters or conditions detected or determined by the display. For example, in one embodiment, the font size or icon size can be adjusted based on the user's detected age. In another embodiment, the font size or icon size can be adjusted based on the user's detected distance from the display. In some embodiments, particular users are individually recognized, and the font or icon size can be individually adapted to the particular individual recognized by the display.

FIG. 18 illustrates a display 1800, such as a computer monitor, a television display, a cellular telephone display, a tablet display, or a laptop display, having a screen 1802 and a plurality of sensors 1804. The sensors can include, for example, an imaging sensor (such as a camera including a CCD or CMOS sensor), a flash or other form of illumination, and/or any other sensor configured to detect or image objects, such as ultrasonic, infrared (IR), or thermal sensors. The sensors can be disposed on the display or integrated within it, or alternatively the sensors can be separate from the display. Any number of sensors can be included in the display. In some embodiments, a combination of sensors can be used. For example, in one embodiment, a camera, a flash, and an infrared sensor can all be included in a display. It should be appreciated that any combination of sensors, or any number of sensors, can be included on or near the display. As shown in FIG. 18, a user 1806 is shown positioned in front of the display 1800, within the detection range or field of view of the sensors 1804.

Various embodiments involve a camera mounted on or near a display, the display being coupled to a processor programmed to detect, track, and/or recognize a face or partial face, a facial region (such as one or both eyes, or a mouth region), or a facial expression (such as a smile or a blink). In some embodiments, the processor is integrated within the display or disposed on it. In other embodiments, the processor is separate from the display. The processor can include memory and software configured to receive signals from the sensors and process them. Certain embodiments include sensing characteristics of a user with the sensors and determining face-related parameters, such as orientation, pose, tilt, hue, color balance, white balance, relative or overall exposure, face size or the size of facial regions (including the eyes, or eye regions such as the pupil, iris, sclera, or eyelid), a focus condition, and/or a distance between the camera or display and the face. In this regard, the following are incorporated by reference herein and disclose alternative embodiments and features that can be combined with the embodiments or features described herein: U.S. patent application Ser. No. 13/035,907, filed February 25, 2011, Ser. No. 12/883,183, filed September 16, 2010, and Ser. No. 12/944,701, filed November 11, 2010, each assigned to the same assignee, as well as U.S. Patents 7,853,043, 7,844,135, 7,715,597, 7,620,218, 7,587,068, 7,565,030, 7,564,994, 7,558,408, 7,555,148, 7,551,755, 7,460,695, 7,460,694, 7,403,643, 7,317,815, 7,315,631 and 7,269,292.

A number of techniques can be used to determine the age of a user seated in front of a display or monitor. In one embodiment, the user's age can be determined based on the user's eye size, the user's iris size, and/or the user's pupil size.

Depending on the sensors included in the display, an image of the user or other data can be acquired by the display having those sensors. Metadata about the acquired data (including the distance to the user or object, the aperture, the CCD or CMOS size, the focal length of the lens, and the depth of field) can be recorded with the image or recorded at the time the image is acquired. Based on this information, the display can determine a potential size range for the eye, iris, pupil, or red-eye region (if a flash is used).

In this case, the variability is not only between different individuals but is also age-based. Fortunately, in the case of the eye, eye size remains relatively constant as a person grows from infancy to adulthood, which is the reason for the striking "big eyes" effect seen in babies and young children. The average infant eyeball measures approximately 19.5 millimeters from front to back and, as noted above, grows to an average of 24 millimeters over the person's lifetime. Based on this data, in the case of eye detection, the size of the object that is a pupil (the pupil being part of the iris) is limited, allowing for a certain variability: 9 mm ≤ iris size ≤ 13 mm.

Thus, by detecting or determining a user's eye size by means of the sensors 1804, the user's age can be calculated. Further details regarding methods and processes for determining a user's age based on eye, iris, or pupil size can be found in U.S. Patent No. 7,630,006 to DeLuca et al.

In another embodiment, people's faces can be detected and classified according to the age of the subject (see, e.g., U.S. Patent No. 5,781,650 to Lobo et al.). A variety of image processing techniques can be combined with anthropometric data about facial features to determine an estimate of the age category of a particular facial image. In a preferred embodiment, anthropometric data within a digital image is used to verify facial features and/or eye regions. The reverse approach can also be taken, and it can involve probabilistic inference, also known as Bayesian statistics.

In addition to determining the user's age, the display can also determine or detect the user's distance from the display, the user's gaze (or, more specifically, where and in what direction the user is looking), the user's posture or amount of head tilt, and lighting levels, including ambient light and the amount of light on the user's face. Details on how to determine the user's distance from the display, the user's gaze, head tilt or direction, and lighting levels are also found in U.S. Patent No. 7,630,006 to DeLuca et al. and U.S. application Ser. No. 13/035,907.

Distance can readily be determined using an IR sensor or an ultrasonic sensor. In other embodiments, an image of the user can be captured with a camera, and the user's distance can be determined by comparing the relative size of the detected face with the size of detected features on the face (such as the eyes, nose, lips, etc.). In another embodiment, the relative spacing of features on the face can be compared with the detected size of the face to determine the user's distance from the sensor. In yet another embodiment, the focal length of the camera can be used to determine the user's distance from the display, or alternatively the focal length can be combined with detected characteristics of the user (such as face size or the relative size of facial features) to determine the user's distance from the display.

In some embodiments, determining the user's gaze can include acquiring and detecting a digital image that includes at least part of a face, including one or both eyes. At least one of the eyes can be analyzed, and the degree to which an eyelid covers the eyeball can be determined. Based on the determined degree of eyelid coverage, an approximate direction of the eye's vertical gaze can be determined. The analysis of at least one of the eyes can further include determining an approximate direction of horizontal gaze. In some embodiments, the technique includes initiating a further action, or initiating a different action (or both), based at least in part on the determined approximate direction of horizontal gaze. The analysis of the one or both eyes can include spectrally analyzing a reflection of light from the one or both eyes. It can include analyzing the amount of sclera visible on at least one side of the iris. In other embodiments, it can include computing a ratio of the amounts of sclera visible on opposite sides of the iris.

In some embodiments, the digital image can be analyzed to determine an angular offset of the face from a normal state, and the approximate direction of the eye's vertical gaze can be determined based partly on the angular offset and partly on the degree of eyelid coverage of the eyeball.

Certain embodiments include extracting one or more relevant features of the face, which are typically highly detectable. Such features can include the eyes and lips, or the nose, eyebrows, eyelids, eye features such as the pupils, irises and/or sclera, hair, forehead, chin, ears, and so on. For example, the combination of the two eyes and the center of the lips forms a triangle that can be detected, not only to determine the orientation of the face (e.g., head tilt) but also the rotation of the face relative to a facial shot. The orientation of the detectable features can be used to determine the angular offset of the face from the normal state. Other highly detectable parts of the image (such as the nostrils, eyebrows, hairline, bridge of the nose, and neck) can be marked as physical extensions of the face.

Ambient light can be determined with an ambient light sensor or a camera. In other embodiments, ambient light can be determined based on the relative size of a user's pupil compared with the user's eye size or other facial features.

Using these settings or parameters detected by the display (including age, eye, pupil and iris size, distance from the display, gaze, head tilt, and/or ambient lighting), any number of user preference settings can be dynamically adjusted or changed to suit the particular user and setting. The determination of which age groups correspond to a "child," a "minor," an "adult," or an "elderly" person can be preprogrammed or selected by an administrator. However, in some embodiments, a child can be a person younger than 15, a minor can be a person from 15 to 17, an adult can be a person from 18 to 65, and an elderly person can be a person older than 65.

In one embodiment, the font size displayed on the display 1800 can be dynamically changed based on a user's detected age. Referring now to FIGS. 19A and 19B, in FIG. 19A the user 1906 is detected as an older user, and the size of the font 1908 can therefore be automatically increased based on the age determination. Similarly, in FIG. 19B, the user is detected as a younger user, and the size of the font 1908 can therefore be automatically decreased based on the age determination.

Similarly, in addition to dynamically changing the font size based on the user's detected age, the display can also automatically change the system icon size based on the user's age determination. Referring to FIGS. 20A and 20B, in FIG. 20A the user 2006 is detected as an older user, and the size of the system icons 2010 can therefore be automatically increased based on the age determination. Similarly, in FIG. 20B, the user is detected as a younger user, and the size of the system icons 2010 can therefore be automatically decreased based on the age determination.

In addition to changing the font or icon size based on a user's detected age, the display can also automatically change the font and/or icon size based on a detected distance between the user and the display. Referring now to FIGS. 21A and 21B, in FIG. 21A, as a distance 2112 between the user 2106 and the display 2100 increases, the size of the font 2108 and/or icons 2110 can be increased on the display to aid visibility. Similarly, in FIG. 21B, as the distance 2112 between the user 2106 and the display 2100 decreases, the size of the font 2108 and/or icons 2110 can be decreased on the display. In one embodiment, an optimal distance of the user from the display can be preprogrammed (e.g., >80 cm from a 24-inch screen), and the display can be configured to automatically increase or decrease the font size by a predetermined percentage for each centimeter or inch the user moves away from or toward the display, respectively. In some embodiments, the display can consider both the user's age and the user's distance from the display in determining the font and/or icon size. In some embodiments, the display can detect whether a person is having difficulty viewing the display, for example from the user's detected age, the user's movement closer to the display, the user's distance from the display, detected leaning, and so on. Once a viewing problem is detected, the system can automatically enlarge the font and/or icon size in response.
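The per-centimeter scaling rule described above can be sketched as follows; only the >80 cm / 24-inch example is taken from the text, while the 1 % per cm step and the clamping range are assumptions made for illustration.

```python
def font_scale(distance_cm, optimal_cm=80.0, percent_per_cm=0.01,
               min_scale=0.75, max_scale=2.0):
    """Scale a base font size by a fixed percentage per centimetre of deviation
    from a preprogrammed optimal viewing distance."""
    scale = 1.0 + percent_per_cm * (distance_cm - optimal_cm)
    return max(min_scale, min(max_scale, scale))

# Example: a user at 120 cm gets fonts scaled by 1.4x, while a user leaning
# in at 60 cm gets 0.8x; both values are clamped to the allowed range.
```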

The amount by which the font and/or icon size changes can be adjusted by the user or by an administrator. For example, an individual user may prefer a larger-than-normal font when seated at a distance, or a smaller font when seated close. The amount of change based on distance and/or age can be fully customized by the user or system administrator.

The embodiments disclosed herein can be applied to a television, a desktop computer monitor, a laptop monitor, a tablet device, other mobile devices such as smart phones, and other electronic devices having displays.

Example embodiments will now be described more fully hereinafter with reference to the accompanying drawings; they may, however, be embodied in different forms, and the subject matter should not be construed as limited to the examples set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the subject matter to those skilled in the art.

FIG. 22 shows an exemplary ergonomic sensor module 2202 used with a display 2204, where the display is used with a computer system 2205. The computer system can comprise a desktop, laptop, server, gaming system, mobile device, or other computing device driving the display 2204. Of course, the display 2204 can be of any suitable display type, including (but not limited to) an LCD, plasma, CRT, or other display, or even a television. The sensor module 2202 is positioned so that it can produce image data of a user 2206 of the display device 2204. In this example, the sensor module 2202 is positioned on or in a structural element 2204A of the display 2204 (such as the display panel or housing). The sensor module 2202 can be positioned at any suitable location relative to the display 2204, and can even be positioned away from the display. The user 2206 can sit, stand, or otherwise be near the display 2204.

As shown in the inset, the sensor module 2202 includes one or more image sensing devices (sensors 2208), a processing element 2210, and an input/output interface 2212. For example, the sensor 2208 can comprise a CMOS sensor or other image sensing technology usable to provide still and/or video image data. The processing element 2210 can comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other hardware logic that can be configured to sample data from the sensor 2208 and provide output via the I/O interface 2212.

The processing element 2210 is configured to obtain image data from the image sensing device and, in this example, to analyze the image data to determine whether it indicates that a user of the display is using the display within an ergonomic range of use, based on accessing predefined data that defines that ergonomic range of use of the display device. In this example, the processing element 2210 further interfaces with memory 2214, which represents any suitable non-transitory computer-readable medium and contains program code for an ergonomic analysis routine 2216 that configures the processing element 2210 to obtain and analyze the data. For example, the memory 2214 can comprise RAM, ROM, cache, or other memory, or a storage device (e.g., a magnetic disk, optical disc, flash memory, etc.). However, as mentioned above, implementations can use a hardware-based approach (e.g., an ASIC, a programmable logic array, or other hardware logic that causes the processing element 2210 to perform the analysis and produce output).

In some implementations, the I/O interface 2212 is connected to the display device 2204, and the processing element 2210 is further configured to output a feedback message 2218 using the display device in response to determining that the image data indicates that a user of the display is not using the display within the ergonomic range of use. For example, the ergonomic analysis routine 2216 can direct the processing element 2210 to display the warning message 2218 using the I/O interface without intervention or processing by the computer 2205.

The computer 2205 includes a processor 2218, memory 2220, and other conventional computer components (e.g., a bus, network interface, display interface, storage media, etc.). In some implementations, an ergonomic analysis routine 2217 is carried out by the computer 2205 in addition to, or instead of, the ergonomic analysis routine 2216. For example, an ergonomic sensor module comprising the sensor 2208, processing element 2210, and I/O interface 2212 can simply provide image data to the ergonomic analysis routine 2217. In some implementations, a webcam or other imaging device serves as the ergonomic sensor module 2202.

FIG. 23 is a diagram showing an example of an ergonomic sensor module 2302 integrated into a structural element of a display. In this example, the components of the sensor module 2302 are enclosed within a panel of the display 2304, with an aperture on the front face of the display for imaging a user. For example, the ergonomic sensor module 2302 can be configured as a built-in webcam that provides image data to an ergonomic analysis routine 2317 hosted at a computer 2305 (not shown) interfaced to the display 2304. However, the sensor module 2302 can include sufficient processing capability to host the ergonomic analysis routine 2316 and provide output directly using the display 2304.

FIG. 24 is a diagram showing an example of an ergonomic sensor module 2402 used externally to a display 2404 interfaced to a computer 2405. In this example, the ergonomic sensor module 2402 comprises a webcam. For example, the webcam can provide image data via a USB or other interface for use by a processor of the computer 2405 in analyzing and determining appropriate feedback. The webcam can be attached to or positioned on the display device 2404, such as on top of the display device, on the front of the display device, and so on. Of course, the webcam can be positioned at the side of the display device or elsewhere. The computer 2405 can be provided with software or firmware (implemented in a non-transitory computer-readable medium) so that the computer 2405 performs some or all of the ergonomic analysis based on the image data from the webcam.

The webcam and integrated form factors and positions shown above are for example purposes only. The imaging device can be positioned at any suitable point to provide an image of a user of the display 2404. In some implementations, the imaging device is positioned to capture light representing an image of the user as seen from the display 2404 (e.g., using a sensor or optics that capture light directed toward the front of the display).

FIG. 25 is a flowchart showing steps of an exemplary method 2500 carried out when an ergonomic sensor module is used. For example, based on the image data, the method 2500 can be carried out by the ergonomic analysis routine 2516 implemented by the module 2502 and/or by the ergonomic analysis routine 2517 implemented by the computer 2505. The sensor module can carry out the method 2500 in the course of executing software or firmware. However, the method 2500 can also be carried out in a hardware-based implementation, such as by hardware logic, for example an application-specific integrated circuit (ASIC), a programmable logic array (PLA), an arrangement of logic gates, or other hardware that obtains input values (e.g., pixel values) and processes them to determine an output (e.g., whether the pixel values indicate ergonomic use). In practice, the method 2500 or another image analysis method can be carried out periodically or continuously to provide real-time feedback to one or more users.

Block 2502 represents obtaining image data from the image sensing device (e.g., one or more image sensors). For example, this block can include accessing image data from the image sensing device and determining that the image data depicts a user of the display device. If no user is present, the remainder of the routine need not be carried out. The presence of a user can be determined by analyzing the field of view, such as by using a motion detection algorithm, a comparison of a background image with the image data, face detection, or in some other way. In some implementations, multiple users can be recognized, for example by using face detection.

Generally speaking, blocks 2504 through 2508 represent analyzing the image data to determine, based on predefined data defining an ergonomic use range of the display device, whether the image data indicates that the user of the display is using the display within the ergonomic use range. If multiple users have been identified, the routine may determine whether each user is making ergonomic use of the display. However, in some embodiments, analyzing the image includes selecting one of the users (e.g., the primary user) and determining whether that user is making ergonomic use. For example, a user may be selected by determining the largest face size seen by the imaging system at a given moment.

Block 2504 represents accessing data defining one or more ergonomic use ranges of the display device. The ergonomic use range(s) may be defined as ranges of various parameters of ergonomic metrics used to characterize the user's posture and the surrounding conditions of use. At block 2506, one or more image analysis algorithms are applied to the image data to determine parameter values for the corresponding ergonomic metrics, and at block 2508 those parameter values are compared with the ergonomic use ranges to determine whether the user is within the ergonomic use range for the one or more ergonomic metrics.
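
The comparison carried out in blocks 2504 to 2508 can be pictured with the sketch below. The metric names and numeric ranges are placeholders chosen for illustration only; the patent leaves the actual ergonomic ranges to the implementer.

```python
# Block 2504: predefined ergonomic use ranges (illustrative values only).
ERGONOMIC_RANGES = {
    "distance_cm":   (50.0, 100.0),   # acceptable eye-to-screen distance
    "yaw_deg":       (-15.0, 15.0),   # acceptable horizontal offset angle
    "pitch_deg":     (-20.0, 10.0),   # acceptable vertical offset angle
    "ambient_light": (0.2, 0.8),      # normalized mean image intensity
}

def check_ergonomics(parameters):
    """Block 2508: compare measured parameter values against each range.

    `parameters` maps metric name -> value as produced by the image
    analysis algorithms of block 2506. Returns the metrics that fall
    outside their ergonomic use range.
    """
    violations = {}
    for metric, value in parameters.items():
        low, high = ERGONOMIC_RANGES[metric]
        if not (low <= value <= high):
            violations[metric] = value
    return violations

# Example: values as they might come out of block 2506.
print(check_ergonomics({"distance_cm": 42.0, "yaw_deg": 5.0,
                        "pitch_deg": -3.0, "ambient_light": 0.5}))
```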

In some implementations, the data is analyzed to determine a parameter value for one or more of the following ergonomic metrics, with the parameter value compared against the ergonomic use ranges listed below. However, these metrics and ranges are provided for exemplary purposes only. Embodiments may use additional ergonomic metrics and/or ergonomic use ranges to suit particular needs.

The image data may be analyzed in any suitable manner to produce parameter values for the ergonomic metrics. For example, in some implementations the analysis includes using a face recognition algorithm to determine where the user's face is in the image. The use of a face recognition algorithm can allow the sensor module to analyze use of the display independently of the shape of the user's face (e.g., without regard to whether the user's face is oval, square, or some other shape). The algorithm looks for skin tone and for the detection of facial features such as the eyes and lip/mouth position to determine the presence of a person, and is therefore independent of the actual shape of the face itself. Based on the location of the user's face, the face portion of the image can be subjected to additional analysis algorithms to determine parameter values for the various ergonomic metrics.

In addition, by using image analysis, the problem of ergonomic use can be addressed without pre-captured posture data for the user and without requiring the user to match some predefined posture or position. Instead, the image data itself is used to determine whether user characteristics detectable in the image and/or surrounding conditions detectable in the image are consistent (or inconsistent) with ergonomic use. The algorithm uses a measurement of the interpupillary distance (the distance between the centers of the two eyes) to detect the distance of a face from the display, and uses the same metric to determine whether the face has a yaw/tilt/rotation angle.

For example, the distance from the monitor can be determined by identifying a feature in the image (e.g., the user's eyes). Based on data indicating the position of the sensor module, parallax or triangulation using the user's eyes, or even the user's entire face, can be used to estimate the user's distance and angle.
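
One simple way to turn the interpupillary-distance measurement into a viewing distance is the pinhole-camera proportionality sketched below. The assumed average interpupillary distance of 63 mm and the focal length in pixels are illustrative assumptions, not values taken from the patent.

```python
AVERAGE_IPD_MM = 63.0        # assumed adult average interpupillary distance
FOCAL_LENGTH_PX = 950.0      # assumed camera focal length, in pixels

def estimate_distance_mm(left_eye_px, right_eye_px):
    """Estimate face-to-camera distance from detected eye centers.

    Uses the pinhole model: distance ~= focal_length * real_size / pixel_size.
    `left_eye_px` and `right_eye_px` are (x, y) pixel coordinates.
    """
    dx = right_eye_px[0] - left_eye_px[0]
    dy = right_eye_px[1] - left_eye_px[1]
    ipd_px = (dx * dx + dy * dy) ** 0.5
    if ipd_px == 0:
        return None
    return FOCAL_LENGTH_PX * AVERAGE_IPD_MM / ipd_px

# A face whose eyes are about 120 px apart is roughly half a meter away
# under these assumed constants.
print(estimate_distance_mm((400, 300), (520, 302)))  # ~498 mm
```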

In one implementation, a feature recognition algorithm locates a user's eyes by analyzing the image to identify the shadows beneath the user's eyes. In particular, the pixel intensity values of the image can be evaluated to identify darker regions that may correspond to shadows; if the darker regions are similarly shaped and separated by an acceptable distance, the feature recognition algorithm can infer that the user's eyes lie above the shadows.
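
A minimal sketch of such a shadow-based eye locator might look like the following. The intensity threshold, the blob-size filter, and the accepted separation range are all illustrative assumptions rather than values specified in the patent.

```python
import cv2
import numpy as np

def locate_eyes_by_shadow(gray_face, dark_thresh=60,
                          min_area=30, max_area=400,
                          min_sep=40, max_sep=120):
    """Infer eye positions from the dark shadow regions beneath the eyes.

    `gray_face` is a grayscale crop of the face. Returns two (x, y)
    centroids if a plausible shadow pair is found, otherwise None.
    """
    dark = (gray_face < dark_thresh).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(dark)

    # Keep dark blobs of similar, plausible size (label 0 is the background).
    candidates = [centroids[i] for i in range(1, n)
                  if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]

    # Look for two blobs separated by an acceptable horizontal distance.
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            sep = abs(candidates[i][0] - candidates[j][0])
            if min_sep <= sep <= max_sep:
                # The eyes are assumed to lie just above these shadows.
                return candidates[i], candidates[j]
    return None
```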

Image analysis that identifies a user's eyes can be used to determine whether the user has been staring for too long without blinking. For example, a blink recognition algorithm can analyze a series of images to determine how long the user's eyes have remained open (i.e., have appeared in the series of images). If the user's eyes have not blinked after a threshold period of time has elapsed, a warning or other feedback can be provided.
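
In a frame-by-frame implementation the blink check reduces to a simple timer, as in the sketch below. The ten-second threshold and the per-frame eye-state input are assumptions made for illustration.

```python
import time

BLINK_TIMEOUT_S = 10.0   # assumed threshold for "staring too long"

class BlinkMonitor:
    def __init__(self):
        self.last_blink = time.monotonic()

    def update(self, eyes_open):
        """Call once per analyzed frame with the detected eye state."""
        now = time.monotonic()
        if not eyes_open:                     # closed eyes: a blink occurred
            self.last_blink = now
            return None
        if now - self.last_blink > BLINK_TIMEOUT_S:
            self.last_blink = now             # avoid repeating the warning every frame
            return "Please blink to rest your eyes."
        return None
```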

In some implementations, the user's eyes, face, and/or other distinguishing features can be used to determine whether the same user has remained close to the display (e.g., in front of it) without a break. For example, a threshold period of time can be defined for ergonomic use of the display. By analyzing the length of time the user has been continuously present, the sensor module can determine whether the user has exceeded the threshold and should take a break. The algorithm can also look for a minimum break duration to ensure that the user stays away from the display for at least a minimum period of time.
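
The continuous-use check is essentially another timer keyed to whether the same user keeps appearing in consecutive frames. The fifty-minute work limit and five-minute minimum break in the sketch below are illustrative values, not ranges specified in the patent.

```python
import time

WORK_LIMIT_S = 50 * 60    # assumed maximum continuous time at the display
MIN_BREAK_S = 5 * 60      # assumed minimum break duration

class BreakMonitor:
    def __init__(self):
        self.session_start = None
        self.absent_since = None

    def update(self, user_present):
        """Call periodically with whether the user is currently detected."""
        now = time.monotonic()
        if user_present:
            if self.session_start is None:
                self.session_start = now
            self.absent_since = None
            if now - self.session_start > WORK_LIMIT_S:
                return "Time to take a break."
        else:
            if self.absent_since is None:
                self.absent_since = now
            # Only reset the session once the user has been away long enough.
            elif now - self.absent_since >= MIN_BREAK_S:
                self.session_start = None
        return None
```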

In some implementations, the image data is analyzed to determine information about the spatial position of the user's face relative to the display (e.g., relative to the plane of the display). For example, one or more of the rotation angle, yaw angle, or pitch angle of the user's face relative to the display can be used to determine whether the user's face is within an ergonomic use range, based on the determined angle or angles. The rotation, pitch, and yaw angles may be defined as angles of rotation indicating the orientation of the plane of the user's face relative to the plane of the display.

Figure 26 illustrates the yaw angle of the user relative to the display. The yaw angle measures how far the user is offset to the right or left of a point on the display. As shown in Figure 26, the yaw angle is the angle, when viewed from the top or bottom of the display, between a line extending from the user to a point on the display and a line extending perpendicular from that point on the display. For example, the line between the user and the display may run from a point on the user near a point between the user's eyes to the midpoint of the display.

Figure 27 illustrates the pitch angle of the user relative to the display. The pitch angle measures how far the user is offset above or below a point on the display. As shown in Figure 27, the pitch angle is the angle, when viewed from the side of the display, between a line extending from the user to a point on the display and a line extending perpendicular from that point on the display. For example, the line between the user and the display may run from a point on the user near the user's eyes to a point at the top of the display.
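
Given an estimate of the viewing distance and of the face's offset from the reference point on the display, the yaw and pitch angles follow from basic trigonometry, as sketched below. The sketch assumes the camera sits at the reference point on the display; that placement is an assumption for illustration.

```python
import math

def yaw_pitch_deg(face_offset_x_mm, face_offset_y_mm, distance_mm):
    """Yaw and pitch of the user relative to a reference point on the display.

    face_offset_x_mm : horizontal offset of the face from the reference point
    face_offset_y_mm : vertical offset of the face from the reference point
    distance_mm      : viewing distance along the display normal
    Angles are measured from the perpendicular to the display plane.
    """
    yaw = math.degrees(math.atan2(face_offset_x_mm, distance_mm))
    pitch = math.degrees(math.atan2(face_offset_y_mm, distance_mm))
    return yaw, pitch

# A user 600 mm away, 150 mm to the right of and 100 mm above the point:
print(yaw_pitch_deg(150.0, 100.0, 600.0))   # roughly (14.0, 9.5) degrees
```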

Glare and ambient light can be identified using an algorithm that searches the image for intensity patterns corresponding to glare and/or ambient light that is too bright or too dim. For example, the average intensity of the image can be computed and scaled to determine a parameter value for the ambient light condition. Glare from the monitor can be identified by searching for image regions in which the intensity rises sharply; for example, facial regions of the user such as the cheeks/forehead can be analyzed to determine whether the user's face is reflecting a large amount of light. By continuously analyzing intensity values across the entire image, the processing element performing the ergonomic analysis routine can determine ergonomic use independently of changes in ambient lighting conditions. The measured intensity across the image is thresholded to determine a low-light condition. The algorithm selects image regions near the user's face and above it in order to remove the effect of the user's dark clothing (which lowers the average intensity value).
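
The ambient-light and low-light checks can be approximated by simple intensity statistics over the region at and above the face, as in the sketch below. The normalization and the two thresholds are illustrative assumptions.

```python
import numpy as np

LOW_LIGHT_THRESH = 0.15    # assumed normalized mean-intensity floor
BRIGHT_THRESH = 0.85       # assumed normalized mean-intensity ceiling

def ambient_light_parameter(gray, face_box):
    """Mean intensity of the region at and above the face, scaled to [0, 1].

    Restricting the measurement to the face region and the area above it
    avoids the averaging-down effect of dark clothing below the face.
    """
    x, y, w, h = face_box
    region = gray[0:y + h, x:x + w]          # from top of frame to bottom of face
    return float(np.mean(region)) / 255.0

def classify_ambient_light(param):
    if param < LOW_LIGHT_THRESH:
        return "too dim"
    if param > BRIGHT_THRESH:
        return "too bright"
    return "ok"
```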

Glare directed toward the monitor can be identified by analyzing the image for strong backlighting. Assuming the image sensor faces the user, if the user is backlit (that is, the face region has lower pixel intensity than the region surrounding the user's face), there may be glare directed toward the monitor. This intensity difference can be used to determine a parameter value to compare against the ergonomic use range for glare.
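
The backlighting test can be sketched as a comparison of mean intensity inside the face box against the surrounding area; the ratio threshold below is an assumed value for illustration.

```python
import numpy as np

BACKLIGHT_RATIO = 0.6   # assumed: face this much darker than surroundings suggests backlighting

def backlit_parameter(gray, face_box):
    """Return the face-to-surroundings intensity ratio (low values suggest backlighting)."""
    x, y, w, h = face_box
    face = gray[y:y + h, x:x + w]
    mask = np.ones(gray.shape, dtype=bool)
    mask[y:y + h, x:x + w] = False           # everything except the face
    surround_mean = float(gray[mask].mean())
    if surround_mean == 0:
        return 1.0
    return float(face.mean()) / surround_mean

def glare_toward_monitor(gray, face_box):
    return backlit_parameter(gray, face_box) < BACKLIGHT_RATIO
```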

As mentioned above, at block 2708 the ergonomic analysis routine determines whether the user is within one or more ergonomic use ranges. For example, by comparing the parameter values computed from the image with the accessed data defining the use ranges, the ergonomic analysis routine can determine whether a user is inside, outside, or near a limit of ergonomic use of the display.
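
Distinguishing "inside", "near a limit", and "outside" can be done with a small margin around each range, as in the following sketch; the ten-percent margin is an assumption chosen for illustration.

```python
def classify_against_range(value, low, high, margin_frac=0.10):
    """Classify a parameter value against an ergonomic use range.

    Returns "outside", "near limit", or "inside"; the near-limit band is a
    fraction of the range width on either side of each limit.
    """
    if not (low <= value <= high):
        return "outside"
    margin = (high - low) * margin_frac
    if value - low < margin or high - value < margin:
        return "near limit"
    return "inside"

# Example: a 50-100 cm distance range, user sitting at 52 cm.
print(classify_against_range(52.0, 50.0, 100.0))   # "near limit"
```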

The ergonomic analysis routine operates with displays having multiple orientations. For example, some displays allow a user to rotate the display by about 90° so that in one orientation the display is wider than it is tall, commonly referred to as landscape orientation, and in a second orientation the display is taller than it is wide, commonly referred to as portrait orientation. The ergonomic analysis routine determines the orientation of the display and, if necessary, adjusts based on the orientation. In one implementation, the ergonomic analysis routine monitors a control signal and determines the orientation of the display from the state or level of the control signal.

Block 2710 represents providing output data for a feedback message. The format, content, and triggering criteria of a feedback message can vary, and in some implementations the message is provided in real time with the image analysis. As one example, if the analysis shows that the user is outside an ergonomic use range, a feedback message can be provided indicating which metric or metrics have been "violated" (e.g., distance, angle, insufficient blinking, ambient light, etc.). This allows the user to take corrective action.

Feedback can also be provided to give an indication when a user is near an edge of an ergonomic use range. For example, if the user is almost too close to or too far from the display (e.g., within 3 to 4 cm of the limit), a warning can be provided to allow corrective action to be taken. Further still, feedback can also be provided while the user is inside an ergonomic use range, for example to reinforce good use.

The format of the feedback message can vary, as mentioned above. In one embodiment, a visual message is provided by sending data to the display 2704. For example, a pop-up window or overlay can be generated with text or graphics. Other examples include sound or other feedback.

Depending on the particular implementation, the data for the feedback message may be provided by the sensor module 2702 itself or by the computer 2705. For example, in one implementation the module 2702 is integrated into the display and can provide the message directly to the display while partially or completely obscuring other data provided by the computer 2705 (e.g., the message may be provided in an overlay rendered over the displayed data, if any, from the computer 2705). However, in some implementations, the ergonomic analysis routine 2716 executed by the module 2702 provides data indicating that an output message is to be generated, and the computer 2705 uses a complementary ergonomic analysis routine 2717 hosted by the computer 2705 to render a window or otherwise provide the message. Further still, the module 2702 may provide only image data, with that image data analyzed by an analysis routine 2717 hosted by the computer 2705, which also renders the window or otherwise provides the message.

Several of the examples above that use an ergonomic sensor module 2702 utilize a single sensor. It will be appreciated that multiple sensors can be used within one module 2702, and that multiple modules 2702 can be used simultaneously for a single display or for multiple displays.

Any suitable non-transitory computer-readable medium may be used to implement or practice the presently disclosed subject matter, including but not limited to diskettes, disk drives, magnetic-based storage media, optical storage media (e.g., CD-ROM, DVD-ROM, and variations thereof), flash, RAM, ROM, register storage, cache memory, and other memory devices. For example, implementations include, but are not limited to, non-transitory computer-readable media embodying instructions that cause a processor to perform the methods recited herein and/or the operations performed during operation of implementations including, but not limited to, the examples discussed herein.

The present subject matter can be implemented by any computing device that carries out a series of operations based on commands. Such hardware circuits or elements include general-purpose and special-purpose processors that access instructions stored in a computer-readable medium which cause the processor to perform the operations discussed herein, as well as hardware logic configured to perform the operations discussed herein (e.g., a field-programmable gate array (FPGA), a programmable logic array (PLA), or an application-specific integrated circuit (ASIC)).

Although exemplary drawings and specific embodiments of the present invention have been described and illustrated, it should be understood that the scope of the present invention is not limited to the particular embodiments discussed. Accordingly, the embodiments are to be regarded as illustrative rather than restrictive, and it should be understood that variations may be made to those embodiments by persons skilled in the art without departing from the scope of the present invention as set forth in the following claims and their structural and functional equivalents.

In addition, in methods that may be performed in accordance with the preferred and alternative embodiments and claims herein, the operations have been described in a selected typographical sequence. However, the sequences were selected, and so ordered, for typographical convenience and are not intended to imply any particular order for performing the operations, unless a particular ordering is expressly indicated as required or is deemed necessary by those skilled in the art.

Claims (17)

1. A handheld camera-enabled video conferencing device, comprising: a housing configured to be held in the hand of a user; a processor within the housing; a memory within the housing having embedded therein code for programming the processor, including a video conferencing component, a face detection component, a face recognition component, and an associated image processing component, wherein the memory further contains face data associated with one or more specific user identities; a display built into the housing and configured to be viewable by a user during a video conference; and a camera built into the housing and configured to capture images of the user while viewing the display, the camera including an infrared (IR) light source and an IR-sensitive image sensor for capturing images of the user under low-light or uneven-light conditions, or both, so that the face detection component can detect the user's face and treat the face as the foreground of the image of the user, with the remainder as background; wherein the face recognition component is configured to associate a specific identity of a user with a detected face treated as foreground; and wherein the image processing component replaces face data of the detected face with face data stored in the memory according to the specific identity of the user, in order to enhance an image of the detected face captured under low-light or uneven-light conditions, or both, and transmits the foreground to a remote video conference participant.

2. The device of claim 1, wherein the face data comprises chrominance data.

3. The device of claim 1, wherein the face data comprises illuminance data.

4. The device of claim 1, wherein the face detection component or the face recognition component, or both, comprise classifiers trained to detect faces or recognize faces, or both, under low-light or uneven-light conditions, or both.

5. The device of claim 1, wherein the IR light source comprises one or more IR LEDs coupled to the housing and arranged to illuminate the user's face during a video conference.

6. The device of claim 1, wherein the memory further comprises a face tracking component to track the detected face so that the device can transmit an approximately continuous video image of the user's face during the video conference.

7. The device of claim 1, wherein the memory further comprises a component to estimate a distance to the user's face and to control an output power of the IR light source based on the estimated distance.

8. The device of claim 7, wherein the estimate of the distance is determined using autofocus data.
9. The device of claim 8, wherein the estimate of the distance is determined based on a detected size of the user's face.

10. The device of claim 8, wherein the memory further comprises a component to determine a position of a user's face relative to the device and to control a direction of the IR light source illuminating the user's face.

11. A handheld camera-enabled video conferencing device, comprising: a housing configured to be held in the hand of a user; a processor within the housing; a memory within the housing having embedded therein code for programming the processor, including a video conferencing component and a foreground/background segmentation component, or a combination thereof; a display built into the housing and configured to be viewable by a user during a video conference; a camera built into the housing and configured to capture images of the user while viewing the display; and a communication interface for transmitting audio/visual data to a remote video conference participant; wherein the foreground/background segmentation component is configured to extract user identity data that does not include background data by distinguishing the different motion vectors of the foreground versus the background data, the communication interface transmitting visual data that does not include background data to the remote video conference participant; and wherein the foreground/background segmentation component is calibrated to match specific user identity data as foreground data.

12. The device of claim 11, wherein the user identity data comprises face data.
13. A handheld camera-enabled video conferencing device, comprising: a housing configured to be held in the hand of a user; a processor within the housing; a memory within the housing having embedded therein code for programming the processor, including a video conferencing component and a foreground/background segmentation component, or a combination thereof; a display built into the housing and configured to be viewable by a user during a video conference; a camera built into the housing and configured to capture images of the user while viewing the display, the camera including an infrared (IR) light source and an IR-sensitive image sensor for capturing images of the user under low-light or uneven-light conditions, or both, so that a face detection component can detect a user's face; and a communication interface for transmitting audio/visual data to a remote video conference participant; wherein the foreground/background segmentation component is configured to extract user identity data without background data by matching the detected face data as foreground data, the communication interface transmitting visual data that does not include background data to the remote video conference participant.

14. The device of claim 13, wherein the memory further contains a face tracking component to track the detected face so that the device can transmit an approximately continuous video image of the user's face during the video conference.

15. The device of claim 13, wherein the specific user identity data comprises an image of a detected face.

16. The device of claim 15, wherein the specific user identity data further comprises a neck, part of a torso or shirt, or one or both arms, or a combination thereof.

17. The device of claim 13, wherein the memory further contains face data associated with one or more specific user identities, and the device is configured to extract the specific user identity data based on matching the face data in the memory.
TW101112362A 2011-04-08 2012-04-06 Display device with image capture and analysis module TWI545947B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US13/082,568 US8913005B2 (en) 2011-04-08 2011-04-08 Methods and systems for ergonomic feedback using an image analysis module
US13/220,612 US20130050395A1 (en) 2011-08-29 2011-08-29 Rich Mobile Video Conferencing Solution for No Light, Low Light and Uneven Light Conditions
US201161530867P 2011-09-02 2011-09-02
US201161530872P 2011-09-02 2011-09-02
US13/294,977 US20130057553A1 (en) 2011-09-02 2011-11-11 Smart Display with Dynamic Font Management
US13/294,964 US20130057573A1 (en) 2011-09-02 2011-11-11 Smart Display with Dynamic Face-Based User Preference Settings

Publications (2)

Publication Number Publication Date
TW201306573A TW201306573A (en) 2013-02-01
TWI545947B true TWI545947B (en) 2016-08-11

Family

ID=47972418

Family Applications (1)

Application Number Title Priority Date Filing Date
TW101112362A TWI545947B (en) 2011-04-08 2012-04-06 Display device with image capture and analysis module

Country Status (2)

Country Link
CN (1) CN103024338B (en)
TW (1) TWI545947B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI706377B (en) * 2016-11-14 2020-10-01 瑞典商安訊士有限公司 Action recognition in a video sequence
TWI735816B (en) * 2018-11-05 2021-08-11 香港商冠捷投資有限公司 Display device and method for automatically turning off the display device
TWI814270B (en) * 2022-03-08 2023-09-01 巧連科技股份有限公司 Position-sensing-with-audio conference video apparatus and method for the same

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103190883B (en) * 2012-12-20 2015-06-24 苏州触达信息技术有限公司 Head-mounted display device and image adjusting method
CN104123926B (en) * 2013-04-25 2016-08-31 乐金显示有限公司 Gamma compensated method and use the display device of this gamma compensated method
US9282285B2 (en) * 2013-06-10 2016-03-08 Citrix Systems, Inc. Providing user video having a virtual curtain to an online conference
TWI549476B (en) * 2013-12-20 2016-09-11 友達光電股份有限公司 Display system and method for adjusting visible range
CN103795931B (en) * 2014-02-20 2017-12-29 联想(北京)有限公司 A kind of information processing method and electronic equipment
US20160011657A1 (en) * 2014-07-14 2016-01-14 Futurewei Technologies, Inc. System and Method for Display Enhancement
GB201419438D0 (en) * 2014-10-31 2014-12-17 Microsoft Corp Modifying video call data
CN105635634A (en) * 2014-11-07 2016-06-01 中兴通讯股份有限公司 Method and device of realizing video image processing
CN107077212B (en) * 2015-01-30 2020-03-24 惠普发展公司,有限责任合伙企业 Electronic display illumination
CN104965739A (en) * 2015-06-30 2015-10-07 盛玉伟 Method for setting display parameters of virtual reality display and virtual reality display
CN107735136B (en) * 2015-06-30 2021-11-02 瑞思迈私人有限公司 Mask sizing tool using mobile applications
TWI570638B (en) * 2015-07-29 2017-02-11 財團法人資訊工業策進會 Gaze analysis method and apparatus
CN105120165A (en) * 2015-08-31 2015-12-02 联想(北京)有限公司 Image acquisition control method and device
CN108780266B (en) * 2016-03-17 2021-01-15 松下知识产权经营株式会社 Contrast device
CN105721888B (en) * 2016-03-31 2020-03-24 徐文波 Image quality processing method and device in real-time video application
CN106454481B (en) * 2016-09-30 2019-08-23 广州华多网络科技有限公司 A kind of method and device of live broadcast of mobile terminal interaction
WO2018072178A1 (en) * 2016-10-20 2018-04-26 深圳达闼科技控股有限公司 Iris recognition-based image preview method and device
KR102627244B1 (en) * 2016-11-30 2024-01-22 삼성전자주식회사 Electronic device and method for displaying image for iris recognition in electronic device
WO2018163356A1 (en) * 2017-03-09 2018-09-13 株式会社 資生堂 Information processing device and program
CN107808127B (en) * 2017-10-11 2020-01-14 Oppo广东移动通信有限公司 Face recognition method and related product
CN110096936B (en) * 2018-01-31 2023-03-03 伽蓝(集团)股份有限公司 Method for evaluating apparent age of eyes and aging degree of eyes and application thereof
CN108805818B (en) * 2018-02-28 2020-07-10 上海兴容信息技术有限公司 Content big data density degree analysis method
TWI684955B (en) * 2018-05-25 2020-02-11 瑞昱半導體股份有限公司 Method and electronic apparatus for extracting foreground image
CN108965694B (en) * 2018-06-26 2020-11-03 影石创新科技股份有限公司 Method for acquiring gyroscope information for camera level correction and portable terminal
CN109446912B (en) 2018-09-28 2021-04-09 北京市商汤科技开发有限公司 Face image processing method and device, electronic equipment and storage medium
TWI739041B (en) 2018-10-31 2021-09-11 華碩電腦股份有限公司 Electronic device and control method thereof
CN110398988A (en) * 2019-06-28 2019-11-01 联想(北京)有限公司 A kind of control method and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7817914B2 (en) * 2007-05-30 2010-10-19 Eastman Kodak Company Camera configurable for autonomous operation
JP2010211485A (en) * 2009-03-10 2010-09-24 Nippon Telegr & Teleph Corp <Ntt> Gaze degree measurement device, gaze degree measurement method, gaze degree measurement program and recording medium with the same program recorded
US20100302393A1 (en) * 2009-05-26 2010-12-02 Sony Ericsson Mobile Communications Ab Self-portrait assistance in image capturing devices

Also Published As

Publication number Publication date
TW201306573A (en) 2013-02-01
CN103024338A (en) 2013-04-03
CN103024338B (en) 2016-03-09

Similar Documents

Publication Publication Date Title
TWI545947B (en) Display device with image capture and analysis module
EP2515526A2 (en) Display device with image capture and analysis module
US11196917B2 (en) Electronic system with eye protection
US20130057553A1 (en) Smart Display with Dynamic Font Management
US20130057573A1 (en) Smart Display with Dynamic Face-Based User Preference Settings
CN108700933B (en) Wearable device capable of eye tracking
CN110121885A (en) For having recessed video link using the wireless HMD video flowing transmission of VR, the low latency of watching tracking attentively
CN107272904B (en) Image display method and electronic equipment
US20120133754A1 (en) Gaze tracking system and method for controlling internet protocol tv at a distance
EP3230825B1 (en) Device for and method of corneal imaging
US20180115717A1 (en) Display method, system and computer-readable recording medium thereof
US20210349536A1 (en) Biofeedback method of modulating digital content to invoke greater pupil radius response
CN109155053B (en) Information processing apparatus, information processing method, and recording medium
TW202009786A (en) Electronic apparatus operated by head movement and operation method thereof
CN110352033A (en) Eyes degree of opening is determined with eye tracks device
CN112969436A (en) Hands-free control of autonomous augmentation in electronic vision-assistance devices
JP2017220158A (en) Virtual makeup apparatus, virtual makeup method, and virtual makeup program
WO2018219290A1 (en) Information terminal
KR101492832B1 (en) Method for controlling display screen and display apparatus thereof
KR101720607B1 (en) Image photographing apparuatus and operating method thereof
US20230418372A1 (en) Gaze behavior detection
CN114610141A (en) Display apparatus for adjusting display mode using gaze direction and method of operating the same
WO2023073240A1 (en) Method and system for determining eye test screen distance
GB2612366A (en) Method and system for eye testing
GB2612364A (en) Method and system for determining user-screen distance

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees