TWI240891B - Method of controlling the computer mouse by tracking user's head rotation and eyes movement - Google Patents

Method of controlling the computer mouse by tracking user's head rotation and eyes movement

Info

Publication number
TWI240891B
TWI240891B TW92122827A TW92122827A TWI240891B TW I240891 B TWI240891 B TW I240891B TW 92122827 A TW92122827 A TW 92122827A TW 92122827 A TW92122827 A TW 92122827A TW I240891 B TWI240891 B TW I240891B
Authority
TW
Taiwan
Prior art keywords
block diagram
center
control
movement
computer mouse
Prior art date
Application number
TW92122827A
Other languages
Chinese (zh)
Other versions
TW200508976A (en)
Inventor
Jyh-Horng Chen
Cheng-Yao Chen
Original Assignee
Jyh-Horng Chen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jyh-Horng Chen filed Critical Jyh-Horng Chen
Priority to TW92122827A priority Critical patent/TWI240891B/en
Publication of TW200508976A publication Critical patent/TW200508976A/en
Application granted granted Critical
Publication of TWI240891B publication Critical patent/TWI240891B/en

Links

Abstract

A method that controls the computer mouse by tracking the user's head rotation and eye movement, using face images captured by an ordinary video-conference camera. It applies real-time digital image processing to locate the eyes and lips. By tracking these three feature points, the method determines the angles of head rotation and eye movement. Combined with a function menu providing click, double-click, drag, and other operations, the method can fully replace the conventional computer mouse. It can genuinely benefit people who are unable to use a conventional mouse, allowing them to express themselves through computers and enjoy multimedia entertainment in the digital era.

Description

1240891 — Description of the Invention

[Technical Field]
The present invention relates to a method of controlling the computer mouse cursor by tracking the user's head rotation and eye movement in real time. The tracked images are captured by an ordinary home video-conference camera and transferred to the computer over a USB interface for processing in software.

[Prior Art]
Several methods of controlling the computer mouse have been developed. Early approaches include hand-joint motion control, voice control, electromyographic control, shoulder control, and mouth-blowing devices. These were gradually followed by methods that track head rotation and eye movement: the earliest used infrared light, detecting the various movements from differences in infrared reflection. More recently, digital image processing has been applied to track these movements and control the mouse, but prior approaches require the user to wear special markers or use dedicated camera equipment, which is both expensive and inconvenient. The present method is implemented entirely in software and works with an ordinary home video-conference camera: it is inexpensive, and the user can reach the goal of controlling the computer without wearing any special device or marker.

[Summary of the Invention]
Patients whose spinal cord has been injured by vascular lesions or trauma suffer complete or incomplete paralysis of the limbs or trunk; the loss of motor function deprives them of mobility and independence. In today's era of electronics and information science, proper use of a computer would let them freely express their will again and enjoy the multimedia entertainment and leisure of the digital age. This improves quality of life, lightens the burden on family members, reduces the consumption of medical resources, and helps patients regain the ability to work. Developing a mouse-cursor control method suitable for disabled users is therefore an urgent issue.

For ordinary, able-bodied users as well, the restrictions imposed by the conventional mouse grow by the day. If routine computer tasks could be delegated to vision and other channels while the hands are used for text entry or higher-level operations, both work efficiency and software usability would increase. A next-generation mouse-cursor control method is thus also a key breakthrough of the information age.

The method of the present invention processes, in software, facial images captured by an ordinary video-conference camera, and the user can operate the computer without wearing any device. Its features and advantages include:
(A) The only additional hardware is an ordinary home video-conference camera; the cost is low.
(B) The user need not wear any special instrument or marker, reducing the physical burden on the user.
(C) High tolerance of background complexity in the usage environment, and no interference from bystanders.
(D) The cursor control scheme is close to human visual perception — eye movement makes small fine adjustments, head rotation makes larger ones — so it is easy to learn.
(E) The software consumes few resources and is highly compatible, making it easy to integrate into other digital information products such as PDAs, mobile phones, and environmental control systems.

[Embodiments]
The architecture of the method is shown in Figure 1. Block diagram A (1) represents the input of an ordinary home video-conference camera image; the method is compatible with the various commercially available digital cameras that use a USB interface. Block diagram B (2) is the face-recognition core of the method: real-time digital image processing locates the eyes and mouth in the captured facial image and tracks them. Block diagram C (3) is the mouse-control core: it converts the displacement of the centroid of the triangle formed by the eye and mouth positions found in block B (2), relative to the centroid of the face region, into movement of the mouse coordinates. Block diagrams B (2) and C (3) are the software core of the invention and are detailed below.

The software flow chart is shown in Figure 2. Block diagram 1 (4) is the input of the user's facial image; the method requires a sampling rate of roughly five to thirty pictures per second, input in uncompressed RGB format. The RGB representation itself carries intensity information: the human eye automatically filters out the interference of intensity with perceived color, but a computer cannot. To reduce this interference, we first normalize the input image, dividing the R, G, and B values of each pixel by that pixel's intensity so that the new R, G, and B values lie between zero and one, and then convert to the HSI representation according to the following formulas:

H = θ           if B ≤ G
H = 360° − θ    if B > G

where

θ = cos⁻¹{ 0.5[(R − G) + (R − B)] / √[(R − G)² + (R − B)(G − B)] }

S = [max(R, G, B) − min(R, G, B)] / max(R, G, B)

I = (R + G + B) / 3

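The RGB-to-HSI conversion above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the numerical guard `eps` are my own additions, and the saturation formula follows the (max − min)/max form recovered from the text.

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB values (each in [0, 1]) to HSI.

    Returns (h, s, i): hue h in degrees, saturation s in [0, 1],
    and i as the mean intensity.
    """
    eps = 1e-10  # guard against division by zero on gray pixels
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta   # hue branch on B vs. G
    mx, mn = max(r, g, b), min(r, g, b)
    s = (mx - mn) / (mx + eps)               # saturation per the formula above
    i = (r + g, b)[0] / 3.0 if False else (r + g + b) / 3.0  # mean intensity
    return h, s, i
```

For example, a pure red pixel maps to a hue near 0° with full saturation, and pure green to a hue near 120°.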
After the HSI conversion, the influence of image intensity on color is removed, so the strength of the ambient light will not interfere with the method; further adjusting the white balance of the picture reduces color shifts caused by light sources of different colors. Human skin has characteristic HSI values of H between 330 and 345 degrees and S/I between 0.2 and 1. Block diagram 3 (6) uses these values to decide which pixels may be skin and then applies a flood-fill grouping algorithm: each candidate skin pixel whose neighbors above, below, left, and right are also candidate skin pixels is assigned to the same region, after which the number of possible face regions in the picture can be determined. Here we assume that the user's face is the object closest to the screen, so the user's face will be the largest skin region; we therefore define the largest skin region found as the user's face region.

Block diagram 4 (7) indicates that if no candidate skin pixel can be found in the entire picture, the method skips to the next picture and restarts from block diagram 1 (4).

Block diagram 5 (8) uses the characteristic lip color — H greater than 345 degrees or less than 5 degrees, and S/I between 0.2 and 1 — to decide which pixels within the face region may be lips. Using the same region-grouping (flood-fill) procedure as in block diagram 3 (6), we find every region within the face that could be the lips. Since the shape of the lip region is close to an ellipse, an elliptical verification is applied: an elliptical template of fixed proportions is matched against each candidate lip region and the correlation is computed. The candidate with the largest correlation is taken as the lip region, and the centroid of that region is then computed as the center of the lips.
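The elliptical-template matching can be sketched as below. The patent does not give its exact correlation formula, so this sketch uses an assumed normalized-overlap score between a candidate region and an ellipse centered on the region's centroid; all names and the normalization choice are mine.

```python
import math

def ellipse_mask(cx, cy, a, b, rows, cols):
    """Binary elliptical template with semi-axes a (horizontal) and
    b (vertical), centered at (cx, cy), within a rows x cols grid."""
    return {(r, c) for r in range(rows) for c in range(cols)
            if ((c - cx) / a) ** 2 + ((r - cy) / b) ** 2 <= 1.0}

def lip_score(region, rows, cols, a, b):
    """Correlation-like overlap between a candidate pixel set and an
    elliptical template centered on the candidate's centroid.
    Higher means a better elliptical fit; the candidate with the
    maximum score is taken as the lip region."""
    if not region:
        return 0.0
    cy = sum(r for r, _ in region) / len(region)
    cx = sum(c for _, c in region) / len(region)
    tmpl = ellipse_mask(cx, cy, a, b, rows, cols)
    if not tmpl:
        return 0.0
    inter = len(region & tmpl)
    # normalized so that a region identical to the template scores 1.0
    return inter / math.sqrt(len(region) * len(tmpl))
```

A region that is itself elliptical scores 1.0, while a thin line of pixels scores markedly lower, which is the discrimination the elliptical verification relies on.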
1240891 is left-right symmetrical for the lips, and the distance from the lips to the face is about 0.3 to 0.45 face length ', and the elliptical verification in the block diagram 5 (8) is used to find the areas of the left and right eyes, and then Using the eyeball as the obvious black part of the eye area, find the eyeball and calculate its center of mass as the center of the eyeball. After finding the center of the two eyes and the center of the lips, the block diagram 7 (10) is based on the area of this eye as the most dense edge in the upper half of the face, so we are in the block diagram 6 (9 ) Uses a second order edge detector (edge detector), which applies the Nine-square grid on the right to each point in the upper half of the face, and then it is regarded as a boundary point if it is greater than a fixed threshold. Similarly, using the graphic connection method in block diagram 3 (6), we can find the possible boundary concentration areas, and then consider the center of gravity of the triangle formed by the two points of the eye area, and control the slip relative to the displacement of the center of mass of the face area. The direction of the π milk movement is the direction of the centroid displacement of the center of gravity of the triangle to the face area, and the speed of the mouse movement is determined by the relative displacement. Considering the vertical movement range of the head and the vertical direction of the eyes The horizontal direction is small, so the speed ratio in the vertical direction is twice the horizontal direction. Because the movement of the two eyes can make the center of gravity of the triangle move slightly and the head ’s overall movement range is larger, it will cause the center of gravity of the triangle to move significantly. Therefore, we can use the eyes to make small fine adjustments and the head to move significantly. Goal. From block 1 (4) to block 7 (10) is the complete step to complete an image. 
For subsequent pictures, to reduce the computational load, we apply motion estimation: given that the sampling rate is between 5 and 30 pictures per second, the search range for the face region is narrowed to within 50 pixels above, below, left, and right of the previous face region, and the search ranges for the eyes and lips are narrowed to within 2 pixels in each direction. This not only suppresses unnatural motion errors, but also lowers the system load and reduces the influence of background changes on the face recognition.

Hardware implementing the method disclosed by the present invention also falls within the scope of the patent claims.

[Brief Description of the Drawings]
Figure 1 is the architecture diagram of the method of the present invention.
Figure 2 is the software flow chart of the present invention.

[Description of Component Symbols]

(1) Block diagram A

(2) Block diagram B

(3) Block diagram C
(4) Block diagram 1
(5) Block diagram 2
(6) Block diagram 3
(7) Block diagram 4
(8) Block diagram 5
(9) Block diagram 6
(10) Block diagram 7
(11) Block diagram 8

[Claims]
1. A method of controlling the computer mouse cursor using head rotation and eye movement, comprising:
(a) taking as input only the images captured by an ordinary home network video camera;
(b) identifying in real time the user's face region, the positions of the two eyes, and the position of the lips;
(c) controlling the mouse in a way that imitates human visual perception, i.e., eye movement controls small movements and head movement controls large movements.
2. The method of claim 1, wherein the direction of mouse-cursor movement is determined by the relative displacement of the centroid of the triangle formed by the two eyes and the lips with respect to the centroid of the face region.
3. The method of claim 2, wherein the speed of mouse-cursor movement is controlled by the magnitude of the relative displacement of the triangle's centroid with respect to the centroid of the face region.

Claims (1)

TW92122827A 2003-08-20 2003-08-20 Method of controlling the computer mouse by tracking user's head rotation and eyes movement TWI240891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW92122827A TWI240891B (en) 2003-08-20 2003-08-20 Method of controlling the computer mouse by tracking user's head rotation and eyes movement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW92122827A TWI240891B (en) 2003-08-20 2003-08-20 Method of controlling the computer mouse by tracking user's head rotation and eyes movement

Publications (2)

Publication Number Publication Date
TW200508976A TW200508976A (en) 2005-03-01
TWI240891B true TWI240891B (en) 2005-10-01

Family

ID=37012978

Family Applications (1)

Application Number Title Priority Date Filing Date
TW92122827A TWI240891B (en) 2003-08-20 2003-08-20 Method of controlling the computer mouse by tracking user's head rotation and eyes movement

Country Status (1)

Country Link
TW (1) TWI240891B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI780366B (en) * 2019-05-13 2022-10-11 日商微網股份有限公司 Facial recognition system, facial recognition method and facial recognition program

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201028895A (en) * 2009-01-23 2010-08-01 Rui-Keng Chou Electro-oculogram control system
TWI480764B (en) * 2011-03-10 2015-04-11 Nat Univ Chung Hsing Device and method for controlling mouse cursor by head
TWI437467B (en) 2011-10-11 2014-05-11 Ind Tech Res Inst Display control apparatus and display control method
TWI469066B (en) * 2012-07-30 2015-01-11 Hon Hai Prec Ind Co Ltd System and method for displaying product catalog
TWI492098B (en) * 2013-03-04 2015-07-11 Head control system and method


Also Published As

Publication number Publication date
TW200508976A (en) 2005-03-01

Similar Documents

Publication Publication Date Title
US11575856B2 (en) Virtual 3D communications using models and texture maps of participants
US20140254939A1 (en) Apparatus and method for outputting information on facial expression
WO2023119557A1 (en) Avatar display device, avatar generation device, and program
TWI255141B (en) Method and system for real-time interactive video
US9424678B1 (en) Method for teleconferencing using 3-D avatar
US11778002B2 (en) Three dimensional modeling and rendering of head hair
WO2017058733A1 (en) Head-mounted display with facial expression detecting capability
WO2021004257A1 (en) Line-of-sight detection method and apparatus, video processing method and apparatus, and device and storage medium
CN114219878B (en) Animation generation method and device for virtual character, storage medium and terminal
US11765332B2 (en) Virtual 3D communications with participant viewpoint adjustment
CN115917474A (en) Rendering avatars in three-dimensional environments
WO2021147465A1 (en) Image rendering method, electronic device, and system
JP2534617B2 (en) Real-time recognition and synthesis method of human image
CN112348937A (en) Face image processing method and electronic equipment
TWI240891B (en) Method of controlling the computer mouse by tracking user's head rotation and eyes movement
US20230281901A1 (en) Moving a direction of gaze of an avatar
JP2008186075A (en) Interactive image display device
JP6461394B1 (en) Image generating apparatus and image generating program
JP2000331190A (en) Virtual transformation device
KR20200134623A (en) Apparatus and Method for providing facial motion retargeting of 3 dimensional virtual character
WO2022224732A1 (en) Information processing device and information processing method
US20230070853A1 (en) Creating a non-riggable model of a face of a person
Chen et al. Automatic real-time generation of talking cartoon faces from image sequences in complicated backgrounds and applications

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees