CN110837294A - Facial expression control method and system based on eyeball tracking - Google Patents
Facial expression control method and system based on eyeball tracking Download PDFInfo
- Publication number
- CN110837294A (application number CN201910970941.0A)
- Authority
- CN
- China
- Prior art keywords
- user
- eyeball
- state information
- game
- calling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/013 — Eye tracking input arrangements
- G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations
- G06V40/18 — Eye characteristics, e.g. of the iris
- A63F13/21 — Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/40 — Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/822 — Strategy games; Role-playing games
- A63F2300/807 — Role playing or strategy games
Abstract
The technical scheme of the invention comprises a facial expression control method and system based on eyeball tracking, realizing the following steps: a user runs a game program on a terminal device; the game program calls an image acquisition device of the terminal device to collect eyeball state information of the user, the eyeball state information including the angle between the eyeball and the device, gaze direction, blinking actions, pupil-size changes, and iris dilation or contraction state; a machine learning algorithm is called to calculate the user's current emotion attribute from the collected eyeball state information; and the corresponding expression is called from an expression library according to the user's current emotion attribute and preset rules, and displayed by a designated virtual character in the game. The beneficial effects of the invention are: the game experience is improved, the realism of the game is enhanced, user stickiness and game activity are increased, the player gains a stronger sense of immersion, and the player's attachment and loyalty to the game can be improved.
Description
Technical Field
The invention relates to a facial expression control method and system based on eyeball tracking, and belongs to the field of computer technology.
Background
Character models in present-day RPGs (role-playing games) are mostly realistic and anthropomorphic in appearance. Unfortunately, the expressions of the in-game characters are dull. When the player looks at a game character, the character does not meet the player's gaze, so no visual interaction arises between character and player. This seriously harms the game experience: the game feels less real to the user, user stickiness drops, and game activity declines.
Eye movement has long been held to reflect a person's psychological state: the eyes shine when one is excited, look dull when one is dejected, the pupils are listless when one is sad, and the eyes glare when one is angry. Some also hold that looking up and to the left accompanies lying, while looking up and to the right accompanies recollection. This suggests that the eyeballs play a certain role in judging a person's character and emotions.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a facial expression control method and system based on eye tracking, comprising: a user runs a game program on a terminal device; the game program calls an image acquisition device of the terminal device to collect eyeball state information of the user, the eyeball state information including the angle between the eyeball and the device, gaze direction, blinking actions, pupil-size changes, and iris dilation or contraction state; a machine learning algorithm is called to calculate the user's current emotion attribute from the collected eyeball state information; and the corresponding expression is called from an expression library according to the user's current emotion attribute and preset rules, and displayed by a designated virtual character in the game.
The technical scheme adopted by the invention to solve the problems is as follows. A facial expression control method based on eyeball tracking comprises the following steps: S100, a user runs a game program on a terminal device; S200, the game program calls an image acquisition device of the terminal device to collect eyeball state information of the user, the eyeball state information including the angle between the eyeball and the device, gaze direction, blinking actions, pupil-size changes, and iris dilation or contraction state; S300, a machine learning algorithm is called to calculate the user's current emotion attribute from the obtained eyeball state information; S400, the corresponding expression is called from the expression library according to the user's current emotion attribute and preset rules, and displayed by the designated virtual character in the game.
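The four steps S100–S400 can be sketched end to end as follows. This is a minimal illustration only, not the patented implementation: the feature names, the rule-based emotion stub standing in for the machine learning algorithm of S300, and the expression table are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class EyeState:
    """Eyeball state information collected in S200 (field names are illustrative)."""
    gaze_angle_deg: float      # angle between eyeball and device
    gaze_direction: str        # e.g. "left", "right", "center"
    blink_rate_hz: float       # blinking actions per second
    pupil_diameter_mm: float   # pupil size
    iris_dilation: float       # 0.0 contracted .. 1.0 dilated

def infer_emotion(state: EyeState) -> str:
    """S300 stand-in: a trained model would go here; simple rules for illustration."""
    if state.blink_rate_hz > 0.8:
        return "nervous"
    if state.pupil_diameter_mm > 6.0 and state.iris_dilation > 0.7:
        return "excited"
    return "calm"

# S400: preset rules mapping emotion attribute -> expression library entry
EXPRESSION_LIBRARY = {"nervous": "furrowed_brow", "excited": "wide_smile", "calm": "neutral"}

def choose_expression(state: EyeState) -> str:
    """Pick the expression the designated virtual character should display."""
    return EXPRESSION_LIBRARY[infer_emotion(state)]

assert choose_expression(EyeState(5.0, "center", 0.2, 6.5, 0.9)) == "wide_smile"
```

In a real game, `infer_emotion` would be replaced by the trained machine emotion algorithm of S320, and the returned expression would be fed to the character's animation system.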
Further, S200 further includes: S210, collecting a certain number of static images of the user's eyeballs within a specified time period, where both the time period and the number can be user-defined; S220, comparing the selected static eyeball images, and obtaining the change state and change trajectory of the user's eyeballs within the specified time from the comparison result.
Further, S220 further includes: S221, selecting two temporally adjacent static eyeball images and comparing them to obtain the eyeball difference; S222, calculating the distance between the user's eyes and the device from that difference and the fixed position of the image acquisition device on the terminal device at the corresponding time node.
Further, S300 further includes: S310, acquiring big-data information linking eyeball state information to corresponding emotions; S320, training the machine learning algorithm on the big-data information to obtain a trained machine emotion algorithm; S330, taking the collected eyeball state information of the user as input and outputting, via the trained machine emotion algorithm, the user's emotion information for a certain time period.
Further, S400 further includes: S410, based on a response algorithm model, taking the user's emotion as input and outputting the correspondingly changed gaze angle, pupil size, and eye contour shape data for the virtual character's eyes; S420, calling the corresponding expression from the expression library, the expression being displayed by the designated virtual character in the game.
The other aspect of the technical scheme adopted by the invention is a facial expression control system based on eye tracking, comprising: an execution module for running the game program on the terminal device; a device calling module for calling an image acquisition device of the terminal device to collect eyeball state information of a user, the eyeball state information including the angle between the eyeball and the device, gaze direction, blinking actions, pupil-size changes, and iris dilation or contraction state; a machine algorithm execution module for calling a machine learning algorithm to calculate the user's current emotion attribute from the collected eyeball state information; and an expression calling module for calling the corresponding expression from the expression library according to the user's current emotion attribute and preset rules, the expression being displayed by the designated virtual character in the game.
Further, the image acquisition device includes, but is not limited to, a camera and an infrared projection acquisition device.
Further, the device calling module further includes: a setting unit for setting the acquisition time period and acquisition frequency of the image acquisition device and collecting a certain number of static eyeball images within a specified time period, where both the time period and the number are customizable; and a trajectory generation unit for comparing the selected static eyeball images and obtaining the change state and change trajectory of the user's eyeballs within the specified time from the comparison result.
Further, the trajectory generation unit further includes: a comparison subunit for selecting two temporally adjacent static eyeball images and comparing them to obtain the eyeball difference; and a distance calculation subunit for calculating the distance between the user's eyes and the device from that difference and the fixed position of the image acquisition device on the terminal device at the corresponding time node.
Further, the machine algorithm execution module further comprises: a big-data unit for acquiring big-data information linking eyeball state information to corresponding emotions; a training unit for training the machine learning algorithm on the big-data information to obtain a trained machine emotion algorithm; and a computing unit for taking the collected eyeball state information of the user as input and outputting, via the trained machine emotion algorithm, the user's emotion information within a certain time period.
The beneficial effects of the invention are: the game experience is improved, the realism of the game is enhanced, user stickiness and game activity are increased, the player gains a stronger sense of immersion, and the player's attachment and loyalty to the game can be improved.
Drawings
FIG. 1 is a schematic flow diagram of a method according to a preferred embodiment of the present invention;
fig. 2 is a schematic diagram of a system architecture according to a preferred embodiment of the present invention.
Detailed Description
The conception, specific structure, and technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, schemes, and effects of the invention can be fully understood.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it may be directly fixed or connected to the other feature or indirectly fixed or connected to the other feature. Furthermore, the descriptions of upper, lower, left, right, etc. used in the present disclosure are only relative to the mutual positional relationship of the constituent parts of the present disclosure in the drawings. As used in this disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any combination of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language ("e.g.," such as "or the like") provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
According to a report on the well-known American technology website Mashable on July 30, 2018, an international research team of researchers from institutions including the University of Stuttgart in Germany and Flinders University and the University of South Australia in Australia used state-of-the-art machine learning algorithms to uncover relationships between human personality and eyeball movement.
The following are some of the personality traits the researchers related to eye movement using artificial intelligence techniques:
(1) Curiosity: the eyes look around more;
(2) Open-mindedness: people with open minds stare longer at abstract patterns;
(3) Neuroticism: blinking is faster;
(4) Openness to new experiences: the eyeballs move more from side to side;
(5) Conscientiousness: the pupil size of a highly conscientious person fluctuates more;
(6) Optimism: optimists dwell on negative emotional content for a shorter time than pessimists do.
Referring to fig. 1, there is a schematic flow chart of a method according to a preferred embodiment of the present invention,
S100, a user uses a terminal device to run a game program;
S200, the game program calls an image acquisition device of the terminal device to collect eyeball state information of the user, the eyeball state information including the angle between the eyeball and the device, gaze direction, blinking actions, pupil-size changes, and iris dilation or contraction state;
S300, a machine learning algorithm is called to calculate the user's current emotion attribute from the obtained eyeball state information;
S400, the corresponding expression is called from the expression library according to the user's current emotion attribute and preset rules, and displayed by the designated virtual character in the game.
S200 further includes: S210, collecting a certain number of static images of the user's eyeballs within a specified time period, where both the time period and the number can be user-defined; S220, comparing the selected static eyeball images, and obtaining the change state and change trajectory of the user's eyeballs within the specified time from the comparison result.
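Steps S210–S220 amount to sampling eye images at a configured frequency and comparing temporally adjacent frames. A minimal sketch follows, with random pixel lists standing in for real camera frames; the frame format and the mean-absolute-difference change measure are assumptions, not details from the patent.

```python
import random

def capture_frames(n, size=16, seed=0):
    """Stand-in for the image acquisition device (S210):
    returns n grayscale eye images as flat pixel lists."""
    rng = random.Random(seed)
    return [[rng.randrange(256) for _ in range(size * size)] for _ in range(n)]

def eye_trajectory(frames):
    """S220: compare each temporally adjacent pair of images; the mean absolute
    pixel difference serves as a simple measure of how much the eye changed."""
    diffs = []
    for a, b in zip(frames, frames[1:]):
        diffs.append(sum(abs(pb - pa) for pa, pb in zip(a, b)) / len(a))
    return diffs

frames = capture_frames(5)
changes = eye_trajectory(frames)
assert len(changes) == len(frames) - 1
assert all(c >= 0 for c in changes)
```

A production system would instead segment the eyeball region and track landmark positions, but the pairwise-comparison structure is the same.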
S220 further includes: S221, selecting two temporally adjacent static eyeball images and comparing them to obtain the eyeball difference; S222, calculating the distance between the user's eyes and the device from that difference and the fixed position of the image acquisition device on the terminal device at the corresponding time node.
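One way to realize the distance calculation of S222 is a pinhole-camera model: a feature of known physical size, such as the gap between the two pupils, appears smaller in pixels the farther the eyes are from the fixed camera. The focal length and interpupillary distance below are illustrative assumptions, not values from the patent.

```python
def eye_distance_mm(pupil_gap_px: float,
                    focal_length_px: float = 600.0,
                    interpupillary_mm: float = 63.0) -> float:
    """Estimate the distance between the user's eyes and the device (S222).

    Pinhole model: distance = focal_length_px * real_size / apparent_pixel_size.
    """
    return focal_length_px * interpupillary_mm / pupil_gap_px

# A smaller pixel gap between the pupils means the face is farther away.
assert eye_distance_mm(63.0) > eye_distance_mm(126.0)
```

The focal length in pixels would normally come from a one-time camera calibration on the terminal device.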
S300 further comprises: S310, acquiring big-data information linking eyeball state information to corresponding emotions; S320, training the machine learning algorithm on the big-data information to obtain a trained machine emotion algorithm; S330, taking the collected eyeball state information of the user as input and outputting, via the trained machine emotion algorithm, the user's emotion information for a certain time period.
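Steps S310–S330 train on pairs of eyeball state information and emotion labels. The patent does not name a specific algorithm; a nearest-centroid classifier stands in for the "machine emotion algorithm" here, and the toy training data is fabricated for illustration.

```python
def train_emotion_model(samples):
    """S320: 'train' by averaging the eye-state feature vectors per emotion label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict_emotion(model, features):
    """S330: output the emotion whose centroid is closest to the input vector."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda lab: sq_dist(model[lab]))

# Toy training pairs: ([blink_rate, pupil_mm, gaze_angle], emotion) — fabricated.
data = [([0.2, 3.0, 5.0], "calm"), ([0.3, 3.2, 4.0], "calm"),
        ([0.9, 6.5, 20.0], "excited"), ([1.0, 6.0, 25.0], "excited")]
model = train_emotion_model(data)
assert predict_emotion(model, [0.95, 6.2, 22.0]) == "excited"
```

A deployed system would train a stronger model offline on the big-data corpus and ship only the inference step to the terminal device.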
S400 further includes: S410, based on a response algorithm model, taking the user's emotion as input and outputting the correspondingly changed gaze angle, pupil size, and eye contour shape data for the virtual character's eyes; S420, calling the corresponding expression from the expression library, the expression being displayed by the designated virtual character in the game.
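Steps S410–S420 can be read as a lookup from emotion attribute to concrete eye-rendering parameters, which the game then applies to the designated character. The parameter table below is an assumption; a real response algorithm model would drive the character rig.

```python
from dataclasses import dataclass

@dataclass
class EyeParams:
    gaze_angle_deg: float   # direction the character's eyes turn
    pupil_scale: float      # 1.0 = neutral pupil size
    eye_openness: float     # 0.0 closed .. 1.0 wide open

# S410 response model: emotion attribute -> changed eye data (illustrative values)
RESPONSE_MODEL = {
    "excited": EyeParams(gaze_angle_deg=0.0, pupil_scale=1.3, eye_openness=1.0),
    "sad":     EyeParams(gaze_angle_deg=-15.0, pupil_scale=0.9, eye_openness=0.4),
    "calm":    EyeParams(gaze_angle_deg=0.0, pupil_scale=1.0, eye_openness=0.8),
}

def apply_expression(emotion: str) -> EyeParams:
    """S420: fetch the expression's eye parameters for the designated character,
    falling back to a neutral state for unknown emotions."""
    return RESPONSE_MODEL.get(emotion, RESPONSE_MODEL["calm"])

assert apply_expression("sad").eye_openness < 1.0
```

The returned parameters would feed the character's eye bones or blend shapes each frame.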
Referring to fig. 2, there is shown a schematic diagram of a system architecture according to a preferred embodiment of the present invention,
the method comprises the following steps: the execution module is used for running the game program on the terminal equipment; the equipment calling module is used for calling an image acquisition device of the terminal equipment to acquire eyeball state information of a user, wherein the eyeball state information comprises an angle between an eyeball and the equipment, a watching direction, a blinking action, a change of the size of the pupil of the eye and an iris stretching or contraction state; the machine algorithm execution module is used for calling a machine learning algorithm to calculate the current emotion attribute of the user according to the acquired eyeball state information; and the expression calling module is used for calling the corresponding expression from the expression library according to the current emotion attribute of the client and the preset rule and displaying the corresponding expression by the designated virtual character in the game.
The image capturing device includes, but is not limited to, a camera and an infrared projection capturing device.
The device calling module further comprises: a setting unit for setting the acquisition time period and acquisition frequency of the image acquisition device and collecting a certain number of static eyeball images within a specified time period, where both the time period and the number are customizable; and a trajectory generation unit for comparing the selected static eyeball images and obtaining the change state and change trajectory of the user's eyeballs within the specified time from the comparison result.
The trajectory generation unit further includes: a comparison subunit for selecting two temporally adjacent static eyeball images and comparing them to obtain the eyeball difference; and a distance calculation subunit for calculating the distance between the user's eyes and the device from that difference and the fixed position of the image acquisition device on the terminal device at the corresponding time node.
The machine algorithm execution module further comprises: a big-data unit for acquiring big-data information linking eyeball state information to corresponding emotions; a training unit for training the machine learning algorithm on the big-data information to obtain a trained machine emotion algorithm; and a computing unit for taking the collected eyeball state information of the user as input and outputting, via the trained machine emotion algorithm, the user's emotion information within a certain time period.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description covers only preferred embodiments of the present invention; the invention is not limited to these embodiments. Any modification, equivalent substitution, or improvement made within the spirit and principle of the present invention that achieves the technical effects of the invention by the same means shall be included in its scope of protection. The technical solution and/or implementation of the invention may also be modified and varied in other ways within that scope of protection.
Claims (10)
1. A facial expression control method based on eyeball tracking is characterized by comprising the following steps:
S100, a user uses a terminal device to run a game program;
S200, calling an image acquisition device of the terminal device by the game program to collect eyeball state information of the user, the eyeball state information including the angle between the eyeball and the device, gaze direction, blinking actions, pupil-size changes, and iris dilation or contraction state;
s300, calling a machine learning algorithm to calculate the current emotion attribute of the user according to the obtained eyeball state information;
S400, calling the corresponding expression from the expression library according to the user's current emotion attribute and preset rules, the expression being displayed by the designated virtual character in the game.
2. The method for controlling facial expression based on eye tracking according to claim 1, wherein the S200 further comprises:
S210, collecting a certain number of static images of the user's eyeballs within a specified time period, where both the time period and the number can be user-defined;
S220, comparing the selected static eyeball images, and obtaining the change state and change trajectory of the user's eyeballs within the specified time from the comparison result.
3. The method for controlling facial expression based on eye tracking according to claim 2, wherein said S220 further comprises:
S221, selecting two temporally adjacent static eyeball images and comparing them to obtain the eyeball difference;
S222, calculating the distance between the user's eyes and the device according to that difference and the fixed position of the image acquisition device on the terminal device at the corresponding time node.
4. The method for controlling facial expression based on eye tracking according to claim 1, wherein the S300 further comprises:
S310, acquiring big data linking eyeball state information to corresponding emotions;
S320, training the machine learning algorithm on the big data to obtain a trained machine emotion algorithm;
S330, taking the collected eyeball state information of the user as input, and outputting the emotion information of the user over a certain time period through the trained machine emotion algorithm.
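Claim 4 does not name a model family, so any classifier trained on (eye-state features, emotion label) pairs satisfies S310–S330. As a self-contained illustration, here is a minimal nearest-centroid stand-in for the "machine emotion algorithm"; the function names and feature representation are assumptions:

```python
from collections import defaultdict

def train(samples):
    """S320 sketch: fit per-emotion centroids.

    samples: list of (feature_vector, emotion_label) pairs,
    e.g. the big-data corpus of S310.
    """
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in samples:
        if sums[label] is None:
            sums[label] = list(features)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], features)]
        counts[label] += 1
    # The "trained machine emotion algorithm" reduces here to a centroid table.
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def predict(centroids, features):
    """S330 sketch: map collected eyeball state features to an emotion label."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```

A production system would use a proper learned model, but the train-then-infer contract is the same as the one the claim describes.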
5. The method for controlling facial expression based on eye tracking according to claim 1, wherein the S400 further comprises:
S410, based on a response algorithm model, taking the user's emotion as input and outputting the correspondingly changed gaze angle, pupil size, and eye contour shape data of the virtual character's eyes;
S420, calling the corresponding expression from the expression library and displaying it on the designated virtual character in the game.
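The "response algorithm model" of S410 is unspecified; at its simplest it is a lookup from user emotion to eye-rendering parameters for the in-game avatar. The parameter names and values below are invented for the sketch — the patent specifies only the categories of output (gaze angle, pupil size, eye contour shape):

```python
# Illustrative response model for S410: emotion -> avatar eye parameters.
AVATAR_EYE_PARAMS = {
    "happy":   {"gaze_angle_deg": 0.0,  "pupil_scale": 1.2, "contour": "crescent"},
    "angry":   {"gaze_angle_deg": -5.0, "pupil_scale": 0.8, "contour": "narrowed"},
    "neutral": {"gaze_angle_deg": 0.0,  "pupil_scale": 1.0, "contour": "round"},
}

def avatar_eyes_for(emotion: str) -> dict:
    # Fall back to neutral when the emotion has no preset rule (assumed policy).
    return AVATAR_EYE_PARAMS.get(emotion, AVATAR_EYE_PARAMS["neutral"])
```

The returned parameters would then drive the expression displayed on the designated virtual character in S420.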
6. A facial expression control system based on eye tracking, comprising:
an execution module, used for running the game program on the terminal device;
a device calling module, used for calling an image acquisition device of the terminal device to acquire eyeball state information of the user, wherein the eyeball state information comprises the angle between the eyeball and the device, the gaze direction, blinking actions, changes in pupil size, and the stretching or contraction state of the iris;
a machine algorithm execution module, used for calling a machine learning algorithm to calculate the current emotion attribute of the user according to the acquired eyeball state information;
and an expression calling module, used for calling the corresponding expression from the expression library according to the current emotion attribute of the user and a preset rule, and displaying the expression on the designated virtual character in the game.
7. The eye tracking based facial expression control system of claim 6, wherein the image acquisition device includes, but is not limited to, a camera and an infrared projection acquisition device.
8. The eye tracking based facial expression control system of claim 6, wherein the device invocation module further comprises:
a setting unit, used for setting the acquisition time period and acquisition frequency of the image acquisition device and collecting a certain number of static images of the user's eyeballs within a specified time period, wherein the specified time period and the number of images may be user-defined;
and a trajectory generation unit, used for selecting a certain number of the static images of the user's eyeballs for comparison, and obtaining the change state and change trajectory of the user's eyeballs within the specified time period according to the comparison result.
9. The eye tracking-based facial expression control system according to claim 8, wherein the trajectory generation unit further comprises:
a comparison subunit, used for selecting two temporally adjacent static images of the user's eyeballs and comparing them to obtain the eyeball difference;
and a distance calculation subunit, used for calculating the distance between the user's eyes and the device according to the difference and the fixed position of the image acquisition device on the terminal device at the corresponding time node.
10. The eye tracking based facial expression control system of claim 6, wherein the machine algorithm execution module further comprises:
a big data unit, used for acquiring big data linking eyeball state information to corresponding emotions;
a training unit, used for training the machine learning algorithm on the big data to obtain a trained machine emotion algorithm;
and a computing unit, used for taking the collected eyeball state information of the user as input and outputting the emotion information of the user over a certain time period through the trained machine emotion algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910970941.0A CN110837294B (en) | 2019-10-14 | 2019-10-14 | Facial expression control method and system based on eyeball tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110837294A true CN110837294A (en) | 2020-02-25 |
CN110837294B CN110837294B (en) | 2023-12-12 |
Family
ID=69575401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910970941.0A Active CN110837294B (en) | 2019-10-14 | 2019-10-14 | Facial expression control method and system based on eyeball tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110837294B (en) |
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5169319A (en) * | 1990-07-30 | 1992-12-08 | John Potocki | Method for improving a person's skill for playing an interactive video game requiring eye-hand coordination and operation of manual activation means |
CN101393599A (en) * | 2007-09-19 | 2009-03-25 | 中国科学院自动化研究所 | Game role control method based on human face expression |
US20120190456A1 (en) * | 2011-01-21 | 2012-07-26 | Rogers Henk B | Systems and methods for providing an interactive multiplayer story |
CN106537290A (en) * | 2014-05-09 | 2017-03-22 | 谷歌公司 | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
CN105797375A (en) * | 2014-12-31 | 2016-07-27 | 深圳市亿思达科技集团有限公司 | Method and terminal for changing role model expressions along with user facial expressions |
CN105573613A (en) * | 2015-06-24 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Program icon sorting method and apparatus |
CN107589829A (en) * | 2016-07-07 | 2018-01-16 | 迪斯尼实业公司 | Location-based experience to interactive commodity |
CN109844735A (en) * | 2016-07-21 | 2019-06-04 | 奇跃公司 | Affective state for using user controls the technology that virtual image generates system |
US20180314881A1 (en) * | 2017-05-01 | 2018-11-01 | Google Llc | Classifying facial expressions using eye-tracking cameras |
CN110249337A (en) * | 2017-05-01 | 2019-09-17 | 谷歌有限责任公司 | Using eye tracks camera to facial expression classification |
US20180348861A1 (en) * | 2017-05-31 | 2018-12-06 | Magic Leap, Inc. | Eye tracking calibration techniques |
CN109670385A (en) * | 2017-10-16 | 2019-04-23 | 腾讯科技(深圳)有限公司 | The method and device that expression updates in a kind of application program |
CN109725418A (en) * | 2017-10-30 | 2019-05-07 | 华为技术有限公司 | Display equipment, the method and device presented for adjusting the image of display equipment |
CN108742657A (en) * | 2018-03-30 | 2018-11-06 | 百度在线网络技术(北京)有限公司 | Game assessment method, device, games system and storage medium |
CN108905204A (en) * | 2018-07-24 | 2018-11-30 | 合肥爱玩动漫有限公司 | A kind of exchange method for immersion virtual game |
CN109431761A (en) * | 2018-08-31 | 2019-03-08 | 深圳众赢维融科技有限公司 | Method, apparatus, electronic equipment and the storage medium of asthenopia relief |
CN109376621A (en) * | 2018-09-30 | 2019-02-22 | 北京七鑫易维信息技术有限公司 | A kind of sample data generation method, device and robot |
CN109621418A (en) * | 2018-12-03 | 2019-04-16 | 网易(杭州)网络有限公司 | The expression adjustment and production method, device of virtual role in a kind of game |
CN109683709A (en) * | 2018-12-17 | 2019-04-26 | 苏州思必驰信息科技有限公司 | Man-machine interaction method and system based on Emotion identification |
CN109857650A (en) * | 2019-01-14 | 2019-06-07 | 珠海金山网络游戏科技有限公司 | A kind of game performance monitor method and system |
Non-Patent Citations (2)
Title |
---|
LIU Chunlei; LIN Yixuan: "An Analysis of the Artistic Context Transformation Implied in Visual Interaction Games", Modern Business Trade Industry, no. 19 *
GUO Chun; JU Quanwei: "Research on Emotional Design of Computer Game Interfaces Based on Player Experience", Heilongjiang Science, no. 06 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111352507A (en) * | 2020-02-27 | 2020-06-30 | 维沃移动通信有限公司 | Information prompting method and electronic equipment |
CN111632367A (en) * | 2020-05-18 | 2020-09-08 | 歌尔科技有限公司 | Hand-trip system based on visual guidance and hand-trip response method |
WO2021232698A1 (en) * | 2020-05-18 | 2021-11-25 | 歌尔股份有限公司 | Vision guidance-based mobile gaming system and mobile gaming response method |
WO2022183424A1 (en) * | 2021-03-04 | 2022-09-09 | 深圳技术大学 | Emotion recognition-based online social method and apparatus, and storage medium |
CN114187647A (en) * | 2021-12-10 | 2022-03-15 | 深圳爱酷智能科技有限公司 | Drug-taking detection method, device, equipment and storage medium |
CN114296548A (en) * | 2021-12-14 | 2022-04-08 | 杭州朱道实业有限公司 | Intelligent mobile information identification system for exhibition |
CN116912748A (en) * | 2023-09-13 | 2023-10-20 | 江西工业贸易职业技术学院 | Event view tracking method, system, readable storage medium and computer |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110837294A (en) | Facial expression control method and system based on eyeball tracking | |
US11074748B2 (en) | Matching meshes for virtual avatars | |
EP3381175B1 (en) | Apparatus and method for operating personal agent | |
Sundstedt | Gazing at games: An introduction to eye tracking control | |
Varona et al. | Hands-free vision-based interface for computer accessibility | |
US11557076B2 (en) | Computer generated hair groom transfer tool | |
Duric et al. | Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction | |
CN112970056A (en) | Human-computer interface using high speed and accurate user interaction tracking | |
US20150072322A1 (en) | Situated simulation for training, education, and therapy | |
CN108942919B (en) | Interaction method and system based on virtual human | |
Calandra et al. | Navigating wall-sized displays with the gaze: a proposal for cultural heritage. | |
Korn et al. | Assistive systems for the workplace: Towards context-aware assistance | |
CN109840019B (en) | Virtual character control method, device and storage medium | |
US11836840B2 (en) | Systems and methods for cross-application authoring, transfer, and evaluation of rigging control systems for virtual characters | |
CN108460324A (en) | A method of child's mood for identification | |
Ascari et al. | Personalized gestural interaction applied in a gesture interactive game-based approach for people with disabilities | |
CN111773669B (en) | Method and device for generating virtual object in virtual environment | |
Huang et al. | Design dimensions for holographic intelligent agents: A comparative analysis | |
Baothman | An intelligent big data management system using haar algorithm-based Nao agent multisensory communication | |
KR20200019296A (en) | Apparatus and method for generating recognition model of facial expression and computer recordable medium storing computer program thereof | |
Dan | Construction and design of visual information platform in human-computer interaction | |
JP6935531B1 (en) | Information processing programs and information processing systems | |
WO2023000310A1 (en) | Methods, devices, and media for customizing and expressing personality of robot | |
Keskin | Real time human computer interface application based on eye gaze tracking and head detection | |
Ventrella | Virtual gaze: the communicative energy between avatar faces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||