CN111860451A - Game interaction method based on facial expression recognition - Google Patents

Game interaction method based on facial expression recognition

Info

Publication number
CN111860451A
Authority
CN
China
Prior art keywords
game
expression
player
facial expression
expressions
Prior art date
Legal status
Pending
Application number
CN202010766945.XA
Other languages
Chinese (zh)
Inventor
张胜利
吕钊
张超
郭晓静
穆雪
欧阳蕊
黄小鹏
Current Assignee
Suzhou Xiaoma E Commerce Co ltd
Original Assignee
Suzhou Xiaoma E Commerce Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Xiaoma E Commerce Co ltd
Priority to CN202010766945.XA
Publication of CN111860451A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/172 Classification, e.g. identification
    • G06V 40/174 Facial expression recognition
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213 Input arrangements comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/42 Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 Features characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1087 Input arrangements comprising photodetecting means, e.g. a camera
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/6045 Methods for mapping control signals received from the input arrangement into game commands
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 Generating training patterns; Bootstrap methods characterised by the process organisation or structure, e.g. boosting cascade
    • G06F 18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a game interaction method based on facial expression recognition, which comprises the following steps: (1) extracting visual features from static images of different expression types with a convolutional neural network, and determining the relationship between facial expression changes in an image sequence and the basic facial expressions to obtain a training model; (2) collecting video information of the player to be detected and capturing images from it frame by frame; (3) preprocessing each video image to generate a preprocessed image; (4) analyzing the facial expression, matching it against the expression features in the training model, and determining the player's current expression; (5) controlling the game character through the current facial expression. The invention only needs a camera to capture the player's face; expression analysis and recognition are performed on the computer and the result is converted into game control instructions, so that the character in the game can be controlled and the player can converse with other characters directly or with assistance, extending the traditional game interaction modes.

Description

Game interaction method based on facial expression recognition
Technical Field
The invention relates to the technical field of facial expression image analysis and recognition, in particular to a game interaction method based on facial expression recognition.
Background
Facial expressions can be regarded as a universal language: they are shared across national boundaries, ethnicities and genders. Facial expression recognition is widely applied in robotics, medical care, driver fatigue detection and human-computer interaction systems. As early as the 20th century, Ekman and Friesen defined six basic expressions through cross-cultural research: anger, fear, disgust, happiness, sadness and surprise; the expression "contempt" was added later. This pioneering work and its intuitive definitions have kept the model popular in automatic facial expression analysis (AFEA). Depending on the feature representation, the inputs processed by a facial expression recognition system can be divided into two types: still pictures and videos. Thanks to the development of deep learning and the emergence of more challenging datasets such as FER2013, more and more researchers are applying deep learning techniques to facial expression recognition.
In recent years, with advances in computer technology, the digital entertainment industry represented by computer games has developed rapidly. As a special kind of application software, a computer game realizes interaction between the user and the game by providing a series of menu options and operation instructions. The traditional human-computer interaction modes for games are the mouse, the keyboard, the joystick and dedicated game peripherals. However, as game genres and content have evolved, these modes can no longer satisfy the demand for richer human-computer interaction, and applying facial expression recognition technology to games is an inevitable trend.
Disclosure of Invention
The invention aims to provide a game interaction method based on facial expression recognition, which uses the player's facial expressions to control a game character and thereby realizes control of characters within the game.
In order to achieve the purpose, the invention adopts the following technical scheme:
a game interaction method based on facial expression recognition comprises the following steps:
(1) extracting visual features by learning static images of different expression types with a convolutional neural network, and determining the relationship between facial expression changes in an image sequence and the basic facial expressions to obtain a training model, wherein the basic facial expressions comprise anger, fear, disgust, happiness, sadness, surprise, contempt and negation;
(2) collecting video information of the player to be detected and capturing images from it frame by frame; the video information is collected through cameras comprising a high-definition camera and an infrared camera, placed 50-80 cm from the face at an included angle of 140-180 degrees;
(3) preprocessing the video image to generate a preprocessed image, which comprises locating and extracting the organ features and texture regions of the face as well as other predefined feature points, and locating the player's face region through these feature points; preprocessing the video image specifically comprises: preprocessing the video image of the player to be detected, extracting key frames, then normalizing the collected video key frames, detecting the face and extracting features;
(4) analyzing the facial expression, matching it against the expression features in the training model, and determining the player's current expression;
(5) controlling the game character through the current facial expression, comprising:
(51) displaying the text and images of the game scenario on screen through the game window, according to the game scenario;
(52) in the interaction between the player and an NPC, comparing the expression prompted by the system with the expression made by the player, and triggering a preset scenario after the expression passes error-correction verification;
(53) when a branch selection is made, the system prompts the player on the game window to make one of the basic expressions;
(54) when the player makes one of the basic expressions and it passes the expression error-correction verification, the system triggers the game scenario branch corresponding to that basic expression and outputs the corresponding branch scenario to the game window;
(55) in specific plots, different expressions control the various movement modes of the in-game character, and the basic expressions control the character's actions; these basic expressions comprise happiness, anger, sadness and surprise, where happiness moves the character forward, anger makes it jump, sadness makes it squat and surprise makes it slide, and the duration of the corresponding movement can be controlled by how long the player holds the expression (a minimal mapping sketch is given after this list).
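For illustration only, the following minimal Python sketch shows how the expression-to-action mapping of step (55) could be wired up; the expression labels, the GameCharacter stand-in and the frame-based duration handling are assumptions introduced here, not the patent's actual implementation.

# Hypothetical sketch: map recognized basic expressions to character commands.
ACTION_MAP = {
    "happiness": "move_forward",
    "anger":     "jump",
    "sadness":   "squat",
    "surprise":  "slide",
}

class GameCharacter:
    """Stand-in for the game engine's character controller (assumed interface)."""
    def perform(self, action: str, duration_s: float) -> None:
        print(f"{action} for {duration_s:.2f} s")

def drive_character(character: GameCharacter,
                    expression_stream,            # iterable of (timestamp_s, label)
                    min_hold_s: float = 0.2) -> None:
    """Convert a stream of per-frame expression labels into character actions.
    The action lasts as long as the player holds the expression, mirroring
    step (55): the movement duration follows the expression duration."""
    current_label, start_t, last_t = None, None, None
    for t, label in expression_stream:
        if label != current_label:
            # Expression changed: flush the previous one if it was held long enough.
            if current_label in ACTION_MAP and last_t - start_t >= min_hold_s:
                character.perform(ACTION_MAP[current_label], last_t - start_t)
            current_label, start_t = label, t
        last_t = t
    if current_label in ACTION_MAP and last_t - start_t >= min_hold_s:
        character.perform(ACTION_MAP[current_label], last_t - start_t)

A caller would feed drive_character the per-frame labels produced by the recognition stage together with their timestamps.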
Further, the normalization processing specifically includes:
(A) performing illumination normalization on the image with threshold-segmentation histogram equalization, and removing gray-level differences and noise from the edge pixels of the segmented region by feathering;
(B) training a human-eye region detector as an adaptive boosting (Adaboost) cascade detector, finding the coordinates of the eye center points as the center of the horizontal rotation of an affine transformation, and finally obtaining the warped face image to realize pose normalization;
(C) realizing face alignment by aligning the coordinates of the two eye centers across different images, thereby realizing scale normalization;
(D) effectively cropping the local face region with the ERT feature-point segmentation algorithm, completing the preliminary preprocessing of the image (a minimal OpenCV sketch of steps (A) to (C) follows this list).
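For illustration only, the following is a minimal Python/OpenCV sketch of normalization steps (A) to (C); it is not part of the patented method. Plain histogram equalization stands in for the threshold-segmentation variant described above, and the cascade file, output size and fallback behaviour are assumptions.

# Hypothetical sketch of steps (A)-(C): illumination normalization, eye-based
# pose normalization and scale normalization with OpenCV.
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")  # AdaBoost-trained cascade

def normalize_face(gray_face: np.ndarray, out_size: int = 96):
    # (A) illumination normalization
    gray_face = cv2.equalizeHist(gray_face)

    # (B) locate the two eyes and use their centers for pose normalization
    eyes = eye_cascade.detectMultiScale(gray_face, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # a real system would fall back to the unaligned face
    eyes = sorted(eyes, key=lambda e: e[0])[:2]          # left-most two boxes
    (lx, ly, lw, lh), (rx, ry, rw, rh) = eyes
    left = (lx + lw / 2, ly + lh / 2)
    right = (rx + rw / 2, ry + rh / 2)

    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    center = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    aligned = cv2.warpAffine(gray_face, rot, gray_face.shape[::-1])

    # (C) scale normalization: resize so eye positions are comparable across images
    return cv2.resize(aligned, (out_size, out_size))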
In this scheme, feature extraction extracts the texture features of the face using the LBP uniform (equivalence) pattern, then detects and marks the target: a cascade table is built from the training result of a Haar-feature classifier, and the picture to be detected is passed together with the cascade table to the target detection algorithm to obtain the set of detected faces.
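For illustration only, the following sketch combines Haar-cascade face detection with uniform-pattern LBP histograms using OpenCV and scikit-image; the cascade file name, LBP radius and cell grid are illustrative choices rather than values from the patent.

# Hypothetical sketch: Haar-cascade face detection followed by uniform-LBP features.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray: np.ndarray):
    """Return the set of detected face boxes (x, y, w, h)."""
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def lbp_features(face: np.ndarray, grid: int = 4, p: int = 8, r: int = 1) -> np.ndarray:
    """Uniform-pattern LBP histogram, computed per cell and concatenated."""
    lbp = local_binary_pattern(face, P=p, R=r, method="uniform")
    n_bins = p + 2                       # uniform patterns plus the "other" bin
    h, w = face.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = lbp[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))   # normalized cell histogram
    return np.concatenate(feats)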
The expression error-correction verification comprises the following steps: the player expression pictures captured frame by frame within a few seconds after the system prompt are recognized, the expression accounting for the largest proportion is computed and taken as the recognition result; the recognition result is then shown to the user and the system checks whether the user shows the negation expression; if so, a detection error is indicated, the system prompt of steps (52) and (54) is repeated, detection is performed again, and the previously erroneous expression is excluded when matching the expressions corresponding to the pictures.
According to the above technical scheme, the game interaction method based on facial expression recognition uses the player's facial expressions to control the game character, i.e. the player's facial expression information supplements the traditional keyboard-and-mouse interaction mode and thus enriches human-computer interaction. Only a camera is needed to capture the player's face; expression analysis and recognition are performed on the computer and the result is converted into game control instructions, so that the character in the game can be controlled and the player's dialogue with other characters can be carried out directly or with assistance, extending the traditional game interaction modes. Since games have high real-time requirements, the video detection method must be real-time and robust, and such a control method must also be easy to implement and operate so that users can adopt it. The invention lets game users interact in a new, natural and intelligent way, making the game more interactive and immersive. With the development of computer vision technology, natural vision-based human-computer interaction has become possible, and since cameras have become a common configuration of computers, this technology has broad application prospects.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a basic flow diagram of the model training of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
As shown in Fig. 1, the method of this embodiment for controlling a game character based on facial expressions specifically comprises the following steps:
Step 1: extracting visual features by using a convolutional neural network (CNN) to learn a large number of static images of different expression types, and determining the relationship between facial expression changes in an image sequence and anger, fear, disgust, happiness, sadness, surprise, contempt and negation (head shake), to obtain a training model, so that the system can accurately recognize facial expressions;
Step 2: video information of the player to be detected is collected through cameras, and images are captured from the video frame by frame (a capture sketch follows). A high-definition camera and an infrared camera are used together, kept at a distance of 50-80 cm from the face and placed at an included angle of 140-180 degrees, so that good recognition results are obtained even in adverse environments such as backlight or poor lighting;
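A minimal, purely illustrative capture loop for this step could look as follows; the device index, frame stride and grayscale conversion are assumptions, and the pairing of a high-definition camera with an infrared camera is not reproduced here.

# Hypothetical capture sketch for step 2: grab frames from a camera and keep
# every n-th frame for recognition.
import cv2

def capture_frames(device_index: int = 0, every_nth: int = 3, max_frames: int = 100):
    """Yield grayscale frames captured from the given camera."""
    cap = cv2.VideoCapture(device_index)
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    kept, seen = 0, 0
    try:
        while kept < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            seen += 1
            if seen % every_nth == 0:        # frame-by-frame interception, thinned
                yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                kept += 1
    finally:
        cap.release()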
Step 3: preprocessing the video image to generate a preprocessed image; to facilitate expression recognition, the organ features and texture regions of the face and other predefined feature points need to be located and extracted, and the player's face region is located through these feature points;
Step 4: analyzing the facial expression, matching it against the expression features in the training model, and determining the player's current expression.
When a selection is made, the system prompts the player to make an expression; a camera picture appears in the lower-left corner of the screen, the face is framed, and real-time expression detection is performed. When the tester makes an angry expression and it is recognized by the program, the system selects the first option, which corresponds to anger, and then jumps to the corresponding game scenario; when the tester makes a happy expression and it is recognized, the second option, corresponding to happiness, is selected and the game jumps to the corresponding scenario.
Step 5: using the obtained facial expression information to control the game character, comprising:
Step 5.1: displaying the text and images of the game scenario on the screen through the game window, according to the preset game scenario;
Step 5.2: in the interaction between the player and an NPC (non-player character), the system prompts the player to make a confused expression; when the player makes the confused expression, it is recognized by the system and passes the expression error-correction verification, a predetermined scenario is triggered;
Step 5.3: when a branch selection is made in the game, the system prompts the player on the game window to make one of the expressions of anger, fear, disgust, happiness, sadness, surprise or contempt;
Step 5.4: when the player makes one of these expressions and it passes the expression error-correction verification, the system triggers the game scenario branch corresponding to that expression (anger, fear, disgust, happiness, sadness, surprise or contempt) and outputs the corresponding branch scenario to the game window (a branch-selection sketch follows these steps);
Step 5.5: in specific plots, different expressions are used to control the various movement modes of the in-game character: a happy expression moves the character forward, anger makes it jump, sadness makes it squat and surprise makes it slide, and the duration of the corresponding movement is controlled by how long the player holds the expression.
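For illustration only, the branch selection of steps 5.3 and 5.4 could be sketched as follows; the branch names and the show_scenario callable are assumptions introduced here, not part of the patent.

# Hypothetical sketch of steps 5.3/5.4: jump to the scenario branch mapped to
# a verified basic expression and output it to the game window.
BRANCHES = {
    "anger":     "scenario_confront",
    "fear":      "scenario_retreat",
    "disgust":   "scenario_refuse",
    "happiness": "scenario_accept",
    "sadness":   "scenario_console",
    "surprise":  "scenario_investigate",
    "contempt":  "scenario_mock",
}

def trigger_branch(verified_expression: str, show_scenario) -> str:
    """Trigger the scenario branch for a verified basic expression (step 5.4)."""
    branch = BRANCHES.get(verified_expression, "scenario_default")
    show_scenario(branch)          # e.g. render the branch text/images in the game window
    return branch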
In the above method, the model training specifically includes:
the convolutional neural network uses a residual structure and a depth separable convolutional structure consisting of a depth convolution and a point-by-point convolution, the main purpose of these layers being to separate the spatial cross-correlation from the channel cross-correlation. The method comprises the steps of passing two layers of 8x8 convolutional layers for an input image, wherein convolution kernels are 3x3 and have a step size of 1x1, sequentially passing through residual convolutional layers of 16x16,16x16,32x32,32x32,64x64,64x64,128x128 and 128x128, wherein each residual convolutional layer consists of two separable convolutional layers with convolution kernels of 3x3, one residual block with convolution kernels of 1x1 and having a step size of 2x2, and one maximum pooling layer with convolution kernels of 3x3 and having a step size of 2x2, and all the convolutional layers use a linear rectification function (relu) as an activation function of the convolutional layers. Finally, a Softmax function is used as an activation function of the full link layer through a global average pooling layer and the full link layer. Wherein the classification residual module modifies a desired mapping between two subsequent layers in order to learn the difference of the original features and the desired features. Thus, the desired feature h (x) is modified to solve the easier learning problem f (x) such that: h (x) ═ f (x) + x. The basic flow of model training is shown in fig. 2.
The image preprocessing specifically comprises:
Preprocess the video image of the player to be detected and extract key frames, then normalize the collected video key frames, detect the face and extract features. During normalization, to overcome the influence of complex real-world illumination on recognition, illumination normalization is performed on the image with threshold-segmentation histogram equalization, and gray-level differences and noise are removed from the edge pixels of the segmented region by feathering. An eye-region detector is then trained as an adaptive boosting (Adaboost) cascade detector, the coordinates of the eye center points are found and used as the center of the horizontal rotation of an affine transformation, and the warped face image finally obtained realizes pose normalization. Face alignment is then achieved by aligning the coordinates of the two eye centers across different images, which realizes scale normalization. Finally, to avoid interference from a complex environment, the ERT feature-point segmentation algorithm is used to effectively crop the local face region, completing the preliminary preprocessing of the image (a landmark-cropping sketch follows). In the feature-extraction stage, in order to recognize expressions correctly, an improved LBP uniform pattern is chosen to extract the texture features of the face; the target is then detected and marked, a cascade table is built from the training result of a Haar-feature classifier, and the picture to be detected is passed together with the cascade table to the target detection algorithm to obtain the set of detected faces.
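As an illustration of the ERT feature-point step, the following sketch uses dlib's shape predictor, a common ensemble-of-regression-trees (ERT) landmark implementation; the 68-point model file, margin and output size are assumptions and not part of the patent.

# Hypothetical sketch: locate facial landmarks with an ERT predictor and crop
# the local face region around them.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_face_by_landmarks(gray: np.ndarray, margin: float = 0.1, out_size: int = 96):
    """Locate facial landmarks and crop the local face region around them."""
    rects = detector(gray, 1)
    if not rects:
        return None
    shape = predictor(gray, rects[0])
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    mx, my = int((x1 - x0) * margin), int((y1 - y0) * margin)
    h, w = gray.shape
    crop = gray[max(y0 - my, 0):min(y1 + my, h), max(x0 - mx, 0):min(x1 + mx, w)]
    return cv2.resize(crop, (out_size, out_size))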
The expression error-correction verification mechanism comprises:
The player expression pictures captured frame by frame within 2 s after the system prompt are recognized, and the expression accounting for the largest proportion is taken as the recognition result. The recognition result is then shown to the user and the system checks whether the user shows the negation expression (head shake); if so, a detection error is indicated, the system prompt of steps 5.2 and 5.4 is repeated, detection is performed again, and the previously erroneous expression is excluded when matching the expressions corresponding to the pictures.
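A minimal, purely illustrative voting sketch of this mechanism might look as follows; the classifier callable, the capture window and the label names are assumptions.

# Hypothetical sketch: majority vote over the labels recognized in the frames
# captured within the prompt window, with a retry that excludes a rejected label.
from collections import Counter
from typing import Iterable, Optional

def vote_expression(frame_labels: Iterable[str],
                    excluded: Optional[set] = None) -> Optional[str]:
    """Return the most frequent expression label, ignoring excluded ones."""
    excluded = excluded or set()
    counts = Counter(l for l in frame_labels if l not in excluded)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

def verify_with_retry(capture_window, classify, user_rejects, max_retries: int = 2):
    """Vote, show the result, and retry (excluding the rejected label) if the
    player answers with the negation expression, mirroring steps 5.2/5.4."""
    excluded: set = set()
    for _ in range(max_retries + 1):
        labels = [classify(f) for f in capture_window()]    # ~2 s of frames
        result = vote_expression(labels, excluded)
        if result is not None and not user_rejects(result):
            return result
        if result is not None:
            excluded.add(result)       # exclude the previously erroneous expression
    return None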
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solution of the present invention by those skilled in the art should fall within the protection scope defined by the claims of the present invention without departing from the spirit of the present invention.

Claims (10)

1. A game interaction method based on facial expression recognition is characterized by comprising the following steps:
(1) extracting visual features by learning static images of different expression types with a convolutional neural network, and determining the relationship between facial expression changes in an image sequence and the basic facial expressions to obtain a training model;
(2) collecting video information of the player to be detected and capturing images from it frame by frame;
(3) preprocessing the video image to generate a preprocessed image;
(4) analyzing the facial expression, matching it against the expression features in the training model, and determining the player's current expression;
(5) controlling the game character through the current facial expression.
2. The game interaction method based on facial expression recognition of claim 1, wherein: in step (1), the basic facial expressions comprise anger, fear, disgust, happiness, sadness, surprise, contempt and negation.
3. The game interaction method based on facial expression recognition of claim 1, wherein: in step (2), the video information of the player to be detected is collected through cameras comprising a high-definition camera and an infrared camera; the cameras are kept 50-80 cm from the face and placed at an included angle of 140-180 degrees.
4. The game interaction method based on facial expression recognition of claim 1, wherein: in step (3), preprocessing the video image to generate a preprocessed image comprises locating and extracting the organ features and texture regions of the face as well as other predefined feature points, and locating the player's face region through these feature points.
5. The game interaction method based on facial expression recognition of claim 1, wherein: in step (5), controlling the game character through the current facial expression comprises the following steps:
(51) displaying the text and images of the game scenario on screen through the game window, according to the game scenario;
(52) in the interaction between the player and an NPC, comparing the expression prompted by the system with the expression made by the player, and triggering a preset scenario after the expression passes error-correction verification;
(53) when a branch selection is made, the system prompts the player on the game window to make one of the basic expressions;
(54) when the player makes one of the basic expressions and it passes the expression error-correction verification, the system triggers the game scenario branch corresponding to that basic expression and outputs the corresponding branch scenario to the game window;
(55) in specific plots, different expressions control the various movement modes of the in-game character, and the basic expressions control the character's actions.
6. The game interaction method based on facial expression recognition of claim 5, wherein: in step (55), the basic expressions comprise happiness, anger, sadness and surprise, where happiness moves the character forward, anger makes it jump, sadness makes it squat and surprise makes it slide, and the duration of the corresponding movement can be controlled by how long the player holds the expression.
7. The game interaction method based on facial expression recognition of claim 1, wherein: in step (3), preprocessing the video image specifically comprises:
preprocessing the video image of the player to be detected, extracting key frames, then normalizing the collected video key frames, detecting the face and extracting features.
8. The game interaction method based on facial expression recognition of claim 7, wherein the normalization processing specifically comprises:
(A) performing illumination normalization on the image with threshold-segmentation histogram equalization, and removing gray-level differences and noise from the edge pixels of the segmented region by feathering;
(B) training a human-eye region detector as an adaptive boosting cascade detector, finding the coordinates of the eye center points as the center of the horizontal rotation of an affine transformation, and finally obtaining the warped face image to realize pose normalization;
(C) realizing face alignment by aligning the coordinates of the two eye centers across different images, thereby realizing scale normalization;
(D) effectively cropping the local face region with the ERT feature-point segmentation algorithm, completing the preliminary preprocessing of the image.
9. The game interaction method based on facial expression recognition of claim 7, wherein: the feature extraction extracts the texture features of the face using the LBP uniform (equivalence) pattern, then detects and marks the target: a cascade table is built from the training result of a Haar-feature classifier, and the picture to be detected is passed together with the cascade table to the target detection algorithm to obtain the set of detected faces.
10. The game interaction method based on facial expression recognition of claim 5, wherein the expression error-correction verification comprises:
recognizing the player expression pictures captured frame by frame within a few seconds after the system prompt, computing the expression that accounts for the largest proportion and taking it as the recognition result;
then showing the recognition result to the user and detecting whether the user shows the negation expression; if so, a detection error is indicated, the system prompt of steps (52) and (54) is repeated, detection is performed again, and the previously erroneous expression is excluded when matching the expressions corresponding to the pictures.
Application CN202010766945.XA, priority date 2020-08-03, filed 2020-08-03: Game interaction method based on facial expression recognition. Published as CN111860451A (pending).

Priority Applications (1)

CN202010766945.XA (published as CN111860451A), priority date 2020-08-03, filing date 2020-08-03: Game interaction method based on facial expression recognition

Applications Claiming Priority (1)

CN202010766945.XA (published as CN111860451A), priority date 2020-08-03, filing date 2020-08-03: Game interaction method based on facial expression recognition

Publications (1)

CN111860451A, published 2020-10-30

Family

ID=72952852

Family Applications (1)

CN202010766945.XA (pending), priority date 2020-08-03, filing date 2020-08-03

Country Status (1)

Country: China (CN); publication: CN111860451A


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393599A (en) * 2007-09-19 2009-03-25 中国科学院自动化研究所 Game role control method based on human face expression
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
CN105308625A (en) * 2013-06-28 2016-02-03 高通股份有限公司 Deformable expression detector
CN104123562A (en) * 2014-07-10 2014-10-29 华东师范大学 Human body face expression identification method and device based on binocular vision
CN104123545A (en) * 2014-07-24 2014-10-29 江苏大学 Real-time expression feature extraction and identification method
CN105303200A (en) * 2014-09-22 2016-02-03 电子科技大学 Human face identification method for handheld device
CN106325501A (en) * 2016-08-10 2017-01-11 合肥泰壤信息科技有限公司 Game control method and system based on facial expression recognition technology
CN107491726A (en) * 2017-07-04 2017-12-19 重庆邮电大学 A kind of real-time expression recognition method based on multi-channel parallel convolutional neural networks
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning
CN109766759A (en) * 2018-12-12 2019-05-17 成都云天励飞技术有限公司 Emotion identification method and Related product

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347941A (en) * 2020-11-09 2021-02-09 南京紫金体育产业股份有限公司 Motion video collection intelligent generation and distribution method based on 5G MEC
CN112684889A (en) * 2020-12-29 2021-04-20 上海掌门科技有限公司 User interaction method and device
CN115068940A (en) * 2021-03-10 2022-09-20 腾讯科技(深圳)有限公司 Control method of virtual object in virtual scene, computer device and storage medium
CN113908553A (en) * 2021-11-22 2022-01-11 广州简悦信息科技有限公司 Game character expression generation method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination