CN111773676A - Method and device for determining virtual role action - Google Patents
- Publication number
- CN111773676A (application CN202010717124.7A)
- Authority
- CN
- China
- Prior art keywords
- action
- user interface
- graphical user
- virtual character
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
- A63F13/42—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/308—Details of the user interface
Abstract
The invention discloses a method and a device for determining actions of a virtual character. A graphical user interface is rendered by a display component of a terminal; the graphical user interface at least partially includes a game scene and a virtual character. The method includes: providing a touch area in the graphical user interface; in response to a touch operation acting on the touch area, acquiring a facial expression image of the user; analyzing the facial expression image to obtain an analysis result; and determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface. The invention addresses two technical problems of prior-art methods for determining virtual character actions: the operation flow is complex, and the pop-up window used to select an action during a game battle partially blocks the game picture and interrupts the player's current game state.
Description
Technical Field
The invention relates to the technical field of games, in particular to a method and a device for determining virtual character actions.
Background
In the prior art, in games such as MOBA games, a user can control the character actions of a game virtual character, for example to express joy or fear. Typically, during a game battle, the player opens a pop-up interface by tapping an action entry and then selects a target action by tapping an action option, thereby controlling the virtual character to perform the corresponding action.
However, this operation flow is complex, and the pop-up interface for selecting an action during a battle partially blocks the game picture, interrupting the player's current game state.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a method and a device for determining actions of a virtual character, to at least solve the technical problems that prior-art methods for determining virtual character actions involve a complex operation flow, and that the pop-up window used to select an action during a game battle partially blocks the game picture and thereby interrupts the player's current game state.
According to one aspect of the embodiments of the present invention, a method for determining actions of a virtual character is provided, in which a graphical user interface is rendered by a display component of a terminal, the graphical user interface at least partially including a game scene and a virtual character. The method includes: providing a touch area in the graphical user interface; in response to a touch operation acting on the touch area, acquiring a facial expression image of the user; analyzing the facial expression image to obtain an analysis result; and determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for determining actions of a virtual character, wherein a graphical user interface is rendered by a display component of a terminal, the graphical user interface at least partially including a game scene and a virtual character. The apparatus includes: a touch module for providing a touch area in the graphical user interface; an acquisition module for acquiring a facial expression image of the user in response to a touch operation acting on the touch area; an analysis module for analyzing the facial expression image to obtain an analysis result; and a determining module for determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
According to another aspect of the embodiments of the present invention, there is further provided a non-volatile storage medium that includes a stored program, wherein, when the program runs, the device in which the storage medium is located is controlled to perform any one of the above methods for determining a virtual character action.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program stored in a memory, where the program executes any one of the above methods for determining a virtual character action.
In the embodiment of the invention, a graphical user interface is rendered by a display component of a terminal, the graphical user interface at least partially including a game scene and a virtual character. A touch area is provided in the graphical user interface; in response to a touch operation acting on the touch area, a facial expression image of the user is acquired; the facial expression image is analyzed to obtain an analysis result; and the target action to be displayed by the virtual character in the graphical user interface is determined according to the analysis result. This simplifies the player's operation flow for choosing a character action during a game battle, and no other interface blocks the current game picture. The technical effects of avoiding interruption of the player's current game state and of improving game fluency are thereby achieved, solving the prior-art problems that the operation flow is complex and that the pop-up window for selecting an action during a battle partially blocks the game picture and interrupts the player's current game state.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of a method of determining virtual character actions in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative graphical user interface according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another alternative graphical user interface according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for determining an action of a virtual character according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, in order to facilitate understanding of the embodiments of the present invention, some terms or nouns referred to in the present invention will be explained as follows:
Face analysis method: a method of interpreting a request made by the user by analyzing the user's facial expression.
Virtual character action: in a game scene, a way for the player to express emotion by controlling the virtual character to perform a corresponding action, similar to the emoticons used when chatting in a communication tool.
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for determining virtual character actions, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than presented herein.
In the method for determining an action of a virtual character provided in an embodiment of the present application, a graphical user interface is obtained by rendering a display component of a terminal, where the graphical user interface at least partially includes a game scene and a virtual character, fig. 1 is a flowchart of a method for determining an action of a virtual character according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
step S102, providing a touch area in the graphical user interface;
step S104, in response to a touch operation acting on the touch area, acquiring a facial expression image of the user;
step S106, analyzing the facial expression image to obtain an analysis result;
step S108, determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
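The patent does not specify an implementation, but the flow of steps S102-S108 can be sketched as follows. Every class, function, and name here is hypothetical and invented for illustration; the "image" is reduced to a plain label standing in for a camera capture, and a real system would run an expression-recognition model in step S106.

```python
class TouchArea:
    """Hypothetical rectangular touch area provided in the GUI (step S102)."""

    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


def parse_expression(image_label):
    # Stand-in for face analysis (step S106): here the "image" is just a
    # label; a real system would analyze a captured camera frame.
    known = {"smile": "joy", "frown": "anger", "tears": "sadness"}
    return known.get(image_label)


# Hypothetical mapping from analysis result to character action (step S108).
ACTION_TABLE = {"joy": "dance", "anger": "stomp", "sadness": "cry"}


def determine_target_action(touch_point, area, captured_image):
    # Steps S102-S108 end to end: react only to touches inside the touch
    # area, then map the analysis result to the target action to display.
    if not area.contains(*touch_point):
        return None
    emotion = parse_expression(captured_image)
    return ACTION_TABLE.get(emotion)
```

A touch outside the area yields no action, mirroring the patent's requirement that acquisition is triggered only by a touch operation acting on the touch area.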
In the embodiment of the invention, a graphical user interface is rendered by a display component of a terminal, the graphical user interface at least partially including a game scene and a virtual character. A touch area is provided in the graphical user interface; in response to a touch operation acting on the touch area, a facial expression image of the user is acquired; the facial expression image is analyzed to obtain an analysis result; and the target action to be displayed by the virtual character in the graphical user interface is determined according to the analysis result.
The method and the device thereby simplify the player's operation flow for determining a character action during a game battle, and no other interface blocks the current game picture. This avoids interrupting the player's current game state and improves game fluency, solving the prior-art problems that the operation flow for determining virtual character actions is complex and that the pop-up window for selecting an action during a battle partially blocks the game picture and interrupts the player's current game state.
In this embodiment, a touch area 202 may be provided in the graphical user interface 200 shown in fig. 2. Optionally, the touch area 202 may be a touch key. During a game match, the player can operate the touch key, for example by long-pressing it, to start the front camera of the terminal and capture the player's facial expression image through the front camera. The captured facial expression image is then analyzed by the face analysis method to obtain an analysis result, and the target action to be displayed by the virtual character in the graphical user interface is determined from that result. No new pop-up interface appears, which saves space in the graphical user interface and simplifies the player's operation flow for determining a character action during a battle. Because the action is determined through the face analysis function, the current game state is not interrupted and the player's game experience is improved.
In an alternative embodiment, whether a specific muscle or muscle tissue is activated may be recognized from the shape or motion in the facial expression image, and the facial expression image may be matched against each of a plurality of pre-stored sample expression images. The target action to be displayed by the virtual character is then determined according to the matching result, and after the finger with which the player operates the touch key leaves the screen, the virtual character is controlled to perform the target action.
It should be noted that the position of the touch area 202 is not limited to that shown in fig. 2; as an optional embodiment, placing it at the upper-right corner position shown in fig. 2 makes it convenient for the player to operate.
As an optional embodiment, after determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface, the method further includes:
step S202, in response to the end of the touch operation, controlling the virtual character to execute the target action.
In the above optional embodiment, when the finger with which the player operates the touch key leaves the screen, that is, when the touch operation ends, the virtual character is controlled to perform the target action.
As an alternative embodiment, the analyzing the facial expression image to obtain an analysis result includes:
step S302, obtaining a plurality of prestored sample expression images;
step S304, matching the facial expression image with each sample expression image in the plurality of sample expression images;
step S306, determining the sample expression image matching the facial expression image.
As an alternative embodiment, matching the facial expression image against each sample expression image in the plurality of sample expression images constitutes the process of analyzing the facial expression image, and the sample expression image determined to match the facial expression image is the analysis result. The target action corresponding to the facial expression image is then determined based on the sample action corresponding to that sample expression image.
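The matching procedure of steps S302-S306 can be sketched as a nearest-neighbor search. Representing each expression image as a plain feature vector, and using squared Euclidean distance, are assumptions made for illustration; the patent does not prescribe a matching metric.

```python
def best_matching_sample(face_vec, samples):
    """Return the name of the stored sample closest to face_vec (step S306).

    samples: dict mapping sample name -> feature vector of the same length
    as face_vec (a hypothetical representation of an expression image).
    """
    def sq_dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(samples, key=lambda name: sq_dist(face_vec, samples[name]))
```

The sample name returned here plays the role of the analysis result, from which the corresponding action category is looked up.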
In an optional embodiment, determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface includes:
step S402, determining the virtual character action category corresponding to the sample expression image;
step S404, selecting the target action to be displayed from the virtual character action category.
In an optional embodiment, selecting the target action to be displayed from the virtual character action category includes at least one of:
randomly selecting the target action to be displayed from the virtual character action category;
selecting the target action to be displayed according to a preset priority of the virtual character actions;
selecting the target action to be displayed according to the display frequency of the virtual character actions;
and selecting the target action to be displayed according to the degree of match between the virtual character action and the facial expression image.
As an optional embodiment, before a touch area is provided in the graphical user interface, all virtual character actions in the game scene may be acquired and classified by type into a plurality of virtual character action categories. Then, during action matching, once the sample expression image corresponding to the facial expression image has been obtained through analysis, the virtual character action category corresponding to that sample expression image can be determined, and the target action to be displayed is selected from that category.
For example, in the embodiment of the present application, the target action to be displayed may be, but is not limited to being, selected randomly from the virtual character action category; selected according to a preset priority of the virtual character actions; selected according to the display frequency of the virtual character actions; or selected according to the degree of match between the virtual character action and the facial expression image.
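The four selection strategies listed above can be sketched in one function. The data model (each action as a dict with `priority`, `display_count`, and `match_score` fields) is hypothetical; the patent only names the selection criteria.

```python
import random


def select_action(actions, strategy, rng=None):
    """Pick the target action from a category using one of four strategies.

    actions: non-empty list of dicts with hypothetical keys "priority",
    "display_count", and "match_score"; rng lets a test inject a seeded
    random source instead of the module-level one.
    """
    if strategy == "random":
        return (rng or random).choice(actions)
    if strategy == "priority":
        return max(actions, key=lambda a: a["priority"])
    if strategy == "frequency":
        return max(actions, key=lambda a: a["display_count"])
    if strategy == "match":
        return max(actions, key=lambda a: a["match_score"])
    raise ValueError("unknown strategy: " + strategy)
```

Whether "display frequency" should favor the most-shown or the least-shown action is not stated in the patent; the sketch assumes most-shown.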
In an optional embodiment, before acquiring the facial expression image of the user, the method further includes:
step S502, providing a setting area in the graphical user interface;
step S504, responding to the setting operation acted on the setting area, and collecting the sample expression image;
step S506, storing the sample expression image.
Optionally, in this embodiment of the application, a setting area may be provided in the graphical user interface in advance. For example, an action setting option is added to the setting interface; the player taps the action setting option to start the front camera of the terminal and controls the front camera to collect sample expression images of the player, such as images expressing joy, anger, sadness, and fear. All collected sample expression images are then stored.
As an alternative embodiment, after the sample expression images have been set, the player may modify them by tapping the action setting option again.
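Steps S502-S506, together with the modification behavior just described, amount to a small keyed store for sample expression images. The class below is a hypothetical sketch; emotion labels as string keys and images as opaque values are assumptions.

```python
class SampleExpressionStore:
    """Holds the player's sample expression images, keyed by emotion label."""

    def __init__(self):
        self._samples = {}

    def save(self, emotion, image):
        # Saving under an existing label overwrites the old sample, matching
        # the described behavior that the player may modify a set sample
        # expression image via the action setting option.
        self._samples[emotion] = image

    def get(self, emotion):
        # Returns None when no sample has been set for this emotion.
        return self._samples.get(emotion)
```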
As an alternative embodiment, before providing a touch area in the graphical user interface, the method further includes:
step S602, acquiring all virtual character actions in the game scene, the virtual character actions including the target action;
step S604, classifying all the virtual character actions according to their types, the types including at least one of the following: joy, anger, sadness, and fear;
step S606, equipping the virtual character actions in the graphical user interface according to the classification result.
As an alternative embodiment, at least one virtual character action of each type is equipped in the graphical user interface.
In the above optional embodiment, before a touch area is provided in the graphical user interface, all virtual character actions in the game scene, such as actions 1 through 7 shown in fig. 3, are obtained, and each is classified into one of the following types: joy, anger, sadness, or fear. As shown in fig. 3, action 6 may be classified into the joy category. After all the virtual character actions have been classified, these actions are equipped in the game interface.
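The classification of steps S602-S606 can be sketched as grouping actions into the four emotion categories. The action names and the action-to-type mapping below are invented for illustration; the patent specifies only the four category names.

```python
# Hypothetical mapping from action name to emotion type (step S604).
ACTION_TYPES = {
    "laugh": "joy", "dance": "joy",
    "rage": "anger", "pound": "anger",
    "cry": "sadness",
    "flee": "fear",
}


def classify_actions(actions):
    """Group the given actions into the four emotion categories."""
    categories = {"joy": [], "anger": [], "sadness": [], "fear": []}
    for action in actions:
        kind = ACTION_TYPES.get(action)
        if kind is not None:  # unknown actions are left unequipped
            categories[kind].append(action)
    return categories
```

The resulting categories are what the selection step later draws the target action from.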
Optionally, at least one virtual character action of each type is equipped in the graphical user interface.
In the above alternative embodiments, joy refers to the emotional experience produced when a person achieves an expected and pursued goal; for example, laughing and dancing actions can be classified as joy. Anger refers to the emotional experience produced when the pursued goal is blocked and a desire cannot be fulfilled; for example, raging and pounding actions can be classified as anger. Sadness refers to the emotional experience produced when something loved is lost or an ideal or wish is shattered; for example, a crying action can be classified as sadness. Fear refers to the emotional experience produced when attempting, but failing, to get rid of or escape from a dangerous situation; for example, cowering and fleeing actions can be classified as fear.
According to an embodiment of the present invention, there is also provided an apparatus embodiment for implementing the above method for determining virtual character actions. The apparatus obtains a graphical user interface rendered by a display component of a terminal, the graphical user interface at least partially including a game scene and a virtual character. Fig. 4 is a schematic structural diagram of an apparatus for determining virtual character actions according to an embodiment of the present invention; as shown in fig. 4, the apparatus includes: a touch module 40, an acquisition module 42, an analysis module 44, and a determining module, wherein:
the touch module 40 is configured to provide a touch area in the graphical user interface; the acquisition module 42 is configured to acquire a facial expression image of the user in response to a touch operation acting on the touch area; the analysis module 44 is configured to analyze the facial expression image to obtain an analysis result; and the determining module is configured to determine, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
It should be noted that the above modules may be implemented by software or hardware, for example, for the latter, the following may be implemented: the modules can be located in the same processor; alternatively, the modules may be located in different processors in any combination.
It should be noted that the touch module 40, the obtaining module 42 and the analyzing module 44 correspond to steps S102 to S106 in embodiment 1, and the modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the modules described above may be implemented in a computer terminal as part of an apparatus.
It should be noted that, reference may be made to the relevant description in embodiment 1 for alternative or preferred embodiments of this embodiment, and details are not described here again.
The device for determining the action of the virtual character may further include a processor and a memory, where the touch module 40, the obtaining module 42, the analyzing module 44, and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory; one or more kernels may be provided. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to the embodiment of the application, the embodiment of the storage medium is also provided. Optionally, in this embodiment, the storage medium includes a stored program, and the device on which the storage medium is located is controlled to execute any one of the above methods for determining the virtual character action when the program runs.
Optionally, in this embodiment, the storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals, and the storage medium includes a stored program.
Optionally, when the program runs, it controls the device in which the storage medium is located to perform the following functions: providing a touch area in the graphical user interface; in response to a touch operation acting on the touch area, acquiring a facial expression image of the user; analyzing the facial expression image to obtain an analysis result; and determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
According to the embodiment of the application, the embodiment of the processor is also provided. Optionally, in this embodiment, the processor is configured to run the program, where the program executes any method for determining the virtual character action.
An embodiment of the application provides a device comprising a processor, a memory, and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the following steps: providing a touch area in the graphical user interface; in response to a touch operation acting on the touch area, acquiring a facial expression image of the user; analyzing the facial expression image to obtain an analysis result; and determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
The present application further provides a computer program product adapted, when executed on a data processing device, to perform a program initializing the following method steps: providing a touch area in the graphical user interface; in response to a touch operation acting on the touch area, acquiring a facial expression image of the user; analyzing the facial expression image to obtain an analysis result; and determining, according to the analysis result, the target action to be displayed by the virtual character in the graphical user interface.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units may be a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements should also fall within the protection scope of the present invention.
Claims (11)
1. A method for determining an action of a virtual character, wherein a graphical user interface is rendered by a display component of a terminal, the graphical user interface at least partially comprising a game scene and a virtual character, the method comprising:
providing a touch area in the graphical user interface;
in response to a touch operation acting on the touch area, acquiring a facial expression image of the user;
analyzing the facial expression image to obtain an analysis result;
and determining, according to the analysis result, a target action to be displayed by the virtual character in the graphical user interface.
2. The method of claim 1, wherein after determining the target action to be displayed by the virtual character in the graphical user interface according to the analysis result, the method further comprises:
in response to the end of the touch operation, controlling the virtual character to execute the target action.
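Claim 2 defers execution of the determined action until the touch ends. A minimal sketch of that deferral, with all names hypothetical (the claim does not specify a data model or event API):

```python
# Illustrative sketch: the target action is determined while the touch is
# held, but the character performs it only when the touch operation ends.

class TouchController:
    def __init__(self, determine_action, perform_action):
        self.determine_action = determine_action   # the steps of claim 1
        self.perform_action = perform_action       # plays the action animation
        self._pending = None

    def on_touch_down(self, face_image):
        # While the touch is in progress, only determine the target action.
        self._pending = self.determine_action(face_image)

    def on_touch_up(self):
        # In response to the end of the touch operation, execute it.
        if self._pending is not None:
            self.perform_action(self._pending)
            self._pending = None
```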
3. The method of claim 1, wherein analyzing the facial expression image to obtain an analysis result comprises:
obtaining a plurality of prestored sample expression images;
matching the facial expression image against each of the plurality of sample expression images;
and determining the sample expression image that matches the facial expression image.
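The matching step of claim 3 can be sketched as a nearest-neighbour lookup. The distance metric (sum of absolute pixel differences) and all names are assumptions for illustration; the claim permits any image-matching method:

```python
# Hypothetical sketch of claim 3: compare the captured facial expression
# image against each prestored sample image and return the best match.

def match_expression(face_image, sample_images):
    """face_image: flat pixel list; sample_images: {label: flat pixel list}."""
    def distance(sample):
        # Smaller distance = more similar image (an assumed metric).
        return sum(abs(a - b) for a, b in zip(face_image, sample))
    return min(sample_images, key=lambda label: distance(sample_images[label]))
```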
4. The method of claim 3, wherein determining the target action to be displayed by the virtual character in the graphical user interface according to the analysis result comprises:
determining a virtual character action category corresponding to the matched sample expression image;
and selecting the target action to be displayed from the virtual character action category.
5. The method of claim 4, wherein selecting the target action to be displayed from the virtual character action category comprises at least one of:
randomly selecting the target action to be displayed from the virtual character action category;
selecting the target action to be displayed according to a preset priority of the virtual character actions;
selecting the target action to be displayed according to a display frequency of the virtual character actions;
and selecting the target action to be displayed according to a degree of matching between the virtual character action and the facial expression image.
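The four selection strategies named in claim 5 can be illustrated as follows. Each action is represented by a hypothetical dict with `priority`, `shown_count`, and `match_score` fields; the claim names only the criteria, not a data model, so these fields and the ordering conventions are assumptions:

```python
# Illustrative selection strategies from claim 5; all field names are assumed.
import random

def select_random(actions):
    # Random selection from the action category.
    return random.choice(actions)

def select_by_priority(actions):
    # Lower number = higher preset priority (an assumed convention).
    return min(actions, key=lambda a: a["priority"])

def select_by_frequency(actions):
    # Prefer the action displayed least often, to vary the character.
    return min(actions, key=lambda a: a["shown_count"])

def select_by_match(actions):
    # Prefer the action whose match with the facial expression image is best.
    return max(actions, key=lambda a: a["match_score"])
```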
6. The method of claim 3, wherein before acquiring the facial expression image of the user, the method further comprises:
providing a setting area in the graphical user interface;
collecting the sample expression images in response to a setting operation acting on the setting area;
and storing the sample expression images.
7. The method of claim 1, wherein prior to providing a touch area in the graphical user interface, the method further comprises:
acquiring all virtual character actions in the game scene, wherein the virtual character actions comprise the target action;
classifying all the virtual character actions according to their types, wherein the types comprise at least one of: a joy type, an anger type, a sadness type, and a fear type;
and configuring the virtual character actions in the graphical user interface according to the classification result.
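The classification step of claim 7 can be sketched as grouping actions by emotion type before they are exposed in the interface. The pair-based input format is an assumption for illustration:

```python
# Hypothetical sketch of claim 7: group all virtual character actions in the
# game scene by emotion type; the grouping then drives GUI configuration.

EMOTION_TYPES = ("joy", "anger", "sadness", "fear")

def classify_actions(actions):
    """actions: iterable of (action_name, emotion_type) pairs."""
    categories = {t: [] for t in EMOTION_TYPES}
    for name, emotion in actions:
        if emotion not in categories:
            raise ValueError(f"unknown emotion type: {emotion}")
        categories[emotion].append(name)
    return categories
```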
8. The method of claim 7, wherein configuring the virtual character actions in the graphical user interface according to the classification result comprises:
configuring, in the graphical user interface, at least one virtual character action from each virtual character action category according to the classification result.
9. An apparatus for determining an action of a virtual character, wherein a graphical user interface is rendered by a display component of a terminal, the graphical user interface at least partially comprising a game scene and a virtual character, the apparatus comprising:
a touch module, configured to provide a touch area in the graphical user interface;
an acquisition module, configured to acquire a facial expression image of the user in response to a touch operation acting on the touch area;
an analysis module, configured to analyze the facial expression image to obtain an analysis result;
and a determining module, configured to determine, according to the analysis result, a target action to be displayed by the virtual character in the graphical user interface.
10. A non-volatile storage medium comprising a stored program, wherein, when the program runs, a device in which the non-volatile storage medium is located is controlled to perform the method for determining a virtual character action according to any one of claims 1 to 8.
11. A processor configured to run a program stored in a memory, wherein the program, when running, performs the method for determining a virtual character action according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010717124.7A CN111773676B (en) | 2020-07-23 | 2020-07-23 | Method and device for determining virtual character actions |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111773676A true CN111773676A (en) | 2020-10-16 |
CN111773676B CN111773676B (en) | 2024-06-21 |
Family
ID=72763891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010717124.7A Active CN111773676B (en) | 2020-07-23 | 2020-07-23 | Method and device for determining virtual character actions |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111773676B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112528072A (en) * | 2020-12-02 | 2021-03-19 | 泰州市朗嘉馨网络科技有限公司 | Object type analysis platform and method applying big data storage |
CN112528072B (en) * | 2020-12-02 | 2021-06-22 | 深圳市三希软件科技有限公司 | Object type analysis platform and method applying big data storage |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013027893A1 (en) * | 2011-08-22 | 2013-02-28 | Kang Jun-Kyu | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching the emotional content using same |
CN105797376A (en) * | 2014-12-31 | 2016-07-27 | 深圳市亿思达科技集团有限公司 | Method and terminal for controlling role model behavior according to expression of user |
WO2017219450A1 (en) * | 2016-06-21 | 2017-12-28 | 中兴通讯股份有限公司 | Information processing method and device, and mobile terminal |
CN107959789A (en) * | 2017-11-10 | 2018-04-24 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108563327A (en) * | 2018-03-26 | 2018-09-21 | 广东欧珀移动通信有限公司 | Augmented reality method, apparatus, storage medium and electronic equipment |
CN109876450A (en) * | 2018-12-14 | 2019-06-14 | 深圳壹账通智能科技有限公司 | Implementation method, server, computer equipment and storage medium based on AR game |
CN110180168A (en) * | 2019-05-31 | 2019-08-30 | 网易(杭州)网络有限公司 | A kind of display methods and device, storage medium and processor of game picture |
CN111240482A (en) * | 2020-01-10 | 2020-06-05 | 北京字节跳动网络技术有限公司 | Special effect display method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110611776B (en) | Special effect processing method, computer device and computer storage medium | |
CN108273265A (en) | The display methods and device of virtual objects | |
CN110545442B (en) | Live broadcast interaction method and device, electronic equipment and readable storage medium | |
JP5736601B2 (en) | Method and apparatus for automatically reproducing facial expressions with virtual images | |
CN111240482B (en) | Special effect display method and device | |
CN111714874B (en) | Control state switching method and device and electronic equipment | |
CN110348193A (en) | Verification method, device, equipment and storage medium | |
CN108905193A (en) | Game manipulates processing method, equipment and storage medium | |
CN104765520B (en) | A kind of information processing method and device | |
CN112274909A (en) | Application operation control method and device, electronic equipment and storage medium | |
CN105447355A (en) | Terminal application control method and terminal | |
WO2023093451A1 (en) | Live-streaming interaction method and apparatus in game, and computer device and storage medium | |
CN113301385A (en) | Video data processing method and device, electronic equipment and readable storage medium | |
CN113648650A (en) | Interaction method and related device | |
CN111773676B (en) | Method and device for determining virtual character actions | |
CN112619147A (en) | Game equipment replacing method and device and terminal device | |
CN106984044B (en) | Method and equipment for starting preset process | |
CN110302535B (en) | Game thread recording method, device, equipment and readable storage medium | |
CN111701239B (en) | Message display control method and device in game | |
CN113271486A (en) | Interactive video processing method and device, computer equipment and storage medium | |
CN111914763A (en) | Living body detection method and device and terminal equipment | |
CN109388932B (en) | Verification method, terminal device and data processing method | |
CN109157831A (en) | Implementation method, device, intelligent terminal and the computer readable storage medium of game | |
CN112749357A (en) | Interaction method and device based on shared content and computer equipment | |
CN111625101A (en) | Display control method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||