CN115554701A - Control method and apparatus for a virtual character, computer device, and storage medium


Info

Publication number
CN115554701A
Authority
CN
China
Prior art keywords: virtual, user, real, picture, picture element
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202211184556.1A
Other languages
Chinese (zh)
Inventor
梁慕娟
刘万军
曾宇骋
张燕秋
Current Assignee (the listed assignee may be inaccurate)
Anhui Shangquwan Network Technology Co., Ltd.
Original Assignee
Anhui Shangquwan Network Technology Co., Ltd.
Application filed by Anhui Shangquwan Network Technology Co., Ltd.
Priority to CN202211184556.1A
Publication of CN115554701A


Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a control method and apparatus for a virtual character, a computer device, a virtual character interaction system, and a storage medium. The method comprises the following steps: in response to receiving a user interaction picture output by an image acquisition device, performing element disassembly processing on the user interaction picture to obtain a real picture element set; performing element identification matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; and generating a virtual character control instruction according to the virtual picture element set, and controlling the virtual character to perform interactive actions according to the virtual character control instruction. By adopting the method, the interactive content between the virtual character and the user can be enriched.

Description

Control method and apparatus for a virtual character, computer device, and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling a virtual character, a computer device, a virtual character interaction system, and a storage medium.
Background
With the development of Artificial Intelligence (AI) technology, virtual characters in a virtual world appear in different application scenarios such as games, advertising, dialogue, performances, and live webcasts, for example interactive-game virtual anchors, large-screen intelligent triage interaction in hospitals, interactive virtual receptionists for enterprises, and intelligent interactive figures at exhibitions.
However, in the prior art, the interactive content between a virtual character and the user is relatively limited, resulting in a lack of deep interaction between the user and the virtual character.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method and an apparatus for controlling a virtual character, a computer device, a virtual character interaction system, and a storage medium, which can enrich the content of interaction between the virtual character and a user.
In a first aspect, a method for controlling a virtual character is provided, where the method includes:
in response to receiving a user interaction picture output by an image acquisition device, performing element disassembly processing on the user interaction picture to obtain a real picture element set;
performing element identification matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set;
and generating a virtual character control instruction according to the virtual picture element set, and controlling the virtual character to perform interactive actions according to the virtual character control instruction.
In one embodiment, the virtual picture element database comprises at least one virtual picture element sub-database; based on a preset virtual picture element database, carrying out element identification matching on a real picture element set to obtain a virtual picture element set, comprising the following steps: identifying the category of each element in the real picture element set; and respectively carrying out element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive action sub-database; performing category identification on each element in the set of real picture elements comprises identifying a user real action element from the set of real picture elements; performing element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set comprises: performing element identification matching on the user real action element based on the virtual interactive action sub-database to obtain a virtual interactive action element. The set of virtual picture elements includes the virtual interactive action element; the user real action element is used for representing head actions and/or limb actions displayed by the user in the user interaction picture; the virtual interactive action element is used for characterizing the head and/or limb actions required by the virtual character to complete the interactive action.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive prop sub-database; performing category identification on each element in the set of real picture elements comprises identifying a user real prop element from the set of real picture elements; performing element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set comprises the following steps: identifying the user real prop element to obtain a prop type, a prop identifier, and a prop color; matching a corresponding interactive prop frame in the virtual interactive prop sub-database according to the prop type; and rendering the interactive prop frame according to the prop identifier and the prop color to obtain a virtual interactive prop element. The virtual picture element set further comprises the virtual interactive prop element; the user real prop element is used for representing the prop used by the user in the user interaction picture; the virtual interactive prop element is used for representing the prop required by the virtual character to complete the interactive action.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive expression sub-database; performing category identification on each element in the set of real picture elements comprises identifying a user facial expression element from the set of real picture elements; performing element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set comprises: performing element identification matching on the user facial expression element based on the virtual interactive expression sub-database to obtain a virtual interactive expression element. The virtual picture element set further comprises the virtual interactive expression element; the user facial expression element is used for representing the facial expression shown by the user in the user interaction picture; the virtual interactive expression element is used for representing the facial expression required by the virtual character to complete the interactive action.
In one embodiment, the method further includes: in response to receiving user interaction audio output by audio acquisition equipment, identifying the user interaction audio to obtain a real audio keyword set; correcting each element in the virtual picture element set according to the real audio keyword set to obtain a corrected virtual picture element set; generating a virtual role control instruction according to the virtual picture element set, and controlling the virtual role to perform interactive action according to the virtual role control instruction, wherein the method comprises the following steps: and generating a virtual character control instruction according to the corrected virtual picture element set, and controlling the virtual character to perform interactive action according to the virtual character control instruction.
In one embodiment, the set of real picture elements further comprises a user real background element; the method further comprises the following steps: generating a background rendering instruction according to the real background element of the user, and rendering the background of the virtual character according to the background rendering instruction; the user real background element is used for representing the environment background where the user is located in the user interaction picture.
In one embodiment, the virtual character control instructions comprise a first virtual character control instruction and a second virtual character control instruction; generating a virtual character control instruction according to the virtual picture element set and controlling the virtual character to perform interactive actions according to the virtual character control instruction comprises the following steps: in response to selection of the interface display mode, generating a first virtual character control instruction according to the virtual picture element set, and controlling the virtual character displayed on the user interface to perform interactive actions according to the first virtual character control instruction; and in response to selection of the holographic projection mode, generating a second virtual character control instruction according to the virtual picture element set, and controlling the virtual character presented by the holographic projection device to perform interactive actions according to the second virtual character control instruction.
In a second aspect, a virtual character control apparatus is provided, which includes a screen disassembling module, an identification matching module, and a character control module.
The picture disassembling module is used for, in response to receiving a user interaction picture output by an image acquisition device, performing element disassembly processing on the user interaction picture to obtain a real picture element set; the identification matching module is used for performing element identification matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; and the character control module is used for generating a virtual character control instruction according to the virtual picture element set and controlling the virtual character to perform interactive actions according to the virtual character control instruction.
In a third aspect, a computer device is provided, the computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of any of the above method embodiments when executing the computer program.
In a fourth aspect, a virtual character interaction system is provided, where the virtual character interaction system includes an image capture device and a computer device in any of the above device embodiments.
The image acquisition equipment is electrically connected with the computer equipment and used for acquiring and outputting the user interaction pictures.
In a fifth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the above-described method embodiments.
With the control method and apparatus for a virtual character, the computer device, the virtual character interaction system, and the storage medium, in response to receiving a user interaction picture output by the image acquisition device, element disassembly processing is performed on the user interaction picture to obtain a real picture element set; element identification matching is then performed on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; a virtual character control instruction is then generated according to the virtual picture element set, and the virtual character is controlled to perform interactive actions according to that instruction. The interactive content between the virtual character and the user is thereby enriched, the flexibility and realism of the virtual character's interaction are improved, the virtual character's performance becomes more vivid and natural, and the deep interaction between the user and the virtual character is strengthened.
Drawings
FIG. 1 is a diagram of an application environment of a control method of a virtual character in one embodiment;
FIG. 2 is a first flowchart of a control method for a virtual character according to an embodiment;
FIG. 3 is a flowchart illustrating a step of performing element recognition and matching on a real picture element set based on a preset virtual picture element database to obtain a virtual picture element set according to an embodiment;
FIG. 4 is a schematic flowchart of the step of performing element identification matching on each element, based on the virtual picture element sub-database corresponding to the category identification result of each element, to obtain a virtual picture element set, in one embodiment;
FIG. 5 is a second flowchart of a control method of a virtual character according to an embodiment;
FIG. 6 is a third flowchart of a control method of a virtual character according to an embodiment;
FIG. 7 is a fourth flowchart illustrating a control method for a virtual character according to an embodiment;
FIG. 8 is a block diagram showing the construction of a control apparatus for a virtual character in one embodiment;
FIG. 9 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 10 is a diagram showing a first internal configuration of the virtual character interaction system in accordance with one embodiment;
FIG. 11 is a diagram showing a second internal configuration of the virtual character interaction system in accordance with an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Embodiments of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first resistance may be referred to as a second resistance, and similarly, a second resistance may be referred to as a first resistance, without departing from the scope of the present application. The first resistance and the second resistance are both resistances, but they are not the same resistance.
It is to be understood that "connection" in the following embodiments is to be understood as "electrical connection", "communication connection", and the like if the connected circuits, modules, units, and the like have communication of electrical signals or data with each other.
As used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises/comprising," "includes" or "including," etc., specify the presence of stated features, integers, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof.
The control method of the virtual character provided by the application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In a first aspect, as shown in FIG. 2, a method for controlling a virtual character is provided. The method is described here as applied to the server in FIG. 1 by way of example, and includes the following steps 202 to 206.
Step 202, in response to receiving the user interaction picture output by the image acquisition device, performing element disassembly processing on the user interaction picture to obtain a real picture element set.
The user interaction picture refers to a picture used by a user to perform an interaction action with a virtual character, and may be, but is not limited to, a user game interaction picture, a user advertisement interaction picture, a user performance interaction picture or a user live broadcast picture. The set of real picture elements may include, but is not limited to, user real action elements, user real prop elements, user facial expression elements, and/or user real background elements.
In one specific example, the user real action element is used for representing head actions and/or limb actions displayed by the user in the user interaction picture, the user real prop element is used for representing the prop used by the user in the user interaction picture, the user facial expression element is used for representing the facial expression displayed by the user in the user interaction picture, and the user real background element is used for representing the environment background where the user is located in the user interaction picture. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
It can be understood that the user interaction picture can be collected and output by an image acquisition device electrically connected with the server. The server receives the user interaction picture output by the image acquisition device and performs element disassembly processing on the user interaction picture, thereby obtaining a real picture element set.
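For illustration only, the following is a minimal Python sketch of one way the element disassembly step could be organized. The detector functions, labels, and confidence values are hypothetical placeholders for whatever pose, object, face, and scene models the server actually runs; none of them are specified by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class RealPictureElement:
    category: str     # "action" | "prop" | "expression" | "background"
    label: str        # detector output, e.g. "send_flowers"
    confidence: float

# Stub detectors standing in for the unspecified recognition models;
# each returns (label, confidence) pairs found in the frame.
def detect_actions(frame):     return [("send_flowers", 0.91)]
def detect_props(frame):       return [("bouquet", 0.87)]
def detect_expressions(frame): return [("happy", 0.95)]
def detect_background(frame):  return [("office", 0.82)]

def disassemble_frame(frame) -> list:
    """Element disassembly (step 202): split one user interaction
    picture into a real picture element set covering actions, props,
    expressions, and background."""
    detectors = {
        "action": detect_actions,
        "prop": detect_props,
        "expression": detect_expressions,
        "background": detect_background,
    }
    return [
        RealPictureElement(category, label, confidence)
        for category, detect in detectors.items()
        for label, confidence in detect(frame)
    ]
```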
Step 204, performing element identification matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set.
The server is pre-configured with a preset virtual picture element database, which may include, but is not limited to, at least one virtual picture element sub-database. The server can perform element identification matching on the real picture element set based on the preset virtual picture element database to obtain a virtual picture element set. The identification matching may be implemented using, but is not limited to, a recognition matching algorithm or a pre-trained recognition matching neural network model.
In one specific example, the virtual picture element sub-database may include, but is not limited to, a virtual interactive action sub-database, a virtual interactive prop sub-database, and/or a virtual interactive expression sub-database. The virtual interactive action sub-database stores virtual interactive action elements, which represent the head actions and/or limb actions required by the virtual character to complete an interactive action. The virtual interactive prop sub-database stores virtual interactive prop elements, which represent the props required by the virtual character to complete an interactive action. The virtual interactive expression sub-database stores virtual interactive expression elements, which represent the facial expressions required by the virtual character to complete an interactive action. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
In one embodiment, as shown in fig. 3, the virtual picture element database includes at least one virtual picture element sub-database. Based on a preset virtual picture element database, performing element identification matching on the real picture element set to obtain a virtual picture element set, including step 301 and step 302.
Step 301, identifying the category of each element in the real picture element set;
Step 302, performing element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element, to obtain a virtual picture element set.
The server can perform category identification on each element in the real picture element set, and obtain a category identification result of each element; then, element identification matching can be performed on each element based on the virtual picture element sub-database corresponding to the category identification result of each element, so as to obtain a virtual picture element set.
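A minimal sketch of steps 301 and 302, under the assumption that each sub-database can be reduced to a lookup table from a recognized real-element label to a virtual picture element; the table contents below are illustrative, not taken from the disclosure.

```python
# Hypothetical sub-databases, one per element category (the background
# category is handled by the rendering path rather than matched here).
SUB_DATABASES = {
    "action":     {"send_flowers": "receive_flowers_animation"},
    "prop":       {"bouquet": "virtual_bouquet_frame"},
    "expression": {"happy": "happy_expression_blendshapes"},
}

def match_elements(real_elements):
    """Steps 301-302: route each (category, label) element to the
    sub-database matching its category identification result, then
    collect the matched virtual picture elements into a set."""
    virtual_set = []
    for category, label in real_elements:
        sub_db = SUB_DATABASES.get(category)
        if sub_db and label in sub_db:
            virtual_set.append(sub_db[label])
    return virtual_set

# match_elements([("action", "send_flowers"), ("expression", "happy")])
# -> ["receive_flowers_animation", "happy_expression_blendshapes"]
```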
In this embodiment, category identification is performed on each element in the real picture element set; element identification matching is then performed on each element based on the virtual picture element sub-database corresponding to its category identification result to obtain a virtual picture element set. This improves the richness of the virtual picture element set, indirectly enriches the interactive content between the virtual character and the user, and strengthens the deep interaction between the user and the virtual character.
Step 206, generating a virtual character control instruction according to the virtual picture element set, and controlling the virtual character to perform interactive actions according to the virtual character control instruction.
Specifically, the server generates a virtual character control instruction according to the virtual picture element set, and then controls the virtual character to perform interactive actions according to the virtual character control instruction. In one specific example, the virtual character and its interactive actions may be presented via a user interface shown on a display device, or presented in real space via a holographic projection device. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
Based on the above, the control method of the virtual character, in response to receiving the user interaction picture output by the image acquisition device, performs element disassembly processing on the user interaction picture to obtain a real picture element set; then performs element identification matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; and then generates a virtual character control instruction according to the virtual picture element set, so that the virtual character can be controlled to perform interactive actions according to the virtual character control instruction. The interactive content between the virtual character and the user is thereby enriched, the flexibility and realism of the virtual character's interaction are improved, the virtual character's performance becomes more vivid and natural, and the deep interaction between the user and the virtual character is strengthened.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive action sub-database. The step of identifying the category of each element in the real picture element set comprises the following steps: user real action elements are identified from the set of real picture elements.
Specifically, the set of real picture elements may include, but is not limited to, user real action elements, user real prop elements, user facial expression elements, and/or user real background elements. While performing category identification on each element in the set, the server may identify the user real action element using, but not limited to, a category identification algorithm.
Respectively carrying out element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set, wherein the step of obtaining the virtual picture element set comprises the following steps: and carrying out element identification matching on the real action elements of the user based on the virtual interaction action sub-database to obtain the virtual interaction action elements.
The virtual picture element set comprises virtual interactive action elements, and the user real action elements are used for representing head actions and/or body actions displayed by a user in the user interactive picture; the virtual interactive action elements are used to characterize the head and/or limb actions required by the virtual character to complete the interactive action.
Specifically, in the process that the server performs element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain the virtual picture element set, the server may perform element identification matching on the user real action elements identified from the real picture element set based on the virtual interaction action sub-database, so as to obtain the virtual interaction action elements in the virtual picture element set.
In a specific example, the user real action element may be, but is not limited to, a flower-sending action element, a lottery-ticket-sending action element, a dance action element, or a hand-raising cheering action element. Suppose the server identifies the user real action element from the real picture element set as a flower-sending action element; the server can then perform element identification matching on the flower-sending action element based on the virtual interactive action sub-database, obtaining a flower-receiving interactive action element as the virtual interactive action element, which is stored in the virtual picture element set, and the virtual character is displayed completing the flower-receiving interactive action. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
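The disclosure leaves the matching algorithm open ("a recognition matching algorithm or a pre-trained recognition matching neural network model"); the sketch below assumes one concrete possibility, nearest-neighbour matching of a pose feature vector against templates stored in the virtual interactive action sub-database. The feature dimensions, templates, and threshold are invented for illustration.

```python
import math

# Hypothetical virtual interactive action sub-database: each entry pairs
# a pose-feature template with the character animation that answers it.
ACTION_SUB_DB = {
    "receive_flowers": [0.9, 0.1, 0.3],
    "dance_along":     [0.2, 0.8, 0.5],
    "cheer_back":      [0.1, 0.3, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def match_action(user_action_features, threshold=0.7):
    """Return the virtual interactive action element whose template is
    closest to the user's real action features, or None when nothing
    exceeds the threshold (no confident match)."""
    best_name, best_score = None, threshold
    for name, template in ACTION_SUB_DB.items():
        score = cosine(user_action_features, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# e.g. match_action([0.88, 0.15, 0.25]) -> "receive_flowers"
```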
In this embodiment, the user real action element is identified from the real picture element set; element identification matching is then performed on the user real action element based on the virtual interactive action sub-database to obtain the virtual interactive action element. The corresponding virtual interactive action element is thus accurately matched to the user real action element in the real picture element set, which improves the flexibility and realism of the virtual character's interaction, makes the virtual character's performance more vivid and natural, and strengthens the deep interaction between the user and the virtual character.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive prop sub-database. Performing category identification on each element in the set of real picture elements comprises identifying a user real prop element from the set of real picture elements.
Specifically, the set of real picture elements may include, but is not limited to, user real action elements, user real prop elements, user facial expression elements, and/or user real background elements. While performing category identification on each element in the set, the server may identify the user real prop element using, but not limited to, a category identification algorithm.
As shown in FIG. 4, performing element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set includes steps 401 to 403.
Step 401, identifying the user real prop element to obtain a prop type, a prop identifier, and a prop color.
Step 402, matching a corresponding interactive prop frame in the virtual interactive prop sub-database according to the prop type.
Step 403, rendering the interactive prop frame according to the prop identifier and the prop color to obtain a virtual interactive prop element.
The virtual picture element set further comprises the virtual interactive prop element; the user real prop element represents the prop used by the user in the user interaction picture; the virtual interactive prop element represents the prop required by the virtual character to complete the interactive action. In performing element identification matching on each element based on the virtual picture element sub-database corresponding to its category identification result, the server can identify the user real prop element to obtain a prop type, a prop identifier, and a prop color; then match a corresponding interactive prop frame in the virtual interactive prop sub-database according to the prop type; and then render the interactive prop frame according to the prop identifier and the prop color to obtain the virtual interactive prop element in the virtual picture element set.
In a specific example, the user real prop element may be, but is not limited to, a real flower element, a real medal element, a real water cup element, and the like. Suppose the server identifies the user real prop element from the real picture element set as a real flower element; the real flower element is then identified, giving a prop type of bouquet, a prop identifier of a "flower language" brand mark, and a prop color of dark red. Next, according to the bouquet prop type, the corresponding interactive prop frame matched in the virtual interactive prop sub-database is a virtual bouquet frame. Finally, the brand-mark prop identifier and the dark-red prop color are rendered onto the virtual bouquet frame to obtain the corresponding virtual interactive prop element, which can be displayed in the virtual character's hands. This improves the flexibility and realism of the virtual character's interaction, makes the virtual character's performance more vivid and natural, and strengthens the deep interaction between the user and the virtual character. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
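A minimal sketch of steps 401 to 403, under the assumption that a prop frame is an asset keyed by prop type and that "rendering" reduces to attaching the identifier and color to that frame; all asset names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VirtualInteractiveProp:
    frame: str        # wireframe matched by prop type (step 402)
    identifier: str   # marking rendered onto the frame (step 403)
    color: str

# Hypothetical virtual interactive prop sub-database keyed by prop type.
PROP_FRAMES = {"bouquet": "bouquet_frame_v1", "medal": "medal_frame_v1"}

def build_virtual_prop(prop_type, prop_id, prop_color):
    """Steps 401-403: identification has produced type, identifier, and
    color; match the frame by type, then render identifier and color
    onto it to obtain the virtual interactive prop element."""
    frame = PROP_FRAMES.get(prop_type)
    if frame is None:
        return None   # no interactive prop frame for this prop type
    return VirtualInteractiveProp(frame, prop_id, prop_color)

# e.g. build_virtual_prop("bouquet", "flower-language brand", "dark red")
```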
In this embodiment, the user real prop element is identified to obtain a prop type, a prop identifier, and a prop color; a corresponding interactive prop frame is then matched in the virtual interactive prop sub-database according to the prop type; the interactive prop frame is then rendered according to the prop identifier and the prop color to obtain the virtual interactive prop element. This improves the flexibility and realism of the virtual character's interaction, makes the virtual character's performance more vivid and natural, and strengthens the deep interaction between the user and the virtual character.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive expression sub-database. Performing category identification on each element in the real picture element set comprises identifying a user facial expression element from the set of real picture elements.
Specifically, the set of real picture elements may include, but is not limited to, user real action elements, user real prop elements, user facial expression elements, and/or user real background elements. While performing category identification on each element in the set, the server may identify the user facial expression element using, but not limited to, a category identification algorithm.
Respectively carrying out element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set, wherein the step of obtaining the virtual picture element set comprises the following steps: and performing element identification matching on the facial expression elements of the user based on the virtual interactive expression sub-database to obtain the virtual interactive expression elements.
The virtual picture element set further comprises the virtual interactive expression element; the user facial expression element represents the facial expression shown by the user in the user interaction picture; the virtual interactive expression element represents the facial expression required by the virtual character to complete the interactive action.
Specifically, the server performs element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set, and may perform element identification matching on the facial expression elements of the user based on the virtual interactive expression sub-database to obtain virtual interactive expression elements in the virtual picture element set.
In one specific example, the user facial expression element may be determined from the eye status, mouth status, and/or eyebrow status in the user interaction picture. User facial expression elements may include happy expression elements, sad expression elements, surprised expression elements, and the like. Suppose the server identifies the user facial expression element from the real picture element set as a happy expression element; element identification matching can then be performed on this happy expression element based on the virtual interactive expression sub-database, and the obtained virtual interactive expression element is a happy interactive expression element, so that the virtual character can display a happy interactive expression. This improves the flexibility and realism of the virtual character's expression interaction, makes the expressions demonstrated by the virtual character more vivid and natural, and strengthens the deep interaction between the user and the virtual character. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
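As the example notes, the facial expression element may be determined from eye, mouth, and/or eyebrow status; the rule set below is a deliberately simplified, assumed classification for illustration only, not the disclosure's method.

```python
def classify_expression(eye: str, mouth: str, eyebrow: str) -> str:
    """Map assumed eye/mouth/eyebrow status labels to a user facial
    expression element; a real system would use a trained model."""
    if mouth == "corners_up":
        return "happy"
    if eyebrow == "raised" and mouth == "open":
        return "surprised"
    if mouth == "corners_down":
        return "sad"
    return "neutral"

# e.g. classify_expression("narrowed", "corners_up", "relaxed") -> "happy"
```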
In this embodiment, the user facial expression element is identified from the real picture element set; then, in performing element identification matching on each element to obtain the virtual picture element set, element identification matching can be performed on the user facial expression element based on the virtual interactive expression sub-database to obtain the virtual interactive expression element in the virtual picture element set. This improves the flexibility and realism of the virtual character's expression interaction, makes the expressions demonstrated by the virtual character more vivid and natural, and strengthens the deep interaction between the user and the virtual character.
In one embodiment, as shown in FIG. 5, the method further includes steps 501 and 502.
Step 501, in response to receiving user interaction audio output by an audio acquisition device, identifying the user interaction audio to obtain a real audio keyword set.
Step 502, correcting each element in the virtual picture element set according to the real audio keyword set to obtain a corrected virtual picture element set.
It can be understood that the user interaction audio can be captured and output by an audio acquisition device electrically connected to the server. On receiving the user interaction audio output by the audio acquisition device, the server can recognize the user interaction audio to obtain a real audio keyword set; each element in the virtual picture element set is then corrected according to the real audio keyword set to obtain a corrected virtual picture element set. This avoids matching errors in the virtual picture element set caused by an occlusion in the user interaction picture, a recognition error on the picture, or corrupted data in the picture during control of the virtual character.
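A minimal sketch of steps 501 and 502, assuming the virtual picture element set can be keyed by category and that each recognized audio keyword maps to the element it confirms; the keyword table mirrors the flower/ice-cream example given below and is otherwise invented.

```python
# Hypothetical map from real audio keywords to the (category, element)
# they confirm; a keyword overrides a conflicting visual match.
KEYWORD_TO_ELEMENT = {
    "flower": ("prop", "virtual_bouquet"),
    "dance":  ("action", "dance_along"),
}

def correct_elements(virtual_set, audio_keywords):
    """Steps 501-502: virtual_set maps category -> matched element; an
    occluded bouquet misread as ice cream is corrected once the audio
    contains the keyword 'flower'."""
    corrected = dict(virtual_set)
    for keyword in audio_keywords:
        if keyword in KEYWORD_TO_ELEMENT:
            category, element = KEYWORD_TO_ELEMENT[keyword]
            corrected[category] = element
    return corrected

# correct_elements({"prop": "virtual_ice_cream", "expression": "happy"},
#                  ["flower"])
# -> {"prop": "virtual_bouquet", "expression": "happy"}
```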
And generating a virtual character control instruction according to the virtual picture element set, and controlling the virtual character to perform an interactive action according to the virtual character control instruction, including step 503.
Step 503, generating a virtual character control instruction according to the corrected virtual picture element set, and controlling the virtual character to perform an interactive action according to the virtual character control instruction.
Specifically, the server may generate a virtual character control instruction according to the corrected virtual picture element set, and control the virtual character to perform interactive actions according to the virtual character control instruction. In one specific example, the virtual character and its interactive actions may be presented via a user interface shown on a display device, or presented in real space via a holographic projection device. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
In a specific example, the user shows a real flower element to the image acquisition device, but because of an occlusion in the user interaction picture, a recognition error on the picture, or corrupted data in the picture, the server may determine, after performing category identification on each element in the real picture element set, that the user real prop element is a real ice cream element. At this point the user speaks the user interaction audio "Flowers for you!"; the server can recognize this user interaction audio to obtain a real audio keyword set comprising the keyword "flower", and then correct the virtual picture element set, which wrongly contains an ice cream element, according to that keyword set, obtaining a corrected virtual picture element set containing the flower element instead. Matching errors in the virtual picture element set caused by an occlusion, a recognition error, or corrupted data in the user interaction picture during control of the virtual character are thereby avoided. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
In this embodiment, on receiving the user interaction audio output by the audio acquisition device, the user interaction audio is recognized to obtain a real audio keyword set; each element in the virtual picture element set is then corrected according to the real audio keyword set to obtain a corrected virtual picture element set; a virtual character control instruction is generated according to the corrected virtual picture element set, and the virtual character is controlled to perform interactive actions accordingly. This avoids matching errors in the virtual picture element set caused by an occlusion in the user interaction picture, a recognition error on the picture, or corrupted data in the picture during control of the virtual character, ensures the accuracy of the virtual picture element set, further improves the realism and accuracy of the virtual character, makes the virtual character's performance more true to life and vivid, and strengthens the deep interaction between the user and the virtual character.
In one embodiment, as shown in FIG. 6, the set of real picture elements also includes a user real background element. The method further comprises step 601.
Step 601, generating a background rendering instruction according to the real background element of the user, and rendering the background of the virtual character according to the background rendering instruction.
The user real background element is used for representing the environment background where the user is located in the user interaction picture. The server can automatically generate a background rendering instruction according to the real background elements of the user, and render the background of the virtual character according to the background rendering instruction, so that the reality of the background of the virtual character is improved, and the deep interaction effect between the user and the virtual character is improved.
In a specific example, element disassembly processing is performed on the user interaction picture, and the user real background element in the resulting real picture element set is a user office background element; a background rendering instruction can then be generated according to the user office background element, and the background where the virtual character is located is rendered according to that instruction, so that the background displayed behind the virtual character on the user interface looks more real. The above is only a specific example; in practical applications the configuration is flexible according to user requirements, and is not limited here.
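For illustration, a sketch of step 601 assuming the background rendering instruction is a small structured message naming a scene asset; the asset table and instruction format are assumptions, not part of the disclosure.

```python
# Hypothetical table from user real background elements to scene assets.
BACKGROUND_ASSETS = {
    "office":      "scene_office.asset",
    "living_room": "scene_living_room.asset",
}

def make_background_instruction(background_label):
    """Step 601: turn the user real background element into a rendering
    instruction for the virtual character's scene."""
    asset = BACKGROUND_ASSETS.get(background_label, "scene_default.asset")
    return {"op": "render_background", "asset": asset}

# e.g. make_background_instruction("office")
# -> {"op": "render_background", "asset": "scene_office.asset"}
```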
In this embodiment, the background rendering instruction is generated according to the real background element of the user, and the background of the virtual character is rendered according to the background rendering instruction, so that the reality of the background of the virtual character is improved, and the deep interaction effect between the user and the virtual character is improved.
In one embodiment, as shown in FIG. 7, the virtual character control instructions include a first virtual character control instruction and a second virtual character control instruction; generating a virtual character control instruction according to the virtual picture element set and controlling the virtual character to perform interactive actions according to the virtual character control instruction includes steps 701 and 702.
Step 701, in response to the selection of the interface display mode, generating a first virtual character control instruction according to the virtual picture element set, and controlling the virtual character displayed on the user interface to perform an interactive action according to the first virtual character control instruction.
Step 702, in response to selection of the holographic projection mode, generating a second virtual character control instruction according to the virtual picture element set, and controlling the virtual character presented by the holographic projection device to perform interactive actions according to the second virtual character control instruction.
The display mode of the virtual character can include, but is not limited to, an interface display mode and a holographic projection mode. The server can determine whether the interface display mode or the holographic projection mode is selected by the user according to the user mode selection instruction. It will be appreciated that in the case where the user selects the interface presentation mode, both the virtual character and the interaction by the virtual character are presented on the user interface of the display device. And under the condition that the user selects the holographic projection mode, the virtual character and the interactive action performed by the virtual character are displayed in the real space through the holographic projection equipment.
Specifically, when the interface display mode is selected, a first virtual character control instruction can be generated according to the virtual picture element set, so that the virtual character displayed on the user interface can be controlled to perform interactive actions according to the first virtual character control instruction; that is, the display device shows the virtual character performing the interactive action on the user interface under the control of the first virtual character control instruction. When the holographic projection mode is selected, a second virtual character control instruction can be generated according to the virtual picture element set, and the virtual character presented by the holographic projection device is controlled to perform interactive actions according to the second virtual character control instruction; that is, the holographic projection device makes the projected virtual character perform the interactive action under the control of the second virtual character control instruction.
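A minimal sketch of steps 701 and 702, assuming the two control instructions differ only in their rendering target; the instruction format and target names are illustrative assumptions.

```python
def dispatch_control(virtual_set, mode):
    """Steps 701-702: generate the first or second virtual character
    control instruction from the same virtual picture element set,
    depending on the selected display mode."""
    if mode == "interface":   # first virtual character control instruction
        return {"target": "user_interface", "elements": virtual_set}
    if mode == "hologram":    # second virtual character control instruction
        return {"target": "holographic_projector", "elements": virtual_set}
    raise ValueError(f"unknown display mode: {mode}")
```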
In this embodiment, in response to selection of the interface display mode, a first virtual character control instruction is generated according to the virtual picture element set, so that the virtual character displayed on the user interface can be controlled to perform interactive actions according to the first virtual character control instruction; and in response to selection of the holographic projection mode, a second virtual character control instruction is generated according to the virtual picture element set, and the holographic virtual character presented by the holographic projection device is controlled to perform interactive actions according to the second virtual character control instruction. This improves the flexibility and realism of the virtual character's interaction, makes the virtual character more vivid, adds interest, and improves the user experience.
It should be understood that although the various steps in the flow charts of fig. 2-7 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternating with other steps or at least some of the sub-steps or stages of other steps.
In a second aspect, as shown in FIG. 8, there is provided a control apparatus for a virtual character, which includes a picture disassembling module 801, an identification matching module 802, and a character control module 803.
The picture disassembling module 801 is configured to, in response to receiving a user interaction picture output by the image acquisition device, perform element disassembly processing on the user interaction picture to obtain a real picture element set; the identification matching module 802 is configured to perform element identification matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; and the character control module 803 is configured to generate a virtual character control instruction according to the virtual picture element set and control the virtual character to perform interactive actions according to the virtual character control instruction.
In one embodiment, the virtual picture element database comprises at least one virtual picture element sub-database; the recognition matching module 802 includes a category recognition unit and an element recognition matching unit.
The category identification unit is used for carrying out category identification on each element in the real picture element set; the element identification matching unit is used for carrying out element identification matching on each element based on the virtual picture element sub-database corresponding to the category identification result of each element to obtain a virtual picture element set.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive action sub-database; the category identification unit includes a first category identification subunit. The first category identification subunit is used for identifying the user real action element from the real picture element set.
The element identification matching unit comprises a first element identification matching subunit; the first element identification matching subunit is used for performing element identification matching on the user real action element based on the virtual interactive action sub-database to obtain a virtual interactive action element; the set of virtual picture elements includes the virtual interactive action element; the user real action element is used for representing head actions and/or limb actions displayed by the user in the user interaction picture; the virtual interactive action element is used for characterizing the head and/or limb actions required by the virtual character to complete the interactive action.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive prop sub-database; the category identification unit includes a second category identification subunit. The second category identification subunit is used for identifying the user real prop element from the real picture element set.
The element identification matching unit comprises a second element identification matching subunit. The second element identification matching subunit is used for identifying the user real prop element to obtain a prop type, a prop identifier, and a prop color; for matching a corresponding interactive prop frame in the virtual interactive prop sub-database according to the prop type; and for rendering the interactive prop frame according to the prop identifier and the prop color to obtain a virtual interactive prop element. The virtual picture element set further comprises the virtual interactive prop element; the user real prop element is used for representing the prop used by the user in the user interaction picture; the virtual interactive prop element is used for representing the prop required by the virtual character to complete the interactive action.
In one embodiment, the virtual picture element sub-database comprises a virtual interactive expression sub-database, and the category recognition unit includes a third category recognition subunit configured to identify user facial expression elements from the real picture element set.
The element recognition matching unit comprises a third element recognition matching subunit configured to perform element recognition matching on the user facial expression elements based on the virtual interactive expression sub-database to obtain virtual interactive expression elements.
The virtual picture element set further comprises the virtual interactive expression elements. A user facial expression element represents a facial expression displayed by the user in the user interaction picture; a virtual interactive expression element represents a facial expression the virtual character needs in order to complete the interactive action.
In one embodiment, the control device of the virtual character further comprises an audio recognition module and a correction module, and the character control module 803 includes a first character control unit. The audio recognition module is configured to perform, in response to receiving user interaction audio output by the audio acquisition device, recognition on the user interaction audio to obtain a real audio keyword set. The correction module is configured to correct each element in the virtual picture element set according to the real audio keyword set to obtain a corrected virtual picture element set. The first character control unit is configured to generate the virtual character control instruction according to the corrected virtual picture element set and to control the virtual character to perform the interactive action according to that instruction.
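One way the audio keyword correction could work, offered only as an assumption, is to let a recognized keyword override a low-confidence visual match:

```python
# Hypothetical keyword-based correction of matched picture elements.

KEYWORD_TO_ASSET = {"wave": "anim_wave", "nod": "anim_nod"}

def correct_elements(virtual_elements, audio_keywords, confidence_threshold=0.6):
    # virtual_elements: e.g. [{"asset": "anim_nod", "confidence": 0.4}]
    corrected = []
    for element in virtual_elements:
        replacement = next((KEYWORD_TO_ASSET[k] for k in audio_keywords
                            if k in KEYWORD_TO_ASSET), None)
        if replacement and element["confidence"] < confidence_threshold:
            # The spoken keyword overrides the uncertain visual match.
            corrected.append({"asset": replacement, "confidence": 1.0})
        else:
            corrected.append(element)
    return corrected

print(correct_elements([{"asset": "anim_nod", "confidence": 0.4}], ["wave"]))
# [{'asset': 'anim_wave', 'confidence': 1.0}]
```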
In one embodiment, the real picture element set further comprises a user real background element, and the control device of the virtual character further comprises a background rendering module.
The background rendering module is configured to generate a background rendering instruction according to the user real background element and to render the background of the virtual character according to the background rendering instruction. The user real background element represents the environment background in which the user is located in the user interaction picture.
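A minimal sketch of this step, assuming the detected background is reduced to a scene label that selects a backdrop asset; all names here are hypothetical.

```python
# Hypothetical mapping from a detected scene label to a backdrop asset.

BACKGROUND_ASSETS = {"indoor": "bg_room", "outdoor": "bg_park"}

def build_background_instruction(real_background_element):
    scene = real_background_element.get("scene", "indoor")
    return {"command": "set_background",
            "asset": BACKGROUND_ASSETS.get(scene, "bg_default")}

print(build_background_instruction({"scene": "outdoor"}))
# {'command': 'set_background', 'asset': 'bg_park'}
```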
In one embodiment, the virtual character control instruction comprises a first virtual character control instruction and a second virtual character control instruction, and the character control module 803 includes a second character control unit. The second character control unit is configured to, in response to selection of the interface display mode, generate the first virtual character control instruction according to the virtual picture element set and control the virtual character displayed on the user interface to perform the interactive action according to the first virtual character control instruction; and, in response to selection of the holographic projection mode, generate the second virtual character control instruction according to the virtual picture element set and control the virtual character presented by the holographic projection device to perform the interactive action according to the second virtual character control instruction.
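The two control instructions can be thought of as the same element set dispatched to different presentation targets. The sketch below is an assumed illustration of that dispatch, not the application's actual instruction format.

```python
# Hypothetical dispatch between the interface display mode and the
# holographic projection mode.

def build_control_instruction(virtual_elements, mode):
    if mode == "interface":
        return {"target": "user_interface", "kind": "first",
                "assets": virtual_elements}
    if mode == "holographic":
        return {"target": "holographic_device", "kind": "second",
                "assets": virtual_elements}
    raise ValueError(f"unknown mode: {mode}")

print(build_control_instruction(["anim_wave"], "holographic"))
# {'target': 'holographic_device', 'kind': 'second', 'assets': ['anim_wave']}
```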
For specific limitations on the control device of the virtual character, reference may be made to the limitations on the control method of the virtual character above, which are not repeated here. Each module in the control device of the virtual character may be implemented wholly or partly by software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in or independent of a processor in the computer device, or stored, in software form, in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device 9000 is provided. The computer device 9000 may be a server, and its internal structure may be as shown in FIG. 9. The computer device 9000 includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device 9000 provides computing and control capabilities. The memory of the computer device 9000 includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device 9000 stores real picture element set data and virtual picture element set data. The network interface of the computer device 9000 communicates with external terminals through a network connection. The computer program, when executed by the processor, implements the control method of the virtual character.
Those skilled in the art will appreciate that the configuration shown in FIG. 9 is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device 9000 to which the present application is applied; a particular computer device 9000 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In a third aspect, as shown in FIG. 9, a computer device 9000 is provided, the computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of any of the above method embodiments.
In a fourth aspect, as shown in FIG. 10, a virtual character interaction system is provided, including an image acquisition device 1001 and the computer device 9000 of any of the above device embodiments.
The image acquisition device 1001 is electrically connected to the computer device 9000 and is configured to capture and output the user interaction picture. In one embodiment, the image acquisition device 1001 may be, but is not limited to, a camera.
In this embodiment, the virtual character interaction system enriches the interaction content between the virtual character and the user, improves the flexibility and realism of the virtual character's interaction, makes the virtual character's performance more vivid and natural, and deepens the interaction between the user and the virtual character.
In one embodiment, as shown in FIG. 11, the virtual character interaction system further includes a display device 1002, an audio acquisition device 1003, and a holographic projection device 1004.
The display device 1002 is electrically connected to the computer device 9000 and is configured to display a user interface and, under the control of the virtual character control instruction, display the virtual character performing the interactive action on the user interface. In one embodiment, the display device 1002 may be, but is not limited to, an LED display screen.
The audio acquisition device 1003 is electrically connected to the computer device 9000 and is configured to capture and output the user interaction audio. In one embodiment, the audio acquisition device 1003 may be, but is not limited to, a microphone.
The holographic projection device 1004 is electrically connected to the computer device 9000 and is configured to present the holographic virtual character and, under the control of the second virtual character control instruction, make it perform the interactive action.
In this embodiment, the virtual character interaction system configured with the audio acquisition device 1003 and the holographic projection device 1004 further improves the flexibility and realism of the virtual character's interaction, makes the virtual character more vivid, adds interest, and improves the user experience.
In a fifth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the steps of any one of the above-described method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be understood as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (12)

1. A method for controlling a virtual character, the method comprising:
in response to receiving a user interaction picture output by an image acquisition device, performing element disassembly processing on the user interaction picture to obtain a real picture element set;
performing element recognition matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; and
generating a virtual character control instruction according to the virtual picture element set, and controlling the virtual character to perform an interactive action according to the virtual character control instruction.
2. The method of claim 1, wherein the virtual picture element database comprises at least one virtual picture element sub-database, and the performing element recognition matching on the real picture element set based on the preset virtual picture element database to obtain the virtual picture element set comprises:
performing category recognition on each element in the real picture element set; and
performing element recognition matching on each element based on the virtual picture element sub-database corresponding to the category recognition result of that element, to obtain the virtual picture element set.
3. The method of claim 2, wherein the virtual picture element sub-database comprises a virtual interactive action sub-database, and the performing category recognition on each element in the real picture element set comprises identifying a user real action element from the real picture element set;
the performing element recognition matching on each element based on the virtual picture element sub-database corresponding to the category recognition result of that element to obtain the virtual picture element set comprises:
performing element recognition matching on the user real action element based on the virtual interactive action sub-database to obtain a virtual interactive action element, wherein the virtual picture element set includes the virtual interactive action element; the user real action element represents the head action and/or limb action displayed by the user in the user interaction picture; and the virtual interactive action element represents the head action and/or limb action required by the virtual character to complete the interactive action.
4. The method of claim 2, wherein the virtual picture element sub-database comprises a virtual interactive prop sub-database, and the performing category recognition on each element in the real picture element set comprises identifying a user real prop element from the real picture element set;
the performing element recognition matching on each element based on the virtual picture element sub-database corresponding to the category recognition result of that element to obtain the virtual picture element set comprises:
performing recognition processing on the user real prop element to obtain a prop type, a prop identifier, and a prop color;
matching a corresponding interactive prop frame in the virtual interactive prop sub-database according to the prop type; and
rendering the interactive prop frame according to the prop identifier and the prop color to obtain a virtual interactive prop element, wherein the virtual picture element set further comprises the virtual interactive prop element; the user real prop element represents a prop used by the user in the user interaction picture; and the virtual interactive prop element represents a prop required by the virtual character to complete the interactive action.
5. The method of claim 2, wherein the virtual picture element sub-database comprises a virtual interactive expression sub-database, and the performing category recognition on each element in the real picture element set comprises identifying a user facial expression element from the real picture element set;
the performing element recognition matching on each element based on the virtual picture element sub-database corresponding to the category recognition result of that element to obtain the virtual picture element set comprises:
performing element recognition matching on the user facial expression element based on the virtual interactive expression sub-database to obtain a virtual interactive expression element, wherein the virtual picture element set further comprises the virtual interactive expression element; the user facial expression element represents the facial expression displayed by the user in the user interaction picture; and the virtual interactive expression element represents the facial expression required by the virtual character to complete the interactive action.
6. The method of any of claims 1 to 5, further comprising:
in response to receiving user interaction audio output by an audio acquisition device, performing recognition on the user interaction audio to obtain a real audio keyword set;
correcting each element in the virtual picture element set according to the real audio keyword set to obtain a corrected virtual picture element set;
wherein the generating a virtual character control instruction according to the virtual picture element set and controlling the virtual character to perform an interactive action according to the virtual character control instruction comprises: generating the virtual character control instruction according to the corrected virtual picture element set, and controlling the virtual character to perform the interactive action according to the virtual character control instruction.
7. The method of claim 1, wherein the real picture element set further comprises a user real background element, and the method further comprises:
generating a background rendering instruction according to the user real background element, and rendering the background of the virtual character according to the background rendering instruction; wherein the user real background element represents the environment background in which the user is located in the user interaction picture.
8. The method of claim 1, wherein the virtual character control instruction comprises a first virtual character control instruction and a second virtual character control instruction;
the generating a virtual character control instruction according to the virtual picture element set and controlling the virtual character to perform an interactive action according to the virtual character control instruction comprises:
in response to selection of an interface display mode, generating the first virtual character control instruction according to the virtual picture element set, and controlling the virtual character displayed on a user interface to perform the interactive action according to the first virtual character control instruction; and
in response to selection of a holographic projection mode, generating the second virtual character control instruction according to the virtual picture element set, and controlling the virtual character presented by a holographic projection device to perform the interactive action according to the second virtual character control instruction.
9. An apparatus for controlling a virtual character, the apparatus comprising:
a picture disassembling module, configured to perform, in response to receiving a user interaction picture output by an image acquisition device, element disassembly processing on the user interaction picture to obtain a real picture element set;
a recognition matching module, configured to perform element recognition matching on the real picture element set based on a preset virtual picture element database to obtain a virtual picture element set; and
a character control module, configured to generate a virtual character control instruction according to the virtual picture element set and control the virtual character to perform an interactive action according to the virtual character control instruction.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
11. A virtual character interaction system, characterized in that the system comprises an image acquisition device and the computer device according to claim 10;
wherein the image acquisition device is electrically connected to the computer device and is configured to capture and output the user interaction picture.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN202211184556.1A 2022-09-27 2022-09-27 Control method and device of virtual role, computer equipment and storage medium Pending CN115554701A (en)

Priority Applications (1)

Application Number: CN202211184556.1A | Priority Date: 2022-09-27 | Filing Date: 2022-09-27 | Title: Control method and device of virtual role, computer equipment and storage medium

Publications (1)

Publication Number: CN115554701A | Publication Date: 2023-01-03

Family ID: 84744008

Country Status (1): CN | CN115554701A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination