WO2022247536A1 - Method, apparatus, device, and medium for displaying expressions in a virtual scene
Method, apparatus, device, and medium for displaying expressions in a virtual scene
- Publication number
- WO2022247536A1 (PCT/CN2022/088267)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- emoticon
- expression
- virtual scene
- area
- Prior art date
Links
- 230000014509 gene expression Effects 0.000 title claims abstract description 270
- 238000000034 method Methods 0.000 title claims abstract description 83
- 230000004044 response Effects 0.000 claims abstract description 93
- 238000004590 computer program Methods 0.000 claims description 20
- 230000015654 memory Effects 0.000 claims description 18
- 230000008921 facial expression Effects 0.000 claims description 17
- 230000009471 action Effects 0.000 claims description 11
- 230000000875 corresponding effect Effects 0.000 description 57
- 230000003993 interaction Effects 0.000 description 27
- 238000010586 diagram Methods 0.000 description 18
- 230000008569 process Effects 0.000 description 17
- 230000002093 peripheral effect Effects 0.000 description 10
- 238000012545 processing Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 9
- 230000000694 effects Effects 0.000 description 7
- 230000002860 competitive effect Effects 0.000 description 4
- 238000013473 artificial intelligence Methods 0.000 description 3
- 230000001276 controlling effect Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000010187 selection method Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000007123 defense Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 230000005055 memory storage Effects 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/85—Providing additional services to players
- A63F13/87—Communicating with other players during game play, e.g. by e-mail or chat
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5546—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
- A63F2300/5553—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
Definitions
- The present application relates to the field of computer technology, and in particular to a method, apparatus, device, and medium for displaying expressions in a virtual scene.
- Auto chess is a relatively popular game genre. During a game, different virtual objects can fight against each other in a virtual scene.
- Embodiments of the present application provide a method, apparatus, device, and medium for displaying expressions in a virtual scene. The technical solution is as follows:
- In one aspect, a method for displaying expressions in a virtual scene includes:
- displaying a virtual scene, where an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
- in response to a drag operation on the expression adding icon, displaying an expression selection area at a first target position in the virtual scene, where the first target position is the position where the drag operation ends, and a plurality of first candidate expressions are displayed in the expression selection area;
- in response to a selection operation on a first target expression among the plurality of first candidate expressions, displaying the first target expression in the virtual scene.
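The drag-then-select interaction in the aspect above can be sketched in a few lines. This is an illustrative sketch only; the class and method names (`EmoticonPicker`, `on_drag_end`, `on_select`) are hypothetical and not drawn from the patent:

```python
class EmoticonPicker:
    """Sketch of the first aspect: drag the adding icon to open the
    selection area at the drag end position, then select an expression."""

    def __init__(self, candidates):
        self.candidates = list(candidates)   # first candidate expressions
        self.selection_area_pos = None       # first target position, once shown
        self.displayed = []                  # (expression, position) shown in the scene

    def on_drag_end(self, position):
        """The drag on the adding icon ends: show the expression selection
        area at the first target position (the drag end position)."""
        self.selection_area_pos = position
        return self.candidates               # candidates now visible to the user

    def on_select(self, emoticon):
        """Selecting a first target expression displays it in the scene."""
        if self.selection_area_pos is None:
            raise RuntimeError("selection area not shown yet")
        if emoticon not in self.candidates:
            raise ValueError("not a candidate expression")
        self.displayed.append((emoticon, self.selection_area_pos))
        self.selection_area_pos = None       # selection area closes after use
        return emoticon
```

The key property, matching the claim, is that the display position is set by where the drag ends, not by a fixed panel location.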
- In one aspect, a method for displaying expressions in a virtual scene includes:
- displaying a virtual scene, where an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
- in response to a click operation on the expression adding icon, displaying an expression selection area in the virtual scene, where a plurality of first candidate expressions are displayed in the expression selection area;
- in response to a drag operation on a second target expression among the plurality of first candidate expressions, displaying the second target expression at a second target position, where the second target position is the position where the drag operation ends.
- In one aspect, an apparatus for displaying expressions in a virtual scene includes:
- a scene display module, configured to display a virtual scene, where an expression adding icon is displayed in the virtual scene and is used to add an expression in the virtual scene;
- an area display module, configured to display an expression selection area at a first target position in the virtual scene in response to a drag operation on the expression adding icon, where the first target position is the position where the drag operation ends, and a plurality of first candidate expressions are displayed in the expression selection area;
- an expression display module, configured to display a first target expression in the virtual scene in response to a selection operation on the first target expression among the plurality of first candidate expressions.
- In one aspect, an apparatus for displaying expressions in a virtual scene includes:
- a scene display module, configured to display a virtual scene, where an expression adding icon is displayed in the virtual scene and is used to add an expression in the virtual scene;
- an expression selection area display module, configured to display an expression selection area in the virtual scene in response to a click operation on the expression adding icon, where a plurality of first candidate expressions are displayed in the expression selection area;
- a target expression display module, configured to display a second target expression at a second target position in response to a drag operation on the second target expression among the plurality of first candidate expressions, where the second target position is the position where the drag operation ends.
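The second interaction, described in this aspect, reverses the order: click first, drag second. A minimal sketch, with all names hypothetical rather than taken from the patent:

```python
class ClickThenDragPicker:
    """Sketch of the second aspect: click the adding icon to open the
    selection area, then drag a candidate expression to where it should
    appear (the second target position is the drag end position)."""

    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.area_open = False
        self.displayed = []                  # (expression, position) pairs

    def on_icon_click(self):
        """Click on the adding icon opens the expression selection area."""
        self.area_open = True
        return self.candidates

    def on_emoticon_drag_end(self, emoticon, position):
        """Dragging a candidate out of the open area places it at the
        drag end position; ignored if the area is not open."""
        if not (self.area_open and emoticon in self.candidates):
            return None
        self.displayed.append((emoticon, position))
        return (emoticon, position)
```

Either way, no chat window or separate send control is involved, which is the efficiency point the summary makes below.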
- In one aspect, a computer device includes one or more processors and one or more memories, where at least one computer program is stored in the one or more memories and is loaded and executed by the one or more processors to implement the method for displaying expressions in a virtual scene.
- In one aspect, a computer-readable storage medium stores at least one computer program, and the computer program is loaded and executed by a processor to implement the method for displaying expressions in a virtual scene.
- In one aspect, a computer program product or computer program includes program code stored in a computer-readable storage medium; a processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above method for displaying expressions in a virtual scene.
- With this technical solution, the user triggers display of the expression selection area by dragging the expression adding icon and then selects an expression in that area, so that the selected first target expression is displayed at the first target position in the virtual scene.
- Because the first target position is the position where the drag operation ends,
- the display position of the first target expression can be changed by adjusting the drag operation; the operation is simple and convenient, and the human-computer interaction efficiency is high.
- That is, an expression can be sent by dragging the expression adding icon and then selecting, in the displayed expression selection area, the first target expression to be displayed.
- Compared with first calling up a chat window, opening the expression selection panel in the chat window, selecting an expression in the panel, and then clicking the send control in the chat window to send the expression,
- this operation is simple and convenient, and the efficiency of human-computer interaction is improved.
- Fig. 1 is a schematic diagram of an implementation environment of a method for displaying emoticons in a virtual scene provided by an embodiment of the present application;
- Fig. 2 is a schematic diagram of an interface provided by an embodiment of the present application.
- Fig. 3 is a flow chart of a method for displaying emoticons in a virtual scene provided by an embodiment of the present application
- Fig. 4 is a flow chart of another method for displaying emoticons in a virtual scene provided by an embodiment of the present application.
- Fig. 5 is a schematic diagram of another interface provided by the embodiment of the present application.
- Fig. 6 is a schematic diagram of another interface provided by the embodiment of the present application.
- Fig. 7 is a schematic diagram of another interface provided by the embodiment of the present application.
- Fig. 8 is a logic block diagram of a method for displaying emoticons in a virtual scene provided by an embodiment of the present application.
- Fig. 9 is a flow chart of another method for displaying emoticons in a virtual scene provided by an embodiment of the present application.
- Fig. 10 is a schematic diagram of another interface provided by the embodiment of the present application.
- Fig. 11 is a schematic diagram of another interface provided by the embodiment of the present application.
- Fig. 12 is a schematic diagram of functional module division provided by the embodiment of the present application.
- Fig. 13 is a schematic structural diagram of a device for displaying expressions in a virtual scene provided by an embodiment of the present application
- Fig. 14 is a schematic structural diagram of another device for displaying expressions in a virtual scene provided by an embodiment of the present application.
- FIG. 15 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
- The terms "first" and "second" are used to distinguish identical or similar items having substantially the same function. It should be understood that "first", "second", and "nth" imply no logical or temporal dependency, nor any restriction on quantity or order of execution.
- The term "at least one" means one or more, and "a plurality of" means two or more; for example, a plurality of sub-regions refers to two or more sub-regions.
- In the related art, when a user wants to send an expression while playing an auto chess game, the user needs to bring up a chat window in the game, open an expression selection panel in the chat window, select an expression in the panel, and then click the send control in the chat window to send the expression.
- Auto chess: a type of multiplayer battle strategy game. Users develop their own chess pieces and assemble a lineup of pieces to counter the opponent's lineup. During a battle, the loser's virtual life value is deducted, and the ranking is determined by the order of elimination.
- Chess pieces: the different combat units. Users can equip, upgrade, buy, and sell chess pieces. Most pieces are obtained by refreshing the piece pool, and a small portion come from the "draft" and combat activities.
- Traits: the different classifications of chess pieces (generally two kinds: occupation and race). Fielding a certain number of pieces with the same trait in the combat zone activates the trait's special ability and gives the pieces stronger combat effectiveness.
- Buying and selling: users obtain chess pieces by consuming virtual resources, and obtain virtual resources by selling chess pieces (at a certain discount).
- Fig. 1 is a schematic diagram of an implementation environment of a method for displaying emoticons in a virtual scene provided by an embodiment of the present application.
- the implementation environment includes a terminal 110 and a server 140 .
- the terminal 110 is connected to the server 140 through a wireless network or a wired network.
- The terminal 110 is a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like, but is not limited thereto.
- the terminal 110 is installed and runs an application program supporting virtual scene display.
- The server 140 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN), big data, and artificial intelligence platforms.
- the terminal 110 generally refers to one of multiple terminals, and this embodiment of the present application only uses the terminal 110 as an example for illustration.
- In practice, the number of the foregoing terminals may be greater or smaller.
- the embodiment of the present application does not limit the number of terminals and device types.
- The virtual object is also referred to as a chess piece,
- and the classification icon is also referred to as a trait icon.
- the auto chess game is a turn-based game.
- the battle process of the game is divided into multiple rounds, and the user can upgrade the chess pieces and adjust the position of the chess pieces between the rounds.
- Battles are divided into battles between different users, and battles between users and non-player-controlled characters (NPC).
- the victorious user can obtain a large number of virtual resources, and the defeated user can only obtain a small number of virtual resources.
- the virtual resources are gold coins in the game.
- A certain number of virtual life points are deducted from a defeated user, and when any user's virtual life value drops to 0, that user is eliminated.
- users who win consecutively can get winning streak rewards, that is, users who win consecutively can get additional virtual resources as rewards.
- users who lose consecutively can also get losing streak rewards.
- the user can obtain virtual equipment by defeating the NPC, and the virtual equipment can improve the attributes of the chess pieces, thereby improving the combat ability of the chess pieces.
- Pieces in auto chess games vary in quality, which is usually described by star ratings.
- the chess pieces usually include three star ratings.
- the combat ability of the 3-star chess pieces is stronger than that of the 2-star chess pieces
- the combat ability of the 2-star chess pieces is stronger than that of the 1-star chess pieces.
- the process of improving the star level of chess pieces is also the process of users improving their combat capabilities.
- the pieces initially obtained by the user are all 1-star pieces, and as the game progresses, the user can increase the star level of the pieces.
- three identical 1-star chess pieces can be combined into a 2-star chess piece, and three identical 2-star chess pieces can be combined into a 3-star chess piece.
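The combining rule above (three identical 1-star pieces form a 2-star piece, three identical 2-star pieces form a 3-star piece) can be sketched as a small function. The function name and the `(name, star)` data format are illustrative assumptions, not from the patent:

```python
from collections import Counter

def auto_combine(pieces):
    """Combine every three identical 1-star pieces into one 2-star piece,
    then every three identical 2-star pieces into one 3-star piece.
    `pieces` is a list of (name, star) tuples."""
    counts = Counter(pieces)
    # Combine lower stars first so upgrades can cascade (9 one-stars -> one three-star).
    for star in (1, 2):
        for (name, s), n in list(counts.items()):
            if s == star and n >= 3:
                counts[(name, s)] -= 3 * (n // 3)
                counts[(name, s + 1)] += n // 3
    # Expand the counter back into a flat, deterministic piece list.
    return sorted(p for p, n in counts.items() for _ in range(n))
```

Processing the 1-star tier before the 2-star tier lets a full set of nine identical 1-star pieces cascade all the way to a single 3-star piece in one pass.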
- Users can purchase chess pieces through the store; that is, a user consumes a certain amount of virtual resources in the store in exchange for a chess piece. In some embodiments, only 1-star pieces are available in the store.
- For example, one chess piece may have the guardian trait, and another chess piece may have the wizard trait.
- When a user fields a certain number of pieces with the same trait at the same time, those pieces gain additional attribute bonuses.
- For example, when three chess pieces with the guardian trait play at the same time, the defense ability of all three pieces is improved;
- and when three chess pieces with the mage trait play at the same time, all three pieces gain improved magic attack ability.
- Here the three pieces are different pieces; the more pieces with the same trait the user collects, the stronger the pieces' combat ability.
- A chess piece may correspond to two or more traits.
- For example, a piece may have both the occupation trait of mage and the race trait of beast at the same time, and the user can take both traits into account when composing the lineup.
- FIG. 2 is a schematic diagram of a game interface of an auto chess game.
- FIG. 2 includes an information prompt area 201 , a store trigger control 202 , a battlefield area 203 , a preparation area 204 , a characteristic display area 205 , an equipment library 206 and a scoring area 207 .
- the information prompt area 201 is used to display the user's game level and the virtual resources needed to improve the game level.
- The user's game level determines how many chess pieces the user can place in the battlefield area 203 at the same time. For example, at the beginning of the game the user's game level is level 1, so the user can place 3 chess pieces in the battlefield area 203 at the same time to compete with other users; to place more pieces at the same time, the user needs to consume virtual resources to raise the game level.
- Each time the user raises the game level by one, one more chess piece can be placed in the battlefield area 203; that is, if the user can place 3 pieces at the same time at level 1, then after upgrading to level 2 the user can place 4 pieces at the same time, and so on.
- The virtual resources required to raise the game level decrease as the number of rounds increases. For example, the user's game level at the beginning of the game is level 1, and upgrading to level 2 costs 4 gold coins in the first round; when the game enters the second round, upgrading from level 1 to level 2 costs only 2 gold coins.
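The level and cost rules from the examples above can be captured in a short sketch. The patent only gives two cost data points (4 gold in round 1, 2 gold in round 2), so the linear schedule below is an assumption, as are the function names:

```python
def max_pieces(level, base=3):
    """Pieces placeable at once: 3 at level 1, plus one per additional
    level, per the example in the description."""
    return base + (level - 1)

def level_up_cost(round_number, base_cost=4, discount_per_round=2, minimum=0):
    """Gold cost to upgrade from level 1 to level 2, falling as rounds
    pass; clamped so the cost never goes below `minimum`."""
    return max(minimum, base_cost - discount_per_round * (round_number - 1))
```

With the defaults this reproduces the two stated data points: 4 gold in round 1 and 2 gold in round 2.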
- The store trigger control 202, after being triggered, displays the object transaction area, that is, the store area; various chess pieces are provided in the store area, and the user can select the pieces to exchange for there.
- Exchanging for a piece consumes a certain amount of virtual resources.
- Different pieces cost different amounts of virtual resources to exchange for, and pieces with stronger combat capabilities cost more.
- The store area can be displayed either when the user clicks the store trigger control 202 or automatically at the end of each game round.
- the types of chess pieces provided in the store area are randomly determined by the terminal.
- the battlefield area 203 is an area where chess pieces fight. Users can drag chess pieces to the battlefield area 203 to fight with chess pieces of other users. In addition, the user can also adjust the positions of chess pieces in the battlefield area 203 . When each game round starts, chess pieces of other users will also appear in the battlefield area 203 , and the user's chess pieces can fight with chess pieces of other users in the battlefield area 203 .
- The preparation area 204 is used to store chess pieces that the user owns but has not yet fielded, where fielding means placing a piece in the battlefield area 203 to fight. Since three identical pieces are needed to raise a piece's star rating, the preparation area 204 is also the area where such pieces are stored.
- When enough identical pieces are gathered, they are automatically combined and upgraded. For example, suppose there is a 1-star chess piece in the battlefield area 203 and an identical 1-star chess piece in the preparation area 204.
- When a third identical 1-star piece is obtained, the 1-star piece in the preparation area 204 is combined onto the piece in the battlefield area 203: the extra 1-star pieces disappear, the piece in the battlefield area 203 is upgraded from 1 star to 2 stars, and a vacancy is freed up in the preparation area 204.
- the positions for storing chess pieces in the preparation area 204 are limited, and when all the positions for storing chess pieces in the preparation area 204 are occupied, the user cannot place chess pieces in the preparation area 204 .
- the trait display area 205 is used to display classification icons, that is, trait icons.
- the trait icons are used to remind the user of the traits possessed by the chess piece.
- The "yordle" and "shaper" icons in FIG. 2 represent two traits.
- For example, when the user fields a piece with the mage trait, the trait display area can display an icon representing the mage trait.
- The icon is initially gray, indicating that the attribute bonus corresponding to the mage trait is not activated, and the number 1/3 is displayed next to the icon, where 1 represents the number of pieces with the mage trait currently in the battlefield area 203 and 3 represents the number of such pieces required to activate the trait's attribute bonus.
- When enough pieces with the mage trait are fielded, the icon changes from gray to colored, indicating that the attribute bonus corresponding to the mage trait is activated.
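The trait-progress display just described (count fielded pieces with the trait, compare against a threshold, render a "1/3"-style label) reduces to a small counting function. The dict-based piece format and function name are illustrative assumptions:

```python
def trait_status(fielded_pieces, trait, threshold=3):
    """Count fielded pieces having `trait` and report whether the trait's
    attribute bonus is active, plus the progress label shown beside the
    trait icon (e.g. '1/3')."""
    n = sum(1 for piece in fielded_pieces if trait in piece["traits"])
    return {"active": n >= threshold, "label": f"{n}/{threshold}"}
```

A UI layer would gray out the icon while `active` is false and colorize it once the threshold is met.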
- the equipment library 206 is used for the user to view the type and quantity of the virtual equipment owned.
- the scoring area 207 is used to display the nickname and score of each user.
- In the scoring area 207, the avatars of the users are sorted according to their virtual life values, so that a user can determine his or her rank in the current game from the position of his or her avatar.
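The ordering rule for the scoring area can be sketched in one line; the dict fields (`nickname`, `life`) are illustrative assumptions:

```python
def ranked_users(users):
    """Order users for the scoring area: highest virtual life value first,
    so a user's avatar position reflects their current rank."""
    return sorted(users, key=lambda u: u["life"], reverse=True)
```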
- the computer device can be configured as a terminal or a server.
- The technical solution provided in the embodiments of the present application is implemented with the terminal as the execution entity, or through interaction between the terminal and the server; this is not limited in the embodiments of the present application.
- The following description takes the terminal as the execution entity as an example.
- Fig. 3 is a flow chart of a method for displaying emoticons in a virtual scene provided by an embodiment of the present application. Referring to Fig. 3, the method includes:
- the terminal displays a virtual scene, and an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene.
- the emoticon adding icon is an icon corresponding to the emoticon adding control.
- in response to a drag operation on the emoticon adding icon, the terminal displays an emoticon selection area at a first target position in the virtual scene, where the first target position is the end position of the drag operation, and multiple first candidate emoticons are displayed in the emoticon selection area.
- the first candidate emoticon is an emoticon configured in advance by technicians, such as an emoticon drawn in advance by an artist; or an emoticon downloaded from a database that maintains the emoticons owned by the user; or an emoticon configured by the user, such as an emoticon uploaded, exchanged, or drawn by the user; this embodiment of the present application does not limit this.
- the first candidate emoticons are displayed in the emoticon selection area, where the user can select among them. By displaying the first candidate emoticons in the emoticon selection area, the user can intuitively view the plurality of first candidate emoticons, which improves the viewing efficiency of the first candidate emoticons and thus the user's information acquisition efficiency.
- in response to a selection operation on a first target emoticon among the plurality of first candidate emoticons, the terminal displays the first target emoticon in the virtual scene.
- when any first candidate emoticon in the emoticon selection area is clicked, that first candidate emoticon becomes the selected first target emoticon.
- the user can trigger the display of the expression selection area by dragging the expression adding icon, so that expressions can be selected in the expression selection area, and the selected first target expression is displayed at the first target position in the virtual scene.
- the first target position is the position where the dragging operation ends
- the display position of the first target emoticon can be changed by adjusting the dragging operation, the operation is simple and convenient, and the human-computer interaction efficiency is high.
- the emoticon can be sent by dragging the emoticon adding icon, and then selecting the first target emoticon to be displayed in the displayed emoticon selection area.
- this avoids first calling up the chat window, opening the emoticon selection panel in the chat window, selecting the emoticon in the panel, and then clicking the send control in the chat window to send the emoticon; the operation is simple and convenient, and the efficiency of human-computer interaction is improved.
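The drag-then-select flow above can be summarized in a small sketch; the class and method names are hypothetical and only model the described behavior (the drag end opens the selection area at that position, and a click on a candidate displays it there and closes the area):

```python
class EmoteOverlay:
    """Hypothetical model of the emoticon-adding interaction."""

    def __init__(self):
        self.selection_area_pos = None   # None -> selection area hidden
        self.displayed = []              # (emoticon, position) pairs shown in the scene

    def on_icon_drag_end(self, pos):
        # The first target position is where the drag on the adding icon ended.
        self.selection_area_pos = pos

    def on_candidate_clicked(self, emoticon):
        if self.selection_area_pos is None:
            return None                  # no selection area open, nothing to do
        pos = self.selection_area_pos
        self.displayed.append((emoticon, pos))
        self.selection_area_pos = None   # close the selection area after selection
        return pos

ui = EmoteOverlay()
ui.on_icon_drag_end((120, 340))
assert ui.on_candidate_clicked("smile") == (120, 340)
```

Because the same position serves both the selection area and the displayed emoticon, adjusting the drag end point moves both.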
- Fig. 4 is a flow chart of another method for displaying emoticons in a virtual scene provided by the embodiment of the present application.
- in this embodiment, the computer device is configured as a terminal, and the terminal is used as the execution body as an example. Referring to Fig. 4, the method includes:
- the terminal displays a virtual scene, and an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene.
- the virtual scene is a game scene of an auto chess game
- the virtual scene displays a controlled virtual object, a plurality of virtual objects of the first type and a plurality of virtual objects of the second type.
- the controlled virtual object is a virtual object controlled by the terminal, and the user can control the virtual object to move in the virtual scene through the terminal.
- the controlled virtual object is the "virtual image" of the user in the virtual scene.
- the first type of virtual object is the "piece" of the user in the game of auto chess
- the second type of virtual object is the "piece" of other users playing against the user in the game of auto chess
- alternatively, the second type of virtual object is an NPC (Non-Player Character), which is not limited in the embodiment of this application.
- in response to the user starting a competitive game, the terminal displays the virtual scene corresponding to the current competitive game.
- a game of competitive battle is also a game of auto chess.
- the terminal displays the expression adding icon on the first icon position of the virtual scene.
- the position of the first icon is set by the technician according to the actual situation, or set by the user in the setting interface of the virtual scene, which is not limited in this embodiment of the present application.
- the terminal displays the emoticon adding icon in the lower left corner of the virtual scene.
- the terminal displays a virtual scene 501 in which an emoticon adding icon 502 is displayed.
- before displaying the virtual scene, the terminal can also display an expression setting interface of the virtual scene. Multiple expressions are displayed in the expression setting interface, and the user can select multiple expressions there; the selected expressions are the expressions that can be displayed in the virtual scene.
- an emoticon drawing control is also displayed in the emoticon setting interface.
- in response to a click operation on the emoticon drawing control, the terminal displays an emoticon drawing interface. Various drawing tools are displayed in the emoticon drawing interface, and the user can use these drawing tools to draw in the emoticon drawing interface.
- the terminal stores the image in the expression drawing interface as an expression, and displays the expression in the expression setting interface for the user to select.
- in response to a drag operation on the emoticon adding icon, the terminal displays an emoticon selection area at the first target position in the virtual scene, the first target position being the position where the drag operation ends, and multiple first candidate emoticons are displayed in the emoticon selection area.
- in response to a drag operation on the emoticon adding icon, the terminal can display the emoticon selection area at the position where the drag operation ends; that end position is also the first target position.
- the user can control the displayed position of the expression selection area through the drag operation. Since the drag operation is simple, the human-computer interaction that triggers the display of the expression selection area is more efficient.
- an emoticon adding icon 602 is displayed in the virtual scene 601.
- the terminal displays an emoticon selection area 603 at the position where the drag operation ends.
- multiple first candidate emoticons 604 are displayed in the emoticon selection area 603.
- the expression selection area is no longer displayed.
- the emoticon selection area includes a first sub-area and a second sub-area
- the first sub-area is used to display the type icon of the emoticon
- in response to a drag operation on the emoticon adding icon, the terminal displays a first sub-area and a second sub-area at the first target position.
- in response to a selection operation on a first-type icon in the first sub-area, the terminal displays a plurality of first candidate emoticons corresponding to the first-type icon in the second sub-area.
- in response to a selection operation on a second-type icon in the first sub-area, the terminal switches the plurality of first candidate emoticons in the second sub-area to a plurality of second candidate emoticons, the second candidate emoticons being the emoticons corresponding to the second-type icon.
- the first subarea is a circular area
- the second sub-area is an annular area surrounding the first sub-area
- the first subarea and the second subarea have the same circle center.
- the type icons displayed in the first subarea can also be called emoticons to be selected
- users can select different types of emoticons by triggering different types of icons, which is easy to operate and highly efficient in human-computer interaction.
- the emoticon selection area includes multiple sub-areas, and multiple first candidate emoticons are displayed in the multiple sub-areas respectively.
- the terminal can display multiple first candidate emoticons in the multiple sub-areas, so that different sub-areas separate the multiple first candidate emoticons; the user can intuitively view and select the first candidate expressions in different sub-areas, with high viewing efficiency, a simple and fast selection method, and high human-computer interaction efficiency.
- the emoticon selection area is a circular area, one sub-area is a part of the circular area, and a plurality of type icons corresponding to the first emoticons to be selected are displayed in the center of the circular area.
- the expression selection area is a rotatable area; in response to a sliding operation on the expression selection area, the terminal controls the expression selection area to rotate in the direction of the sliding operation. When the user slides the expression selection area, the first candidate expressions rotate accordingly, so the user can rotate a desired first candidate expression into view and then select it.
- the expression selection area is also called Emoticon Roulette.
- the type icons displayed in the center of the circular area are used to represent the types of the multiple first candidate emoticons displayed in the sub-area, and the user can determine the types of the multiple first candidate emoticons by viewing the type icons.
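A minimal sketch of the rotation behavior described above, assuming the circular area is split into equal sectors (one per candidate emoticon) and a slide applies an angular offset to the whole wheel; all names and values are illustrative:

```python
def sector_at(touch_angle_deg, rotation_deg, n_sectors):
    """Map a touch angle to a sector index after the wheel's current rotation.
    touch_angle_deg: angle of the touch point around the wheel centre.
    rotation_deg: accumulated rotation applied by the user's slides."""
    rel = (touch_angle_deg - rotation_deg) % 360.0
    return int(rel // (360.0 / n_sectors))

# 8 emoticons; after a slide rotates the wheel by 45 degrees, a touch at
# 50 degrees lands in sector 0 instead of sector 1.
assert sector_at(50, 0, 8) == 1
assert sector_at(50, 45, 8) == 0
```

Selecting an emoticon is then just indexing the candidate list with the returned sector.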
- At least one first virtual object is displayed in the virtual scene, and the first candidate expression is an expression corresponding to the first virtual object.
- the first virtual object is a virtual object controlled by the user logged in to the terminal.
- the terminal adds, in the expression selection area, the first candidate expression corresponding to the first virtual object.
- entering the field refers to controlling the first virtual object to be displayed in the virtual scene, or refers to dragging the first virtual object from the preparation area 204 to the battlefield area 203 shown in FIG. 2, which is not limited in this embodiment of the present application.
- the corresponding relationship between the first virtual object and the first candidate expression is set by the technician according to the actual situation, or is matched by the terminal based on an image recognition algorithm.
- in response to the first virtual object entering the field, the terminal performs image recognition on the first virtual object to obtain a tag of the first virtual object. Based on the tag, the terminal performs matching in the expression database to obtain the first candidate expression corresponding to the first virtual object; this embodiment of the present application does not limit the way the terminal determines the first candidate expression corresponding to the first virtual object. Adding the first candidate expression corresponding to the first virtual object in the expression selection area means that the terminal adds the first candidate expression to the folder corresponding to the expression selection area; when displaying the expression selection area, the terminal can display the first candidate expression in it.
- the first virtual object includes not only the virtual object controlled by the terminal login user, but also the virtual object controlled by other users who compete with the user.
- the terminal adds a first candidate expression corresponding to the first virtual object in the expression selection area, and the user can select an expression in the expression selection area.
- the method for the terminal to add the first candidate emoticon corresponding to the first virtual object in the emoticon selection area belongs to the same inventive concept as that in the above embodiment, and will not be repeated here.
- the expressions displayed in the expression selection area are diverse and random, and the user can quickly send an expression based on the expression selection area; compared with the method of setting only a few fixed expressions, the human-computer interaction is more efficient.
- at least one first virtual object is displayed in the virtual scene, and in response to the expression adding icon being dragged to the location of any first virtual object, an expression selection area is displayed at the location of that first virtual object; the position where the first virtual object is located is also the first target position.
- the expression selected through the expression selection area is regarded as the expression sent to the first virtual object.
- multiple user avatars are displayed in the virtual scene, and one user avatar corresponds to one user participating in the competitive battle.
- in response to the emoticon adding icon being dragged to the position of any user avatar, an emoticon selection area is displayed at the position of that user avatar, which is also the first target position.
- the emoticon selected through the emoticon selection area is regarded as the emoticon sent to the user corresponding to the user avatar.
- the terminal displays the first target emoticon in the virtual scene.
- the first target expression is any one of multiple first candidate expressions.
- the terminal zooms in and displays the first target emoticon in the virtual scene.
- the first target emoticon is a vector graphic, and when displaying the first target emoticon, the terminal can enlarge and display the first target emoticon, which is convenient for the user to view.
- the position where the terminal displays the first target expression in the virtual scene is the first target position; since the expression selection area is also displayed at the first target position, the terminal displays the first target expression at the same position as the expression selection area.
- the terminal displays the first target emoticon at a first target position in the virtual scene.
- the terminal zooms in and displays the first target emoticon at the first target position in the virtual scene.
- if the first target position is the position where the drag operation on the emoticon adding icon ends, the terminal can also display the first target emoticon at that end position.
- the expression adding control can be dragged to a specified position, and the first target expression selected in the expression selection area is displayed at the specified position.
- the terminal zooms in and displays the first target emoticon 605 at the first target position in the virtual scene 701, that is, the enlarged first target emoticon 702 is displayed in the virtual scene 701.
- in response to the selection operation on the first target emoticon, the terminal plays the animation corresponding to the first target emoticon in the virtual scene.
- the animation corresponding to the first target expression is configured by technicians; for example, after making an expression and the animation corresponding to it, technicians bind and store the expression together with its animation.
- the terminal can directly load the animation corresponding to the first target emoticon, and play the animation in the virtual scene.
- in response to a click on a first target emoticon among the plurality of first candidate emoticons, the terminal loads the animation corresponding to the first target emoticon and plays the animation at the first target position in the virtual scene. If the first target position is the end position of the drag operation on the emoticon adding icon, the terminal can also play the animation at that end position.
- At least one virtual object is displayed in the virtual scene, and the first candidate expression is an expression corresponding to the first virtual object.
- the terminal can control the target virtual object to perform an action corresponding to the first target expression, where the target virtual object is the first virtual object corresponding to the first target expression among at least one first virtual object.
- "control" here refers to display: the control process is executed by the server, and the terminal displays the process of the target virtual object executing the action; alternatively, the target virtual object is directly controlled by the terminal to execute the action, which is not limited in the embodiment of the present application.
- the corresponding relationship between the first target expression and the action is set by the technician according to the actual situation.
- the terminal controls the target virtual object to execute the action corresponding to the first target expression.
- in addition to displaying the first target expression in the virtual scene, the terminal can also control the target virtual object to perform the corresponding action, which enriches the display effect of the first target expression and improves the user's gaming experience.
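The technician-configured correspondence between expressions and actions can be modeled as a simple lookup table; the expression and action names below are invented for illustration and are not from the patent:

```python
# Hypothetical expression -> action correspondence, set in advance by technicians.
EXPRESSION_ACTIONS = {
    "laugh": "dance",
    "angry": "stomp",
    "cry": "kneel",
}

def act_on_expression(expression, default="idle"):
    """Return the action the target virtual object should perform for an expression."""
    return EXPRESSION_ACTIONS.get(expression, default)

assert act_on_expression("laugh") == "dance"
assert act_on_expression("unknown") == "idle"
```

A fallback action keeps the object's behavior defined even for expressions with no configured correspondence.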
- in response to the selection operation on the first target emoticon among the plurality of first candidate emoticons, the terminal controls the controlled virtual object to move to the first target position, and displays the first target emoticon at the first target position.
- the controlled virtual object is the virtual object controlled by the user logged in to the terminal.
- "control" in the terminal's control of the controlled virtual object refers to display.
- the control process is executed by the server.
- the terminal displays the process of the controlled virtual object executing the action, or the controlled virtual object is directly controlled by the terminal to execute the action.
- the terminal can also control the controlled virtual object to move to the display position of the first target emoticon, so as to enrich the effect when the first target emoticon is displayed.
- Figure 8 shows a logic block diagram of the above steps 401-403. Referring to Figure 8, after entering the game, if the user's drag operation on the emoticon adding icon is detected, the emoticon wheel, which is the emoticon selection area, is displayed. If a click operation on any emoticon in the emoticon wheel is detected, the clicked emoticon is displayed at the position corresponding to the drag operation. If the user's drag operation on the emoticon adding icon is not detected, the emoticon wheel is not displayed; if the user's click operation in the emoticon wheel is not detected, the emoticon is not displayed.
- the user can trigger the display of the expression selection area by dragging the expression adding icon, so that expressions can be selected in the expression selection area, and the selected first target expression is displayed at the first target position in the virtual scene.
- the first target position is the position where the dragging operation ends
- the display position of the first target emoticon can be changed by adjusting the dragging operation, the operation is simple and convenient, and the human-computer interaction efficiency is high.
- the emoticon can be sent by dragging the emoticon adding icon, and then selecting the first target emoticon to be displayed in the displayed emoticon selection area.
- this avoids first calling up the chat window, opening the emoticon selection panel in the chat window, selecting the emoticon in the panel, and then clicking the send control in the chat window to send the emoticon.
- the operation is simple and convenient, and the efficiency of human-computer interaction is improved.
- this application also provides another method for displaying facial expressions in a virtual scene.
- the computer device is configured as a terminal, and the terminal is used as an execution subject as an example. See Figure 9.
- the method includes:
- the terminal displays a virtual scene, and an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene.
- the process of displaying the virtual scene by the terminal belongs to the same inventive concept as the above-mentioned step 401.
- For the implementation process, refer to the relevant description of the above-mentioned step 401, which will not be repeated here.
- in response to a click operation on the emoticon adding icon, the terminal displays an emoticon selection area in the virtual scene, and multiple first candidate emoticons are displayed in the emoticon selection area.
- in response to a click operation on the emoticon adding icon, the terminal displays an emoticon selection area at a fourth target position in the virtual scene, the fourth target position being a position adjacent to the emoticon adding icon.
- in response to the click operation on the emoticon adding icon 1002 in the virtual scene 1001, the terminal displays the emoticon selection area 1102 at the fourth target position of the virtual scene 1101, in which a plurality of first candidate emoticons are displayed.
- the emoticon selection area includes a first sub-area and a second sub-area
- the first sub-area is used to display the type icon of the emoticon
- in response to a drag operation on the emoticon adding icon, the terminal displays a first sub-area and a second sub-area at the first target position.
- in response to a selection operation on a first-type icon in the first sub-area, the terminal displays a plurality of first candidate emoticons corresponding to the first-type icon in the second sub-area.
- the type of expression displayed in the second sub-area can be switched by triggering the first sub-area, the operation is simple and convenient, and the human-computer interaction efficiency is high.
- the user can select different types of emoticons by triggering different types of icons, the operation is simple and convenient, and the efficiency of human-computer interaction is high.
- the emoticon selection area 1102 includes a first sub-area 1103 and a second sub-area 1104 , and a plurality of type icons are displayed in the first sub-area 1103 .
- the terminal displays a plurality of first emoticons to be selected corresponding to the first type icon 1105 in the second subarea 1104 .
- in response to a selection operation on a second-type icon in the first sub-area, the terminal switches the plurality of first candidate emoticons in the second sub-area to a plurality of second candidate emoticons, the second candidate emoticons being the emoticons corresponding to the second-type icon.
- the first subarea is a circular area
- the second sub-area is an annular area surrounding the first sub-area
- the first subarea and the second subarea have the same circle center.
- in response to a click operation on the second-type icon 1106 in the first sub-area 1103, the terminal displays a plurality of second candidate emoticons corresponding to the second-type icon 1106 in the second sub-area 1104.
- the emoticon selection area includes multiple sub-areas, and multiple first candidate emoticons are displayed in the multiple sub-areas respectively.
- by dividing the emoticon selection area into multiple sub-areas, the terminal can display the first candidate emoticons in the multiple sub-areas, so that different sub-areas separate the multiple first candidate emoticons; the user can intuitively view and select the desired first candidate expression in different sub-areas, with high viewing efficiency, a simple and fast selection method, and high human-computer interaction efficiency.
- the emoticon selection area is a circular area, one sub-area is a part of the circular area, and a plurality of type icons corresponding to the first emoticons to be selected are displayed in the center of the circular area.
- the expression selection area is a rotatable area. In response to a sliding operation on the expression selection area, the terminal controls the expression selection area to rotate in the direction of the sliding operation; the first candidate expressions rotate accordingly, so the user can rotate a desired first candidate expression into view and then select it.
- the expression selection area is also called Emoticon Roulette.
- the type icons displayed in the center of the circular area are used to represent the types of the multiple first candidate emoticons displayed in the sub-area, and the user can determine the types of the multiple first candidate emoticons by viewing the type icons.
- At least one first virtual object is displayed in the virtual scene, and the first candidate expression is an expression corresponding to the first virtual object.
- the first virtual object is a virtual object controlled by the user logged in to the terminal.
- in response to the user controlling any first virtual object to enter the field, the terminal adds, in the expression selection area, the first candidate expression corresponding to that first virtual object.
- the expressions displayed in the expression selection area are diverse and random, and the user can quickly send an expression based on the expression selection area; compared with the method of setting only a few fixed expressions, the human-computer interaction is more efficient.
- in response to a drag operation on a second target emoticon among the plurality of first candidate emoticons, the terminal displays the second target emoticon at a second target position, where the second target position is the position where the drag operation ends.
- the second target location is the location where the drag operation ends. That is, the user can perform a drag operation on the first candidate emoticon in the emoticon selection area, and the dragged first candidate emoticon can be regarded as the selected second target emoticon. The user can control the position where the terminal displays the second target emoticon through the position where the dragging operation ends.
- the drag operation on the second target emoticon means that the user places a finger at the position corresponding to the second target emoticon, presses down, and drags the finger on the screen; to end the drag operation, the user only needs to lift the finger.
- in response to detecting that the click operation on the second target emoticon meets the target condition, the terminal sets the second target emoticon to a draggable state, where the draggable state means that the second target emoticon can be moved by the drag operation.
- the terminal obtains the position of the drag operation on the screen in real time, and displays the second target emoticon at the current position of the drag operation; from the user's point of view, the second target emoticon in the draggable state is always located under the finger.
- the terminal displays the second target emoticon at the position where the drag operation ends.
- that the click operation on the second target emoticon meets the target condition means that the duration of the click operation on the second target emoticon is greater than or equal to a time threshold, or that the force of the click operation on the second target emoticon is greater than or equal to a force threshold.
- the time threshold and the strength threshold are set by technicians according to actual conditions, which are not limited in this embodiment of the present application.
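The target condition for entering the draggable state, as described, is a disjunction of a duration check and a force check; a sketch with illustrative threshold values (the actual thresholds are set by technicians):

```python
def is_draggable(press_duration_s, press_force,
                 time_threshold=0.5, force_threshold=0.8):
    """True when a press on the second target emoticon meets the target condition:
    long enough (duration >= time threshold) OR firm enough (force >= force threshold)."""
    return press_duration_s >= time_threshold or press_force >= force_threshold

assert is_draggable(0.6, 0.1)       # long press
assert is_draggable(0.1, 0.9)       # firm press
assert not is_draggable(0.1, 0.1)   # quick light tap does not enter draggable state
```

Either branch alone suffices, so devices without pressure sensing can rely on the duration check only.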
- the terminal in response to the second target emoticon 1202 in the virtual scene 1201 being dragged to the second target position 1203 , the terminal displays the second target emoticon 1202 on the second target position 1203 .
- a plurality of first virtual objects are displayed in the virtual scene; in response to the second target expression being dragged to a third target position, and the distance between any first virtual object and the third target position meeting the target condition, the terminal displays the second target emoticon at the second target position where that first virtual object is located.
- the terminal can display the second target expression on top of the first virtual object that meets the target condition, which can indicate that the second target expression is an expression made by that first virtual object, enriching the display of the second target expression and facilitating the user in conveying information through it. Since the drag operation is simple to execute, the efficiency of sending emoticons is improved, thereby improving the efficiency of human-computer interaction.
- a first virtual object is displayed in the virtual scene; in response to the second target emoticon being dragged to the third target position, the terminal determines the distance between the first virtual object and the third target position. When that distance is less than or equal to a distance threshold, the distance between the first virtual object and the third target position meets the target condition, and the terminal displays the second target expression at the second target position where the first virtual object is located. From the user's point of view, when the second target emoticon is dragged near the first virtual object, the second target emoticon is displayed above the first virtual object.
- the above description is based on the example of one virtual object displayed in the virtual scene.
- multiple first virtual objects can be displayed in the virtual scene.
- the terminal determines the distances between the multiple first virtual objects and the third target position, determines the first virtual object with the smallest distance to the third target position among them as the first virtual object whose distance meets the target condition, and displays the second target expression above that first virtual object.
- the user can use this display method to control the terminal to display the second target emoticon on top of different first virtual objects, so as to convey different information.
- in response to the first virtual object moving in the virtual scene, the terminal can adjust the display position of the second target emoticon, so that the second target emoticon is always displayed above the first virtual object.
- the terminal displays the second target emoticon above the first virtual object, so that when the first virtual object moves in the virtual scene, the second target expression moves along with it, achieving the effect that the second target expression is always kept above the first virtual object.
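Selecting the first virtual object to attach the emoticon to, as described above, amounts to a nearest-neighbor search gated by a distance threshold; names and values below are illustrative assumptions:

```python
import math

def attach_target(objects, drop_pos, threshold=80.0):
    """objects: name -> (x, y) positions of the first virtual objects.
    Returns the name of the nearest object whose distance to the drop
    position is within the threshold, or None if no object qualifies."""
    best, best_d = None, threshold
    for name, pos in objects.items():
        d = math.dist(pos, drop_pos)
        if d <= best_d:
            best, best_d = name, d
    return best

objs = {"knight": (100, 100), "mage": (300, 120)}
assert attach_target(objs, (110, 95)) == "knight"   # dropped near the knight
assert attach_target(objs, (500, 500)) is None      # nothing within the threshold
```

Once an object is chosen, the emoticon's anchor can simply be re-read from that object's position each frame, giving the "follows the object" behavior described above.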
- a plurality of first virtual objects are displayed in the virtual scene; in response to the second target expression being dragged to the third target position, and the distances between at least two first virtual objects and the third target position meeting the target condition, the second target emoticon is displayed at the second target position where the second virtual object is located, the second virtual object being the first virtual object controlled by the terminal among the at least two first virtual objects.
- the terminal determines the distance between the plurality of first virtual objects in the virtual scene and the third target position.
- the terminal determines that the distances between the at least two first virtual objects and the third target position meet the target condition, and determines the second virtual object from among the at least two first virtual objects; the second virtual object is the first virtual object controlled by the terminal, that is, the first virtual object controlled by the player.
- the terminal displays the second target emoticon on the second target position where the second virtual object is located, which can achieve the effect of displaying the second target emoticon on top of the virtual object controlled by the terminal.
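When at least two objects satisfy the distance condition, the embodiment displays the emoticon over the terminal-controlled one; a minimal sketch under that assumption (the dictionary fields are invented for illustration):

```python
import math

def pick_display_object(objects, target_pos, threshold):
    """Among objects within the threshold of the drag-end position,
    prefer the one controlled by this terminal (the player's object)."""
    candidates = [
        obj for obj in objects
        if math.hypot(obj["pos"][0] - target_pos[0],
                      obj["pos"][1] - target_pos[1]) <= threshold
    ]
    if not candidates:
        return None
    for obj in candidates:
        if obj["controlled_by_terminal"]:
            return obj  # the "second virtual object" of the embodiment
    return candidates[0]  # fallback: any candidate within the threshold
```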
- in response to the second virtual object moving in the virtual scene, the terminal adjusts the display position of the second target emoticon so that the second target emoticon is always displayed above the second virtual object.
- because the terminal displays the second target emoticon above the second virtual object, when the second virtual object moves in the virtual scene, the second target emoticon moves along with it, achieving the effect that the second target emoticon is always kept above the second virtual object.
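Keeping the emoticon above a moving object amounts to recomputing its display position from the object's position whenever the object moves; a sketch (the vertical offset value is an assumption):

```python
def emoticon_position(object_pos, vertical_offset=1.5):
    """Anchor the emoticon directly above the tracked virtual object so
    that it follows the object as the object moves."""
    x, y = object_pos
    return (x, y + vertical_offset)

# Re-evaluated whenever the tracked object moves along its path:
path = [(0, 0), (2, 0), (2, 3)]
positions = [emoticon_position(p) for p in path]
```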
- in response to the selection operation on the second target emoticon among the plurality of first candidate emoticons, the terminal controls the controlled virtual object to move to the second target position, and displays the second target emoticon at the second target position.
- the controlled virtual object is the virtual object controlled by the user login terminal.
- here, the terminal's "control" of the controlled virtual object refers to display: the control process may be executed by the server, with the terminal displaying the controlled virtual object performing the action, or the controlled virtual object may be controlled directly by the terminal to perform the action; this is not limited. When displaying the first target emoticon, the terminal can also control the controlled virtual object to move to the display position of the first target emoticon, thereby enriching the display effect of the first target emoticon.
- multiple user avatars are displayed in the virtual scene, and in response to the third target emoticon among the multiple first candidate emoticons being dragged to the position where the target user's avatar is located, the terminal sends the third target emoticon to the terminal corresponding to the target user's avatar, where the target user's avatar is any one of the multiple user avatars.
- that is, in response to the third target emoticon being dragged to the position where the target user's avatar is located, the terminal sends the third target emoticon, for example for display in a chat box in the virtual scene, and the recipient of the third target emoticon is the terminal of the target user.
- the user can trigger display of the expression selection area by dragging the expression adding icon, select an expression in that area, and the selected first target expression is then displayed at the first target position in the virtual scene.
- since the first target position is the position where the drag operation ends, the display position of the first target expression can be changed by adjusting the drag operation; the operation is simple and convenient, and human-computer interaction efficiency is high.
- for example, an emoticon can be sent by dragging the expression adding icon and then selecting the first target emoticon in the displayed expression selection area. Compared with first calling up a chat window, invoking the emoticon selection panel in that window, selecting an emoticon, and then clicking the send control of the chat window, this operation is simpler and more convenient and improves the efficiency of human-computer interaction.
- FIG. 13 is a schematic structural diagram of a device for displaying expressions in a virtual scene provided by an embodiment of the present application.
- the device includes: a scene display module 1301 , an area display module 1302 and an expression display module 1303 .
- the scene display module 1301 is used for displaying a virtual scene, and an expression adding icon is displayed in the virtual scene, and the expression adding icon is used for adding an expression in the virtual scene.
- the area display module 1302 is configured to display an expression selection area at the first target position in the virtual scene in response to a drag operation on the expression adding icon, where the first target position is the position where the drag operation ends, and the expression selection area displays There are multiple first candidate expressions.
- the emoticon display module 1303 is configured to display the first target emoticon in the virtual scene in response to a selection operation on the first target emoticon among the plurality of first candidate emoticons.
- the emoticon display module 1303 is configured to display the first target emoticon at the first target position in the virtual scene in response to a selection operation on the first target emoticon among the plurality of first candidate emoticons.
- a controlled virtual object is displayed in the virtual scene, and the target expression display module 1303 is configured to control the movement of the controlled virtual object in response to the selection operation of the first target expression among the plurality of first candidate expressions. to the first target position.
- the emoticon display module 1303 is configured to play an animation corresponding to the first target emoticon in the virtual scene in response to a selection operation on the first target emoticon among the plurality of first candidate emoticons.
- the emoticon selection area includes a first sub-area and a second sub-area
- the first sub-area is used to display the type icons of emoticons
- one type icon corresponds to multiple emoticons
- the emoticon display module 1303 is used to display the first sub-area and the second sub-area at the first target position in the virtual scene.
- in response to a selection operation on a first type icon in the first sub-area, a plurality of first candidate emoticons corresponding to the first type icon are displayed in the second sub-area.
- the device also includes:
- the emoticon switching module is used to switch the plurality of first candidate emoticons displayed in the second sub-area to a plurality of second candidate emoticons in response to a selection operation on a second type icon in the first sub-area,
- where a second candidate emoticon is an emoticon corresponding to the second type icon.
- the emoticon selection area includes multiple sub-areas, and multiple first candidate emoticons are displayed in the multiple sub-areas respectively.
- the emoticon selection area is a circular area, one sub-area is a sector of the circular area, and type icons corresponding to the plurality of first candidate emoticons are displayed at the center of the circular area.
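Hit-testing which pie-slice sub-area of such a circular selection area a touch falls in can be done from the touch point's angle about the center; a sketch assuming equal sectors starting at the positive x-axis:

```python
import math

def sector_index(center, touch, num_sectors):
    """Return which sector sub-area of the circular selection area a
    touch point falls in, with sector 0 starting at the positive x-axis."""
    angle = math.atan2(touch[1] - center[1], touch[0] - center[0])
    if angle < 0:
        angle += 2 * math.pi  # normalize to [0, 2*pi)
    return int(angle / (2 * math.pi / num_sectors)) % num_sectors
```

Each sector index then maps to one of the first candidate emoticons displayed in that sub-area.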
- At least one first virtual object is displayed in the virtual scene, and the first candidate expression is an expression corresponding to the first virtual object.
- the device also includes:
- a control module configured to control the target virtual object to perform an action corresponding to the first target expression, where the target virtual object is the first virtual object corresponding to the first target expression among the at least one first virtual object.
- when the device for displaying emoticons in a virtual scene provided by the above embodiments displays emoticons, the division into the above functional modules is used only as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above.
- the device for displaying emoticons in a virtual scene provided by the above-mentioned embodiments belongs to the same idea as the method embodiment for displaying emoticons in a virtual scene. For the specific implementation process, please refer to the method embodiment for details, and will not be repeated here.
- the user can trigger display of the expression selection area by dragging the expression adding icon, select an expression in that area, and the selected first target expression is then displayed at the first target position in the virtual scene.
- since the first target position is the position where the drag operation ends, the display position of the first target expression can be changed by adjusting the drag operation; the operation is simple and convenient, and human-computer interaction efficiency is high.
- for example, an emoticon can be sent by dragging the expression adding icon and then selecting the first target emoticon in the displayed expression selection area. Compared with first calling up a chat window, invoking the emoticon selection panel in that window, selecting an emoticon, and then clicking the send control of the chat window, this operation is simpler and more convenient and improves the efficiency of human-computer interaction.
- FIG. 14 is a schematic structural diagram of another device for displaying expressions in a virtual scene provided by an embodiment of the present application.
- the device includes: a scene display module 1401 , an area display module 1402 and an expression display module 1403 .
- the scene display module 1401 is used to display a virtual scene, and an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
- the area display module 1402 is used to display the expression selection area in the virtual scene in response to the click operation of the expression adding icon, and display a plurality of first candidate emoticons in the expression selection area;
- An emoticon display module 1403, configured to display the second target emoticon at a second target position in response to a drag operation on the second target emoticon among the plurality of first candidate emoticons, where the second target position is the position where the drag operation ends.
- multiple first virtual objects are displayed in the virtual scene, and the expression display module 1403 is configured to perform any of the following:
- the second target emoticon is displayed at the second target position;
- in response to the second target expression being dragged to the third target position, and the distances between at least two of the first virtual objects and the third target position meeting the target condition, the second target expression is displayed at the second target position where the second virtual object is located, and the second virtual object is a first virtual object controlled by a computer device among the at least two first virtual objects.
- multiple user avatars are displayed in the virtual scene, and the device also includes:
- the emoticon sending module is used to send the third target emoticon to the terminal corresponding to the target user's avatar in response to the third target emoticon among the multiple first candidate emoticons being dragged to the position where the target user's avatar is located;
- the target user's avatar is any one of the plurality of user avatars.
- when the device for displaying emoticons in a virtual scene provided by the above embodiments displays emoticons, the division into the above functional modules is used only as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above.
- the device for displaying emoticons in a virtual scene provided by the above-mentioned embodiments belongs to the same idea as the method embodiment for displaying emoticons in a virtual scene. For the specific implementation process, please refer to the method embodiment for details, and will not be repeated here.
- the user can trigger display of the expression selection area by dragging the expression adding icon, select an expression in that area, and the selected first target expression is then displayed at the first target position in the virtual scene.
- since the first target position is the position where the drag operation ends, the display position of the first target expression can be changed by adjusting the drag operation; the operation is simple and convenient, and human-computer interaction efficiency is high.
- for example, an emoticon can be sent by dragging the expression adding icon and then selecting the first target emoticon in the displayed expression selection area. Compared with first calling up a chat window, invoking the emoticon selection panel in that window, selecting an emoticon, and then clicking the send control of the chat window, this operation is simpler and more convenient and improves the efficiency of human-computer interaction.
- An embodiment of the present application provides a computer device for performing the above method.
- the computer device can be implemented as a terminal.
- the structure of the terminal is introduced below:
- FIG. 15 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
- the terminal 1500 may be: a smart phone, a tablet computer, a notebook computer or a desktop computer.
- the terminal 1500 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
- the terminal 1500 includes: one or more processors 1501 and one or more memories 1502 .
- the processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
- the processor 1501 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array).
- the processor 1501 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state.
- the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content to be displayed on the display screen.
- the processor 1501 may also include an AI (Artificial Intelligence) processor, where the AI processor is used to process computing operations related to machine learning.
- Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1502 is used to store at least one computer program, and the at least one computer program is to be executed by the processor 1501 to implement the method for displaying facial expressions in a virtual scene provided by the method embodiments in this application.
- the terminal 1500 may optionally further include: a peripheral device interface 1503 and at least one peripheral device.
- the processor 1501, the memory 1502, and the peripheral device interface 1503 may be connected through buses or signal lines.
- Each peripheral device can be connected to the peripheral device interface 1503 through a bus, a signal line or a circuit board.
- the peripheral device includes: at least one of a display screen 1505 , an audio circuit 1507 and a power supply 1509 .
- the peripheral device interface 1503 may be used to connect at least one peripheral device related to I/O (Input/Output, input/output) to the processor 1501 and the memory 1502 .
- in some embodiments, the processor 1501, the memory 1502, and the peripheral device interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral device interface 1503 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the display screen 1505 is used to display a UI (User Interface, user interface).
- the UI can include graphics, text, icons, video, and any combination thereof.
- the display screen 1505 also has the ability to collect touch signals on or above the surface of the display screen 1505 .
- the touch signal can be input to the processor 1501 as a control signal for processing.
- the display screen 1505 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- Audio circuitry 1507 may include a microphone and speakers.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1501 for processing.
- the power supply 1509 is used to supply power to various components in the terminal 1500 .
- Power source 1509 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
- the structure shown in FIG. 15 does not constitute a limitation to the terminal 1500, and the terminal may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- a computer-readable storage medium, such as a memory including a computer program, is also provided,
- and the above computer program can be executed by a processor to complete the method for displaying expressions in a virtual scene in the above embodiments.
- the computer-readable storage medium can be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
- a computer program product or computer program is also provided; the computer program product or computer program includes program code stored in a computer-readable storage medium. The processor of the computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above method for displaying facial expressions in a virtual scene.
Abstract
A method, apparatus, device, and medium for displaying expressions in a virtual scene, belonging to the field of computer technology. The method includes: displaying (301) a virtual scene; in response to a drag operation on an expression adding icon, displaying (302) an expression selection area at a first target position in the virtual scene; and in response to a selection operation on a first target expression among a plurality of first candidate expressions, displaying (303) the first target expression in the virtual scene.
Description
This application claims priority to Chinese Patent Application No. 202110580625.X, filed on May 26, 2021 and entitled "Method, Apparatus, Device, and Medium for Displaying Expressions in a Virtual Scene", the entire contents of which are incorporated herein by reference.
This application relates to the field of computer technology, and in particular to a method, apparatus, device, and medium for displaying expressions in a virtual scene.
With the development of multimedia technology, more and more kinds of games can be played. Auto chess is a popular kind of game in which different virtual objects can battle in a virtual scene during play.
SUMMARY
The embodiments of this application provide a method, apparatus, device, and medium for displaying expressions in a virtual scene. The technical solutions are as follows:
In one aspect, a method for displaying expressions in a virtual scene is provided, the method including:
displaying a virtual scene, where an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
in response to a drag operation on the expression adding icon, displaying an expression selection area at a first target position in the virtual scene, where the first target position is the position where the drag operation ends, and a plurality of first candidate expressions are displayed in the expression selection area; and
in response to a selection operation on a first target expression among the plurality of first candidate expressions, displaying the first target expression in the virtual scene.
In one aspect, a method for displaying expressions in a virtual scene is provided, the method including:
displaying a virtual scene, where an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
in response to a click operation on the expression adding icon, displaying the expression selection area in the virtual scene, where a plurality of first candidate expressions are displayed in the expression selection area; and
in response to a drag operation on a second target expression among the plurality of first candidate expressions, displaying the second target expression at a second target position, where the second target position is the position where the drag operation ends.
In one aspect, an apparatus for displaying expressions in a virtual scene is provided, the apparatus including:
a scene display module, configured to display a virtual scene, where an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
an area display module, configured to display an expression selection area at a first target position in the virtual scene in response to a drag operation on the expression adding icon, where the first target position is the position where the drag operation ends, and a plurality of first candidate expressions are displayed in the expression selection area; and
an expression display module, configured to display a first target expression in the virtual scene in response to a selection operation on the first target expression among the plurality of first candidate expressions.
In one aspect, an apparatus for displaying expressions in a virtual scene is provided, the apparatus including:
a scene display module, configured to display a virtual scene, where an expression adding icon is displayed in the virtual scene, and the expression adding icon is used to add an expression in the virtual scene;
an expression selection area display module, configured to display the expression selection area in the virtual scene in response to a click operation on the expression adding icon, where a plurality of first candidate expressions are displayed in the expression selection area; and
a target expression display module, configured to display a second target expression at a second target position in response to a drag operation on the second target expression among the plurality of first candidate expressions, where the second target position is the position where the drag operation ends.
In one aspect, a computer device is provided, the computer device including one or more processors and one or more memories, where at least one computer program is stored in the one or more memories, and the computer program is loaded and executed by the one or more processors to implement the method for displaying expressions in a virtual scene.
In one aspect, a computer-readable storage medium is provided, where at least one computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by a processor to implement the method for displaying expressions in a virtual scene.
In one aspect, a computer program product or computer program is provided, the computer program product or computer program including program code stored in a computer-readable storage medium; a processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the above method for displaying expressions in a virtual scene.
With the technical solutions provided by the embodiments of this application, the user can trigger display of the expression selection area by dragging the expression adding icon, select an expression in that area, and the selected first target expression is then displayed at the first target position in the virtual scene. Since the first target position is the position where the drag operation ends, the display position of the first target expression can be changed by adjusting the drag operation; the operation is simple and convenient, and human-computer interaction efficiency is high. For example, when the user wants to send an expression in the virtual scene, the user drags the expression adding icon and then selects the first target expression in the displayed expression selection area. Compared with first calling up a chat window, invoking the expression selection panel in that window, selecting an expression, and then clicking the send control of the chat window, this operation is simpler and more convenient and improves the efficiency of human-computer interaction.
FIG. 1 is a schematic diagram of an implementation environment of a method for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 2 is a schematic diagram of an interface provided by an embodiment of this application;
FIG. 3 is a flowchart of a method for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 4 is a flowchart of another method for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 5 is a schematic diagram of another interface provided by an embodiment of this application;
FIG. 6 is a schematic diagram of another interface provided by an embodiment of this application;
FIG. 7 is a schematic diagram of another interface provided by an embodiment of this application;
FIG. 8 is a logic block diagram of a method for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 9 is a flowchart of another method for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 10 is a schematic diagram of another interface provided by an embodiment of this application;
FIG. 11 is a schematic diagram of another interface provided by an embodiment of this application;
FIG. 12 is a schematic diagram of a functional module division provided by an embodiment of this application;
FIG. 13 is a schematic structural diagram of an apparatus for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 14 is a schematic structural diagram of another apparatus for displaying expressions in a virtual scene provided by an embodiment of this application;
FIG. 15 is a schematic structural diagram of a terminal provided by an embodiment of this application.
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are described in further detail below with reference to the accompanying drawings.
In this application, the terms "first", "second", and the like are used to distinguish identical or similar items with substantially the same roles and functions. It should be understood that "first", "second", and "nth" have no logical or temporal dependency on one another and do not limit quantity or execution order.
In this application, "at least one" means one or more, and "multiple" means two or more; for example, multiple sub-areas means two or more sub-areas.
In the related art, when playing an auto chess game, a user who wants to send an expression has to call up a chat window in the game, invoke the expression selection panel in the chat window, select an expression in the panel, and then click the send control of the chat window before the expression can be sent.
In this case, the steps for sending an expression are cumbersome, resulting in low human-computer interaction efficiency.
First, the terms involved in the embodiments of this application are introduced:
Auto chess: a new type of multiplayer battle strategy game in which users cultivate their own chess pieces and assemble a lineup of pieces to face an opponent's lineup. During a battle, the losing side loses virtual health points, and rankings are determined by the order of elimination.
Chess pieces: distinct combat units; users can equip, upgrade, buy, and sell pieces. Pieces are mostly obtained by refreshing the piece pool, with a small portion coming from "drafts" and battle events.
Traits: the different classifications of pieces (generally two kinds of trait: class and race). When a certain number of pieces with the same trait are in the battle area, the trait's special ability is activated, giving the pieces greater combat power.
Buying and selling: users spend virtual resources to obtain pieces, and selling pieces returns virtual resources (at a certain discount).
FIG. 1 is a schematic diagram of an implementation environment of a method for displaying expressions in a virtual scene provided by an embodiment of this application. Referring to FIG. 1, the implementation environment includes a terminal 110 and a server 140.
The terminal 110 is connected to the server 140 through a wireless or wired network. Optionally, the terminal 110 is a smartphone, a tablet computer, a notebook computer, a desktop computer, or the like, but is not limited thereto. The terminal 110 has installed and runs an application that supports virtual scene display.
Optionally, the server 140 is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and big data and artificial intelligence platforms.
Optionally, the terminal 110 generally refers to one of multiple terminals; the embodiments of this application use only the terminal 110 as an example.
Those skilled in the art will appreciate that the number of terminals may be larger or smaller. For example, there may be only one terminal, or dozens, hundreds, or more, in which case the implementation environment also includes other terminals. The embodiments of this application do not limit the number or device types of the terminals.
Having introduced the implementation environment of the embodiments of this application, the application scenarios of the embodiments are introduced below.
The technical solutions provided by the embodiments of this application can be applied to auto chess games. Optionally, in an auto chess game, the virtual objects are the chess pieces, and the classification icons are the trait icons. To explain the technical solutions provided by the embodiments of this application more clearly, some relevant aspects of auto chess games are introduced below:
Auto chess is a turn-based game whose battle process is divided into multiple rounds; between rounds, the user can upgrade pieces and adjust their positions. Battles are divided into battles between different users and battles between a user and a non-player-controlled character (Non-Player-Controlled Character, NPC). In each round, the winning user obtains a larger amount of virtual resources, while the losing user obtains only a smaller amount. Optionally, the virtual resource is in-game gold. The losing user is also deducted a certain amount of virtual health points; when any user's virtual health points drop to 0, that user is eliminated.
In some embodiments, a user who wins consecutively obtains a win-streak reward, that is, extra virtual resources; of course, to balance the game, a user who loses consecutively can also obtain a loss-streak reward. In rounds against NPCs, a user who defeats the NPC can obtain virtual equipment, which improves a piece's attributes and thus its combat ability.
Pieces in auto chess differ in quality, and this difference is usually described by star levels. In some embodiments, pieces typically have three star levels. For a given piece, a 3-star piece is stronger in combat than a 2-star piece, and a 2-star piece is stronger than a 1-star piece; raising the star level of pieces is also how the user raises their own combat power. Optionally, the pieces a user obtains initially are all 1-star; as the game progresses, the user can raise their star levels. In some embodiments, three identical 1-star pieces can be combined into one 2-star piece, and three identical 2-star pieces can be combined into one 3-star piece. The user can buy pieces through the shop, that is, spend a certain amount of virtual resources in the shop in exchange for a piece. In some embodiments, the shop offers only 1-star pieces.
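The merge rule described above (three identical pieces of one star level combine into one piece of the next level, possibly cascading) can be sketched as follows; the piece names are illustrative:

```python
from collections import Counter

def merge_pieces(pieces, max_star=3):
    """pieces: list of (name, star) tuples owned by the player.
    Repeatedly merge any three identical pieces into one piece of the
    next star level, cascading until no further merge is possible."""
    counts = Counter(pieces)
    merged = True
    while merged:
        merged = False
        for (name, star), n in list(counts.items()):
            if n >= 3 and star < max_star:
                counts[(name, star)] -= 3
                counts[(name, star + 1)] += 1
                merged = True
    return sorted(p for p, n in counts.items() for _ in range(n))
```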
Pieces in auto chess have different traits (types); for example, one piece's trait may be guardian while another's may be mage. When multiple pieces with the same trait are fielded at the same time, they obtain an extra attribute bonus. For example, when three pieces that are all guardians are fielded together, all three obtain improved defense; when three pieces that are all mages are fielded together, all three obtain improved spell attack; of course, the three pieces are different pieces. The more pieces of the same trait the user collects, the stronger the pieces' combat ability. For example, when three mage pieces are fielded together, each obtains a spell-attack bonus of 100; when five mage pieces are fielded together, each obtains a spell-attack bonus of 300. That is, besides raising combat power by collecting virtual equipment and raising piece star levels as described above, the user can also raise combat power by collecting pieces with the same trait. In some embodiments, a piece may correspond to two or more traits; for example, a piece may have both the class trait of mage and the faction trait of beast, and the user can assemble piece combinations based on the traits the pieces correspond to.
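The trait activation rule above (a bonus activates once enough distinct fielded pieces share the trait) can be sketched as follows; the field names and thresholds are illustrative assumptions:

```python
def trait_status(board_pieces, trait, threshold):
    """board_pieces: list of dicts with 'name' and 'traits' keys.
    A trait counts distinct piece names on the battlefield; the bonus is
    active once the count reaches the activation threshold."""
    names = {p["name"] for p in board_pieces if trait in p["traits"]}
    count = len(names)
    return count, count >= threshold
```

The returned pair corresponds to the "1/3"-style counter shown beside a trait icon and to whether the icon is colored (active) or gray (inactive).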
FIG. 2 is a schematic diagram of a game interface of an auto chess game. FIG. 2 includes an information prompt area 201, a shop trigger control 202, a battlefield area 203, a preparation area 204, a trait display area 205, an equipment bank 206, and a scoring area 207.
The functions of these areas are introduced below.
The information prompt area 201 is used to display the user's game level and the virtual resources required to raise it. The user's game level determines how many pieces the user can place in the battlefield area 203 at the same time. For example, at the start of the game the user's game level is 1, so the user can place 3 pieces in the battlefield area 203 at the same time to fight other users. To increase the number of pieces placed simultaneously in the battlefield area 203, the user needs to spend virtual resources to raise the game level. In some embodiments, each game level gained allows one more piece to be placed in the battlefield area 203; that is, if the user can place 3 pieces at level 1, then at level 2 the user can place 4 pieces, and so on. In some embodiments, the virtual resources required to raise the game level decrease as the number of rounds increases. For example, if the user's level is 1 at the start of the game, raising it to level 2 in the first round costs 4 gold, while in the second round raising it from level 1 to level 2 costs only 2 gold.
The shop trigger control 202 is used to display, when triggered, the object trading area, that is, the shop area. The shop area offers a variety of pieces, and the user can select the pieces to exchange for there. Exchanging for a piece costs a certain amount of virtual resources. In some embodiments, different pieces cost different amounts of virtual resources; the stronger a piece's combat ability, the more virtual resources it costs. The shop area can be displayed when the user clicks the shop trigger control 202, and can also be displayed automatically at the end of each game round. The types of pieces offered in the shop area are determined randomly by the terminal. The shop area has a refresh control; when the shop does not offer a piece the user wants, the user can click the refresh control and spend a certain amount of virtual resources to update the pieces offered in the shop area.
The battlefield area 203 is where pieces fight; the user can drag pieces into the battlefield area 203 to fight other users' pieces, and can also adjust piece positions there. At the start of each game round, other users' pieces also appear in the battlefield area 203, where the user's pieces fight them.
The preparation area 204 is used to store pieces the user owns but has not yet fielded; fielding means placing a piece in the battlefield area 203. Since raising a piece's star level requires three identical pieces, the preparation area 204 is where such pieces are stored. When the user has collected the number of identical pieces needed for a star upgrade, the pieces are combined and upgraded automatically. For example, if there is a 1-star piece in the battlefield area 203 and an identical 1-star piece in the preparation area 204, then when the user exchanges for another identical 1-star piece from the shop area, the 1-star piece in the preparation area 204 is merged onto the piece in the battlefield area 203: the 1-star piece in the preparation area disappears, the piece in the battlefield area 203 is upgraded from 1 star to 2 stars, and a slot is freed in the preparation area 204.
In some embodiments, the slots for storing pieces in the preparation area 204 are limited; when all slots are occupied, the user cannot place more pieces in the preparation area 204.
The trait display area 205 is used to display classification icons, that is, trait icons, which remind the user of the traits of the pieces they own; the "Yordles" and "Shapeshifters" in FIG. 2 are two such traits. For example, when the user owns a piece with the mage trait, the trait display area can display the icon representing the mage trait. In some embodiments, the icon is gray, indicating that the attribute bonus corresponding to the mage trait is not activated, and the number 1/3 is displayed beside the icon, where 1 is the number of mage-trait pieces in the battlefield area 203 and 3 is the number of pieces required to activate the mage trait's attribute bonus. When there are 3 mage-trait pieces in the battlefield area 203, the mage trait icon changes from gray to colored, indicating that the corresponding attribute bonus is activated.
The equipment bank 206 is for the user to view the types and quantities of virtual equipment they own.
The scoring area 207 is used to display each user's nickname and score. In some embodiments, the avatars in the scoring area 207 are sorted by the users' virtual health points, so a user can determine their current ranking in the game from the position of their own avatar in the scoring area 207.
In the embodiments of this application, the computer device can be configured as a terminal or a server. In some embodiments, the technical solutions provided by the embodiments of this application are implemented with the terminal as the executing entity, or through interaction between the terminal and the server; this is not limited in the embodiments of this application. The following description takes the terminal as the executing entity as an example:
FIG. 3 is a flowchart of a method for displaying expressions in a virtual scene provided by an embodiment of this application. Referring to FIG. 3, the method includes:
301. The terminal displays a virtual scene, where an expression adding icon is displayed in the virtual scene and is used to add an expression in the virtual scene.
The expression adding icon is the icon corresponding to the expression adding control; clicking the expression adding icon is clicking the expression adding control, and dragging the expression adding icon is dragging the expression adding control.
302. In response to a drag operation on the expression adding icon, the terminal displays an expression selection area at a first target position in the virtual scene, where the first target position is the position where the drag operation ends, and a plurality of first candidate expressions are displayed in the expression selection area.
In some embodiments, the first candidate expressions are expressions configured in advance by technicians, for example expressions drawn in advance by artists; or expressions downloaded from a database used to maintain the expressions the user owns; or expressions configured by the user, for example uploaded, exchanged, or drawn by the user; this is not limited in the embodiments of this application. The first candidate expressions are displayed in the expression selection area, where the user can select among them. Displaying the first candidate expressions in the expression selection area lets the user view the multiple first candidate expressions intuitively, improving viewing efficiency and thus the user's information acquisition efficiency.
303. In response to a selection operation on a first target expression among the plurality of first candidate expressions, the terminal displays the first target expression in the virtual scene.
In some embodiments, if any first candidate expression in the expression selection area is clicked, that first candidate expression is the selected first target expression.
With the technical solutions provided by the embodiments of this application, the user can trigger display of the expression selection area by dragging the expression adding icon, select an expression in that area, and the selected first target expression is then displayed at the first target position in the virtual scene. Since the first target position is the position where the drag operation ends, the display position of the first target expression can be changed by adjusting the drag operation; the operation is simple and convenient, and human-computer interaction efficiency is high. For example, when the user wants to send an expression in the virtual scene, the user drags the expression adding icon and then selects the first target expression in the displayed expression selection area. Compared with first calling up a chat window, invoking the expression selection panel in that window, selecting an expression, and then clicking the send control of the chat window, this operation is simpler and more convenient and improves the efficiency of human-computer interaction.
The above steps 301-303 are a brief introduction to the technical solution provided by this application; the technical solution is described in detail below with some examples.
FIG. 4 is a flowchart of another method for displaying expressions in a virtual scene provided by an embodiment of this application. Taking the computer device configured as a terminal and the terminal as the executing entity as an example, referring to FIG. 4, the method includes:
401. The terminal displays a virtual scene, where an expression adding icon is displayed in the virtual scene and is used to add an expression in the virtual scene.
In some embodiments, the virtual scene is the game scene of an auto chess game, and a controlled virtual object, multiple virtual objects of a first type, and multiple virtual objects of a second type are displayed in the virtual scene. The controlled virtual object is the virtual object controlled by the terminal; the user can control it through the terminal to move in the virtual scene. In some embodiments, the controlled virtual object is the user's "avatar" in the virtual scene. The first-type virtual objects are the user's "chess pieces" in the auto chess game, and the second-type virtual objects are the "chess pieces" of other users battling the user, or NPCs (Non-player Character); this is not limited in the embodiments of this application. In some embodiments, before each round begins, the user can adjust the first-type virtual objects in the virtual scene, that is, adjust the fielded pieces.
In some embodiments, in response to the user starting a competitive match, the terminal displays the virtual scene corresponding to the match, where one competitive match is one auto chess game. The terminal displays the expression adding icon at a first icon position in the virtual scene. The first icon position is set by technicians according to the actual situation, or set by the user in the settings interface of the virtual scene; this is not limited in the embodiments of this application. In some embodiments, the terminal displays the expression adding icon in the lower-left corner of the virtual scene.
For example, referring to FIG. 5, the terminal displays a virtual scene 501, and an expression adding icon 502 is displayed in the virtual scene 501.
In some embodiments, before displaying the virtual scene, the terminal can also display an expression settings interface of the virtual scene, in which multiple expressions are displayed; the user can select among them, and the selected expressions are the ones that can be displayed in the virtual scene. In some embodiments, an expression drawing control is also displayed in the expression settings interface; in response to a click operation on the expression drawing control, the terminal displays an expression drawing interface with multiple drawing tools that the user can use to draw in the expression drawing interface. In response to a click operation on the save control in the expression drawing interface, the terminal stores the image in the expression drawing interface as an expression and displays it in the expression settings interface for the user to select. Providing the expression settings and drawing interfaces lets the user customize expressions; since customizing expressions is convenient and the process is simple, human-computer interaction efficiency is high.
402. In response to a drag operation on the expression adding icon, the terminal displays an expression selection area at a first target position in the virtual scene, where the first target position is the position where the drag operation ends, and a plurality of first candidate expressions are displayed in the expression selection area.
In some embodiments, in response to a drag operation on the expression adding icon, the terminal can display the expression selection area at the position where the drag operation ends, which is the first target position. Displaying the expression selection area at the drag-end position lets the user control where the area is displayed through the drag operation; since dragging is simple, the human-computer interaction efficiency of triggering the expression selection area is high.
For example, referring to FIG. 6, an expression adding icon 602 is displayed in a virtual scene 601; in response to a drag operation on the expression adding icon 602, the terminal displays an expression selection area 603 at the position where the drag operation ends, and multiple first candidate expressions 604 are displayed in the expression selection area 603. In some embodiments, in response to the display time of the expression selection area meeting a target time condition, the expression selection area is no longer displayed.
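The drag-end display plus the display-time condition can be sketched as a small state holder; the 3-second timeout is an assumption for illustration, not a value from the embodiment:

```python
class EmoteWheel:
    """Show the selection area at the drag-end position and hide it once
    its on-screen time meets the target time condition (a timeout here)."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.visible = False
        self.position = None
        self._shown_at = None

    def on_drag_end(self, pos, now):
        self.position = pos   # the first target position
        self.visible = True
        self._shown_at = now

    def tick(self, now):
        # Hide once the display time meets the target time condition.
        if self.visible and now - self._shown_at >= self.timeout:
            self.visible = False
```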
In some embodiments, the expression selection area includes a first sub-area and a second sub-area, where the first sub-area is used to display type icons of expressions. In response to a drag operation on the expression adding icon, the terminal displays the first sub-area and the second sub-area at the first target position. In response to a selection operation on a first type icon in the first sub-area, the terminal displays multiple first candidate expressions corresponding to the first type icon in the second sub-area. Dividing the expression selection area into the first and second sub-areas means that triggering the first sub-area switches the type of expressions displayed in the second sub-area; the operation is simple and convenient, and human-computer interaction efficiency is high.
On the basis of the above implementation, in some embodiments, in response to a selection operation on a second type icon in the first sub-area, the terminal switches the multiple first candidate expressions in the second sub-area to multiple second candidate expressions, where the second candidate expressions are expressions corresponding to the second type icon. In some embodiments, the first sub-area is a circular area and the second sub-area is an annular area adjoining the first sub-area, the two sharing the same center. Since the user can select different type icons in the first sub-area to make the terminal display different types of candidate expressions in the second sub-area, the type icons displayed in the first sub-area can also be called the classification icons of the candidate expressions. Setting multiple expressions for each type icon lets the user select different types of expressions by triggering different type icons; the operation is simple and convenient, and human-computer interaction efficiency is high.
In some embodiments, the expression selection area includes multiple sub-areas, and the multiple first candidate expressions are displayed in the multiple sub-areas respectively. Dividing the expression selection area into multiple sub-areas lets the terminal display the first candidate expressions in separate sub-areas, so that different sub-areas separate the expressions and the user can view and select them intuitively; viewing efficiency is high, the selection method is simple and quick, and human-computer interaction efficiency is high.
For example, the expression selection area is a circular area, one sub-area is a part of the circular area, and type icons corresponding to the multiple first candidate expressions are displayed at the center of the circular area. In some embodiments, the expression selection area can be rotated: in response to a slide operation on the expression selection area, the terminal rotates the area in the direction of the slide, so that the user can rotate it by sliding. As the expression selection area rotates, the first candidate expressions rotate with it; the user can rotate the expressions to the desired direction before selecting, and in this case the expression selection area is also called an expression wheel. The type icons displayed at the center of the circular area indicate the types of the first candidate expressions displayed in the sub-areas, and the user can determine those types by viewing the icons.
In some embodiments, at least one first virtual object is displayed in the virtual scene, and the first candidate expressions are expressions corresponding to the first virtual objects. In some embodiments, a first virtual object is a virtual object controlled by the user logged in on the terminal; in response to the user fielding any first virtual object, the terminal adds the first candidate expression corresponding to that first virtual object to the expression selection area. Fielding means controlling the first virtual object to be displayed in the virtual scene, or dragging it from the preparation area 204 to the battlefield area 203, referring to FIG. 2; this is not limited in the embodiments of this application. The correspondence between first virtual objects and first candidate expressions is set by technicians according to the actual situation, or matched by the terminal based on an image recognition algorithm: in response to the first virtual object being fielded, the terminal performs image recognition on it to obtain its label, and based on the label matches in an expression database to obtain the corresponding first candidate expression; the embodiments of this application do not limit how the terminal determines the first candidate expression corresponding to the first virtual object. Adding the first candidate expression corresponding to the first virtual object to the expression selection area means that the terminal adds it to the folder corresponding to the expression selection area, and when displaying the expression selection area the terminal can display that first candidate expression in it.
In some embodiments, the first virtual objects include both virtual objects controlled by the user logged in on the terminal and virtual objects controlled by other users battling that user. In response to any first virtual object being fielded, the terminal adds the corresponding first candidate expression to the expression selection area, where the user can select expressions. The method by which the terminal adds the first candidate expression corresponding to the first virtual object belongs to the same inventive concept as in the above embodiment, and is not limited in the embodiments of this application. Adding the expressions corresponding to fielded first virtual objects makes the expressions displayed in the expression selection area diverse and random, and the user can quickly send expressions through the selection control; compared with setting a few fixed expressions in the expression selection area, human-computer interaction efficiency is higher.
In some embodiments, at least one first virtual object is displayed in the virtual scene, and in response to the expression adding icon being dragged to the position of any first virtual object, the expression selection area is displayed at that position, which is the first target position. An expression selected through this expression selection area is regarded as an expression sent to that first virtual object. With this implementation, the user can decide during the game which virtual object to send an expression to, which provides richer ways of sending expressions and improves the gaming experience; moreover, since dragging the expression selection area to the position of a first virtual object suffices to send it an expression, compared with selecting the first virtual object and then selecting an expression to send, or selecting an expression and then selecting the first virtual object, the operation steps are simple and human-computer interaction efficiency is high.
In some embodiments, multiple user avatars are displayed in the virtual scene, one user avatar corresponding to one user participating in the competitive match. In response to the expression adding icon being dragged to the position of any user avatar, the expression selection area is displayed at that position, which is the first target position. An expression selected through this expression selection area is regarded as an expression sent to the user corresponding to that avatar. With this implementation, the user can decide during the game which user to send an expression to, which provides richer ways of sending expressions and improves the gaming experience.
403. In response to a selection operation on a first target expression among the plurality of first candidate expressions, the terminal displays the first target expression in the virtual scene.
In some embodiments, the first target expression is any one of the plurality of first candidate expressions. In response to the selection operation on the first target expression, the terminal displays an enlarged version of the first target expression in the virtual scene. For example, the first target expression is a vector graphic, and when displaying it the terminal can enlarge it for easy viewing. In some embodiments, the position at which the terminal displays the first target expression in the virtual scene is the first target position; since the expression selection area is also displayed at the first target position, the terminal displays the first target expression at the same position as the expression selection area. In response to the selection operation on the first target expression among the plurality of first candidate expressions, the terminal displays the first target expression at the first target position in the virtual scene.
For example, in response to the first target expression among the plurality of first candidate expressions being clicked, the terminal displays an enlarged version of the first target expression at the first target position in the virtual scene. If the first target position is the position at which the drag operation on the expression-adding icon ends, the terminal can display the first target expression at that end position. From the user's perspective, to display an expression at a specified position in the virtual scene, the user can drag the expression-adding control to that position, and the first target expression selected through the expression selection area is then displayed there. For instance, referring to FIG. 6 and FIG. 7, in response to a click operation on the first target expression 605, the terminal displays an enlarged version of the first target expression 605 at the first target position in the virtual scene 701, that is, displays the enlarged first target expression 702 in the virtual scene 701.
In some embodiments, in response to the selection operation on the first target expression, the terminal plays the animation corresponding to the first target expression in the virtual scene. The animation corresponding to the first target expression is configured by technicians; for example, after producing an expression and its corresponding animation, the technicians bind and store the expression together with the animation. When the first target expression is selected, the terminal can directly load the corresponding animation and play it in the virtual scene.
For example, in response to the first target expression among the plurality of first candidate expressions being clicked, the terminal loads the animation corresponding to the first target expression and plays it at the first target position in the virtual scene. If the first target position is the position at which the drag operation on the expression-adding icon ends, the terminal can play the animation at that end position.
In some embodiments, at least one first virtual object is displayed in the virtual scene, and the first candidate expressions are expressions corresponding to the first virtual objects. The terminal can control a target virtual object to perform an action corresponding to the first target expression, the target virtual object being the first virtual object that corresponds to the first target expression among the at least one first virtual object. Here, "control" means "display": either the control process is performed by the server and the terminal displays the target virtual object performing the action, or the target virtual object is controlled directly by the terminal to perform the action; the embodiments of this application do not limit this. The correspondence between the first target expression and actions is configured by technicians according to actual needs. For example, after producing the first target expression and its corresponding action, the technicians bind and store them; in response to the selection operation on the first target expression, the terminal controls the target virtual object to perform the corresponding action. In this implementation, besides displaying the first target expression in the virtual scene, the terminal can also control the target virtual object to perform the corresponding action, which enriches the display effect of the first target expression and improves the user's gaming experience.
In some embodiments, in response to the selection operation on the first target expression among the plurality of first candidate expressions, the terminal controls a controlled virtual object to move to the first target position and displays the first target expression at the first target position. The controlled virtual object is the virtual object controlled by the user logged in on the terminal. "Control" here again means "display": either the control process is performed by the server and the terminal displays the controlled virtual object performing the action, or the controlled virtual object is controlled directly by the terminal; the embodiments of this application do not limit this. In this implementation, when displaying the first target expression, the terminal can also move the controlled virtual object to the display position of the first target expression, enriching the display effect.
All of the above optional technical solutions can be combined arbitrarily to form optional embodiments of this application, which are not described in detail here one by one.
FIG. 8 shows a logical block diagram of steps 401 to 403 above. Referring to FIG. 8, after entering the game, if a drag operation on the expression-adding icon is detected, the expression wheel, that is, the expression selection area, is displayed. If a click operation on any expression in the wheel is detected, the clicked expression is displayed at the position corresponding to the drag operation. If no drag operation on the expression-adding icon is detected, no expression is displayed; likewise, if no click operation in the expression wheel is detected, no expression is displayed.
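The branching logic of FIG. 8 can be condensed into a small sketch. This is an assumption-laden paraphrase of the figure, not the application's implementation; the function and parameter names are illustrative.

```python
def handle_expression_flow(drag_end_pos, pick):
    """Sketch of the FIG. 8 logic: a drag on the expression-adding icon opens
    the wheel at the drag-end position; a click on a wheel expression shows
    it there; otherwise nothing is displayed.

    `drag_end_pos` is None when no drag was detected; `pick` is the clicked
    expression, or None when no click was detected.
    """
    if drag_end_pos is None:
        return None                     # no drag: no wheel, no expression
    # Drag detected: the wheel is shown at drag_end_pos (the first target position).
    if pick is None:
        return None                     # wheel shown, but nothing clicked
    return {"expression": pick, "position": drag_end_pos}

assert handle_expression_flow(None, "smile") is None
assert handle_expression_flow((120, 80), None) is None
assert handle_expression_flow((120, 80), "smile") == {"expression": "smile", "position": (120, 80)}
```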
With the technical solution provided by the embodiments of this application, the user can trigger the display of the expression selection area by dragging the expression-adding icon, select an expression through the area, and have the selected first target expression displayed at the first target position in the virtual scene. Since the first target position is where the drag operation ends, adjusting the drag operation changes the display position of the first target expression; the operation is simple and convenient, and human-computer interaction is efficient. For example, when the user wants to send an expression in the virtual scene, dragging the expression-adding icon and then choosing the desired first target expression in the displayed expression selection area is enough to send it. Compared with first opening a chat window, invoking an expression selection panel in the window, selecting an expression in the panel, and then clicking the window's send control, the operation is simple and convenient and improves the efficiency of human-computer interaction.
Besides steps 401 to 403 above, this application further provides another method for displaying an expression in a virtual scene. Taking a computer device configured as a terminal, with the terminal as the execution body, as an example and referring to FIG. 9, the method includes:
901. The terminal displays a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene.
The process of displaying the virtual scene belongs to the same inventive concept as step 401 above; for the implementation, see the description of step 401, which is not repeated here.
902. In response to a click operation on the expression-adding icon, the terminal displays an expression selection area in the virtual scene, a plurality of first candidate expressions being displayed in the expression selection area.
In some embodiments, in response to the click operation on the expression-adding icon, the terminal displays the expression selection area at a fourth target position in the virtual scene, the fourth target position being a position adjacent to the expression-adding icon.
For example, referring to FIG. 10 and FIG. 11, in response to a click operation on the expression-adding icon 1002 in the virtual scene 1001, the terminal displays the expression selection area 1102 at the fourth target position in the virtual scene 1101, with a plurality of first candidate expressions displayed in the area.
In some embodiments, the expression selection area includes a first sub-area and a second sub-area, the first sub-area being used for displaying type icons of expressions. In response to a drag operation on the expression-adding icon, the terminal displays the first sub-area and the second sub-area at the first target position. In response to a selection operation on a first type icon in the first sub-area, the terminal displays, in the second sub-area, the plurality of first candidate expressions corresponding to the first type icon. By dividing the expression selection area into the first and second sub-areas, triggering the first sub-area switches the type of expressions displayed in the second sub-area; the operation is simple and human-computer interaction is efficient. In addition, by configuring a plurality of expressions for each type icon, the user can select expressions of different types by triggering different type icons, again with simple operations and efficient human-computer interaction.
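The two-sub-area behavior can be sketched as follows, under the assumption that each type icon maps to a fixed list of expressions; the mapping contents and class name are illustrative, not from the application.

```python
# Hypothetical mapping from type icons (first sub-area) to candidate
# expressions (second sub-area).
TYPE_TO_EXPRESSIONS = {
    "faces":    ["smile", "cry", "angry"],
    "gestures": ["wave", "clap"],
}

class SelectionArea:
    """Expression selection area split into a type-icon sub-area and an
    expression sub-area; selecting a type icon switches the expressions shown."""

    def __init__(self):
        self.first_subarea = list(TYPE_TO_EXPRESSIONS)   # type icons
        self.second_subarea = []                         # candidate expressions

    def select_type(self, type_icon):
        # Switching the selected type replaces the second sub-area's contents.
        self.second_subarea = TYPE_TO_EXPRESSIONS[type_icon]

area = SelectionArea()
area.select_type("faces")
assert area.second_subarea == ["smile", "cry", "angry"]
area.select_type("gestures")  # a second type icon swaps the candidates
assert area.second_subarea == ["wave", "clap"]
```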
For example, referring to FIG. 11, the expression selection area 1102 includes a first sub-area 1103 and a second sub-area 1104, with a plurality of type icons displayed in the first sub-area 1103. In response to a click operation on the first type icon 1105 in the first sub-area 1103, the terminal displays, in the second sub-area 1104, the plurality of first candidate expressions corresponding to the first type icon 1105.
On the basis of the above implementation, in some embodiments, in response to a selection operation on a second type icon in the first sub-area, the terminal switches the plurality of first candidate expressions in the second sub-area to a plurality of second candidate expressions, the second candidate expressions being expressions corresponding to the second type icon. In some embodiments, the first sub-area is a circular area, the second sub-area is an annular area adjoining the first sub-area, and the first and second sub-areas are concentric.
For example, in response to a click operation on the second type icon 1106 in the first sub-area 1103, the terminal displays, in the second sub-area 1104, the plurality of second candidate expressions corresponding to the second type icon 1106.
In some embodiments, the expression selection area includes a plurality of sub-areas, and the plurality of first candidate expressions are respectively displayed in the sub-areas. In this implementation, the terminal displays the first candidate expressions in separate sub-areas, so the sub-areas keep the expressions apart and the user can select the desired first candidate expression from the different sub-areas. Dividing the expression selection area into multiple sub-areas lets the user view and select the first candidate expressions intuitively, with high viewing efficiency and a simple, quick selection method, making human-computer interaction efficient.
For example, the expression selection area is a circular area, each sub-area is a portion of the circular area, and type icons corresponding to the plurality of first candidate expressions are displayed at the center of the circular area. In some embodiments, the expression selection area is rotatable: in response to a swipe operation on the expression selection area, the terminal rotates the area in the direction of the swipe operation, so that the user can rotate the area by swiping it. As the expression selection area rotates, the first candidate expressions rotate with it, and the user can rotate a first candidate expression to the desired orientation before selecting it; in this case, the expression selection area is also referred to as an expression wheel. The type icons displayed at the center of the circular area indicate the types of the plurality of first candidate expressions displayed in the sub-areas, and the user can determine those types by viewing the icons.
In some embodiments, at least one first virtual object is displayed in the virtual scene, and the first candidate expressions are expressions corresponding to the first virtual objects. In some embodiments, a first virtual object is a virtual object controlled by the user logged in on the terminal; in response to the user deploying any first virtual object, the terminal adds the first candidate expression corresponding to that first virtual object to the expression selection area. By adding expressions corresponding to the deployed first virtual objects, the expressions displayed in the expression selection area are diverse and varied, and the user can send expressions quickly through the selection control; compared with a fixed set of expressions in the selection area, human-computer interaction is more efficient.
903. In response to a drag operation on a second target expression among the plurality of first candidate expressions, the terminal displays the second target expression at a second target position, the second target position being the position at which the drag operation ends.
In some embodiments, the second target position is the position at which the drag operation ends. That is, the user can perform a drag operation on a first candidate expression in the expression selection area, and the dragged first candidate expression is regarded as the selected second target expression. The user controls where the terminal displays the second target expression through the position at which the drag operation ends.
In some embodiments, if the terminal is a device with a touch function, such as a mobile phone or tablet, the drag operation on the second target expression consists of the user placing a finger at the position of the second target expression, pressing down, and dragging the finger across the screen; when the user wants to end the drag operation, lifting the finger is enough. For the terminal, in response to detecting that a press operation on the second target expression meets a target condition, the terminal sets the second target expression to a draggable state, meaning that the second target expression moves along with the drag operation. The terminal obtains the position of the drag operation on the screen in real time and displays the second target expression at that position; from the user's perspective, a second target expression in the draggable state always stays under the finger on the screen. In response to detecting that the drag operation ends at the second target position, the terminal displays the second target expression at that end position. The press operation on the second target expression meets the target condition when its duration is greater than or equal to a time threshold, or its force is greater than or equal to a force threshold. The time threshold and force threshold are set by technicians according to actual needs, and the embodiments of this application do not limit them.
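The target condition for entering the draggable state can be sketched as a single predicate. The threshold values below are illustrative assumptions; the application explicitly leaves them to the implementer.

```python
# Illustrative thresholds; the embodiment leaves the actual values open.
TIME_THRESHOLD_S = 0.5
FORCE_THRESHOLD = 0.3

def meets_target_condition(press_duration_s, press_force):
    """A press on the second target expression makes it draggable when its
    duration or its force reaches the configured threshold."""
    return press_duration_s >= TIME_THRESHOLD_S or press_force >= FORCE_THRESHOLD

assert meets_target_condition(0.6, 0.0)        # long press qualifies
assert meets_target_condition(0.1, 0.5)        # hard press qualifies
assert not meets_target_condition(0.1, 0.1)    # short, light press does not
```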
For example, referring to FIG. 12, in response to the second target expression 1202 in the virtual scene 1201 being dragged to the second target position 1203, the terminal displays the second target expression 1202 at the second target position 1203.
In some embodiments, a plurality of first virtual objects are displayed in the virtual scene. In response to the second target expression being dragged to a third target position, with the distance between any first virtual object and the third target position meeting a target condition, the terminal displays the second target expression at the second target position where that first virtual object is located. In this way, the terminal can display the second target expression above the first virtual object that meets the target condition, which can indicate that the second target expression is an expression made by that first virtual object. This enriches the display of the second target expression and helps the user convey information through it; since the drag operation is simple to perform, the efficiency of sending expressions, and thus of human-computer interaction, is improved.
For example, suppose one first virtual object is displayed in the virtual scene. In response to the second target expression being dragged to a third target position, the terminal determines the distance between the first virtual object and the third target position. In response to that distance being less than or equal to a distance threshold, the terminal determines that the distance between the first virtual object and the third target position meets the target condition, and displays the second target expression at the second target position where the first virtual object is located. From the user's perspective, when the second target expression is dragged near the first virtual object, it is displayed above that first virtual object. The above takes a single virtual object as an example; in other possible implementations, a plurality of first virtual objects are displayed in the virtual scene. In that case, in response to the second target expression being dragged to the third target position, the terminal determines the distances between the first virtual objects and the third target position, determines the first virtual object with the smallest distance to the third target position as the one whose distance meets the target condition, and displays the second target expression above it. The user can exploit this display mode to have the terminal display the second target expression above different first virtual objects, so as to convey different information.
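The nearest-object-within-threshold rule above can be sketched as follows; the data layout and function name are illustrative assumptions.

```python
import math

def object_for_drop(objects, drop_pos, distance_threshold):
    """Return the first virtual object above which a dropped expression should
    be displayed: the object nearest to the drag-end (third target) position,
    provided that distance is within the threshold; otherwise None."""
    def dist(obj):
        return math.hypot(obj["pos"][0] - drop_pos[0], obj["pos"][1] - drop_pos[1])

    nearest = min(objects, key=dist, default=None)
    if nearest is not None and dist(nearest) <= distance_threshold:
        return nearest
    return None

objs = [{"name": "a", "pos": (0, 0)}, {"name": "b", "pos": (10, 0)}]
assert object_for_drop(objs, (9, 0), 5)["name"] == "b"       # nearest, in range
assert object_for_drop(objs, (100, 100), 5) is None          # nothing in range
```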
In some embodiments, in response to the first virtual object moving in the virtual scene, the terminal adjusts the display position of the second target expression so that the second target expression is always displayed above the first virtual object. Because the terminal displays the second target expression above the first virtual object when it is dragged near that object, the second target expression moves together with the first virtual object as it moves through the virtual scene, achieving the effect that the expression stays above the object.
In some embodiments, a plurality of first virtual objects are displayed in the virtual scene. In response to the second target expression being dragged to the third target position, with the distances between at least two first virtual objects and the third target position meeting the target condition, the second target expression is displayed at the second target position where a second virtual object is located, the second virtual object being the first virtual object controlled by the terminal among the at least two first virtual objects.
For example, in response to the second target expression being dragged to the third target position, the terminal determines the distances between the first virtual objects in the virtual scene and the third target position. In response to the distances between at least two first virtual objects and the third target position being less than or equal to the distance threshold, the terminal determines that those distances meet the target condition and identifies, among the at least two first virtual objects, the second virtual object, that is, the first virtual object controlled by the terminal, or in other words by the player. The terminal displays the second target expression at the second target position where the second virtual object is located, achieving the effect of displaying the second target expression above the virtual object controlled by the terminal.
In some embodiments, in response to the second virtual object moving in the virtual scene, the terminal adjusts the display position of the second target expression so that the second target expression is always displayed above the second virtual object. Because the terminal displays the second target expression above the second virtual object when it is dragged near that object, the second target expression moves together with the second virtual object as it moves through the virtual scene, achieving the effect that the expression stays above the object.
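The follow-the-object behavior amounts to recomputing the expression's display position from the object's current position on every update. A minimal sketch, assuming an illustrative "above the head" offset vector:

```python
def expression_anchor(object_pos, offset=(0, -1.5)):
    """Keep an attached expression directly above its virtual object by
    recomputing the display position from the object's current position
    each frame (the offset is an assumed 'above the head' vector)."""
    return (object_pos[0] + offset[0], object_pos[1] + offset[1])

# As the object moves, the expression's display position moves with it.
positions = [(0, 0), (2, 0), (2, 3)]
trail = [expression_anchor(p) for p in positions]
assert trail == [(0, -1.5), (2, -1.5), (2, 1.5)]
```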
In some embodiments, in response to a selection operation on the second target expression among the plurality of first candidate expressions, the terminal controls the controlled virtual object to move to the second target position and displays the second target expression at the second target position. The controlled virtual object is the virtual object controlled by the user logged in on the terminal. "Control" here means "display": either the control process is performed by the server and the terminal displays the controlled virtual object performing the action, or the controlled virtual object is controlled directly by the terminal; the embodiments of this application do not limit this. Because the terminal can also move the controlled virtual object to the display position of the second target expression when displaying it, the display effect of the second target expression is enriched.
In some embodiments, a plurality of user avatars are displayed in the virtual scene. In response to a third target expression among the plurality of first candidate expressions being dragged to the position where a target user avatar is located, the terminal sends the third target expression to the terminal corresponding to the target user avatar, the target user avatar being any one of the plurality of user avatars. In this implementation, sending an expression to the target user (the user corresponding to the target user avatar) only requires dragging the third target expression among the first candidate expressions to the target user's avatar. Compared with first opening a chat window, invoking an expression selection panel in the window, selecting an expression in the panel, and then clicking the window's send control, the operation is simple and convenient and improves the efficiency of human-computer interaction.
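The drag-to-avatar send step can be sketched as a hit test followed by a send callback. The hit radius, data layout, and function names are illustrative assumptions, not details from the application.

```python
def avatar_hit(avatars, drop_pos, radius=1.0):
    """Return the avatar (if any) on whose position the expression was dropped."""
    for avatar in avatars:
        dx = avatar["pos"][0] - drop_pos[0]
        dy = avatar["pos"][1] - drop_pos[1]
        if dx * dx + dy * dy <= radius * radius:
            return avatar
    return None

def drop_expression(avatars, expression, drop_pos, send):
    """If the third target expression lands on a user avatar, send it to the
    terminal of the corresponding user via the provided `send` callback."""
    target = avatar_hit(avatars, drop_pos)
    if target is not None:
        send(target["user"], expression)
        return True
    return False

sent = []
avatars = [{"user": "alice", "pos": (5, 5)}]
assert drop_expression(avatars, "wave", (5.2, 5.1), lambda u, e: sent.append((u, e)))
assert sent == [("alice", "wave")]
```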
For example, in response to the third target expression being dragged to the position where the target user avatar is located, the terminal sends the third target expression to the terminal corresponding to the target user avatar, that is, displays the third target expression in the chat box of the virtual scene and sends it, with the target user's terminal as the recipient.
All of the above optional technical solutions can be combined arbitrarily to form optional embodiments of this application, which are not described in detail here one by one.
With the technical solution provided by the embodiments of this application, the user can trigger the display of the expression selection area by clicking the expression-adding icon, select an expression through the area, and have the selected second target expression displayed at the second target position in the virtual scene. Since the second target position is where the drag operation ends, adjusting the drag operation changes the display position of the second target expression; the operation is simple and convenient, and human-computer interaction is efficient. For example, when the user wants to send an expression in the virtual scene, choosing the desired expression in the displayed expression selection area and then dragging it is enough to send it. Compared with first opening a chat window, invoking an expression selection panel in the window, selecting an expression in the panel, and then clicking the window's send control, the operation is simple and convenient and improves the efficiency of human-computer interaction.
FIG. 13 is a schematic structural diagram of an apparatus for displaying an expression in a virtual scene according to an embodiment of this application. Referring to FIG. 13, the apparatus includes: a scene display module 1301, an area display module 1302, and an expression display module 1303.
The scene display module 1301 is configured to display a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene.
The area display module 1302 is configured to display, in response to a drag operation on the expression-adding icon, an expression selection area at a first target position in the virtual scene, the first target position being the position at which the drag operation ends, a plurality of first candidate expressions being displayed in the expression selection area.
The expression display module 1303 is configured to display, in response to a selection operation on a first target expression among the plurality of first candidate expressions, the first target expression in the virtual scene.
In some embodiments, the expression display module 1303 is configured to display, in response to the selection operation on the first target expression among the plurality of first candidate expressions, the first target expression at the first target position in the virtual scene.
In some embodiments, a controlled virtual object is displayed in the virtual scene, and the expression display module 1303 is configured to control, in response to the selection operation on the first target expression among the plurality of first candidate expressions, the controlled virtual object to move to the first target position.
In some embodiments, the expression display module 1303 is configured to play, in response to the selection operation on the first target expression among the plurality of first candidate expressions, the animation corresponding to the first target expression in the virtual scene.
In some embodiments, the expression selection area includes a first sub-area and a second sub-area, the first sub-area being used for displaying type icons of expressions, one type icon corresponding to a plurality of expressions. The expression display module 1303 is configured to display the first sub-area and the second sub-area at the first target position in the virtual scene, and to display, in response to a selection operation on a first type icon in the first sub-area, the plurality of first candidate expressions corresponding to the first type icon in the second sub-area.
In some embodiments, the apparatus further includes:
an expression switching module, configured to switch, in response to a selection operation on a second type icon in the first sub-area, the plurality of first candidate expressions displayed in the second sub-area to a plurality of second candidate expressions, the second candidate expressions being expressions corresponding to the second type icon.
In some embodiments, the expression selection area includes a plurality of sub-areas, and the plurality of first candidate expressions are respectively displayed in the plurality of sub-areas.
In some embodiments, the expression selection area is a circular area, each sub-area is a portion of the circular area, and type icons corresponding to the plurality of first candidate expressions are displayed at the center of the circular area.
In some embodiments, at least one first virtual object is displayed in the virtual scene, and the first candidate expressions are expressions corresponding to the first virtual objects.
In some embodiments, the apparatus further includes:
a control module, configured to control a target virtual object to perform an action corresponding to the first target expression, the target virtual object being the first virtual object that corresponds to the first target expression among the at least one first virtual object.
It should be noted that when the apparatus for displaying an expression in a virtual scene provided by the above embodiments displays an expression in a virtual scene, the division into the above functional modules is merely used as an example. In practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the computer device can be divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus provided by the above embodiments and the method embodiments for displaying an expression in a virtual scene belong to the same concept; for the specific implementation process, see the method embodiments, which are not repeated here.
With the technical solution provided by the embodiments of this application, the user can trigger the display of the expression selection area by dragging the expression-adding icon, select an expression through the area, and have the selected first target expression displayed at the first target position in the virtual scene. Since the first target position is where the drag operation ends, adjusting the drag operation changes the display position of the first target expression; the operation is simple and convenient, and human-computer interaction is efficient. For example, when the user wants to send an expression in the virtual scene, dragging the expression-adding icon and then choosing the desired first target expression in the displayed expression selection area is enough to send it. Compared with first opening a chat window, invoking an expression selection panel in the window, selecting an expression in the panel, and then clicking the window's send control, the operation is simple and convenient and improves the efficiency of human-computer interaction.
FIG. 14 is a schematic structural diagram of another apparatus for displaying an expression in a virtual scene according to an embodiment of this application. Referring to FIG. 14, the apparatus includes: a scene display module 1401, an area display module 1402, and an expression display module 1403.
The scene display module 1401 is configured to display a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene;
the area display module 1402 is configured to display, in response to a click operation on the expression-adding icon, the expression selection area in the virtual scene, a plurality of first candidate expressions being displayed in the expression selection area; and
the expression display module 1403 is configured to display, in response to a drag operation on a second target expression among the plurality of first candidate expressions, the second target expression at a second target position, the second target position being the position at which the drag operation ends.
In some embodiments, a plurality of first virtual objects are displayed in the virtual scene, and the expression display module 1403 is configured to perform either of the following:
displaying, in response to the second target expression being dragged to a third target position with the distance between any first virtual object and the third target position meeting a target condition, the second target expression at the second target position where that first virtual object is located; or
displaying, in response to the second target expression being dragged to the third target position with the distances between at least two first virtual objects and the third target position meeting the target condition, the second target expression at the second target position where a second virtual object is located, the second virtual object being the first virtual object controlled by the computer device among the at least two first virtual objects.
In some embodiments, a plurality of user avatars are displayed in the virtual scene, and the apparatus further includes:
an expression sending module, configured to send, in response to a third target expression among the plurality of first candidate expressions being dragged to the position where a target user avatar is located, the third target expression to the terminal corresponding to the target user avatar, the target user avatar being any one of the plurality of user avatars.
It should be noted that when the apparatus for displaying an expression in a virtual scene provided by the above embodiments displays an expression in a virtual scene, the division into the above functional modules is merely used as an example. In practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the computer device can be divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus provided by the above embodiments and the method embodiments for displaying an expression in a virtual scene belong to the same concept; for the specific implementation process, see the method embodiments, which are not repeated here.
With the technical solution provided by the embodiments of this application, the user can trigger the display of the expression selection area by clicking the expression-adding icon, select an expression through the area, and have the selected second target expression displayed at the second target position in the virtual scene. Since the second target position is where the drag operation ends, adjusting the drag operation changes the display position of the second target expression; the operation is simple and convenient, and human-computer interaction is efficient. For example, when the user wants to send an expression in the virtual scene, choosing the desired expression in the displayed expression selection area and then dragging it is enough to send it. Compared with first opening a chat window, invoking an expression selection panel in the window, selecting an expression in the panel, and then clicking the window's send control, the operation is simple and convenient and improves the efficiency of human-computer interaction.
An embodiment of this application provides a computer device for performing the above methods. The computer device can be implemented as a terminal; the structure of the terminal is introduced below:
FIG. 15 is a schematic structural diagram of a terminal according to an embodiment of this application. The terminal 1500 may be a smartphone, a tablet computer, a laptop computer, or a desktop computer. The terminal 1500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 1500 includes one or more processors 1501 and one or more memories 1502.
The processor 1501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1501 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor: the main processor is a processor for handling data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for handling data in the standby state. In some embodiments, a GPU (Graphics Processing Unit) may be integrated into the processor 1501, the GPU being responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1501 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1502 is used to store at least one computer program, the at least one computer program being executed by the processor 1501 to implement the method for displaying an expression in a virtual scene provided by the method embodiments of this application.
In some embodiments, the terminal 1500 may optionally further include a peripheral interface 1503 and at least one peripheral. The processor 1501, the memory 1502, and the peripheral interface 1503 may be connected through a bus or signal lines, and each peripheral may be connected to the peripheral interface 1503 through a bus, a signal line, or a circuit board. Specifically, the peripherals include at least one of a display screen 1505, an audio circuit 1507, and a power supply 1509.
The peripheral interface 1503 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, the memory 1502, and the peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, it also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard.
The audio circuit 1507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals input to the processor 1501 for processing.
The power supply 1509 is used to supply power to the components in the terminal 1500. The power supply 1509 may be alternating current, direct current, a disposable battery, or a rechargeable battery.
A person skilled in the art can understand that the structure shown in FIG. 15 does not constitute a limitation on the terminal 1500, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
An embodiment of this application further provides a computer-readable storage medium, for example a memory including a computer program, the computer program being executable by a processor to complete the method for displaying an expression in a virtual scene in the above embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
An embodiment of this application further provides a computer program product or computer program, including program code stored in a computer-readable storage medium. A processor of a computer device reads the program code from the computer-readable storage medium and executes it, causing the computer device to perform the above method for displaying an expression in a virtual scene.
A person of ordinary skill in the art can understand that all or some of the steps of the above embodiments may be completed by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely optional embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.
Claims (17)
- A method for displaying an expression in a virtual scene, performed by a computer device, the method comprising: displaying a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene; in response to a drag operation on the expression-adding icon, displaying an expression selection area at a first target position in the virtual scene, the first target position being a position at which the drag operation ends, a plurality of first candidate expressions being displayed in the expression selection area; and in response to a selection operation on a first target expression among the plurality of first candidate expressions, displaying the first target expression in the virtual scene.
- The method according to claim 1, wherein the displaying the first target expression in the virtual scene in response to a selection operation on a first target expression among the plurality of first candidate expressions comprises: in response to the selection operation on the first target expression among the plurality of first candidate expressions, displaying the first target expression at the first target position in the virtual scene.
- The method according to claim 1 or 2, wherein a controlled virtual object is displayed in the virtual scene, and the method further comprises: in response to the selection operation on the first target expression among the plurality of first candidate expressions, controlling the controlled virtual object to move to the first target position.
- The method according to claim 1, wherein the displaying the first target expression in the virtual scene in response to a selection operation on a first target expression among the plurality of first candidate expressions comprises: in response to the selection operation on the first target expression among the plurality of first candidate expressions, playing an animation corresponding to the first target expression in the virtual scene.
- The method according to claim 1, wherein the expression selection area comprises a first sub-area and a second sub-area, the first sub-area is used for displaying type icons of expressions, and one type icon corresponds to a plurality of expressions; and the displaying an expression selection area at a first target position in the virtual scene comprises: displaying the first sub-area and the second sub-area at the first target position in the virtual scene; and in response to a selection operation on a first type icon in the first sub-area, displaying, in the second sub-area, the plurality of first candidate expressions corresponding to the first type icon.
- The method according to claim 5, further comprising: in response to a selection operation on a second type icon in the first sub-area, switching the plurality of first candidate expressions displayed in the second sub-area to a plurality of second candidate expressions, the second candidate expressions being expressions corresponding to the second type icon.
- The method according to claim 1, wherein the expression selection area comprises a plurality of sub-areas, and the plurality of first candidate expressions are respectively displayed in the plurality of sub-areas.
- The method according to claim 7, wherein the expression selection area is a circular area, each sub-area is a portion of the circular area, and type icons corresponding to the plurality of first candidate expressions are displayed at the center of the circular area.
- The method according to any one of claims 1 to 8, wherein at least one first virtual object is displayed in the virtual scene, and the first candidate expressions are expressions corresponding to the first virtual object.
- The method according to claim 9, further comprising: controlling a target virtual object to perform an action corresponding to the first target expression, the target virtual object being a first virtual object that corresponds to the first target expression among the at least one first virtual object.
- A method for displaying an expression in a virtual scene, performed by a computer device, the method comprising: displaying a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene; in response to a click operation on the expression-adding icon, displaying an expression selection area in the virtual scene, a plurality of first candidate expressions being displayed in the expression selection area; and in response to a drag operation on a second target expression among the plurality of first candidate expressions, displaying the second target expression at a second target position, the second target position being a position at which the drag operation ends.
- The method according to claim 11, wherein a plurality of first virtual objects are displayed in the virtual scene; and the displaying the second target expression at a second target position in response to a drag operation on a second target expression among the plurality of first candidate expressions comprises either of the following: in response to the second target expression being dragged to a third target position and a distance between any first virtual object and the third target position meeting a target condition, displaying the second target expression at the second target position where that first virtual object is located; or, in response to the second target expression being dragged to the third target position and distances between at least two first virtual objects and the third target position meeting the target condition, displaying the second target expression at the second target position where a second virtual object is located, the second virtual object being a first virtual object controlled by the computer device among the at least two first virtual objects.
- The method according to claim 11, wherein a plurality of user avatars are displayed in the virtual scene; and the method further comprises: in response to a third target expression among the plurality of first candidate expressions being dragged to a position where a target user avatar is located, sending the third target expression to a terminal corresponding to the target user avatar, the target user avatar being any one of the plurality of user avatars.
- An apparatus for displaying an expression in a virtual scene, the apparatus comprising: a scene display module, configured to display a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene; an area display module, configured to display, in response to a drag operation on the expression-adding icon, an expression selection area at a first target position in the virtual scene, the first target position being a position at which the drag operation ends, a plurality of first candidate expressions being displayed in the expression selection area; and an expression display module, configured to display, in response to a selection operation on a first target expression among the plurality of first candidate expressions, the first target expression in the virtual scene.
- An apparatus for displaying an expression in a virtual scene, the apparatus comprising: a scene display module, configured to display a virtual scene, an expression-adding icon being displayed in the virtual scene, the expression-adding icon being used for adding an expression in the virtual scene; an area display module, configured to display, in response to a click operation on the expression-adding icon, the expression selection area in the virtual scene, a plurality of first candidate expressions being displayed in the expression selection area; and an expression display module, configured to display, in response to a drag operation on a second target expression among the plurality of first candidate expressions, the second target expression at a second target position, the second target position being a position at which the drag operation ends.
- A computer device, wherein the computer device comprises one or more processors and one or more memories, the one or more memories storing at least one computer program, the computer program being loaded and executed by the one or more processors to implement the method for displaying an expression in a virtual scene according to any one of claims 1 to 10, or the method for displaying an expression in a virtual scene according to any one of claims 11 to 13.
- A computer-readable storage medium, wherein at least one computer program is stored in the computer-readable storage medium, the computer program being loaded and executed by a processor to implement the method for displaying an expression in a virtual scene according to any one of claims 1 to 10, or the method for displaying an expression in a virtual scene according to any one of claims 11 to 13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023538694A JP2024500929A (ja) | 2021-05-26 | 2022-04-21 | Method and apparatus for displaying an expression in a virtual scene, computer device, and program |
US17/971,882 US20230048502A1 (en) | 2021-05-26 | 2022-10-24 | Method and apparatus for displaying expression in virtual scene |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110580625.X | 2021-05-26 | ||
CN202110580625.XA CN113144601B (zh) | 2021-05-26 | 2021-05-26 | Expression display method and apparatus in virtual scene, device, and medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/971,882 Continuation US20230048502A1 (en) | 2021-05-26 | 2022-10-24 | Method and apparatus for displaying expression in virtual scene |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022247536A1 true WO2022247536A1 (zh) | 2022-12-01 |
Family
ID=76877779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/088267 WO2022247536A1 (zh) | 2021-05-26 | 2022-04-21 | Method and apparatus for displaying expression in virtual scene, device, and medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230048502A1 (zh) |
JP (1) | JP2024500929A (zh) |
CN (1) | CN113144601B (zh) |
WO (1) | WO2022247536A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113144601B (zh) * | 2021-05-26 | 2023-04-07 | Tencent Technology (Shenzhen) Co., Ltd. | Expression display method and apparatus in virtual scene, device, and medium |
CN113996060B (zh) * | 2021-10-29 | 2024-10-15 | Tencent Technology (Chengdu) Co., Ltd. | Display picture adjustment method and apparatus, storage medium, and electronic device |
CN116204250A (zh) * | 2021-11-30 | 2023-06-02 | Tencent Technology (Shenzhen) Co., Ltd. | Session-based information display method, apparatus, device, medium, and program product |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160259526A1 (en) * | 2015-03-03 | 2016-09-08 | Kakao Corp. | Display method of scenario emoticon using instant message service and user device therefor |
CN107479784A (zh) * | 2017-07-31 | 2017-12-15 | Tencent Technology (Shenzhen) Co., Ltd. | Expression display method and apparatus, and computer-readable storage medium |
CN111010585A (zh) * | 2019-12-06 | 2020-04-14 | Guangzhou Huaduo Network Technology Co., Ltd. | Virtual gift sending method and apparatus, device, and storage medium |
CN111589128A (zh) * | 2020-04-23 | 2020-08-28 | Tencent Technology (Shenzhen) Co., Ltd. | Operation control display method and apparatus based on virtual scene |
CN112569611A (zh) * | 2020-12-11 | 2021-03-30 | Tencent Technology (Shenzhen) Co., Ltd. | Interactive information display method and apparatus, terminal, and storage medium |
CN113144601A (zh) * | 2021-05-26 | 2021-07-23 | Tencent Technology (Shenzhen) Co., Ltd. | Expression display method and apparatus in virtual scene, device, and medium |
-
2021
- 2021-05-26 CN CN202110580625.XA patent/CN113144601B/zh active Active
-
2022
- 2022-04-21 WO PCT/CN2022/088267 patent/WO2022247536A1/zh active Application Filing
- 2022-04-21 JP JP2023538694A patent/JP2024500929A/ja active Pending
- 2022-10-24 US US17/971,882 patent/US20230048502A1/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160259526A1 (en) * | 2015-03-03 | 2016-09-08 | Kakao Corp. | Display method of scenario emoticon using instant message service and user device therefor |
CN107479784A (zh) * | 2017-07-31 | 2017-12-15 | Tencent Technology (Shenzhen) Co., Ltd. | Expression display method and apparatus, and computer-readable storage medium |
CN111010585A (zh) * | 2019-12-06 | 2020-04-14 | Guangzhou Huaduo Network Technology Co., Ltd. | Virtual gift sending method and apparatus, device, and storage medium |
CN111589128A (zh) * | 2020-04-23 | 2020-08-28 | Tencent Technology (Shenzhen) Co., Ltd. | Operation control display method and apparatus based on virtual scene |
CN112569611A (zh) * | 2020-12-11 | 2021-03-30 | Tencent Technology (Shenzhen) Co., Ltd. | Interactive information display method and apparatus, terminal, and storage medium |
CN113144601A (zh) * | 2021-05-26 | 2021-07-23 | Tencent Technology (Shenzhen) Co., Ltd. | Expression display method and apparatus in virtual scene, device, and medium |
Also Published As
Publication number | Publication date |
---|---|
US20230048502A1 (en) | 2023-02-16 |
JP2024500929A (ja) | 2024-01-10 |
CN113144601A (zh) | 2021-07-23 |
CN113144601B (zh) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022247536A1 (zh) | Method and apparatus for displaying expression in virtual scene, device, and medium | |
JP2024524734A (ja) | Match live-broadcast display method and apparatus, computer device, and computer program | |
CN111905363A (zh) | Virtual object control method and apparatus, terminal, and storage medium | |
WO2023016165A1 (zh) | Virtual character selection method and apparatus, terminal, and storage medium | |
US20230330539A1 (en) | Virtual character control method and apparatus, device, storage medium, and program product | |
CN111589114B (zh) | Virtual object selection method and apparatus, terminal, and storage medium | |
CN113332716A (zh) | Virtual item processing method and apparatus, computer device, and storage medium | |
JP7377601B2 (ja) | Game program, game processing method, and information processing device | |
WO2023138175A1 (zh) | Card casting method, apparatus, device, storage medium, and program product | |
JP7559014B2 (ja) | Program, terminal, and game system | |
WO2023024880A1 (zh) | Expression display method and apparatus in virtual scene, device, and medium | |
US12083433B2 (en) | Virtual object control method and apparatus, terminal, and storage medium | |
CN114945915A (zh) | Information pushing method and apparatus, electronic device, and computer-readable medium | |
Santasärkkä | The Digital Games Industry and its Direct and Indirect Impact on the Economy. Case study: Supercell and Finland. | |
WO2024060914A1 (zh) | Virtual object generation method, apparatus, device, medium, and program product | |
WO2024104034A1 (zh) | Special-effect display method and apparatus, electronic device, and storage medium | |
WO2024103990A1 (zh) | Shop upgrade method, apparatus, device, and medium for turn-based chess game | |
WO2023226557A1 (zh) | Inter-account interaction method and apparatus, computer device, and storage medium | |
JP7482847B2 (ja) | Program, terminal, and game system | |
JP7532692B1 (ja) | Information processing system, program, and information processing method | |
JP7146052B1 (ja) | Game system, game program, and information processing method | |
JP7147015B2 (ja) | Betting service providing method and system utilizing game logs | |
WO2024103991A1 (zh) | Information display method, apparatus, terminal device, storage medium, and program product | |
JP7174125B2 (ja) | Control program, control method, computer, and terminal device | |
WO2023160056A1 (zh) | Virtual character processing method, apparatus, device, storage medium, and program product | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22810269 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023538694 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17-04-2024) |