CN113893560A - Information processing method, device, equipment and storage medium in virtual scene - Google Patents


Info

Publication number
CN113893560A
CN113893560A (application CN202111195318.6A; granted publication CN113893560B)
Authority
CN
China
Prior art keywords
instant message
user
information
marking
instant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111195318.6A
Other languages
Chinese (zh)
Other versions
CN113893560B (en)
Inventor
钱杉杉
梁皓辉
林琳
高昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111195318.6A priority Critical patent/CN113893560B/en
Publication of CN113893560A publication Critical patent/CN113893560A/en
Application granted granted Critical
Publication of CN113893560B publication Critical patent/CN113893560B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
    • A63F13/30 Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 For prompting the player, e.g. by displaying a game menu
    • A63F13/537 Using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5375 For graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 By computing conditions of game characters, e.g. stamina, strength, motivation or energy level

Abstract

This application relates to an information processing method, apparatus, device, and storage medium in a virtual scene, in the technical field of virtual scenes. The method includes: displaying a scene picture of a virtual scene, the scene picture containing user identifiers of at least two users corresponding to the virtual scene; in response to a first instant message corresponding to a first user of the at least two users, displaying the first instant message in association with the user identifier of the first user; and in response to receiving a marking trigger operation for the first instant message, performing marking processing on the first instant message, the marking processing being used to strengthen the reminding effect of the first instant message. In this way, user interaction with instant messages is no longer limited to passive viewing; the available interaction modes are enriched and the prompting effect of instant messages for related events is improved.

Description

Information processing method, device, equipment and storage medium in virtual scene
Technical Field
The embodiments of this application relate to the technical field of virtual scenes, and in particular to an information processing method, apparatus, device, and storage medium in a virtual scene.
Background
Applications that present a virtual scene usually provide an information prompt function to improve information interaction between the virtual scene interface and the user.
In the related art, when an important event occurs in the virtual scene, the user can be reminded of the event through an instant message; after the instant message has been displayed for a period of time, its display is cancelled to keep the interface uncluttered.
However, in the related art the display mode of instant messages is comparatively limited, so their prompting effect for related events is poor.
Disclosure of Invention
The embodiments of this application provide an information processing method, apparatus, device, and storage medium in a virtual scene, which enrich the ways a user can interact with an instant message and improve the prompting effect of the instant message for related events. The technical solution is as follows:
in one aspect, an information processing method in a virtual scene is provided, where the method includes:
displaying a scene picture of a virtual scene, the scene picture containing user identifiers of at least two users corresponding to the virtual scene;
in response to a first instant message corresponding to a first user of the at least two users, displaying the first instant message in association with the user identifier of the first user;
in response to receiving a marking trigger operation for the first instant message, performing marking processing on the first instant message, the marking processing being used to strengthen the reminding effect of the first instant message.
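The three claimed steps can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; all class and method names (`ScenePanel`, `display_message`, `on_mark_trigger`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class InstantMessage:
    """An instant message displayed beside a user's identifier."""
    user_id: str
    text: str
    marked: bool = False

class ScenePanel:
    def __init__(self, user_ids):
        self.user_ids = list(user_ids)   # at least two users in the scene
        self.messages = {}               # user_id -> InstantMessage

    def display_message(self, user_id, text):
        # step 2: show the message in association with the user's identifier
        msg = InstantMessage(user_id, text)
        self.messages[user_id] = msg
        return msg

    def on_mark_trigger(self, user_id):
        # step 3: marking processing strengthens the message's reminding effect
        self.messages[user_id].marked = True

panel = ScenePanel(["user_a", "user_b"])     # step 1: scene with two users
panel.display_message("user_a", "won a round")
panel.on_mark_trigger("user_a")
```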
In another aspect, an information processing apparatus in a virtual scene is provided, the apparatus including:
a picture display module, configured to display a scene picture of a virtual scene, the scene picture containing user identifiers of at least two users corresponding to the virtual scene;
an information display module, configured to display, in response to a first instant message corresponding to a first user of the at least two users, the first instant message in association with the user identifier of the first user;
a mark processing module, configured to perform marking processing on the first instant message in response to receiving a marking trigger operation for the first instant message, the marking processing being used to strengthen the reminding effect of the first instant message.
In one possible implementation manner, the tag processing module includes:
a control display sub-module, configured to display an information operation control in response to receiving the marking trigger operation for the first instant message;
a marking processing sub-module, configured to, in response to receiving a trigger operation on the information operation control, perform marking processing on the first instant message in the marking mode corresponding to that control.
In a possible implementation, the information operation control includes a public mark control and a non-public mark control;
marking processing performed in the mode corresponding to the public mark control makes the first instant message visible to the at least two users;
marking processing performed in the mode corresponding to the non-public mark control makes the first instant message visible only to a target user, the target user being the user who performed the marking trigger operation.
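The visibility rule for the two controls can be sketched as below; the mode strings and the helper name `visible_to` are illustrative assumptions, not terms from the patent.

```python
def visible_to(mark_mode, marker_id, all_user_ids):
    """Return the set of users who can see a marked message.
    'public' and 'private' are illustrative mode names."""
    if mark_mode == "public":
        return set(all_user_ids)          # visible to the at least two users
    if mark_mode == "private":
        return {marker_id}                # visible only to the marking user
    raise ValueError(f"unknown mark mode: {mark_mode}")
```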
In one possible implementation manner, the tag processing module includes:
a chat panel display sub-module, configured to display a chat panel in response to receiving a trigger operation on the public mark control;
a marked content display sub-module, configured to publish marked content corresponding to the first instant message to the chat panel.
In one possible implementation manner, the tag processing module includes:
a candidate expression display sub-module, configured to display at least two candidate expressions in response to receiving a trigger operation on the public mark control;
a target expression display sub-module, configured to display, in response to receiving a trigger operation on a target expression, the target expression at the display position of the first instant message, the target expression being any one of the at least two candidate expressions.
In a possible implementation manner, the tag processing module further includes:
an identity display sub-module, configured to display, in association with the display position of the target expression, the identity of the user who selected the target expression.
In a possible implementation manner, the tag processing module further includes:
a first display cancellation sub-module, configured to cancel display of the target expression in response to the display duration of the first instant message reaching a first duration.
In a possible implementation, the first instant message corresponds to a default display duration, and the mark processing module is configured to, in response to receiving a trigger operation on the non-public mark control, fixedly display the first instant message at the display position corresponding to the user identifier of the first user (overriding the default display duration).
In a possible implementation manner, the tag processing module further includes:
an identifier display sub-module, configured to display an information identifier corresponding to the first instant message, the information identifier indicating that the first instant message is fixedly displayed.
In a possible implementation manner, the tag processing module further includes:
a second display cancellation sub-module, configured to cancel the fixed display of the first instant message in response to receiving a trigger operation on the information identifier.
In a possible implementation manner, the tag processing module further includes:
a third display cancellation sub-module, configured to, in response to a second instant message being displayed in association with the user identifier of a second user of the at least two users, fixedly display the second instant message at the display position corresponding to the user identifier of the second user and cancel the fixed display of the first instant message.
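The fixed-display behaviour in the last few implementations, including the rule that pinning a second message cancels the first pin, can be sketched as follows. All names are hypothetical, and "at most one pinned message" is an assumption drawn from the described behaviour.

```python
class PinnedMessages:
    """At most one instant message is fixedly displayed at a time:
    pinning a second user's message cancels the first pin, and a pin
    can also be cancelled via its information identifier."""

    def __init__(self):
        self.pinned = None       # (user_id, text) of the pinned message

    def pin(self, user_id, text):
        # fixedly display this message, replacing any earlier pin
        self.pinned = (user_id, text)

    def unpin(self):
        # e.g. triggered by tapping the information identifier
        self.pinned = None

pins = PinnedMessages()
pins.pin("user_a", "first message")
pins.pin("user_b", "second message")   # cancels the fixed display of the first
```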
In one possible implementation, the marking trigger operation is a long-press operation on the first instant message.
In another aspect, a computer device is provided, including a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the information processing method in a virtual scene described above.
In another aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the information processing method in a virtual scene described above.
In another aspect, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the storage medium and executes them, causing the computer device to execute the information processing method in a virtual scene provided in the various optional implementations above.
The technical scheme provided by the application can comprise the following beneficial effects:
When an instant message is displayed in association with a user in the scene picture of a virtual scene, marking processing is performed on the message in response to a marking trigger operation, strengthening the message's reminding effect. User interaction with instant messages is therefore no longer limited to passive viewing: interaction modes are enriched, the prompting effect of instant messages for related events is improved, and interaction between the user and the scene picture of the virtual scene is improved as well.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of a game play screen provided in an exemplary embodiment of the present application;
FIG. 2 illustrates a block diagram of a computer system provided by an exemplary embodiment of the present application;
FIG. 3 illustrates a schematic diagram of a state synchronization technique shown in an exemplary embodiment of the present application;
FIG. 4 illustrates a schematic diagram of a frame synchronization technique shown in an exemplary embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for processing information in a virtual scene according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 7 is a flowchart illustrating a method of processing information in a virtual scene according to an exemplary embodiment of the present application;
FIG. 8 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 9 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 10 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 11 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 12 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 13 is a diagram illustrating a scene screen of a virtual scene in accordance with an exemplary embodiment of the present application;
FIG. 14 shows a flow chart of a method of information processing in a virtual scene shown in an exemplary embodiment of the present application;
FIG. 15 is a block diagram illustrating an information processing apparatus in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 16 is a block diagram illustrating the structure of a computer device in accordance with an exemplary embodiment;
FIG. 17 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as recited in the appended claims.
The application provides an information processing method in a virtual scene, which can improve interaction efficiency among users. For ease of understanding, several terms referred to in this application are explained below.
1) Virtual scene: the virtual scene that an application displays (or provides) when running on a terminal. The virtual scene may be a simulation of a real-world environment, a semi-simulated and semi-fictional environment, or a purely fictional environment. It may be two-dimensional, 2.5-dimensional, or three-dimensional; the following embodiments use a three-dimensional virtual scene as an example, without limitation.
2) Auto chess ("self-propelled chess"): a chess-like game mode in which pieces are arranged before a round of battle begins and then fight automatically according to the preset arrangement during the round. The "pieces" are usually represented by virtual characters, which automatically cast various skills while fighting. A match is usually turn-based; when all the pieces of one side have fallen (i.e., their virtual characters' health values have dropped to zero), that side loses the round. In some embodiments, besides the piece characters that do the fighting, each side also has a virtual character representing the user participating in the match. This character does not move into the battle area or the reserve area like a piece, but it also has a health value (or blood volume), which decreases when a round is lost and stays unchanged when a round is won; when a user character's health reaches zero, that user is eliminated from the match and the remaining users continue.
Chessboard: the area of the auto-chess battle interface used for preparation and combat. The board may be a two-dimensional, 2.5-dimensional, or three-dimensional virtual board, which is not limited in this application.
The board is divided into a battle area and a reserve area. The battle area contains several equally sized battle grids, in which pieces are placed to fight during the combat phase; the reserve area contains several reserve grids for holding reserve pieces, which do not participate in combat and can be dragged into the battle area during the preparation phase. The embodiments of this application take as an example the case where the fighting piece characters include both the piece characters located in the battle area and those located in the reserve area.
Regarding the arrangement of the grids in the battle area: in some embodiments, the battle area contains n (rows) × m (columns) battle grids, where, schematically, n is an integer multiple of 2 and two adjacent rows of grids are either aligned or staggered. The battle area is divided by rows into two parts, an own-side battle area and an enemy battle area, with the users participating in the match positioned at the top and bottom of the battle interface; during the preparation phase, a user may place pieces only in the own-side battle area. In other embodiments, the battle area is divided by columns into an own-side area and an enemy area, with the participating users positioned at the left and right of the battle interface. A battle grid may be square, rectangular, circular, or hexagonal; the grid shape is not limited in the embodiments of this application.
In some embodiments the battle grids are always displayed on the board; in other embodiments the grids are displayed while the user is laying out pieces and hidden once the pieces have been placed.
Schematically, FIG. 1 shows a game match picture provided by an exemplary embodiment of this application. As shown in FIG. 1, the board 11 in the match interface includes a battle area 111 and a reserve area 112; the battle area 111 contains 3 × 7 hexagonal battle grids with adjacent rows staggered, and the reserve area 112 contains 9 reserve grids.
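The row-wise split of an n × m battle area described above can be sketched as follows; the helper `split_battle_area` and its return shape are illustrative, and n is assumed even, as the text states.

```python
def split_battle_area(n_rows, m_cols):
    """Split an n x m battle area by rows into an own-side half and an
    enemy half; n is assumed to be an integer multiple of 2."""
    assert n_rows % 2 == 0, "the text takes n as an integer multiple of 2"
    half = n_rows // 2
    own = [(r, c) for r in range(half) for c in range(m_cols)]
    enemy = [(r, c) for r in range(half, n_rows) for c in range(m_cols)]
    return own, enemy

own_half, enemy_half = split_battle_area(6, 7)   # e.g. a 6 x 7 battle area
```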
3) Virtual characters in the auto-chess game: the pieces placed on the board, including the fighting piece characters and the candidate piece characters in the candidate list (i.e., the candidate pieces in the virtual shop); the fighting piece characters include those located in the battle area and those located in the reserve area. A virtual character may be a virtual chess piece, a virtual person, a virtual animal, an animated character, and so on, and may be displayed using a three-dimensional model. A candidate piece character can be combined with the user's fighting piece characters to trigger gain (buff) effects, or can participate in the match on its own as an ordinary piece character.
Optionally, the positions of the fighting piece characters on the board can be changed. During the preparation phase, the user can adjust the positions of pieces within the battle area, adjust the positions of pieces within the reserve area, move pieces from the battle area to the reserve area (when the reserve area has free grids), or move pieces from the reserve area to the battle area. Note that the positions of pieces in the reserve area can also be adjusted during the combat phase.
Optionally, during the combat phase a piece character may occupy positions in the battle area different from those of the preparation phase. For example, during combat a piece may automatically move from the own-side half into the enemy half to attack enemy pieces, or may automatically move from position A to position B within the own-side half.
Furthermore, during the preparation phase, pieces can be arranged only in the own-side battle area, and the pieces arranged by the enemy are not visible on the board.
As for acquiring fighting piece characters: in some embodiments, over the course of a match, the user can purchase piece characters with virtual currency during the preparation phase.
Note that in some embodiments virtual characters, which may be virtual persons, virtual animals, cartoon characters, and so on, are used to represent the users participating in a match; the following embodiments call such characters player virtual characters or user virtual characters.
Schematically, as shown in FIG. 1, a first fighting piece character 111a, a second fighting piece character 111b, and a third fighting piece character 111c are displayed in the battle area 111, and a first reserve piece character 112a, a second reserve piece character 112b, and a third reserve piece character 112c are placed in the reserve area 112. A player virtual character 113 is displayed beside the battle area and the reserve area.
Attributes: each piece character in the auto-chess game has its own attributes, which include at least two of the following: the faction the piece belongs to (e.g., A league, B league, middle school, etc.), the piece's profession (e.g., soldier, shooter, mage, assassin, guard, swordsman, gunner, fighter, etc.), the piece's attack type (e.g., magical, physical, etc.), the piece's identity (e.g., noble, demon, sprite, etc.), and so on. The exemplary embodiments of this application do not limit the specific types of attributes.
Optionally, each piece character has attributes in at least two dimensions, and equipment carried by a piece can improve its attributes.
Optionally, within the battle area, when different pieces have associated attributes (the same attribute, or attributes of complementary types) and their number reaches a count threshold (also called a "bond"), the pieces having that attribute, or all pieces in the battle area, gain a buff effect corresponding to the attribute. For example, when the battle area contains 2 piece characters with the warrior attribute at the same time, all fighting pieces gain a 10% defense bonus; when it contains 4 piece characters with the fighter attribute at the same time, all fighting pieces gain a 20% defense bonus; and when it contains 3 piece characters with the sprite attribute at the same time, all fighting pieces gain a 20% evasion bonus.
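The count-threshold ("bond") buffs in the examples above can be sketched as follows; the trait table simply restates the three examples from the text, and all names are illustrative.

```python
from collections import Counter

# The three examples above, restated as trait -> {count threshold: buff}
TRAIT_BUFFS = {
    "warrior": {2: "+10% defense"},
    "fighter": {4: "+20% defense"},
    "sprite":  {3: "+20% evasion"},
}

def active_buffs(traits_on_board):
    """Return the buff granted for each trait whose piece count reaches
    a threshold, taking the highest threshold reached."""
    counts = Counter(traits_on_board)
    buffs = {}
    for trait, thresholds in TRAIT_BUFFS.items():
        reached = [n for n in thresholds if counts[trait] >= n]
        if reached:
            buffs[trait] = thresholds[max(reached)]
    return buffs
```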
Note that the information processing method in a virtual scene shown in this application can be applied to any scene with instant message display, such as an auto-chess match scene, a multiplayer online battle arena scene, a solo-queue scene, a live-broadcast interaction scene, and so on. The auto-chess match scene is used below as the example for describing the method.
FIG. 2 shows a block diagram of a computer system provided by an exemplary embodiment of this application. The computer system includes a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 has an auto-chess game application installed and running. The first terminal 120 is the terminal used by the first user: during the preparation phase of a round, the first user uses the first terminal 120 to arrange piece characters in the battle area of the board, and during the combat phase the first terminal 120 automatically controls the pieces to fight according to their attributes, skills, and arrangement in the battle area.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of: one server, multiple servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, the memory 142 including a receiving module 1421, a control module 1422, and a sending module 1423. The server 140 provides background services for the auto-chess game application, such as a picture rendering service for the game. Illustratively, the receiving module 1421 receives the piece layout information sent by a client; the control module 1422 controls the pieces to fight automatically according to that layout information; and the sending module 1423 sends the battle result back to the client. Alternatively, the server 140 undertakes the primary computing work while the first terminal 120 and the second terminal 160 undertake the secondary computing work; or the server 140 undertakes the secondary computing work while the terminals undertake the primary computing work; or the server 140, the first terminal 120, and the second terminal 160 cooperate using a distributed computing architecture.
The server 140 may employ a synchronization technique to keep the visual presentation consistent across multiple clients. Illustratively, the synchronization technique employed by the server 140 is a state synchronization technique or a frame synchronization technique.
State synchronization technique: in an alternative embodiment based on fig. 2, the server 140 uses a state synchronization technique to synchronize with multiple clients. Fig. 3 shows a schematic diagram of the state synchronization technique according to an exemplary embodiment of the present application; as shown in fig. 3, the combat logic runs in the server 140. When the state of a chess piece on the battle chessboard changes, the server 140 sends the state synchronization result to all clients, such as clients 1 to 10.
In an illustrative example, client 1 sends a request to the server 140, where the request carries the chess pieces participating in a round and their layout; the server 140 generates the in-game state of the chess pieces according to the pieces and their layout, and sends that state to client 1. The server 140 then sends the same state data to all clients, and each client updates its local data and interface presentation accordingly.
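The state-synchronization flow above can be sketched as follows. This is a minimal illustration, not the application's implementation; the class name, the placeholder combat rule, and the list-based client inboxes are assumptions made for the example.

```python
class StateSyncServer:
    """Authoritative server: combat logic runs here, and every state
    change is broadcast to all connected clients."""

    def __init__(self):
        self.clients = []       # connected client inboxes
        self.piece_states = {}  # piece_id -> authoritative state

    def connect(self, client_inbox):
        self.clients.append(client_inbox)

    def handle_request(self, piece_id, layout):
        # Combat logic on the server derives the piece's in-game state
        # from the requested layout (placeholder rule for illustration).
        state = {"piece": piece_id, "layout": layout, "hp": 100}
        self.piece_states[piece_id] = state
        # Broadcast the state-sync result to ALL clients so each one
        # updates its local data and interface presentation.
        for inbox in self.clients:
            inbox.append(state)


client1, client2 = [], []
server = StateSyncServer()
server.connect(client1)
server.connect(client2)
server.handle_request("knight", layout=(2, 3))
```

After the request, every client inbox holds the same authoritative state, which is the defining property of state synchronization: clients render, the server decides.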
Frame synchronization technique: in an alternative embodiment based on fig. 2, the server 140 uses a frame synchronization technique to synchronize with multiple clients. Fig. 4 shows a schematic diagram of the frame synchronization technique according to an exemplary embodiment of the present application; as shown in fig. 4, the combat logic runs in each client. Each client sends a frame synchronization request to the server, where the request carries the client's local data changes. After receiving a frame synchronization request, the server 140 forwards it to all clients. After receiving the forwarded request, each client processes it according to its local combat logic and updates its local data and interface presentation.
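By contrast, the frame-synchronization flow can be sketched as follows. Again a minimal illustration under assumed names: the server is a pure relay, and each client runs the same combat logic on the forwarded frames.

```python
class FrameSyncServer:
    """Relay server: forwards each frame-sync request to every client;
    combat logic runs locally in each client, not on the server."""

    def __init__(self):
        self.clients = []

    def connect(self, client):
        self.clients.append(client)

    def on_frame_request(self, frame):
        # The server does no game computation; it only forwards.
        for client in self.clients:
            client.apply_frame(frame)


class Client:
    def __init__(self):
        self.position = 0  # local combat state

    def apply_frame(self, frame):
        # Local combat logic processes the forwarded request and
        # updates local data / interface presentation.
        self.position += frame["move"]


a, b = Client(), Client()
server = FrameSyncServer()
server.connect(a)
server.connect(b)
server.on_frame_request({"move": 2})  # one client's local change, relayed
```

Because every client applies the same frame stream with the same deterministic logic, all local states stay consistent without the server holding game state.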
The second terminal 160 is connected to the server 140 through a wireless network or a wired network.
The second terminal 160 has the auto chess game application installed and running. The second terminal 160 is a terminal used by a second user. During the preparation stage of a game, the second user uses the second terminal 160 to arrange chess pieces in the battle area of the chessboard; during the battle stage, the second terminal 160 automatically controls the chess pieces to fight according to their attributes, skills, and arrangement in the battle area.
Optionally, the chess pieces laid out by the first user through the first terminal 120 and by the second user through the second terminal 160 are located in different areas of the same chessboard, i.e., the first user and the second user are in the same game.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or are the same type of application on different operating system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to another; this embodiment is illustrated with only the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include at least one of a smartphone, a tablet computer, an e-book reader, a digital media player, a laptop computer, and a desktop computer.
Those skilled in the art will appreciate that the number of terminals may be greater or fewer. For example, there may be only one terminal (i.e., the user plays against artificial intelligence), or there may be 8 terminals (a 1v1v1v1v1v1v1v1 mode in which 8 users play round-robin matches with elimination until a winner is determined), or more. The number of terminals and the device types are not limited in the embodiments of the present application.
Fig. 5 shows a flowchart of an information processing method in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a server or a terminal; illustratively, the computer device may be implemented as the server or the terminal shown in fig. 2. As shown in fig. 5, the method may include the following steps:
step 510, displaying a scene picture of the virtual scene, where the scene picture includes user identifications of at least two users corresponding to the virtual scene.
When the information processing method in the virtual scene provided by the present application is executed by the server, the step of displaying the scene picture of the virtual scene may be implemented as the server controlling a target terminal to display the scene picture of the virtual scene.
The scene picture of the virtual scene may be one of at least two match pictures, where each match picture may be a picture of a match between the camps corresponding to two users, and each user corresponds to a respective user identifier. That is, the at least two user identifiers included in the scene picture of the virtual scene are used to indicate the users in the respective camps in the virtual scene.
Step 520, in response to a first user of the at least two users corresponding to a first instant message, display the first instant message corresponding to the user identifier of the first user.
For example, the first instant message may be an instant message triggered by a change in information related to the first user. In an auto chess scenario, such changes may include the first user's level upgrading, a virtual chess piece controlled by the user being upgraded to a higher star level, the user combining equipment, and the like. The present application does not limit the type, content, or form of the message.
In a possible implementation, the display position of the first instant message corresponds to the display position of the first user's user identifier in the scene picture of the virtual scene. Fig. 6 shows a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 6, the scene picture is an auto chess match picture and includes a user identifier display area 610, in which at least two user identifiers are displayed, each corresponding to a different user participating in the match in the virtual scene. A first instant message 630 is displayed for the user identifier 620 corresponding to the first user (i.e., the user identifier corresponding to user 4 in fig. 6); the first instant message indicates the star-up information of a virtual chess piece corresponding to user 4. Illustratively, the star-up information may include the image information 631 and the star-level information 632 of the virtual chess piece shown in fig. 6; alternatively, it may be expressed as text or other image information, which is not limited in this application. The display position of the first instant message 630 corresponds to the display position of the user identifier 620 of the first user; as shown in fig. 6, it may be adjacent or close to the display position of the user identifier 620, indicating that the first instant message represents a change in information related to the first user.
Step 530, in response to receiving the marking trigger operation for the first instant message, marking the first instant message; the marking process is used for enhancing the reminding capability of the first instant message.
In a possible implementation, after a marking trigger operation on the first instant message is received, the first instant message is marked directly based on a default marking mode; for example, after a long-press operation on the first instant message is received, the message is sent directly to the chat panel, or is fixedly displayed by default, where the default marking mode may be set based on user requirements. Alternatively, after the marking trigger operation is received, selectable marking modes are displayed to the user who performed the operation, and the first instant message is marked based on the marking mode subsequently selected by the user.
To sum up, according to the information processing method in a virtual scene provided in the embodiment of the present application, when instant messages are displayed in the scene picture corresponding to users in the virtual scene, an instant message is marked based on a marking trigger operation on it, strengthening its reminding capability. As a result, interaction between the user and instant messages is no longer limited to passive viewing, the interaction modes are enriched, the prompting effect of instant messages for related events is improved, and the interaction between the user and the scene picture of the virtual scene is also improved.
The embodiment below describes the information processing method in a virtual scene provided by the present application, based on the example in which, after a marking trigger operation on the first instant message is received, selectable marking modes are displayed to the user who performed the operation, and the first instant message is marked based on the marking mode subsequently selected by the user. Fig. 7 shows a flowchart of the information processing method in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a server or a terminal; illustratively, the computer device may be implemented as the server or the terminal shown in fig. 2. As shown in fig. 7, the method may include the following steps:
step 710, displaying a scene picture of the virtual scene, where the scene picture includes user identifications of at least two users corresponding to the virtual scene.
For example, taking auto chess as an example, the scene picture of the virtual scene may be a picture of a match between two camps. The two camps may be two user camps, where a user camp means that the virtual objects in the camp are controlled by the corresponding user; or one user camp and one AI (Artificial Intelligence) camp, where the virtual objects in the AI camp are a virtual object or a set of virtual objects generated based on a preset application program; or two AI camps.
One or more match pictures may be displayed in the scene picture of a virtual scene, and the at least two user identifiers included in the scene picture are used to indicate the users in the respective matches in the virtual scene. As shown in fig. 6, the scene picture includes 8 user identifiers and one match picture; however, depending on the match combinations, the virtual scene may include 4 to 8 matches. Optionally, the at least two user identifiers are sorted based on an attribute value of a designated attribute of each user, where the designated attribute may be the user's health value, score, elimination count, virtual currency amount, amount of virtual gifts received, and the like.
Step 720, in response to a first user of the at least two users corresponding to a first instant message, display the first instant message corresponding to the user identifier of the first user.
Optionally, the first instant message may be a message with an effective duration; that is, when the display duration of the first instant message reaches a first duration, the message disappears. Schematically, if the first duration is 3 seconds, then in the general case the first instant message disappears after being displayed for 3 seconds.
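The effective-duration behavior can be sketched as follows. The class name, the clock handling, and the `pinned` flag (anticipating the fixed display control described later in this embodiment) are illustrative assumptions, not the application's implementation.

```python
import time


class InstantMessage:
    """Instant message that disappears once its display duration
    reaches the effective (first) duration, unless fixedly displayed."""

    def __init__(self, content, effective_duration=3.0, now=None):
        self.content = content
        self.effective_duration = effective_duration
        # Monotonic clock avoids jumps from wall-clock adjustments.
        self.shown_at = time.monotonic() if now is None else now
        self.pinned = False  # fixed display overrides the timeout

    def is_visible(self, now):
        if self.pinned:
            return True
        return (now - self.shown_at) < self.effective_duration


msg = InstantMessage("user 4's piece reached three stars",
                     effective_duration=3.0, now=0.0)
```

With a 3-second first duration, the message is visible at 2.9 s, gone at 3.0 s, and kept indefinitely once pinned.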
Step 730, in response to receiving the mark triggering operation for the first instant message, displaying an information operation control.
In one possible implementation, the marking trigger operation is a long-press operation on the first instant message; alternatively, it may be implemented as a double-click, triple-click, selection operation, or the like on the first instant message. The implementation form of the marking trigger operation may be set by relevant personnel based on actual requirements, which is not limited in this application.
There may be one or more information operation controls. Based on how many users can see the marked content, they can be divided into public marking controls and non-public marking controls; each type may include at least one marking control, and each marking control corresponds to a respective marking mode, so that triggering different marking controls marks the first instant message in different ways. In the embodiment of the present application, the information operation controls include at least one of a public marking control and a non-public marking control. When the first instant message is marked based on the marking mode of the public marking control, the mark is visible to the at least two users; when it is marked based on the marking mode of the non-public marking control, the mark is visible only to the target user, i.e., the user who performed the marking trigger operation.
In response to receiving a trigger operation on a first operation control among the public marking controls, the first instant message is marked based on the marking mode of the first operation control, where the first operation control is the control selected by the user among the information operation controls. That is, marked content generated based on a public marking control is sent to the users corresponding to the at least two user identifiers and displayed in each user's terminal interface, while marked content generated based on a non-public marking control is sent only to the target user and displayed in the terminal interface of the target user's terminal.
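The visibility routing described above can be sketched as a single dispatch function. The mark-kind names ("chat", "emoticon", "pin") are assumed labels for the controls introduced in this embodiment, not identifiers from the application.

```python
def route_mark(mark_kind, mark_content, all_users, target_user):
    """Decide which users receive the marked content.

    Public marks (chat / emoticon) go to every user in the match;
    non-public marks (fixed display, here "pin") go only to the
    target user who performed the marking trigger operation.
    """
    if mark_kind in ("chat", "emoticon"):   # public marking controls
        recipients = list(all_users)
    elif mark_kind == "pin":                # non-public marking control
        recipients = [target_user]
    else:
        raise ValueError(f"unknown mark kind: {mark_kind}")
    return {user: mark_content for user in recipients}


users = ["user1", "user2", "user3", "user4"]
public = route_mark("chat", "hero 1 to three stars", users, "user1")
private = route_mark("pin", "hero 1 to three stars", users, "user1")
```

The same message content flows through both branches; only the recipient set differs, which is exactly the public/non-public distinction.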
Two possible public marking controls are provided in the embodiments of the present application, a chat marking control and an expression marking control, together with one non-public marking control, a fixed display control. It should be noted that relevant personnel may set other types of marking controls based on the principle of public and non-public marks; for example, a public marking control may be implemented as a bullet-screen marking control that sends a bullet-screen comment related to the first instant message across the virtual scene picture.
Fig. 8 is a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 8, a first instant message 810 is displayed corresponding to user 4, and in response to receiving the user's long-press operation on the first instant message 810, three information operation controls are displayed: a chat marking control 820 and an expression marking control 830, which are public marking controls, and a fixed display control 840, which is a non-public marking control. Each information operation control corresponds to a respective marking mode, so the user can mark the first instant message in different ways by triggering different controls.
Step 740, in response to receiving the trigger operation on the information operation control, performing a marking process on the first instant message based on a marking mode corresponding to the information operation control.
When the chat marking control among the public marking controls is selected by the user, the process of marking the first instant message may be implemented as follows:
in response to receiving a trigger operation on the public mark control, displaying a chat panel;
and publishing the mark content corresponding to the first instant message to the chat panel.
The chat panel is visible to every user in the corresponding virtual scene and may be called a public screen or a world channel; in the chat panel, the messages sent and the marked content published by each user are visible to all users.
Alternatively, the chat panel may be a default chat panel preset by the target user. For example, if the chat panels to which the target user can send messages include the public screen, group A, group B, and group C, and the target user sets group A as the default chat panel, then the chat panel of group A is displayed when a trigger operation on the chat marking control is received.
The marked content corresponding to the first instant message is generated automatically based on the target user's trigger operation on the chat marking control, and is sent to the chat panel automatically; the target user does not need to perform any message-sending operation. In this process, when the marked content is sent to the chat panel based on the target user's trigger operation on the first instant message, the target user's terminal sends the marked content to the server, and the server synchronizes it to the terminals of the other users in the same chat panel, so that those users can view the marked content when they open the chat panel.
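The automatic generation and publication of marked content can be sketched as follows. The field names and the panel class are assumptions for illustration; the key points from the text are that the content is built without user typing and that the panel history outlives the instant message.

```python
def build_mark_content(marker, marked_user, first_message):
    # Generated automatically from the trigger operation on the chat
    # marking control; the target user types nothing.
    return {"marker": marker, "marked": marked_user,
            "message": first_message}


class ChatPanel:
    """Chat panel shared by a set of users; its history persists even
    after the instant message's effective duration has elapsed."""

    def __init__(self, members):
        self.members = set(members)
        self.history = []

    def publish(self, mark_content):
        # The server would synchronize this entry to every member's
        # terminal; here the shared history stands in for that.
        self.history.append(mark_content)


panel = ChatPanel({"user1", "user2", "user3", "user4"})
panel.publish(build_mark_content("user1", "user4",
                                 "hero 1 to three stars"))
```

Any member opening the panel later still finds the entry, which is how the user can review an expired first instant message.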
The marked content may include marker information, marked-user information, and the first instant message. Fig. 9 shows a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 9, in response to receiving the user's trigger operation on the chat marking control 910, a chat panel 920 is invoked and displayed; the chat panel is used to display interactive messages sent by users. While the chat panel is displayed, the target user is controlled to send a message in it, and the message includes the marked content 930, where the marker and the marked user are indicated by user names. As shown in fig. 9, the marker is user 1, the marked user is user 4, and the first instant message is indicated by the message content. Alternatively, the marked content may be implemented as a text description, such as "user A marks user B's hero 1 reaching three stars", which is not limited in this application. Correspondingly, the display of the scene picture of the virtual scene is embodied as displaying the marked content in the chat panel.
In this case, if the first instant message is a message with an effective duration, the first instant message displayed in the scene picture disappears after its display duration reaches the first duration (the effective duration), while the marked content displayed in the chat panel is not affected; that is, the marked content does not disappear after the first duration, and the user can view the history messages in the chat panel to review the first instant message.
When the expression marking control among the public marking controls is selected by the user, the process of marking the first instant message may be implemented as follows:
displaying at least two candidate expressions in response to receiving a trigger operation on the expression marking control;
and responding to the received trigger operation of the target expression, and displaying the target expression corresponding to the display position of the first instant message, wherein the target expression is any one of at least two candidate expressions.
Optionally, the at least two candidate expressions may be displayed in an expression selection page, which may be displayed at any position in the scene picture of the virtual scene; alternatively, the candidate expressions may be displayed around the expression marking control. Fig. 10 is a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 10, in response to the user's trigger operation on the expression marking control 1010, at least two candidate expressions 1020 are displayed around the control 1010, and the user may select one of them as the target expression.
In the embodiment of the present application, the expression marking control is a public marking control; that is, the target expression selected and sent based on it is visible to the at least two users. This may be implemented as the target terminal sending the target expression determined by the target user to the server, and the server synchronizing it to the terminals of the other users in the same virtual scene, which then display the target expression. In this case, to let the other users identify who sent the target expression, an identifier of the sending user may be displayed corresponding to the display position of the target expression, where the identifier may be implemented as at least one of the user's avatar, nickname, and the like. Fig. 11 is a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 11, after the user selects a target expression 1110, it may be displayed at the position corresponding to the first instant message 1120; at the same time, to mark the identity of the user who sent it, the user's identifier 1130 (the target user's avatar) is displayed at the position corresponding to the target expression 1110.
When multiple users send target expressions based on the same first instant message, the multiple target expressions may all be displayed corresponding to the display position of that message, for example, around it; optionally, the sender's identifier is displayed for each target expression. Fig. 12 shows a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 12, two target expressions are displayed corresponding to the same first instant message 1210, each with a corresponding sending user, so a corresponding identifier is displayed for each: target expression 1221 corresponds to identifier 1222, and target expression 1231 corresponds to identifier 1232.
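The accumulation of expressions from multiple senders on one message can be sketched as follows. The class and attribute names are hypothetical; the sketch only captures that each expression carries its sender's identifier and that the whole overlay lives and dies with the message.

```python
class ExpressionOverlay:
    """Target expressions attached to one instant message; each entry
    pairs an expression with the identifier (e.g. avatar) of its
    sending user."""

    def __init__(self, message_id):
        self.message_id = message_id
        self.expressions = []  # (expression, sender_id) pairs

    def add(self, expression, sender_id):
        self.expressions.append((expression, sender_id))

    def clear(self):
        # Expressions disappear together with the instant message
        # once its effective duration elapses.
        self.expressions.clear()


overlay = ExpressionOverlay("msg-42")
overlay.add("thumbs_up", "user1")
overlay.add("clap", "user3")
```

Two senders, two entries, each attributable; clearing the overlay models the expressions vanishing with the expired message.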
In a possible implementation, the first instant message is time-limited, and the display state of the target expression is consistent with that of the first instant message. If the effective duration of the first instant message is the first duration, the target expression is removed from display in response to the display duration of the first instant message reaching the first duration; that is, the target expression disappears along with the first instant message.
In a possible implementation, the first instant message has a default display duration, i.e., it is time-limited. When the user selects the fixed display control among the non-public marking controls, in response to receiving the trigger operation on the fixed display control, the first instant message is fixedly displayed at the display position corresponding to the first user's user identifier.
That is, when no trigger operation on the fixed display control is received, the first instant message disappears after its display duration reaches the first duration. When a trigger operation on the fixed display control is received, the display attribute of the first instant message is changed to continuous display, so that it does not disappear after the first duration and remains fixed at a certain position in the scene picture of the virtual scene. Meanwhile, to show this change of display attribute, a message identifier may be displayed corresponding to the first instant message, indicating that it is fixedly displayed. Fig. 13 shows a schematic diagram of the scene picture of a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 13, after the user's trigger operation on the fixed display control 1310 is received and the display duration of the first instant message reaches the first duration, the message does not disappear, and a message identifier 1320 corresponding to the fixed display control is displayed at the message's display position; the identifier 1320 may be presented as a "pin" image to indicate that the first instant message is fixedly displayed.
In one possible implementation, when the first instant message is in the fixed display state, the fixed display is cancelled in response to receiving a trigger operation on the message identifier. That is, when the user no longer needs the first instant message to be fixedly displayed, its display can be cancelled by triggering the message identifier; when the first instant message disappears, the message identifier disappears with it.
In a possible case, the maximum number of instant messages that can be fixedly displayed based on trigger operations on the non-public marking control is 1. That is, when the first instant message displayed for the first user's user identifier is in the fixed display state, in response to a second instant message being displayed for a second user of the at least two users and being fixedly displayed at the position corresponding to the second user's identifier, the fixed display of the first instant message is cancelled. In other words, when the user fixes another instant message, the previously fixed instant message disappears.
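The one-pin-at-a-time rule can be sketched as follows. The class and method names are assumptions; the sketch covers both cancellation paths described above: tapping the pin identifier again, and pinning a different message.

```python
class PinManager:
    """At most one instant message is in the fixed-display (pinned)
    state per scene picture; pinning a new message unpins the old."""

    def __init__(self):
        self.pinned = None  # id of the currently pinned message

    def pin(self, message_id):
        previous = self.pinned
        self.pinned = message_id  # the new pin replaces the old one
        return previous           # caller lets the old message expire

    def unpin(self, message_id):
        # Triggered by tapping the message's "pin" identifier again.
        if self.pinned == message_id:
            self.pinned = None


pins = PinManager()
pins.pin("msg-1")
replaced = pins.pin("msg-2")  # msg-1 loses its fixed display
```

Returning the replaced id lets the caller restore the old message's default timeout, matching the behavior that the previously fixed message simply disappears.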
To sum up, according to the information processing method in a virtual scene provided in the embodiment of the present application, when instant messages are displayed in the scene picture corresponding to users in the virtual scene, an instant message is marked based on a marking trigger operation on it, strengthening its reminding capability. As a result, interaction between the user and instant messages is no longer limited to passive viewing, the interaction modes are enriched, the prompting effect of instant messages for related events is improved, and the interaction between the user and the scene picture of the virtual scene is also improved.
Taking the scene picture of the virtual scene as a picture corresponding to an auto chess match scenario as an example, the at least two user identifiers included in the scene picture may refer to the user identifiers of the users in a leaderboard included in the scene picture, and the instant messages are taken as time-limited. Fig. 14 shows a flowchart of an information processing method in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a server or a terminal; illustratively, the computer device may be implemented as the server or the terminal shown in fig. 2. As shown in fig. 14, the method may include the following steps:
S1401, an instant message appears.
When multiple instant messages exist in the scene picture of the virtual scene at the same time, the user may interact with one or more of them within the effective display duration, i.e., within the first duration. The interaction modes may include: sending the instant message to the chat panel, sending an expression visible to multiple users corresponding to the instant message, fixedly displaying the instant message in the scene picture, and the like. The embodiment of the present application is described taking these three operations as an example; it should be noted, however, that based on different settings by relevant personnel, the interaction modes between the user and instant messages may include, but are not limited to, these three operations.
S1402, determine whether a long-press operation on the instant message is received; if so, execute S1403; otherwise, execute S1410.
S1403, the three operation controls are invoked.
S1404, the user chooses to send to chat.
S1405, the marked content corresponding to the instant message is sent to the chat panel.
In this case, after the first duration, the instant message disappears, but the marked content displayed in the chat panel is not affected.
S1406, the user chooses to send an expression.
S1407, the expression mark is displayed corresponding to the display position of the instant message.
In this case, after the first duration, the instant message disappears, and the expression mark disappears with it.
S1408, the user chooses to pin the message.
S1409, the pin identifier is displayed.
In this case, after the first duration, the instant message does not disappear. One way to cancel the display is that when the user clicks the pin identifier again, the instant message is removed and the identifier disappears with it. Another way is to limit the number of fixedly displayed instant messages in the scene picture of one virtual scene to one: when the user chooses to pin another instant message, the previously pinned instant message is removed from display.
And S1410, after the first time period, the instant message disappears.
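The branching in S1402 to S1410 can be illustrated with a minimal sketch. All function and key names below are illustrative assumptions; the patent describes behavior, not an implementation:

```python
# Minimal sketch of the S1402-S1410 flow; every name here is hypothetical,
# chosen for illustration rather than taken from the patent.
def handle_instant_message(long_pressed, choice=None):
    """Return what remains visible once the first duration elapses.

    long_pressed: whether a long-press (mark trigger) was received (S1402).
    choice: "chat", "expression", or "pin" when long_pressed is True
            (corresponding to S1404, S1406, and S1408 respectively).
    """
    if not long_pressed:
        # S1410: with no interaction, the message simply disappears.
        return {"message_visible": False}
    if choice == "chat":
        # S1404-S1405: the mark content persists in the chat panel
        # even after the instant message itself fades.
        return {"message_visible": False, "chat_panel_entry": True}
    if choice == "expression":
        # S1406-S1407: the expression mark disappears with the message.
        return {"message_visible": False, "expression_mark": False}
    if choice == "pin":
        # S1408-S1409: a pinned message stays past the first duration.
        return {"message_visible": True, "pinned": True}
    raise ValueError("unknown choice")
```

The sketch only captures which elements outlive the first duration in each branch; rendering and timing are outside its scope.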
To sum up, according to the information processing method in a virtual scene provided in the embodiment of the present application, when instant information is displayed in the scene picture of the virtual scene corresponding to a user in the virtual scene, marking processing of the instant information is performed based on a marking trigger operation on the instant information, so that the reminding capability of the instant information is enhanced. The interaction between the user and the instant information is therefore not limited to passive viewing: the interaction manners between the user and the instant information are enriched, the prompting effect of the instant information on related events is improved, and the interaction effect between the user and the scene picture of the virtual scene is also improved.
Fig. 15 is a block diagram of an information processing apparatus in a virtual scene according to an exemplary embodiment of the present application. As shown in Fig. 15, the apparatus includes:
a picture display module 1510, configured to display a scene picture of a virtual scene, where the scene picture includes user identifiers of at least two users corresponding to the virtual scene;
an information display module 1520, configured to display, in response to a first user of the at least two users corresponding to first instant information, the first instant information corresponding to a user identifier of the first user;
the mark processing module 1530 is configured to perform mark processing on the first instant message in response to receiving a mark trigger operation on the first instant message; the marking processing is used for strengthening the reminding capability of the first instant message.
In one possible implementation, the mark processing module 1530 includes:
the control display sub-module is used for responding to the received marking trigger operation of the first instant message and displaying the message operation control;
and the marking processing sub-module is used for responding to the received triggering operation of the information operation control and marking the first instant message based on the marking mode corresponding to the information operation control.
In a possible implementation manner, the information operation control includes a public mark control and a non-public mark control;
marking the first instant message based on a marking mode corresponding to the public marking control, and enabling the first instant message to be visible to the at least two users;
marking the first instant message based on a marking mode corresponding to the non-public marking control, and enabling the first instant message to be visible to a target user; the target user is a user who performs the mark trigger operation.
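The distinction between the two marking modes amounts to a visibility rule, which can be pictured with a small sketch (the mode names and function signature are assumptions made for illustration, not the patent's implementation):

```python
# Illustrative visibility rule for the public and non-public mark controls;
# all names are hypothetical.
def visible_to(mark_mode, all_users, marking_user):
    """Return the set of users who can see the marked instant message."""
    if mark_mode == "public":
        # Public mark control: the mark is visible to all users
        # corresponding to the virtual scene.
        return set(all_users)
    if mark_mode == "non_public":
        # Non-public mark control: only the target user who performed
        # the mark trigger operation sees it.
        return {marking_user}
    raise ValueError("unknown mark mode")
```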
In one possible implementation, the mark processing module 1530 includes:
the chat panel display submodule is used for responding to the received trigger operation of the public mark control and displaying a chat panel;
and the marked content display sub-module is used for publishing the marked content corresponding to the first instant message to the chat panel.
In one possible implementation, the mark processing module 1530 includes:
the candidate expression display sub-module is used for responding to the received trigger operation of the public mark control and displaying at least two candidate expressions;
and the target expression display sub-module is used for responding to the received trigger operation of the target expression and displaying the target expression corresponding to the display position of the first instant message, wherein the target expression is any one of at least two candidate expressions.
In a possible implementation manner, the mark processing module 1530 further includes:
and the identity display submodule is used for displaying the identity of the user selecting the target expression corresponding to the display position of the target expression.
In a possible implementation manner, the mark processing module 1530 further includes:
and the first display canceling submodule is used for canceling the display of the target expression in response to the display duration of the first instant message reaching the first duration.
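The duration-based cancellation performed by this sub-module can be pictured as a simple threshold check (names and units below are illustrative assumptions):

```python
# Illustrative duration check; parameter names and units are assumptions,
# not taken from the patent.
def expression_visible(elapsed_seconds, first_duration_seconds):
    """The target expression remains displayed only while the first
    instant message's display duration has not reached the first duration."""
    return elapsed_seconds < first_duration_seconds
```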
In a possible implementation manner, the first instant message corresponds to a default display duration; the mark processing module 1530 is configured to, in response to receiving the trigger operation on the non-public mark control, fixedly display the first instant message at a display position corresponding to the user identifier of the first user.
In a possible implementation manner, the mark processing module 1530 further includes:
and the identification display submodule is used for displaying an information identification corresponding to the first instant message, and the information identification is used for indicating that the first instant message is fixedly displayed.
In a possible implementation manner, the mark processing module 1530 further includes:
and a second display canceling sub-module, configured to cancel the fixed display of the first instant message in response to receiving a trigger operation based on the information identifier.
In a possible implementation manner, the mark processing module 1530 further includes:
and a third display canceling sub-module, configured to, in response to second instant information being displayed corresponding to a user identifier of a second user of the at least two users, fixedly display the second instant information at a display position corresponding to the user identifier of the second user and cancel the fixed display of the first instant information.
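The single-pin behavior of this sub-module — at most one fixedly displayed message per scene picture, with a new pin replacing the old one — can be sketched as follows (a hypothetical class; the patent specifies the behavior, not the code):

```python
class PinnedMessageSlot:
    """Illustrative holder enforcing that at most one instant message is
    in the fixed display state in the scene picture at a time; all names
    are assumptions for the sake of the sketch."""

    def __init__(self):
        self.pinned = None  # identifier of the currently pinned message

    def pin(self, message_id):
        """Pin a new message; return the previously pinned one (if any)
        so the caller can cancel its fixed display."""
        previous = self.pinned
        self.pinned = message_id
        return previous
```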
In one possible implementation, the mark trigger operation is a long-press operation on the first instant message.
To sum up, the information processing apparatus in a virtual scene provided in the embodiment of the present application performs marking processing of instant information based on a mark trigger operation on the instant information when the instant information is displayed corresponding to a user in the virtual scene in the scene picture of the virtual scene, so that the reminding capability of the instant information is enhanced. The interaction between the user and the instant information is therefore not limited to passive viewing: the interaction manners between the user and the instant information are enriched, the prompting effect of the instant information on related events is improved, and the interaction effect between the user and the scene picture of the virtual scene is also improved.
Fig. 16 shows a block diagram of a computer device 1600 according to an exemplary embodiment of the present application. The computer device may be implemented as the server in the above aspects of the present application. The computer device 1600 includes a Central Processing Unit (CPU) 1601, a system memory 1604 including a Random Access Memory (RAM) 1602 and a Read-Only Memory (ROM) 1603, and a system bus 1605 connecting the system memory 1604 and the CPU 1601. The computer device 1600 also includes a mass storage device 1606 for storing an operating system 1609, application programs 1610, and other program modules 1611.
The mass storage device 1606 is connected to the central processing unit 1601 by a mass storage controller (not shown) connected to the system bus 1605. The mass storage device 1606 and its associated computer-readable media provide non-volatile storage for the computer device 1600. That is, the mass storage device 1606 may include a computer-readable medium (not shown) such as a hard disk or Compact Disc-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1604 and mass storage device 1606 described above may collectively be referred to as memory.
According to various embodiments of the present disclosure, the computer device 1600 may also operate through a remote computer connected to a network, such as the Internet. That is, the computer device 1600 may be connected to the network 1608 through the network interface unit 1607 coupled to the system bus 1605, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 1607.
The memory further includes at least one instruction, at least one program, a code set, or a set of instructions, which is stored in the memory, and the central processing unit 1601 is configured to implement all or part of the steps in the information processing method in the virtual scene shown in the foregoing embodiments by executing the at least one instruction, the at least one program, the code set, or the set of instructions.
Fig. 17 shows a block diagram of a computer device 1700 according to an exemplary embodiment of the present application. The computer device 1700 may be implemented as a terminal as described above, such as: a smartphone, a tablet, a laptop, or a desktop computer. Computer device 1700 may also be referred to by other names such as user equipment, portable terminals, laptop terminals, desktop terminals, and the like.
Generally, computer device 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement all or part of the steps in the information processing method in the virtual scene provided by the method embodiments of the present application.
In some embodiments, computer device 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuit 1704, display screen 1705, camera assembly 1706, audio circuit 1707, positioning assembly 1708, and power supply 1709.
The peripheral interface 1703 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, the memory 1702, and the peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
In some embodiments, computer device 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
Those skilled in the art will appreciate that the architecture shown in FIG. 17 is not intended to be limiting of the computer device 1700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer readable storage medium is further provided, which stores at least one instruction, at least one program, a code set, or a set of instructions, which is loaded and executed by a processor to implement all or part of the steps of the information processing method in the above virtual scene. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform all or part of the steps of the method shown in any one of the embodiments of fig. 5, fig. 7 or fig. 14.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (16)

1. An information processing method in a virtual scene, the method comprising:
displaying a scene picture of a virtual scene, wherein the scene picture comprises user identifications of at least two users corresponding to the virtual scene;
responding to a first instant message corresponding to a first user of the at least two users, and displaying the first instant message corresponding to a user identifier of the first user;
in response to receiving a marking trigger operation for the first instant message, marking the first instant message; the marking processing is used for strengthening the reminding capability of the first instant message.
2. The method of claim 1, wherein in response to receiving a marking trigger operation for the first instant message, marking the first instant message comprises:
responding to the received mark triggering operation of the first instant message, and displaying an information operation control;
and in response to receiving the triggering operation of the information operation control, marking the first instant message based on a marking mode corresponding to the information operation control.
3. The method of claim 2, wherein the information operation control includes at least one of a public mark control and a non-public mark control;
marking the first instant message based on a marking mode corresponding to the public marking control, and enabling the first instant message to be visible to the at least two users;
marking the first instant message based on a marking mode corresponding to the non-public marking control, and enabling the first instant message to be visible to a target user; the target user is a user who performs the mark trigger operation.
4. The method according to claim 3, wherein the marking the first instant message based on the marking mode corresponding to the information operation control in response to receiving the trigger operation on the information operation control comprises:
in response to receiving a trigger operation on the public mark control, displaying a chat panel;
and publishing the mark content corresponding to the first instant message to the chat panel.
5. The method according to claim 3, wherein the marking the first instant message based on the marking mode corresponding to the information operation control in response to receiving the trigger operation on the information operation control comprises:
displaying at least two candidate expressions in response to receiving a triggering operation on the public mark control;
and responding to the received trigger operation of the target expression, and displaying the target expression corresponding to the display position of the first instant message, wherein the target expression is any one of at least two candidate expressions.
6. The method of claim 5, further comprising:
and displaying the identity of the user selecting the target expression corresponding to the display position of the target expression.
7. The method of claim 5, further comprising:
and canceling the display of the target expression in response to the display duration of the first instant message reaching a first duration.
8. The method of claim 3, wherein the first instant message corresponds to a default display duration; the marking processing of the first instant message based on the marking mode corresponding to the information operation control in response to the receiving of the trigger operation of the information operation control comprises:
and responding to the received trigger operation of the non-public mark control, and fixedly displaying the first instant message at a display position corresponding to the user identifier of the first user.
9. The method of claim 8, further comprising:
displaying an information identifier corresponding to the first instant message, wherein the information identifier is used for indicating that the first instant message is fixedly displayed.
10. The method of claim 9, further comprising:
and canceling the fixed display of the first instant message in response to receiving the triggering operation based on the message identification.
11. The method of claim 8, further comprising:
and responding to that a user identifier corresponding to a second user of the at least two users displays second instant information, fixedly displaying the second instant information at a display position corresponding to the user identifier of the second user, and canceling the fixed display of the first instant information.
12. The method of any of claims 1 to 11, wherein the mark trigger operation is a long-press operation on the first instant message.
13. An information processing apparatus in a virtual scene, the apparatus comprising:
the image display module is used for displaying scene images of a virtual scene, and the scene images contain user identifications of at least two users corresponding to the virtual scene;
the information display module is used for responding to first instant information corresponding to a first user in the at least two users and displaying the first instant information corresponding to the user identification of the first user;
the mark processing module is used for responding to the received mark triggering operation of the first instant message and marking the first instant message; the marking processing is used for strengthening the reminding capability of the first instant message.
14. A computer device, characterized in that it comprises a processor and a memory, said memory storing at least one computer program which is loaded and executed by said processor to implement the information processing method in a virtual scenario according to any one of claims 1 to 12.
15. A computer-readable storage medium, in which at least one computer program is stored, the computer program being loaded and executed by a processor to implement the information processing method in a virtual scene according to any one of claims 1 to 12.
16. A computer program product, characterized in that it comprises at least one computer program which is loaded and executed by a processor to implement the method of information processing in a virtual scene according to any one of claims 1 to 12.
CN202111195318.6A 2021-10-13 2021-10-13 Information processing method, device, equipment and storage medium in virtual scene Active CN113893560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111195318.6A CN113893560B (en) 2021-10-13 2021-10-13 Information processing method, device, equipment and storage medium in virtual scene

Publications (2)

Publication Number Publication Date
CN113893560A true CN113893560A (en) 2022-01-07
CN113893560B CN113893560B (en) 2023-07-21

Family

ID=79191913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111195318.6A Active CN113893560B (en) 2021-10-13 2021-10-13 Information processing method, device, equipment and storage medium in virtual scene

Country Status (1)

Country Link
CN (1) CN113893560B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023138321A1 (en) * 2022-01-21 2023-07-27 北京字节跳动网络技术有限公司 Method and apparatus for interaction in instant messaging, and computer device and storage medium
WO2023221716A1 (en) * 2022-05-20 2023-11-23 腾讯科技(深圳)有限公司 Mark processing method and apparatus in virtual scenario, and device, medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616357A (en) * 2015-02-12 2015-05-13 刘鹏 Three-dimensional simulation model interactive-use network platform system attached with instant message
US20160361659A1 (en) * 2015-06-12 2016-12-15 Splendor Game Technology Co., Ltd. Method and system for instant messaging and gaming
CN106487658A (en) * 2016-10-19 2017-03-08 任峰 A kind of instant message display packing based on labelling
US20170282063A1 (en) * 2016-03-30 2017-10-05 Sony Computer Entertainment Inc. Personalized Data Driven Game Training System
CN112891944A (en) * 2021-03-26 2021-06-04 腾讯科技(深圳)有限公司 Interaction method and device based on virtual scene, computer equipment and storage medium


Also Published As

Publication number Publication date
CN113893560B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN113134237B (en) Virtual rewarding resource allocation method and device, electronic equipment and storage medium
CN110339564B (en) Virtual object display method, device, terminal and storage medium in virtual environment
CN113069767B (en) Virtual interaction method, device, terminal and storage medium
US20220280870A1 (en) Method, apparatus, device, and storage medium, and program product for displaying voting result
CN111672116B (en) Method, device, terminal and storage medium for controlling virtual object release technology
TWI804032B (en) Method for data processing in virtual scene, device, apparatus, storage medium and program product
US20230336792A1 (en) Display method and apparatus for event livestreaming, device and storage medium
CN113893560A (en) Information processing method, device, equipment and storage medium in virtual scene
EP3943175A1 (en) Information display method and apparatus, and device and storage medium
CN111672111A (en) Interface display method, device, equipment and storage medium
TWI818351B (en) Messaging method, device, terminal, and medium for a multiplayer online battle program
WO2022193838A1 (en) Game settlement interface display method and apparatus, device and medium
CN112891942B (en) Method, device, equipment and medium for obtaining virtual prop
CN110801629B (en) Method, device, terminal and medium for displaying virtual object life value prompt graph
CN112691366A (en) Virtual item display method, device, equipment and medium
JP2023164787A (en) Picture display method and apparatus for virtual environment, and device and computer program
WO2022083451A1 (en) Skill selection method and apparatus for virtual object, and device, medium and program product
CN113975824A (en) Game fighting reminding method and related equipment
CN114288639A (en) Picture display method, providing method, device, equipment and storage medium
US20230330539A1 (en) Virtual character control method and apparatus, device, storage medium, and program product
WO2023024880A1 (en) Method and apparatus for expression displaying in virtual scenario, and device and medium
WO2024060914A1 (en) Virtual object generation method and apparatus, device, medium, and program product
CN116549974A (en) Information communication method, device and product in virtual fight
CN116650954A (en) Game progress control method and device, electronic equipment and storage medium
KR20210132302A (en) Contents supply system using cloud game

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant