CN111672099B - Information display method, device, equipment and storage medium in virtual scene
- Publication number: CN111672099B (Application number: CN202010468038.7A)
- Authority: CN (China)
- Prior art keywords: scene, virtual, picture, bullet screen, virtual object
- Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Classifications
- A63F13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
- A63F13/53: Controlling the output signals based on the game progress, involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F2300/308: Details of the user interface
Abstract
The present application relates to a method, apparatus, device, and storage medium for displaying information in a virtual scene, in the field of network technologies. The method comprises: displaying a scene picture corresponding to the virtual scene, and, after acquiring interaction information containing target interactive content sent by a first virtual object, displaying a bullet screen element containing the target interactive content on the scene picture. In this way, the user can read the interactive content directly in the scene picture while the interaction information is displayed, which reduces switching of scene pictures, improves the efficiency of information interaction in the virtual scene, and reduces the waste of terminal resources.
Description
Technical Field
The present application relates to the field of network technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying information in a virtual scene.
Background
With the development of internet technology, team-style game modes are favored by more and more users.
In a team-based game mode, cooperation between players is usually achieved through communication; generally, players communicate by entering text information in a chat interface provided within the game application.
However, when a user needs to play the game and exchange information at the same time, the user has to switch back and forth between the chat interface and the game interface, and this switching consumes considerable computing resources, resulting in a waste of terminal resources.
Disclosure of Invention
The embodiments of the present application provide an information display method, apparatus, device, and storage medium in a virtual scene, which can reduce the waste of terminal resources. The technical solution is as follows:
in one aspect, a method for displaying information in a virtual scene is provided, where the method includes:
displaying a scene picture corresponding to a virtual scene, wherein the virtual scene comprises a plurality of virtual objects;
acquiring first interaction information sent by a first virtual object, wherein the first interaction information comprises target interaction content; the first virtual object is any one of the plurality of virtual objects;
and displaying a bullet screen element on the scene picture, wherein the bullet screen element comprises the target interactive content.
In one aspect, a method for displaying information in a virtual scene is provided, where the method includes:
displaying a scene picture corresponding to a virtual scene, wherein the scene picture comprises a first virtual object, and the first virtual object is a virtual object controlled by a terminal for displaying the scene picture;
in response to the scene picture being a virtual scene running picture, displaying, in a scrolling bullet screen manner on the scene picture, the text content of interaction information sent by same-camp objects, a same-camp object being a virtual object in the same camp as the first virtual object;
in response to the scene picture being a virtual scene end picture, displaying, in a scrolling bullet screen manner on the scene picture, the text content of interaction information sent by winning-camp objects, a winning-camp object being a virtual object in the camp that wins in the virtual scene.
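For illustration only, the following Python sketch models the two display branches above, i.e., selecting which interaction information is scrolled as bullet screens on the running picture versus the end picture. All identifiers (PictureType, InteractionInfo, select_bullet_screen_texts) are assumptions introduced here, not names from the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class PictureType(Enum):
    RUNNING = auto()   # virtual scene running picture
    ENDING = auto()    # virtual scene end (settlement) picture

@dataclass
class InteractionInfo:
    sender_camp: int   # camp id of the sending virtual object
    text: str          # text content of the interaction information

def select_bullet_screen_texts(picture_type: PictureType,
                               own_camp: int,
                               winning_camp: int,
                               infos: list[InteractionInfo]) -> list[str]:
    """Return the texts to scroll as bullet screens on the scene picture."""
    if picture_type is PictureType.RUNNING:
        # Running picture: show messages from same-camp objects.
        return [i.text for i in infos if i.sender_camp == own_camp]
    # End picture: show messages from the winning camp.
    return [i.text for i in infos if i.sender_camp == winning_camp]
```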
In one aspect, an apparatus for displaying information in a virtual scene is provided, the apparatus comprising:
the system comprises a first display module, a second display module and a display module, wherein the first display module is used for displaying scene pictures corresponding to a virtual scene, and the virtual scene comprises a plurality of virtual objects;
the first acquisition module is used for acquiring first interaction information sent by a first virtual object, wherein the first interaction information comprises target interaction content; the first virtual object is any one of the plurality of virtual objects;
and the second display module is used for displaying the bullet screen element on the scene picture, and the bullet screen element comprises the target interactive content.
In a possible implementation manner, the scene picture is a virtual scene running picture, the virtual scene running picture being a picture of the virtual scene observed from the perspective of a second virtual object while the virtual scene is running; the second virtual object is the virtual object controlled by the terminal displaying the scene picture;
or,
the scene picture is a virtual scene end picture, the virtual scene end picture being a picture displayed when the running of the virtual scene ends.
In a possible implementation manner, the first obtaining module is configured to obtain the first interaction information in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp.
In a possible implementation manner, the second display module is configured to: in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp, display the bullet screen element with a first visual effect on the scene picture;
in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in different camps, display the bullet screen element with a second visual effect on the scene picture;
the first visual effect being different from the second visual effect.
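A minimal sketch of the camp-dependent visual effect described above; the concrete effect values (colors, border flag) are invented placeholders, not values from the patent.

```python
def bullet_screen_effect(sender_camp: int, viewer_camp: int) -> dict:
    """Pick a display style for the bullet screen element by camp."""
    first_effect = {"color": "#3CB371", "border": True}    # same camp
    second_effect = {"color": "#CD5C5C", "border": False}  # different camp
    return first_effect if sender_camp == viewer_camp else second_effect
```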
In a possible implementation manner, the first obtaining module is configured to obtain the first interaction information in response to the scene picture being a virtual scene end picture and the first virtual object being a virtual object in the winning camp in the virtual scene.
In one possible implementation, the apparatus further includes:
a third display module, configured to display an interactive control on the virtual scene end picture in response to the scene picture being a virtual scene end picture and the second virtual object being a virtual object in the winning camp in the virtual scene;
a second obtaining module, configured to obtain input second interaction information in response to a trigger operation on the interactive control;
and a sending module, configured to send the second interaction information to a server, so that the server sends the second interaction information to the terminals corresponding to the virtual objects in the virtual scene.
In a possible implementation manner, the second display module is configured to display the bullet screen element on the scene picture in response to the terminal displaying the scene picture having enabled the bullet screen function.
In a possible implementation manner, the scene picture includes a bullet screen switch control;
before the bullet screen element is displayed in response to the terminal displaying the scene picture having enabled the bullet screen function, the apparatus further comprises:
a determining module, configured to determine, in response to an opening operation performed on the bullet screen switch control, that the terminal displaying the scene picture has enabled the bullet screen function.
In one possible implementation, the apparatus further includes:
and the third acquisition module is used for responding to the fact that the bullet screen function is started by the terminal for displaying the scene picture, the first interaction information is voice information, and the first interaction information is subjected to voice recognition to obtain the target interaction content.
In one possible implementation, the apparatus further includes:
and the first playing module is used for responding to the situation that the terminal for displaying the scene picture does not start the barrage function, the first interaction information is voice information, and the first interaction information is played through the terminal corresponding to the first virtual object.
In one possible implementation, the apparatus further includes:
the screen recording module is used for recording the screen of the scene picture in response to receiving a screen recording instruction to obtain a screen recording file;
the saving module is used for correspondingly saving the bullet screen elements displayed on the scene picture in the screen recording process of the screen recording file;
the second playing module is used for responding to an instruction for playing the screen recording file and playing the picture in the screen recording file;
and the fourth display module is used for displaying the barrage element corresponding to the screen recording file on the scene picture in the screen recording file playing process.
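A sketch, under assumed file and event formats, of saving bullet screen elements in correspondence with the screen recording file and re-displaying them during playback, as the four modules above describe; nothing here is the patent's actual implementation.

```python
import json

def save_recording(video_path: str, bullet_events: list[dict]):
    # bullet_events: [{"t": seconds_into_recording, "text": ...}, ...]
    # Saved alongside the recording so playback can restore the barrage.
    with open(video_path + ".barrage.json", "w", encoding="utf-8") as f:
        json.dump(bullet_events, f, ensure_ascii=False)

def load_bullet_events(video_path: str) -> list[dict]:
    with open(video_path + ".barrage.json", encoding="utf-8") as f:
        return json.load(f)

def on_playback_tick(t: float, events: list[dict], show):
    # Re-display each bullet screen element near the moment it originally
    # appeared (tolerance of one 20 ms tick).
    for e in events:
        if abs(e["t"] - t) < 0.05:
            show(e["text"])
```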
In one aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the information presentation method in the above virtual scene.
In one aspect, a computer-readable storage medium is provided, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the information presentation method in the virtual scene.
The technical scheme provided by the application can comprise the following beneficial effects:
by displaying a scene picture corresponding to the virtual scene and, after acquiring the interaction information containing the target interactive content sent by the first virtual object, displaying a bullet screen element containing the target interactive content on the scene picture, the user can read the interactive content directly in the scene picture while the interaction information is displayed; this reduces switching of scene pictures, improves the efficiency of information interaction in the virtual scene, and reduces the waste of terminal resources.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 illustrates a schematic diagram of a map provided by a virtual scene of a MOBA game, shown in an exemplary embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for presenting information in a virtual scene according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an interface for displaying a bullet screen element on a scene screen according to an exemplary embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for presenting information in a virtual scene according to an exemplary embodiment of the present application;
FIG. 6 is a diagram illustrating a virtual scene end screen in accordance with an exemplary embodiment of the present application;
FIG. 7 is an interface diagram illustrating a virtual scene end screen in accordance with an exemplary embodiment of the present application;
fig. 8 is a diagram illustrating a terminal sending interaction information according to an exemplary embodiment of the present application;
FIG. 9 illustrates an interface diagram of a scene screen shown in an exemplary embodiment of the present application;
FIG. 10 is a diagram illustrating presentation of first interaction information in the form of a dialog box according to an exemplary embodiment of the present application;
FIG. 11 is a diagram illustrating a scene screen according to an exemplary embodiment of the present application;
fig. 12 is a diagram illustrating a barrage element when playing back a screen recording file according to an exemplary embodiment of the present application;
FIG. 13 is a flow chart illustrating a process for recording and playing back content for a display interface according to an exemplary embodiment of the present application;
FIG. 14 illustrates a flow chart of a method for information presentation in a virtual scene, shown in an exemplary embodiment of the present application;
FIG. 15 is a flowchart illustrating a method for presenting information in a virtual scene according to an exemplary embodiment of the present application;
FIG. 16 is a block diagram of an information presentation apparatus in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 17 is a block diagram illustrating the structure of a computer device in accordance with an exemplary embodiment;
FIG. 18 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The embodiment of the application provides an information display method in a virtual scene, which can reduce the switching frequency of a user between a chat interface and a game interface in a terminal, improve the information interaction efficiency in the virtual scene and further reduce the waste of terminal resources. To facilitate understanding, several terms referred to in this application are explained below.
1) Virtual scene
A virtual scene refers to a scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulation of a real-world environment, a semi-simulated semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, without being limited thereto. Optionally, the virtual scene is also used for battles between at least two virtual characters, and has virtual resources available to at least two virtual characters. Optionally, the virtual scene includes a square map with a symmetric lower-left region and upper-right region; virtual characters belonging to two hostile camps each occupy one of the regions, and the winning goal is to destroy a target building/site/base/crystal deep in the opposing region.
2) Virtual object
A virtual object refers to a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and an animation character. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object may be a three-dimensional model; each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies part of its space. Optionally, the virtual character is a three-dimensional character built on three-dimensional human skeleton technology, which takes on different appearances by wearing different skins. In some implementations, the virtual character may also be implemented with a 2.5-dimensional or 2-dimensional model, which is not limited in this application.
3) Multi-person online tactical sports
In a multiplayer online tactical competition, on a map provided by the virtual scene, different virtual teams belonging to at least two hostile camps each occupy their own map region and compete toward a certain winning condition. Such winning conditions include, but are not limited to: occupying sites or destroying enemy-camp sites, killing virtual characters of the enemy camp, surviving within a specified scene and time, seizing certain resources, or outscoring the opponent within a specified time. The tactical competition may proceed in units of rounds, and the map of each round may be the same or different. Each virtual team includes one or more virtual characters, such as 1, 3, or 5.
4) MOBA (Multiplayer Online Battle Arena) game
A MOBA game is a game that provides several base points in the virtual world, in which users in different camps control virtual characters to fight, occupy base points, or destroy the enemy camp's base points. For example, a MOBA game may divide users into two hostile camps and disperse the user-controlled virtual characters in the virtual world to compete with each other, with destroying or occupying all enemy base points as the winning condition. A MOBA game proceeds in rounds, and the duration of one round runs from the moment the game starts until the winning condition is met.
5) Settlement interface
In a MOBA game, at the end of each battle, whether the player's camp wins or loses, the game enters a battle settlement interface, which may display each player's performance in the round for the player's team, the obtainable rewards, experience, and the like.
Meanwhile, players on the same team can perform interaction operations such as giving likes and sending gifts on the settlement interface.
FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server cluster 120, a second terminal 130.
The first terminal 110 has installed and runs a client 111 supporting a virtual scene, and the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client may be any one of a MOBA game, a battle-royale shooting game, and an SLG game; in this embodiment, the client is a MOBA game by way of example. The first terminal 110 is a terminal used by the first user 101, who uses the first terminal 110 to control a first virtual character located in the virtual scene to perform activities; the first virtual character may be referred to as the master virtual character of the first user 101. The activities of the first virtual character include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is a character such as a simulated persona or an animated persona.
The second terminal 130 has installed and runs a client 131 supporting a virtual scene, and the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a MOBA game, a battle-royale shooting game, and an SLG game; in this embodiment, the client is a MOBA game by way of example. The second terminal 130 is a terminal used by the second user 102, who uses the second terminal 130 to control a second virtual character located in the virtual scene to perform activities; the second virtual character may be referred to as the master virtual character of the second user 102. Illustratively, the second virtual character is a character such as a simulated persona or an animated persona.
Optionally, the first virtual character and the second virtual character are in the same virtual scene. Optionally, they may belong to the same camp, the same team, or the same organization, have a friend relationship, or have temporary communication rights. Alternatively, they may belong to different camps, different teams, or different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 to another of the plurality of terminals; this embodiment uses only the first terminal 110 and the second terminal 130 as examples. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments there are a plurality of other terminals 140 that may access the server cluster 120. Optionally, one or more terminals 140 correspond to the developer: a development and editing platform for the virtual scene client is installed on the terminal 140, on which the developer can edit and update the client and transmit the updated client installation package to the server cluster 120 through a wired or wireless network; the first terminal 110 and the second terminal 130 can download the client installation package from the server cluster 120 to update the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server cluster 120 through a wireless network or a wired network.
The server cluster 120 includes at least one of an independent physical server, a plurality of independent physical servers, a cloud server providing cloud computing services, a cloud computing platform, and a virtualization center. The server cluster 120 is used for providing background services for clients supporting three-dimensional virtual scenes. Optionally, the server cluster 120 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 120 undertakes the secondary computing work, and the terminal undertakes the primary computing work; alternatively, the server cluster 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server cluster 120 includes a server 121 and a server 126, where the server 121 includes a processor 122, a user account database 123, a combat service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 121 and process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the avatar of the user account, the nickname of the user account, the combat-power index of the user account, and the service area where the user account is located; the combat service module 124 is configured to provide a plurality of combat rooms, such as 1V1, 3V3, and 5V5 combat, for users to fight in; the user-facing I/O interface 125 is configured to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless or wired network to exchange data. Optionally, an intelligent signal module 127 is disposed in the server 126 and is configured to implement the information display method in the virtual scene provided in the following embodiments.
Fig. 2 is a diagram illustrating a map provided by the virtual scene of a MOBA game according to an exemplary embodiment of the present application. The map 200 is square and is diagonally divided into a lower-left triangular region 220 and an upper-right triangular region 240. There are three routes from the lower-left corner of the lower-left triangular region 220 to the upper-right corner of the upper-right triangular region 240: an upper lane 21, a middle lane 22, and a lower lane 23. In a typical round, 10 virtual characters are divided into two teams to compete: the 5 virtual characters of the first camp occupy the lower-left triangular region 220, and the 5 virtual characters of the second camp occupy the upper-right triangular region 240. The first camp wins by destroying or occupying all base points of the second camp, and the second camp wins by destroying or occupying all base points of the first camp.
Illustratively, the sites of the first camp include: 9 defensive towers 24 and a first base 25. Of the 9 defensive towers 24, 3 stand on each of the upper lane 21, middle lane 22, and lower lane 23; the first base 25 is located at the lower-left corner of the lower-left triangular region 220.
Illustratively, the sites of the second camp include: 9 defensive towers 24 and a second base 26. Of the 9 defensive towers 24, 3 stand on each of the upper lane 21, middle lane 22, and lower lane 23; the second base 26 is located in the upper-right corner of the upper-right triangular region 240.
The region where the dotted line lies in fig. 2 may be referred to as the river region. The river region is common to the first camp and the second camp, and borders both the lower-left triangular region 220 and the upper-right triangular region 240.
The MOBA game requires each virtual character to acquire resources in the map 200, thereby improving the combat ability of the virtual character. The resources include:
1. Minions that periodically appear on the upper lane 21, middle lane 22, and lower lane 23; killing a minion yields experience and gold coins.
2. The middle lane (the diagonal from bottom-left to top-right) and the river region (the diagonal from top-left to bottom-right) divide the central area into 4 triangular areas A, B, C, D (also called the four jungle areas). These 4 areas periodically refresh neutral monsters; when a monster is killed, nearby virtual characters obtain experience, gold coins, and gain (BUFF) effects.
3. A major dragon 27 and a minor dragon 28 that periodically refresh at two symmetric positions in the river region. When the major dragon 27 or the minor dragon 28 is killed, all virtual characters of the killing camp obtain experience, gold coins, and BUFF effects. The major dragon 27 may also be called by other names such as "Overlord" or "Kaiser", and the minor dragon 28 by other names such as "Tyrant" or "Magic Dragon".
In one example, a gold-coin monster stands at each of the upper and lower river channels, appearing 30 seconds after the match starts; killing it yields gold coins, and it refreshes every 70 seconds.
Area A: one red BUFF, two common monsters (a pig and a bird), and one Tyrant (the minor dragon). The red BUFF and the common monsters appear 30 seconds after the match starts; a common monster refreshes 70 seconds after being killed, and the red BUFF refreshes every 90 seconds after being killed.
The Tyrant appears 2 minutes after the match starts and refreshes 3 minutes after being killed; killing it grants the whole team gold coins and experience rewards. The Tyrant withdraws at 9 minutes 55 seconds, the Dark Tyrant appears at 10 minutes, and killing the Dark Tyrant grants the Dark Tyrant BUFF and the Revenge BUFF.
Area B: one blue BUFF and two common monsters (a wolf and a bird), which also appear at 30 seconds and refresh every 90 seconds after being killed.
Area C: identical to area B, with one blue BUFF and two common monsters (a wolf and a bird), appearing at 30 seconds and refreshing every 90 seconds.
Area D: similar to area A, with one red BUFF (which likewise boosts damage output and applies a slow) and two common monsters (a pig and a bird). Area D also contains the Overlord (the major dragon). The Overlord appears 8 minutes after the match starts and refreshes five minutes after being killed; the killing side obtains the Overlord BUFF, the Revenge BUFF, and Overlord Vanguards on the lanes (or a manually summoned sky dragon, also called a bone dragon).
In one illustrative example, the BUFFs are specified as follows:
Red BUFF: lasts 70 seconds; attacks are accompanied by sustained burn damage and a slow effect.
Blue BUFF: lasts 70 seconds; shortens cooldown time and additionally restores a certain amount of mana per second.
Killing the Dark Tyrant grants the Dark Tyrant BUFF and the Revenge BUFF:
Dark Tyrant BUFF: increases the whole team's physical attack (80 + 5% of current physical attack) and the whole team's magic attack (120 + 5% of current magic attack); lasts 90 seconds.
Revenge BUFF: damage output against the Overlord is reduced by 50%; it does not disappear on death and lasts 90 seconds.
Killing the Overlord grants the Overlord BUFF and the Revenge BUFF:
Overlord BUFF: improves the whole team's health regeneration and mana regeneration by 1.5% per second; lasts 90 seconds; the Overlord BUFF is lost on death.
Revenge BUFF: damage output against the Dark Tyrant is reduced by 50%; it does not disappear on death and lasts 90 seconds.
The following benefits are obtained after killing the Overlord:
1. Team members receive 100 gold coins and the gains even if their master virtual character did not participate in the kill, including master virtual characters waiting on the respawn cooldown.
2. From the moment the Overlord is killed, the killing side's next three waves of minions on all three lanes become Overlord Vanguards (flying dragons). The Vanguards are very powerful and push all three lanes simultaneously, putting enormous lane pressure on the opponent, who must split up to defend. The map issues a Vanguard alert, and a prompt in the middle of the screen indicates in how many waves the Vanguards arrive (typically three).
The combat capability of the 10 virtual characters comprises two parts: level and equipment. Levels are obtained by accumulating experience points, and equipment is purchased with accumulated gold coins. The 10 virtual characters can be obtained by the server matching 10 user accounts online. Illustratively, the server matches 2, 6, or 10 user accounts online to compete in the same virtual world. The 2, 6, or 10 virtual characters belong to two hostile camps, with the same number of virtual characters in each camp. For example, each camp has 5 virtual characters, which may be divided into: warrior characters, assassin characters, mage characters, support (or tank) characters, and marksman characters.
The above battles may proceed in units of rounds, and the map of each round may be the same or different. Each camp includes one or more virtual characters, such as 1, 3, or 5.
Referring to fig. 3, a flowchart of an information presentation method in a virtual scene according to an exemplary embodiment of the present application is shown. The information presentation method in the virtual scene may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 3, the information presentation method in the virtual scene includes the following steps:
In a possible implementation manner, the scene picture corresponding to the virtual scene is a picture of a terminal display interface corresponding to any one virtual object in the plurality of virtual objects.
In a possible implementation manner, a first user account is logged in on a first terminal, and the first user account corresponds to a first virtual object in the virtual scene; that is, the first terminal logged in with the first user account has control over the first virtual object, and the user can control the first virtual object in the virtual scene through the first terminal.
In a possible implementation manner, an application program supporting the virtual scene is installed on the first terminal, and the user of the first terminal can log in to the application program through the first user account. Within the application program, the first terminal can team up or fight with at least one terminal on which the same application program is installed, so that the virtual objects corresponding to those terminals interact with the first virtual object corresponding to the first terminal in the virtual scene of the application program.
In a possible implementation manner, each first virtual object may send at least one piece of interaction information, each piece having its own interactive content. The interaction information may be sent directly to a target virtual object, sent collectively to all virtual objects in the camp where the target virtual object is located, or be visible to all virtual objects in the virtual scene; the target virtual object is any virtual object in the virtual scene other than the first virtual object.
In a possible implementation manner, the first interaction information includes an interaction information identifier and interaction information content. The identifier includes information about the user account corresponding to the terminal that sent the interaction information to the first terminal, which may include the user account name, the user account code, the relationship with the first user account, and the like, so that the terminal can determine the source of the first interaction information from the identifier.
In one possible implementation, the interaction information content is at least one of text information content and voice information content. Each piece of interaction information content in the first interaction information has a corresponding text content, and the target interactive content is the text content displayed on the scene picture.
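As an illustration of the identifier-plus-content structure described above, a possible data layout might look as follows; all field names are assumptions introduced here and do not come from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionIdentifier:
    account_name: str        # user account name of the sender
    account_code: str        # user account code
    relation: Optional[str]  # relationship with the first user account

@dataclass
class FirstInteractionInfo:
    identifier: InteractionIdentifier
    voice: Optional[bytes]   # voice information content, if any
    text: Optional[str]      # text information content, if any
    target_text: str         # target interactive content shown on screen
```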
In one possible implementation, the bullet screen element is a visual element presented in a scrolling bullet screen manner.
Referring to fig. 4, which shows an interface diagram of a bullet screen element displayed on a scene picture according to an exemplary embodiment of the present application: as shown in fig. 4, a bullet screen element 410 is displayed on the scene picture 400; the bullet screen element contains the target interactive content and moves at a certain speed from one side of the display interface to the other. In one possible implementation manner, the bullet screen element is displayed in a preset area of the scene picture, the preset area being preset by the application program or set by the user.
In one possible implementation manner, the moving speed and moving direction of the bullet screen element on the scene picture, as well as attributes such as the font, color, and size of the target interactive content it contains, are all settable.
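A minimal sketch of such a scrolling bullet screen element with the settable attributes listed above; the stepping logic and default values are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class BulletScreenElement:
    text: str           # target interactive content
    x: float            # current horizontal position, pixels
    speed: float = 120  # pixels per second, user-settable
    direction: int = -1 # -1: right-to-left, +1: left-to-right
    font: str = "sans-serif"
    color: str = "#FFFFFF"
    size: int = 16

    def step(self, dt: float):
        # Advance across the scene picture each frame.
        self.x += self.direction * self.speed * dt

elem = BulletScreenElement(text="nice play!", x=1280.0)
elem.step(1 / 60)  # advance one frame at 60 fps
```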
To sum up, according to the information display method in a virtual scene provided in the embodiments of the present application, a scene picture corresponding to the virtual scene is displayed, and after interaction information containing target interactive content sent by the first virtual object is acquired, a bullet screen element containing the target interactive content is displayed on the scene picture. While the interaction information is displayed, the user can read the interactive content directly in the scene picture, which reduces switching of scene pictures, improves the efficiency of information interaction in the virtual scene, and reduces the waste of terminal resources.
Referring to fig. 5, a flowchart of an information presentation method in a virtual scene according to an exemplary embodiment of the present application is shown. The information display method in the virtual scene may be interactively executed by a terminal and a server, where the terminal may be the terminal shown in fig. 1, and the server may be the server shown in fig. 1.
As shown in fig. 5, the information presentation method in the virtual scene includes the following steps:
In one possible implementation manner, the scene picture is a virtual scene running picture, that is, a picture of the virtual scene observed from the perspective of the second virtual object while the virtual scene is running; the second virtual object is the virtual object controlled by the terminal displaying the scene picture.
In one possible implementation, the second virtual object is the same virtual object as the first virtual object. That is, the scene picture may be a picture of the virtual scene observed from the perspective of the virtual object that sent the first interaction information, or a picture observed from the perspective of the virtual object that received it.
The embodiments of the present application describe the information display method provided herein taking as an example the case where the scene picture corresponding to the virtual scene is a picture of the virtual scene observed from the perspective of the virtual object receiving the first interaction information.
In a possible implementation manner, the second virtual object is any virtual object in the same virtual scene as the first virtual object, or the second virtual object observes the virtual scene from a God's-eye view; for example, a spectator in the game may adopt the perspective of any virtual object in the virtual scene, or view all virtual objects in the virtual scene globally.
In one possible implementation, the virtual scene running picture is the scene picture in the interface shown in fig. 5, in which the second virtual object can team up with other virtual objects to interact with virtual objects of other teams in the virtual scene, or observe the virtual objects in the virtual scene from a bystander perspective.
Or,
the scene picture is a virtual scene end picture, which is a picture displayed when the running of the virtual scene ends. For example, the virtual scene end picture is the settlement screen entered after the second virtual object finishes a round. Referring to fig. 6, which shows a schematic diagram of the virtual scene end picture according to an exemplary embodiment of the present application: the virtual scene end picture 600 displays the win/loss result of the camp corresponding to the second virtual object in the current round. In a possible implementation manner, the virtual scene end picture also displays the avatars of the virtual objects in the team corresponding to the second virtual object, the nicknames corresponding to the virtual objects, the rewards obtained, and the like (not shown in the figure).
The embodiment of the present application takes the second virtual object as an example of any one of the virtual objects in the virtual scene, and explains the information display method in the virtual scene provided by the present application.
In a possible implementation manner, in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp, first interaction information is obtained, where the first interaction information is interaction information sent by the first virtual object in the virtual scene.
In a possible implementation manner, when the scene picture is a virtual scene running picture, the first interaction information refers to interaction information sent by a first virtual object in the same camp as the second virtual object, either to the second virtual object or to all virtual objects in the camp where the second virtual object is located. That is, the second virtual object only obtains interaction information sent by virtual objects in its own camp, ensuring the accuracy of information reception.
In a possible implementation manner, in response to the scene picture being a virtual scene running picture, in addition to the interaction information sent by virtual objects in the same camp as the second virtual object, interaction information sent by a target virtual object is also obtained. The target virtual object is any virtual object in a camp different from that of the second virtual object, or any virtual object determined in a specific manner; for example, a virtual object speaking on the world channel, or a virtual object having a friend relationship with the second virtual object. That is, in response to the scene picture being a virtual scene running picture, the first virtual object is at least one of a virtual object in the same camp as the second virtual object and the target virtual object.
In one possible implementation manner, the first interaction information is obtained in response to the scene picture being a virtual scene end picture and the first virtual object being a virtual object in the winning camp in the virtual scene.
In a possible implementation manner, when the first virtual object is a virtual object of the winning camp, the settlement interface displayed on the terminal of the first virtual object includes an interaction-information sending control; the user corresponding to the first virtual object can send the first interaction information through this control, and correspondingly the second virtual object can obtain the first interaction information.
In a possible implementation manner, in response to the scene picture being a virtual scene end picture and the first virtual object being a virtual object in the losing camp in the virtual scene, the interactive control is displayed on the virtual scene end picture when the first user account corresponding to the first virtual object has unlocked the bullet screen interaction right. For example, the first user account satisfies a preset interaction-information sending condition by obtaining an interaction-information sending prop, by paying to unlock the bullet screen interaction right, or the like.
That is, when the scene picture is the virtual scene end picture: on terminals corresponding to virtual objects of the winning camp, the interaction-information sending control is displayed, or is in an operable state; among virtual objects of the losing camp, the control is displayed, or is operable, on terminals that have unlocked the bullet screen interaction right; and among virtual objects of the losing camp, on terminals that have not unlocked the bullet screen interaction right, the control is not displayed or is in an inoperable state.
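The visibility rule above can be summarized in a small predicate; the function and parameter names are illustrative assumptions introduced here.

```python
def send_control_enabled(is_winning_camp: bool,
                         barrage_right_unlocked: bool) -> bool:
    """Whether the interaction-information sending control is shown/operable."""
    if is_winning_camp:
        return True                # winners always get the control
    return barrage_right_unlocked  # losers need the unlocked right
```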
In a possible implementation manner, the second virtual object and the first virtual object are in the same camp, and the camp of the first virtual object is the winning camp; in this case, the first interaction information is interaction information that the first virtual object sends to any virtual object in the virtual scene.
In a possible implementation manner, the second virtual object and the first virtual object are in different camps, and the camp of the first virtual object is the winning camp; in this case, the first interaction information is interaction information that the first virtual object sends to any virtual object in the camp where the second virtual object is located.
In one possible implementation, in response to the scene picture being a virtual scene end picture and the second virtual object being a virtual object in the winning camp in the virtual scene, an interactive control is displayed on the virtual scene end picture;
responding to the triggering operation of the interactive control, and acquiring input second interactive information;
and sending the second interaction information to the server so that the server can send the second interaction information to the terminals corresponding to the virtual objects in the virtual scene.
In a possible implementation manner, the interactive control is a control for opening a second-interaction-information input interface. In response to the terminal receiving a touch operation on the interactive control, the input interface is displayed on the virtual scene end picture. Referring to fig. 7, which shows an interface diagram of the virtual scene end picture according to an exemplary embodiment of the present application: an interactive control 710 is displayed on the virtual scene end picture 700; in response to a touch operation on the interactive control 710, the interaction-information input interface, which includes an interaction-information sending control 720, is displayed on the end picture; in response to another touch operation on the interactive control 710, the input interface is hidden.
In a possible implementation manner, the interaction information content of the second interaction information includes at least one of voice information and text information. In response to the user corresponding to the second virtual object completing input of the second interaction information, the second terminal obtains the second interaction information and sends it to the server, so that the server forwards it to the terminals corresponding to the virtual objects in the virtual scene, where it is processed and displayed. Referring to fig. 8, which shows a schematic diagram of a terminal sending interaction information according to an exemplary embodiment of the present application: terminal 810 corresponds to the first virtual object and terminal 820 to the second virtual object; when the second virtual object needs to send interaction information to the first virtual object, terminal 820 first sends the interaction information to the server 830, which parses it and then forwards it to terminal 810.
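A toy sketch of the relay flow of fig. 8, in which the sending terminal posts the interaction information to the server and the server forwards it to the other terminals in the virtual scene; all classes here are illustrative stand-ins, not the patent's components.

```python
class Server:
    def __init__(self):
        self.terminals = []  # terminals in the same virtual scene

    def forward(self, info: dict, sender):
        # Forward the interaction information to every other terminal.
        for t in self.terminals:
            if t is not sender:
                t.receive(info)

class Terminal:
    def __init__(self, server: Server):
        self.server = server
        server.terminals.append(self)

    def send(self, info: dict):
        self.server.forward(info, sender=self)

    def receive(self, info: dict):
        print("display as bullet screen:", info["text"])

server = Server()
t810, t820 = Terminal(server), Terminal(server)
t820.send({"text": "well played"})  # relayed to t810 via the server
```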
In one possible implementation, in response to the content of the second interaction information being voice information, the second terminal performs voice recognition on the voice information to obtain the text it contains, and sends the content of the second interaction information to the server in text form.
That is, the voice recognition process for the voice information may be performed by the terminal that transmitted the voice information, or may be performed by the terminal that received the voice information.
In response to the terminal displaying the scene picture having enabled the bullet screen function and the first interaction information being voice information, voice recognition is performed on the first interaction information to obtain the target interaction content.
In one possible implementation, the terminal performs voice recognition on the received first interaction information by invoking a voice recognition model to obtain the corresponding target interaction content, where the voice recognition model is obtained by training a neural network model on a voice corpus and the text corpus corresponding to that voice corpus.
In one possible implementation, the speech corpus includes speech sample data in different languages and, correspondingly, the text corpus includes text sample data in those languages; for example, the speech sample data includes Chinese speech data, English speech data, German speech data, and the like, and the text corpus correspondingly includes Chinese text data, English text data, German text data, and the like. Alternatively, the speech corpus includes speech sample data of the same language from different regions and, correspondingly, the text corpus includes text sample data in a specific form of that language; for example, the speech corpus includes southeastern dialect speech data, northeastern dialect speech data, Cantonese speech data, and the like, and the text corpus correspondingly includes Mandarin text data.
In one possible implementation, when the speech recognition model is trained, the speech sample data in the speech corpus corresponds one-to-one to the text sample data in the text corpus; that is, the Chinese speech data corresponds to the Chinese text data, the English speech data to the English text data, and the German speech data to the German text data. A speech recognition model trained this way converts speech in each language into text in the same language.
Alternatively, in another possible implementation, one piece of text sample data in the text corpus corresponds to speech sample data in multiple languages; for example, the Chinese speech data, English speech data, German speech data, and the like all correspond to the Chinese text data. A model trained this way converts voice information in different languages into text information in a single specific language; that is, it also translates multilingual voice information. Which specific language the multilingual voice information is mapped to is settable.
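The difference between the two training schemes comes down to how audio samples are paired with text labels. The file names and label strings below are hypothetical placeholders, used only to show the shape of each pairing.

```python
# One-to-one pairing: each language's audio is labeled with text in the
# same language, so the trained model transcribes without translating.
one_to_one_pairs = [
    ("chinese_clip.wav", "chinese transcript"),
    ("english_clip.wav", "english transcript"),
    ("german_clip.wav",  "german transcript"),
]

# Many-to-one pairing: audio in several languages is labeled with text in
# one settable target language, so recognition doubles as translation.
target_language = "zh"   # configurable, per the embodiment above
many_to_one_pairs = [
    ("chinese_clip.wav", "target-language transcript"),
    ("english_clip.wav", "target-language transcript"),
    ("german_clip.wav",  "target-language transcript"),
]
```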
For the display of bullet screen elements on the scene picture, refer to bullet screen element 410 in fig. 4 and bullet screen element 610 in fig. 6: fig. 4 shows the bullet screen element when the scene picture is a virtual scene running picture, and fig. 6 shows it when the scene picture is a virtual scene end picture.
In a possible implementation manner, a scene picture includes a bullet screen switch control;
In response to an opening operation performed on the bullet screen switch control, it is determined that the terminal displaying the scene picture has enabled the bullet screen function. Please refer to fig. 9, which illustrates an interface schematic diagram of a scene picture according to an exemplary embodiment of the present application. As shown in fig. 9, a scene picture 900 includes a bullet screen switch control 910 for toggling the bullet screen display function: when no bullet screen element is displayed on the scene picture and an opening operation on the control 910 is received, bullet screen elements are displayed on the scene picture; when bullet screen elements are displayed on the scene picture and a closing operation on the control 910 is received, the display of bullet screen elements on the scene picture is turned off.
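The switch behaves as a plain toggle; a minimal sketch follows, with the class and method names (BulletScreenSwitch, on_touch) chosen here for illustration.

```python
class BulletScreenSwitch:
    """Toggles whether bullet screen elements are drawn on the scene picture."""

    def __init__(self, enabled: bool = False) -> None:
        self.enabled = enabled

    def on_touch(self) -> bool:
        # An opening operation enables display of bullet screen elements;
        # a closing operation disables it.
        self.enabled = not self.enabled
        return self.enabled

switch = BulletScreenSwitch()
switch.on_touch()   # bullet screen function opened; elements are shown
switch.on_touch()   # closed again; elements are hidden
```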
In one possible implementation, in order to distinguish interaction information sent by different virtual objects, in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp, the bullet screen element is displayed on the scene picture with a first visual effect;
and in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in different camps, the bullet screen element is displayed on the scene picture with a second visual effect.
The first visual effect is different from the second visual effect.
In one possible implementation, the first visual effect and the second visual effect are different expressions of the same attribute, where the attribute includes at least one of the color, font, size, and background color of the text. For example, the first visual effect renders the text of the target interactive content in the bullet screen element in red while the second renders it in blue; or the first visual effect uses a regular-script font for the target interactive content while the second uses a boldface font, and so on. Alternatively, an identifier distinguishing friend from foe is added to the bullet screen element: for example, a friend identifier is added to bullet screen elements corresponding to interaction information sent by virtual objects in the same camp as the second virtual object, and an enemy identifier is added to those corresponding to interaction information sent by virtual objects in a different camp.
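A minimal sketch of this camp-based styling follows; the attribute values (red/blue, the font names) are only the examples given above, and the function and type names are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class BulletStyle:
    color: str
    font: str
    badge: str   # friend/enemy identifier attached to the bullet element

def style_for(sender_camp: int, viewer_camp: int) -> BulletStyle:
    if sender_camp == viewer_camp:
        # First visual effect: e.g. red regular-script text, friend badge.
        return BulletStyle(color="red", font="regular-script", badge="friend")
    # Second visual effect: e.g. blue boldface text, enemy badge.
    return BulletStyle(color="blue", font="boldface", badge="enemy")

print(style_for(sender_camp=1, viewer_camp=1))   # first visual effect
print(style_for(sender_camp=2, viewer_camp=1))   # second visual effect
```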
In one possible implementation, in response to the terminal displaying the scene picture not having enabled the bullet screen function and the first interaction information being voice information, the first interaction information is played through the terminal corresponding to the second virtual object.
In one possible implementation, when the terminal displaying the scene picture has not enabled the bullet screen function and the first interaction information is voice information, the terminal corresponding to the second virtual object either plays the voice information automatically upon receiving the first interaction information, or plays it after reception in response to a play operation performed by the user on the voice information.
In one possible implementation, in response to the terminal displaying the scene picture not having enabled the bullet screen function, the first interaction information is displayed in a default interaction information display manner.
In one possible implementation manner, the default interactive information display manner is to display first interactive information in a dialog box form, where the first interactive information is at least one of voice information and text information.
In one possible implementation, the scene picture includes a dialog box control used to expand and hide a dialog box. While the dialog box is hidden, in response to the terminal receiving a touch operation on the dialog box control, the dialog box changes from the hidden state to the expanded state and displays the first interaction information sent by other terminals; while the dialog box is expanded, in response to the terminal receiving a touch operation on the dialog box control, it changes from the expanded state back to the hidden state. Please refer to fig. 10, which illustrates displaying first interaction information in dialog box form according to an exemplary embodiment of the present application: part A of fig. 10 shows the display interface with the dialog box hidden, and part B shows it expanded. As shown in fig. 10, a dialog box control 1010 is included in a scene picture 1000. In part A the dialog box is hidden; after the terminal receives a touch operation on the control 1010, the dialog box expands, changing the display interface to the state of part B. In the expanded state, the user can read interaction information sent by other terminals through the dialog box, and the user corresponding to the second virtual object can also send second interaction information, which may be text information or voice information, to other terminals by entering it in the dialog box. Upon receiving another touch operation on the dialog box control 1010, the display interface returns from the expanded state of part B to the hidden state of part A.
In one possible implementation, the method further includes:
in response to receiving a screen recording instruction, recording a screen of a scene picture to obtain a screen recording file;
and correspondingly storing the bullet screen elements displayed on the scene picture in the screen recording process of the screen recording file.
In a possible implementation manner, a scene picture includes a recording control for storing the content of a terminal display interface;
the above process may be implemented as:
and responding to the received touch operation executed on the recording control, and storing the display interface content in the preset time period in the terminal display interface, wherein the display interface content comprises a scene picture and a bullet screen element displayed on the scene picture. Referring to fig. 11, which illustrates a schematic diagram of a scene screen shown in an exemplary embodiment of the present application, as shown in fig. 11, a recording control 1110 for saving display interface content is included in the scene screen 1100, and when a touch operation by a user based on the recording control 1110 is received, the display interface content in a preset time period from the reception of the touch operation in a display interface of a terminal is saved, where in one possible implementation, the preset time period is preset by an application program or is set by the user himself. Assuming that the preset time period is 30 seconds, after receiving the touch operation with the saving control, saving the content of the display interface in the next 30 seconds.
In another possible implementation, in response to the second virtual object meeting a preset condition, the display interface content within a preset time period is saved, where the display interface content includes the bullet screen content and the scene picture.
That is, when the second virtual object triggers a preset condition for saving the display interface content, the display interface content within a preset time period is saved; for example, the preset condition may be that the second virtual object defeats opponents five times in a row, or that it captures the opposing side's base, and so on.
In another possible implementation, in response to the amount of bullet screen content within a preset time period reaching a preset threshold, the display interface content within that time period is saved. The preset threshold may be preset by the application or set by the user.
It should be noted that the conditions for saving the display interface content within a preset time period may be selected and set before the virtual object controlled by the user enters the virtual scene, and the user may select at least one of the above conditions as the trigger condition for saving.
In one possible implementation, correspondingly saving the bullet screen elements displayed on the scene picture during recording of the screen recording file includes:
saving the bullet screen content of the bullet screen elements displayed on the scene picture, together with the appearance time point of each bullet screen element.
In one possible implementation, the moment the screen recording instruction is received is used as the initial timing point, and the appearance time point of each bullet screen element is recorded relative to it; for example, if the user triggers saving of the display interface content at the 3rd minute after the game starts, that moment becomes the initial timing point against which the appearance time and content of each bullet screen element are saved.
In response to an instruction to play the screen recording file, the picture in the screen recording file is played;
and during playback of the screen recording file, the bullet screen elements corresponding to the screen recording file are displayed on the scene picture in the file.
In a possible implementation manner, in the playing process of the screen recording file, the bullet screen elements corresponding to the screen recording file are sequentially displayed on a scene picture in the screen recording file according to the occurrence time of each stored bullet screen element.
In one possible implementation, each bullet screen element is saved as the difference between its appearance time and the recording start time, the target interactive content of the element, and its playing position. The data for the elements are stored using a queue data structure, ordered by that time difference from smallest to largest, forming a data queue. Please refer to fig. 12, which shows a schematic diagram of bullet screen elements during playback of a screen recording file according to an exemplary embodiment of the present application. As shown in fig. 12, during playback the data for each bullet screen element is taken from the head of the data queue, so that each element is displayed at its corresponding position at the corresponding moment of the scene picture. As a result, there is no need to traverse the data of every bullet screen element; only the element at the head of the queue needs to be checked, which saves computing resources on the terminal. A sketch of this queue follows.
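A minimal sketch of the queue, assuming in-memory tuples; the function names and the (offset, content, position) layout are illustrative, not the patent's storage format.

```python
from collections import deque

def record_bullets(events, record_start):
    """events: iterable of (absolute_time, content, position) tuples.
    Each bullet is stored as its offset from the recording start point,
    sorted ascending, so playback only ever inspects the queue head."""
    rows = sorted((t - record_start, content, pos) for t, content, pos in events)
    return deque(rows)

def playback_step(queue, elapsed, show):
    """Called once per rendered frame with the elapsed playback time.
    Pops and displays every bullet whose offset has been reached; the
    full queue is never traversed."""
    while queue and queue[0][0] <= elapsed:
        offset, content, pos = queue.popleft()
        show(content, pos)

q = record_bullets([(185.0, "first blood!", (0.2, 0.1)),
                    (190.5, "nice!", (0.5, 0.1))], record_start=180.0)
playback_step(q, elapsed=6.0, show=lambda c, p: print(c, p))  # shows "first blood!"
```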
Referring to fig. 13, a flowchart of a recording and playing back process of display interface content according to an exemplary embodiment of the present application is shown, where as shown in fig. 13, the recording and playing back process of display interface content includes:
and S1301, displaying a scene picture corresponding to the virtual scene.
And S1302, judging whether to trigger the content record of the display interface, if so, executing S1303, and if not, returning to S1301.
And S1303, saving the current scene picture and the bullet screen element.
And S1304, judging whether the content playback of the display interface is triggered, if so, executing S1305, otherwise, returning to S1303.
S1305, playing the scene picture and the bullet screen elements in sequence.
S1306, the first bullet screen element is played completely.
S1307, determining whether there is a next bullet screen element, if yes, executing S1308, otherwise, executing S1309.
S1308, continue to play the next bullet screen element.
S1309, the playback is stopped.
In one possible implementation, in response to receiving a skip operation based on a bullet screen element, the bullet screen elements corresponding to the current scene picture are replaced with the bullet screen elements corresponding to the scene picture n frames ahead, where n is a positive integer, as sketched below.
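A compact sketch of that skip, assuming bullets are indexed by frame; the dictionary layout is an assumption made for illustration.

```python
def skip_bullets(bullets_by_frame: dict, current_frame: int, n: int) -> list:
    """Replace the current frame's bullets with those of the frame n ahead."""
    return bullets_by_frame.get(current_frame + n, [])

bullets = {10: ["gg"], 13: ["nice shot"]}
print(skip_bullets(bullets, current_frame=10, n=3))   # -> ['nice shot']
```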
To sum up, according to the information display method in the virtual scene provided in the embodiment of the present application, by displaying the scene picture corresponding to the virtual scene, after the interaction information including the target interaction content sent by the first virtual object is acquired, the bullet screen element including the target interaction content is displayed on the scene picture. In the process of displaying the interactive information, the user can directly know the interactive content in the scene picture, so that the switching of the scene picture is reduced, the information interaction efficiency in the virtual scene is improved, and the waste of terminal resources is reduced.
Taking the virtual environment as a game environment, and taking as an example a first virtual object and a second virtual object in the same camp, please refer to fig. 14, which shows a flowchart of an information presentation method in a virtual scene according to an exemplary embodiment of the present application. The method may be executed by a terminal, where the terminal may be the terminal shown in fig. 1. As shown in fig. 14, the information presentation method in the virtual scene includes the following steps:
S1401, the first virtual object enters a game and sends first interaction information to the second virtual object, where the first interaction information is text information.
S1402 determines whether the second virtual object is in the bullet screen function open state, if so, executes S1403, otherwise, executes S1404.
And S1403, displaying the first interaction information in a bullet screen form in a second terminal interface, wherein the second terminal is a terminal corresponding to the second virtual object.
And S1404, displaying the first interaction information in a dialog box of the second terminal.
S1405, the first virtual object sends second interactive information to the second virtual object, where the second interactive information is voice information.
S1406, determining whether the second interactive information is converted into text information; if so, executing S1407, otherwise executing S1408.
And S1407, displaying the text information corresponding to the second interactive information in the second terminal interface in a bullet screen mode.
S1408, the second terminal plays the second interactive information.
And S1409, judging whether the game is finished, if so, executing S1410, otherwise, continuing to S1407 or S1408.
And S1410, entering the global bullet screen stage, in which the bullet screens of both friendly and enemy sides are visible.
S1411, determining whether the second virtual object can use the bullet screen; if so, executing S1412.
In one possible case, when the second virtual object belongs to the winning camp of the current match, or belongs to the defeated camp of the current match but has unlocked the bullet screen interaction permission, the second virtual object can use the bullet screen; otherwise, it cannot (a sketch of this check follows S1412 below).
And S1412, the second virtual object sends a bullet screen, which is visible globally.
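A minimal sketch of the permission check in S1411, assuming a per-object camp id and an unlock flag; both field names are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    camp: int
    barrage_permission_unlocked: bool = False

def can_use_bullet_screen(obj: VirtualObject, winning_camp: int) -> bool:
    # The winning camp may always send; the defeated camp may send only
    # after unlocking the bullet screen interaction permission.
    return obj.camp == winning_camp or obj.barrage_permission_unlocked

winner = VirtualObject(camp=1)
loser = VirtualObject(camp=2, barrage_permission_unlocked=True)
assert can_use_bullet_screen(winner, winning_camp=1)
assert can_use_bullet_screen(loser, winning_camp=1)
```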
To sum up, according to the information display method in the virtual scene provided in the embodiment of the present application, by displaying the scene picture corresponding to the virtual scene, after the interaction information including the target interaction content sent by the first virtual object is acquired, the bullet screen element including the target interaction content is displayed on the scene picture. In the process of displaying the interactive information, the user can directly know the interactive content in the scene picture, so that the switching of the scene picture is reduced, the information interaction efficiency in the virtual scene is improved, and the waste of terminal resources is reduced.
Referring to fig. 15, a flowchart of an information presentation method in a virtual scene, which may be executed by a terminal, according to an exemplary embodiment of the present application, is shown, where the terminal may be the terminal shown in fig. 1. As shown in fig. 15, the information presentation method in the virtual scene includes the following steps:
step 1510, displaying a scene picture corresponding to the virtual scene, where the scene picture includes a first virtual object, and the first virtual object is a virtual object controlled by a terminal displaying the scene picture.
To sum up, according to the information display method in the virtual scene provided by the embodiment of the present application, by displaying the scene picture corresponding to the virtual scene, after the interaction information including the target interaction content sent by the first virtual object is acquired, the bullet screen element including the target interaction content is displayed on the scene picture. In the process of displaying the interactive information, a user can directly know the interactive content in the scene picture, so that the switching of the scene picture is reduced, the information interaction efficiency in the virtual scene is improved, and the waste of terminal resources is reduced.
Referring to fig. 16, a block diagram of an information display apparatus in a virtual scene according to an exemplary embodiment of the present application is shown. The information display method in the virtual scene may be performed interactively by a terminal and a server, where the terminal may be the terminal shown in fig. 1 and the server may be the server shown in fig. 1.
As shown in fig. 16, the information presentation apparatus in the virtual scene includes:
a first display module 1610, configured to display a scene picture corresponding to a virtual scene, where the virtual scene includes a plurality of virtual objects;
a first obtaining module 1620, configured to obtain first interaction information sent by a first virtual object, where the first interaction information includes target interaction content; the first virtual object is any one of the plurality of virtual objects;
the second display module 1630 is configured to display a bullet screen element on the scene picture, where the bullet screen element includes the target interactive content.
In one possible implementation, the scene picture is a virtual scene running picture, and the virtual scene running picture is a picture of the virtual scene observed from the view angle of the second virtual object during the running of the virtual scene; the second virtual object is the virtual object controlled by the terminal displaying the scene picture;
or,
the scene picture is a virtual scene ending picture, and the virtual scene ending picture is a picture displayed when the operation of the virtual scene is ended.
In a possible implementation manner, the first obtaining module 1620 is configured to obtain the first interaction information in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp.
In a possible implementation manner, the second display module 1630 is configured to, in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp, display the bullet screen element with the first visual effect on the scene picture;
and, in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in different camps, display the bullet screen element with a second visual effect on the scene picture;
the first visual effect is different from the second visual effect.
In a possible implementation manner, the first obtaining module 1620 is configured to obtain the first interaction information in response to the scene picture being a virtual scene end picture and the first virtual object being a virtual object in the winning camp in the virtual scene.
In one possible implementation, the apparatus further includes:
the third display module is used for responding to the scene picture being a virtual scene ending picture, and the second virtual object being a virtual object in a successful marketing in the virtual scene, and displaying the interactive control on the virtual scene ending picture;
the second acquisition module is used for responding to the trigger operation of the interactive control and acquiring input second interactive information;
and the sending module is used for sending the second interaction information to the server so that the server can send the second interaction information to the terminals corresponding to the virtual objects in the virtual scene.
In one possible implementation, the second display module 1630 is configured to display the bullet screen element on the scene picture in response to the terminal displaying the scene picture having enabled the bullet screen function.
In a possible implementation manner, the scene picture includes a bullet screen switch control;
in response to the terminal displaying the scene picture having turned on the pop-up screen function, before displaying the pop-up screen element on the scene picture, the apparatus further includes:
and the determining module is used for responding to the opening operation executed on the bullet screen switch control and determining that the bullet screen function is opened by the terminal for displaying the scene picture.
In one possible implementation, the apparatus further includes:
and the third acquisition module is used for responding to the fact that the bullet screen function is started by the terminal for displaying the scene picture, and the first interaction information is voice information, and performing voice recognition on the first interaction information to acquire target interaction content.
In one possible implementation, the apparatus further includes:
and the first playing module is used for responding to the situation that the terminal for displaying the scene picture does not start the barrage function, the first interaction information is voice information, and the first interaction information is played through the terminal corresponding to the first virtual object.
In one possible implementation, the apparatus further includes:
the screen recording module is used for recording a screen of a scene picture in response to receiving a screen recording instruction to obtain a screen recording file;
the saving module is used for correspondingly saving the barrage elements displayed on the scene picture in the screen recording process of the screen recording file;
the second playing module is used for responding to an instruction of playing the screen recording file and playing the picture in the screen recording file;
and the fourth display module is used for displaying the bullet screen elements corresponding to the screen recording file on a scene picture in the screen recording file playing process.
To sum up, the information display device in the virtual scene provided in the embodiment of the present application displays, by displaying the scene picture corresponding to the virtual scene, after acquiring the interaction information that includes the target interaction content and is sent by the first virtual object, the bullet screen element that includes the target interaction content on the scene picture. In the process of displaying the interactive information, the user can directly know the interactive content in the scene picture, so that the switching of the scene picture is reduced, the information interaction efficiency in the virtual scene is improved, and the waste of terminal resources is reduced.
Fig. 17 is a block diagram illustrating the structure of a computer device 1700 according to an example embodiment. The computer device 1700 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Computer device 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 1700 includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the information display method in the virtual scene provided by the method embodiments of the present application.
In some embodiments, computer device 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1704, a display screen 1705, a camera 1706, an audio circuit 1707, and a power supply 1708.
The peripheral interface 1703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, memory 1702, and peripheral interface 1703 are integrated on the same chip or board; in some other embodiments, any one or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The Radio Frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals. The rf circuit 1704 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1705 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above the surface of the display screen 1705. The touch signal may be input as a control signal to the processor 1701 for processing. At this point, the display 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1705 may be one, providing the front panel of computer device 1700; in other embodiments, the display screens 1705 may be at least two, each disposed on a different surface of the computer device 1700 or in a folded design; in some embodiments, display 1705 may be a flexible display, disposed on a curved surface or on a folded surface of computer device 1700. Even further, the display screen 1705 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 1705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash lamp can be a single-color temperature flash lamp or a double-color temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and can be used for light compensation under different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, inputting the electric signals into the processor 1701 for processing, or inputting the electric signals into the radio frequency circuit 1704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location on the computer device 1700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1707 may also include a headphone jack.
In some embodiments, computer device 1700 also includes one or more sensors 1709. The one or more sensors 1709 include, but are not limited to: acceleration sensor 1710, gyro sensor 1711, pressure sensor 1712, optical sensor 1713, and proximity sensor 1714.
The acceleration sensor 1710 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the computer apparatus 1700. For example, the acceleration sensor 1710 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 1701 may control the display screen 1705 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1710. The acceleration sensor 1710 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1711 may detect a body direction and a rotation angle of the computer apparatus 1700, and the gyro sensor 1711 may cooperate with the acceleration sensor 1710 to acquire a 3D motion of the user on the computer apparatus 1700. The processor 1701 may perform the following functions based on the data collected by the gyro sensor 1711: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1712 may be disposed on the side bezel of computer device 1700 and/or beneath the display screen 1705. When the pressure sensor 1712 is disposed on the side frame of the computer device 1700, a user's grip signal on the computer device 1700 can be detected, and the processor 1701 performs left/right-hand recognition or shortcut operations based on the grip signal collected by the pressure sensor 1712. When the pressure sensor 1712 is disposed below the display screen 1705, the processor 1701 controls the operability controls on the UI interface according to the user's pressure operation on the display screen 1705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1713 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the display screen 1705 based on the ambient light intensity collected by the optical sensor 1713. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the display screen 1705 is reduced. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1713.
A proximity sensor 1714, also known as a distance sensor, is typically disposed on the front panel of the computer device 1700. Proximity sensor 1714 is used to capture the distance between the user and the front of computer device 1700. In one embodiment, the processor 1701 controls the display screen 1705 to switch from a bright screen state to a dark screen state when the proximity sensor 1714 detects that the distance between the user and the front surface of the computer device 1700 is gradually reduced; when the proximity sensor 1714 detects that the distance between the user and the front of the computer device 1700 is gradually increased, the processor 1701 controls the display 1705 to switch from the breath-screen state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 17 is not intended to be limiting of the computer device 1700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 18 is a block diagram illustrating the structure of a computer device 1800, according to an example embodiment. The computer device may be implemented as a server in the above-described aspects of the present disclosure. The computer device 1800 includes a Central Processing Unit (CPU) 1801, a system Memory 1804 including a Random Access Memory (RAM) 1802 and a Read-Only Memory (ROM) 1803, and a system bus 1805 connecting the system Memory 1804 and the central processing unit 1801. The computer device 1800 also includes a basic input/output system (I/O system) 1806, which facilitates transfer of information between devices within the computer, and a mass storage device 1807, which stores an operating system 1813, application programs 1814, and other program modules 1815.
The basic input/output system 1806 includes a display 1808 for displaying information and an input device 1809, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1808 and the input device 1809 are coupled to the central processing unit 1801 via an input/output controller 1810 coupled to the system bus 1805. The basic input/output system 1806 may also include an input/output controller 1810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1807 is connected to the central processing unit 1801 through a mass storage controller (not shown) connected to the system bus 1805. The mass storage device 1807 and its associated computer-readable media provide non-volatile storage for the computer device 1800. That is, the mass storage device 1807 may include a computer-readable medium (not shown) such as a hard disk or Compact disk-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 1804 and mass storage device 1807 described above may be collectively referred to as memory.
The computer device 1800 may also operate in accordance with various embodiments of the present disclosure by being connected to remote computers over a network, such as the internet. That is, the computer device 1800 may be connected to the network 1812 through the network interface unit 1811 that is coupled to the system bus 1805, or the network interface unit 1811 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 1801 implements all or part of the steps of the method shown in the embodiment of fig. 3, 5, 14 or 15 by executing the one or more programs.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in embodiments of the disclosure may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The embodiment of the present disclosure further provides a computer-readable storage medium for storing computer software instructions for the above computer device, which includes a program designed to execute the information display method in the virtual scene. For example, the computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method shown in any of the embodiments of fig. 3, 5, or 14 is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (14)
1. An information display method in a virtual scene, the method comprising:
displaying a scene picture corresponding to a virtual scene, wherein the virtual scene comprises a plurality of virtual objects;
receiving first interaction information sent by a first virtual object, wherein the first interaction information comprises target interaction content; the first virtual object is any one of the plurality of virtual objects; the first interactive information is a voice message, and the target interactive content is text information of a specific language obtained by performing voice recognition on the first interactive information;
displaying a bullet screen element on the scene picture, wherein the bullet screen element comprises the target interactive content;
in response to receiving a screen recording instruction, saving the display interface content within a preset time period in the terminal display interface to obtain a screen recording file; the screen recording instruction is an instruction triggered when a saving condition is met; the saving condition includes at least one of: the second virtual object achieving a preset condition, and the number of the bullet screen elements within a preset time period reaching a preset threshold; the second virtual object is the virtual object controlled by the terminal displaying the scene picture; the display interface content comprises the scene picture and the bullet screen elements displayed on the scene picture, and the scene picture and the bullet screen elements are saved separately;
in the recording process of the screen recording file, taking the moment of receiving the screen recording instruction as an initial timing point, and saving, in the form of a data queue, the bullet screen content of the bullet screen elements displayed on the scene picture, the playing position of each bullet screen element, and the appearance time point of each bullet screen element relative to the initial timing point;
responding to the playback of the bullet screen elements, and starting to acquire data corresponding to each bullet screen element from the head of the data queue, so that each bullet screen element is correspondingly displayed at a corresponding position of the scene picture at a corresponding moment;
and in response to the fact that the skipping operation based on the bullet screen elements is received, replacing the bullet screen elements corresponding to the current scene picture with the bullet screen elements corresponding to the next n frames of scene pictures, wherein n is a positive integer.
2. The method of claim 1,
the scene picture is a virtual scene running picture, and the virtual scene running picture is a picture for observing the virtual scene from the visual angle of a second virtual object in the running process of the virtual scene;
or,
the scene picture is a virtual scene ending picture, and the virtual scene ending picture is a picture displayed when the running of the virtual scene ends.
3. The method of claim 2, wherein the receiving the first interaction information sent by the first virtual object comprises:
and acquiring the first interaction information in response to the scene picture being a virtual scene running picture and the first virtual object and the second virtual object being in the same camp.
4. The method of claim 2, wherein the displaying of the bullet screen element on the scene picture comprises:
responding to the scene picture being a virtual scene running picture, and the first virtual object and the second virtual object being in the same camp, and displaying the bullet screen element with a first visual effect on the scene picture;
in response to that the scene picture is a virtual scene running picture and the first virtual object and the second virtual object are in different camps, displaying the bullet screen element with a second visual effect on the scene picture;
the first visual effect is different from the second visual effect.
5. The method of claim 2, wherein the receiving the first interaction information sent by the first virtual object comprises:
the first interaction information is acquired in response to the scene picture being a virtual scene end picture and the first virtual object being a virtual object in the winning camp in the virtual scene.
6. The method of claim 5, further comprising:
in response to the scene picture being a virtual scene end picture and the second virtual object being a virtual object in the winning camp in the virtual scene, showing an interactive control on the virtual scene end picture;
responding to the triggering operation of the interaction control, and acquiring input second interaction information;
and sending the second interaction information to a server so that the server can send the second interaction information to terminals corresponding to all virtual objects in the virtual scene.
7. The method of claim 1, wherein the displaying of the bullet screen element on the scene picture comprises:
and responding to that the terminal for displaying the scene picture opens the bullet screen function, and displaying the bullet screen element on the scene picture.
8. The method according to claim 7, wherein the scene picture contains a bullet screen switch control;
the responding to the terminal which displays the scene picture and opens the barrage function further comprises the following steps before the barrage element is displayed on the scene picture:
and responding to the opening operation executed on the bullet screen switch control, and determining that the terminal displaying the scene picture opens the bullet screen function.
9. The method of claim 1, further comprising:
responding to the situation that the terminal displaying the scene picture does not start the barrage function, and playing the first interactive information through the terminal displaying the scene picture, wherein the first interactive information is voice information.
10. The method of claim 1, further comprising:
responding to an instruction for playing the screen recording file, and playing a picture in the screen recording file;
and in the playing process of the screen recording file, displaying the bullet screen elements corresponding to the screen recording file on the scene picture in the screen recording file.
11. An information display method in a virtual scene, the method comprising:
displaying a scene picture corresponding to a virtual scene, wherein the scene picture comprises a first virtual object, and the first virtual object is a virtual object controlled by a terminal for displaying the scene picture;
responding to the scene picture being a virtual scene running picture, and displaying, in a bullet screen rolling manner on the scene picture, the text content of the interaction information sent by a same-camp object; the same-camp object is a virtual object in the same camp as the first virtual object; the interaction information is a voice message, and the text content is text information of a specific language obtained by performing voice recognition on the interaction information;
in response to the scene picture being a virtual scene ending picture, displaying, in a bullet screen rolling manner on the scene picture, the text content of the interaction information sent by a winning-camp object; the winning-camp object is a virtual object in the camp that wins in the virtual scene;
in response to receiving a screen recording instruction, triggering saving of the display interface content within a preset time period in the terminal display interface to obtain a screen recording file, where the manner of saving the text content displayed in the bullet screen rolling manner comprises: in the recording process of the screen recording file, taking the moment of receiving the screen recording instruction as a starting timing point, and saving, in the form of a data queue, the text contents displayed on the scene picture, the playing position of each text content, and the appearance time point of each text content relative to the starting timing point; the screen recording instruction is an instruction triggered when a saving condition is met; the saving condition includes at least one of: the second virtual object achieving a preset condition, and the number of the bullet screen elements within a preset time period reaching a preset threshold; the second virtual object is the virtual object controlled by the terminal displaying the scene picture; the display interface content comprises the scene picture and the bullet screen elements displayed on the scene picture, and the scene picture and the bullet screen elements are saved separately;
responding to the playback of the text content, and acquiring data corresponding to each text content from the head of the data queue, so as to correspondingly display each text content at a corresponding position of the scene picture at a corresponding moment;
and in response to the fact that the skipping operation based on the text content is received, replacing the text content corresponding to the current scene picture with the text content corresponding to the next n frames of scene pictures, wherein n is a positive integer.
12. An apparatus for presenting information in a virtual scene, the apparatus comprising:
the first display module is used for displaying scene pictures corresponding to a virtual scene, and the virtual scene comprises a plurality of virtual objects;
the first acquisition module is used for receiving first interaction information sent by a first virtual object, wherein the first interaction information comprises target interaction content; the first virtual object is any one of the plurality of virtual objects; the first interactive information is a voice message, and the target interactive content is text information of a specific language obtained by performing voice recognition on the first interactive information;
a second display module, configured to display a bullet screen element on the scene picture, wherein the bullet screen element comprises the target interaction content;
a screen recording module, configured to save, in response to receiving a screen recording instruction, the display interface content within a preset time period from the terminal display interface to obtain a screen recording file; the screen recording instruction is an instruction triggered when a saving condition is met; the saving condition comprises at least one of the following: the second virtual object achieving a preset condition, and the number of bullet screen elements within a preset time period reaching a preset threshold (an illustrative trigger sketch follows claim 14); the second virtual object is a virtual object controlled by the terminal that displays the scene picture; the display interface content comprises the scene picture and the bullet screen elements displayed on the scene picture, and the scene picture and the bullet screen elements are saved separately;
a storage module, configured to, during recording of the screen recording file, take the moment at which the screen recording instruction is received as a start timing point and save, in the form of a data queue, the bullet screen content of each bullet screen element displayed on the scene picture, the playing position of each bullet screen element, and the appearance time point of each bullet screen element relative to the start timing point;
a fourth display module, configured to, in response to playback of the bullet screen elements, acquire, starting from the head of the data queue, the data corresponding to each bullet screen element, so as to display each bullet screen element at its corresponding position on the scene picture at its corresponding moment; and, in response to receiving a skip operation based on the bullet screen elements, replace the bullet screen element corresponding to the current scene picture with the bullet screen element corresponding to the scene picture n frames later, where n is a positive integer.
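The acquisition and display modules of claim 12 imply a pipeline from voice message to scrolling text. The sketch below shows one way that pipeline could look; recognizeSpeech is a stub standing in for whatever speech-recognition service the implementation uses, and the "zh-CN" language code and scroll speed are assumed values.

```typescript
// Hypothetical flow for claim 12's first acquisition module and second
// display module: a voice message is recognized into text of a specified
// language, wrapped in a bullet screen element, and scrolled across the
// scene picture. The stub, language code, and speed are assumptions.

interface VoiceMessage { senderId: string; audio: ArrayBuffer; }
interface BulletElement { content: string; x: number; y: number; speed: number; }

// Stand-in for an unspecified speech-recognition service.
async function recognizeSpeech(audio: ArrayBuffer, lang: string): Promise<string> {
  return `[${lang} transcript of a ${audio.byteLength}-byte voice message]`;
}

async function onFirstInteractionInfo(
  msg: VoiceMessage,
  screenWidth: number,
  laneY: number,
): Promise<BulletElement> {
  // Target interaction content: text information in a specified language.
  const content = await recognizeSpeech(msg.audio, "zh-CN");
  // Spawn the element at the right edge; it scrolls leftward at a fixed speed.
  return { content, x: screenWidth, y: laneY, speed: 120 /* px per second */ };
}
```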
13. A computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the information presentation method in a virtual scene according to any one of claims 1 to 11.
14. A computer-readable storage medium, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium and is loaded and executed by a processor to implement the information presentation method in a virtual scene according to any one of claims 1 to 11.
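Claims 11 and 12 fire the screen recording instruction when a saving condition is met, one branch of which is the number of bullet screen elements within a preset time period reaching a preset threshold. A sliding-window counter is one plausible reading of that branch; the window length and threshold below are assumed values, since the patent only says "preset".

```typescript
// Illustrative trigger for the saving condition: fire the screen recording
// instruction when the count of bullet screen elements inside a sliding
// window reaches a preset threshold. The 5 s window and threshold of 20
// are assumptions.

class BurstTrigger {
  private timestamps: number[] = [];

  constructor(
    private windowMs = 5000, // preset time period (assumed)
    private threshold = 20,  // preset threshold (assumed)
  ) {}

  // Record one displayed element; returns true when recording should start.
  onBulletElement(now: number): boolean {
    this.timestamps.push(now);
    // Evict timestamps that have fallen out of the sliding window.
    while (now - this.timestamps[0] > this.windowMs) {
      this.timestamps.shift();
    }
    return this.timestamps.length >= this.threshold;
  }
}
```

The other branch of the condition, the second virtual object achieving a preset in-game condition, would simply OR into the same decision before the screen recording module is invoked.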
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010468038.7A CN111672099B (en) | 2020-05-28 | 2020-05-28 | Information display method, device, equipment and storage medium in virtual scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111672099A CN111672099A (en) | 2020-09-18 |
CN111672099B (en) | 2023-03-24
Family
ID=72453205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010468038.7A Active CN111672099B (en) | 2020-05-28 | 2020-05-28 | Information display method, device, equipment and storage medium in virtual scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111672099B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112494958B (en) * | 2020-12-18 | 2022-09-23 | 腾讯科技(深圳)有限公司 | Method, system, equipment and medium for converting words by voice |
JP7560202B2 (en) * | 2020-12-18 | 2024-10-02 | テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド | Speech-to-text conversion method, system, device, equipment, and program |
CN112870705B (en) * | 2021-03-18 | 2023-04-14 | 腾讯科技(深圳)有限公司 | Method, device, equipment and medium for displaying game settlement interface |
CN113262481B (en) * | 2021-05-18 | 2024-06-25 | 网易(杭州)网络有限公司 | Interaction method, device, equipment and storage medium in game |
CN113181645A (en) * | 2021-05-28 | 2021-07-30 | 腾讯科技(成都)有限公司 | Special effect display method and device, electronic equipment and storage medium |
KR20220161252A (en) | 2021-05-28 | 2022-12-06 | 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 | Method and apparatus, device, and storage medium for generating special effects in a virtual environment |
CN113408484A (en) * | 2021-07-14 | 2021-09-17 | 广州繁星互娱信息科技有限公司 | Picture display method, device, terminal and storage medium |
CN115297354A (en) * | 2022-07-26 | 2022-11-04 | 深圳市小财报信息科技有限公司 | Interaction method and device based on online exhibition hall and electronic equipment |
CN116983625A (en) * | 2022-09-26 | 2023-11-03 | 腾讯科技(成都)有限公司 | Social scene-based message display method, device, equipment, medium and product |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5571734B2 (en) * | 2012-05-10 | 2014-08-13 | 株式会社 ディー・エヌ・エー | Game system for exchanging game media in a game |
CN105898603A (en) * | 2015-12-15 | 2016-08-24 | 乐视网信息技术(北京)股份有限公司 | Voice danmaku generation method and device |
CN105435453B (en) * | 2015-12-22 | 2019-02-19 | 网易(杭州)网络有限公司 | A kind of barrage information processing method, device and system |
CN106331832B (en) * | 2016-09-14 | 2019-09-13 | 腾讯科技(深圳)有限公司 | Information display method and device |
CN109391848B (en) * | 2017-08-03 | 2021-03-09 | 掌游天下(北京)信息技术股份有限公司 | Interactive advertisement system |
CN107613392B (en) * | 2017-09-22 | 2019-09-27 | Oppo广东移动通信有限公司 | Information processing method, device, terminal device and storage medium |
CN107734373A (en) * | 2017-10-12 | 2018-02-23 | 网易(杭州)网络有限公司 | Barrage sending method and device, storage medium, electronic equipment |
CN108156506A (en) * | 2017-12-26 | 2018-06-12 | 优酷网络技术(北京)有限公司 | The progress adjustment method and device of barrage information |
CN108566565B (en) * | 2018-03-30 | 2021-08-17 | 科大讯飞股份有限公司 | Bullet screen display method and device |
CN109040850B (en) * | 2018-08-06 | 2021-09-03 | 广州方硅信息技术有限公司 | Game live broadcast interaction method and system, electronic equipment and storage medium |
CN109195023A (en) * | 2018-10-15 | 2019-01-11 | 武汉斗鱼网络科技有限公司 | A kind of processing method and processing device identifying barrage information |
CN110237531A (en) * | 2019-07-17 | 2019-09-17 | 网易(杭州)网络有限公司 | Method, apparatus, terminal and the storage medium of game control |
CN110473531B (en) * | 2019-09-05 | 2021-11-09 | 腾讯科技(深圳)有限公司 | Voice recognition method, device, electronic equipment, system and storage medium |
CN111163359B (en) * | 2019-12-31 | 2021-01-05 | 腾讯科技(深圳)有限公司 | Bullet screen generation method and device and computer readable storage medium |
- 2020-05-28 CN: application CN202010468038.7A filed, granted as CN111672099B (active)
Also Published As
Publication number | Publication date |
---|---|
CN111672099A (en) | 2020-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111672099B (en) | Information display method, device, equipment and storage medium in virtual scene | |
CN111462307B (en) | Virtual image display method, device, equipment and storage medium of virtual object | |
CN111589133B (en) | Virtual object control method, device, equipment and storage medium | |
CN111589124B (en) | Virtual object control method, device, terminal and storage medium | |
CN111760278B (en) | Skill control display method, device, equipment and medium | |
CN111589139B (en) | Virtual object display method and device, computer equipment and storage medium | |
CN114339368B (en) | Display method, device and equipment for live event and storage medium | |
CN111596838B (en) | Service processing method and device, computer equipment and computer readable storage medium | |
WO2021244243A1 (en) | Virtual scenario display method and device, terminal, and storage medium | |
CN112569600B (en) | Path information sending method in virtual scene, computer device and storage medium | |
CN113289331B (en) | Display method and device of virtual prop, electronic equipment and storage medium | |
CN110585710A (en) | Interactive property control method, device, terminal and storage medium | |
CN112870705B (en) | Method, device, equipment and medium for displaying game settlement interface | |
CN113117331B (en) | Message sending method, device, terminal and medium in multi-person online battle program | |
CN112221135B (en) | Picture display method, device, equipment and storage medium | |
CN111589144B (en) | Virtual character control method, device, equipment and medium | |
CN112973117A (en) | Interaction method of virtual objects, reward issuing method, device, equipment and medium | |
CN111672108A (en) | Virtual object display method, device, terminal and storage medium | |
CN112891939B (en) | Contact information display method and device, computer equipment and storage medium | |
CN112774195B (en) | Information display method, device, terminal and storage medium | |
CN113813594A (en) | Using method, device, terminal and storage medium of virtual prop | |
CN111651616B (en) | Multimedia resource generation method, device, equipment and medium | |
CN111672101B (en) | Method, device, equipment and storage medium for acquiring virtual prop in virtual scene | |
CN112156454A (en) | Virtual object generation method and device, terminal and readable storage medium | |
CN111589147A (en) | User interface display method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||