CN113490006A - Live broadcast interaction method and equipment based on bullet screen


Info

Publication number: CN113490006A
Application number: CN202110745893.2A
Authority: CN (China)
Prior art keywords: target virtual, target, bullet screen, virtual scene, virtual object
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 李启光
Current Assignee: Beijing Yunsheng Everything Technology Co ltd
Original Assignee: Beijing Yunsheng Everything Technology Co ltd
Application filed by Beijing Yunsheng Everything Technology Co ltd
Priority to CN202110745893.2A
Publication of CN113490006A


Classifications

    All within H (Electricity), H04 (Electric communication technique), H04N (Pictorial communication, e.g. television), H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N21/2187 Live feed (source of audio or video content)
    • H04N21/2355 Processing of additional data involving reformatting operations, e.g. HTML pages
    • H04N21/4312 Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4355 Processing of additional data involving reformatting operations, e.g. HTML pages on a television screen
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/4884 Data services, e.g. news ticker, for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the present application relate to the field of internet technology, and provide a live broadcast interaction method and device based on bullet screens. The method comprises the following steps: after receiving at least one bullet screen from a terminal, a server parses the semantics of at least one target word contained in the at least one bullet screen, and then generates a target virtual scene according to those semantics, so that the terminal can display the target virtual scene in a live broadcast interface. The target virtual scene contains at least one target virtual object, each obtained based on the semantics of its corresponding target word. With this technical solution, during live broadcasting the server can determine target virtual objects according to the semantics of the target words contained in bullet screens, so that the target virtual scene contains target virtual objects of rich and varied forms. In this way, the forms of interaction between viewers and the live virtual scene become richer, and the viewers' experience of watching the live broadcast can be improved.

Description

Live broadcast interaction method and equipment based on bullet screen
Technical Field
The embodiments of the present application relate to the field of internet technology, and in particular to a live broadcast interaction method and device based on bullet screens.
Background
During a live game broadcast, viewers can watch the game and send bullet screens through their terminals. In some common implementations, in order to further improve viewers' sense of participation when watching a live game, the bullet screens sent by viewers can be converted into objects in the live game, so that viewers can interact with the game by sending bullet screens.
In practice, a bullet screen is usually converted into a single kind of game object, so the form of interaction between viewers and the live game is relatively monotonous, and the viewer experience is poor.
Disclosure of Invention
The embodiments of the present application provide a live broadcast interaction method and device based on bullet screens, aiming to solve the problem that the form of interaction between viewers and a game being live broadcast is monotonous.
In a first aspect, an embodiment of the present application provides a live broadcast interaction method based on a bullet screen, where the method includes:
in response to at least one bullet screen from a terminal, acquiring at least one target word contained in the at least one bullet screen;
parsing the semantics of the at least one target word;
generating a target virtual scene according to the semantics of the at least one target word, where the target virtual scene comprises at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the corresponding target words;
and sending the target virtual scene to the terminal, so that the terminal displays the target virtual scene in a live broadcast interface.
In a second aspect, an embodiment of the present application further provides a live broadcast interaction method based on a bullet screen, where the method includes:
displaying a live broadcast interface;
in response to a user's operation of inputting at least one bullet screen, sending the at least one bullet screen to a server;
in response to a target virtual scene sent by the server, displaying the target virtual scene in the live broadcast interface, where the target virtual scene comprises at least one target virtual object,
where the target virtual scene is generated by the server according to the semantics of at least one target word contained in the at least one bullet screen, which the server parses, and the at least one target virtual object is obtained based on the semantics of the corresponding target words.
In a third aspect, an embodiment of the present application further provides a live broadcast interaction device based on a bullet screen, where the device includes:
an acquisition module, configured to acquire, in response to at least one bullet screen from a terminal, at least one target word contained in the at least one bullet screen;
a parsing module, configured to parse the semantics of the at least one target word;
a generating module, configured to generate a target virtual scene according to the semantics of the at least one target word, where the target virtual scene comprises at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the corresponding target words;
and a sending module, configured to send the target virtual scene to the terminal, so that the terminal displays the target virtual scene in the live broadcast interface.
In a possible implementation manner, the generating module is further configured to determine a processing instruction according to the semantics of the at least one target word, and generate a target virtual scene including the at least one target virtual object according to the processing instruction, where the processing instruction includes description information of the at least one target virtual object.
In one possible implementation, the description information of the target virtual object includes at least one of the following: attribute information and display information,
where the attribute information is used to indicate the attribute of the target virtual object in the target virtual scene;
and the display information is used to indicate the display position, activity track, and display duration of the target virtual object in the target virtual scene.
In a possible implementation manner, the acquisition module is further configured to, for any word contained in any bullet screen of the at least one bullet screen, determine that the word is a target word if the word matches a keyword among at least one preconfigured keyword, where each of the at least one keyword corresponds to one target virtual object.
In a fourth aspect, an embodiment of the present application further provides a live broadcast interaction device based on a bullet screen, the device comprising:
a first display module, configured to display a live broadcast interface;
a sending module, configured to send at least one bullet screen to a server in response to a user's operation of inputting the at least one bullet screen;
a second display module, configured to display a target virtual scene in the live broadcast interface in response to the target virtual scene sent by the server, where the target virtual scene comprises at least one target virtual object,
where the target virtual scene is generated by the server according to the semantics of at least one target word contained in the at least one bullet screen, which the server parses, and the at least one target virtual object is obtained based on the semantics of the corresponding target words.
In a fifth aspect, an embodiment of the present application provides a server, where the server includes a processor and a memory, the memory stores instructions or a program, and the instructions or program are executed by the processor to implement the bullet-screen-based live broadcast interaction method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, the memory stores instructions or a program, and the instructions or program are executed by the processor to implement the bullet-screen-based live broadcast interaction method according to the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores instructions or a program, and the instructions or program are executed by a processor to perform the bullet-screen-based live broadcast interaction method according to the first aspect or the second aspect.
In an eighth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes computer program code, and when the computer program code runs on a computer, the computer is enabled to implement the bullet-screen-based live broadcast interaction method according to the first aspect or the second aspect.
After receiving at least one bullet screen from the terminal, the server acquires at least one target word contained in the at least one bullet screen, parses the semantics of the at least one target word, and generates a target virtual scene according to those semantics, so that the terminal displays the target virtual scene to viewers in the live broadcast interface. Specifically, the target virtual scene contains at least one target virtual object, each obtained based on the semantics of its corresponding target word. Therefore, with this technical solution, during live broadcasting the server can determine target virtual objects according to the semantics of the target words contained in bullet screens, so that the target virtual scene contains target virtual objects of rich and varied forms. The forms of interaction between viewers and the live virtual scene thus become richer, and the viewers' live viewing experience can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that those of ordinary skill in the art can derive other figures from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an exemplary architecture of a live broadcast system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an exemplary method of a live broadcast interaction method based on a bullet screen according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another exemplary method of a live broadcast interaction method based on a bullet screen according to an embodiment of the present application;
fig. 4 is an exemplary interface diagram of a live interface 50 provided in an embodiment of the present application;
fig. 5A is an exemplary interface schematic diagram of a live interface showing a target virtual scene provided in an embodiment of the present application;
fig. 5B is another exemplary interface diagram of a live interface showing a target virtual scene provided in an embodiment of the present application;
fig. 5C is a schematic view of a third exemplary interface for displaying a target virtual scene on a live interface according to an embodiment of the present application;
fig. 5D is a fourth exemplary interface diagram of a live interface showing a target virtual scene provided in the embodiment of the present application;
fig. 5E is a schematic diagram of a fifth exemplary interface for displaying a target virtual scene on a live interface according to an embodiment of the present application;
fig. 6 is a signaling interaction diagram of a live broadcast interaction method based on a bullet screen according to an embodiment of the present application;
fig. 7A is a schematic diagram illustrating an exemplary composition of a live interactive device 80 based on a bullet screen according to an embodiment of the present application;
fig. 7B is an exemplary composition diagram of the server 81 provided in the embodiment of the present application;
fig. 8A is a schematic diagram illustrating an exemplary composition of a live interactive device 90 based on a bullet screen according to an embodiment of the present application;
fig. 8B is an exemplary structural diagram of the terminal 91 provided in the embodiment of the present application.
Detailed Description
The following describes technical solutions of the embodiments of the present application with reference to the drawings in the embodiments of the present application.
The terminology used in the following examples of the present application is for the purpose of describing particular embodiments and is not intended to limit the technical solutions of the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The following describes related art related to embodiments of the present application.
1. Live broadcast
The live broadcast in the embodiments of the present application refers to "network live broadcast": based on the internet, a live broadcast platform synchronizes information captured by a source terminal, or information from an application running on the source terminal, to at least one live broadcast terminal for playback. The at least one live broadcast terminal plays the information from the source terminal in real time through a live broadcast interface. Optionally, the live information may include audio and video captured by the source terminal in real time, such as a ball game or a social event, and may also include audio and video played by an APP (application) the source terminal is running, such as a game scene, a movie, or a TV show. Optionally, the technical solution is described below taking a live game broadcast as an example.
Optionally, the live broadcast platform provides an interaction entry for the client of each live broadcast terminal, so that any live broadcast terminal can receive discussion information input by the corresponding viewer. The live broadcast platform then synchronizes the discussion information to the clients of all live broadcast terminals and to the client of the source terminal. Illustratively, any discussion information input by a viewer may be presented in the form of a bullet screen.
2. Bullet screen
A bullet screen is a line of information displayed in a scrolling manner in the live interface; for example, in the horizontal direction it scrolls from the right end of the live interface to the left end and is then no longer displayed. In the embodiments of the present application, after receiving discussion information sent by any live terminal, the live platform can request the server of the live information, such as a game server, to process the corresponding discussion information into information lines displayed in a scrolling manner within the live scene. The live platform then receives the scene picture to be broadcast from the corresponding server and distributes it to each live terminal, so that each live terminal can broadcast the scene picture containing the bullet screen.
3. Virtual scene
A virtual scene is a scene displayed (or provided) by a game application when it runs on a terminal. The virtual scene can be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene. Optionally, the virtual scene may include virtual objects.
A virtual scene is typically rendered by a game server according to game events and then transmitted to the terminal to be presented by the terminal's hardware (such as a screen).
4. Virtual object
A virtual object is a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, a virtual vehicle, and a virtual item. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies part of the space in it.
Referring to fig. 1, fig. 1 is a schematic diagram of an exemplary architecture of a live game system according to an embodiment of the present application. The live game system includes: a server 10, a first terminal 20, and a second terminal 30.
It is to be understood that fig. 1 is only a schematic illustration and does not limit the live game system of the embodiments of the present application. In practical implementation, the live game system may include more or fewer devices. The embodiments of the present application do not limit this.
The server 10 may be one server, a server cluster composed of multiple servers, a cloud computing platform, or a virtualization center. The server 10 may provide computing resources for the method designed in the present technical solution and process all game configuration, parameter-related logic, and the like, including providing computing services for live broadcast operation such as databases, functions, storage, network services, communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), and big data and artificial intelligence platforms. Optionally, the server 10 may process a bullet screen input by a viewer through a terminal, render a scene picture of the virtual scene according to the target words contained in the bullet screen, and then distribute the corresponding scene picture to each terminal.
It should be noted that in the game live system illustrated in fig. 1, the server 10 refers to all service platforms involved in the game live system. In actual implementation, the server 10 may include a live server, a bullet screen processing server, and a game server (not shown in fig. 1). The live broadcast server, the bullet screen processing server and the game server can carry out information interaction. The live broadcast server can be used for interacting with each live broadcast client, acquiring a barrage input by audiences through the live broadcast client, transmitting the barrage to the barrage processing server, and distributing scene pictures to be displayed in a live broadcast mode to each live broadcast client. The bullet screen processing server may be configured to process each bullet screen to obtain a target word included in the bullet screen, and transmit information related to the virtual object corresponding to the target word to the game server. The game server can be used for rendering scene pictures of the virtual scene to be live-displayed based on the relevant information of the virtual object.
The first terminal 20 and the second terminal 30 are devices supporting interface presentation and may be implemented as electronic devices such as mobile phones, tablet computers, game consoles, e-book readers, multimedia players, wearable devices, and PCs (Personal Computers). The device types of the first terminal 20 and the second terminal 30 may be the same or different, and are not limited here. The first terminal 20 and the second terminal 30 may have live broadcast clients installed and display live broadcast interfaces. Optionally, the first terminal 20 may run an anchor client and the second terminal 30 may run a viewer client. A client of the live game may also run on the first terminal 20.
The virtual scenes displayed in the live broadcast interfaces of the first terminal 20 and the second terminal 30 are rendered by the server 10 and sent to each terminal respectively. The scene displayed in the live interface of the first terminal 20 is the same as that of the second terminal 30. Optionally, the scene displayed in the live interface of the first terminal 20 may be rendered by the server 10 according to the running logic of the game in response to operations received by the first terminal 20. The scene displayed in the live interface of the second terminal 30 may be acquired by the server 10 from the first terminal 20, that is, the scene of the game application running on the first terminal 20.
Illustratively, the first terminal 20 is a terminal used by a user 201, who is, for example, a game anchor. The user 201 can play a game using the first terminal 20 and present scene pictures of the virtual scene involved in the game through the live client, sharing the game pictures with other users. The second terminal 30 is a terminal used by a user 301, who is, for example, a viewer. The user 301 can use the second terminal 30 to view the scene interface of the game played by the user 201. In this process, the user 301 may use the second terminal 30 to add a virtual object to the virtual scene of the game by sending a bullet screen, thereby participating in the game being broadcast. Optionally, the virtual object added by sending a bullet screen may be implemented as different objects according to the content of the bullet screen and the virtual scene, for example fire, obstacles, or soldiers. The embodiments of the present application do not limit this.
Alternatively, the server 10 and the first terminal 20, and the server 10 and the second terminal 30 may be directly or indirectly connected through a wired network or a wireless network, which is not limited in the embodiment of the present application.
It is noted that fig. 1 is a schematic diagram at the logical functional level. In a practical implementation, the server 10 may comprise at least one server device entity, and the first terminal 20 and the second terminal 30 may be any two of the several terminal entities connected to the server 10. The embodiments of the present application will not describe this in detail.
The embodiments of the present application disclose a live broadcast interaction method based on bullet screens: viewers can participate in the game being broadcast by sending bullet screens while watching the live game. In an actual implementation scenario, after receiving a bullet screen sent by a viewer, the server can add a virtual object to the virtual scene of the game according to the semantics of the target word contained in the bullet screen, thereby realizing interaction between the viewer and the live broadcast. The semantics of different target words correspond to target virtual objects with different images, which enriches the forms of interaction between viewers and the live virtual scene and can improve viewers' live viewing experience.
The technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application will be described below through descriptions of several exemplary embodiments.
Referring to fig. 2, an exemplary live broadcast interaction method based on a bullet screen is provided in the embodiments of the present application. The present embodiment is illustrated taking an implementation on the server side as an example; the server may be the server 10 illustrated in fig. 1. The method may be implemented by the following steps.
Step S101, responding to at least one bullet screen from the terminal, and acquiring at least one target word contained in the at least one bullet screen.
In this embodiment, the terminal may be a terminal used by a viewer, such as the second terminal 30 illustrated in fig. 1. In conjunction with the foregoing description of the terminal, the at least one bullet screen in the embodiments of the present application may be input by one viewer through one terminal entity, or by multiple viewers through multiple terminal entities respectively. The embodiments of the present application do not limit this.
The target words provide an index for determining how viewers interact with the live game: each target word indicates an interaction rule and the visual description of a virtual object, so that the server can determine the target virtual object to be added to the virtual scene.
It should be noted that, in the embodiments of the present application, multiple interaction rules may be preconfigured, where an interaction rule may indicate information such as the image of the target virtual object added to the virtual scene and the position of the corresponding target virtual object in the virtual scene. Correspondingly, at least one keyword may be preconfigured, where the at least one keyword serves as the index information base of the multiple interaction rules, that is, each of the at least one keyword corresponds to one target virtual object. Further, after the server obtains at least one bullet screen, for any word contained in any bullet screen, if the word matches one of the preconfigured keywords, the word is determined to be a target word.
In an alternative example, "matching with a keyword" may be implemented as "the same word as the keyword", and accordingly, the target word may be understood as a word that hits the keyword among at least one word of the bullet screen.
Optionally, after obtaining at least one bullet screen, the server may perform word segmentation on each bullet screen to obtain multiple words. The server then checks whether each word hits a keyword among the at least one keyword. If a word hits a keyword, the corresponding word is determined to be a target word. Illustratively, the server performs the word segmentation on the at least one bullet screen using, for example, the Python jieba word segmentation library.
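For illustration only, the following Python snippet is a minimal sketch of this segmentation-and-matching step using the jieba library named above. The keyword table and the sample bullet screens are assumptions for the example, not part of the patent.

```python
import jieba  # Chinese word segmentation library mentioned above

# Hypothetical preconfigured keyword table: each keyword indexes one
# interaction rule / target virtual object (contents are assumptions).
KEYWORDS = {"火", "大火", "子弹", "大雨", "小雨"}  # fire, big fire, bullet, heavy rain, light rain

def extract_target_words(bullet_screens: list) -> list:
    """Segment each bullet screen and keep the words that hit a keyword."""
    target_words = []
    for text in bullet_screens:
        for word in jieba.lcut(text):  # segment the bullet screen into words
            if word in KEYWORDS:       # "matching" here means an exact hit
                target_words.append(word)
    return target_words

# Under typical jieba segmentation, this is expected to yield ["大雨"].
print(extract_target_words(["这游戏太好玩了", "来一场大雨"]))
```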
Illustratively, each of the at least one target word may be implemented as a noun, such as "fire", "rain", "bullet", etc., or as a combination of adjectives and nouns, such as "heavy rain", "heavy fire", "shot", etc. In the scene where the target word is implemented as a different word, the visual image of the corresponding target virtual object is also different, which is described in the following embodiments.
With reference to the foregoing description of the server, optionally, the live broadcast server may obtain the at least one bullet screen, and then the live broadcast server sends the at least one bullet screen to the bullet screen processing server, and the bullet screen processing server obtains at least one target word included in the at least one bullet screen.
With this implementation, multiple interaction rules are preconfigured, providing the basis for enriching the forms of interaction between viewers and the live broadcast.
Step S102, parsing the semantics of the at least one target word.
In order to improve viewers' interaction experience, the server can add a visual image corresponding to each target word to the game scene. Based on this, after obtaining at least one target word, the server may parse the semantics of the at least one target word, and then determine, according to the semantics of each target word, the corresponding visual image, that is, the target virtual object corresponding to that target word.
Step S103, generating a target virtual scene according to the semantics of at least one target word, wherein the target virtual scene comprises at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the corresponding target word.
Wherein the target virtual scene is a game scene after at least one target virtual object is added. The at least one target virtual object is obtained based on semantics of the corresponding target words, respectively.
For example, the at least one target word may correspond one-to-one to target virtual objects; accordingly, one target virtual object may be obtained from the semantics of each target word. For example, if the at least one target word includes the words "heavy rain", "bullet", and "flame", then a virtual downpour can be obtained as a target virtual object according to the semantics of "heavy rain", a virtual bullet according to the semantics of "bullet", and a virtual flame according to the semantics of "flame".
In combination with the foregoing description of target words, when the semantics of two target words express the same kind of scene but the target words contain different adjectives, the corresponding target virtual objects present images with different effects. For example, the target word "light rain" contains the adjective "light", so the corresponding virtual object may be a virtual drizzle; the target word "heavy rain" contains the adjective "heavy", so the corresponding virtual object may be a virtual downpour; the target word "flame" contains no adjective, so the corresponding virtual object may be an ordinary virtual flame; and the target word "big fire" contains the adjective "big", so the corresponding virtual object may be a virtual blaze with very strong flames.
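To make this adjective-dependent selection concrete, here is a toy Python lookup; the variant table, asset names, and intensity values are invented for illustration and are not defined by the patent.

```python
from typing import Dict, Optional, Tuple

# Hypothetical table mapping (adjective, noun) pairs to visual variants.
OBJECT_VARIANTS: Dict[Tuple[Optional[str], str], Dict] = {
    ("light", "rain"): {"asset": "drizzle", "intensity": 0.3},
    ("heavy", "rain"): {"asset": "downpour", "intensity": 1.0},
    (None, "flame"): {"asset": "flame", "intensity": 0.5},
    ("big", "fire"): {"asset": "blaze", "intensity": 1.0},
}

def object_for(adjective: Optional[str], noun: str) -> Optional[Dict]:
    """Select the visual variant for a target word's adjective/noun pair."""
    return OBJECT_VARIANTS.get((adjective, noun))

print(object_for("heavy", "rain"))  # {'asset': 'downpour', 'intensity': 1.0}
```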
It should be noted that, for a game, the server usually generates the corresponding virtual scene by rendering according to game events. A game event may be a program file indicating the virtual objects contained in the virtual scene, the display information of each virtual object, the activity information of each virtual object, the status information of each virtual object, and the like. Based on this, in the embodiments of the present application, the server may obtain a processing instruction according to the semantics of the at least one target word and render the target virtual scene according to the processing instruction. Optionally, the processing instruction is such a game event, and it contains the description information of the at least one target virtual object, so that the server renders a target virtual scene containing the at least one target virtual object, converting the bullet screen into a visualized virtual object in the game.
Optionally, the description information of the target virtual object may include at least one of attribute information and display information. The attribute information is used to indicate the attribute of the target virtual object in the virtual scene, for example, the role of the target virtual object in the virtual scene. The display information is used for indicating the display position, the activity track, the display duration and the like of the target virtual object in the virtual scene.
Optionally, in this embodiment of the application, the at least one target word may correspond to one target virtual object respectively.
It is to be understood that the above listed specific information of the attribute information and the display information of the target virtual object are only schematic descriptions and do not limit the embodiments of the present application. In other embodiments, the attribute information and the display information of the target virtual object related to the embodiments of the present application may include more or less specific information, and the embodiments of the present application are not described in detail.
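As a concrete sketch of how such description information and the processing instruction could be organized, the following Python data structures show one possible layout; all field names and types are assumptions for illustration, since the patent does not prescribe a format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AttributeInfo:
    role: str = "obstacle"  # e.g. the object's role in the target virtual scene

@dataclass
class DisplayInfo:
    position: Tuple[float, float] = (0.0, 0.0)  # display position in the scene
    trajectory: List[Tuple[float, float]] = field(default_factory=list)  # activity track
    duration_s: float = 2.0  # display duration in seconds

@dataclass
class TargetVirtualObject:
    target_word: str          # the bullet screen word that produced this object
    attribute: AttributeInfo  # attribute information
    display: DisplayInfo      # display information

@dataclass
class ProcessingInstruction:  # the "game event" handed to the rendering side
    objects: List[TargetVirtualObject]
```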
Optionally, in an actual implementation scenario, the bullet screen processing server obtains a processing instruction according to the semantics of the at least one target word and then sends the processing instruction to the game server. The game server renders the target virtual scene according to the processing instruction.
Step S104, sending the target virtual scene to the terminal, so that the terminal can display the target virtual scene in the live interface.
The terminals in this embodiment may be all terminals running live clients, such as the first terminal 20 and the second terminal 30 illustrated in fig. 1. Therefore, all terminals operating the live broadcast client can display the scene picture of the target virtual scene in the live broadcast interface of the device.
Optionally, after rendering the target virtual scene, the game server may send a scene picture of the target virtual scene to the terminal corresponding to the anchor client (e.g., the first terminal 20 in fig. 1). The live broadcast server then acquires the scene picture of the target virtual scene from the terminal corresponding to the anchor client and distributes it to the terminals corresponding to the respective viewers (e.g., the second terminal 30 in fig. 1).
Illustratively, the target virtual object may be displayed in at least one of the forms of an image, an animation, and text, as shown in fig. 5A to 5E. If the target virtual object is displayed as an image or animation, it can be presented as a two-dimensional or three-dimensional image.
The above describes embodiments in which bullet screens contain target words. In an actual implementation scenario, some of the at least one bullet screen may not contain any target word; the server simply treats such bullet screens as ordinary bullet screens. The embodiments of the present application will not describe this in detail.
It can be seen that, with this implementation, after receiving at least one bullet screen from the terminal, the server obtains at least one target word contained in the at least one bullet screen, parses the semantics of the at least one target word, and generates a target virtual scene according to those semantics, so that the terminal displays the target virtual scene to viewers in the live broadcast interface. Specifically, the target virtual scene contains at least one target virtual object, each obtained based on the semantics of its corresponding target word. Therefore, with this technical solution, during live broadcasting the server can determine target virtual objects according to the semantics of the target words contained in bullet screens, so that the target virtual scene contains target virtual objects of rich and varied forms. The forms of interaction between viewers and the live virtual scene thus become richer, and the viewers' live viewing experience can be improved.
Fig. 2 introduces the bullet-screen-based live broadcast interaction method of the embodiments of the present application from the server's perspective. The following introduces the method from the terminal's perspective.
Referring to fig. 3, another exemplary live broadcast interaction method based on a bullet screen is provided in the embodiments of the present application. The present embodiment is illustrated taking an implementation on a terminal as an example, such as a terminal used by a viewer, for example the second terminal 30 illustrated in fig. 1. It should be noted that the embodiment illustrated in fig. 3 corresponds to the embodiment illustrated in fig. 2; for features and technical terms shared by the two embodiments, refer to the related descriptions of the embodiment illustrated in fig. 2, which are not repeated here.
The method can be realized by the following steps:
and step S201, displaying a live interface.
The live interface is used for displaying live content of the anchor equipment in real time. The live interface can comprise a display area and a barrage input area, wherein the display area is used for displaying live content, and the barrage input area is used for receiving barrages input by audiences.
For example, referring to fig. 4, fig. 4 illustrates an exemplary live interface 50. As shown in fig. 4, the live interface 50 includes a display area 51, a bullet screen input area 52, and a bullet screen display area 53. The display area 51 is used for displaying the scene interface of the virtual scene received by the terminal, and the bullet screens displayed in the scene interface are updated synchronously with the bullet screens in the bullet screen display area 53. The bullet screen input area 52 includes a text control 521 and a function control 522, where the text control 521 is used to receive bullet screen information input by the viewer, and the function control 522 is used to send the bullet screen to the live platform in response to the viewer's touch operation. The bullet screen display area 53 is used for scrolling through the bullet screens sent by all viewers watching the live broadcast; it can display the latest bullet screen at the bottom and scroll all bullet screens in the area upward, one by one.
It is understood that fig. 4 is only a schematic illustration and does not limit the live interface of the embodiments of the present application. In actual implementation, the live interface may further include other interface elements such as a comment area and a control for rewarding the anchor. The embodiments of the present application will not describe these in detail.
Step S202, responding to the operation of inputting at least one bullet screen by the user, and sending at least one bullet screen to the server.
The server determines at least one target virtual object according to the semantics of at least one target word contained in the at least one bullet screen, then generates a target virtual scene containing the at least one target virtual object, and sends the target virtual scene to the terminal. The operations performed by the server after receiving at least one bullet screen are described in detail in the embodiment illustrated in fig. 2 and are not repeated here.
Step S203, responding to the target virtual scene sent by the server, and displaying the target virtual scene in the live broadcast interface, wherein the target virtual scene comprises at least one target virtual object.
Optionally, the at least one bullet screen may be input by the viewer via the bullet screen input area 52 illustrated in fig. 4. In some implementations, the at least one bullet screen can be input by the viewer using this terminal; in other implementations, it can also be input by at least one other viewer using other terminals. The embodiments of the present application do not limit this.
Optionally, the terminal may display a scene picture of the target virtual scene in the display area 51, and display the content of the at least one barrage in the barrage display area 53, which is shown in detail in fig. 5A to 5E.
For any target virtual object, in some embodiments, the target virtual object may be displayed dynamically in the live interface. For example, it may be displayed dynamically along a set trajectory, as shown in fig. 5A, 5B, or 5C below, or dynamically in response to the action of another virtual object in the virtual scene, as shown in fig. 5E below. In other embodiments, the target virtual object is displayed in a set area of the live interface for a set duration, as shown in fig. 5D below.
Next, scene pictures in a live broadcast according to an embodiment of the present application are described taking the virtual scene of game x as an example. Game x is, for example, a game in which a master control object is controlled to advance while avoiding obstacles.
Referring to fig. 5A, fig. 5A is an exemplary interface schematic diagram of a live interface showing a target virtual scene provided in an embodiment of the present application. Fig. 5A illustrates a target virtual scene 610 and a bullet screen display area 620 shown in the live interface. The bullet screen display area 620 shows the details of each bullet screen, for example the information "viewer A: I am bullet screen a", the information "viewer B: follow me", and the information "viewer C: this game is on fire". In this example, the word "fire" in viewer C's bullet screen "this game is on fire" hits a keyword and is therefore the target word. Correspondingly, the target virtual scene 610 includes a master control object 611, a target virtual object 612, and bullet screens 613. According to the semantics of "fire", the target virtual object 612 is presented as a visualized flame and is displayed together with the bullet screen "this game is on fire". The target virtual object 612 is an obstacle in game x that the master control object 611 needs to avoid, and it can move across the live interface together with the bullet screen "this game is on fire". The bullet screens "I am bullet screen a" and "follow me" are displayed as ordinary bullet screens 613.
In some implementations, the movement trajectory of the target virtual object may be related to the semantics of the target word. For example, referring to fig. 5B, fig. 5B is another exemplary interface diagram of a live interface showing a target virtual scene provided in the embodiments of the present application. The live interface illustrated in fig. 5B includes a target virtual scene 630 and a bullet screen display area 640. The bullet screen display area 640 shows the details of the bullet screens; their display form is described in the embodiment illustrated in fig. 5A and is not repeated here. In this example, the word "lower side" in viewer B's bullet screen "from the lower side" hits a keyword and is a target word, and the word "upper side" in viewer C's bullet screen "from the upper side" hits a keyword and is a target word. The target virtual scene 630 includes a master control object 631, a target virtual object 632, a target virtual object 633, and bullet screens 634, where the bullet screen "from the lower side" corresponds to the target virtual object 632 and the bullet screen "from the upper side" corresponds to the target virtual object 633. As can be seen, in this example the target virtual objects 632 and 633 are presented as text and implemented as obstacles in game x that the master control object 631 needs to avoid. Optionally, in combination with the semantics of the corresponding target words, the target virtual object 632 moves obliquely from the lower-left of the live interface illustrated in fig. 5B toward its upper-right, and the target virtual object 633 moves obliquely from the upper-right of the live interface toward its lower-left.
In other implementations, the semantics of the target word may be expressed by both the presentation form and the movement trajectory of the target virtual object. For example, referring to fig. 5C, fig. 5C is a third exemplary interface diagram of a live interface showing a target virtual scene provided in the embodiments of the present application. The live interface illustrated in fig. 5C includes a target virtual scene 650 containing a master control object 651, a target virtual object 652, and bullet screens 653. In this example, the target virtual object 652 is presented as text and implemented as an obstacle in game x that the master control object 651 needs to avoid. The details of the bullet screens 653 in the target virtual scene 650 and of the bullet screen related to the target virtual object 652 are displayed in the bullet screen display area and are not described in detail here. In this example, the bullet screen "shot" input by viewer C itself hits a keyword and is the target word. Based on this, in this interface the bullet screen "shot" corresponds to the target virtual object 652. In combination with the semantics of "shot", in the target virtual scene illustrated in fig. 5C the target virtual object includes multiple sub-objects that move radially outward; optionally, each sub-object may act on the master control object 651 as an independent obstacle.
In another embodiment, if the target word is "bullet", the corresponding target virtual object may be a virtual bullet image according to the semantics of "bullet"; the virtual bullet may be presented like one of the sub-objects in fig. 5C, for example, and move horizontally from right to left across the live game scene.
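A toy sketch of how the semantics of a target word could select a movement trajectory, covering the "from the lower side", "from the upper side", and "bullet" examples above; the coordinate convention and velocity values are invented for illustration.

```python
from typing import Tuple

Point = Tuple[float, float]

def trajectory_for(target_word: str, width: float, height: float) -> Tuple[Point, Point]:
    """Return (start_point, velocity) for a target virtual object.

    Screen coordinates: origin at the top-left, y grows downward.
    """
    if target_word == "lower side":
        return (0.0, height), (1.0, -1.0)  # lower-left corner, moving toward upper-right
    if target_word == "upper side":
        return (width, 0.0), (-1.0, 1.0)   # upper-right corner, moving toward lower-left
    if target_word == "bullet":
        return (width, height / 2), (-1.0, 0.0)  # right edge, moving horizontally leftward
    return (width, height / 2), (-1.0, 0.0)      # default: scroll right to left

start, velocity = trajectory_for("lower side", 1920.0, 1080.0)
```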
Fig. 5A to 5C illustrate embodiments in which target virtual objects are displayed dynamically in the target virtual scene; fig. 5D illustrates an exemplary scene interface in which the target virtual object is displayed statically. Referring to fig. 5D, the live interface illustrated in fig. 5D includes a target virtual object 660, which corresponds to the bullet screen "big fire" input by viewer B. In this example, the target virtual object 660 is displayed in the lower part of the scene interface illustrated in fig. 5D and may disappear after being displayed statically for a set duration. Optionally, the set duration is, for example, 2 seconds, and the target virtual object 660 may fade out of the scene interface.
In still other implementations, two target virtual objects in a target virtual scene may interact, and at least one of the two may change accordingly. For example, in the live interface illustrated in fig. 5E, the target virtual scene 670 includes a target virtual object 671 and a target virtual object 672. The target virtual object 671 corresponds to the bullet screen "big fire" and is displayed in the manner shown in the example illustrated in fig. 5D. The target virtual object 672 corresponds to the bullet screen "heavy rain", moves from left to right along, for example, the upper end of the live interface, and is displayed as an image of heavy rain. Optionally, when the target virtual object 672 and the target virtual object 671 begin to overlap in the vertical direction, the rain falling from the target virtual object 672 lands on the fire of the target virtual object 671, and in response the fire of the target virtual object 671 grows weaker and weaker until it goes out.
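The following toy update loop sketches how one target virtual object could react to another, as in the rain-extinguishing-fire example; the intensity model and the per-second rate are invented, since the patent only describes the visible effect.

```python
def update_fire(fire_intensity: float, rain_overlaps: bool, dt: float) -> float:
    """Weaken the fire while the rain object overlaps it; 0.0 means extinguished."""
    if rain_overlaps:
        fire_intensity -= 0.5 * dt  # assumed extinguish rate per second of overlap
    return max(fire_intensity, 0.0)

intensity = 1.0
for _ in range(10):  # the rain overlaps the fire for 10 frames of 0.25 s each
    intensity = update_fire(intensity, rain_overlaps=True, dt=0.25)
assert intensity == 0.0  # the fire has gone out
```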
The target word "fire" in fig. 5D and 5E includes the adjective "fire" relative to the target word "fire" in fig. 5A. Based on this, the semantic of the word "fire" is different from the semantic of "fire", and as shown in the figure, the image of the target virtual object corresponding to the target word "fire" is more intense than the image of the target virtual object corresponding to the target word "fire". The target word "heavy rain" illustrated in fig. 5E is different from "rain" or "light rain" in other embodiments, and the corresponding target virtual object is also different in image.
It should be understood that fig. 5A to 5E are all examples for illustrating the present technical solution and do not limit the virtual scenes of the embodiments of the present application. Even in other virtual scenes of the games corresponding to fig. 5A to 5E, the scene pictures may differ from those shown in fig. 5A to 5E; accordingly, the virtual objects in different scene pictures and the display effects of the corresponding virtual objects in the corresponding virtual scenes may also differ from those shown in fig. 5A to 5E. The embodiments of the present application do not describe them one by one here.
In addition, in actual implementation, the target virtual scene displayed in the live interface may differ according to the game, and accordingly the scene pictures of the target virtual scene may differ from those shown in fig. 5A to 5E. The display effects of the target virtual objects in the target virtual scene may likewise differ from the display effects illustrated in fig. 5A to 5E. The embodiments of the present application do not limit this.
It can be seen that, with this implementation, while viewers watch a live game through the live platform, the bullet screens they input through the live platform can be converted into visualized virtual objects of various forms in the live game. These virtual objects are related to the semantics of the target words in the bullet screens and can interact in different forms with the virtual object controlled by the anchor. In this way, viewers can interact with the game being broadcast by inputting bullet screens while watching, and the forms of interaction are relatively rich, which can improve the viewing experience.
Fig. 2 and fig. 3 introduce the bullet-screen-based live broadcast interaction method of the embodiments of the present application from the perspective of individual devices. The following introduces the method from the perspective of device interaction.
Referring to fig. 6, fig. 6 illustrates an exemplary signaling interaction diagram of the bullet-screen-based live broadcast interaction method. The embodiment illustrated in fig. 6 involves, for example, an anchor terminal, a viewer terminal, a live broadcast server, a bullet screen processing server, and a game server. For example, a keyword library is maintained in the bullet screen processing server, and each keyword in the keyword library indicates an interaction rule and a target virtual object. The functions of the anchor terminal, the viewer terminal, the live broadcast server, the bullet screen processing server, and the game server are described in detail in the embodiment corresponding to fig. 1 and are not repeated here.
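By way of illustration only, such a keyword library might be modeled as the minimal mapping below, which associates each keyword with the target virtual object and the interaction rule it indicates. All keys, field names, and entries are hypothetical examples, not data from this application.

# Hypothetical keyword library for the bullet screen processing server:
# each keyword indicates a target virtual object and an interaction rule.
KEYWORD_LIBRARY = {
    "fire":       {"object": "small_flame", "rule": "burn"},
    "big fire":   {"object": "large_flame", "rule": "burn"},
    "heavy rain": {"object": "rain_cloud",  "rule": "extinguish_fire"},
}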
The method can be realized by the following steps:
Step S301, the viewer terminal receives at least one bullet screen input by a viewer while displaying the live game picture.
Optionally, the at least one bullet screen may be input by the same viewer or by different viewers.
Step S302, the viewer terminal sends the at least one bullet screen to the live broadcast server.
Step S303, the live broadcast server sends the at least one bullet screen to the bullet screen processing server.
Optionally, the viewer terminal runs a live broadcast client, and the live broadcast client establishes a connection with the live broadcast server; the live broadcast server, in turn, establishes a connection with the bullet screen processing server based on a bullet screen interface. Based on this, a bullet screen input by a viewer through the viewer terminal is first transmitted to the live broadcast server, and the live broadcast server then forwards the bullet screen to the bullet screen processing server through the bullet screen interface.
Step S304, the bullet screen processing server performs word segmentation on the at least one bullet screen and determines, from the resulting words, the words that hit keywords, thereby obtaining at least one target word contained in the at least one bullet screen.
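A sketch of step S304, reusing the hypothetical KEYWORD_LIBRARY above, is given below. The application does not specify a segmentation algorithm, so the longest-keyword-first substring matching here is purely an illustrative stand-in for real word segmentation.

def extract_target_words(text, keywords):
    # Find the words in one bullet screen that hit a configured keyword.
    # Longer keywords are matched first and their spans consumed, so
    # "big fire" is not also counted as "fire".
    remaining = text
    hits = []
    for kw in sorted(keywords, key=len, reverse=True):
        if kw in remaining:
            hits.append(kw)
            remaining = remaining.replace(kw, " ")
    return hits

print(extract_target_words("big fire in heavy rain", KEYWORD_LIBRARY))
# -> ['heavy rain', 'big fire']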
Step S305, the bullet screen processing server generates a game event, where the game event includes description information of the target virtual objects respectively determined according to the semantics of the at least one target word, as well as display information of the bullet screens that do not contain a target word.
For example, the bullet screen processing server may parse the semantics of the at least one target word and then determine the description information of at least one target virtual object according to those semantics, where the at least one target virtual object is a visualized representation of the semantics of the at least one target word. The bullet screen processing server then generates the game event according to the description information of the at least one target virtual object.
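Continuing the sketch, step S305 might assemble the game event as follows. The event fields, and the reuse of extract_target_words and KEYWORD_LIBRARY from the earlier snippets, are illustrative assumptions.

def build_game_event(bullet_screens, library):
    # Bullet screens that hit a keyword contribute a target-virtual-object
    # description derived from the keyword's library entry; bullet screens
    # with no target word are passed through so they can still be shown as
    # ordinary scrolling text.
    objects, plain = [], []
    for text in bullet_screens:
        words = extract_target_words(text, library)
        if words:
            objects.extend({"word": w, **library[w]} for w in words)
        else:
            plain.append(text)
    return {"target_objects": objects, "plain_bullet_screens": plain}

event = build_game_event(["big fire", "nice play!"], KEYWORD_LIBRARY)
# event["target_objects"] describes a large flame; "nice play!" stays plain.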
Step S306, the bullet screen processing server sends the game event to the game server.
Step S307, the game server renders a new game scene according to the game event, where the new game scene includes the target virtual objects and the bullet screens that do not contain a target word.
Step S308, the game server sends the scene picture of the new game scene to the anchor terminal, so that the anchor plays the game in the scene corresponding to that scene picture.
Step S309, the live broadcast server obtains the game picture of the anchor terminal and distributes it to each viewer terminal for live broadcast display.
Optionally, the scene displayed by the viewer terminal may be as shown in any one of figs. 5A to 5E; details are not repeated here.
It should be understood that fig. 6 is only a schematic description and does not limit the bullet-screen-based live broadcast interaction method of the embodiments of the present application. In actual implementation, the devices involved in the technical solution may be other devices, and the implementation steps may also differ. For example, in another embodiment, the device that obtains the game event according to the at least one bullet screen may be the game server. The embodiments of the present application do not limit this.
To sum up, in one implementation of the embodiments of the present application, after receiving at least one bullet screen from a terminal, the server obtains at least one target word contained in the at least one bullet screen, parses the semantics of the at least one target word, and generates a target virtual scene according to those semantics, so that the terminal displays the target virtual scene to the audience in the live interface. Specifically, the target virtual scene includes at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the at least one target word. With this technical solution, the server can, during the live broadcast, determine target virtual objects according to the semantics of the target words contained in the bullet screens, so that the target virtual scene contains target virtual objects of rich forms; the forms of interaction between the audience and the live virtual scene are thus enriched, and the live viewing experience of the audience can be optimized.
The foregoing embodiments introduce the various implementations of the bullet-screen-based live broadcast interaction method provided by the embodiments of the present application from the perspective of the actions performed by each device, such as determining a target word, parsing the semantics of the target word, determining a target virtual object, and generating and displaying a target virtual scene. It should be understood that the embodiments of the present application may implement the above functions, for processing steps such as determining the target word, parsing its semantics, determining the target virtual object, and generating and displaying the target virtual scene, in the form of hardware or of a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
For example, if the above implementation steps implement the corresponding functions through software modules, as shown in fig. 7A, a bullet-screen-based live broadcast interaction device 80 is provided. The bullet-screen-based live broadcast interaction device 80 may include an obtaining module 801, a parsing module 802, a generating module 803, and a sending module 804, and can be used to perform some or all of the operations of the servers in figs. 2 and 3, as well as some or all of the operations of the live broadcast server, the bullet screen processing server, and the game server in fig. 6.
For example, the obtaining module 801 may be configured to obtain, in response to at least one bullet screen from the terminal, at least one target word contained in the at least one bullet screen. The parsing module 802 may be configured to parse the semantics of the at least one target word. The generating module 803 may be configured to generate a target virtual scene according to the semantics of the at least one target word, where the target virtual scene includes at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the corresponding target word. The sending module 804 may be configured to send the target virtual scene to the terminal, so that the terminal displays the target virtual scene in the live interface.
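Under the same assumptions as the earlier snippets, the module split of fig. 7A could be mirrored roughly as in the sketch below: one method per module, chained by handle(). This is a structural illustration, not the application's implementation; the keyword library stands in for semantic parsing, and send_fn stands in for the transceiver behind the sending module.

class LiveInteractionDevice80:
    def __init__(self, library, send_fn):
        self.library = library
        self.send_fn = send_fn

    def obtain(self, bullet_screens):          # obtaining module 801
        return [w for t in bullet_screens
                for w in extract_target_words(t, self.library)]

    def parse(self, word):                     # parsing module 802
        return self.library[word]              # library entry as the "semantics"

    def generate(self, target_words):          # generating module 803
        return {"objects": [{"word": w, **self.parse(w)}
                            for w in target_words]}

    def handle(self, bullet_screens):
        scene = self.generate(self.obtain(bullet_screens))
        self.send_fn(scene)                    # sending module 804

device = LiveInteractionDevice80(KEYWORD_LIBRARY, print)
device.handle(["big fire", "heavy rain"])  # prints the generated scene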
It can be seen that, with the bullet-screen-based live broadcast interaction device 80 provided in the embodiments of the present application, after at least one bullet screen is received from the terminal, the semantics of at least one target word contained in the at least one bullet screen are parsed, and a target virtual scene is then generated according to those semantics, so that the terminal displays the target virtual scene to the audience in the live interface. Specifically, the target virtual scene includes at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the at least one target word. With this technical solution, the device 80 can, during the live broadcast, determine target virtual objects according to the semantics of the target words contained in the bullet screens, so that the target virtual scene contains target virtual objects of rich forms; the forms of interaction between the audience and the live virtual scene are thus enriched, and the live viewing experience of the audience can be optimized.
Optionally, the generating module 803 is further configured to determine a processing instruction according to the semantics of the at least one target word, and generate a target virtual scene including the at least one target virtual object according to the processing instruction, where the processing instruction includes description information of the at least one target virtual object.
Optionally, the description information of the target virtual object includes at least one of attribute information and display information, where:
the attribute information is used for indicating the attribute of the target virtual object in the target virtual scene; and
the display information is used for indicating the display position, the activity track, and the display duration of the target virtual object in the target virtual scene.
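The two kinds of description information listed above could be modeled as a small data structure such as the following sketch; every field name here is a hypothetical illustration.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DisplayInfo:
    position: Tuple[float, float]     # display position in the scene
    track: List[Tuple[float, float]]  # activity track, as waypoints
    duration_s: float                 # display duration, in seconds

@dataclass
class TargetVirtualObjectDescription:
    attributes: dict                  # attribute information, e.g. {"kind": "flame", "size": "large"}
    display: DisplayInfo              # display information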
Optionally, the obtaining module 801 is further configured to, for any word contained in any one of the at least one bullet screen, determine that the word is a target word if the word matches at least one pre-configured keyword, where each of the at least one keyword corresponds to one target virtual object.
It can be understood that the above division into modules is only a division of logical functions. In actual implementation, the functions of the above modules may be integrated into hardware entities; for example, the functions of the obtaining module 801, the parsing module 802, and the generating module 803 may be integrated into a processor, and the function of the sending module 804 may be integrated into a transceiver.
As shown in fig. 7B, fig. 7B provides a server 81, and the server 81 can implement the functions of the server in the embodiments illustrated in figs. 2 and 3 and the functions of any of the service platforms in the embodiment illustrated in fig. 6. The server 81 may vary widely in configuration or performance, and may include one or more central processing units (CPUs) 811 (e.g., one or more processors), a memory 812, and one or more storage media 813 (e.g., one or more mass storage devices) storing applications 8131 or data 8132. The memory 812 and the storage medium 813 may be transient or persistent storage. The program stored in the storage medium 813 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processing unit 811 may be configured to communicate with the storage medium 813 and to execute, on the server 81, the series of instruction operations in the storage medium 813.
The server 81 may also include one or more power supplies 814, one or more wired or wireless network interfaces 815, one or more input/output interfaces 816, and/or one or more operating systems 817, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
As shown in fig. 8A, the embodiment of the present application further provides a bullet-screen-based live broadcast interaction device 90. The bullet-screen-based live broadcast interaction device 90 may include a first display module 901, a sending module 902, and a second display module 903, and can be used to perform some or all of the operations of the terminal in fig. 3 and some or all of the operations of the viewer terminal in fig. 6.
For example, the first display module 901 may be used to display a live interface. The sending module 902 may be configured to send at least one bullet screen to the server in response to a user operation of inputting the at least one bullet screen. The second display module 903 may be configured to display, in response to a target virtual scene sent by the server, the target virtual scene in the live interface, where the target virtual scene includes at least one target virtual object; the target virtual scene is generated by the server by parsing the semantics of at least one target word contained in the at least one bullet screen and according to those semantics, and the at least one target virtual object is obtained based on the semantics of the corresponding target word.
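On the terminal side, the three modules of fig. 8A could be sketched as below, again purely for illustration; the server callable and the print-based "display" are hypothetical stand-ins for the real live broadcast path and renderer.

class ViewerTerminalDevice90:
    def __init__(self, server_fn):
        self.server_fn = server_fn            # callable: bullet screens -> scene

    def show_live_interface(self):            # first display module 901
        print("live interface shown")

    def send_bullet_screens(self, screens):   # sending module 902
        return self.server_fn(screens)

    def show_scene(self, scene):              # second display module 903
        print("target virtual scene:", scene)

# Hypothetical end-to-end use, with the device-80 sketch as the server side:
server80 = LiveInteractionDevice80(KEYWORD_LIBRARY, lambda scene: None)
terminal = ViewerTerminalDevice90(lambda s: server80.generate(server80.obtain(s)))
terminal.show_live_interface()
terminal.show_scene(terminal.send_bullet_screens(["heavy rain"]))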
It can be seen that, with the bullet-screen-based live broadcast interaction device 90, while a game live broadcast is displayed through the live platform, bullet screens received through the live platform can be converted into visualized virtual objects of various forms in the live game, and these visualized virtual objects are related to the semantics of the target words in the bullet screens. The bullet-screen-based live broadcast interaction device 90 can display pictures of these visualized virtual objects interacting, in different forms, with the virtual objects controlled by the anchor. In this way, viewers can interact with the game in the live broadcast by inputting bullet screens while watching, and the forms of interaction are relatively rich, which can improve the viewing experience of the audience.
It can be understood that the above modules are merely a logical division. In actual implementation, their functions may be integrated into hardware entities; for example, the function of the first display module 901 may be integrated into a processor, the function of the sending module 902 into a transceiver, and the function of the second display module 903 into a display.
Referring to fig. 8B, fig. 8B illustrates an exemplary terminal 91. The terminal 91 may serve as the aforementioned second terminal, and may be, for example, a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The terminal 91 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 91 comprises: a processor 911 and a memory 912.
The processor 911 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 911 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 911 may also include a main processor and a coprocessor, where the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in the awake state, and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 911 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 911 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 912 may include one or more computer-readable storage media, which may be non-transitory. The memory 912 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 912 is configured to store at least one instruction, which is configured to be executed by the processor 911 to implement all or part of the steps of the bullet-screen-based live broadcast interaction method illustrated in the embodiments of the present application.
In some embodiments, the terminal 91 may further include: a peripheral interface 913 and at least one peripheral. The processor 911, memory 912, and peripheral interface 913 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 913 through a bus, signal lines, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 914, a display 915, a camera assembly 916, an audio circuit 917, a positioning component 918, and a power supply 919.
The peripheral interface 913 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 911 and the memory 912. In some embodiments, the processor 911, the memory 912, and the peripheral interface 913 may be integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 911, the memory 912, and the peripheral interface 913 may be implemented on a separate chip or circuit board, which is not limited in this embodiment of the application.
The radio frequency circuit 914 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The display screen 915 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof; the UI includes scene pictures of the aforementioned virtual scenes, as shown in any one of figs. 5A to 5E. When the display screen 915 is a touch display screen, it also has the ability to capture touch signals on or above its surface; such a touch signal may be input to the processor 911 as a control signal for processing, and in this case the display screen 915 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 915, disposed on the front panel of the terminal 91; in other embodiments, there may be at least two display screens 915, respectively disposed on different surfaces of the terminal 91 or in a folded design; in some embodiments, the display screen 915 may be a flexible display screen disposed on a curved or folded surface of the terminal 91. The display screen 915 may even be arranged in an irregular, non-rectangular shape, i.e., an irregularly shaped screen. The display screen 915 may be made using an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 916 is used to capture images or video. The audio circuitry 917 may include a microphone and a speaker. The positioning component 918 is used for positioning the current geographical position of the terminal 91 to implement navigation or LBS (Location Based Service). A power supply 919 is used to power the various components in terminal 91.
In some embodiments, terminal 91 also includes one or more sensors 920. The one or more sensors 920 include, but are not limited to: acceleration sensor 921, gyro sensor 922, pressure sensor 923, fingerprint sensor 924, optical sensor 925, and proximity sensor 926.
It should be understood that fig. 8B is only schematic and does not constitute a limitation on the terminal 91. In other embodiments, the terminal 91 may include more or fewer components than shown in fig. 8B, may combine some components, or may adopt a different arrangement of components.
The embodiments of the present application further provide a computer-readable storage medium storing instructions related to the present technical solution; when the instructions are run on a computer, the computer is caused to perform some or all of the steps of the methods described in the embodiments shown in figs. 2 to 6.
The embodiments of the present application also provide a computer program product including instructions related to live broadcast interaction; when the computer program product runs on a computer, the computer is caused to perform some or all of the steps of the methods described in the embodiments of figs. 2 to 6.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a game control device, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
While alternative embodiments of the present application have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.
The above embodiments further describe the objects, technical solutions, and advantages of the present application in detail. It should be understood that the above are only specific embodiments of the present application and are not intended to limit its protection scope; any modification, equivalent substitution, improvement, or the like made on the basis of the technical solutions of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A live broadcast interaction method based on a bullet screen is characterized by comprising the following steps:
responding to at least one bullet screen from a terminal, and acquiring at least one target word contained in the at least one bullet screen;
parsing semantics of the at least one target word;
generating a target virtual scene according to the semantics of the at least one target word, wherein the target virtual scene comprises at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the corresponding target word;
and sending the target virtual scene to the terminal so that the terminal displays the target virtual scene in a live interface.
2. The method of claim 1, wherein generating the target virtual scene according to the semantics of the at least one target word comprises:
determining a processing instruction according to the semantics of the at least one target word, wherein the processing instruction comprises the description information of the at least one target virtual object;
and generating a target virtual scene containing the at least one target virtual object according to the processing instruction.
3. The method of claim 2, wherein the description information of the target virtual object comprises at least one of: attribute information and display information, wherein
the attribute information is used for indicating the attribute of the target virtual object in the target virtual scene; and
the display information is used for indicating the display position, the activity track, and the display duration of the target virtual object in the target virtual scene.
4. The method according to claim 1, wherein said obtaining at least one target word contained in said at least one bullet screen comprises:
and for any word contained in any bullet screen in the at least one bullet screen, if the word matches at least one pre-configured keyword, determining that the word is the target word, wherein each of the at least one keyword corresponds to one target virtual object.
5. A live broadcast interaction method based on a bullet screen is characterized by comprising the following steps:
displaying a live broadcast interface;
responding to the operation of inputting at least one bullet screen by a user, and sending the at least one bullet screen to a server;
responding to a target virtual scene sent by the server, and showing the target virtual scene in the live broadcast interface, wherein the target virtual scene comprises at least one target virtual object,
the target virtual scene is generated by the server by parsing the semantics of at least one target word contained in the at least one bullet screen and according to the semantics of the at least one target word, and the at least one target virtual object is obtained based on the semantics of the at least one target word.
6. A live broadcast interaction device based on a bullet screen, characterized in that the device comprises:
an obtaining module, used for obtaining, in response to at least one bullet screen from a terminal, at least one target word contained in the at least one bullet screen;
a parsing module, used for parsing the semantics of the at least one target word;
a generating module, used for generating a target virtual scene according to the semantics of the at least one target word, wherein the target virtual scene comprises at least one target virtual object, and the at least one target virtual object is obtained based on the semantics of the corresponding target word;
and a sending module, used for sending the target virtual scene to the terminal, so that the terminal displays the target virtual scene in a live interface.
7. A live broadcast interaction device based on a bullet screen, characterized in that the device comprises:
a first display module, used for displaying a live broadcast interface;
a sending module, used for sending at least one bullet screen to a server in response to an operation of a user inputting the at least one bullet screen;
a second display module, used for showing, in response to a target virtual scene sent by the server, the target virtual scene in the live broadcast interface, wherein the target virtual scene comprises at least one target virtual object,
wherein the target virtual scene is generated by the server by parsing the semantics of at least one target word contained in the at least one bullet screen and according to the semantics of the at least one target word, and the at least one target virtual object is obtained based on the semantics of the at least one target word.
8. A server, characterized in that the server comprises a processor and a memory, wherein the memory stores instructions or a program, and the instructions or the program are loaded and executed by the processor to implement the bullet-screen-based live broadcast interaction method according to any one of claims 1 to 4.
9. A terminal, characterized in that the terminal comprises a processor and a memory, wherein the memory stores instructions or a program, and the instructions or the program are loaded and executed by the processor to implement the bullet-screen-based live broadcast interaction method according to claim 5.
10. A computer-readable storage medium, having stored thereon instructions or a program, wherein the instructions or the program are loaded and executed by a processor to implement the bullet-screen-based live broadcast interaction method according to any one of claims 1 to 4, or the bullet-screen-based live broadcast interaction method according to claim 5.
CN202110745893.2A 2021-07-01 2021-07-01 Live broadcast interaction method and equipment based on bullet screen Pending CN113490006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110745893.2A CN113490006A (en) 2021-07-01 2021-07-01 Live broadcast interaction method and equipment based on bullet screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110745893.2A CN113490006A (en) 2021-07-01 2021-07-01 Live broadcast interaction method and equipment based on bullet screen

Publications (1)

Publication Number Publication Date
CN113490006A true 2021-10-08

Family

ID=77939195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110745893.2A Pending CN113490006A (en) 2021-07-01 2021-07-01 Live broadcast interaction method and equipment based on bullet screen

Country Status (1)

Country Link
CN (1) CN113490006A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063427A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
CN106303735A (en) * 2016-09-07 2017-01-04 腾讯科技(深圳)有限公司 A kind of barrage display system, method, device and service customer end
CN109660878A (en) * 2017-10-10 2019-04-19 武汉斗鱼网络科技有限公司 Living broadcast interactive method, storage medium, electronic equipment and system based on barrage
CN110418151A (en) * 2019-07-24 2019-11-05 网易(杭州)网络有限公司 The transmission of barrage information, processing method, device, equipment, medium in game live streaming
CN110719525A (en) * 2019-08-28 2020-01-21 咪咕文化科技有限公司 Bullet screen expression package generation method, electronic equipment and readable storage medium
CN111330287A (en) * 2020-02-27 2020-06-26 网易(杭州)网络有限公司 Bullet screen display method and device in game, electronic equipment and storage medium
CN111970532A (en) * 2020-08-27 2020-11-20 网易(杭州)网络有限公司 Video playing method, device and equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113905251A (en) * 2021-10-26 2022-01-07 北京字跳网络技术有限公司 Virtual object control method and device, electronic equipment and readable storage medium
WO2023071443A1 (en) * 2021-10-26 2023-05-04 北京字跳网络技术有限公司 Virtual object control method and apparatus, electronic device, and readable storage medium
CN114286155A (en) * 2021-12-07 2022-04-05 咪咕音乐有限公司 Picture element modification method, device, equipment and storage medium based on barrage
CN114938459A (en) * 2022-05-16 2022-08-23 完美世界征奇(上海)多媒体科技有限公司 Virtual live broadcast interaction method and device based on barrage, storage medium and equipment

Similar Documents

Publication Publication Date Title
CN112104594B (en) Immersive interactive remote participation in-situ entertainment
CN113490006A (en) Live broadcast interaction method and equipment based on bullet screen
CN111278518A (en) Cross-platform interactive streaming
CN105430455A (en) Information presentation method and system
US20150332515A1 (en) Augmented reality system
TW201250577A (en) Computer peripheral display and communication device providing an adjunct 3D user interface
CN112915537B (en) Virtual scene picture display method and device, computer equipment and storage medium
WO2009108189A1 (en) Systems and methods for a gaming platform
WO2022267701A1 (en) Method and apparatus for controlling virtual object, and device, system and readable storage medium
CN112516589A (en) Game commodity interaction method and device in live broadcast, computer equipment and storage medium
CN113490061B (en) Live broadcast interaction method and equipment based on bullet screen
CN111836110B (en) Method and device for displaying game video, electronic equipment and storage medium
US20210397260A1 (en) Methods and systems for decoding and rendering a haptic effect associated with a 3d environment
CN114339438B (en) Interaction method and device based on live broadcast picture, electronic equipment and storage medium
CN114666671B (en) Live broadcast praise interaction method, device, equipment and storage medium
US20230343056A1 (en) Media resource display method and apparatus, device, and storage medium
US20230071445A1 (en) Video picture display method and apparatus, device, medium, and program product
US10868889B2 (en) System for providing game play video by using cloud computer
CN112929685B (en) Interaction method and device for VR live broadcast room, electronic device and storage medium
US20220254082A1 (en) Method of character animation based on extraction of triggers from an av stream
US11878250B2 (en) Content enhancement system and method
CN115690322A (en) Information presentation method and device and electronic equipment
CN116474379A (en) Data processing method based on game live broadcast and related equipment
CN113660500A (en) Live broadcast room display method and device, storage medium and electronic equipment
CN117395445A (en) Live interaction method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211008