CN117915158A - Live broadcasting room interaction control method and device, electronic equipment and storage medium

Live broadcasting room interaction control method and device, electronic equipment and storage medium

Info

Publication number
CN117915158A
Authority
CN
China
Prior art keywords
target words
target
live
display
words
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211248531.3A
Other languages
Chinese (zh)
Inventor
刘家诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202211248531.3A
Publication of CN117915158A
Legal status: Pending

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a live room interaction control method and apparatus applied to an anchor client, a live room interaction control method and apparatus applied to a viewer client, an electronic device, and a storage medium. The method applied to the anchor client is implemented as follows: acquiring a barrage message of a live room; performing word segmentation on the acquired barrage message; obtaining one or more target words based on a result of the word segmentation; determining a first motion trajectory of the one or more target words according to a first preset configuration; and displaying the one or more target words according to the first motion trajectory.

Description

Live broadcasting room interaction control method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of internet technology, and in particular to a live room interaction control method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the continuous development of internet technology and the advancement of streaming media technology, web live streaming is receiving more and more attention from users. At present, common live streaming modes include interaction modes based on a real-person anchor, in which the real-person anchor's live video is captured and the anchor interacts with the audience in the live room. In addition, with the diversification of live streaming modes, virtual live streaming based on avatars is also in wide use. Compared with real-person live streaming, virtual live streaming can virtualize both the anchor and the live scene.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a live room interaction control method and apparatus applied to an anchor client, a live room interaction control method and apparatus applied to a viewer client, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a live room interaction control method applied to an anchor client, including: acquiring a barrage message of a live room; performing word segmentation on the acquired barrage message; obtaining one or more target words based on a result of the word segmentation; determining a first motion trajectory of the one or more target words according to a first preset configuration; and displaying the one or more target words according to the first motion trajectory.
According to another aspect of the present disclosure, there is also provided a live room interaction control method applied to a viewer client, including: transmitting a barrage message; receiving, from an anchor client, a live stream of a live room, the live stream including one or more target words associated with the barrage message and a motion trajectory of the one or more target words; and displaying the one or more target words based on the live stream, wherein the one or more target words are obtained according to the live room interaction control method applied to the anchor client described above, and wherein the motion trajectory is determined according to that method.
According to another aspect of the present disclosure, there is also provided a live room interaction control apparatus applied to an anchor client, including: an acquisition unit configured to acquire a barrage message of a live room; a word segmentation unit configured to perform word segmentation on the acquired barrage message; an obtaining unit configured to obtain one or more target words based on a result of the word segmentation; a determining unit configured to determine a first motion trajectory of the one or more target words according to a first preset configuration; and a display unit configured to display the one or more target words according to the first motion trajectory.
According to another aspect of the present disclosure, there is also provided a live room interaction control apparatus applied to a viewer client, including: a transmitting unit configured to transmit a barrage message; a receiving unit configured to receive, from an anchor client, a live stream of a live room, the live stream including one or more target words associated with the barrage message and a motion trajectory of the one or more target words; and a display unit configured to display the one or more target words based on the live stream, wherein the one or more target words are obtained according to the live room interaction control method applied to the anchor client described above, and wherein the motion trajectory is determined according to that method.
According to another aspect of the present disclosure, there is also provided an electronic device including: at least one processor; and at least one memory communicatively coupled to the at least one processor, wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the live room interaction control method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above-described live room interaction control method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the live room interaction control method described above.
According to one or more embodiments of the present disclosure, a barrage message can be split into one or more target words that are displayed along a specific motion trajectory. Compared with the traditional approach of merely displaying barrage messages in sequence, as text, in the live content area, performing word segmentation on barrage messages and configuring preset trajectories for their display improves the operability of barrage messages, increases viewer users' participation in the broadcast, and deepens their sense of immersion during it. Moreover, because the target words of a barrage message move along preset trajectories, direct interaction between viewer users and the anchor user becomes possible, strengthening the stickiness of the live content for viewer users and improving the user experience of the live platform.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flowchart of a live room interaction control method applied to an anchor client according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of displaying a plurality of target words obtained after word segmentation of a barrage message, according to an embodiment of the present disclosure;
FIG. 4 illustrates a flowchart of displaying, according to a motion trajectory, one or more target words obtained after word segmentation of a barrage message, according to an embodiment of the present disclosure;
FIG. 5 illustrates a flowchart of determining whether the motion trajectory of one or more target words obtained after word segmentation of a barrage message overlaps with the position of the displayed avatar of the anchor user, according to an embodiment of the present disclosure;
FIG. 6 illustrates a schematic diagram of generating collision volumes for a target word obtained after word segmentation of a barrage message and for the displayed avatar of the anchor user, respectively, according to an embodiment of the present disclosure;
FIG. 7 illustrates a flowchart of configuring a preset display effect for one or more target words obtained after word segmentation of a barrage message, for display while their motion trajectory overlaps with the position of the displayed avatar of the anchor user, according to an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of a corresponding animation effect configured for the displayed avatar of the anchor user, according to an embodiment of the present disclosure;
FIG. 9 illustrates a flowchart of a live room interaction control method applied to a viewer client, according to an embodiment of the present disclosure;
FIG. 10 illustrates a block diagram of a live room interaction control apparatus applied to an anchor client according to an embodiment of the present disclosure;
FIG. 11 illustrates a block diagram of a live room interaction control apparatus applied to a viewer client according to an embodiment of the present disclosure; and
FIG. 12 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
With the continuous enrichment of live broadcast categories, live streaming is no longer limited to real-person broadcasts; a virtual anchor can be generated based on various virtualization technologies and used to conduct virtual live streaming. The inventor notes that, whether in real-person or virtual live streaming, the barrage messages sent by viewer users during the broadcast are merely displayed in sequence, as text, in the live content area, and have no effect on the live content. This results in little interaction between viewer users and the anchor user, thereby reducing users' sense of immersion in the live broadcast.
In view of this, embodiments of the present disclosure provide a live room interaction control method that, compared with the traditional approach of merely displaying barrage messages in sequence, as text, in the live content area, improves the operability of barrage messages and satisfies viewer users' desire to shape the live content. This expands the diversity of interaction in live scenarios, improves users' sense of immersion during the broadcast, and thereby improves the user experience of the live platform.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, a system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. The client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications to implement the live room interaction control methods as described in this disclosure. It should be appreciated that although fig. 1 depicts only six client devices, the present disclosure may support any number of client devices.
In the configuration shown in fig. 1, server 120 may include one or more components. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may initiate communication with server 120 using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, or Linux or Linux-like operating systems (e.g., GOOGLE Chrome OS), or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smartphones, tablet computers, personal digital assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. Gaming systems may include various handheld gaming devices, internet-enabled gaming devices, and the like. The client devices are capable of executing a variety of different applications, such as various internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a local area network (LAN), an Ethernet-based network, a token ring, a wide area network (WAN), the internet, a virtual network, a virtual private network (VPN), an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-range servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain the server's virtual storage devices).
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system, intended to overcome the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as barrage messages and the target words obtained after barrage messages have been segmented. Database 130 may reside in various locations. For example, the databases used by clients 101, 102, 103, 104, 105, and/or 106 may be local to the clients, or may be remote from the clients and communicate with them via network-based or dedicated connections. As another example, the database used by the server 120 may be local to the server 120 or may be remote from it and communicate with it via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 illustrates a flowchart of a live room interaction control method 200 applied to an anchor client according to an embodiment of the present disclosure. As shown in fig. 2, method 200 may include: step S210, acquiring a barrage message of a live room; step S220, performing word segmentation on the acquired barrage message; step S230, obtaining one or more target words based on a result of the word segmentation; step S240, determining a first motion trajectory of the one or more target words according to a first preset configuration; and step S250, displaying the one or more target words according to the first motion trajectory.
By performing word segmentation on a barrage message, the message can be split into one or more target words that are displayed along a specific motion trajectory. Compared with the traditional approach of merely displaying barrage messages in sequence, as text, in the live content area, this improves the operability of barrage messages, increases viewer users' participation in the broadcast, and deepens their sense of immersion during it. Moreover, because the target words in the barrage message move along preset trajectories, direct interaction between viewer users and the anchor user becomes possible, strengthening the stickiness of the live content for viewer users and improving the user experience of the live platform.
According to some embodiments of the present disclosure, barrage messages of the live room may be obtained from a server via a communication protocol (e.g., WebSocket) by an application such as facial capture software.
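As an illustration only, since the disclosure does not fix a client implementation, the following Python sketch receives barrage messages pushed by a server over WebSocket; the endpoint URL and the message schema are assumptions.

```python
import asyncio
import json

import websockets  # third-party package: pip install websockets


async def collect_barrage(url: str):
    """Yield (user_id, text) pairs for barrage messages pushed by the server."""
    async with websockets.connect(url) as ws:
        async for raw in ws:
            msg = json.loads(raw)
            # Assumed schema: {"user_id": ..., "text": ...}
            yield msg["user_id"], msg["text"]


async def main():
    # Hypothetical endpoint; a real live platform defines its own.
    async for user_id, text in collect_barrage("wss://example.com/live/123/barrage"):
        print(user_id, text)


if __name__ == "__main__":
    asyncio.run(main())
```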
According to some embodiments of the present disclosure, the method 200 may further include: step S260, detecting the anchor user's setting of a second preset configuration associated with barrage messages. In this case, step S210 of acquiring a barrage message of the live room may include: in response to detecting the anchor user's setting of the second preset configuration associated with barrage messages, acquiring the barrage messages corresponding to the second preset configuration.
According to some embodiments of the present disclosure, the setting of the second preset configuration may be detected in response to the anchor user activating a barrage message selection function of the live room, and the barrage messages corresponding to the second preset configuration may be obtained from viewer clients via the server.
Because the second preset configuration can be associated with the anchor user's preferences, acquiring the barrage messages corresponding to it allows messages that better match the anchor user's preferences or actual needs to be processed and displayed, which helps enhance interactivity between the anchor user and viewer users. It also prevents the live interface from displaying excessive barrage messages, so the interface does not become cluttered and interfere with the live content.
According to some embodiments of the present disclosure, the second preset configuration may include: selecting all barrage messages. In this case, the anchor client receives all barrage messages from viewer clients for subsequent word segmentation, so the anchor user does not miss important barrage messages.
According to further embodiments of the present disclosure, the second preset configuration may include: selecting, from the barrage messages, those sent from viewer clients that satisfy a preset condition. For example, when the value of the gifts a viewer user has given on the live platform reaches a predetermined amount (e.g., 10 yuan, 50 yuan, 100 yuan, etc.), that viewer user is deemed to satisfy the preset condition. Accordingly, the viewer user's ID may be recorded, and the next barrage message that viewer user sends may be selected. In other examples, when the number of barrage messages a viewer user has sent on the live platform reaches a certain count (e.g., 50, 100, 200, etc.), that viewer user is deemed to satisfy the preset condition, and the barrage messages the viewer user sends may be obtained for subsequent word segmentation. In still other examples, a viewer user is deemed to satisfy the preset condition when the viewer user reaches a certain level on the live platform (e.g., Captain); accordingly, barrage messages sent by that viewer user are acquired for subsequent word segmentation. In still other examples, when a viewer user gives a particular gift on the live platform, the barrage message associated with that gift may be selected as one meeting the preset condition.
By selecting barrage messages from viewer clients that satisfy the preset condition, specific viewer users can be highlighted, making interaction between the anchor user and viewer users more targeted and giving those viewer users focused attention. It also reduces the storage space needed for barrage messages and lowers the computation and time cost of segmenting them.
It should be appreciated that the above preset conditions are shown for illustrative and non-limiting purposes only; the condition may be a combination of two or more of them, or another preset condition suited to the live platform, and the scope of the presently claimed subject matter is not limited in this respect.
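A minimal sketch of such condition-based selection follows; the thresholds are illustrative assumptions, not values taken from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class Viewer:
    user_id: str
    gift_total: float   # cumulative value of gifts given on the platform
    barrage_count: int  # number of barrage messages sent on the platform
    level: int          # viewer level on the live platform


# One predicate per example condition above; thresholds are assumptions.
PRESET_CONDITIONS = [
    lambda v: v.gift_total >= 10,     # gift value reaches a predetermined amount
    lambda v: v.barrage_count >= 50,  # barrage count reaches a certain number
    lambda v: v.level >= 20,          # viewer level reaches a certain rank
]


def meets_preset_condition(viewer: Viewer) -> bool:
    """A barrage message is selected for word segmentation if its sender
    satisfies any one of the preset conditions."""
    return any(cond(viewer) for cond in PRESET_CONDITIONS)
```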
According to some embodiments of the present disclosure, in step S220 the barrage message may be split character by character to obtain one or more target words. For example, for the barrage message "我超喜欢吃番茄" ("I really like eating tomatoes"), word segmentation in this way yields the seven single characters "我", "超", "喜", "欢", "吃", "番", "茄". Segmentation in this way keeps the operation simple.
According to other embodiments of the present disclosure, in step S220 the barrage message may be split based on semantic analysis to obtain one or more single-character or multi-character words. Continuing the above example, for the barrage message "我超喜欢吃番茄", word segmentation in this way yields three single-character words and two two-character words, namely "我" (I), "超" (really), "吃" (eat) and "喜欢" (like), "番茄" (tomato). Segmentation in this way further helps the anchor user understand the specific content of the barrage messages displayed in the live room.
It should be appreciated that the acquired barrage message may also be segmented using any other suitable word segmentation technique, and the scope of the presently claimed subject matter is not limited in this respect.
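Both splitting strategies can be sketched as follows. The open-source jieba segmenter stands in for the semantics-based approach; it is an assumption, since the disclosure does not name a segmentation tool.

```python
import jieba  # open-source Chinese word segmenter: pip install jieba


def split_by_character(message: str) -> list[str]:
    """Character-level splitting, as in the first embodiment."""
    return list(message)


def split_by_semantics(message: str) -> list[str]:
    """Semantics-based splitting, as in the second embodiment."""
    return jieba.lcut(message)


print(split_by_character("我超喜欢吃番茄"))
# ['我', '超', '喜', '欢', '吃', '番', '茄']
print(split_by_semantics("我超喜欢吃番茄"))
# typically ['我', '超', '喜欢', '吃', '番茄']
```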
According to some embodiments of the present disclosure, the first preset configuration may include a speed for each of the one or more target words. For example, the speed of each target word may be set to a constant, so that the word moves at a uniform speed along a straight-line trajectory.
According to further embodiments of the present disclosure, the first preset configuration may include an acceleration for each of the one or more target words. For example, the acceleration of each target word may be set to gravitational acceleration to produce a motion trajectory with free-fall characteristics.
According to further embodiments of the present disclosure, the first preset configuration may include a direction of motion for each of the one or more target words. For example, the direction of motion of each target word may be set toward the anchor user's displayed avatar, to simulate the animation effect of the target word being thrown at the avatar.
According to other embodiments of the present disclosure, the first preset configuration may include a display duration for each of the one or more target words. For example, the same display duration may be configured for the target words belonging to one barrage message, so that they appear in the live interface simultaneously and stop being displayed simultaneously.
Through the first preset configuration, a customized motion trajectory can be configured for each split-out target word, enriching the variety of display effects in the live room. It should be appreciated that although the above examples are presented separately for illustrative and non-limiting purposes, they may be combined in any manner; e.g., the first preset configuration may include one or more of the speed, acceleration, direction of motion, and display duration of each of the one or more target words.
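A sketch of how the first preset configuration might be represented; the field names and default values are assumptions.

```python
from dataclasses import dataclass


@dataclass
class TrajectoryConfig:
    """First preset configuration: any subset of fields may be combined."""
    speed: float = 0.0                           # initial speed, px/s
    acceleration: float = 0.0                    # e.g. scaled gravity for free fall
    direction: tuple[float, float] = (0.0, 1.0)  # unit vector; (0, 1) points down
    display_duration: float = 5.0                # seconds before display stops
```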
According to some embodiments of the present disclosure, the first preset configuration may further include a start position for each of the one or more target words, and determining the first motion trajectory of the one or more target words according to the first preset configuration in step S240 may include one or more of: determining the first motion trajectory of the one or more target words to be a falling trajectory; determining the first motion trajectory of the one or more target words to be a floating trajectory; and determining the first motion trajectory of the one or more target words to be a trajectory from a start position to a target position, wherein the target position is the position of the anchor user's displayed avatar.
In some examples, the start position of a target word may be any position at the top, bottom, left, or right of the live interface. After the start position of the target word is obtained, an acceleration may be set for it through the first preset configuration to control its motion trajectory. For example, the acceleration may be the Earth's gravitational acceleration, so that the target word behaves like an object in free fall on Earth and follows a falling trajectory. The acceleration may be the Moon's gravitational acceleration, so that the target word moves as it would on the Moon, again along a falling trajectory. The acceleration may also be zero, so that the target word appears weightless and follows a floating trajectory.
In other examples, the start position of the target word may be the barrage message display area, and the target position the anchor user's displayed avatar. In this case, as described above, the effect of the barrage message being thrown at the anchor user's displayed avatar can be simulated.
It will be appreciated that the displayed avatar of the anchor user referred to herein may be the actual image of a real-person anchor as displayed on the live page, or the avatar of a virtual anchor as displayed on the live page; the scope of the claimed subject matter is not limited in this respect.
By setting the start position and target position of the target words obtained after word segmentation of the barrage message, a variety of motion trajectories can be obtained; for example, the segmented target words can be moved through the virtual scene of the live room along horizontal, diagonal, or other trajectories according to actual needs, further expanding the operability of barrage messages.
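Given a start position, the point reached along such a trajectory at time t follows elementary kinematics. A sketch, assuming screen-pixel coordinates with y increasing downward; for a trajectory toward a target position, the direction would be the normalized vector from start to target.

```python
def position_at(start: tuple[float, float], speed: float, acceleration: float,
                direction: tuple[float, float], t: float) -> tuple[float, float]:
    """Point on the first motion trajectory at time t (uniform acceleration)."""
    dx, dy = direction
    # Distance traveled along the configured direction: s = v*t + a*t^2 / 2.
    s = speed * t + 0.5 * acceleration * t * t
    return start[0] + dx * s, start[1] + dy * s


# Falling trajectory from the top edge: no initial speed, constant downward
# acceleration (380 px/s^2 is an assumed screen-scaled gravity value).
print(position_at((100.0, 0.0), 0.0, 380.0, (0.0, 1.0), 1.0))   # (100.0, 190.0)

# Floating trajectory: zero acceleration, slow constant horizontal drift.
print(position_at((100.0, 300.0), 20.0, 0.0, (1.0, 0.0), 1.0))  # (120.0, 300.0)
```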
FIG. 3 is a schematic diagram of displaying a plurality of target words obtained after word segmentation of a barrage message, according to an embodiment of the present disclosure. Continuing the above example, the barrage message "我超喜欢吃番茄" yields the seven characters "我", "超", "喜", "欢", "吃", "番", "茄" after word segmentation; fig. 3 schematically shows the display effect in the live room's virtual scene when the motion trajectories of these seven target words are floating trajectories.
FIG. 4 illustrates a flowchart of displaying, according to the motion trajectory, one or more target words obtained after word segmentation of a barrage message, according to an embodiment of the present disclosure. As shown in fig. 4, step S250 of displaying the one or more target words according to the first motion trajectory may include: step S410, determining whether the first motion trajectory of the one or more target words overlaps with the position of the anchor user's displayed avatar; and step S420, in response to determining that the first motion trajectory of the one or more target words overlaps with the position of the anchor user's displayed avatar, configuring a preset display effect for the one or more target words for display during the overlap.
While a target word is displayed along the motion trajectory determined by the first preset configuration, the trajectory may come to overlap the current position of the anchor user's displayed avatar; referring to fig. 3, for example, the current motion trajectory of the character "欢" overlaps the current position of the displayed avatar. In this case, configuring a preset display effect for the target word during the overlap further ties the barrage message to the anchor user, expands the variety of interaction between the anchor user and viewer users, makes the broadcast more entertaining, strengthens the live room's stickiness for viewer users, and helps attract more viewer users into the live room.
FIG. 5 illustrates a flowchart of determining whether the motion trajectory of one or more target words obtained after word segmentation of a barrage message overlaps with the position of the anchor user's displayed avatar, according to an embodiment of the present disclosure. As shown in fig. 5, determining in step S410 whether the first motion trajectory of the one or more target words overlaps with the position of the anchor user's displayed avatar may include: step S510, generating a corresponding collision volume for each of the one or more target words and for the anchor user's displayed avatar; step S520, for each of the one or more target words, determining one or more first coordinates of the contour of the collision volume corresponding to that target word; step S530, determining one or more second coordinates of the contour of the collision volume corresponding to the anchor user's displayed avatar; and step S540, in response to determining that at least one of the one or more first coordinates is identical to at least one of the one or more second coordinates, determining that the first motion trajectory of the target word overlaps with the position of the anchor user's displayed avatar.
According to some embodiments of the present disclosure, a corresponding collision volume may be generated for each target word using the Unity engine. For example, the collision volume corresponding to a target word may be set as a sphere, in which case the coordinates of the contour of the spherical collision volume may be determined from the sphere's center and radius.
According to other embodiments of the present disclosure, in the case where the anchor user's displayed avatar is a virtual avatar, a corresponding collision volume may be generated for the avatar using the Unity engine. For example, the collision volume corresponding to the anchor user's avatar may be set as an ellipsoid, in which case the coordinates of the contour of the ellipsoidal collision volume may be determined based on its foci, major axis, and minor axis.
Fig. 6 shows, in a 2D scene, collision volumes 630 and 640 generated for the target word 610 "我", obtained by word segmentation of the barrage message, and for the avatar 620 of the anchor user, respectively. As shown in fig. 6, the collision volume 630 of the target word 610 may have a circular contour, and the collision volume 640 of the avatar 620 may have an elliptical contour. When the target word 610 moves along its motion trajectory, if the coordinates of any point on the contour of collision volume 630 coincide with those of any point on the contour of collision volume 640, it may be determined that the first motion trajectory of the target word overlaps with the position of the anchor user's avatar.
According to other embodiments of the present disclosure, in the case where the anchor user's displayed avatar is the actual image of a real-person anchor, the extent of the actual image may be determined through face and/or body recognition techniques, and a collision volume corresponding to the actual image may then be generated based on the determined extent.
It should be appreciated that the corresponding collision volumes may also be generated for each target word and for the anchor user's displayed avatar by other suitable methods, and are not limited to the Unity engine or the face and/or body recognition techniques of the examples above. It should also be appreciated that the collision volumes generated for the target words and the anchor user's displayed avatar are not themselves shown in the live interface. In addition, 2D or 3D models of the displayed avatar and the target words may be generated by other suitable algorithms, so that a corresponding collision effect is triggered when their contours overlap.
By generating a corresponding collision volume for each target word and for the anchor user's displayed avatar, a collision effect can be triggered whenever a target word's motion trajectory overlaps with the position of the displayed avatar, enabling direct interaction between the anchor user and viewer users according to the collision volumes' properties. Moreover, based on the properties of the target words' and the displayed avatar's collision volumes, a variety of collision effects can easily be realized for display.
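The contour-coordinate comparison of steps S510 to S540 can be sketched as below. The sampled contours and pixel tolerance are implementation assumptions (a production system would rely on engine colliders such as Unity's); a contour-contact test suffices here because a word moving along a continuous trajectory crosses the avatar's contour before entering it.

```python
import math

TWO_PI = 2 * math.pi


def circle_contour(cx, cy, r, n=64):
    """Sampled contour of a target word's circular collision volume."""
    return [(cx + r * math.cos(TWO_PI * k / n),
             cy + r * math.sin(TWO_PI * k / n)) for k in range(n)]


def ellipse_contour(cx, cy, a, b, n=64):
    """Sampled contour of the avatar's elliptical collision volume."""
    return [(cx + a * math.cos(TWO_PI * k / n),
             cy + b * math.sin(TWO_PI * k / n)) for k in range(n)]


def contours_overlap(first, second, tolerance=2.0):
    """Steps S520-S540: a first coordinate coinciding with a second coordinate
    (within a small pixel tolerance) means the trajectories overlap."""
    return any(math.dist(p, q) <= tolerance for p in first for q in second)


word_volume = circle_contour(120, 80, 16)            # e.g. collision volume 630
avatar_volume = ellipse_contour(200, 200, 60, 120)   # e.g. collision volume 640
print(contours_overlap(word_volume, avatar_volume))  # False: not yet touching
```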
Fig. 7 is a flowchart of configuring a preset display effect for one or more target words obtained after word segmentation of a barrage message, for display while their motion trajectory overlaps with the position of the anchor user's displayed avatar, according to an embodiment of the present disclosure. As shown in fig. 7, configuring the preset display effect for display during the overlap may include: step S710, for each of the one or more target words, changing the first motion trajectory of the target word in response to determining that it overlaps with the position of the anchor user's displayed avatar. For example, if the motion trajectory is a falling trajectory and the free-falling target word collides with the head of the anchor user's displayed avatar during its fall, the target word may be shown on the live interface deviating from its original trajectory due to the collision, for example deflecting to one side with a rebound effect (related to the target word's current speed, acceleration, etc.), before resuming free fall. If it again collides with the contour of the anchor user's displayed avatar during the subsequent fall, its motion trajectory can be changed further, until the target word reaches the bottom of the live interface.
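One way to realize such a deflection is to reflect the word's velocity about the contact normal with damping; a sketch with an assumed restitution factor.

```python
def rebound(velocity: tuple[float, float],
            contact_normal: tuple[float, float],
            restitution: float = 0.6) -> tuple[float, float]:
    """Deflect a word off the avatar: reflect the velocity about the unit
    contact normal and damp it, so the word bounces aside, loses some energy
    (restitution < 1, an assumed value), and then resumes free fall."""
    vx, vy = velocity
    nx, ny = contact_normal
    dot = vx * nx + vy * ny
    return ((vx - 2 * dot * nx) * restitution,
            (vy - 2 * dot * ny) * restitution)


# A word falling straight down hits the top of the avatar's head:
print(rebound((0.0, 300.0), (0.0, -1.0)))  # (0.0, -180.0): a damped upward bounce
```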
With continued reference to fig. 7, configuring the preset display effect for display during the overlap may further include: step S720, for each of the one or more target words, in response to determining that the first motion trajectory of the target word overlaps with the position of the anchor user's displayed avatar, configuring a first animation effect for the target word for display.
Fig. 8 schematically illustrates configuring an animation effect for the target word 810 "我" for display on the live interface 800 when the contour of its collision volume overlaps with the contour of the collision volume of the anchor user's displayed avatar 820. As shown in fig. 8, the animation effect may be one of collision sparks. It should be appreciated that, beyond the example shown in FIG. 8, animation effects configured for a target word may also include the word splitting apart, erupting into colored points of light, and the like. In addition, to further enhance viewers' immersion in the live content, an associated sound effect, such as a collision or bursting sound, may be played while the target word's animation effect is displayed.
With continued reference to fig. 7, configuring the preset display effect for display during the overlap may further include: step S730, for each of the one or more target words, in response to determining that the first motion trajectory of the target word overlaps with the position of the anchor user's displayed avatar, configuring a corresponding second animation effect for the anchor user's displayed avatar for display.
In the case where the anchor user's displayed avatar is a virtual avatar, the second animation effect may be an expression or action of the avatar. For example, with continued reference to FIG. 8, a squinting action configured for the displayed avatar 820 is schematically illustrated for when the contour of the collision volume of target word 810 overlaps with the contour of the collision volume of the displayed avatar 820. It should be appreciated that, beyond the example shown in fig. 8, animation effects configured for the anchor user's displayed avatar may also include closed eyes, a body tilt, a surprised or angry expression, and the like.
In the case where the anchor user's displayed avatar is the real image of a real-person anchor, the second animation effect may be an additional expression or action configured for that image. For example, when the contour of the target word's collision volume overlaps with the contour of the collision volume of the anchor user's displayed image, a squinting sticker may be overlaid on the real expression of the anchor. Further examples of additional expressions and actions are as described above.
In addition, to further enhance viewer users' immersion in the live content, an associated sound effect may be played while the animation effect of the anchor user's displayed avatar is shown, for example the anchor exclaiming "Oh, my goodness".
By configuring corresponding animation effects for when a target word's motion trajectory overlaps with the position of the anchor user's displayed avatar, collision effects can be simulated realistically, satisfying viewer users' desire to shape the live content and increasing their participation in the broadcast. This improves users' immersion during the broadcast and makes it more entertaining, thereby strengthening the stickiness of the live content for viewer users.
For illustrative and non-limiting purposes, FIG. 7 shows three operations performed when the motion trajectory of a target word overlaps with the position of the anchor user's displayed avatar, but it should be understood that each of these three operations may also be performed alone or in any combination.
According to some embodiments of the present disclosure, the live room interaction control method 200 may further include: detecting input of the anchor user associated with the one or more target words, and step S250 of displaying the one or more target words according to the first motion trajectory may include: for each of the one or more target words, in response to detecting input of the anchor user associated with that target word, configuring a third animation effect for the target word for display.
According to some embodiments, the anchor user's input may be detected as a touch (e.g., a click) on the target word on the live interface.
According to some embodiments, the third animation effect may include a grab effect, i.e., a springing animation of the target word triggered in response to detecting the anchor user's click on it. According to other embodiments, the third animation effect may include vanishing after a collision animation: when the target word's motion trajectory overlaps with the position of the anchor user's displayed avatar and triggers a collision effect, the anchor user may click the collided target word so that, after the corresponding collision animation is displayed, the word's motion trajectory is deleted and its display stops. According to further embodiments, the third animation effect may include a shake effect, i.e., a shaking (e.g., jittering) of the target word triggered in response to detecting the anchor user's click on it.
This further expands the diversity of interaction between viewer users and anchor users, while creating a more realistic virtual live scene for viewer users.
It should be appreciated that the above examples are shown for illustrative purposes only and not limitation; the third animation effect may also be a combination of two or more of the above examples, or another suitable preset animation effect of the live platform, and the scope of the presently claimed subject matter is not limited in this respect.
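Detecting which displayed word the anchor user clicked can be sketched as a point-in-collision-volume test; the types and field names are assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class DisplayedWord:
    text: str
    center: tuple[float, float]  # center of its circular collision volume
    radius: float


def hit_test(click: tuple[float, float], words: list[DisplayedWord]):
    """Return the word whose collision volume contains the click, if any;
    the caller then triggers the grab, shake, or vanish-after-collision effect."""
    for word in words:
        if math.dist(click, word.center) <= word.radius:
            return word
    return None
```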
According to some embodiments of the present disclosure, the live room interaction control method 200 may further include: storing the one or more target words to form a target word set ordered by the time of the barrage messages to which the target words correspond; acquiring a third preset configuration including a maximum number of displayed target words; and displaying the one or more target words based on the maximum number of displayed target words and the target word set.
In some embodiments, the order of barrage messages may be determined by the time at which viewer users send them or the time at which the anchor user receives them, and the target words obtained after segmenting each message may be stored in that order. The ordered target words may form a string queue for sequential display on the live interface.
In some embodiments, the maximum number of target words that can be displayed simultaneously in the live interface, e.g., 20, 50, or 100 words, may be set by the anchor user.
For example, consider barrage message 1, "我超喜欢吃番茄", and barrage message 2, "加油" ("go for it"). If message 1 is sent by its viewer user (or received by the anchor user) earlier than message 2, the target words obtained after segmenting the two messages are ordered as "我", "超", "喜", "欢", "吃", "番", "茄", "加", "油" and stored in the string queue in that order. When the number of currently displayed target words does not exceed the maximum number of displayed target words and the string queue is not empty, a thread may fetch the next target words from the queue in first-in-first-out order and configure motion trajectories and/or generate collision volumes for them as described above. Continuing the example, with these nine ordered target words in the queue, a maximum of 50 displayed target words, and 48 target words currently displayed in the live interface, the thread will take "我" and "超" from the queue in order, configure a motion trajectory and/or generate a collision volume for each, and display them in sequence on the live interface.
By applying a maximum number of displayed target words, excessive barrage content can be kept off the live interface, so the interface does not become cluttered and interfere with the live content.
According to some embodiments of the present disclosure, displaying the one or more target words based on the maximum number of displayed target words and the target word set may include: determining the number of currently displayed target words; and in response to determining that the sum of the number of currently displayed target words and the number of target words obtained after word segmentation of the barrage message to which the next target word to be displayed belongs exceeds the maximum number of displayed target words: stopping the display of some of the currently displayed target words; and displaying the target words obtained after word segmentation of the barrage message to which the next target word to be displayed belongs.
For example, suppose the maximum number of displayed target words set by the anchor user is 50, 48 target words are currently displayed in the live interface, and the barrage message to which the next target word to be displayed belongs is "我超喜欢吃番茄".
In this case, in some examples, since the 7 target words obtained after segmenting that barrage message (i.e., "我", "超", "喜", "欢", "吃", "番", "茄") plus the 48 currently displayed target words exceed the maximum of 50, the display of the 5 earliest-displayed target words may be stopped and the 7 segmented target words displayed in the live interface. By checking whether the sum of the number of currently displayed target words and the number of target words in the next message exceeds the maximum, it is ensured that all target words obtained from one barrage message can be displayed in the live interface at the same time, making the message's meaning easy for the anchor user and viewer users to understand.
In other examples, since the maximum number of displayed target words is 50, it is also possible to display only the first two of the 7 segmented target words, i.e., "我" and "超", and to display "喜" only once the earliest-displayed target word stops being displayed (e.g., because its display duration has elapsed), and so on.
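The string queue and display cap described above might be sketched as follows; the class and method names are assumptions, and the example reproduces the scenario of 48 displayed words and a 7-word incoming message.

```python
from collections import deque


class BarrageWordScheduler:
    """FIFO string queue of target words plus a cap on simultaneous display."""

    def __init__(self, max_displayed: int = 50):
        self.max_displayed = max_displayed
        self.pending = deque()    # target words ordered by barrage message time
        self.displayed = deque()  # earliest-displayed word at the left end

    def enqueue_message(self, target_words):
        self.pending.extend(target_words)

    def show_next_message(self, message_len: int):
        # Retire the earliest-displayed words until the whole message fits,
        # so all target words of one barrage message appear together.
        while len(self.displayed) + message_len > self.max_displayed:
            self.displayed.popleft()  # stop display; delete its trajectory
        shown = [self.pending.popleft()
                 for _ in range(min(message_len, len(self.pending)))]
        self.displayed.extend(shown)
        return shown  # caller configures trajectories / collision volumes


sched = BarrageWordScheduler(max_displayed=50)
sched.displayed.extend(f"w{i}" for i in range(48))  # 48 words already on screen
sched.enqueue_message(list("我超喜欢吃番茄"))          # 7 incoming target words
print(sched.show_next_message(7))  # retires the 5 earliest, displays all 7
```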
According to some embodiments of the present disclosure, the live room interaction control method 200 may further include: performing semantic analysis on the one or more target words; and displaying the one or more target words and/or updating the virtual background of the live room based on the result of the semantic analysis, the maximum number of displayed target words, and the target word set.
According to some embodiments, semantic analysis may be performed on the one or more target words through natural language processing (NLP). According to other embodiments, a trained neural network model may also be used to perform semantic analysis on the one or more target words.
By performing semantic analysis on the target words, the actual meaning of the barrage message sent by a viewer user can be obtained. On that basis, a more suitable motion trajectory, or a more suitable interaction effect with the anchor user's displayed avatar, can be configured for each target word. This further improves the operability of barrage messages, lets the anchor user learn viewer users' wishes in time, and improves the interactivity and liveliness of the broadcast.
According to some embodiments of the present disclosure, displaying the one or more target words and/or updating the virtual background of the live room based on the result of the semantic analysis, the maximum number of displayed target words, and the target word set may include: determining an avatar associated with the result of the semantic analysis; determining a second motion trajectory of the avatar according to a fourth preset configuration; and displaying the avatar based on the second motion trajectory while displaying the one or more target words.
According to some embodiments, a set of preset avatars may be obtained, and the semantic analysis result matched against each avatar in the set to determine the associated avatar.
According to some embodiments, the fourth preset configuration may include one or more of a speed, an acceleration, a direction of motion, a display duration, a start position, and a target position.
Continuing the earlier example in which the barrage message is "I like to eat tomatoes", after obtaining a semantic analysis result for the target word "like", an avatar associated with "like", such as a heart shape, may be determined and generated. Accordingly, while the motion trail of the target word "like" is displayed in the live interface, the heart-shaped avatar may fly out from the left side of the live interface and move according to, for example, the preset trajectory and motion parameters included in the fourth preset configuration. A transitional animation from the target word to the corresponding avatar may also be displayed: the target word "like" may fly out from the left of the live interface, move according to, for example, the preset trajectory and motion parameters included in the first preset configuration, then gradually morph into the heart-shaped avatar and interact with (e.g., collide with) the display image of the anchor user.
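A minimal sketch of how a semantic result might be mapped to an avatar and given a second motion trail under the fourth preset configuration; the mapping table, the dataclass fields, and the straight-line path below are illustrative assumptions, not details fixed by the disclosure.

```python
from dataclasses import dataclass

# Hypothetical mapping from semantic-analysis results to preset avatars.
SEMANTIC_TO_AVATAR = {"like": "heart", "applause": "clapping_hands", "sun": "sun"}

@dataclass
class AvatarTrajectory:            # stand-in for the fourth preset configuration
    speed: float = 120.0           # pixels per second
    direction: tuple = (1.0, 0.0)  # fly left-to-right across the interface
    start: tuple = (0.0, 300.0)    # enter from the left edge
    duration: float = 4.0          # seconds on screen

def avatar_for(semantic_result: str) -> str | None:
    """Match the semantic analysis result against the preset avatar set."""
    return SEMANTIC_TO_AVATAR.get(semantic_result)

def position_at(cfg: AvatarTrajectory, t: float) -> tuple[float, float]:
    """Second motion trail: a straight-line path derived from the config."""
    t = min(t, cfg.duration)  # the avatar stops being displayed afterwards
    return (cfg.start[0] + cfg.direction[0] * cfg.speed * t,
            cfg.start[1] + cfg.direction[1] * cfg.speed * t)

assert avatar_for("like") == "heart"
```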
By generating corresponding avatars for the target words, what a barrage message can express is expanded, further increasing the fun of the live broadcast.
According to some embodiments of the present disclosure, displaying the one or more target words and/or updating the virtual background of the live room based on the results of the semantic analysis, the maximum number of displayed target words, and the target word set may further include: updating one or more of the brightness and the sound of the live room.
In one example, if semantic analysis determines that the meaning of a target word is "sun", a motion trail crossing the live room from east to west may be configured for the target word. Meanwhile, the brightness of the live room may be continuously increased while the target word "sun" rises in the east and continuously decreased while it sets in the west. In another example, if semantic analysis determines that the meaning of a target word is "applause", an applause sound effect may be configured and the volume of the live room may be increased while the motion trail of the target word is displayed.
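The brightness coupling described above could look roughly like the following; the half-sine curve, the brightness range, and the normalized progress parameter are all assumptions made for this sketch.

```python
import math

def sun_brightness(progress: float) -> float:
    """Brightness of the live room as the 'sun' target word crosses it.

    `progress` runs from 0.0 (rising in the east) to 1.0 (setting in the
    west); brightness peaks at the top of the arc and is lowest at the
    horizon, following a half-sine curve between 0.2 and 1.0."""
    progress = max(0.0, min(1.0, progress))
    return 0.2 + 0.8 * math.sin(math.pi * progress)

# Halfway across the room the live room reaches full brightness.
assert abs(sun_brightness(0.5) - 1.0) < 1e-9
```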
By updating the brightness and/or the sound of the live room, the live content can be enriched along multiple dimensions, further improving the interactivity between audience users and the live content and enhancing user stickiness.
Fig. 9 illustrates a flowchart of a live room interaction control method 900 applied to a viewer client, in accordance with an embodiment of the present disclosure. As shown in fig. 9, the live room interaction control method 900 may include: step S910, transmitting a barrage message; step S920, receiving a live stream of the live room from the anchor client, the live stream including one or more target words associated with the barrage message and a motion trail of the one or more target words; and step S930, displaying the one or more target words based on the live stream, wherein the one or more target words are obtained according to the live room interaction control method of the present disclosure applied to the anchor client, and wherein the motion trail is determined according to the live room interaction control method of the present disclosure applied to the anchor client.
In some embodiments, the motion trajectories of the one or more target words may be configured and rendered at the anchor client, and a live stream containing the rendered target-word trajectories may then be transmitted to the viewer client. By pulling the live stream, an audience user can see the display effect of the motion trails of the target words.
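As a rough illustration of this split of responsibilities, the sketch below bakes the word overlay into frames at the anchor side and leaves the viewer side with nothing but display; every class, method, and the dictionary-based "frame" here are hypothetical, since the disclosure does not specify a streaming API.

```python
def render_words_onto(frame: dict, words: list[str]) -> dict:
    """Stub compositor: records which target words were baked into the frame."""
    return {**frame, "overlay_words": words}

class AnchorClient:
    """Anchor side: target words and their trajectories are rendered into
    the stream before it leaves the anchor client."""
    def __init__(self, viewers: list):
        self.viewers = viewers

    def publish_frame(self, frame: dict, words: list[str]) -> None:
        composited = render_words_onto(frame, words)
        for viewer in self.viewers:  # push the composited live stream
            viewer.on_stream_frame(composited)

class ViewerClient:
    """Viewer side: pulls and displays; the word animations arrive already
    baked into the frames, so no extra rendering logic is needed."""
    def on_stream_frame(self, frame: dict) -> None:
        print("displaying frame with overlay:", frame.get("overlay_words"))

anchor = AnchorClient(viewers=[ViewerClient()])
anchor.publish_frame({"frame_id": 1}, ["I", "like", "tomatoes"])
```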
It should be appreciated that the operations, features, elements, etc. included in the various steps of the live room interaction control method 900 may correspond to the operations, features, elements, etc. included in the various steps of the live room interaction control method 200, and have been described with reference to fig. 2, 4, 5, and 7. Thus, the advantages described above with respect to the live room interaction control method 200 apply equally to the live room interaction control method 900. For brevity, certain operations, features and advantages are not described in detail herein.
Fig. 10 illustrates a block diagram of a live room interaction control apparatus 1000 according to an embodiment of the present disclosure. As shown in fig. 10, the live room interaction control apparatus 1000 may include: an acquisition unit 1010 configured to acquire a barrage message of a live broadcasting room; a word segmentation unit 1020 configured to perform word segmentation processing on the acquired barrage message; an obtaining unit 1030 configured to obtain one or more target words based on a result of the word segmentation processing; a determining unit 1040 configured to determine a first motion trajectory of the one or more target words according to a first preset configuration; and a display unit 1050 configured to display the one or more target words according to the first motion trajectory.
According to some embodiments of the present disclosure, the live room interaction control apparatus 1000 may further include a first detection unit configured to detect a setting, by the anchor user, of a second preset configuration associated with the barrage message, and the acquisition unit 1010 may include a unit configured to acquire the barrage message corresponding to the second preset configuration in response to detecting the setting by the anchor user of the second preset configuration associated with the barrage message.
According to some embodiments of the present disclosure, the second preset configuration may include: selecting all barrage messages from the barrage messages; or selecting, from the barrage messages, the barrage messages from audience clients satisfying a preset condition.
According to some embodiments of the present disclosure, the first preset configuration may include one or more of the following: the speed, acceleration, direction of motion, and duration of display of each of the one or more target words.
According to some embodiments of the present disclosure, the first preset configuration may further include a start position of each of the one or more target words, and wherein the determining unit 1040 may include one or more of: a first determining subunit configured to determine a first motion trajectory of one or more target words as a falling motion trajectory; a second determining subunit configured to determine the first motion trajectory of the one or more target words as a floating motion trajectory; and a third determining subunit configured to determine the first motion trajectory of the one or more target words as a motion trajectory from a start position to a target position, wherein the target position is a position of a display image of the anchor user.
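For illustration, the three trajectory types named above might be parameterized as in the sketch below; the enum, the screen-coordinate convention (y grows downward), and the gravity and speed constants are assumptions of this example.

```python
from enum import Enum, auto

class TrajectoryKind(Enum):
    FALLING = auto()    # first determining subunit
    FLOATING = auto()   # second determining subunit
    TO_ANCHOR = auto()  # third determining subunit

def first_motion_position(kind: TrajectoryKind,
                          start: tuple[float, float],
                          target: tuple[float, float],
                          t: float,
                          speed: float = 80.0,
                          gravity: float = 980.0) -> tuple[float, float]:
    """Position of a target word at time `t` for each trajectory type."""
    x0, y0 = start
    if kind is TrajectoryKind.FALLING:
        return (x0, y0 + 0.5 * gravity * t * t)  # accelerate downward
    if kind is TrajectoryKind.FLOATING:
        return (x0, y0 - speed * t)              # drift upward at constant speed
    # TO_ANCHOR: move from the start position toward the position of the
    # display image of the anchor user, stopping on arrival.
    tx, ty = target
    dx, dy = tx - x0, ty - y0
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0
    step = min(speed * t, dist)
    return (x0 + dx / dist * step, y0 + dy / dist * step)
```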
According to some embodiments of the present disclosure, the display unit 1050 may include: an overlapping unit configured to determine whether the first motion trajectory of the one or more target words overlaps with the position of the display image of the anchor user; and a configuration unit configured to, in response to determining that the first motion trajectory of the one or more target words overlaps with the position of the display image of the anchor user, configure a preset display effect for the one or more target words for display while the overlap occurs.
According to some embodiments of the disclosure, the overlapping unit may include: a unit configured to generate a corresponding collision volume for each of the one or more target words and for the display image of the anchor user, respectively; a unit configured to determine, for each of the one or more target words, one or more first coordinates of a contour of the collision volume corresponding to the target word; a unit configured to determine one or more second coordinates of a contour of the collision volume corresponding to the display image of the anchor user; and a unit configured to determine that the first motion trajectory of the target word overlaps with the position of the display image of the anchor user in response to determining that at least one of the one or more first coordinates is the same as at least one of the one or more second coordinates.
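A toy version of this coordinate-overlap test, using axis-aligned bounding boxes as the collision volumes; the box representation and the integer-step contour sampling are simplifying assumptions of this sketch, and a production engine would more likely use a cheaper interval-intersection test rather than the literal coordinate comparison mirrored here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CollisionBox:
    """Axis-aligned bounding box standing in for a collision volume."""
    left: int
    top: int
    right: int
    bottom: int

    def contour(self) -> set[tuple[int, int]]:
        """Sample the outline of the box into discrete contour coordinates."""
        top_bottom = {(x, y) for x in range(self.left, self.right + 1)
                      for y in (self.top, self.bottom)}
        sides = {(x, y) for x in (self.left, self.right)
                 for y in range(self.top, self.bottom + 1)}
        return top_bottom | sides

def overlaps(word_box: CollisionBox, anchor_box: CollisionBox) -> bool:
    """Mirror the condition above: overlap exists if at least one first
    coordinate equals at least one second coordinate."""
    return bool(word_box.contour() & anchor_box.contour())

word = CollisionBox(10, 10, 30, 20)    # collision volume of a target word
anchor = CollisionBox(25, 15, 60, 80)  # collision volume of the anchor's image
print(overlaps(word, anchor))          # True: the contours share (30, 15)
```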
According to some embodiments of the present disclosure, the configuration unit may include: a unit configured to, for each of the one or more target words, in response to determining that the first motion trajectory of the target word overlaps with the position of the display image of the anchor user, perform one or more of the following: changing the first motion trajectory of the target word; configuring a first animation effect for the target word for display; and configuring a corresponding second animation effect for the display image of the anchor user for display.
According to some embodiments of the present disclosure, the live room interaction control apparatus 1000 may further include: a second detection unit configured to detect an input of the anchor user associated with the one or more target words, and the display unit 1050 may include: a unit configured to, for each of the one or more target words, configure a third animation effect for the target word for display in response to detecting the input of the anchor user associated with the target word.
According to some embodiments of the present disclosure, the live room interaction control apparatus 1000 may further include: a storage unit configured to store the one or more target words to form a target word set arranged according to the time sequence of the barrage messages corresponding to the one or more target words; an additional obtaining unit configured to obtain a third preset configuration including a maximum number of displayed target words; and a first additional display unit configured to display the one or more target words based on the maximum number of displayed target words and the target word set.
According to some embodiments of the disclosure, the first additional display unit may include: a unit configured to determine the number of currently displayed target words; and a unit configured to perform the following operations in response to determining that the sum of the number of currently displayed target words and the number of target words obtained after word segmentation processing of the barrage message to which the next target word to be displayed belongs exceeds the maximum number of displayed target words: stopping displaying a part of the currently displayed target words; and displaying the target words obtained after the word segmentation processing of the barrage message to which the next target word to be displayed belongs.
According to some embodiments of the present disclosure, the live room interaction control apparatus 1000 may further include: a semantic analysis unit configured to perform semantic analysis on one or more target words; and a second additional display unit configured to display the one or more target words and/or update a virtual background of the live room based on a result of the semantic analysis, the maximum number of target words displayed, and the target word set.
According to some embodiments of the disclosure, wherein the second additional display unit may include: a unit configured to determine an avatar associated with a result of the semantic analysis; a unit configured to determine a second motion trajectory of the avatar according to a fourth preset configuration; and a unit configured to simultaneously display the avatar based on the second motion trail while displaying the one or more target words.
According to some embodiments of the disclosure, wherein the second additional display unit may further include: and an updating unit configured to update one or more of brightness and sound of the live broadcasting room.
It should be appreciated that the various units of the apparatus 1000 shown in fig. 10 may correspond to the various steps S210-S260, S410-S420, S510-S540, and S710-S730 in the method 200 described with reference to fig. 2, 4, 5, and 7. Thus, the operations, features, and advantages described above with respect to the method 200 are equally applicable to the apparatus 1000 and the units it comprises. For brevity, certain operations, features, and advantages are not described in detail herein.
Fig. 11 illustrates a block diagram of a live room interaction control apparatus 1100 applied to a viewer client according to an embodiment of the present disclosure. As shown in fig. 11, the live room interaction control apparatus 1100 may include: a transmitting unit 1110 configured to transmit a barrage message; a receiving unit 1120 configured to receive a live stream of the live room from the anchor client, the live stream including one or more target words associated with the barrage message and a motion trail of the one or more target words; and a display unit 1130 configured to display the one or more target words based on the live stream, wherein the one or more target words are obtained according to the live room interaction control method of the present disclosure applied to the anchor client, and wherein the motion trail is determined according to the live room interaction control method of the present disclosure applied to the anchor client.
It should be appreciated that the various units of the apparatus 1100 shown in fig. 11 may correspond to the various steps S910-S930 in the method 900 described with reference to fig. 9. Thus, the operations, features, and advantages described above with respect to the method 900 are equally applicable to the apparatus 1100 and the units it comprises. For brevity, certain operations, features, and advantages are not described in detail herein.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various units described above with respect to fig. 10 and 11 may be implemented in hardware or in hardware combined with software and/or firmware. For example, these units may be implemented as computer program code/instructions configured to be executed by one or more processors and stored in a computer-readable storage medium. Alternatively, these units may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the acquisition unit 1010, the word segmentation unit 1020, the obtaining unit 1030, the determining unit 1040, and the display unit 1050, or one or more of the transmitting unit 1110, the receiving unit 1120, and the display unit 1130, may be implemented together in a system on chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to another aspect of the present disclosure, there is also provided an electronic device, including: at least one processor; and at least one memory communicatively coupled to the at least one processor, wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the above live room interaction control method applied to the anchor client or the above live room interaction control method applied to the viewer client.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above live room interaction control method applied to the anchor client or the above live room interaction control method applied to the viewer client.
According to another aspect of the present disclosure, there is also provided a computer program product including a computer program, wherein the computer program, when executed by a processor, implements the above live room interaction control method applied to the anchor client or the above live room interaction control method applied to the viewer client.
Referring to fig. 12, a block diagram of an electronic device 1200, which may serve as the anchor client or the viewer client of the present disclosure, will now be described; it is an example of a hardware device to which aspects of the present disclosure may be applied. Electronic devices may be different types of computer devices, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the electronic device 1200 may include at least one processor 1210, a working memory 1220, an input unit 1240, a display unit 1250, a speaker 1260, a storage unit 1270, a communication unit 1280, and other output units 1290 that can communicate with each other through a system bus 1230.
Processor 1210 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. Processor 1210 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor 1210 may be configured to obtain and execute computer readable instructions stored in the working memory 1220, the storage unit 1270, or other computer readable medium, such as program code of the operating system 1220a, program code of the application programs 1220b, and the like.
The working memory 1220 and storage unit 1270 are examples of computer-readable storage media for storing instructions that are executed by the processor 1210 to perform the various functions described above. The working memory 1220 may include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, the storage unit 1270 may include hard disk drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. The working memory 1220 and storage unit 1270 may both be referred to herein collectively as memory or computer-readable storage medium, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by the processor 1210 as a particular machine configured to implement the operations and functions described in the examples herein.
The input unit 1240 may be any type of device capable of inputting information to the electronic device 1200; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output units may be any type of device capable of presenting information and may include, but are not limited to, the display unit 1250 and the speaker 1260; the other output units 1290 may include, but are not limited to, video/audio output terminals, vibrators, and/or printers. The communication unit 1280 allows the electronic device 1200 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, 802.11 devices, Wi-Fi devices, WiMAX devices, cellular communication devices, and/or the like.
The application 1220b in the working memory 1220 may be loaded to perform the various methods and processes described above, such as steps S210-S260 in fig. 2, steps S410-S420 in fig. 4, steps S510-S540 in fig. 5, steps S710-S730 in fig. 7, and steps S910-S930 in fig. 9. For example, in some embodiments, the methods 200 and 900 described above may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1270. In some embodiments, some or all of the computer program may be loaded and/or installed onto the electronic device 1200 via the storage unit 1270 and/or the communication unit 1280. When the computer program is loaded and executed by the processor 1210, one or more steps of the methods 200 and 900 described above may be performed. Alternatively, in other embodiments, the processor 1210 may be configured to perform the methods 200 and 900 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is limited not by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (20)

1. A live broadcasting room interaction control method, applied to an anchor client, comprising:
acquiring a barrage message of a live broadcasting room;
performing word segmentation processing on the acquired barrage message;
obtaining one or more target words based on a result of the word segmentation processing;
determining a first motion trail of the one or more target words according to a first preset configuration; and
displaying the one or more target words according to the first motion trail.
2. The method of claim 1, further comprising:
detecting a setting, by the anchor user, of a second preset configuration associated with the barrage message, and
wherein acquiring the barrage message of the live broadcasting room comprises:
acquiring the barrage message corresponding to the second preset configuration in response to detecting the setting by the anchor user of the second preset configuration associated with the barrage message.
3. The method of claim 2, wherein the second preset configuration comprises:
selecting all barrage messages from the barrage messages; or
selecting, from the barrage messages, the barrage message from an audience client meeting a preset condition.
4. The method of claim 1, wherein the first preset configuration comprises one or more of the following: a speed, an acceleration, a direction of motion, and a duration of display of each of the one or more target words.
5. The method of claim 4, wherein the first preset configuration further includes a starting position of each of the one or more target words, and
wherein determining the first motion trail of the one or more target words according to the first preset configuration comprises one or more of:
determining the first motion trail of the one or more target words as a falling motion trail;
determining the first motion trail of the one or more target words as a floating motion trail; and
determining the first motion trail of the one or more target words as a motion trail from the starting position to a target position, wherein the target position is a position of a display image of the anchor user.
6. The method of claim 1, wherein displaying the one or more target words according to the first motion trail comprises:
determining whether the first motion trail of the one or more target words overlaps with a position of a display image of the anchor user; and
in response to determining that the first motion trail of the one or more target words overlaps with the position of the display image of the anchor user, configuring a preset display effect for the one or more target words for display while the overlap occurs.
7. The method of claim 6, wherein determining whether the first motion trail of the one or more target words overlaps with the position of the display image of the anchor user comprises:
generating a corresponding collision volume for each of the one or more target words and for the display image of the anchor user, respectively;
for each of the one or more target words, determining one or more first coordinates of a contour of the collision volume corresponding to the target word;
determining one or more second coordinates of a contour of the collision volume corresponding to the display image of the anchor user; and
in response to determining that at least one of the one or more first coordinates is the same as at least one of the one or more second coordinates, determining that the first motion trail of the target word overlaps with the position of the display image of the anchor user.
8. The method of claim 6 or 7, wherein, in response to determining that the first motion trail of the one or more target words overlaps with the position of the display image of the anchor user, configuring the preset display effect for the one or more target words for display while the overlap occurs comprises:
for each of the one or more target words, in response to determining that the first motion trail of the target word overlaps with the position of the display image of the anchor user, performing one or more of the following:
changing the first motion trail of the target word;
configuring a first animation effect for the target word for display; and
configuring a corresponding second animation effect for the display image of the anchor user for display.
9. The method of claim 1, further comprising: detecting an input of the anchor user associated with the one or more target words, and
wherein displaying the one or more target words according to the first motion trail comprises:
for each of the one or more target words, in response to detecting the input of the anchor user associated with the target word, configuring a third animation effect for the target word for display.
10. The method of claim 1, further comprising:
storing the one or more target words to form a target word set arranged according to a chronological order of the barrage messages corresponding to the one or more target words;
acquiring a third preset configuration comprising a maximum number of displayed target words; and
displaying the one or more target words based on the maximum number of displayed target words and the target word set.
11. The method of claim 10, wherein displaying the one or more target words based on the maximum number of displayed target words and the target word set comprises:
determining a number of currently displayed target words;
in response to determining that a sum of the number of currently displayed target words and a number of target words obtained after word segmentation processing of a barrage message to which a next target word to be displayed belongs exceeds the maximum number of displayed target words:
stopping displaying a part of the currently displayed target words; and
displaying the target words obtained after the word segmentation processing of the barrage message to which the next target word to be displayed belongs.
12. The method of claim 10 or 11, further comprising:
performing semantic analysis on the one or more target words; and
displaying the one or more target words and/or updating a virtual background of the live broadcasting room based on a result of the semantic analysis, the maximum number of displayed target words, and the target word set.
13. The method of claim 12, wherein displaying the one or more target words and updating the virtual background of the live broadcasting room based on the result of the semantic analysis, the maximum number of displayed target words, and the target word set comprises:
determining an avatar associated with a result of the semantic analysis;
determining a second motion trail of the avatar according to a fourth preset configuration; and
displaying the avatar based on the second motion trail while the one or more target words are displayed.
14. The method of claim 13, wherein displaying the one or more target words and updating the virtual background of the live broadcasting room based on the result of the semantic analysis, the maximum number of displayed target words, and the target word set further comprises:
updating one or more of brightness and sound of the live broadcasting room.
15. A live broadcasting room interaction control method, applied to a viewer client, comprising:
transmitting a barrage message;
receiving a live stream of the live broadcasting room from an anchor client, the live stream comprising one or more target words associated with the barrage message and a motion trail of the one or more target words; and
displaying the one or more target words based on the live stream,
wherein the one or more target words are obtained according to the method of any one of claims 1-14, and wherein the motion trail is determined according to the method of any one of claims 1-14.
16. A live broadcasting room interaction control apparatus, applied to an anchor client, comprising:
an acquisition unit configured to acquire a barrage message of a live broadcasting room;
a word segmentation unit configured to perform word segmentation processing on the acquired barrage message;
an obtaining unit configured to obtain one or more target words based on a result of the word segmentation processing;
a determining unit configured to determine a first motion trail of the one or more target words according to a first preset configuration; and
a display unit configured to display the one or more target words according to the first motion trail.
17. A live broadcasting room interaction control apparatus, applied to a viewer client, comprising:
a transmitting unit configured to transmit a barrage message;
a receiving unit configured to receive a live stream of the live broadcasting room from an anchor client, the live stream comprising one or more target words associated with the barrage message and a motion trail of the one or more target words; and
a display unit configured to display the one or more target words based on the live stream,
wherein the one or more target words are obtained according to the method of any one of claims 1-14, and wherein the motion trail is determined according to the method of any one of claims 1-14.
18. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the at least one processor,
Wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the method of any of claims 1-15.
19. A non-transitory computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method of any of claims 1-15.
20. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-15.
CN202211248531.3A 2022-10-12 2022-10-12 Live broadcasting room interaction control method and device, electronic equipment and storage medium Pending CN117915158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211248531.3A CN117915158A (en) 2022-10-12 2022-10-12 Live broadcasting room interaction control method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117915158A 2024-04-19

Family

ID=90695143

Country Status (1)

Country Link
CN (1) CN117915158A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination