CN117376592A - Display control method and device, display method and device, equipment and medium - Google Patents

Display control method and device, display method and device, equipment and medium

Info

Publication number
CN117376592A
CN117376592A (Application CN202210775134.5A)
Authority
CN
China
Prior art keywords
audience
virtual
user
display
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210775134.5A
Other languages
Chinese (zh)
Inventor
司志伟
刘华清
陈玮
朱超
李�权
王世薪
章怀宙
褚波
赵志鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202210775134.5A priority Critical patent/CN117376592A/en
Publication of CN117376592A publication Critical patent/CN117376592A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video

Abstract

The disclosure provides a live room display control method and apparatus, a live room display method and apparatus, an electronic device, and media. An implementation of the live room display control method is as follows: receiving configuration information associated with a virtual auditorium, the configuration information being set by an anchor user via an anchor client; configuring a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and providing the live stream configured with the virtual auditorium to an audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.

Description

Display control method and device, display method and device, equipment and medium
Technical Field
The disclosure relates to the field of Internet technology, and in particular to a live room display control method and apparatus, a live room display method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the continuous development of Internet technology and the progress of streaming media technology, online live streaming has attracted more and more attention from users. At present, common live streaming modes include interaction modes based on a real-person anchor, in which the anchor's picture is captured and the anchor interacts with the audience in the live room. In addition, with the diversification of live streaming modes, virtual live streaming based on avatars is also widely used. Compared with real-person live streaming, virtual live streaming can virtualize both the anchor and the live scene.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a live room display control method and apparatus, a live room display method and apparatus, an electronic device, a computer readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a live room display control method, applied to a server, comprising: receiving configuration information associated with a virtual auditorium, the configuration information being set by an anchor user via an anchor client; configuring a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and providing a live stream configured with the virtual auditorium to an audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.
According to another aspect of the present disclosure, there is also provided a live room display method, applied to an anchor client, comprising: detecting a setting of configuration information associated with a virtual auditorium by an anchor user; in response to detecting the setting of the configuration information by the anchor user, configuring, at the anchor client, a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and in response to detecting a selection of a function of the virtual auditorium by the anchor user, causing the anchor client to display a live room scene containing the virtual auditorium.
According to another aspect of the present disclosure, there is also provided a live room display control apparatus, applied to a server, comprising: a receiving unit configured to receive configuration information associated with a virtual auditorium, the configuration information being set by an anchor user via an anchor client; a configuration unit configured to configure a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and a providing unit configured to provide a live stream configured with the virtual auditorium to an audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.
According to another aspect of the present disclosure, there is also provided a live room display apparatus, applied to an anchor client, comprising: a detection unit configured to detect a setting of configuration information associated with a virtual auditorium by an anchor user; a configuration unit configured to configure, at the anchor client, a virtual auditorium of the live room based on the configuration information in response to detecting the setting of the configuration information by the anchor user, the virtual auditorium comprising an avatar of an audience user; and a display unit configured to cause the anchor client to display a live room scene containing the virtual auditorium in response to detecting a selection of a function of the virtual auditorium by the anchor user.
According to another aspect of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and at least one memory communicatively coupled to the at least one processor, wherein the at least one memory stores a computer program that when executed by the at least one processor implements a method according to the above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements a method according to the above.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements a method according to the above.
According to one or more embodiments of the present disclosure, an avatar corresponding to an audience user can be generated in the live room scene, enabling direct interaction between the virtual anchor and the audience user while also satisfying the audience user's desire to participate in creating live content, thereby enhancing the realism of live interaction and improving the user experience of the live platform.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
fig. 2 illustrates a flowchart of a live room display control method according to an embodiment of the present disclosure;
fig. 3A illustrates a schematic diagram of a live room scene including a virtual auditorium in accordance with an embodiment of the present disclosure;
fig. 3B illustrates a schematic diagram of a live room scene including a virtual auditorium in accordance with further embodiments of the present disclosure;
FIG. 4 illustrates a flow chart for configuring virtual auditoriums of a live room based on configuration information in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a flow chart for configuring virtual auditoriums of a live room based on configuration information in accordance with further embodiments of the present disclosure;
fig. 6 illustrates a flowchart of a live room display control method according to further embodiments of the present disclosure;
fig. 7 illustrates a flowchart of a live room display control method according to further embodiments of the present disclosure;
fig. 8 illustrates a flowchart of a live room display method according to an embodiment of the present disclosure;
fig. 9 illustrates a block diagram of a live room display control apparatus according to an embodiment of the present disclosure;
fig. 10 illustrates a block diagram of a live room display device according to an embodiment of the present disclosure; and
Fig. 11 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
With the continuous enrichment of live streaming categories, live streaming is no longer limited to real-person broadcasts; a virtual anchor can be generated based on various virtualization technologies and used to conduct virtual live streaming. The inventors have noticed that virtual live rooms created with current virtualization technologies only virtualize the anchor and the live scene, and that during a virtual broadcast the virtual anchor must follow a preset scene and flow. Because audience users in the virtual live room cannot participate in creating live content, the interaction rate between audience users and anchor users is low, which degrades the user experience of the live platform.
In view of this, embodiments of the present disclosure provide a live room display control method that enables direct interaction between the virtual anchor and audience users and satisfies the audience users' desire to participate in creating live content, thereby increasing the diversity of live interaction, improving its perceptibility, and further improving the user experience of the live platform.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, a system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications. In embodiments of the present disclosure, client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications that enable live room display methods as described in the present disclosure.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the live room display control method as described in the present disclosure.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
A user may initiate communication (e.g., via DoH or other type of network protocol) with server 120 using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-range servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host employing artificial intelligence technology. A cloud server is a host product in a cloud computing service system intended to overcome the drawbacks of high management difficulty and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 illustrates a flowchart of a live room display control method 200 applied to a server according to an embodiment of the present disclosure. As depicted in fig. 2, method 200 may include: step S210, receiving configuration information associated with the virtual auditorium, the configuration information being set by an anchor user through an anchor client; step S220, configuring a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and step S230, providing the live stream configured with the virtual auditorium to the audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.
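Purely as an illustrative sketch (not part of the claimed subject matter), the server-side flow of steps S210-S230 could be organized as follows; the class and member names (AuditoriumConfig, LiveRoomController, provide_stream, etc.) are assumptions made for this example only.

```python
from dataclasses import dataclass


@dataclass
class AuditoriumConfig:
    """Configuration set by the anchor user via the anchor client (hypothetical fields)."""
    arrangement: str = "horizontal"   # arrangement direction of the virtual auditorium
    seat_count: int = 10              # number of virtual auditorium seats
    bullet_screen_on: bool = True     # whether audience bullet screens are shown
    bullet_screen_seconds: int = 5    # display duration of each bullet screen
    component_opacity: float = 1.0    # opacity of auditorium components, 0..1


class LiveRoomController:
    """Server-side controller corresponding to steps S210-S230 (illustrative only)."""

    def __init__(self, live_room_id: str):
        self.live_room_id = live_room_id
        self.config = None
        self.auditorium = None

    def on_config_received(self, config: AuditoriumConfig) -> None:
        # Step S210: receive configuration information set by the anchor user.
        self.config = config
        # Step S220: configure the virtual auditorium (avatar seats of audience users).
        self.auditorium = self.build_auditorium(config)

    def build_auditorium(self, config: AuditoriumConfig) -> list:
        # Placeholder: in a real system, audience users would be selected and
        # their avatars assigned to seats here.
        return [None] * config.seat_count

    def provide_stream(self, base_stream: bytes) -> dict:
        # Step S230: provide the live stream configured with the virtual auditorium,
        # so the audience client renders a live room scene containing the auditorium.
        return {"stream": base_stream, "auditorium": self.auditorium, "config": self.config}
```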
By generating avatars corresponding to audience users in the live room scene, direct interaction between the virtual anchor and the audience users can be realized, the audience users' desire to participate in creating live content is satisfied, the distance between audience users and the anchor user is effectively shortened, and the interaction rate is increased. In addition, by displaying the interaction intuitively, the realism of live interaction can be enhanced, thereby improving the user experience of the live platform.
According to some embodiments, the configuration information associated with the virtual auditorium may be received in response to the anchor user enabling the virtual auditorium function of the live room. In this case, an on/off button for the virtual auditorium function is provided on the live interface. When the virtual auditorium function is turned off, only the visual effect of audience users watching the anchor's broadcast is provided. When the anchor user clicks the button to turn on the virtual auditorium function, configuration information for the virtual auditorium may be received via the anchor client so as to provide a live room scene containing the virtual auditorium.
According to other embodiments, the configuration information associated with the virtual auditorium may also be received in response to the anchor user opening the live room. In this case, the virtual auditorium function is on by default.
According to some embodiments, the configuration information in step S210 may include one or more of the following: virtual auditorium arrangement direction, number of virtual auditorium seats, virtual auditorium bullet screen on/off, virtual auditorium bullet screen display duration, virtual auditorium component opacity, and virtual auditorium text display.
The virtual auditorium arrangement direction indicates the direction in which the virtual auditorium is arranged in the live room, and may include, but is not limited to, arrangements in which the avatars are aligned horizontally, vertically, partially horizontally, partially vertically, circumferentially, etc. The number of virtual auditorium seats may be any number of avatars set according to the requirements of the anchor user. In some examples, the number of virtual auditorium seats may be associated with the arrangement direction. For example, when the arrangement direction is horizontal, the number of seats may be any number from 1 to 12; when the arrangement direction is vertical, the number of seats may be any number from 1 to 6, and so on. The virtual auditorium bullet screen on/off setting indicates whether bullet screens sent by audience users are displayed in the live room scene. When bullet screens are enabled, the display duration of each bullet screen in the live room may be configured via the bullet screen display duration, e.g., 1 second, 3 seconds, 5 seconds, 10 seconds, etc. The virtual auditorium component opacity indicates how transparently the avatars are displayed in the live room scene, and may be any value in the range of 0 to 100% set according to the requirements of the anchor user. The virtual auditorium text display indicates text display modes other than the bullet screen on the virtual auditorium, and may include ticker display, fade-in/fade-out display, and the like.
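As a non-limiting illustration of how these configuration items might constrain one another, the following sketch checks the seat count against the arrangement direction using the example ranges given above (1-12 for horizontal, 1-6 for vertical); the function name and value encoding are assumptions.

```python
def validate_seat_count(arrangement: str, seat_count: int) -> bool:
    """Check that the seat count matches the arrangement direction, using the
    example ranges above (1-12 horizontal, 1-6 vertical); other arrangements
    are not constrained in this sketch."""
    limits = {"horizontal": 12, "vertical": 6}
    max_seats = limits.get(arrangement)
    if max_seats is None:
        return True  # circumferential or mixed arrangements: no limit assumed here
    return 1 <= seat_count <= max_seats


# Example: a horizontal auditorium with 10 seats is acceptable, a vertical one is not.
assert validate_seat_count("horizontal", 10)
assert not validate_seat_count("vertical", 10)
```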
According to some embodiments, the above configuration information set by the anchor user may be received via the anchor client before the broadcast begins. According to other embodiments, updated configuration information may also be received from the anchor user via the anchor client during the broadcast, and the configuration of the virtual auditorium updated accordingly. According to further embodiments, the latest configuration information received via the anchor client during the broadcast may be stored in memory, and the stored configuration information may be used to configure the virtual auditorium of the live room the next time the anchor user starts a broadcast, without requiring the anchor user to set it again.
By receiving one or more items of the configuration information, an appropriate virtual auditorium can be configured for display based on the requirements of the anchor user, thereby meeting the different needs of different anchor users and different live room scenes. In addition, by receiving configuration information that the anchor user updates in real time during the broadcast, a more suitable virtual auditorium can be configured according to the interaction with audience users, thereby enhancing the audience users' interactive experience. For example, when the number of audience users entering the live room increases rapidly, a better live view may be achieved by increasing the number of virtual auditorium seats, changing the arrangement direction to a circumferential arrangement, or reducing the bullet screen display duration, etc. As another example, when the live room scene changes, audience users may better view the updated scene if the number of virtual auditorium seats is reduced, the arrangement direction is changed to vertical, or the virtual auditorium bullet screen is turned off, and so on.
Fig. 3A and 3B illustrate schematic diagrams of live room scenes 300A and 300B, respectively, containing virtual auditoriums, according to embodiments of the disclosure. As shown in fig. 3A, the avatar 310 corresponding to the anchor user is located in the middle of the live room scene, the virtual auditorium 320 is arranged horizontally, and the virtual auditorium includes avatars 330 corresponding to 10 audience users. Similarly, as shown in fig. 3B, the avatar 340 corresponding to the anchor user is located in the middle of the live room scene, the virtual auditorium 350 is arranged vertically, and the virtual auditorium includes avatars 360 corresponding to 4 audience users.
Although identical avatars are shown in the virtual auditoriums of fig. 3A and 3B, it will be appreciated that the avatars corresponding to different audience users may differ from one another.
According to some embodiments, step S220, configuring the virtual auditorium of the live room based on the configuration information, may include: for each avatar in the virtual auditorium, acquiring position information and/or image information of the avatar, wherein the position information indicates the display position of the avatar in the virtual auditorium, and the image information indicates the displayed image and the display size of the avatar in the virtual auditorium; and setting the display position and/or the displayed image based on the configuration information.
The displayed image of the avatar may be provided through the live platform. In some examples, avatars may be randomly generated by face assembly (e.g., the AcFun face capture assistant) based on virtualization techniques such as Live2D, VRM, Unity, or UE, and if the avatar generated by the live platform does not meet the audience user's needs, the audience user may change the displayed image of the avatar corresponding to that audience user by sending bullet screen information such as "change it", so as to meet the audience user's personalization needs. In other examples, a particular avatar may be configured for a particular audience user and stored in memory, both to highlight that audience user and to reduce the amount of computation and time needed to generate the avatar.
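A minimal sketch, assuming hypothetical data structures, of how a per-avatar seat record might store the display position and displayed image described in step S220 and how a "change it" bullet screen could re-roll the displayed image; none of these names come from the disclosure.

```python
import random
from dataclasses import dataclass


@dataclass
class AvatarSlot:
    user_id: str
    seat_index: int          # display position in the virtual auditorium
    appearance_id: str       # displayed image of the avatar
    scale: float = 1.0       # display size

AVAILABLE_APPEARANCES = ["appearance_a", "appearance_b", "appearance_c"]


def on_change_request(slot: AvatarSlot) -> AvatarSlot:
    """Re-roll the displayed image when the audience user sends a
    'change it' bullet screen, keeping the seat position unchanged."""
    choices = [a for a in AVAILABLE_APPEARANCES if a != slot.appearance_id]
    slot.appearance_id = random.choice(choices)
    return slot


# Example usage: user "u42" asks for a different look.
slot = AvatarSlot(user_id="u42", seat_index=3, appearance_id="appearance_a")
print(on_change_request(slot).appearance_id)  # one of the other appearances
```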
Fig. 4 illustrates a flow chart for configuring the virtual auditorium of a live room based on configuration information in accordance with an embodiment of the present disclosure. According to some embodiments, the configuration information includes the number of virtual auditorium seats, and the avatars correspond to selected audience users entering the live room. In this case, as shown in fig. 4, configuring the virtual auditorium of the live room based on the configuration information in step S220 may include: step S410, ranking the avatars corresponding to the selected audience users according to a first ranking rule to obtain a first ranking result; and step S420, configuring the virtual auditorium based on the number of seats and the first ranking result.
A selected audience user may be an audience user who satisfies a preset rule. According to some examples, when an audience user has sent a certain number of bullet screens on the live platform (e.g., 50, 100, 200, etc.), that audience user is set as a selected audience user. Accordingly, the avatars corresponding to the selected audience users may be ranked by the number of bullet screens sent. According to other examples, when an audience user's cumulative gift amount on the live platform reaches a certain level (e.g., 100 yuan, 500 yuan, 1000 yuan, etc.), that audience user is set as a selected user. Accordingly, the avatars corresponding to the selected audience users may be ranked by gift amount from high to low. According to yet other examples, an audience user is set as a selected user when the audience user has watched live broadcasts on the live platform for a certain length of time (e.g., 50 hours, 100 hours, 200 hours, etc.). Accordingly, the avatars corresponding to the selected audience users may be ranked by viewing time from longest to shortest.
It should be appreciated that the above preset rules are shown for illustrative purposes only; a combination of these rules, or other rules suitable for the live platform, may also be used. The scope of the presently claimed subject matter is not limited in this respect.
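By way of illustration only, one of the example first ranking rules above (selection and ranking by number of bullet screens sent) could be sketched as follows; the threshold value and field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class AudienceStats:
    user_id: str
    bullet_screens_sent: int
    gift_amount_yuan: float
    watch_hours: float


def select_and_rank(audience: list, bullet_threshold: int = 100) -> list:
    """Pick selected audience users (those meeting the preset rule) and rank
    them by bullet screens sent, most first - one of the example rules above."""
    selected = [a for a in audience if a.bullet_screens_sent >= bullet_threshold]
    return sorted(selected, key=lambda a: a.bullet_screens_sent, reverse=True)


# Example: only "u1" passes the 100-bullet-screen rule in this tiny sample.
fans = [AudienceStats("u1", 120, 0.0, 10.0), AudienceStats("u2", 80, 500.0, 30.0)]
print([a.user_id for a in select_and_rank(fans)])  # ['u1']
```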
By configuring the virtual auditorium to include the avatars corresponding to selected audience users, the exposure of audience users who satisfy the preset rules can be increased, which helps attract more selected audience users to enter the live room, prolongs the time audience users stay in the live room, and raises the enthusiasm of audience users in the live room, thereby improving user retention and live room traffic.
Fig. 5 illustrates a flow chart for configuring the virtual auditorium of a live room based on configuration information in accordance with further embodiments of the present disclosure. According to some embodiments, the configuration information includes the number of virtual auditorium seats, one or more first avatars among the avatars correspond to selected audience users entering the live room, and one or more second avatars among the avatars correspond to random audience users entering the live room. In this case, configuring the virtual auditorium of the live room based on the configuration information in step S220 may include: step S510, ranking the one or more first avatars according to a first ranking rule to obtain a first ranking result; step S520, ranking the one or more second avatars according to a second ranking rule different from the first ranking rule to obtain a second ranking result; and step S530, configuring the virtual auditorium based on the number of seats, the first ranking result, and the second ranking result.
According to some embodiments, in step S510, the selected audience users and the first ranking rule may be similar to those described with reference to fig. 4. For brevity, they are not described again here.
According to some embodiments, in step S520, the second ranking rule may be based on the random order in which audience users enter the live room. Accordingly, the avatars corresponding to random audience users may be ranked such that the avatar of a random audience user who enters the live room later is placed before the avatar of a random audience user who entered earlier.
According to some embodiments, the number of first avatars and the number of second avatars set via the anchor client may be received, such that the sum of the number of first avatars and the number of second avatars equals the number of virtual auditorium seats.
By configuring the virtual auditorium to include both first avatars corresponding to selected audience users and second avatars corresponding to random audience users, both selected audience users and random audience users can be attracted into the live room at the same time, which is particularly advantageous for anchor users who are new to the live platform, for example to increase live room traffic.
According to some embodiments, in step S530, the one or more first avatars may be positioned before the one or more second avatars in the virtual auditorium. This ensures exposure for selected audience users while also attracting random audience users into the live room.
According to some embodiments, step S220, configuring the virtual auditorium of the live room based on the configuration information, may further include: after step S520 and before step S530, determining whether there is overlap between the selected audience users corresponding to the one or more first avatars and the random audience users corresponding to the one or more second avatars; and in response to determining that there is such overlap, removing the second avatar corresponding to the duplicated random audience user from the second ranking result.
In some examples, an audience user entering the live room may be a selected audience user who satisfies the preset rules and at the same time be chosen by the live platform as a random audience user. In this case, by removing the second avatar corresponding to the duplicated random audience user, displaying two avatars for the same audience user in the virtual auditorium can be avoided, thereby providing a more intuitive and realistic interactive experience for more audience users.
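The combination of steps S510-S530 together with the de-duplication described above might look like the following sketch, in which user identifiers stand in for avatars; the function signature and the split between first and second avatars are assumptions for illustration.

```python
def fill_auditorium_seats(selected_ranked: list, random_ranked: list,
                          seat_count: int, first_count: int) -> list:
    """Fill the virtual auditorium: first avatars (selected users) come before
    second avatars (random users); a duplicated user keeps only the first avatar."""
    first = selected_ranked[:first_count]
    taken = set(first)
    # De-duplication step: drop random users that duplicate a selected user.
    second = [u for u in random_ranked if u not in taken]
    return first + second[: seat_count - len(first)]


# Example: user "u2" is both selected and random; it appears only once.
print(fill_auditorium_seats(["u1", "u2"], ["u3", "u2", "u4"],
                            seat_count=4, first_count=2))
# -> ['u1', 'u2', 'u3', 'u4']
```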
According to some embodiments, step S220, configuring the virtual auditorium of the live room based on the configuration information, may further comprise one or more of: updating the first ranking result according to a first preset rule; and updating the second ranking result according to a second preset rule different from the first preset rule.
By updating the order of the avatars corresponding to selected audience users and/or the order of the avatars corresponding to random audience users, the virtual auditorium can be configured in real time according to the audience users entering the live room, and the avatars corresponding to audience users who have left the live room can be removed in time, which raises the enthusiasm of audience users in the live room and improves user retention.
In some examples, the order of the avatars corresponding to selected audience users on the virtual auditorium may be updated every predetermined period of time, such as every 3 seconds, every 10 seconds, every minute, etc. In other examples, the order of the avatars corresponding to random audience users on the virtual auditorium may be updated each time an audience user enters the live room. It should be appreciated that the ordering of the avatars may also be updated according to any other suitable preset rule to increase the interaction between the anchor user and audience users and improve user retention.
Fig. 6 illustrates a flow chart of a live room display control method 600 according to further embodiments of the present disclosure. As shown in fig. 6, method 600 may include: steps S610-S630, identical or similar to steps S210-S230 in fig. 2; step S640, receiving, via the anchor client, one or more inputs from the anchor user to an avatar in the virtual auditorium; and step S650, updating the virtual auditorium in response to receiving the one or more inputs.
According to some embodiments, the anchor user may apply one or more inputs to an avatar in the virtual auditorium via one or more input devices, such as a mouse or a touch display screen, which are transmitted to a server side, such as server 120, via a network 110 such as that shown in FIG. 1. In this case, the one or more inputs may include tapping, zooming in, zooming out, dragging, long-pressing, etc. on the avatar.
According to other embodiments, one or more images associated with the anchor may be captured by a camera of the anchor client, one or more inputs by the anchor user to an avatar in the virtual auditorium may be determined based on the one or more images, and the inputs may then be transmitted to a server side, such as server 120, through a network 110 such as that shown in FIG. 1. For example, in virtual reality (VR) technology, after an image associated with the anchor is captured by the camera, the corresponding one or more inputs may be determined by recognizing a particular gesture, action, or posture of the anchor user in the image. In this way, one or more inputs of the anchor user can be recognized more accurately and flexibly.
By receiving the anchor user's input operations on the avatars and updating the virtual auditorium accordingly, direct interaction with audience users can be realized and the audience's immersive experience improved.
Although steps S610-S650 are depicted in fig. 6 in a particular order, this should not be construed as requiring that the steps be performed in the particular order shown or in sequential order. For example, step S640 may be performed in parallel with step S630.
According to some embodiments, step S650, updating the virtual auditorium in response to receiving the one or more inputs, may include one or more of: changing the display size of one or more of the avatars; changing the display position of one or more of the avatars; changing an animation effect corresponding to one or more of the avatars; and changing the displayed image of one or more of the avatars.
According to some examples, when an audience user sends a bullet screen, in response to receiving a tap, zoom-in, or zoom-out operation by the anchor user via an input device, or a corresponding gesture, action, or posture captured via the camera, the corresponding avatar may be enlarged or shrunk, indicating, for example, that the anchor user is talking to that audience user or has finished talking to that audience user. According to other examples, in response to receiving a drag operation by the anchor user via an input device, or a corresponding gesture, action, or posture captured by the camera, the position of the corresponding avatar may be changed. For example, the position of one or more avatars in the virtual auditorium may be changed to the center of the live room scene or to an empty spot around the virtual anchor, indicating, for example, that the anchor user is talking to the corresponding audience user or that the corresponding audience user has given a gift in the live room, thereby increasing that audience user's exposure. According to still other examples, in response to receiving a long-press operation by the anchor user via an input device, or a corresponding gesture, action, or posture captured by the camera, the animation effect of the corresponding avatar may be changed, e.g., making the avatar rotate, displaying hearts behind the avatar, etc. According to still other examples, the displayed image of the avatar may also be changed based on the received one or more inputs, such as adding a decoration to the avatar, changing the avatar's hair color, etc.
It should be understood that the above examples are shown for illustrative purposes only, and two or more of the above examples may also be implemented simultaneously, e.g., moving the avatar to the center of the live room scene or to an empty spot around the virtual anchor, enlarging the avatar, and displaying a heart animation behind the avatar, etc. The scope of the presently claimed subject matter is not limited in this respect.
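One hypothetical way to picture the input-to-update mapping in the examples above (tap/zoom changes size, drag changes position, long press changes the animation effect) is a simple dispatch function; the event and effect names are invented for this sketch and are not part of the disclosure.

```python
def apply_anchor_input(slot: dict, event: dict) -> dict:
    """Update a single avatar slot in response to an anchor input event.
    `slot` holds 'scale', 'position', 'animation', and 'appearance' keys."""
    kind = event["type"]
    if kind in ("zoom_in", "zoom_out"):
        factor = 1.25 if kind == "zoom_in" else 0.8
        slot["scale"] = round(slot["scale"] * factor, 3)
    elif kind == "drag":
        slot["position"] = event["target_position"]   # e.g. center of the scene
    elif kind == "long_press":
        slot["animation"] = "rotate_with_hearts"      # example animation effect
    elif kind == "tap":
        slot["animation"] = "highlight"
    return slot


# Example: dragging an avatar toward the center of the live room scene.
slot = {"scale": 1.0, "position": (0.8, 0.9), "animation": None, "appearance": "a"}
print(apply_anchor_input(slot, {"type": "drag", "target_position": (0.5, 0.5)}))
```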
According to other embodiments, the method 600 may further comprise: receiving bullet screen information sent by an audience user through an audience client; and updating the avatar in the virtual auditorium corresponding to that audience user in response to receiving the bullet screen information. For example, as described above, if the avatar generated by the live platform or configured by the anchor user does not meet the audience user's needs, the audience user may send bullet screen information such as "change it" to change the displayed image of the avatar corresponding to that audience user, so as to meet the audience user's personalization needs.
Fig. 7 illustrates a flow chart of a live room display control method 700 according to further embodiments of the present disclosure. As shown in fig. 7, method 700 may include: steps S710-S730, identical or similar to steps S210-S230 in fig. 2; step S740, for each avatar in the virtual auditorium, determining the real-time state in the live room of the audience user corresponding to that avatar; and step S750, configuring a corresponding animation effect for the avatar and/or the anchor's avatar according to the real-time state in the live room of the audience user corresponding to that avatar.
According to some examples, the real-time state of an audience user in the live room may be a standby state, meaning that no operation has been received from that audience user via the audience client for a period of time such as 1 minute, 5 minutes, 10 minutes, etc. In this case, a blinking animation effect may be configured for the avatar corresponding to that audience user. According to other examples, the real-time state of an audience user in the live room may be a gift-giving state. In this case, an animation effect such as raised hands may be configured for the avatar corresponding to that audience user. According to yet other examples, the real-time state of an audience user in the live room may be the state of having given a gift such as sunglasses. In this case, an animation effect such as wearing sunglasses may be configured for the anchor's avatar.
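A small sketch, under the assumption that audience states are reported as simple labels, of the state-to-animation mapping illustrated above; the state and effect names are placeholders, not part of the disclosure.

```python
# Example mapping from an audience user's real-time state to animation effects,
# following the examples above; all names here are hypothetical.
STATE_EFFECTS = {
    "standby":         {"audience_avatar": "blink"},
    "gift_sent":       {"audience_avatar": "raise_hands"},
    "gift_sunglasses": {"anchor_avatar": "wear_sunglasses"},
}


def effects_for_state(state: str) -> dict:
    """Return the animation effects to apply for the given audience state."""
    return STATE_EFFECTS.get(state, {})


print(effects_for_state("gift_sunglasses"))  # {'anchor_avatar': 'wear_sunglasses'}
```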
By configuring, for the audience user's avatar and/or the anchor user's avatar, animation effects corresponding to the audience user's real-time state, the anchor user can be helped to understand the state or needs of audience users in real time, so that the anchor can establish an interactive relationship with a specific audience user or adjust the live content in time, while further enhancing the fun of live interaction.
According to some embodiments, the live room display control methods 200, 600 and/or 700 described above with reference to figs. 2, 6 and 7 may further include: for each audience user entering the live room, acquiring level information of that audience user; determining whether the level information of the audience user meets a preset level threshold; and in response to determining that the level information of the audience user meets the preset level threshold, configuring for that audience user an entrance special effect corresponding to the level information, wherein the entrance special effect is provided to the audience client as part of the live stream such that the live room scene displayed at the audience client is updated.
According to some examples, different audience users may be assigned levels, e.g., 0-10, 10-20, 20-30, etc., according to their data on the live platform. The data may include, for example, but is not limited to, live viewing time, bullet screens sent, gift-giving information, recharge information, and the like. The preset level threshold may be any suitable value preset by the live platform or configured by the anchor user via the anchor client. When an audience user's level is higher than the preset level threshold, an entrance special effect corresponding to that level is configured for the audience user. For example, when an audience user of level 0-20 enters the live room, the entrance is only announced by a broadcast message or displayed as text, with no entrance special effect; when an audience user of level 20-30 enters the live room, a specific entrance special effect is displayed, such as the audience user arriving in the live room on a ship.
It should be noted that, unlike the entrance special effects displayed in traditional live broadcasts, in embodiments of the present disclosure an entrance special effect configured for an audience user may cause the live room scene to be updated. For example, in the above example of audience users of level 20-30, the entrance special effect of arriving on a ship may produce a special effect in which fixed scenery in the live room, such as a room or classroom, is washed away by sea waves.
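The level-threshold logic for entrance special effects described above could be pictured as follows; the threshold, effect names, and scene-update label are assumptions used only for illustration.

```python
def entrance_effect(user_level: int, level_threshold: int = 20) -> dict:
    """Decide the entrance presentation for an audience user entering the live room,
    following the example above: below the threshold only a text announcement,
    at or above it a special effect that also updates the live room scene."""
    if user_level < level_threshold:
        return {"announce": "text_only", "scene_update": None}
    return {"announce": "ship_arrival", "scene_update": "scene_washed_by_waves"}


print(entrance_effect(25))
# {'announce': 'ship_arrival', 'scene_update': 'scene_washed_by_waves'}
```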
By configuring different entrance special effects for different audience users and updating the live room scene accordingly, the appeal and exposure of a specific audience user's entrance special effect can be improved, making it easier to attract higher-level audience users into the live room. At the same time, updating the live room scene makes the live interaction more perceptible, further improving the user experience.
According to some embodiments, the live room display control methods 200, 600 and/or 700 described above with reference to figs. 2, 6 and 7 may further include: receiving, in response to the configuration information indicating that the virtual auditorium bullet screen is enabled, bullet screen information sent by audience users corresponding to the avatars in the virtual auditorium; configuring a corresponding display-layer priority for each item of bullet screen information based on the order in which the bullet screen information is received, wherein bullet screen information received later has a higher display-layer priority than bullet screen information received earlier; and configuring the display order of the bullet screen information based on the display-layer priorities.
By giving bullet screen information received later a higher display-layer priority than bullet screen information received earlier, later bullet screens are displayed over earlier ones. This prevents a bullet screen from disappearing before the anchor user has seen it and facilitates interaction between the anchor user and the audience user who sent it. This is particularly advantageous when the virtual auditorium is arranged horizontally, because in a horizontal virtual auditorium the spacing between avatars may not be sufficient to fully display, at the same time, the bullet screen information sent by the audience users corresponding to several avatars. If no display-layer priority were set, bullet screen information received later could be covered by bullet screen information received earlier, causing the anchor user to miss it.
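A minimal sketch of the display-layer priority assignment described above, assuming each bullet screen arrives with a receive time; later bullet screens receive a higher layer number so that they are drawn on top.

```python
def assign_layers(bullet_screens: list) -> list:
    """Assign display-layer priorities so that a bullet screen received later is
    drawn on top of one received earlier, as described above.
    Each input item is a (receive_time, text) tuple."""
    ordered = sorted(bullet_screens, key=lambda b: b[0])  # earliest first
    return [
        {"text": text, "receive_time": t, "layer": layer}
        for layer, (t, text) in enumerate(ordered, start=1)
    ]


# Example: the bullet screen received at t=12 gets layer 2 and covers the
# one received at t=10 (layer 1) when they overlap in a horizontal auditorium.
print(assign_layers([(12, "hello anchor"), (10, "first!")]))
```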
Fig. 8 illustrates a flowchart of a live room display method 800 applied to an anchor client according to an embodiment of the present disclosure. As depicted in fig. 8, method 800 may include: step S810, detecting a setting of configuration information associated with the virtual auditorium by the anchor user; step S820, in response to detecting the setting of the configuration information by the anchor user, configuring, at the anchor client, a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and step S830, in response to detecting a selection of the function of the virtual auditorium by the anchor user, causing the anchor client to display a live room scene containing the virtual auditorium.
By displaying avatars corresponding to audience users in the live room scene, direct interaction between the virtual anchor and audience users can be realized, the audience users' desire to participate in creating live content is satisfied, the distance between audience users and the anchor user is effectively shortened, and the interaction rate is increased. In addition, by displaying the interaction intuitively, the realism of live interaction can be enhanced, thereby improving the user experience of the live platform.
According to some embodiments, the configuration information may include one or more of the following: virtual auditorium arrangement direction, number of virtual auditorium seats, virtual auditorium bullet screen on/off, virtual auditorium bullet screen display duration, virtual auditorium component opacity, and virtual auditorium text display.
According to some embodiments, the method 800 may further comprise: detecting one or more inputs by the anchor user to an avatar in the virtual auditorium; and in response to detecting the one or more inputs, causing the anchor client to display the updated virtual auditorium.
According to some embodiments, detecting one or more inputs by the anchor user to an avatar in the virtual auditorium may include: detecting one or more inputs of the anchor user via one or more input devices of the anchor client, or detecting one or more inputs of the anchor user based on one or more images captured via a camera of the anchor client.
According to some embodiments, displaying the updated virtual auditorium may include one or more of: displaying a virtual auditorium in which the display size of one or more avatars has changed; displaying a virtual auditorium in which the display position of one or more avatars has changed; displaying a virtual auditorium in which an animation effect corresponding to one or more avatars has changed; and displaying a virtual auditorium in which the displayed image of one or more avatars has changed.
According to some embodiments, wherein the configuration information includes the number of virtual auditorium seats and the avatars correspond to selected audience users entering the live room, causing the anchor client to display the live room scene containing the virtual auditorium may include: causing the anchor client to display the virtual auditorium based on the number of seats and a first ranking result, the first ranking result indicating the order of the avatars corresponding to the selected audience users.
According to some embodiments, wherein the configuration information includes the number of virtual auditorium seats, one or more first avatars among the avatars correspond to selected audience users entering the live room, and one or more second avatars among the avatars correspond to random audience users entering the live room, causing the anchor client to display the live room scene containing the virtual auditorium may include: causing the anchor client to display the virtual auditorium based on the number of seats, a first ranking result indicating the order of the one or more first avatars, and a second ranking result indicating the order of the one or more second avatars.
According to some embodiments, in the virtual auditorium, the one or more first avatars may be displayed before the one or more second avatars.
According to some embodiments, the method 800 may further comprise: for each avatar in the virtual auditorium, determining the real-time state in the live room of the audience user corresponding to that avatar; and causing the anchor client to display a corresponding animation effect for the avatar according to the real-time state in the live room of the audience user corresponding to that avatar.
It should be appreciated that operations, features, elements, etc. included in the respective steps of the live room display method 800 may correspond to operations, features, elements, etc. included in the respective steps of the live room display control methods 200, 600, and 700 described with reference to figs. 2, 6, and 7. Thus, the advantages described above with respect to the live room display control methods 200, 600, and 700 apply equally to the live room display method 800. For brevity, certain operations, features, and advantages are not described in detail herein.
Fig. 9 illustrates a block diagram of a live room display control apparatus 900 according to an embodiment of the present disclosure. As illustrated in fig. 9, the live room display control apparatus 900 may include: a receiving unit 910 configured to receive configuration information associated with a virtual auditorium, the configuration information being set by an anchor user via an anchor client; a configuration unit 920 configured to configure the virtual auditorium of the live room based on the configuration information, the virtual auditorium including an avatar of an audience user; and a providing unit 930 configured to provide the live stream configured with the virtual auditorium to the audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.
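By way of illustration only, the three units of the apparatus 900 may be pictured as a receive/configure/provide pipeline, as in the following sketch; the push_to_viewers callback stands in for whatever streaming backend actually delivers the composed live stream to audience clients, and the payload format is an assumption made for illustration:

```python
from typing import Callable, Iterable, List


class LiveRoomDisplayController:
    """Schematic counterpart of the receiving, configuration, and providing units."""

    def __init__(self, push_to_viewers: Callable[[dict], None]) -> None:
        self.push_to_viewers = push_to_viewers
        self.seat_count = 0
        self.auditorium: List[str] = []

    def receive_config(self, config: dict) -> None:
        # Receiving unit 910: accept configuration set by the anchor user.
        self.seat_count = config.get("seat_count", 8)

    def configure_auditorium(self, audience_avatars: List[str]) -> None:
        # Configuration unit 920: build the virtual auditorium from audience avatars.
        self.auditorium = audience_avatars[: self.seat_count]

    def provide_stream(self, live_frames: Iterable[bytes]) -> None:
        # Providing unit 930: attach the auditorium to each frame of the live stream.
        for frame in live_frames:
            self.push_to_viewers({"frame": frame, "auditorium": self.auditorium})
```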
It should be appreciated that the respective units 910-930 of the apparatus 900 shown in fig. 9 may correspond to the respective steps S210-S230, S610-S630 and S710-S730 in the methods 200, 600 and 700 described with reference to fig. 2, 6 and 7. Thus, the operations, features and advantages described above with respect to methods 200, 600 and 700 are equally applicable to apparatus 900 and the units comprised thereof. For brevity, certain operations, features and advantages are not described in detail herein.
Fig. 10 illustrates a block diagram of a live room display device 1000 according to an embodiment of the present disclosure. As shown in fig. 10, the live room display device 1000 may include: a detection unit 1010 configured to detect a setting of configuration information associated with the virtual auditorium by an anchor user; a configuration unit 1020 configured to configure, at the anchor client, the virtual auditorium of the live room based on the configuration information, the virtual auditorium including an avatar of an audience user, in response to detecting the setting of the configuration information by the anchor user; and a display unit 1030 configured to cause the anchor client to display a live room scene containing the virtual auditorium in response to detecting the selection of the function of the virtual auditorium by the anchor user.
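By way of illustration only, the device 1000 may follow a detect/configure/display flow on the anchor client, as in the following sketch; the render callback stands in for the anchor client's actual rendering path, and the event shapes are assumptions made for illustration:

```python
from typing import Callable, List


class LiveRoomDisplayClient:
    """Schematic counterpart of the detection, configuration, and display units."""

    def __init__(self, render: Callable[[List[str]], None]) -> None:
        self.render = render
        self.config: dict = {}
        self.auditorium: List[str] = []

    def on_config_set(self, config: dict) -> None:
        # Detection unit 1010 / configuration unit 1020: react to the anchor
        # user's configuration and build the local virtual auditorium.
        self.config = config
        self.auditorium = config.get("avatars", [])[: config.get("seat_count", 8)]

    def on_auditorium_enabled(self) -> None:
        # Display unit 1030: show the live room scene containing the auditorium
        # once the anchor user selects the virtual auditorium function.
        self.render(self.auditorium)
```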
It should be appreciated that the various units 1010-1030 of the device 1000 shown in fig. 10 may correspond to the various steps S810-S830 in the method 800 described with reference to fig. 8. Thus, the operations, features, and advantages described above with respect to method 800 are equally applicable to the device 1000 and the units comprised therein. For brevity, certain operations, features, and advantages are not described in detail herein.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various units described above with respect to figs. 9 and 10 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the units may be implemented as computer program code/instructions configured to be executed by one or more processors and stored in a computer-readable storage medium. Alternatively, these units may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the receiving unit 910, the configuration unit 920, and the providing unit 930, or one or more of the detection unit 1010, the configuration unit 1020, and the display unit 1030, may be implemented together in a system on chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to another aspect of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and at least one memory communicatively coupled to the at least one processor; wherein the at least one memory stores a computer program which, when executed by the at least one processor, implements the live room display control method or live room display method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned live room display control method or live room display method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned live room display control method or live room display method.
Referring to fig. 11, a block diagram of an electronic device 1100, which may be a server or a client of the present disclosure and which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices may be different types of computer devices, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 1100 may include at least one processor 1101, a working memory 1102, an input unit 1104, a display unit 1105, a speaker 1106, a storage unit 1107, a communication unit 1108, and other output units 1109 that are capable of communicating with each other through a system bus 1103.
The processor 1101 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. The processor 1101 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor 1101 may be configured to obtain and execute computer readable instructions stored in the working memory 1102, the storage unit 1107, or other computer readable media, such as program code of the operating system 1102a, program code of the application program 1102b, and the like.
The working memory 1102 and the storage unit 1107 are examples of computer-readable storage media for storing instructions that are executed by the processor 1101 to implement the various functions described previously. The working memory 1102 may include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, the storage unit 1107 may include hard disk drives, solid state drives, removable media (including external and removable drives), memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and the like. The working memory 1102 and the storage unit 1107 may both be referred to herein collectively as memory or computer-readable storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processor 1101 as a particular machine configured to implement the operations and functions described in the examples herein.
The input unit 1104 may be any type of device capable of inputting information to the electronic device 1100; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output units may be any type of device capable of presenting information and may include, but are not limited to, the display unit 1105, the speaker 1106, and the other output units 1109, which may include, but are not limited to, a video/audio output terminal, a vibrator, and/or a printer. The communication unit 1108 allows the electronic device 1100 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, 802.11 devices, Wi-Fi devices, WiMAX devices, cellular communication devices, and/or the like.
The application program 1102b in the working memory 1102 may be loaded to perform the various methods and processes described above, such as steps S110-S140 in FIG. 2, steps S610-S650 in FIG. 6, steps S710-S750 in FIG. 7, and steps S810-S830 in FIG. 8. For example, in some embodiments, the methods 200, 600, 700, and 800 described above may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1107. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the storage unit 1107 and/or the communication unit 1108. One or more of the steps of the methods 200, 600, 700, and 800 described above may be performed when the computer program is loaded and executed by the processor 1101. Alternatively, in other embodiments, the processor 1101 may be configured to perform the methods 200, 600, 700, and 800 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (27)

1. A live broadcast room display control method, applied to a server, comprising:
receiving configuration information associated with a virtual audience, the configuration information being set by an anchor user via an anchor client;
configuring a virtual auditorium of the live room based on the configuration information, the virtual auditorium comprising an avatar of an audience user; and
providing a live stream configured with the virtual auditorium to an audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.
2. The method of claim 1, wherein configuring the virtual auditorium of the live room based on the configuration information comprises:
for each avatar in the virtual auditorium, acquiring position information and/or image information of the avatar, wherein the position information indicates the display position of the avatar in the virtual auditorium, and the image information indicates the display image and the display size of the avatar in the virtual auditorium; and
and setting the display position and/or the display image based on the configuration information.
3. The method of claim 1, further comprising:
receiving one or more inputs by the anchor user to the avatar in the virtual audience via the anchor client; and
updating the virtual audience in response to receiving the one or more inputs.
4. The method of claim 3, wherein updating the virtual audience comprises one or more of:
changing a display size of one or more of the avatars;
changing a display position of one or more of the avatars;
changing an animation effect corresponding to one or more of the avatars; and
changing a display avatar of one or more of the avatars.
5. The method of any of claims 1-4, wherein the configuration information includes a number of virtual audience seats, wherein the avatar corresponds to a selected audience user entering the live room, and wherein configuring the virtual auditorium of the live room based on the configuration information comprises:
sorting the avatars corresponding to the selected audience users according to a first sorting rule to obtain a first ranking result; and
configuring the virtual auditorium based on the number of virtual audience seats and the first ranking result.
6. The method of any of claims 1-4, wherein the configuration information includes a number of virtual audience seats, wherein one or more first ones of the avatars correspond to selected audience users entering the live room and one or more second ones of the avatars correspond to random audience users entering the live room, and wherein configuring the virtual auditorium of the live room based on the configuration information comprises:
sorting the one or more first avatars according to a first sorting rule to obtain a first ranking result;
sorting the one or more second avatars according to a second sorting rule different from the first sorting rule to obtain a second ranking result; and
configuring the virtual auditorium based on the number of virtual audience seats, the first ranking result, and the second ranking result.
7. The method of claim 6, wherein the one or more first avatars are configured to be positioned before the one or more second avatars in the virtual audience.
8. The method of claim 6 or 7, wherein configuring the virtual auditorium of the live room based on the configuration information further comprises:
determining whether there is a duplication between the selected audience user corresponding to the one or more first avatars and the random audience user corresponding to the one or more second avatars; and
in response to determining that there is a duplication between the selected audience user corresponding to the one or more first avatars and the random audience user corresponding to the one or more second avatars, removing a second avatar corresponding to the duplicated random audience user from the second ranking result.
9. The method of any of claims 5-8, wherein configuring the virtual auditorium of the live room based on the configuration information further comprises one or more of:
updating the first ranking result according to a first preset rule; and
updating the second ranking result according to a second preset rule different from the first preset rule.
10. The method of any of claims 1-9, further comprising:
for each avatar in the virtual audience, determining the real-time state of the audience user corresponding to the avatar in the live room; and
configuring a corresponding animation effect for the avatar and/or the anchor avatar according to the real-time state of the audience user corresponding to the avatar in the live room.
11. The method of any of claims 1-10, further comprising:
for each audience user entering the live room, acquiring level information of the audience user;
determining whether the level information of the audience user meets a preset level threshold; and
in response to determining that the level information of the audience user meets the preset level threshold, configuring an entrance special effect corresponding to the level information for the audience user,
wherein the entrance special effect is provided to the audience client as part of the live stream such that the live room scene displayed at the audience client is updated.
12. The method of any of claims 1-11, further comprising:
in response to the configuration information indicating that the virtual audience bullet screen is enabled, receiving bullet screen information sent by the audience user corresponding to the avatar in the virtual audience;
configuring a corresponding display layer priority for each piece of the bullet screen information based on the order in which the bullet screen information is received, wherein bullet screen information received later has a higher display layer priority than bullet screen information received earlier; and
configuring the display order of the bullet screen information based on the display layer priorities.
13. The method of any of claims 1-12, wherein the configuration information includes one or more of: the arrangement direction of the virtual auditorium, the number of virtual auditorium seats, whether the virtual auditorium bullet screen is enabled, the display duration of the virtual auditorium bullet screen, the opacity of the virtual auditorium components, and the text displayed for the virtual auditorium.
14. A live room display method, applied to an anchor client, comprising:
detecting a setting of configuration information associated with a virtual audience by an anchor user;
in response to detecting the setting of the configuration information by the anchor user, configuring, at the anchor client, a virtual audience of the live room based on the configuration information, the virtual audience comprising an avatar of an audience user; and
in response to detecting selection of the function of the virtual auditorium by the anchor user, causing the anchor client to display a live room scene containing the virtual auditorium.
15. The method of claim 14, further comprising:
detecting one or more inputs by the anchor user to the avatar in the virtual audience; and
in response to detecting the one or more inputs, causing the anchor client to display the updated virtual audience.
16. The method of claim 15, wherein detecting one or more inputs by the anchor user to the avatar in the virtual audience comprises:
detecting the one or more inputs of the anchor user via one or more input devices of the anchor client, or
The one or more inputs of the anchor user are detected based on one or more images captured via a camera of the anchor client.
17. The method of claim 15, wherein displaying the updated virtual auditorium comprises one or more of:
displaying the virtual auditorium in which a display size of one or more of the avatars is changed;
displaying the virtual auditorium in which a display position of one or more of the avatars is changed;
displaying the virtual auditorium in which an animation effect corresponding to one or more of the avatars is changed; and
displaying the virtual auditorium in which a display image of one or more of the avatars is changed.
18. The method of any of claims 14-17, wherein the configuration information includes a number of virtual audience seats, wherein the avatar corresponds to a selected audience user entering the live room, and wherein causing the anchor client to display a live room scene containing the virtual auditorium comprises:
causing the anchor client to display the virtual auditorium based on the number of virtual audience seats and a first ranking result, the first ranking result indicating an order of the avatars corresponding to the selected audience users.
19. The method of any of claims 14-17, wherein the configuration information includes a number of virtual audience seats, wherein one or more first ones of the avatars correspond to selected audience users entering the live room and one or more second ones of the avatars correspond to random audience users entering the live room, and wherein causing the anchor client to display a live room scene containing the virtual auditorium comprises:
causing the anchor client to display the virtual auditorium based on the number of virtual audience seats, a first ranking result indicating an order of the one or more first avatars, and a second ranking result indicating an order of the one or more second avatars.
20. The method of claim 19, wherein the one or more first avatars are displayed before the one or more second avatars in the virtual auditorium.
21. The method of any of claims 14-20, further comprising:
for each avatar in the virtual audience, determining the real-time state of the audience user corresponding to the avatar in the live room; and
causing the anchor client to display a corresponding animation effect for the avatar according to the real-time state of the audience user corresponding to the avatar in the live room.
22. The method of any of claims 14-21, wherein the configuration information includes one or more of: the arrangement direction of the virtual auditorium, the number of virtual auditorium seats, whether the virtual auditorium bullet screen is enabled, the display duration of the virtual auditorium bullet screen, the opacity of the virtual auditorium components, and the text displayed for the virtual auditorium.
23. A live room display control device, applied to a server, comprising:
a receiving unit configured to receive configuration information associated with a virtual audience, the configuration information being set by an anchor user via an anchor client;
a configuration unit configured to configure a virtual auditorium of the live room based on the configuration information, the virtual auditorium including an avatar of an audience user; and
a providing unit configured to provide a live stream configured with the virtual auditorium to an audience client such that the live room is displayed at the audience client as a live room scene containing the virtual auditorium.
24. A live room display device, applied to an anchor client, comprising:
a detection unit configured to detect a setting of configuration information associated with a virtual audience by an anchor user;
a configuration unit configured to configure, at the anchor client, a virtual audience of the live room based on the configuration information, the virtual audience including an avatar of an audience user, in response to detecting a setting of the configuration information by the anchor user; and
and a display unit configured to cause the anchor client to display a live room scene containing the virtual auditorium in response to detecting a selection of a function of the virtual auditorium by the anchor user.
25. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the at least one processor,
wherein the at least one memory stores a computer program that, when executed by the at least one processor, implements the method of any of claims 1-22.
26. A non-transitory computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method of any of claims 1-22.
27. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-22.
CN202210775134.5A 2022-07-01 2022-07-01 Display control method and device, display method and device, equipment and medium Pending CN117376592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210775134.5A CN117376592A (en) 2022-07-01 2022-07-01 Display control method and device, display method and device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117376592A 2024-01-09

Family

ID=89391626

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination