CN113032542B - Live broadcast data processing method, device, equipment and readable storage medium


Info

Publication number
CN113032542B
CN113032542B (application CN202110390618.3A)
Authority
CN
China
Prior art keywords
user
anchor
target
users
text
Prior art date
Legal status
Active
Application number
CN202110390618.3A
Other languages
Chinese (zh)
Other versions
CN113032542A (en)
Inventor
张艳军
武斌
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110390618.3A
Publication of CN113032542A
Application granted
Publication of CN113032542B
Legal status: Active

Classifications

    • G06F16/3329: Natural language query formulation or dialogue systems (information retrieval; querying; query formulation)
    • G06F16/3343: Query execution using phonetics (information retrieval; querying; query processing)
    • G06F16/338: Presentation of query results (information retrieval; querying)
    • G06F40/194: Calculation of difference between files (handling natural language data; text processing)

Abstract

The embodiment of the application discloses a live broadcast data processing method, apparatus, device and readable storage medium. The method comprises the following steps: a first terminal displays a question text in response to an interaction request triggered by a first participating user on an interaction control in a live voice virtual room; in response to an answer input operation performed by the first participating user on the question text, a first answer text provided by the first participating user is acquired; when the first answer text matches the key label of a target anchor user, the anchor identifier of the target anchor user and the participating user identifier of the first participating user are jointly displayed in the team area corresponding to the target anchor user in the live voice virtual room. With the method and apparatus, the efficiency of matching audience users with anchor users can be improved, frequent data requests can be avoided, and data traffic can be saved.

Description

Live broadcast data processing method, device, equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for processing live broadcast data.
Background
With the development of computer and network technologies, live network streaming has become widely popular: a user can log in to a live-broadcast application, enter a live room of interest, and watch the anchor's live program.
Currently, in live-broadcast applications, a viewer who wants to find an anchor of interest has to enter live rooms one after another and watch each for a while before finding a favorite anchor and staying there. While searching for an anchor of interest, the viewer continuously issues data requests, which places considerable traffic pressure on the server; at the same time, the search costs the viewer a great deal of time and is inefficient.
Disclosure of Invention
The embodiment of the application provides a live broadcast data processing method, apparatus, device and readable storage medium, which can improve the efficiency of matching audience users with anchor users, avoid frequent data requests, and save data traffic.
In one aspect, an embodiment of the present application provides a live broadcast data processing method, including:
a first terminal displays a question text in response to an interaction request triggered by a first participating user on an interaction control in a live voice virtual room; the question text is associated with the key labels of N anchor users in the live voice virtual room; N is a positive integer;
in response to an answer input operation performed by the first participating user on the question text, acquiring a first answer text provided by the first participating user;
when the first answer text matches the key label of a target anchor user, jointly displaying the anchor identifier of the target anchor user and the participating user identifier of the first participating user in the team area corresponding to the target anchor user in the live voice virtual room; the N anchor users include the target anchor user.
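For illustration only, the following is a minimal Python sketch of the flow described above (question display, answer collection, key-label matching, joint display). It is not part of the claimed embodiment; all class, function and field names are invented, and the substring-based matching is only a placeholder for whatever matching the embodiment uses.

```python
# Hypothetical sketch of the claimed flow; names and data structures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AnchorUser:
    anchor_id: str
    key_labels: set[str]        # e.g. {"likes games", "photography"}

@dataclass
class TeamArea:
    anchor_id: str
    participant_ids: list[str] = field(default_factory=list)

def build_question_text(anchors: list[AnchorUser]) -> str:
    # Step 1: the question text is associated with the anchors' key labels.
    labels = {label for anchor in anchors for label in anchor.key_labels}
    return "Which of these do you enjoy: " + ", ".join(sorted(labels)) + "?"

def match_answer(first_answer_text: str, anchors: list[AnchorUser]) -> AnchorUser | None:
    # Step 3: pick the anchor whose key labels best overlap the first answer text.
    best, best_score = None, 0
    for anchor in anchors:
        score = sum(1 for label in anchor.key_labels if label in first_answer_text)
        if score > best_score:
            best, best_score = anchor, score
    return best

def display_jointly(team_areas: dict[str, TeamArea], anchor: AnchorUser, participant_id: str) -> None:
    # Step 4: show the participating user identifier next to the anchor identifier
    # in the team area corresponding to the matched (target) anchor user.
    team_areas[anchor.anchor_id].participant_ids.append(participant_id)
```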
In one aspect, an embodiment of the present application provides a live broadcast data processing apparatus, including:
the text display module is used for displaying a question text in response to an interaction request triggered by a first participating user on an interaction control in a live voice virtual room; the question text is associated with the key labels of N anchor users in the live voice virtual room; N is a positive integer;
the text acquisition module is used for acquiring a first answer text provided by the first participating user in response to an answer input operation performed by the first participating user on the question text;
the identifier display module is used for jointly displaying the anchor identifier of the target anchor user and the participating user identifier of the first participating user in the team area corresponding to the target anchor user in the live voice virtual room when the first answer text matches the key label of the target anchor user; the N anchor users include the target anchor user.
In one embodiment, the question text includes configuration answer texts, and the answer input operation includes a text selection operation;
the text acquisition module comprises:
an operation response unit, used for obtaining, in response to a text selection operation performed by the target user on the configuration answer texts, the target configuration answer text corresponding to the text selection operation;
and a text determining unit, used for determining the target configuration answer text as the first answer text provided by the target user.
In one embodiment, the identifier display module comprises:
an area acquisition unit, used for acquiring a second identifier display area adjacent to a first identifier display area in the team area corresponding to the target anchor user; the anchor identifier of the target anchor user is displayed in the first identifier display area;
and an identifier display unit, used for acquiring the participating user identifier of the first participating user and displaying it in the second identifier display area.
In one embodiment, the live data processing apparatus further comprises:
an interface display module, used for displaying, in response to a trigger operation on the anchor identifier of the target anchor user in the team area corresponding to the target anchor user, an attribute information interface containing the user attribute information of the target anchor user; the attribute information interface comprises a user description information area;
and a label display module, used for displaying the key labels of the target anchor user in the user description information area.
In one embodiment, the live data processing apparatus further comprises:
the interaction control display module is used for displaying, during the lead election period, a first interaction control in a first interaction display area of the team area corresponding to the target anchor user, and a second interaction control in a second interaction display area of the team area corresponding to a remaining anchor user; the remaining anchor user is an anchor user, other than the target anchor user, among the matched anchor users contained in the live voice virtual room; the first interaction control is used for a browsing user to interact with the target anchor user and the first participating user; the second interaction control is used for the browsing user to interact with the remaining anchor user and a remaining participating user; the participating user identifier of the remaining participating user and the anchor identifier of the remaining anchor user are jointly displayed in the team area corresponding to the remaining anchor user; the remaining participating user is a participating user, other than the first participating user, among the participating users contained in the live voice virtual room; a browsing user is a user watching the live voice virtual room;
the lead information display module is used for displaying a lead display area in the live voice virtual room when the system time reaches the maximum timestamp of the lead election period and the number of first interaction behaviors displayed for the first interaction control in the first interaction display area is greater than the number of second interaction behaviors displayed for the second interaction control in the second interaction display area;
the lead information display module is further used for displaying, in the lead display area, first user information of the first participating user and lead prompt information for the first participating user.
In one embodiment, the lead information display module includes:
the camp determining unit is used for determining the camp formed by the first participating user and the target anchor user as the lead camp;
the information display unit is used for acquiring an additional virtual resource rate for the lead camp and generating resource allocation prompt information according to the lead camp and the additional virtual resource rate; the resource allocation prompt information is used for prompting that the camp allocated the additional virtual resource rate is the lead camp; the additional virtual resource rate is used for determining the additional virtual resources allocated to the lead camp;
the information display unit is further used for displaying the first user information, the lead prompt information and the resource allocation prompt information in the lead display area.
In one embodiment, the live data processing apparatus further comprises:
the interaction data statistics module is used for counting, when the system time reaches the maximum timestamp of the lead election period, first interaction data generated by first interacting users for the first interaction control and second interaction data generated by second interacting users for the second interaction control; the first interaction data comprises interaction behaviors and first interacting user identification information; the second interaction data comprises interaction behaviors and second interacting user identification information; the browsing users include the first interacting users and the second interacting users;
the interaction data statistics module is further used for determining a first identification number of the first interacting user identification information and a second identification number of the second interacting user identification information;
the interaction data statistics module is further used for taking the first identification number as the number of first interaction behaviors and the second identification number as the number of second interaction behaviors;
the quantity display module is used for displaying the number of first interaction behaviors in the first interaction display area and the number of second interaction behaviors in the second interaction display area.
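Purely as an illustration of the counting rule above (the displayed behavior count equals the number of distinct interacting-user identifiers), a small Python sketch follows. The record fields and sample data are assumptions, not taken from the embodiment.

```python
# Illustrative only: each browsing user is counted once, so the interaction-behavior count
# equals the number of distinct user identifiers in the interaction records.
def count_interaction_behaviors(interaction_records: list[dict]) -> int:
    return len({record["user_id"] for record in interaction_records})

first_records = [{"user_id": "u1", "behavior": "cheer"}, {"user_id": "u2", "behavior": "cheer"}]
second_records = [{"user_id": "u3", "behavior": "cheer"}]

first_count = count_interaction_behaviors(first_records)    # shown in the first interaction display area
second_count = count_interaction_behaviors(second_records)  # shown in the second interaction display area
lead_camp_chosen = first_count > second_count                # condition for showing the lead display area
```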
In one embodiment, the live data processing apparatus further comprises:
the sending control display module is used for displaying, during the camp election period, a first resource sending control in a first resource display area of the team area corresponding to the target anchor user, and a second resource sending control in a second resource display area of the team area corresponding to the remaining anchor user; the first resource sending control is used for a browsing user to send virtual resources to the lead camp; the second resource sending control is used for the browsing user to send virtual resources to the remaining camp; the remaining camp is formed by the remaining anchor user and the remaining participating user;
and the camp display module is used for displaying an optimal camp display area in the live voice virtual room when the system time reaches the maximum timestamp of the camp election period and the first virtual resources displayed by the first resource sending control in the first resource display area are greater than the second virtual resources displayed by the second resource sending control in the second resource display area, and for displaying, in the optimal camp display area, the anchor identifier of the target anchor user and the participating user identifier of the first participating user.
In one embodiment, the live data processing apparatus further comprises:
the resource acquisition module is used for acquiring the initial virtual resources of the lead camp when the system time reaches the maximum timestamp of the camp election period; the initial virtual resources are the virtual resources sent by browsing users to the lead camp; the maximum timestamp of the camp election period is later than the maximum timestamp of the lead election period;
the resource acquisition module is further used for acquiring the second virtual resources of the remaining camp; the second virtual resources are the virtual resources sent by browsing users to the remaining camp;
the resource acquisition module is further used for determining the additional virtual resources of the lead camp according to the additional virtual resource rate and the initial virtual resources;
the resource acquisition module is further used for determining the first virtual resources of the lead camp according to the additional virtual resources and the initial virtual resources;
and the resource display module is used for displaying the first virtual resources in the first resource display area and the second virtual resources in the second resource display area.
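A minimal sketch of the resource computation above: the lead camp's first virtual resources are its initial virtual resources plus an additional share determined by the additional virtual resource rate. The 20% rate and the gift amounts below are invented example numbers, not values from the embodiment.

```python
# Assumed example: additional = initial * rate; first = initial + additional (lead camp only).
def first_virtual_resources(initial_resources: float, additional_rate: float) -> float:
    additional = initial_resources * additional_rate   # additional virtual resources
    return initial_resources + additional               # first virtual resources

lead_camp_total = first_virtual_resources(initial_resources=1000.0, additional_rate=0.2)  # 1200.0
remaining_camp_total = 1100.0   # second virtual resources: gifts sent to the remaining camp
optimal_camp = "lead camp" if lead_camp_total > remaining_camp_total else "remaining camp"
```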
In one embodiment, the N anchor users include an anchor user k_i;
the text display module comprises:
a request response unit, used for obtaining the key label of anchor user k_i in response to the interaction request triggered by the first participating user on the interaction control in the live voice virtual room;
a keyword acquisition unit, used for acquiring, from a configuration keyword set, the configuration keyword matching the key label of anchor user k_i as a target keyword;
a table acquisition unit, used for acquiring a text mapping table; the text mapping table comprises mapping relationships between configuration question texts and configuration keywords;
and a text display unit, used for acquiring, from the text mapping table, the configuration question text that has a mapping relationship with the target keyword, determining the acquired configuration question text as the question text, and displaying the question text.
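The following is a hypothetical sketch of the text-mapping-table lookup just described; the table entries are invented examples and the function name is an assumption, not part of the embodiment.

```python
# A configuration question text is selected only when it has a mapping relationship
# with one of the target keywords derived from the anchors' key labels.
TEXT_MAPPING_TABLE = {
    "photography": "Do you love the art of photography?",
    "games": "Do you play games in your spare time?",
}

def question_texts_for(target_keywords: list[str]) -> list[str]:
    return [TEXT_MAPPING_TABLE[kw] for kw in target_keywords if kw in TEXT_MAPPING_TABLE]

print(question_texts_for(["photography", "games"]))
```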
In one embodiment, the keyword acquisition unit includes:
a vector acquisition subunit, used for acquiring the label vector corresponding to the key label of anchor user k_i;
the vector acquisition subunit is further used for acquiring the word vector corresponding to each configuration keyword in the configuration keyword set, to obtain a word vector set;
a keyword determining subunit, used for determining the similarity between the label vector and each word vector in the word vector set, to obtain a similarity set;
the keyword determining subunit is further used for determining a similarity in the similarity set that is greater than or equal to a similarity threshold as a target similarity, and determining the configuration keyword corresponding to the target similarity as the target keyword.
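For illustration, a small Python sketch of the similarity comparison described above. How the label vector and word vectors are produced is assumed to happen elsewhere; only the threshold comparison is shown, and the 0.8 threshold is an invented value.

```python
# Keep every configuration keyword whose word-vector similarity to the key-label vector
# reaches the similarity threshold.
import math

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def target_keywords(label_vector: list[float],
                    keyword_vectors: dict[str, list[float]],
                    threshold: float = 0.8) -> list[str]:
    return [kw for kw, vec in keyword_vectors.items()
            if cosine(label_vector, vec) >= threshold]
```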
In one embodiment, the first answer text is associated with configuration keywords; N is a positive integer;
the live broadcast data processing device further includes:
the matching degree determining module is used for separately determining the matching degree between the configuration keywords and the key label of each of the N anchor users, to obtain N matching degrees;
the matching relationship determination module is used for obtaining the target anchor user from the N anchor users according to the N matching degrees;
the matching relationship determination module is further used for determining that the first answer text matches the key label of the target anchor user.
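As an illustration of computing the N matching degrees and then taking the anchor with the maximum matching degree (one of the strategies described below), here is a short sketch under assumed data shapes. The Jaccard overlap used as the matching degree is an assumption; the embodiment does not fix the metric.

```python
# anchors maps anchor_id -> key labels of that anchor user.
def matching_degree(answer_keywords: set[str], anchor_key_labels: set[str]) -> float:
    union = answer_keywords | anchor_key_labels
    return len(answer_keywords & anchor_key_labels) / len(union) if union else 0.0

def target_anchor_by_max(answer_keywords: set[str],
                         anchors: dict[str, set[str]]) -> str:
    return max(anchors, key=lambda a: matching_degree(answer_keywords, anchors[a]))
```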
In one embodiment, the matching relationship determination module includes:
the first user determining unit is used for acquiring the maximum matching degree from the N matching degrees;
the first user determining unit is further configured to determine the anchor user corresponding to the maximum matching degree as a target anchor user.
In one embodiment, the matching relationship determination module includes:
the second user determining unit is used for determining a matching degree among the N matching degrees that is greater than or equal to a matching degree threshold as a candidate matching degree, and determining the anchor user corresponding to the candidate matching degree as a candidate anchor user;
the second user determining unit is further used for acquiring candidate anchor user information corresponding to the candidate anchor user and displaying the candidate anchor user information;
the second user determining unit is further used for determining, in response to an anchor selection operation performed by the first participating user on the candidate anchor user information, the anchor user corresponding to the selected candidate anchor user information as the target anchor user.
In one embodiment, the matching relationship determination module includes:
the third user determining unit is used for determining a matching degree among the N matching degrees that is greater than or equal to a matching degree threshold as a candidate matching degree, and determining the anchor user corresponding to the candidate matching degree as a candidate anchor user;
the third user determining unit is further used for acquiring the number of following viewers and the media data activity value corresponding to the candidate anchor user, and determining the popularity value of the candidate anchor user according to the number of following viewers and the media data activity value;
the third user determining unit is further used for determining the candidate anchor user with the largest popularity value as the target anchor user.
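A sketch of choosing among candidate anchor users by popularity value follows. The weighted sum is an assumed formula; the embodiment only states that the popularity value is determined from the number of following viewers and the media data activity value.

```python
# Hypothetical popularity formula; the weights are assumptions.
def popularity_value(follower_count: int, activity_value: float,
                     w_followers: float = 0.6, w_activity: float = 0.4) -> float:
    return w_followers * follower_count + w_activity * activity_value

def target_anchor(candidates: dict[str, tuple[int, float]]) -> str:
    # candidates maps anchor_id -> (number of following viewers, media data activity value)
    return max(candidates, key=lambda a: popularity_value(*candidates[a]))
```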
In one embodiment, the live data processing apparatus further comprises:
the information sending module is used for sending first user information corresponding to the first participating user to the target anchor terminal corresponding to the target anchor user;
the information receiving module is used for receiving user confirmation information returned by the target anchor terminal based on the first user information; the user confirmation information comprises a target participating user selected by the target anchor user based on the first user information and second user information; the second user information is user information corresponding to a second participating user, sent by a second terminal to the target anchor terminal; a second answer text provided by the second participating user matches the key label of the target anchor user; the sending timestamp of the first user information and the sending timestamp of the second user information are within the same time range;
and the step execution module is used for executing the step of jointly displaying the anchor identifier of the target anchor user and the participating user identifier of the first participating user in the team area corresponding to the target anchor user in the live voice virtual room, when the target participating user included in the user confirmation information is the first participating user.
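Purely for illustration, the following sketch shows the anchor-side confirmation described above: several matched applicants whose user information was sent within the same time range are presented to the target anchor, who selects one. The 30-second window and all field names are assumptions, not part of the embodiment.

```python
# applicants: [{"user_id": "...", "sent_at": unix_timestamp}, ...]
def applicants_in_same_window(applicants: list[dict], window_seconds: int = 30) -> list[dict]:
    if not applicants:
        return []
    earliest = min(a["sent_at"] for a in applicants)
    return [a for a in applicants if a["sent_at"] - earliest <= window_seconds]

def user_confirmation(applicants: list[dict], chosen_user_id: str) -> dict:
    # The anchor's choice becomes the user confirmation information returned to the server;
    # the joint display is executed only if the chosen user is the first participating user.
    window = applicants_in_same_window(applicants)
    return {"target_participant": chosen_user_id,
            "candidates": [a["user_id"] for a in window]}
```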
In one aspect, a computer device is provided, including: a processor and a memory;
the memory stores a computer program that, when executed by the processor, causes the processor to perform the methods of embodiments of the present application.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, perform a method in an embodiment of the present application.
In one aspect of the present application, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in an aspect of the embodiments of the present application.
In the embodiment of the application, after an audience user (such as the first participating user) enters the live voice virtual room, the user can click the interaction control to apply to interact with an anchor user; the terminal corresponding to the first participating user (such as the first terminal) can, in response to the trigger operation, display to the first participating user a question text associated with the anchor users, and the first participating user answers the question text. After acquiring the answer text provided by the first participating user (such as the first answer text), the first terminal can match it against the key labels of the anchor users, and after a successful match, the anchor identifier of the successfully matched target anchor user and the participating user identifier of the first participating user can be jointly displayed in the team area of the target anchor user. It should be appreciated that the joint display of the first participating user's identifier with the target anchor user's identifier allows the first participating user to interact with the target anchor user. It can be seen that, during live broadcasting in the live voice virtual room, the key labels of the anchor users and the answer text of the audience user can be matched quickly. Because the key labels of an anchor user represent the anchor user's preferences and the answer text of an audience user represents the audience user's preferences, an anchor user who fits the audience user's preferences can be matched quickly based on the matching result; that is, the audience user only needs to answer questions to determine whether there is a matching anchor user, so redundant data requests can be avoided and data traffic can be saved. In summary, the matching efficiency of audience users and anchor users can be improved, frequent data requests can be avoided, and data traffic can be saved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a network architecture diagram provided in an embodiment of the present application;
FIGS. 2a-2b are schematic views of a scenario in which an anchor user applies to take the mic, provided in an embodiment of the present application;
FIGS. 3 a-3 d are schematic views of a scenario for audience user interaction according to embodiments of the present application;
fig. 4 is a flow chart of a live broadcast data processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a relationship among question text, configuration answer text, and configuration keywords provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of acquiring a question text according to an embodiment of the present application;
FIG. 7 is a flow chart of a system provided in an embodiment of the present application;
fig. 8 is a schematic view of a scenario for determining a lead camp according to an embodiment of the present application;
fig. 9 is a schematic diagram of a scenario for determining an optimal camp according to an embodiment of the present application;
fig. 10 is a schematic flow chart of an anchor user applying to take the mic according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of an audience user applying to take the mic according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a live broadcast data processing apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, fig. 1 is a network architecture diagram provided in an embodiment of the present application. As shown in fig. 1, the network architecture may include a service server 1000 and a user terminal cluster, which may include one or more user terminals, the number of which will not be limited here. As shown in fig. 1, the plurality of user terminals may include a user terminal 100a, a user terminal 100b, user terminals 100c, …, a user terminal 100n; as shown in fig. 1, the user terminals 100a, 100b, 100c, …, 100n may respectively make a network connection with the service server 1000, so that each user terminal may perform data interaction with the service server 1000 through the network connection.
It will be appreciated that each user terminal shown in fig. 1 may be provided with a target application which, when running in the user terminal, may interact with the service server 1000 shown in fig. 1, so that the service server 1000 may receive service data from each user terminal. The target application may include an application with functions of displaying text, image, audio and video data information, and may be any application capable of live broadcasting, for example a social application, a live-broadcast application, a short-video application, an online conference application, and the like.
In the embodiment of the application, one user terminal may be selected from the plurality of user terminals as a target user terminal. The user terminal may include a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a smart speaker, a smart watch, a smart vehicle-mounted device, or another smart terminal carrying multimedia data processing functions (e.g., a video data playing function, a music data playing function), but is not limited thereto. For example, the embodiment of the present application may use the user terminal 100a shown in fig. 1 as the target user terminal; the target user terminal may be integrated with the target application, and in this case the target user terminal may perform data interaction with the service server 1000 through the target application.
It should be understood that the user terminals in the user terminal cluster may be terminals corresponding to anchor users or to audience users, and the target user terminal may likewise correspond to an anchor user or an audience user. For example, the user terminal 100a may be the target user terminal and the terminal used by anchor user a, in which case the user terminal 100a may be referred to as anchor terminal a; anchor user a may request, through the target application in anchor terminal a, the creation of a virtual room, where a virtual room is a virtual network space created by simulating a real room. The virtual network space may allow multiple user accounts (e.g., accounts of anchor users and accounts of audience users) to be online simultaneously and to interact online in real time within the virtual network space. In one possible implementation, different room types of virtual rooms correspond to different real-time online interaction modes. For example, if the room type of the virtual room is a voice room, the corresponding real-time online interaction mode is voice communication; if the room type is a video room, the corresponding real-time online interaction mode is video communication. Taking a voice room as an example, anchor user a may request, through the target application in anchor terminal a, the creation of a live voice virtual room, and the live voice virtual room needs to support anchor users and audience users forming teams (forming camps); accordingly, anchor terminal a may send a room creation request to the service server 1000, and after receiving the room creation request the service server 1000 may create a live voice virtual room that supports anchor users and audience users forming teams.
Further, taking the user terminal 100b as the terminal used by anchor user b as an example, the user terminal 100b may be the target user terminal and may be referred to as anchor terminal B. Anchor user b may, through the target application installed in anchor terminal B, request to take a mic position (go on mic) in the live voice virtual room created by anchor user a; accordingly, anchor terminal B may send the mic request to the service server 1000. The service server 1000 may forward the mic request to the aforementioned anchor terminal a, and anchor user a may review the request; after the request is approved, anchor user b may take a mic position in the live voice virtual room (e.g., in a certain area of the live voice virtual room, the anchor identifier of anchor user b, such as anchor user b's avatar, may be displayed, and that area may serve as the team area of anchor user b; the team area also contains a spare mic position that an audience user may later take). A mic position is a position, i.e. a seat, preset in the live voice virtual room for an anchor user, and the live voice virtual room may include a display area for each mic position. Going on mic means that an anchor user applies to become an anchor in the live voice virtual room, that is, applies to broadcast live in that room; if the request succeeds, a mic position in the live voice virtual room corresponds to that anchor user, and the anchor identifier of the anchor user may be displayed in the display area corresponding to that mic position. Meanwhile, after successfully going on mic, the anchor user may broadcast live in the live voice virtual room and interact with audience users.
Further, taking the user terminal 100c as the terminal used by audience user c as an example, the user terminal 100c may be the target user terminal, which may be referred to as the first terminal, and audience user c may be referred to as the first participating user. The first participating user may, through the target application installed in the first terminal, request to take the spare mic position in the team area of a certain anchor user in the live voice virtual room and interact with that anchor user by voice; the first participating user's mic request may be referred to as an interaction request. Accordingly, the first terminal may send the interaction request to the service server 1000. The service server 1000 may forward the interaction request to anchor terminal a, and anchor user a may review the interaction request. After the review is passed, the service server 1000 may acquire the key labels of the anchor users in the live voice virtual room, acquire a question text according to these key labels, and send the question text to the first terminal; the first terminal may display the question text and acquire the answer text provided by the first participating user for the question text, which may be referred to as the first answer text. Further, the first terminal may send the first answer text to the service server 1000, and the service server 1000 may match the first answer text against the key labels of the anchor users in the live voice virtual room to determine whether any anchor user is successfully matched. Taking the case where the key label of anchor user b is successfully matched with the first answer text, anchor user b may serve as the target anchor user; after the successful match, the first terminal may display the participating user identifier of the first participating user (such as the first participating user's avatar) in the team area of anchor user b in the live voice virtual room, that is, the first participating user takes the spare mic position in the team area of anchor user b. It should be appreciated that, because the anchor identifier of anchor user b is already displayed in that team area, the anchor identifier of anchor user b and the participating user identifier of the first participating user are displayed jointly once the latter is displayed. It should also be understood that after the first participating user successfully takes the mic position, the first participating user and anchor user b form a camp and can interact by voice together, for example completing singing tasks, idiom-chain tasks or story-chain tasks together, which enriches the ways audience users and anchor users interact during live broadcasting and improves the audience users' sense of interaction and participation. Optionally, the camp formed by the first participating user and anchor user b may compete (Player Killing, PK) against other camps (camps formed by other anchor users and other audience users).
It is understood that the method provided by the embodiments of the present application may be performed by a computer device, which includes but is not limited to a user terminal or a service server. The service server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
The user terminal and the service server may be directly or indirectly connected through a wired or wireless communication manner, which is not limited herein.
For ease of understanding, please refer to fig. 2a-2b, which are schematic views of a scenario in which an anchor user applies to take the mic. The service server shown in fig. 2a-2b may be the service server 1000 shown in fig. 1; the anchor terminal A shown in fig. 2a-2b may be any user terminal selected from the user terminal cluster in the embodiment corresponding to fig. 1, for example the user terminal 100b; the anchor terminal B shown in fig. 2a-2b may likewise be any user terminal selected from the user terminal cluster in the embodiment corresponding to fig. 1, for example the user terminal 100a.
Taking the live voice virtual room created by anchor user b (i.e., the anchor b shown in fig. 2a-2b) as an example, as shown in fig. 2a, the live voice virtual room may include a team area 1, a team area 2, a team area 3 and a team area 4, where team area 1 may correspond to anchor 1, team area 2 to anchor 2, team area 3 to anchor 3, and team area 4 to anchor 4. Each team area may include two identifier display areas: one identifier display area is used for displaying the anchor identifier of an anchor and is that anchor's mic position; the other identifier display area may be used for displaying the audience user identifier of an audience user who has successfully taken the mic; such an audience user becomes a participating user, the audience user identifier is then the participating user identifier, and this identifier display area is the participating user's mic position.
Taking team area 2 as an example, as shown in fig. 2a, team area 2 includes an identifier display area 20a and an identifier display area 20b. The identifier display area 20a already displays the avatar of anchor 2 (i.e., the anchor identifier), which shows that anchor 2 has successfully gone on mic and that team area 2 is the team area corresponding to anchor 2; the identifier display area 20b in team area 2 does not display any participating user identifier, so the corresponding participating user mic position is a spare mic position, which shows that anchor 2 is an anchor to be matched and a subsequent audience user has the opportunity to take the spare mic position corresponding to anchor 2. Taking team area 4 as an example, team area 4 includes an identifier display area 20c and an identifier display area 20d; the identifier display area 20d does not display any anchor avatar, so the corresponding anchor mic position is a spare mic position and a subsequent anchor user has the opportunity to take the spare anchor mic position of team area 4. Similarly, after some anchor successfully takes the mic in team area 4, that anchor becomes an anchor to be matched; the identifier display area 20c does not display any participating user identifier, so the corresponding participating user mic position is a spare mic position, and a subsequent audience user has the opportunity to take the spare mic position corresponding to the anchor who has gone on mic in team area 4.
It should be appreciated that, as shown in fig. 2a, since team area 4 still has a spare anchor mic position, anchor a may click a mic-request control (e.g., the "apply to take the mic" control shown in fig. 2a) to request to go on mic; the mic-request control may be referred to as an interaction control. Accordingly, anchor terminal A may display a label selection popup window in response to the trigger operation; the label selection popup window may include one or more key labels as shown in fig. 2a (e.g. "voice control", "like game", "travel man", "trekking fan", "warm man", "general mastery of the street", "natural main angle", "home", "super meeting photo", "life energy hand"), and anchor a may select any one or more of the key labels included in the label selection popup window as his key labels.
As shown in fig. 2a, the key labels selected by anchor a (i.e. anchor user a) are "like games", "trendy street clap fan", "natural principal angle" and "super meeting photo"; anchor a may then click the confirm control. Accordingly, anchor terminal A may generate a mic request in response to the click operation and send the mic request to the service server. It should be understood that anchor terminal A may display a review prompt popup window in the live voice virtual room; the review prompt popup window may include the text content "application submitted, waiting for review" to prompt anchor a that the application has been initiated and is awaiting review. Further, the service server may send the mic request to the anchor terminal B that created the live voice virtual room.
Further, as shown in fig. 2b, anchor terminal B may display a review popup window in the live voice virtual room; the review popup window may include the text content "anchor a applies to take the mic; agree?". The review popup window may also include an agree control and a reject control, and anchor b may approve anchor a's mic request by clicking the agree control, or reject it by clicking the reject control. As shown in fig. 2b, after anchor b clicks the agree control, anchor terminal B may respond to the click operation and return anchor b's review result (i.e. the result of approving the mic request) to the service server. The service server may return the review result to anchor terminal A; after receiving the review result, anchor terminal A may display the anchor identifier of anchor a (e.g., anchor a's avatar) in the identifier display area 20d of team area 4. It should be appreciated that displaying the anchor identifier of anchor a indicates that anchor a has successfully gone on mic in the live voice virtual room.
Further, when the condition for audience users to go on mic is met, an audience user may apply to go on mic in the live voice virtual room. The condition for audience users to go on mic may be a preset condition, for example a preset time at which audience users may go on mic; the preset condition may also be that anchors have taken all of the anchor mic positions in the live voice virtual room, or a proportion of them exceeding a threshold (e.g. 2/3). This is not limited in the present application.
For ease of understanding, please refer to fig. 3a-3d together; fig. 3a-3d are schematic views of a scenario in which an audience user applies for interaction according to an embodiment of the present application. The first terminal C shown in fig. 3a-3d may be any user terminal selected from the user terminal cluster in the embodiment corresponding to fig. 1, for example the user terminal 100d.
It should be understood that, as shown in fig. 3a, because team area 2 and team area 4 still have spare participating user mic positions (the identifier display area 20b and the identifier display area 20c), the participating user c may click the mic-request control (the "apply to take the mic" control shown in fig. 3a) to request to go on mic; the mic-request control may be referred to as an interaction control. Accordingly, the user terminal corresponding to participating user c (referred to as the first terminal C) may send participating user c's mic request (referred to as a first interaction request) to the service server. It should be understood that, after sending the request, the first terminal C may display a review prompt popup window in the live voice virtual room; the review prompt popup window may include the text content "application submitted, waiting for review" to prompt participating user c that the application has been initiated and is awaiting review. Further, the service server may send the first interaction request to the anchor terminal B that created the live voice virtual room.
Further, as shown in fig. 3b, anchor terminal B may display a review popup window in the live voice virtual room; the review popup window may include the text content "participating user c applies to take the mic; agree?". The review popup window may also include an agree control and a reject control, and anchor b may approve participating user c's mic request by clicking the agree control, or reject it by clicking the reject control. As shown in fig. 3b, after anchor b clicks the agree control, anchor terminal B may respond to the click operation and return anchor b's review result (i.e. the result of approving the mic request) to the service server. After receiving the review result, the service server may acquire the key labels of anchor 2 (an anchor not yet matched with a participating user) and anchor b (also not yet matched with a participating user), and acquire the question texts associated with these key labels; as shown in fig. 3b, the question texts may include question text 1, question text 2 and question text 3. Further, the service server may return the review result and the question texts (including question text 1, question text 2 and question text 3) to the first terminal C. For a specific implementation in which the service server acquires the question texts associated with these key labels, reference may be made to the description in the embodiment corresponding to fig. 4.
Further, as shown in fig. 3c, the first terminal C may display a review-passed prompt popup window based on the review result; the popup window may include the text content "You have passed the review and entered the matching stage; please answer the following questions". The popup window may also include a continue control and a cancel control: participating user c may answer the questions through the continue control, or refuse to answer (i.e. cancel the mic request) through the cancel control. As shown in fig. 3c, after participating user c clicks the continue control, the first terminal C may respond to the click operation and display question 1, the question text 1 corresponding to question 1, and the answer options corresponding to question text 1, where question text 1 is "Do you easily develop fragile, unconfident emotions?"; the answer options corresponding to question text 1 include option A and option B, where the answer text of option A is "often have fragile emotions" and the answer text of option B is "rarely have fragile emotions". After participating user c selects option A, "often have fragile emotions", the next control may be clicked. Accordingly, the first terminal C may respond to the click operation and display question 2, the question text 2 corresponding to question 2, and the answer options corresponding to question text 2, where question text 2 is "Do you love the art of photography?"; the answer options corresponding to question text 2 include option A and option B, where the answer text of option A is "really like photography" and the answer text of option B is "not very interested in photography". After participating user c selects option A, "really like photography", the next control may be clicked. Accordingly, the first terminal C may respond to the click operation and display question 3, the question text 3 corresponding to question 3, and the answer options corresponding to question text 3, where question text 3 is "Do you play games in your spare time?"; the answer options corresponding to question text 3 include option A and option B, where the answer text of option A is "will play games" and the answer text of option B is "do not play games". After participating user c selects option A, "will play games", the submit control may be clicked.
Further, the first terminal C may return to the service server the answer text "often have fragile emotions" corresponding to question text 1, the answer text "really like photography" corresponding to question text 2, and the answer text "will play games" corresponding to question text 3; these answer texts may be referred to as the first answer text. Meanwhile, it should be understood that, as shown in fig. 3d, the first terminal C may display a matching prompt popup window in the live voice virtual room; the matching prompt popup window may include the text content "the system is matching a tablemate for you", where a tablemate is an anchor in the same team (the same camp), and the text content may be used to prompt that an anchor to share a camp with participating user c is currently being matched.
Further, as shown in fig. 3d, the service server may determine matching degree 1 according to the answer text "often have fragile emotions" corresponding to question text 1, the answer text "really like photography" corresponding to question text 2, the answer text "will play games" corresponding to question text 3, and the key labels of anchor 2; it may also determine matching degree 2 according to the same three answer texts and the key labels of anchor b. According to matching degree 1 and matching degree 2, a target anchor user can be determined between anchor 2 and anchor b, and it can be determined that the first answer text of participating user c matches the key label of that target anchor user. Taking the target anchor user being anchor b as an example, the service server may return the matching result (the target anchor user is anchor b, and participating user c is matched with anchor b) to the first terminal C.
Further, after receiving the matching result, the first terminal C may display the participating user identifier of participating user c (e.g., participating user c's avatar) in the identifier display area 20c of team area 4 corresponding to anchor b. It should be understood that once the participating user identifier of participating user c is displayed in the identifier display area 20c, the anchor identifier of anchor b and the participating user identifier of participating user c are jointly displayed in team area 4, which indicates that participating user c has been successfully matched with anchor b and that the two have formed a camp. Participating user c may then play games or perform tasks together with anchor b; for example, the camp formed by participating user c and anchor b may play a game PK against other camps in the live voice virtual room (e.g., the camp formed by anchor 1 and the participating user in team area 1), the winning camp may be determined by, for example, an idiom-chain game, and the winners may obtain corresponding rewards, for example a level increase for the participating user or a certain virtual asset for the anchor user (e.g., 100 yuan or 2500 live experience points).
It should be understood that, in the interaction mode where a participating user forms a camp with an anchor user, the participating user is converted from the identity of an audience user to an identity close to that of an anchor, so that the audience user has the opportunity to take part in games or tasks during the broadcast in an anchor-like role, which better improves the user's sense of participation and interaction. Meanwhile, the system can match anchor users and audience users with tablemates (users in the same camp) that fit their preferences, and this matching manner has more serendipity and interest, so it can make the live voice broadcast more entertaining. During the live voice broadcast, instead of interacting with the anchor only through plain text chat, an audience user can form a camp with an anchor user and interact with the anchor by voice to play games, perform tasks and the like, which can improve the audience's interactivity with and participation in the live broadcast.
Further, referring to fig. 4, fig. 4 is a flow chart of a live broadcast data processing method according to an embodiment of the present application. The method may be performed by a user terminal (e.g., the user terminal shown in fig. 1 and described above) or a service server (e.g., the service server 1000 shown in fig. 1 and described above), or may be performed by both the user terminal and the service server (e.g., the service server 1000 in the embodiment described above and corresponding to fig. 1). For easy understanding, this embodiment will be described by taking the method performed by the user terminal and the service server together as an example. The live broadcast data processing method at least comprises the following steps of S101-S103:
step S101, a first terminal responds to an interaction request of a first participant user in a voice live virtual room aiming at an interaction control, and shows a question text; the questioning text is associated with key labels of N anchor users in the voice live virtual room; n is a positive integer.
In the application, in a target application (e.g., a live application) capable of live broadcasting, a user interface of a host user may include a virtual room creation control, any host user may initiate a room creation request by clicking the virtual room creation control, and accordingly, a host terminal corresponding to the host user may create a virtual room for an account corresponding to the host user (an account in the target application) based on the room creation request. The user interface may be a display interface displayed by the target application for the anchor user.
Optionally, before creating a virtual room for the account corresponding to the anchor user, the anchor terminal needs to select a room type corresponding to the created virtual room by using the account of the anchor user. In the present application, after receiving a creation request for a virtual room, the anchor terminal may display a plurality of room type selections in the user interface according to the creation request. Illustratively, the room type may include a voice type, a video type, and so on. The room type may be used to specify the form of interaction between the various accounts joining the virtual room (including the account corresponding to the anchor user and the account corresponding to the audience user), i.e., different room types correspond to different forms of interaction. For example, for a voice-type virtual room, the anchor user may interact with the spectator user in the form of voice interactions (e.g., the spectator user enters text content from which the anchor user performs voice-interactive chat). The target application may create a corresponding virtual room for the anchor user's account based on the type of room selected by the user.
Taking the case that the anchor user initiating the room creation request is the anchor user m and the room type selected by the anchor user m is the voice type, the target application may create a live voice virtual room for the account of the anchor user m. It can be understood that the application may provide, in the live voice virtual room created by the anchor user m, a control for starting the same-table pairing interaction mode; after triggering this control, the anchor user m can start the same-table pairing interaction mode in the live voice virtual room, and other anchor users and non-anchor users (such as audience users) using the target application can enter the live voice virtual room to interact through the same-table pairing interaction mode. In the same-table pairing interaction mode, an anchor user and an audience user are paired to form a camp (a table, or team), so that the audience user and the anchor user play games and complete tasks together as one camp during the voice live broadcast.
It can be understood that, taking an anchor user v as an example, the anchor user v can enter the live voice virtual room created by the anchor user m in which the same-table pairing interaction mode has been started. It should be understood that such a live voice virtual room may include a plurality of team areas, and each team area may include a seat corresponding to an anchor user and a seat corresponding to an audience user, where the seat corresponding to the anchor user is used for displaying the anchor identifier of an anchor user who has successfully taken the mic, and the seat corresponding to the audience user is used for displaying the audience user identifier of an audience user who has successfully taken the mic (successfully paired). Illustratively, as shown in the embodiment corresponding to fig. 2a, the team area 4 in the live voice virtual room may include an identifier display area 20c and an identifier display area 20d, where the identifier display area 20d is used for displaying the anchor identifier of an anchor user who has successfully taken the mic, and the identifier display area 20c is used for displaying the audience user identifier of an audience user who has successfully taken the mic; the identifier display area 20d can be understood as the anchor seat in the team area 4, and the identifier display area 20c can be understood as the audience seat in the team area 4.
It should be appreciated that the anchor user v can see whether any anchor seat in the team areas of the live voice virtual room is still free. For example, as shown in the embodiment corresponding to fig. 2a, the anchor seats in the team area 1, the team area 2, and the team area 3 of the live voice virtual room all display anchor identifiers, but the identifier display area 20d in the team area 4 does not display an anchor identifier, which indicates that the anchor seat of the team area 4 is free, so the anchor user v may still apply, through the target application in the anchor terminal, to take the mic in the live voice virtual room (i.e., apply to take the free anchor seat).
Further, the anchor terminal corresponding to the anchor user v can display configuration key labels in the live voice virtual room, so that the anchor user v can select a key label suitable for himself. The display mode may be a pop-up window in the live voice virtual room, as shown in the embodiment corresponding to fig. 2a; the pop-up window includes a plurality of configuration key labels, and the anchor user v may select any configuration key label as his own key label. After the anchor user v completes the selection, the corresponding anchor terminal can initiate a mic request to the service server, the service server can forward the mic request to the anchor terminal corresponding to the room creator (the anchor user m), the anchor user m can audit the mic request on the anchor terminal, and after the anchor user m approves the mic request of the anchor user v, the anchor user v can successfully take the mic. For example, the anchor identifier corresponding to the anchor user v may be displayed in the identifier display area 20d in the team area 4, indicating that the anchor user v has successfully taken the mic.
Optionally, it may be understood that the anchor user may also select the key label when registering an account in the target application; that is, if the anchor user has already selected a key label at registration, no key label needs to be selected after applying for the mic, and the anchor terminal may directly send the mic request to the service server.
It should be noted that each team area includes at least one anchor seat and at least one audience seat (a seat corresponding to an audience user). The anchor user who creates the live voice virtual room (such as the anchor user m) can automatically take the anchor seat in a certain team area. For example, after the anchor user m creates the live voice virtual room and starts the same-table pairing interaction mode, the anchor identifier of the anchor user m may be displayed on the anchor seat of any team area of the live voice virtual room.
Further, it should be appreciated that when the conditions for audience users to take the mic are met, an audience user may apply to take the mic in the live voice virtual room. The mic-taking condition of audience users may be a preset condition; the preset condition may be a preset time window for audience users to take the mic (for example, within 10 minutes, half an hour, or 1 hour after the same-table pairing interaction mode is started), or the preset condition may be that all anchor seats in the live voice virtual room, or a proportion of them exceeding a proportion threshold (e.g., 2/3), are already occupied by anchors. That is, if there are 6 team areas and each team area includes 1 anchor seat, then there are 6 anchor seats in total, and after all 6 anchor seats are occupied, or after 4 of them are occupied, the mic-taking condition for audience users can be considered to be met. A sketch of this eligibility check is given below.
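The sketch below combines the two example conditions just described (a time window after the same-table pairing mode starts, and a minimum proportion of occupied anchor seats). The function name, the 10-minute window, and the 2/3 ratio are illustrative values taken from the examples above, not a prescribed implementation.

```python
# A minimal sketch of the audience mic-taking condition, assuming the two
# example conditions above; the helper name and defaults are illustrative.
import time

def audience_can_apply(mode_start_ts: float,
                       occupied_anchor_seats: int,
                       total_anchor_seats: int,
                       window_s: float = 600,           # e.g., 10 minutes
                       ratio_threshold: float = 2 / 3) -> bool:
    within_window = (time.time() - mode_start_ts) <= window_s
    enough_anchors = occupied_anchor_seats / total_anchor_seats >= ratio_threshold
    return within_window or enough_anchors

# 6 anchor seats in total, 4 already occupied: 4/6 >= 2/3, so audience
# users may apply even if the time window has already passed.
print(audience_can_apply(time.time() - 3600, occupied_anchor_seats=4,
                         total_anchor_seats=6))
```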
Further, the audience user may enter the live voice virtual room and initiate a mic request through the target application. It should be understood that, taking the first participating user and the user terminal corresponding to the first participating user (the first terminal) as an example, the application may provide a mic control in the user interface of the live voice virtual room (the display interface that the target application presents to the first participating user). The first participating user may click the mic control in the live voice virtual room (the mic control may be referred to as an interaction control) and initiate a mic request (the mic request may be referred to as an interaction request). Correspondingly, the first terminal can respond to the interaction request of the first participating user for the interaction control in the live voice virtual room and send the interaction request to the anchor terminal corresponding to the anchor user who created the live voice virtual room; that anchor user can audit the interaction request, and after the interaction request passes, the service server can obtain the question text associated with the key labels of the anchor users contained in the live voice virtual room, return the audit result and the question text to the first terminal, and the first terminal can display the question text after receiving them. It should be appreciated that the scenario embodiment corresponding to fig. 3a-3c above may serve as an exemplary scenario for this step.
For a specific implementation manner of acquiring the question text associated with the key tag of the anchor user contained in the live voice virtual room, see the description in the embodiment corresponding to fig. 6.
Step S102, responding to the answer input operation of the first participant user for the question text, and acquiring a first answer text provided by the first participant user.
In the application, the first participating user may input answer text for the question text, and the first terminal may obtain the answer text provided by the first participating user, which may be referred to as the first answer text. It should be understood that when the first terminal displays the question text, it may also display the configuration answer texts corresponding to the question text, and the first participating user may then select from the configuration answer texts; that is, the answer input operation performed by the first participating user for the question text may include a text selection operation. When the answer input operation includes a text selection operation, the first answer text may be acquired as follows: in response to the text selection operation of the first participating user on the configuration answer texts, the target configuration answer text corresponding to the text selection operation is acquired; the target configuration answer text may then be determined to be the first answer text provided by the first participating user.
For example, as shown in the embodiment corresponding to fig. 3c, when the first terminal C presents a question text (such as the question text 1), the configuration answer texts corresponding to the question text 1 may be presented together (including the configuration answer text "frequent weak emotion" and the configuration answer text "rarely weak emotion"), and the first participating user c may select one of them. As shown in fig. 3c, the configuration answer text selected by the first participating user c is "frequent weak emotion", which may be taken as the target configuration answer text; that is, the configuration answer text "frequent weak emotion" may be taken as the first answer text. Similarly, the configuration answer texts "very like photography" and "play game" selected by the first participating user c may also be taken as first answer texts.
Alternatively, the first terminal may also display the text input box together when displaying the question text, and the first participant user may input the answer text (e.g., input the answer text in a typing manner, input the answer text in a voice manner, etc.) for the question text in the text input box, where the answer text input by the first participant user may be the first answer text.
Step S103, when the first answer text is matched with the key label of the target anchor user, displaying the anchor identification of the target anchor user and the participation user identification of the first participation user in the team area corresponding to the target anchor user in the voice live virtual room; the N anchor users include target anchor users.
In the method, the first answer text provided by the first participating user can be matched with the key labels of the N anchor users in the live voice virtual room, and whether any of the N anchor users matches the first answer text is determined according to the matching result. If a matched anchor user exists, the matched anchor user can be determined to be the target anchor user, and the first answer text is determined to match the key label of the target anchor user. When the first answer text matches the key label of the target anchor user, the first participating user can be determined to be the tablemate of the target anchor user (that is, the audience user who forms a camp with the target anchor user). Once the tablemate of the target anchor user is determined to be the first participating user, the anchor identifier of the target anchor user and the participating user identifier of the first participating user can be displayed together in the team area corresponding to the target anchor user in the live voice virtual room, indicating that the first participating user has successfully taken the mic, and the seat taken is the audience seat in the team area corresponding to the target anchor user.
Each answer text may be associated with configuration keywords, so the first answer text is also associated with configuration keywords. Matching the first answer text provided by the first participating user with the key labels of the N anchor users in the live voice virtual room may therefore be performed as: matching the configuration keywords associated with the first answer text with the key labels of the N anchor users.
For a specific implementation manner of matching the configuration keywords associated with the first answer text with the key labels of the N anchor users, please refer to fig. 5, fig. 5 is a schematic diagram of a relationship among a question text, a configuration answer text and configuration keywords provided in an embodiment of the present application. A specific implementation of matching the configuration keywords associated with the first answer text with the key labels of the N anchor users will be described below based on the relationship diagram shown in fig. 5.
Each question text may correspond to one or more configuration answer texts. Taking 3 configuration answer texts per question text as an example, as shown in fig. 5, a question text may include configuration answer text 1, configuration answer text 2, and configuration answer text 3. It should be understood that one or more keywords may be configured for each configuration answer text as its associated configuration keywords; as shown in fig. 5, the configuration keywords associated with configuration answer text 1 include configuration keyword 1 and configuration keyword 2; the configuration keywords associated with configuration answer text 2 include configuration keyword 3 and configuration keyword 4; and the configuration keywords associated with configuration answer text 3 include configuration keyword 5 and configuration keyword 6. Further, taking the configuration answer text selected by the first participating user as configuration answer text 1 as an example, configuration answer text 1 can be taken as the first answer text, and whether the first answer text matches the key labels of the N anchor users can be determined based on configuration keyword 1 and configuration keyword 2 associated with configuration answer text 1.
The specific manner may be as follows: the matching degree between the configuration keywords and the key label of each of the N anchor users can be determined, giving N matching degrees; according to the N matching degrees, the target anchor user can be obtained from the N anchor users, and the first answer text can then be determined to match the key label of the target anchor user. Taking the case that the N anchor users include an anchor user 1 whose key label is key label 1 and an anchor user 2 whose key label is key label 2, the N matching degrees may be determined as follows: because the configuration keywords associated with the first answer text are configuration keyword 1 and configuration keyword 2, for the anchor user 1 the matching degree between configuration keyword 1 and key label 1 and the matching degree between configuration keyword 2 and key label 1 can be calculated; these two matching degrees can then be fused to determine the matching degree between the first answer text and the key label of the anchor user 1; similarly, the matching degree between the first answer text and the key label of the anchor user 2 can be calculated. Two matching degrees, one for the anchor user 1 and one for the anchor user 2, are thereby obtained. Calculating the matching degree between a configuration keyword and a key label can be understood as calculating the similarity between them. Taking configuration keyword 1 and key label 1 as an example, the vector corresponding to configuration keyword 1 and the vector corresponding to key label 1 may be obtained, and the vector similarity between the two vectors (for example, cosine similarity or vector distance) may be calculated. It should be understood that where the matching degree is a vector similarity, fusing the two matching degrees may be understood as fusing the two vector similarities (e.g., by vector addition or vector concatenation). A minimal sketch of this matching step is shown below.
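A minimal sketch of this keyword-to-label matching follows. The toy vectors, the cosine similarity, and the use of the mean as the fusion step are assumptions for illustration; the embodiment only requires some vector similarity and some fusion of the per-keyword scores.

```python
# Sketch of matching configuration keywords against anchor key labels.
# The embeddings below are placeholders; a real system would use whatever
# encoder produces the keyword/label vectors.
import numpy as np

VEC = {
    "photography": np.array([0.9, 0.1, 0.0]),
    "outdoor":     np.array([0.7, 0.3, 0.1]),
    "travel":      np.array([0.8, 0.2, 0.1]),   # key label of anchor user 1
    "e-sports":    np.array([0.1, 0.1, 0.9]),   # key label of anchor user 2
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_degree(keywords: list[str], key_label: str) -> float:
    """Fuse (here: average) the similarity of each configuration keyword
    associated with the answer text against one anchor's key label."""
    sims = [cosine(VEC[k], VEC[key_label]) for k in keywords]
    return sum(sims) / len(sims)

answer_keywords = ["photography", "outdoor"]          # configuration keywords 1 and 2
labels = {"anchor_user_1": "travel", "anchor_user_2": "e-sports"}
degrees = {a: match_degree(answer_keywords, lbl) for a, lbl in labels.items()}
target_anchor = max(degrees, key=degrees.get)          # highest of the N degrees
print(degrees, target_anchor)
```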
Alternatively, it may be appreciated that if the first answer text provided by the first participant user is entered via a text entry box, then keywords in the first answer text may be extracted and a degree of match between the first answer text and the anchor user's key labels may be determined based on the keywords.
Further, after the matching degree between the first answer text and each anchor user is determined, N matching degrees are obtained, and then the target anchor user is obtained from N anchor users according to the N matching degrees. In one possible manner, the specific manner may be: the maximum matching degree can be obtained from N matching degrees; the anchor user corresponding to the maximum matching degree may be determined as the target anchor user. Optionally, after the maximum matching degree is obtained, the maximum matching degree can be compared with a matching degree threshold, and if the maximum matching degree is greater than or equal to the matching degree threshold, the anchor user corresponding to the maximum matching degree can be determined to be the target anchor user; if the maximum matching degree is smaller than the matching degree threshold, it may be determined that the first participating user does not match any anchor user of the N anchor users, and the first participating user fails to match.
Alternatively, in a possible embodiment, the specific manner of acquiring the target anchor user may be: the matching degree which is larger than or equal to the threshold value of the matching degree in the N matching degrees can be determined as the candidate matching degree, and the anchor user corresponding to the candidate matching degree is determined as the candidate anchor user; then, candidate anchor user information corresponding to the candidate anchor user can be obtained, and the candidate anchor user information is displayed; then, the first participating user may select any anchor user information from the candidate anchor user information, and the first terminal may determine an anchor user corresponding to the selected candidate anchor user information as a target anchor user in response to an anchor selection operation of the first participating user with respect to the candidate anchor user information.
Alternatively, in a possible embodiment, the target anchor user may be acquired as follows: the matching degrees among the N matching degrees that are greater than or equal to the matching degree threshold can be determined as candidate matching degrees, and the anchor users corresponding to the candidate matching degrees as candidate anchor users; then, the number of followers of each candidate anchor user (such as the candidate anchor user's fan count) and the media data activity value (such as the number of users who watched the candidate anchor user's historical live broadcasts, the number of times the candidate anchor user has gone live, and so on) can be obtained, and the popularity value of each candidate anchor user can be determined from the number of followers and the media data activity value; the candidate anchor user with the greatest popularity value may then be determined to be the target anchor user. A sketch of this selection is given below.
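The sketch below illustrates this popularity-based choice among candidate anchor users whose matching degree cleared the threshold. The weighting of follower count against the media data activity value is not specified in the embodiment, so the weights and the activity formula here are assumptions.

```python
# Candidate selection by popularity value; the 0.6/0.4 weights and the way the
# activity value is built from historical viewers and live session count are
# illustrative assumptions.
def popularity(follower_count: int, historical_viewers: int, live_sessions: int) -> float:
    activity = historical_viewers + 10 * live_sessions
    return 0.6 * follower_count + 0.4 * activity

candidates = {
    # anchor users whose matching degree was >= the matching degree threshold
    "anchor_a": dict(follower_count=1200, historical_viewers=5000, live_sessions=40),
    "anchor_b": dict(follower_count=900,  historical_viewers=8000, live_sessions=25),
}
target_anchor = max(candidates, key=lambda a: popularity(**candidates[a]))
print(target_anchor)
```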
Further, when the first answer text matches the key label of the target anchor user, the anchor identifier of the target anchor user and the participating user identifier of the first participating user may be displayed together in the team area corresponding to the target anchor user in the live voice virtual room, which may be done as follows: a second identifier display area adjacent to the first identifier display area can be acquired in the team area corresponding to the target anchor user, where the first identifier display area displays the anchor identifier of the target anchor user; subsequently, the participating user identifier of the first participating user may be obtained and presented in the second identifier display area. It should be understood that the first identifier display area is the anchor seat corresponding to the target anchor user in that team area; for example, the identifier display area 20a shown in the embodiment corresponding to fig. 2a may be referred to as a first identifier display area. The second identifier display area is, in the team area corresponding to the target anchor user, the audience seat corresponding to the audience user matched with the target anchor user (i.e., the seat of the participating user who forms a camp with the target anchor user); for example, the identifier display area 20b shown in the embodiment corresponding to fig. 2a may be referred to as a second identifier display area.
Optionally, in a possible embodiment, any user watching the live voice virtual room (including anchor users and audience users) may click the anchor identifier of any anchor user to view that anchor user's attribute information (for example, basic identity information such as name, gender, and age). The specific manner in which the first participating user views the attribute information of the target anchor user may be: the first terminal can respond to a trigger operation on the anchor identifier of the target anchor user in the team area corresponding to the target anchor user and display an attribute information interface including the user attribute information of the target anchor user, where the attribute information interface includes a user description information area; the key labels of the target anchor user may then be presented in the user description information area.
Optionally, in a possible embodiment, if the first participating user and a second participating user both match the target anchor user within the same period of time, the target anchor user may choose between the first participating user and the second participating user to decide which participating user forms a camp with the target anchor user; after the target anchor user chooses the first participating user to form a camp with, the step in step S103 of displaying the anchor identifier of the target anchor user and the participating user identifier of the first participating user together in the team area corresponding to the target anchor user in the live voice virtual room may be performed. Specifically: the first terminal sends first user information corresponding to the first participating user to the target anchor terminal corresponding to the target anchor user; it then receives user confirmation information returned by the target anchor terminal based on the first user information; the user confirmation information includes the target participating user selected by the target anchor user based on the first user information and second user information; the second user information is the user information of the second participating user sent by a second terminal to the target anchor terminal; the second answer text provided by the second participating user also matches the key label of the target anchor user; the transmission timestamp of the first user information and the transmission timestamp of the second user information are within the same time range; and when the target participating user included in the user confirmation information is the first participating user, the step of displaying the anchor identifier of the target anchor user and the participating user identifier of the first participating user together in the team area corresponding to the target anchor user in the live voice virtual room is performed.
It should be appreciated that after all of the anchor users in the live voice virtual room, or a proportion of them exceeding a proportion threshold (e.g., 2/3), have been matched with participating users of the same camp, the room is divided into different camps (each camp including an anchor user and an audience user), and these camps may carry out voice interactions during the voice live broadcast of the live voice virtual room (e.g., singing together, completing idiom chain tasks together, completing story chain tasks together). Through the same-table pairing mode, audience users can not only interact with anchor users through text chat, but also take the mic in the live voice virtual room and help the anchor users complete tasks; the identity of an audience user can thus be converted into an identity similar to that of an anchor: the audience user can send voice in the live voice virtual room and hold voice conversations with the anchor user, which can greatly improve the audience user's sense of interaction with and participation in the live broadcast.
In the embodiment of the application, after an audience user (such as the first participating user) enters the live voice virtual room, the user can click the interaction control to apply to interact with an anchor user; the terminal corresponding to the first participating user (such as the first terminal) can respond to the trigger operation and display to the first participating user the question text associated with the anchor users, and the first participating user then answers the question text. After acquiring the answer text provided by the first participating user (such as the first answer text), the first terminal can match it with the key labels of the anchor users, and after a successful match, the anchor identifier of the target anchor user and the participating user identifier of the first participating user can be displayed together in the team area of the successfully matched target anchor user. It should be appreciated that displaying the first participating user's identifier together with the target anchor user's identifier allows the first participating user to interact with the target anchor user. It can thus be seen that, during the live broadcast of the live voice virtual room, the key labels of the anchor users and the answer texts of the audience users can be matched quickly. Because the key labels of the anchor users represent the anchor users' preferences and the answer texts of the audience users represent the audience users' preferences, an anchor user who fits an audience user's preferences can be matched quickly based on the matching result; that is, an audience user only needs to answer the questions to determine whether a matching anchor user exists, redundant data requests can be avoided, and data traffic is saved. In conclusion, the matching efficiency between audience users and anchor users can be improved, frequent data requests are avoided, and data traffic is saved.
For an understanding of the specific implementation manner of acquiring the question text, please refer to fig. 6, and fig. 6 is a schematic diagram of acquiring the question text according to an embodiment of the present application.
As shown in fig. 6, taking the case that the N anchor users include an anchor user k_i, the question text associated with the key labels of the anchor user k_i may be acquired as follows: the first terminal can respond to the interaction request of the first participating user for the interaction control in the live voice virtual room and acquire the key labels of the anchor user k_i; subsequently, configuration keywords matching the key labels of the anchor user k_i can be acquired from the configuration keyword set as target keywords; subsequently, a text mapping table may be obtained, where the text mapping table includes mapping relations between configuration question texts and configuration keywords; in the text mapping table, the configuration question texts having a mapping relation with the target keywords can be obtained, determined to be the question text, and displayed.
The target keywords may be determined in the configuration keyword set as follows: the label vector corresponding to a key label of the anchor user k_i can be obtained; then, the word vector corresponding to each configuration keyword in the configuration keyword set can be obtained, giving a word vector set; the similarity between the label vector and each word vector in the word vector set can be determined, giving a similarity set; the similarities greater than or equal to a similarity threshold may be obtained from the similarity set and determined as target similarities, and the configuration keywords corresponding to the target similarities may be determined as the target keywords.
It should be understood that, as shown in the relationship among question texts, configuration answer texts, and configuration keywords corresponding to fig. 5, since each question text may correspond to one or more configuration answer texts and each configuration answer text may correspond to one or more configuration keywords, each question text and its corresponding configuration answer texts and configuration keywords may be recorded in a text mapping table, and the configuration keywords having a mapping relation with a question text may be obtained through the text mapping table; it should be appreciated that the configuration keywords corresponding to all question texts may constitute the configuration keyword set. As shown in fig. 6, taking the anchor user k_i whose key labels are key label 11 and key label 12 as an example, the vector similarity between each configuration keyword in the configuration keyword set and key label 11 may be calculated, and target keyword 1 may be obtained from the configuration keyword set based on that vector similarity; similarly, the vector similarity between each configuration keyword in the configuration keyword set and key label 12 may be calculated, and target keyword 2 may be obtained from the configuration keyword set based on that vector similarity. Further, the configuration answer text having a mapping relation with target keyword 1 can be obtained through the text mapping table and determined to be target answer text 1; the configuration answer text having a mapping relation with target keyword 2 can be obtained and determined to be target answer text 2. Further, through the text mapping table, the question text having a mapping relation with target answer text 1 can be obtained as question text 1, and the question text having a mapping relation with target answer text 2 as question text 2; the question texts composed of question text 1 and question text 2 can then be determined to be the question text associated with the anchor user k_i. A minimal sketch of this flow is given after this paragraph.
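The sketch below follows the fig. 6 flow under the assumption of a toy text mapping table and a placeholder similarity function; only the shape of the lookup (key label, then target keywords, then configuration answer texts, then question texts) comes from the description above.

```python
# Toy text mapping table: question text -> {configuration answer text: keywords}.
TEXT_MAP = {
    "question_1": {"answer_1a": ["keyword_1", "keyword_2"]},
    "question_2": {"answer_2a": ["keyword_3"], "answer_2b": ["keyword_4"]},
}

def similar(keyword: str, key_label: str) -> float:
    # Placeholder; the embodiment compares a word vector with a label vector.
    return 1.0 if keyword == key_label else 0.0

def questions_for(key_labels: list[str], threshold: float = 0.7) -> list[str]:
    keyword_set = {kw for answers in TEXT_MAP.values()
                   for kws in answers.values() for kw in kws}
    target_keywords = {kw for kw in keyword_set
                       for lbl in key_labels if similar(kw, lbl) >= threshold}
    # Walk back through the mapping table: keep question texts whose
    # configuration answer texts carry at least one target keyword.
    return [q for q, answers in TEXT_MAP.items()
            if any(set(kws) & target_keywords for kws in answers.values())]

print(questions_for(["keyword_1", "keyword_4"]))  # -> ['question_1', 'question_2']
```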
Alternatively, it will be appreciated that if the first answer text provided by the first participating user is entered via a text input box, that is, no configuration answer texts are provided for the question text, then one or more configuration question texts may be randomly selected as the question text associated with the anchor users and presented.
It should be understood that, in order to improve the interest of the anchor content and the diversity of the live broadcast form, and thereby improve the enthusiasm of audience users, after audience users (participating users) have been successfully matched with anchor users to form different camps, a leader user may be elected from among the audience users in the different camps, and the camp where the leader user is located may be taken as the leader camp; the leader camp is allocated additional virtual resources in the subsequent live interactive play, and these additional virtual resources can improve the leader camp's probability of being elected when the optimal camp is contested. For ease of understanding, please refer to fig. 7, which is a system flowchart provided in an embodiment of the present application. As shown in fig. 7, the flow may include the following steps S201 to S209:
Step S201, the anchor creates a live voice virtual room and starts the same-table pairing playing method.
In the application, any anchor user can create a live voice virtual room and can choose to start the same-table pairing interactive play (i.e., the same-table pairing interaction mode).
Step S202, other anchor users request to go on mic.
In the application, other anchor users who did not create the live voice virtual room can enter the live voice virtual room and apply to go on mic.
Step S203, the user applies to go on mic and answers the questions.
In the application, audience users (non-anchor users) can enter the live voice virtual room and apply to go on mic (going on mic here means being successfully matched with a certain anchor user, becoming that anchor user's tablemate, and becoming an audience user in the same camp as that anchor user). It should be appreciated that when an audience user applies to go on mic, that audience user may be referred to as a participating user.
Step S204, the user plays the game.
In the application, when the audience user applies to go on mic and the matching is successful, the audience user can successfully go on mic and participate in the subsequent game. If the audience user applies to go on mic but the matching is not successful, the following step S205 may be entered.
Step S205, apply again or watch as a spectator.
In the method, when the audience user applies to go on mic but is not matched successfully, if the number of mic applications made by that audience user in the live voice virtual room does not exceed the application count threshold, the audience user can apply again and answer the questions again; if the number of mic applications made by that audience user in the live voice virtual room has reached the application count threshold, the audience user can no longer apply and can choose to remain in the live voice virtual room as a spectator (browsing user). If the number of mic applications has not exceeded the application count threshold, the audience user may also choose to give up applying and watch in the live voice virtual room as a spectator (browsing user).
For the specific implementation manners of the steps S201 to S205, reference may be made to the descriptions of the steps S101 to S103 in the embodiment corresponding to fig. 4, and the detailed descriptions will be omitted here.
Step S206, electing the leader user and the leader camp.
In the application, the service server can determine a leader user according to the interaction data of spectators (browsing users) on the participating users who have successfully gone on mic in each camp, and determine the camp where the leader user is located as the leader camp; it should be appreciated that the application may allocate additional virtual resources to the leader camp, and these additional virtual resources can increase the leader camp's probability of being elected when the optimal camp is subsequently contested. Taking the first participating user and the target anchor user as an example, the leader camp may be determined as follows: during the leader election period, a first interaction control can be displayed in a first interaction display area of the team area corresponding to the target anchor user, and a second interaction control can be displayed in a second interaction display area of the team areas corresponding to the remaining anchor users; the remaining anchor users are the anchor users, other than the target anchor user, among the matched anchor users contained in the live voice virtual room; the first interaction control can be used for interaction between browsing users and the target anchor user and between browsing users and the first participating user; the second interaction control can be used for interaction between browsing users and the remaining anchor users and between browsing users and the remaining participating users; the participating user identifiers of the remaining participating users and the anchor identifiers of the remaining anchor users are displayed together in the team areas corresponding to the remaining anchor users; the remaining participating users are the participating users, other than the first participating user, among the participating users contained in the live voice virtual room; a browsing user refers to a user watching the live voice virtual room (i.e., a spectator). When the system time reaches the maximum timestamp of the leader election period, and the first interaction behavior count displayed for the first interaction control in the first interaction display area is greater than the second interaction behavior count displayed for the second interaction control in the second interaction display area, a leader display area can be displayed in the live voice virtual room, the first user information of the first participating user (the first participating user being the leader user) is displayed in the leader display area, and leader prompt information for the first participating user is displayed.
The first user information of the first participating user and the leader prompt information for the first participating user may be displayed in the leader display area as follows: because the first participating user is the leader user, the camp formed by the first participating user and the target anchor user can be determined to be the leader camp; then, the additional virtual resource rate for the leader camp can be obtained, and resource allocation prompt information can be generated according to the leader camp and the additional virtual resource rate; the resource allocation prompt information can be used for prompting that the camp allocated the additional virtual resource rate is the leader camp; the additional virtual resource rate is used for determining the additional virtual resources allocated to the leader camp; and the first user information, the leader prompt information, and the resource allocation prompt information are displayed in the leader display area.
In the above process, the specific implementation manner of displaying the first interaction behavior number and the second interaction behavior number may be: when the system time reaches the maximum timestamp of the leader election time period, the first interaction data of the first interaction user aiming at the first interaction control and the second interaction data of the second interaction user aiming at the second interaction control can be counted; the first interactive data may include interactive behavior and first interactive user identification information; the second interactive data may include interactive behavior and second interactive user identification information; the browsing user comprises a first interactive user and a second interactive user; determining a first identification number of the first interactive user identification information and a second identification number of the second interactive user identification information; taking the first identification number as a first interaction behavior number, and taking the second identification number as a second interaction behavior number; displaying the first interactive behavior quantity in a first interactive display area, and displaying the second interactive behavior quantity in a second interactive display area.
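A small sketch of this tallying step is given below: each like is recorded as a (control, browsing user id) pair, the number of distinct user identifiers per control is used as the interaction behavior count, and the control with the most likes identifies the leader. The event data is illustrative.

```python
# Count interaction behaviors per interaction control from recorded like events.
from collections import defaultdict

like_events = [  # (interaction control id, browsing user id), illustrative data
    ("team_area_4_like", "viewer_01"),
    ("team_area_4_like", "viewer_02"),
    ("team_area_1_like", "viewer_03"),
    ("team_area_4_like", "viewer_02"),   # repeat like by the same user
]

users_per_control = defaultdict(set)
for control_id, viewer_id in like_events:
    users_per_control[control_id].add(viewer_id)

behavior_counts = {c: len(u) for c, u in users_per_control.items()}
leader_control = max(behavior_counts, key=behavior_counts.get)
print(behavior_counts, leader_control)   # team_area_4_like has 2 distinct likers
```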
It should be appreciated that the first interaction control and the second interaction control may include a like control, a comment control, and other controls that can be used for interaction. Taking the case that the first interaction control and the second interaction control are like controls, a like control may be displayed in the team area corresponding to each anchor user, browsing users may like the participating users in the different camps via the like controls, the number of likes of each participating user in each camp may be counted, the participating user with the greatest number of likes is taken as the leader user, and the camp where that participating user is located may be determined to be the leader camp. For ease of understanding, please refer to fig. 8, which is a schematic diagram of a scenario for determining the leader camp according to an embodiment of the present application. As shown in fig. 8, take the first terminal to be the first terminal C in the embodiment corresponding to fig. 3a-3d, and the user interface of the first participating user in the live voice virtual room of the target application to be the interface presented by the first terminal C in fig. 3a-3d. Among the team areas of the live voice virtual room, taking the team area 4 as an example, the first terminal C may display a like control in an interaction display area 80a of the team area 4 (the interaction display area 80a corresponds to the audience user); the interaction display area 80a may be referred to as a first interaction display area, and the like control may be referred to as a first interaction control. Similarly, as shown in fig. 8, a like control may be displayed in the interaction display area corresponding to each audience user in the other team areas; optionally, a like control may also be displayed in the interaction display area corresponding to each anchor user in each team area.
It should be appreciated that a browsing user (spectator) watching the live voice virtual room may like an anchor user or an audience user (participating user) in a team area via the like control (the like may be referred to as an interaction behavior), and for each like, the browsing user's interaction behavior (e.g., which like control was clicked, i.e., which user was liked) and the browsing user's identification information may be recorded; the liking browsing user may be referred to as an interactive user, and the identification information may be referred to as interactive user identification information. It should be understood that when the system time reaches the maximum timestamp of the leader election period, the first terminal C may obtain the like count corresponding to each like control (1 piece of identification information may correspond to 1 like) through the identification information recorded by the service server, and the first terminal C may display these like counts. It can be understood that the first terminal C may obtain the greatest like count among the like counts of the participating users in the camps (as shown in fig. 8, the greatest like count is 700) and obtain the participating user corresponding to the greatest like count. Taking the participating user corresponding to the greatest like count of 700 to be a participating user z as an example, the first terminal C may acquire user information of the participating user z (for example, the nickname of the participating user z in the target application: "hair rain") and may generate the leader prompt information from the user information of the participating user z (i.e., prompt information indicating that the participating user z has been elected as the leader user); further, the first terminal C may obtain the additional virtual resource rate (e.g., 20%) for the leader camp and may generate the resource allocation prompt information from the additional virtual resource rate and the leader camp. As shown in fig. 8, a leader prompt popup may be displayed in the live voice virtual room; the leader prompt popup may include text such as "Congratulations to 'hair rain' for receiving 700 likes and being elected as the leader user; the camp where the leader user is located receives a 20% bonus".
Step S207, the game is started.
In the application, after the leader user and the leader camp have been elected, each camp can start to play games or complete tasks in the live voice virtual room. For example, the games or tasks may be: singing games, idiom chain games, "you describe, I guess" games, story chain tasks, and so on.
Step S208, the spectators send virtual resources to determine the optimal camp.
In the present application, while each camp is playing the game or completing tasks, spectators may send virtual resources to the users in each camp; for example, live experience values, live scores, virtual diamonds, virtual rockets, virtual cars, and the like may be referred to as virtual resources. Each spectator can send virtual resources (such as virtual cars and virtual fireworks) to the users in each camp (including the anchor user and the participating users), and the number of virtual resources received by each camp as a whole can then be determined as that camp's corresponding value, which can be used as the virtual resource value of each camp.
It will be appreciated that the additional virtual resource rate described above may be used to determine the additional virtual resource value corresponding to the leader camp. It should be appreciated that the leader camp is assigned an additional virtual resource rate, and the additional virtual resource value of the leader camp may be calculated based on that rate. For example, if the additional virtual resource rate is 20% and the virtual resource value of the leader camp is 20, then the final virtual resource value of the leader camp may be 20 + 20 x 20% = 24; that is, the final virtual resource value of the leader camp can be increased based on the additional virtual resource rate, as in the small helper below.
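The worked example above (20 + 20 x 20% = 24) can be written as a small helper; the function name is illustrative.

```python
def final_resource_value(initial_value: float, bonus_rate: float) -> float:
    """Initial virtual resource value plus the leader camp's bonus."""
    return initial_value + initial_value * bonus_rate

print(final_resource_value(20, 0.20))  # -> 24.0
```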
Step S209, the game ends and the optimal camp is announced.
In the method, when the system time reaches the game ending time, the service server can count the virtual resource value corresponding to each camp, determine the optimal camp according to those virtual resource values, and announce it.
In order to facilitate understanding of the specific implementation of contesting and announcing the optimal camp described above, the following description is given: during the camp election period (i.e., the game play period), a first resource sending control can be displayed in a first resource display area of the team area corresponding to the target anchor user, and a second resource sending control can be displayed in a second resource display area of the team areas corresponding to the remaining anchor users; the first resource sending control is used by browsing users to send virtual resources for the leader camp; the second resource sending control is used by browsing users to send virtual resources for the remaining camps; the remaining camps are formed by the remaining anchor users and the remaining participating users. When the system time reaches the maximum timestamp of the camp election period, and the first virtual resources displayed for the first resource sending control in the first resource display area are greater than the second virtual resources displayed for the second resource sending control in the second resource display area, an optimal camp display area is displayed in the live voice virtual room, and the anchor identifier of the target anchor user and the participating user identifier of the first participating user are displayed in the optimal camp display area.
In the above process, the first virtual resources and the second virtual resources may be displayed as follows: when the system time reaches the maximum timestamp of the camp election period, the initial virtual resources of the leader camp can be obtained; the initial virtual resources are the virtual resources sent by browsing users to the leader camp; the maximum timestamp of the camp election period is later than the maximum timestamp of the leader election period. Subsequently, the second virtual resources of the remaining camps can be acquired; the second virtual resources are the virtual resources sent by browsing users to the remaining camps. According to the additional virtual resource rate and the initial virtual resources, the additional virtual resources of the leader camp are determined; according to the additional virtual resources and the initial virtual resources, the first virtual resources of the leader camp can be determined; then, the first virtual resources may be displayed in the first resource display area and the second virtual resources in the second resource display area. A sketch of this computation and of selecting the optimal camp is given below.
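The sketch below puts these pieces together under illustrative numbers: the leader camp's displayed value is its initial virtual resources plus the bonus from the additional virtual resource rate, the remaining camps use their raw totals, and the camp with the largest value is taken as the optimal camp (the 55 + 11 = 66 figure mirrors the fig. 9 scenario but is an assumption).

```python
# Camp election with a leader-camp bonus; names and numbers are illustrative.
BONUS_RATE = 0.20

camps = {
    "camp_T": {"initial": 55, "is_leader_camp": True},   # leader camp
    "camp_2": {"initial": 60, "is_leader_camp": False},
}

def displayed_value(camp: dict) -> float:
    bonus = camp["initial"] * BONUS_RATE if camp["is_leader_camp"] else 0.0
    return camp["initial"] + bonus

values = {name: displayed_value(c) for name, c in camps.items()}
optimal_camp = max(values, key=values.get)
print(values, optimal_camp)   # camp_T: 66.0 beats camp_2: 60
```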
For ease of understanding, please refer to fig. 9, which is a schematic diagram of a scenario for determining the optimal camp according to an embodiment of the present application. The embodiment corresponding to fig. 9 may be an exemplary scenario that continues the contest for the optimal camp from the embodiment corresponding to fig. 8. Among the team areas of the live voice virtual room, taking the team area 4 as an example, the first terminal C may take any area of the team area 4 in which a control can be displayed as a resource display area, and a resource sending control (such as the resource sending control 90a shown in fig. 9) may be displayed there. Similarly, as shown in fig. 9, resource sending controls may be displayed in the resource display areas corresponding to the other team areas.
It should be appreciated that a browsing user (spectator) watching the live voice virtual room may send virtual resources to each camp via the resource sending controls, and the service server may record the virtual resources each browsing user sends to each camp. It should be understood that, when the system time reaches the maximum timestamp of the camp election period, the first terminal C may obtain, through the virtual resources recorded by the service server, the number of virtual resources received by each camp, determine the virtual resource value corresponding to each camp based on that number (if the number is 20, the virtual resource value is 20), and display the virtual resource value corresponding to each camp. It can be understood that, among the virtual resource values of the camps, the first terminal C may obtain the greatest virtual resource value (as shown in fig. 9, the greatest virtual resource value is 66) and obtain the camp corresponding to that greatest value. Taking the camp corresponding to the greatest virtual resource value of 66 to be a camp T as an example, the first terminal C may obtain the user information contained in the camp T (for example, the anchor identifier corresponding to its anchor user and the participating user identifier corresponding to its participating user), and the first terminal C may create an optimal camp display area and display the anchor identifier and participating user identifier of the optimal camp in it. As shown in fig. 9, the optimal camp display area in the live voice virtual room may include the text "optimal camp born" together with the anchor identifier and participating user identifier contained in the optimal camp T.
It is understood that the same-table pairing interaction mode provided in the live voice virtual room allows audience users to play games and complete tasks together with anchor users in the live voice virtual room; audience users are no longer limited to interacting with anchor users through text chat, which enriches the ways audience users and anchor users can interact. Meanwhile, the enthusiasm of audience users can be improved through the leader user and leader camp election play.
Further, for ease of understanding, please refer to fig. 10, which is a schematic flowchart of an anchor user applying to go on mic according to an embodiment of the present application. As shown in fig. 10, the flow may include the following steps S301 to S309:
step S301, the anchor user sets a matching condition.
In the application, any anchor user can create a live voice virtual room and can choose to start the same-table pairing interactive play (i.e., the same-table pairing interaction mode). Before creating the virtual room, the anchor user may set a matching condition for the same-table pairing (e.g., an audience user can become the anchor user's tablemate, and thereby form a camp with the anchor user, only if the audience user's answer text and the anchor user's key label reach a matching threshold). The service server may receive this matching condition.
In step S302, the anchor user requests to create a virtual room and starts the co-table pairing interactive play.
In step S303, the service server creates a virtual room and determines whether the creation is successful.
In the application, the anchor user can send a creation request for the virtual room to the service server through the anchor terminal corresponding to the anchor user, and the service server can create the live voice virtual room based on the creation request. The service server then determines whether the creation is successful: if the creation fails, step S304 may be executed; if the creation is successful, step S305 may be performed.
Step S304, the service server prompts the anchor user that the creation failed.
In step S305, the service server presents a mic-joining entry in the virtual room.
Specifically, the mic-joining entry may refer to a mic-joining control.
In step S306, another anchor user requests to join the mic.
Step S307, it is determined whether the review is passed.
In the application, the service server can push the mic-joining request of the other anchor user to the anchor user who created the virtual room, and that anchor user can review the mic-joining request. If the review is passed, step S308 may be executed; if the review is not passed, step S309 may be performed.
Step S308, wait for an audience user to be matched.
In the application, after the other anchor user joins the mic, audience users can be matched with the anchor user and join the mic.
Step S309, prompting that joining the mic failed.
Specifically, the service server prompts the other anchor user that joining the mic failed.
Further, for ease of understanding, please refer to fig. 11, which is a schematic flow chart of an audience user applying to join the mic according to an embodiment of the present application. As shown in fig. 11, the flow may include the following steps S401 to S410:
Step S401, an audience user enters a virtual room with the tablemate pairing play enabled.
Step S402, the audience user applies to join the mic.
In step S403, the service server determines whether the audience user's number of mic-joining applications exceeds the application number threshold.
In the present application, an application number threshold may be preset for each audience user in the live voice virtual room. After an audience user sends a mic-joining application, the service server may determine the number of applications that audience user has made in the live voice virtual room. If that number exceeds the application number threshold (for example, 3), step S404 may be executed; if it has not exceeded the threshold, step S405 may be performed.
In step S404, the service server prompts the audience user that the application failed.
In step S405, the service server pushes the question text to the audience user.
Specifically, the audience user may answer based on the question text; that is, the audience user answers the question.
In step S406, the service server determines whether the matching is successful.
Specifically, the service server may match an anchor user for the audience user based on the audience user's answer text and the anchor users' key labels. If the matching is successful, step S409 may be executed; if the matching fails, step S407 may be executed.
In step S407, the service server determines whether there are free seats.
Specifically, the free seats may refer to the free anchor seats for anchor users in the live voice virtual room, as well as the free audience seats for audience users. If there are free seats, step S408 may be performed; if there are no free seats, step S410 may be performed.
In step S408, the service server allocates a free seat to wait for other audience users to be matched.
In step S409, the service server updates the seat.
Specifically, after the matching is successful, the service server may display the audience user identification of the audience user on a particular audience seat of the live voice virtual room.
Step S410, the matching is ended.
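For ease of understanding only, the decision logic of steps S403 to S410 can be summarized with the following sketch. It is an illustration under stated assumptions, not a prescribed implementation: the threshold value, the function name, and the parameters are introduced here, and the matching result of steps S405 to S406 is assumed to be computed elsewhere and passed in.

```python
from typing import Optional

APPLICATION_THRESHOLD = 3  # example threshold value from the description above

def handle_mic_application(application_count: int,
                           matched_anchor: Optional[str],
                           free_seats: int) -> str:
    """Decide the outcome of one audience mic-joining application."""
    if application_count > APPLICATION_THRESHOLD:
        return "application failed"             # step S404
    if matched_anchor is not None:
        return "seated with " + matched_anchor  # step S409
    if free_seats > 0:
        return "free seat allocated, waiting"   # step S408
    return "matching ended"                     # step S410

print(handle_mic_application(2, "anchor_A", 1))  # seated with anchor_A
print(handle_mic_application(4, None, 0))        # application failed
```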
It can be understood that a tablemate pairing interaction mode is provided in the live voice virtual room, so that audience users can play games together with anchor users in the live voice virtual room to complete tasks; audience users are no longer limited to interacting with anchor users through text chat, which enriches the interaction modes between audience users and anchor users and improves their sense of interaction and participation.
Further, referring to fig. 12, fig. 12 is a schematic structural diagram of a live broadcast data processing apparatus according to an embodiment of the present application. The live data processing apparatus may be a computer program (including program code) running in a computer device; for example, the live data processing apparatus may be application software. The live data processing apparatus may be used to perform the method shown in fig. 4. As shown in fig. 12, the live data processing apparatus 1 may include: a text presentation module 91, a text acquisition module 92 and an identification display module 93.
The text display module 91 is configured to respond to an interaction request of the first participant user for an interaction control in the live voice virtual room, and display a question text; the questioning text is associated with key labels of N anchor users in the voice live virtual room; n is a positive integer;
A text obtaining module 92, configured to obtain a first answer text provided by a first participating user in response to an answer input operation of the first participating user with respect to the question text;
the identification display module 93 is configured to jointly display, when the first answer text matches with a key label of the target anchor user, an anchor identification of the target anchor user and a participation user identification of the first participation user in a team area corresponding to the target anchor user in the live voice virtual room; the N anchor users include target anchor users.
The specific implementation manner of the text display module 91, the text obtaining module 92, and the identifier display module 93 may be referred to the description of step S101 to step S103 in the embodiment corresponding to fig. 4, which will not be repeated here.
In one embodiment, the question text includes configuration answer text; the answer input operation includes a text selection operation;
the text acquisition module 92 may include: an operation response unit 921, and a text determination unit 922.
An operation response unit 921, configured to respond to a text selection operation of the first participating user for the configuration answer text, and obtain a target configuration answer text corresponding to the text selection operation;
A text determination unit 922, configured to determine the target configuration answer text as the first answer text provided by the first participating user.
For a specific implementation manner of the operation response unit 921 and the text determination unit 922, reference may be made to the description in step S102 in the embodiment corresponding to fig. 4, which will not be repeated here.
Referring to fig. 12, the identification display module 93 may include: the region acquisition unit 931 and the identification display unit 932.
A region obtaining unit 931, configured to obtain, in the team region corresponding to the target anchor user, a second identifier display region adjacent to the first identifier display region; the anchor identifier of the target anchor user is displayed in the first identifier display area;
the identifier display unit 932 is configured to obtain a participant user identifier of the first participant user, and display the participant user identifier of the first participant user in the second identifier display area.
The specific implementation manner of the region obtaining unit 931 and the identifier displaying unit 932 may be referred to the description in step S103 in the embodiment corresponding to fig. 4, which will not be repeated here.
Referring to fig. 12, the live data processing apparatus 1 may further include: the interface display module 94 and the label display module 95.
The interface display module 94 is configured to display an attribute information interface including user attribute information of the target anchor user in response to a triggering operation for an anchor identifier of the target anchor user in a team area corresponding to the target anchor user; the attribute information interface comprises a user description information area;
the tag display module 95 is configured to display the key tag of the target anchor user in the user description information area.
The specific implementation manner of the interface display module 94 and the label display module 95 may be referred to the description in step S103 in the embodiment corresponding to fig. 4, and will not be described herein.
Referring to fig. 12, the live data processing apparatus 1 may further include: the interactive control display module 96 and the lead information display module 97.
The interactive control display module 96 is configured to display, during the collarband election period, a first interactive control in a first interactive display area of the team area corresponding to the target anchor user, and display a second interactive control in a second interactive display area of the team areas corresponding to the remaining anchor users; the remaining anchor users are the anchor users, among the matched anchor users contained in the live voice virtual room, other than the target anchor user; the first interactive control is used for a browsing user to interact with the target anchor user and the first participating user; the second interactive control is used for the browsing user to interact with the remaining anchor users and the remaining participating users; the participant user identifiers of the remaining participating users and the anchor identifiers of the remaining anchor users are jointly displayed in the team areas corresponding to the remaining anchor users; the remaining participating users are the participating users, among the participating users contained in the live voice virtual room, other than the first participating user; a browsing user refers to a user watching the live voice virtual room;
The collarband information display module 97 is configured to display a collarband display area in the live voice virtual room when the system time reaches a maximum timestamp of a collarband competitive period and the number of first interaction behaviors displayed for the first interaction control in the first interaction display area is greater than the number of second interaction behaviors displayed for the second interaction control in the second interaction display area;
the collarband information display module 97 is further configured to display, in the collarband display area, first user information of the first participant user, and collarband prompt information for the first participant user.
In one embodiment, the lead information display module 97 may include: a camping determination unit 971 and an information display unit 972.
A camping determining unit 971, configured to determine a camping formed by the first participating user and the target anchor user as a collarband camping;
the information display unit 972 is configured to obtain an additional virtual resource rate for the camping of the collarband, and generate resource allocation prompt information according to the camping of the collarband and the additional virtual resource rate; the resource allocation prompt information is used for prompting that the camping allocated with the additional virtual resource rate is a collarband camping; the additional virtual resource rate is used for determining additional virtual resources allocated to the camping of the collarband;
The information display unit 972 is further configured to display, in the collarband display area, the first user information, the collarband prompt information, and the resource allocation prompt information.
The specific implementation manner of the camping determining unit 971 and the information display unit 972 may be referred to the description in the embodiment corresponding to fig. 7, and will not be described herein.
Referring to fig. 12, the live data processing apparatus 1 may further include: the interactive data statistics module 98 and the quantity display module 100.
The interaction data statistics module 98 is configured to, when the system time reaches a maximum timestamp of the collarband election time period, count first interaction data of the first interaction user for the first interaction control and second interaction data of the second interaction user for the second interaction control; the first interactive data comprises interactive behaviors and first interactive user identification information; the second interaction data comprises interaction behaviors and second interaction user identification information; the browsing user comprises a first interactive user and a second interactive user;
the interactive data statistics module 98 is further configured to determine a first number of identifiers of the first interactive user identification information and a second number of identifiers of the second interactive user identification information;
The interaction data statistics module 98 is further configured to use the first number of identifiers as a first number of interaction behaviors and the second number of identifiers as a second number of interaction behaviors;
the quantity display module 100 is configured to display a first quantity of interactive actions in the first interactive display area and a second quantity of interactive actions in the second interactive display area.
The specific implementation of the interactive data statistics module 98 and the number display module 100 may be referred to the description of the embodiment corresponding to fig. 7, and will not be described herein.
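For ease of understanding only, the counting described above can be sketched as follows: the number of interaction behaviors for a control is taken as the number of distinct interacting user identifiers recorded for it. The event format and names are assumptions introduced here for illustration.

```python
def count_interaction_behaviors(events, control_id):
    """events is a list of (control_id, user_id) pairs recorded during the
    collarband election period; the count is the number of distinct users."""
    return len({user for control, user in events if control == control_id})

events = [("first_ctrl", "u1"), ("first_ctrl", "u1"), ("first_ctrl", "u2"), ("second_ctrl", "u3")]
first_count = count_interaction_behaviors(events, "first_ctrl")    # 2
second_count = count_interaction_behaviors(events, "second_ctrl")  # 1
print(first_count, second_count)  # the collarband display area is shown when first_count > second_count
```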
Referring to fig. 12, the live data processing apparatus 1 may further include: a transmission control display module 1001 and a camping display module 1002.
The transmission control display module 1001 is configured to display, during the camping competitive period, a first resource transmission control in a first resource display area of the team formation area corresponding to the target anchor user, and display a second resource transmission control in a second resource display area of the team formation areas corresponding to the remaining anchor users; the first resource transmission control is used for a browsing user to send virtual resources to the collarband camping; the second resource transmission control is used for the browsing user to send virtual resources to the remaining campings; the remaining campings are formed by the remaining anchor users and the remaining participating users;
The camping display module 1002 is configured to display an optimal camping display area in the live voice virtual room when the system time reaches the maximum timestamp of the camping competitive period and the first virtual resource displayed for the first resource transmission control in the first resource display area is larger than the second virtual resource displayed for the second resource transmission control in the second resource display area, and to display the anchor identifier of the target anchor user and the participant user identifier of the first participating user in the optimal camping display area.
The specific implementation manner of the transmission control display module 1001 and the camping display module 1002 may be referred to the description in the embodiment corresponding to fig. 7, and will not be described herein.
Referring to fig. 12, the live data processing apparatus 1 may further include: the resource acquisition module 1003 and the resource display module 1004.
A resource obtaining module 1003, configured to obtain an initial virtual resource of the camping of the collarband when the system time reaches a maximum timestamp of the camping competitive period; the initial virtual resource is a virtual resource sent by a browsing user to a collarband camp; the maximum timestamp of the camping election period is later than the maximum timestamp of the collarband election period;
The resource obtaining module 1003 is further configured to obtain a second virtual resource of the remaining camps; the second virtual resource is a virtual resource sent by the browsing user to the rest camps;
the resource obtaining module 1003 is further configured to determine an additional virtual resource of the camping team according to the additional virtual resource rate and the initial virtual resource;
the resource obtaining module 1003 is further configured to determine a first virtual resource of the camping team according to the additional virtual resource and the initial virtual resource;
the resource display module 1004 is configured to display a first virtual resource in the first resource display area and a second virtual resource in the second resource display area.
The specific implementation manner of the resource obtaining module 1003 and the resource displaying module 1004 may be referred to the description in the embodiment corresponding to fig. 7, and will not be described herein.
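For ease of understanding only, the resource arithmetic described above (additional virtual resources derived from the initial virtual resources and the additional virtual resource rate, then summed into the first virtual resource) can be sketched as follows; the rate value of 0.2 is an illustrative assumption, not a value given in the patent.

```python
def first_virtual_resource(initial_resource: int, additional_rate: float) -> int:
    """additional = initial * rate; the first virtual resource is their sum."""
    additional_resource = int(initial_resource * additional_rate)
    return initial_resource + additional_resource

print(first_virtual_resource(50, 0.2))  # 60: 50 initial + 10 additional
```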
In one embodiment, the N anchor users include an anchor user k_i.
The text presentation module 91 may include: a request response unit 911, a keyword acquisition unit 912, a table acquisition unit 913, and a text presentation unit 914.
A request response unit 911, configured to respond to the interaction request of the first participating user for the interaction control in the live voice virtual room and obtain the key label of the anchor user k_i;
A keyword acquisition unit 912, configured to acquire, from the configuration keyword set, the configuration keyword matched with the key label of the anchor user k_i as the target keyword;
a table obtaining unit 913 for obtaining the text mapping table; the text mapping table comprises mapping relations between configuration question texts and configuration keywords;
the text display unit 914 is configured to obtain a configuration question text having a mapping relationship with the target keyword in the text mapping table, determine the obtained configuration question text as a question text, and display the question text.
The specific implementation manners of the request response unit 911, the keyword acquiring unit 912, the table acquiring unit 913, and the text presenting unit 914 may be referred to the description of step S101 in the embodiment corresponding to fig. 4, and will not be repeated here.
In one embodiment, the keyword acquisition unit 912 may include: vector acquisition subunit 9121 and keyword determination subunit 9122.
A vector acquisition subunit 9121, configured to acquire the tag vector corresponding to the key label of the anchor user k_i;
vector obtaining subunit 9121 is further configured to obtain a word vector corresponding to each configuration keyword in the configuration keyword set, so as to obtain a word vector set;
Keyword determining subunit 9122, configured to determine the similarity between the tag vector and each word vector in the set of word vectors, so as to obtain a set of similarity;
the keyword determining subunit 9122 is further configured to determine, as the target similarity, a similarity obtained from the similarity set and greater than or equal to the similarity threshold, and determine, as the target keyword, a configuration keyword corresponding to the target similarity.
For a specific implementation manner of the vector obtaining subunit 9121 and the keyword determining subunit 9122, reference may be made to the description of step S101 in the embodiment corresponding to fig. 4, which will not be described herein.
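For ease of understanding only, the vector-similarity step above can be sketched as follows: embed the key label and each configuration keyword, then keep the keywords whose similarity reaches the threshold. The toy vectors and the 0.8 threshold are assumptions introduced here; a real system would use trained word vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def target_keywords(tag_vector, keyword_vectors: dict, threshold: float = 0.8):
    """Return the configuration keywords whose word vector is similar enough to the tag vector."""
    return [kw for kw, vec in keyword_vectors.items() if cosine(tag_vector, vec) >= threshold]

tag_vec = [0.9, 0.1, 0.3]
keyword_vecs = {"basketball": [0.8, 0.2, 0.3], "cooking": [0.1, 0.9, 0.2]}
print(target_keywords(tag_vec, keyword_vecs))  # ['basketball']
```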
In one embodiment, the first answer text is associated with a configuration keyword; n is a positive integer;
the live data processing apparatus 1 may further include: the matching degree determination module 1005 and the matching relation determination module 1006.
The matching degree determining module 1005 is configured to determine matching degrees between the configuration keywords and the key labels of the N anchor users, so as to obtain N matching degrees;
a matching relationship determining module 1006, configured to obtain a target anchor user from N anchor users according to N matching degrees;
the matching relationship determining module 1006 is further configured to determine that the first answer text matches a key tag of the target anchor user.
For a specific implementation manner of the matching degree determining module 1005 and the matching relation determining module 1006, reference may be made to the description of step S103 in the embodiment corresponding to fig. 4, which will not be repeated here.
In one embodiment, the matching relationship determination module 1005 may include: the first user determination unit 10051.
A first user determining unit 10051, configured to obtain a maximum matching degree from the N matching degrees;
the first user determining unit 10051 is further configured to determine the anchor user corresponding to the maximum matching degree as the target anchor user.
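For ease of understanding only, this embodiment amounts to taking the anchor user with the largest matching degree; the example identifiers and values below are assumptions for illustration.

```python
def pick_target_anchor(matching_degrees: dict) -> str:
    """Return the anchor user identifier with the largest matching degree."""
    return max(matching_degrees, key=matching_degrees.get)

print(pick_target_anchor({"anchor_A": 0.72, "anchor_B": 0.91, "anchor_C": 0.40}))  # anchor_B
```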
In one embodiment, the matching relationship determination module 1005 includes:
a second user determining unit 10052, configured to determine, as a candidate matching degree, a matching degree greater than or equal to a matching degree threshold, and determine, as a candidate anchor user, an anchor user corresponding to the candidate matching degree;
the second user determining unit 10052 is further configured to obtain candidate anchor user information corresponding to the candidate anchor user, and display the candidate anchor user information;
the second user determining unit 10052 is further configured to determine, in response to a selection operation of the first participant user for the anchor user candidate information, an anchor user corresponding to the selected anchor user candidate information as a target anchor user.
In one embodiment, the matching relationship determination module 1005 may include: the third user determination unit 10053.
A third user determining unit 10053, configured to determine, as a candidate matching degree, a matching degree greater than or equal to a matching degree threshold value of the N matching degrees, and determine, as a candidate anchor user, an anchor user corresponding to the candidate matching degree;
the third user determining unit 10053 is further configured to obtain the number of audience members of interest and the active media data value corresponding to the candidate anchor user, and determine a popularity value of the candidate anchor user according to the number of audience members of interest and the active media data value;
the third user determining unit 10053 is further configured to determine the candidate anchor user with the largest popularity value as the target anchor user.
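For ease of understanding only, the following sketch illustrates this embodiment. The patent does not give a concrete popularity formula, so the weighted sum below is purely an assumption; only the idea of combining the number of interested audience members with the active media data value, and choosing the candidate with the largest result, is taken from the text.

```python
def popularity_value(audience_count: int, media_activity: float,
                     w_audience: float = 0.6, w_media: float = 0.4) -> float:
    """Assumed popularity measure combining follower count and media activity."""
    return w_audience * audience_count + w_media * media_activity

candidates = {"anchor_A": (1200, 300.0), "anchor_B": (900, 800.0)}
target = max(candidates, key=lambda name: popularity_value(*candidates[name]))
print(target)  # anchor_B (0.6*900 + 0.4*800 = 860 > 840)
```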
Referring to fig. 12, the live data processing apparatus 1 may further include: information transmission module 1007, information reception module 1008, and step execution module 1009.
An information sending module 1007, configured to send first user information corresponding to a first participating user to a target anchor terminal corresponding to a target anchor user;
the information receiving module 1008 is configured to receive user confirmation information returned by the target anchor terminal based on the first user information; the user confirmation information comprises target participation users selected by target anchor users based on the first user information and the second user information; the second user information is user information corresponding to a second participating user sent by the second terminal to the target anchor terminal; the second answer text provided by the second participating user is matched with the key label of the target anchor user; the transmission time stamp of the first user information and the transmission time stamp of the second user information are in the same time range;
The step execution module 1009 is configured to execute, when the target participant included in the user confirmation information is the first participant, a step of displaying, together, the anchor identifier of the target anchor user and the user identifier of the first participant in a team area corresponding to the target anchor user in the live voice virtual room.
The specific implementation manner of the information sending module 1007, the information receiving module 1008, and the step executing module 1009 may refer to the description of step S103 in the embodiment corresponding to fig. 4, which will not be described herein.
In the embodiment of the application, after an audience user (for example, the first participating user) enters the live voice virtual room, the user can click the interaction control to apply for interaction with an anchor user. The terminal corresponding to the first participating user (for example, the first terminal) can respond to the triggering operation by displaying, to the first participating user, the question text associated with the anchor users, and the first participating user answers the question text. After acquiring the answer text (for example, the first answer text) provided by the first participating user, the first terminal can match it against the key labels of the anchor users, and after a successful match the anchor identification of the successfully matched target anchor user and the participation user identification of the first participating user can be jointly displayed in the team area of the target anchor user. It should be appreciated that displaying the participation user identification of the first participating user together with the anchor identification of the target anchor user allows the first participating user to interact with the target anchor user. It can be seen that, during live broadcast in the live voice virtual room, the key labels of the anchor users and the answer text of the audience user can be matched quickly. Because the key labels of an anchor user represent the preferences of the anchor user and the answer text of an audience user represents the preferences of the audience user, an anchor user who matches the preferences of the audience user can be found quickly based on this matching; that is, an audience user only needs to answer the question to determine whether there is a matching anchor user, so redundant data requests can be avoided and data traffic can be saved. In conclusion, the matching efficiency between audience users and anchor users can be improved, frequent data requests are avoided, and data traffic is saved.
Further, referring to fig. 13, fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 13, the apparatus 1 in the embodiment corresponding to fig. 12 may be applied to the computer device 1000, and the computer device 1000 may include: processor 1001, network interface 1004, and memory 1005, and in addition, the above-described computer device 1000 further includes: a user interface 1003, and at least one communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display (Display), a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface, among others. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one disk memory. The memory 1005 may also optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 13, an operating system, a network communication module, a user interface module, and a device control application program may be included in the memory 1005, which is one type of computer-readable storage medium.
In the computer device 1000 shown in FIG. 13, the network interface 1004 may provide network communication functions; while user interface 1003 is primarily used as an interface for providing input to a user; and the processor 1001 may be used to invoke a device control application stored in the memory 1005 to implement:
responding to an interaction request of a first participant user aiming at an interaction control in a voice live virtual room, and displaying a question text; the questioning text is associated with key labels of N anchor users in the voice live virtual room; n is a positive integer;
responding to an answer input operation of a first participant user aiming at a question text, and acquiring a first answer text provided by the first participant user;
when the first answer text matches the key label of the target anchor user, jointly displaying the anchor identification of the target anchor user and the participation user identification of the first participation user in the team forming area corresponding to the target anchor user in the voice live virtual room; the N anchor users include target anchor users.
It should be understood that the computer device 1000 described in the embodiment of the present application may perform the description of the live data processing method in the embodiment corresponding to fig. 4, and may also perform the description of the live data processing apparatus 1 in the embodiment corresponding to fig. 12, which is not repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that: the embodiment of the present application further provides a computer readable storage medium, where a computer program executed by the aforementioned computer device 1000 for live broadcast data processing is stored, where the computer program includes program instructions, when the processor executes the program instructions, the description of the live broadcast data processing method in the embodiment corresponding to fig. 4 can be executed, and therefore will not be repeated herein. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium according to the present application, please refer to the description of the method embodiments of the present application.
The computer readable storage medium may be the live data processing apparatus provided in any one of the foregoing embodiments or an internal storage unit of the foregoing computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (flash card) or the like, which are provided on the computer device. Further, the computer-readable storage medium may also include both internal storage units and external storage devices of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
In one aspect of the present application, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in an aspect of the embodiments of the present application.
The terms first, second and the like in the description and in the claims and drawings of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the term "include" and any variations thereof is intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the list of steps or modules but may, in the alternative, include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The methods and related devices provided in the embodiments of the present application are described with reference to the method flowcharts and/or structure diagrams provided in the embodiments of the present application, and each flow and/or block of the method flowcharts and/or structure diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable live data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable live data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or structural diagram block or blocks.
The foregoing disclosure is only illustrative of the preferred embodiments of the present application and is not intended to limit the scope of the claims herein, as the equivalent of the claims herein shall be construed to fall within the scope of the claims herein.

Claims (18)

1. A live data processing method, comprising:
the method comprises the steps that a first terminal responds to an interaction request of a first participant in a voice live virtual room aiming at an interaction control, and a question text is displayed; the question text is associated with key labels of N anchor users in the live voice virtual room; n is a positive integer;
responding to the answer input operation of the first participant user aiming at the questioning text, and acquiring a first answer text provided by the first participant user;
when the first answer text is matched with the key label of the target anchor user, displaying the anchor identification of the target anchor user and the participation user identification of the first participation user in a team area corresponding to the target anchor user in the voice live broadcast virtual room; the N anchor users comprise the target anchor users;
displaying a first interactive control in a first interactive display area of a team forming area corresponding to the target anchor user in a team competing period, and displaying a second interactive control in a second interactive display area of the team forming area corresponding to the rest anchor users; the rest anchor users are anchor users except the target anchor user in the matched anchor users contained in the voice live virtual room; the first interaction control is used for enabling a browsing user to interact with the target anchor user and the first participating user; the second interaction control is used for interaction between the browsing user and the rest of anchor users and interaction between the browsing user and the rest of participant users; the participant user identifications of the remaining participant users and the anchor identifications of the remaining anchor users are jointly displayed in a team area corresponding to the remaining anchor users; the rest participating users are the participating users except the first participating user among the participating users contained in the voice live virtual room; the browsing user is a user who views the voice live virtual room;
When the system time reaches the maximum time stamp of the leader election time period, and the first interactive behavior quantity displayed for the first interactive control in the first interactive display area is larger than the second interactive behavior quantity displayed for the second interactive control in the second interactive display area, displaying a leader display area in the live voice virtual room, displaying first user information of the first participant user in the leader display area, and leader prompt information for the first participant user.
2. The method of claim 1, wherein the question text comprises configuration answer text; the answer input operation includes a text selection operation;
the responding to the answer input operation of the first participating user for the question text, obtaining a first answer text provided by the first participating user, comprises the following steps:
responding to the text selection operation of the first participating user for the configuration answer text, and acquiring a target configuration answer text corresponding to the text selection operation;
and determining the target configuration answer text as the first answer text provided by the first participating user.
3. The method of claim 1, wherein the co-displaying the anchor identification of the target anchor user and the user identification of the first participant user in the group area corresponding to the target anchor user in the live-voice virtual room comprises:
acquiring a second identification display area adjacent to the first identification display area in a team formation area corresponding to the target anchor user; the first identifier display area displays the anchor identifier of the target anchor user;
and acquiring the participation user identification of the first participation user, and displaying the participation user identification of the first participation user in the second identification display area.
4. The method according to claim 1, wherein the method further comprises:
responding to triggering operation of a main broadcasting identification of the target main broadcasting user in a team forming area corresponding to the target main broadcasting user, and displaying an attribute information interface comprising user attribute information of the target main broadcasting user; the attribute information interface comprises a user description information area;
and displaying the key labels of the target anchor users in the user description information area.
5. The method of claim 1, wherein the displaying, in the lead display area, first user information for the first participant and lead prompt information for the first participant comprises:
determining a camping formed by the first participating user and the target anchor user as a collarband camping;
acquiring an additional virtual resource rate aiming at the collarband camping, and generating resource allocation prompt information according to the collarband camping and the additional virtual resource rate; the resource allocation prompt information is used for prompting that the camping allocated with the additional virtual resource rate is the camping of the collarband; the additional virtual resource rate is used for determining additional virtual resources allocated to the camping of the collarband;
and displaying the first user information, the collarband prompt information and the resource allocation prompt information in the collarband display area.
6. The method as recited in claim 1, further comprising:
when the system time reaches the maximum timestamp of the leader election time period, counting first interaction data of a first interaction user aiming at the first interaction control and second interaction data of a second interaction user aiming at the second interaction control; the first interaction data comprises interaction behaviors and first interaction user identification information; the second interaction data comprises the interaction behavior and second interaction user identification information; the browsing user comprises the first interactive user and the second interactive user;
Determining a first identification number of the first interactive user identification information and a second identification number of the second interactive user identification information;
taking the first identification number as the first interaction behavior number, and taking the second identification number as the second interaction behavior number;
displaying the first interactive behavior quantity in the first interactive display area, and displaying the second interactive behavior quantity in the second interactive display area.
7. The method of claim 5, wherein the method further comprises:
displaying a first resource sending control in a first resource display area of a team forming area corresponding to the target anchor user in a team forming period, and displaying a second resource sending control in a second resource display area of the team forming area corresponding to the rest anchor users; the first resource sending control is used for a browsing user to send virtual resources for the camping of the collarband; the second resource sending control is used for sending the virtual resource for the rest camping by the browsing user; the residual camping is formed by the residual anchor user and the residual participating user;
When the system time reaches the maximum time stamp of the camping competitive time period, and the first virtual resource displayed by the first resource sending control in the first resource display area is larger than the second virtual resource displayed by the second resource sending control in the second resource display area, displaying an optimal camping display area in the voice live broadcast virtual room, and displaying the anchor identification of the target anchor user and the participation user identification of the first participation user in the optimal camping display area.
8. The method of claim 7, wherein the method further comprises:
when the system time reaches the maximum time stamp of the camping competitive period, acquiring initial virtual resources of the camping of the collarband; the initial virtual resource is a virtual resource sent by the browsing user to the camping of the collarband; the maximum timestamp of the camping election time period is later than the maximum timestamp of the collarband election time period;
acquiring a second virtual resource of the residual camping; the second virtual resource is a virtual resource sent by the browsing user to the rest camps;
Determining the additional virtual resources of the camping team according to the additional virtual resource rate and the initial virtual resources;
determining the first virtual resource of the camping team according to the additional virtual resource and the initial virtual resource;
and displaying the first virtual resource in the first resource display area, and displaying the second virtual resource in the second resource display area.
9. The method of claim 1, wherein the N anchor users comprise an anchor user k_i;
The first terminal responds to an interaction request of a first participant user for an interaction control in a voice live virtual room, and shows a question text, and the method comprises the following steps:
the first terminal responds to the interaction request of the first participant user for the interaction control in the live voice virtual room to acquire a key label of the anchor user k_i;
acquiring, from a configuration keyword set, a configuration keyword matched with the key label of the anchor user k_i as a target keyword;
acquiring a text mapping table; the text mapping table comprises mapping relations between configuration question texts and configuration keywords;
and acquiring a configuration question text with the mapping relation with the target keyword in the text mapping table, determining the acquired configuration question text as the question text, and displaying the question text.
10. The method of claim 9, wherein the acquiring, from the configuration keyword set, the configuration keyword matched with the key label of the anchor user k_i as the target keyword comprises:
acquiring a label vector corresponding to the key label of the anchor user k_i;
acquiring word vectors corresponding to each configuration keyword in the configuration keyword set to obtain a word vector set;
determining the similarity between the tag vector and each word vector in the word vector set to obtain a similarity set;
and obtaining the similarity greater than or equal to a similarity threshold value from the similarity set, determining the similarity as target similarity, and determining the configuration keywords corresponding to the target similarity as the target keywords.
11. The method of claim 1, wherein the first answer text is associated with a configuration keyword; n is a positive integer;
the method further comprises the steps of:
determining the matching degree between the configuration keywords and the key labels of the N anchor users respectively to obtain N matching degrees;
and acquiring the target anchor user from the N anchor users according to the N matching degrees, and determining that the first answer text is matched with the key label of the target anchor user.
12. The method of claim 11, wherein the obtaining the target anchor user from the N anchor users according to the N matches comprises:
obtaining the maximum matching degree from the N matching degrees;
and determining the anchor user corresponding to the maximum matching degree as the target anchor user.
13. The method of claim 11, wherein the obtaining the target anchor user from the N anchor users according to the N matches comprises:
determining a matching degree which is larger than or equal to a matching degree threshold value in the N matching degrees as a candidate matching degree, and determining a anchor user corresponding to the candidate matching degree as a candidate anchor user;
acquiring candidate anchor user information corresponding to the candidate anchor user, and displaying the candidate anchor user information;
and responding to the anchor selection operation of the first participating user for the candidate anchor user information, and determining the anchor user corresponding to the selected candidate anchor user information as the target anchor user.
14. The method of claim 11, wherein the obtaining the target anchor user from the N anchor users according to the N matches comprises:
Determining a matching degree which is larger than or equal to a matching degree threshold value in the N matching degrees as a candidate matching degree, and determining a anchor user corresponding to the candidate matching degree as a candidate anchor user;
acquiring the number of concerned audience and the media data activity value corresponding to the candidate anchor user, and determining the popularity value of the candidate anchor user according to the number of concerned audience and the media data activity value;
and determining the candidate anchor user with the largest popularity value as the target anchor user.
15. The method according to claim 1, wherein the method further comprises:
the first terminal sends first user information corresponding to the first participating user to a target anchor terminal corresponding to the target anchor user;
receiving user confirmation information returned by the target anchor terminal based on the first user information; the user confirmation information comprises target participation users selected by the target anchor user based on the first user information and the second user information; the second user information is user information corresponding to a second participating user sent by a second terminal to the target anchor terminal; the second answer text provided by the second participating user is matched with the key label of the target anchor user; the transmission time stamp of the first user information and the transmission time stamp of the second user information are in the same time range;
And when the target participant user included in the user confirmation information is the first participant user, executing the step of displaying the anchor identification of the target anchor user and the user identification of the first participant user together in the team forming area corresponding to the target anchor user in the live voice virtual room.
16. A live data processing apparatus, comprising:
the text display module is used for responding to an interaction request of the first participant user for the interaction control in the voice live virtual room and displaying the question text; the question text is associated with key labels of N anchor users in the live voice virtual room; n is a positive integer;
the text acquisition module is used for responding to the answer input operation of the first participant user for the question text and acquiring a first answer text provided by the first participant user;
the identification display module is used for jointly displaying the anchor identification of the target anchor user and the participation user identification of the first participation user in a team area corresponding to the target anchor user in the voice live broadcast virtual room when the answer text is matched with the key label of the target anchor user; the N anchor users comprise the target anchor users;
The interactive control display module is used for displaying a first interactive control in a first interactive display area of a team forming area corresponding to the target anchor user and displaying a second interactive control in a second interactive display area of the team forming area corresponding to the rest anchor users in the team competing time period; the rest anchor users are anchor users except the target anchor user in the matched anchor users contained in the voice live virtual room; the first interaction control is used for enabling a browsing user to interact with the target anchor user and the first participating user; the second interaction control is used for interaction between the browsing user and the rest of anchor users and interaction between the browsing user and the rest of participant users; the participant user identifications of the remaining participant users and the anchor identifications of the remaining anchor users are jointly displayed in a team area corresponding to the remaining anchor users; the rest participating users are the participating users except the first participating user among the participating users contained in the voice live virtual room; the browsing user is a user who views the voice live virtual room;
and the collarband information display module is used for displaying a collarband display area in the voice live broadcast virtual room, displaying first user information of the first participant user in the collarband display area and collarband prompt information of the first participant user when the system time reaches the maximum time stamp of the collarband competitive time period and the number of first interaction behaviors displayed for the first interaction control in the first interaction display area is larger than the number of second interaction behaviors displayed for the second interaction control in the second interaction display area.
17. A computer device, comprising: a processor, a memory, and a network interface;
the processor is connected to the memory, the network interface for providing network communication functions, the memory for storing program code, the processor for invoking the program code to perform the method of any of claims 1-15.
18. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method of any of claims 1-15.
CN202110390618.3A 2021-04-12 2021-04-12 Live broadcast data processing method, device, equipment and readable storage medium Active CN113032542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110390618.3A CN113032542B (en) 2021-04-12 2021-04-12 Live broadcast data processing method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110390618.3A CN113032542B (en) 2021-04-12 2021-04-12 Live broadcast data processing method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113032542A CN113032542A (en) 2021-06-25
CN113032542B true CN113032542B (en) 2024-04-09

Family

ID=76456329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110390618.3A Active CN113032542B (en) 2021-04-12 2021-04-12 Live broadcast data processing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113032542B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453033B (en) * 2021-06-29 2023-01-20 广州方硅信息技术有限公司 Live broadcasting room information transmission processing method and device, equipment and medium thereof
CN113660155A (en) * 2021-07-30 2021-11-16 北京优酷科技有限公司 Special effect output method and device
CN114398135A (en) * 2022-01-14 2022-04-26 北京字跳网络技术有限公司 Interaction method, interaction device, electronic device, storage medium, and program product
CN114866795B (en) * 2022-04-28 2024-01-26 百果园技术(新加坡)有限公司 Live broadcast room data processing method and device and live broadcast platform

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854031A (en) * 2012-11-28 2014-06-11 伊姆西公司 Method and device for identifying image content
CN108519991A (en) * 2018-02-28 2018-09-11 北京奇艺世纪科技有限公司 A kind of method and apparatus of main broadcaster's account recommendation
CN112073738A (en) * 2020-08-11 2020-12-11 北京城市网邻信息技术有限公司 Information processing method and device
CN112291632A (en) * 2020-11-04 2021-01-29 腾讯科技(深圳)有限公司 Live broadcast interaction method and device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN113032542A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN113032542B (en) Live broadcast data processing method, device, equipment and readable storage medium
CN108184144B (en) Live broadcast method and device, storage medium and electronic equipment
CN112291632B (en) Live broadcast interaction method and device, electronic equipment and computer readable storage medium
US10834479B2 (en) Interaction method based on multimedia programs and terminal device
CN107801101B (en) System and method for optimized and efficient interactive experience
US9066144B2 (en) Interactive remote participation in live entertainment
EP4203478A1 (en) Multi-user live streaming method and apparatus, terminal, server, and storage medium
US20080146342A1 (en) Live hosted online multiplayer game
CN113766340B (en) Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN104219237A (en) Multimedia data processing method and system based on team speech communication platform
CN108171160B (en) Task result identification method and device, storage medium and electronic equipment
CN114501104B (en) Interaction method, device, equipment, storage medium and product based on live video
CN110366023B (en) Live broadcast interaction method, device, medium and electronic equipment
CN112203153A (en) Live broadcast interaction method, device, equipment and readable storage medium
CN114666671B (en) Live broadcast praise interaction method, device, equipment and storage medium
CN113824983B (en) Data matching method, device, equipment and computer readable storage medium
CN113438492B (en) Method, system, computer device and storage medium for generating title in live broadcast
US20230356082A1 (en) Method and apparatus for displaying event pop-ups, device, medium and program product
KR20130053218A (en) Method for providing interactive video contents
CN110417728B (en) Online interaction method, device, medium and electronic equipment
CN111836068A (en) Live broadcast interaction method and device, server and storage medium
CN114760531B (en) Team interaction method, device, system, equipment and storage medium for live broadcasting room
CN116567283A (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN114339436B (en) Live broadcasting room game interaction method and device, electronic equipment and storage medium
CN114007095A (en) Voice microphone-connecting interaction method, system, medium and computer equipment for live broadcast room

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046398

Country of ref document: HK

GR01 Patent grant