CN111870935B - Business data processing method and device, computer equipment and storage medium - Google Patents

Business data processing method and device, computer equipment and storage medium

Info

Publication number
CN111870935B
CN111870935B (application CN202010512719.9A)
Authority
CN
China
Prior art keywords
user
interactive
interaction
main
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010512719.9A
Other languages
Chinese (zh)
Other versions
CN111870935A (en)
Inventor
周书晖
许显杨
朱灿锋
袁智
刘立强
林诗钦
胡文灿
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210467805.1A (CN114797094A)
Priority to CN202010512719.9A (CN111870935B)
Publication of CN111870935A
Application granted
Publication of CN111870935B


Classifications

    • A: HUMAN NECESSITIES
        • A63: SPORTS; GAMES; AMUSEMENTS
            • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
                • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
                    • A63F 13/30: Interconnection arrangements between game servers and game devices; between game devices; between game servers
                        • A63F 13/35: Details of game servers
                    • A63F 13/50: Controlling the output signals based on the game progress
                        • A63F 13/52: involving aspects of the displayed game scene
                    • A63F 13/70: Game security or game management aspects
                        • A63F 13/79: involving player-related data, e.g. identities, accounts, preferences or play histories
                            • A63F 13/795: for finding other players; for building a team; for providing a buddy list
                • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
                    • A63F 2300/30: characterized by output arrangements for receiving control signals generated by the game device
                        • A63F 2300/308: Details of the user interface
                    • A63F 2300/50: characterized by details of game servers
                        • A63F 2300/53: details of basic data processing
                            • A63F 2300/537: for exchanging game data using a messaging service, e.g. e-mail, SMS, MMS
                        • A63F 2300/55: Details of game data or player data management
                            • A63F 2300/5546: using player registration data, e.g. identification, account, preferences, game history
                                • A63F 2300/5566: by matching opponents or finding partners to build a team, e.g. by skill level, geographical area, background, play style

Abstract

An embodiment of this application provides a service data processing method, apparatus, computer device, and storage medium. The method includes: in response to a trigger operation on a target service in an instant messaging application, displaying a service display page matching the target service; displaying at least two interactive users in an auxiliary area of the service display page; in response to a confirmation operation on the auxiliary area, determining a main interactive user from the at least two interactive users; and displaying the main interactive user in a main area of the service display page, where the main area is used to display the main interactive user's interaction behavior data in the target service. This embodiment enriches the interaction modes available in an instant messaging application.

Description

Business data processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for processing service data, a computer device, and a storage medium.
Background
With the continuing development of multimedia technology and the emergence of many kinds of social application software, more and more users turn to social applications for entertainment, for example to communicate and interact within the application.
In existing instant messaging applications, multiple users interact through a conversation window, usually by text, and each user can only post text within a fixed area of that window. The interaction mode in such applications is therefore too limited.
Disclosure of Invention
The embodiments of this application provide a service data processing method, a service data processing apparatus, a computer device, and a storage medium, which can enrich the interaction modes in an instant messaging application.
One aspect of this application provides a service data processing method, including:
in response to a trigger operation on a target service in an instant messaging application, displaying a service display page matching the target service;
displaying at least two interactive users in an auxiliary area of the service display page;
in response to a confirmation operation on the auxiliary area, determining a main interactive user from the at least two interactive users;
displaying the main interactive user in a main area of the service display page, where the main area is used to display the main interactive user's interaction behavior data in the target service.
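The four client-side steps above can be sketched as simple page state transitions. This is an illustrative model only; the class and method names are hypothetical and do not come from the patent.

```python
# Hypothetical sketch of the client-side flow; names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServiceDisplayPage:
    auxiliary_area: List[str] = field(default_factory=list)  # interactive users
    main_area: Optional[str] = None                          # main interactive user

    def show_interactive_users(self, users: List[str]) -> None:
        # Step 2: display at least two interactive users in the auxiliary area.
        if len(users) < 2:
            raise ValueError("the target service needs at least two interactive users")
        self.auxiliary_area = list(users)

    def confirm_main_user(self, user: str) -> None:
        # Steps 3-4: a confirmation operation on the auxiliary area selects
        # the main interactive user, who is then shown in the main area.
        if user not in self.auxiliary_area:
            raise ValueError(f"{user} is not in the auxiliary area")
        self.main_area = user

page = ServiceDisplayPage()  # step 1: page shown for the target service
page.show_interactive_users(["member1", "member2", "member3"])
page.confirm_main_user("member1")
print(page.main_area)  # member1
```

In a real client the auxiliary area would hold live video pictures rather than user names, but the state transitions are the same.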
Another aspect of this application provides a service data processing method, including:
acquiring the multimedia data corresponding to each of at least two interactive users in a target service;
sending the multimedia data corresponding to the at least two interactive users to a terminal device, so that the terminal device displays it in an auxiliary area of a service display page;
determining a main interactive user from the at least two interactive users;
sending a region replacement instruction to the terminal device, instructing it to display the main interactive user in a main area of the service display page according to that instruction, where the main area is used to display the main interactive user's interaction behavior data in the target service.
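The server-side counterpart can be sketched as a broadcast loop. The JSON message shapes and the first-to-join selection rule are assumptions for illustration; the patent does not prescribe a wire format.

```python
# Illustrative server-side sketch; message shapes are assumptions.
import json

def broadcast(clients, message):
    # Send the same JSON message to every connected terminal device.
    payload = json.dumps(message)
    for client in clients:
        client.append(payload)  # stand-in for a real socket send

def run_round(clients, media_by_user):
    # 1. Push each user's multimedia data for display in the auxiliary area.
    broadcast(clients, {"type": "auxiliary_area", "media": media_by_user})
    # 2. Pick the main interactive user (here: first to join, as in Fig. 2).
    main_user = next(iter(media_by_user))
    # 3. Region replacement instruction: move that user to the main area.
    broadcast(clients, {"type": "region_replace", "main_user": main_user})
    return main_user
```

Each terminal device receiving the `region_replace` message would then perform the client-side area switch described above.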
In one embodiment, the task content includes image task content, and acquiring the main interactive user's interaction behavior data for the task content and generating an interaction result for that content includes:
acquiring the main interactive user's interaction behavior data for the image task content, and performing expression recognition on the interactive video data it contains to obtain an expression recognition result;
when the expression recognition result matches the image task content, determining the interaction result for the task content to be a success result;
when the expression recognition result does not match the image task content, determining the interaction result to be a failure result.
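A minimal sketch of this expression-matching branch follows. The recognizer is stubbed out since the patent does not specify a recognition model, and the function names are hypothetical.

```python
# Hedged sketch of the expression-matching step; recognizer is a stub.
def recognize_expression(video_frames):
    # Placeholder for a real facial-expression classifier operating on the
    # interactive video data contained in the interaction behavior data.
    return "smile"

def judge_image_task(video_frames, image_task_label):
    # Interaction succeeds only if the recognized expression matches the
    # image task content; otherwise an interaction failure result is returned.
    result = recognize_expression(video_frames)
    return "success" if result == image_task_label else "failure"
```

A production system would replace `recognize_expression` with an actual model and compare against the label carried by the image task content.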
In another embodiment, the main interactive users include a first interactive user and a second interactive user, both of whom belong to the at least two interactive users, and acquiring the main interactive users' interaction behavior data for the task content and generating an interaction result includes:
acquiring the interaction behavior data of the first and second interactive users for the task content;
acquiring third audio data from the first interactive user's interaction behavior data, and performing speech recognition on it to obtain a third converted text;
acquiring fourth audio data from the second interactive user's interaction behavior data, and performing speech recognition on it to obtain a fourth converted text;
when the third converted text matches the answer information for the task content and the third audio data was captured earlier than the fourth audio data, determining the interaction result to be a success result for the first interactive user;
when the fourth converted text matches the answer information and the fourth audio data was captured earlier than the third audio data, determining the interaction result to be a success result for the second interactive user;
when neither converted text matches the answer information, determining the interaction result to be a failure result.
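The answer-race logic above reduces to filtering attempts whose converted text matches the answer information and taking the earliest capture time. A hedged sketch, assuming speech recognition has already produced the converted texts:

```python
# Illustrative sketch of the answer-race judgment; ASR is assumed done.
def judge_answer_race(attempts, answer):
    """attempts: list of (user, capture_time, converted_text) tuples."""
    # Keep only attempts whose converted text matches the answer information.
    matching = [a for a in attempts if a[2] == answer]
    if not matching:
        # Neither converted text matches: interaction failure result.
        return ("failure", None)
    # Earliest capture time wins: success result for that interactive user.
    winner = min(matching, key=lambda a: a[1])
    return ("success", winner[0])
```

With real audio, `capture_time` would come from the acquisition timestamps the terminal devices attach to the uploaded audio data.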
In another embodiment, the service display page includes an operation area, and the method further includes:
obtaining voting information for the interaction behavior data sent by the devices of the auxiliary interactive users, where the voting information is generated by those devices in response to vote-trigger operations in the operation area, and the auxiliary interactive users are the interactive users other than the main interactive user among the at least two interactive users;
counting the votes for the interaction behavior data according to the voting information, generating an interaction result from the vote count, and sending that result to the terminal device so that it displays the prompt information associated with the interaction result in the service display page.
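The vote-counting step can be sketched as follows. The pass threshold is an assumption for illustration, since the patent only states that an interaction result is generated from the vote count.

```python
# Minimal sketch of the voting step; pass_threshold is an assumption.
from collections import Counter

def tally_votes(vote_messages, pass_threshold=3):
    # Each message carries the auxiliary user who voted and their choice.
    counts = Counter(msg["choice"] for msg in vote_messages)
    up_votes = counts.get("up", 0)
    # Generate an interaction result from the vote count.
    result = "success" if up_votes >= pass_threshold else "failure"
    return up_votes, result
```

The returned result would then be sent to the terminal device, which displays the associated prompt information in the service display page.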
One aspect of this application provides a service data processing apparatus, including:
a first display module, configured to display, in response to a trigger operation on a target service in an instant messaging application, a service display page matching the target service;
a second display module, configured to display at least two interactive users in an auxiliary area of the service display page;
a first determining module, configured to determine a main interactive user from the at least two interactive users in response to a confirmation operation on the auxiliary area;
a third display module, configured to display the main interactive user in a main area of the service display page, where the main area is used to display the main interactive user's interaction behavior data in the target service.
Another aspect of this application provides a service data processing apparatus, including:
an acquisition module, configured to acquire the multimedia data corresponding to each of at least two interactive users in a target service;
a first sending module, configured to send that multimedia data to a terminal device so that the terminal device displays it in an auxiliary area of a service display page;
a second determining module, configured to determine a main interactive user from the at least two interactive users;
a second sending module, configured to send a region replacement instruction to the terminal device, instructing it to display the main interactive user in a main area of the service display page according to that instruction, where the main area is used to display the main interactive user's interaction behavior data in the target service.
One aspect of the embodiments of this application provides a computer device including a memory and a processor, where the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the methods above.
One aspect of the embodiments of this application provides a computer-readable storage medium storing a computer program with program instructions that, when executed by a processor, perform the steps of the methods above.
In the embodiments of this application, the service display page for a target service in an instant messaging application may include a main area and an auxiliary area. The auxiliary area displays the interactive users participating in the target service, and the main area displays the interaction behavior data of the selected user (the main interactive user). Once the main interactive user is determined for the target service, that user is displayed in the main area, where they can interact with the other users; the interaction modes in the instant messaging application are thereby enriched.
Drawings
To illustrate the embodiments of this application or the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described here are obviously only some embodiments of this application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a diagram of a network architecture provided by an embodiment of the present application;
fig. 2 is a schematic view of a service data processing scenario provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a service data processing method according to an embodiment of the present application;
fig. 4 is a schematic interface diagram for inviting a user to join a target service according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a service data processing method according to an embodiment of the present application;
fig. 6a to fig. 6c are schematic diagrams of an interface for processing service data according to an embodiment of the present application;
FIG. 7 is a display diagram of an interface of a video interaction provided by an embodiment of the present application;
fig. 8 is a schematic flowchart of a service data processing method according to an embodiment of the present application;
fig. 9a to 9d are schematic views of a service page provided in an embodiment of the present application;
fig. 10a and fig. 10b are schematic diagrams of an interface of service data processing provided by an embodiment of the present application;
fig. 11 is a schematic flowchart of a service data processing method according to an embodiment of the present application;
fig. 12a and 12b are schematic diagrams of an interface of service data processing provided by an embodiment of the present application;
FIG. 13 is an interface display diagram of an emotive interaction provided by an embodiment of the present application;
fig. 14 is a timing diagram of a service data processing method according to an embodiment of the present application;
FIG. 15 is a schematic flow chart illustrating selection of a primary interactive user according to an embodiment of the present application;
FIG. 16 is a schematic flow chart illustrating the generation of interaction results according to an embodiment of the present disclosure;
fig. 17a and 17b are schematic diagrams of a service processing framework provided in an embodiment of the present application;
fig. 18 is a schematic structural diagram of a service data processing apparatus according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of a service data processing apparatus according to an embodiment of the present application;
FIG. 20 is a schematic structural diagram of a computer device according to an embodiment of the present application;
fig. 21 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of this application, not all of them; all other embodiments derived from them by a person skilled in the art without creative effort fall within the protection scope of this application.
Referring to fig. 1, fig. 1 is a diagram of a network architecture according to an embodiment of this application. The network architecture includes a server 10d and a plurality of terminal devices (as shown in fig. 1, terminal devices 10a, 10b, 10c, and so on). Each terminal device may have an instant messaging application installed, the server 10d may be the background server for that application, and each terminal device may exchange data with the server 10d through the application's client.
In a service interaction scenario of the instant messaging application, after a target service starts, the terminal device of each participating user may collect user data (such as voice data and video data) and upload it to the server 10d. The server 10d then distributes the received user data to every terminal device, so that each device obtains the user data of every user in the target service and can output it in the service display page for that service. All users in the target service may be displayed in an auxiliary area of the service display page; the page may also include a main area used to display the interaction behavior data of the main interactive user. The main interactive user is a user selected from all participating users according to a user operation or a background instruction, and once determined may be displayed in the main area.
It can be understood that the service data processing scheme proposed in this application may be implemented by a system composed of a terminal device and a server, or by a computer program (including program code) in a computer device, for example as application software. The client of that application software may display the video frame of each user in a service and may also collect the user's behavior data (at least one of audio data and video data) and upload it to the software's backend server, which processes the behavior data and sends the result back to the client.
The terminal device 10a, the terminal device 10b, the terminal device 10c, and the like may include a mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), a wearable device (e.g., a smart watch, a smart bracelet, and the like), and other devices having a video function.
Referring to fig. 2, fig. 2 is a schematic view of a service data processing scenario provided in an embodiment of this application, described here using terminal device 10a as an example. As shown in fig. 2, in a game interaction scenario of the instant messaging application, a target user may open the application in terminal device 10a and select a game entry to reach the game home page 20a. The home page 20a displays icons for the games the application offers, such as icons for game 1, game 2, and game 3, and the target user may select an icon of interest to enter the corresponding game. When the target user selects the icon for game 2, terminal device 10a may respond to the selection trigger operation (such as a single click or double click) by creating a game room 20b for the target user in game 2 and displaying, in the session page of game room 20b: the room number 123456 of game room 20b, the description information 20c for game 2 (such as the game name and rules), and the target user's video picture. At this point the game room 20b contains only the target user, i.e. the target user is its first member, and terminal device 10a may display the target user's video picture in the first position (the position for member 1) of the user area 20d.
Optionally, the game room 20b may include an exit control, a video control, a volume control, a microphone control, and so on. If the target user selects the exit control, terminal device 10a may remove the target user from the game room 20b and close the room; the video control turns the camera in terminal device 10a on or off, the volume control adjusts the output volume in game 2, and the microphone control turns audio capture in terminal device 10a on or off.
It is understood that, as an interactive game, game 2 limits the number of members: it can start only when the number of members in game room 20b reaches a preset number (e.g. 8), so the target user needs to invite more friends to join. The game room 20b also includes an "invite" control; by triggering it, the target user makes the terminal device pull the target user's buddy list from the instant messaging application and display it in the game room 20b. The target user may then select the friends to invite, and terminal device 10a sends invitation information to those friends in response to the selection.
If a friend of the target user accepts the invitation and joins the game room 20b, terminal device 10a may display that friend's video picture in the user area 20d. The video pictures displayed in the user area 20d all belong to members who have successfully joined the game room 20b, and their order is determined by join time: the earlier a member joins, the earlier their picture appears. If user A joins as the second member after the target user, user A's video picture is shown in the second position (the position for member 2), and so on for each subsequent member. Note that the video pictures displayed in the user area 20d may be generated by a server (such as the server 10d in the embodiment of fig. 1) from video data uploaded by each member's terminal device; that is, each member's device collects that member's video data and uploads it to the server.
When the number of members in the user area 20d reaches the preset number, the target user may click the "start game" control to enter the game display page matching game 2, which shows the video pictures of all members in the game room 20b together with information such as the game task content. Terminal device 10a may respond to the click by determining the game type of game 2 from its description information 20c, and then display the game presentation page matching that type in the instant messaging application.
When game 2 is a turn-taking performance type (for example games such as "you perform, I guess", "you sing, I guess", or "you draw, I guess"), the game display page shown after the game starts may include a main area 20e and an auxiliary area 20f. The video pictures of all members of game room 20b may be displayed in the auxiliary area 20f, such as those of members 1 through 8; terminal device 10a can select a performing user (also called the main interactive user) from the members, such as member 1, and switch that member's video picture from the auxiliary area 20f to the main area 20e. The order in which performing users are selected can follow the order of the video pictures in the auxiliary area 20f.
When game 2 is a lap type (for example games such as "truth or dare"), the game display page 20k shown after the game starts may include an operation area and an auxiliary area (the parts of page 20k outside the operation area). The auxiliary area may be divided into 8 display areas according to the number of members in game room 20b, each showing one member's video picture. The operation area may include a "start" control; when it is triggered, the terminal device polls across the 8 members in the auxiliary area and takes the member indicated when the polling stops, such as member 4, as the performing user. That member's display area may be highlighted, and the highlighted area is then called the main area of the game display page.
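The polling traversal that picks a performing user in lap-type games can be sketched as a modular walk over the member areas. The stopping rule shown here (a random step count) is an assumption; the patent only says the member indicated when the traversal stops is selected.

```python
# Hypothetical sketch of the roulette-style polling traversal; the
# random stopping rule is an assumption, not from the patent.
import random

def spin_for_performer(members, steps=None):
    # Traverse the member display areas in order and stop after `steps`
    # moves; the member landed on becomes the performing user.
    if steps is None:
        steps = random.randrange(len(members), 4 * len(members))
    return members[steps % len(members)]

members = [f"member{i}" for i in range(1, 9)]  # 8 areas, as in game room 20b
print(spin_for_performer(members, steps=11))   # member4
```

With `steps=11` the traversal wraps once around the 8 areas and stops on member 4, matching the example in the text.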
When the game type of the game 2 is a group confrontation type (a game such as "quiz game"), the terminal device 10a may divide 8 members in the game room 20B into 2 teams such as team a, which may include the member 1, the member 2, the member 3, and the member 4, and team B, which may include the member 5, the member 6, the member 7, and the member 8. The game display page displayed after the game starts can comprise a main area 20g, an auxiliary area 20h, a main area 20i and an auxiliary area 20j, wherein the auxiliary area 20h can be used for displaying video pictures of members in team A, and the auxiliary area 20j can be used for displaying video pictures of members in team B; when the terminal device 10a selects the member 1 from the team a as the performance user (which may also be called as the answering user) and the member 5 from the team B as the performance user, the video picture of the member 1 may be switched and displayed from the auxiliary area 20h to the main area 20g, the video picture of the member 5 may be switched and displayed from the auxiliary area 20j to the main area 20i, for the topic shown in the game, the member 1 and the member 5 may perform the answering, the member who succeeds in answering may continue answering, the member who fails in answering returns to the auxiliary area, and the member is reselected from the team as the performance user.
In the game interaction scene of the instant messaging application, different game types can display different game display pages; that is, the layout of the members' video pictures can be adaptively adjusted according to the game type, thereby enriching the display modes of the video pictures.
Referring to fig. 3, fig. 3 is a schematic flowchart of a service data processing method according to an embodiment of the present application. As shown in fig. 3, the service data processing method may include the following steps:
and step S101, responding to the trigger operation aiming at the target service in the instant messaging application, and displaying a service display page matched with the target service.
Specifically, in a service interaction scenario of the instant messaging application, the target user may create a service session (such as the game room 20b in the embodiment corresponding to fig. 2) associated with the target service in the instant messaging application and invite friends in the instant messaging application to join the service session; when the number of users included in the service session reaches a preset number, the target user may start the target service, so that the target user and the friends in the service session can perform service interaction. Therefore, before responding to the trigger operation for the target service in the instant messaging application, the terminal device may respond to a session creation operation in the instant messaging application, create a service session corresponding to the target service for the target user in the instant messaging application, and display the service description information corresponding to the target service (such as the description information 20c in the embodiment corresponding to fig. 2) and the target user in the service session, where the target user, as the creator of the service session, may join the service session corresponding to the target service as the first interactive user. The terminal device may refer to a user terminal held by the target user, and the target service may be an interactive game using video chat, or a service using a video chat function such as a video conference; for example, games such as "you draw me guess", "you play me guess", "you sing me guess", "true heart talk", "big adventure", and "question answering PK", or conferences of a turn-by-turn speech type, a group discussion type, a multiparty debate type, and the like.
The target user can also invite friends in the instant messaging application to join the service session corresponding to the target service, when the target user triggers the invitation function control in the service session, the terminal device can respond to the invitation operation in the service session and send invitation information corresponding to the target service to candidate users selected by the invitation operation, and the candidate users can be represented as friends selected by the target user from a friend list of the instant messaging application; after receiving the invitation information corresponding to the target service, the candidate user can join the service session corresponding to the target service through the invitation information, when the candidate user confirms to join the service session through the invitation information, the terminal device can detect that the server returns a confirmation participation request of the invitation information, join the candidate user as an interactive user into the service session corresponding to the target service, and display the candidate user in the service session, and the target user and the candidate user at the moment can be determined as the interactive user in the target service.
Optionally, the target user may also enter a service navigation page from a group chat page of the instant messaging application, where the service navigation page may include a service that a member in the group has joined, the target user may select a target service from the services displayed on the service navigation page, trigger the terminal device to create a service session associated with the target service in the instant messaging application, and send invitation information corresponding to the service session to the group, and the member in the group may join the service session created by the target user by clicking the invitation information, that is, when the terminal device receives a confirmation participation request of the invitation information returned from the background of the instant messaging application, the group member associated with the confirmation participation request may join the service session. It should be noted that, in this embodiment of the present application, it is assumed that an interactive user corresponding to a current terminal device is a target user, and the target user may also be a local terminal user.
Referring to fig. 4, fig. 4 is a schematic diagram of an interface for inviting a user to join a target service according to an embodiment of the present application. Taking the "you play me guess" game as the target service, the process of inviting users to join the target service is explained in detail. As shown in fig. 4, user A may open an instant messaging application in the terminal device 10a, and select a game entry to enter the game home page 30a in the instant messaging application; icons of the games included in the instant messaging application may be displayed in the game home page 30a, such as an icon of the "you draw me guess" game, an icon of the "you play me guess" game, an icon of the "you sing me guess" game, an icon of the "true heart talk" game, and an icon of the "big adventure" game. User A may select a game to be played from the game home page 30a and click the corresponding icon to create a game room (user A is the target user in this case); for example, after user A clicks the icon of the "you play me guess" game, the terminal device 10a may respond to the click operation and create a game room 30b for user A.
In the game room 30b, description information 30c of the "you play me guess" game (for example: each person shows the title in turn, and language and body-motion descriptions can be used in the process, but no words containing the answer may be mentioned) and a user area 30d can be displayed. The user area 30d may be configured to display the members in the game room 30b (e.g., at least one of the video pictures, head portraits, nicknames, and the like of the users), and the ranking of a member in the user area 30d is associated with the time when that member joined the game room 30b: the earlier a member joins the game room 30b, the earlier the member's display position in the user area 30d. Since the game room 30b at this time contains only one member, user A, the first display position in the user area 30d may display user A.
User A may trigger an invitation function control in the game room 30b, and the terminal device 10a, in response to an invitation operation directed to the invitation function control, pulls the buddy list of user A from the instant messaging application and displays it in the buddy selection page 30e. It should be appreciated that the buddy list of user A may be displayed directly in a certain area of the game room 30b, displayed separately in the game room 30b, or shown by jumping from the game room 30b to the buddy selection page 30e.
User A may select friends from the friend list to join the game room 30b; if user A selects user B, user C, user D, user E, user F, user G, and user H from the friend list and clicks "OK", the terminal device 10a may send invitation information for the "you play me guess" game to user B, user C, user D, user E, user F, user G, and user H, respectively. If users B, C, D, E, F, G, and H all agree to join the game room 30b, each of them can send a participation confirmation request to the terminal device 10a through the invitation information, and after the terminal device 10a detects the participation confirmation requests, users B, C, D, E, F, G, and H can be displayed in the user area 30d.
Further, when the number of interactive users in the service session reaches a preset number (for example, 8, or 4, etc.), the start function control in the service session may be switched from an inactive state to an active state, and the target user may trigger the start function control to start the target service. When the target user triggers the start function control, the terminal device may respond to the trigger operation for the start function control and display a service presentation page matched with the target service. The region division mode in the service display page is related to the service type of the target service, and different service types can have different page display forms. For example, when the service type is a turn-by-turn show type, the service display page may include a main area and an auxiliary area, such as when the target service is a "you draw me guess", "you play me guess", or "you sing me guess" game, or a turn-by-turn speech type conference. When the service type is a sit-around type, the service display page can comprise an operation area and an auxiliary area, and the auxiliary area can be divided into a plurality of unit areas, each of which can display the video picture of one interactive user; that is, the number of unit areas is the same as the number of interactive users in the target service, such as when the target service is a "true heart talk" or "big adventure" game, or a group discussion type conference. When the service type is a group confrontation type, the service display page can comprise at least two main areas and at least two auxiliary areas, the number of auxiliary areas being the same as the number of teams, such as a "question answering PK" game or a multi-party debate type conference. For specific description of the service presentation pages corresponding to different service types, reference may be made to the embodiment corresponding to fig. 2, which is not described herein again.
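The mapping from service type to region division described above can be sketched as a small configuration table. This is an illustrative sketch; the type keys and field names are assumptions, not identifiers from this application:

```python
# Region division per service type (illustrative keys and fields).
LAYOUTS = {
    "turn_by_turn_show":   {"main_areas": 1, "operation_area": False},
    "sit_around":          {"main_areas": 0, "operation_area": True},
    "group_confrontation": {"main_areas": 2, "operation_area": False},
}

def build_layout(service_type, user_count, team_count=2):
    cfg = LAYOUTS[service_type]
    layout = {"main_areas": cfg["main_areas"],
              "operation_area": cfg["operation_area"]}
    if service_type == "sit_around":
        # One unit area per interactive user in the single auxiliary area.
        layout["auxiliary_areas"] = 1
        layout["auxiliary_units"] = user_count
    elif service_type == "group_confrontation":
        # One auxiliary area per team.
        layout["auxiliary_areas"] = team_count
    else:
        layout["auxiliary_areas"] = 1
    return layout
```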
Step S102, at least two interactive users are displayed in the auxiliary area of the service display page.
Specifically, after the target service is started, the terminal device may acquire all users included in the target service, and display all users in an auxiliary area of the service display page, where the users displayed in the auxiliary area may be displayed in the form of a real-time video picture, a head portrait, a nickname, and the like. In the embodiments of the present application, a real-time video image is taken as an example for specific description, that is, the interactive users are all displayed in the auxiliary area in the form of a video image.
After a local terminal user (which can be understood as a holder of the terminal device) joins a target service, a camera in the terminal device can be started, so that the terminal device can acquire video frame data of the local terminal user in real time by using the camera, and further can render the acquired video frame data of the local terminal user by using an open graphics library (such as OpenGL), generate a video picture corresponding to the local terminal user, and display the video picture corresponding to the local terminal user in an auxiliary area of a service display page; the terminal device may further send the collected video frame data associated with the local end user to a server (e.g., the server 10d in the embodiment corresponding to fig. 1), so that the server sends the video frame data of the local end user to user terminals corresponding to the remaining interactive users (i.e., the remaining interactive users except the local end user) of the at least two interactive users. Similarly, the user terminals corresponding to the other interactive users can also adopt the camera to collect the video frame data of the corresponding interactive users, and send the collected video frame data to the server, and then the server can send the video frame data of the other interactive users to the local terminal equipment. After receiving the video frame data corresponding to the other interactive users, the terminal device may render the received video frame data of the other interactive users by using the open graphics library, generate video pictures corresponding to the other interactive users, and display the video pictures corresponding to the other interactive users in the auxiliary area of the service display page.
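The capture-render-display path for the auxiliary area can be sketched as follows, with a stand-in for the open graphics library render step (all class and function names here are illustrative assumptions, not identifiers from this application):

```python
from dataclasses import dataclass

@dataclass
class VideoFrame:
    user_id: str
    data: bytes  # raw frame data captured from the camera

class Renderer:
    """Stand-in for the open graphics library (e.g. OpenGL) render step."""
    def render(self, frame: VideoFrame) -> str:
        return f"picture<{frame.user_id}>"

def display_auxiliary_area(local_frame, remote_frames, renderer):
    """Render the local user's frame plus the frames relayed by the
    server for the remaining interactive users, and return the video
    pictures to be shown in the auxiliary area."""
    pictures = [renderer.render(local_frame)]
    pictures += [renderer.render(f) for f in remote_frames]
    return pictures
```

In the actual scheme the remote frames arrive from the server, which relays each user terminal's captured frame data to all other participants.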
Optionally, the terminal device may send the collected video frame data of the local terminal user to the server, and the user terminals corresponding to the other interactive users may also send the collected video frame data to the server, and the server may render the received video frame data to generate video pictures corresponding to the local terminal device and the other interactive users, respectively, that is, the server may receive the video frame data corresponding to all the interactive users in the target service, and generate the video pictures corresponding to each interactive user in the target service. The server can send the video pictures corresponding to each interactive user to the current terminal equipment, and the terminal equipment can separately display the video pictures corresponding to each interactive user in the auxiliary area of the service display page after receiving the video pictures corresponding to each interactive user. In summary, the video frames displayed in the auxiliary area of the service display page may be obtained by rendering video frame data respectively corresponding to each interactive user by using an open graphics library through the terminal device, or may be obtained by rendering video frame data respectively corresponding to each interactive user through the server.
Optionally, in the target service, the terminal device may further obtain the network status of the current device in real time and perform traffic management based on it. When the network status is good, a higher-quality video picture may be displayed in the auxiliary area (for example, by increasing the definition and resolution of the video picture), so that the local terminal user can more clearly see the behavior of the other interactive users; when the network status is poor, a reduced-quality video picture may be displayed in the auxiliary area (for example, by reducing the definition and resolution of the video picture), so that the video in the service display page plays more smoothly and stalling is prevented.
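This traffic management amounts to mapping the measured network state to picture quality settings. A minimal sketch; the concrete bandwidth thresholds, resolutions, and frame rates below are illustrative assumptions, not values from this application:

```python
def pick_stream_quality(bandwidth_kbps):
    """Map a measured network state to video picture quality settings."""
    if bandwidth_kbps >= 2000:
        # Good network: raise definition and resolution.
        return {"resolution": "720p", "fps": 30}
    if bandwidth_kbps >= 800:
        return {"resolution": "480p", "fps": 24}
    # Poor network: lower definition and resolution to keep video smooth.
    return {"resolution": "240p", "fps": 15}
```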
And step S103, responding to the confirmation operation aiming at the auxiliary area, and determining a main interactive user from at least two interactive users.
Specifically, after the target service is started, the terminal device may select a main interactive user from at least two interactive users of the target service in response to the confirmation operation for the auxiliary area. The confirmation operation may be a confirmation behavior operation associated with the target user detected by the terminal device, or a confirmation instruction sent by the server to the terminal device, which is not specifically limited herein.
Optionally, after the target service is started, the server may detect that the target service is in a start state, and may further issue a confirmation instruction to the terminal device to instruct the terminal device to select the main interactive user from the at least two interactive users displayed in the auxiliary area. Optionally, the service presentation page corresponding to the target service may include an operation control, and the local terminal user may trigger the terminal device to select the main interactive user from the auxiliary area through the operation control in the service presentation page.
Step S104, displaying the main interactive user to a main area in a service display page; the main area is used for displaying the interactive behavior data of the main interactive user in the target service.
Specifically, after the terminal device determines the main interactive user from the auxiliary area, the main interactive user can be pulled to the main area of the service display page for display. The main area and the auxiliary area in the service display page may or may not overlap. When the main area and the auxiliary area overlap, the main area is a part of the auxiliary area: in the game display page 20k in the embodiment corresponding to fig. 2, the display area of the selected member 4 (i.e., the main interactive user) may be referred to as the main area, and the remaining areas except the operation area in the game display page 20k may be referred to as the auxiliary area; when the next selected member is member 5, the main area is updated from the display area of member 4 to the display area of member 5, that is, the display position of the main area in the service display page may be unfixed. When the main area and the auxiliary area do not overlap, they are two independent areas in the service presentation page, such as the main area 20e and the auxiliary area 20f in the embodiment corresponding to fig. 2; in this case the main area may have a fixed display position in the service presentation page, and when the performer (i.e., the main interactive user) is replaced in the target service, the display position of the main area remains unchanged.
It can be understood that, after the terminal device displays the main interactive user to the main area, the main interactive user may be continuously displayed in the auxiliary area, or the main interactive user may not be displayed in the auxiliary area temporarily. For example, the terminal device may display the video image of the main interactive user to the main area, and continue to display in the form of an avatar in the auxiliary area; or after the terminal equipment displays the video picture of the main interactive user to the main area, the main interactive user is not displayed in the auxiliary area temporarily.
The main area in the service display page may be used to display interactive behavior data of the main interactive user in the target service, where the interactive behavior data may refer to at least one of video data and audio data, such as body movement and voice of the main interactive user. The task content in the target service can be displayed in the service display page, the main interactive user can interact with other interactive users aiming at the task content, namely, the terminal equipment can output interactive behavior data of the main interactive user aiming at the task content in the main area, and the interactive behavior data can also be called interactive content.
Optionally, in the interaction process of the target service, the terminal device may obtain real-time progress information corresponding to the target service in real time through a heartbeat mechanism, obtain display progress information corresponding to the service display page, and update the content of the service display page according to the real-time progress information when the real-time progress information is inconsistent with the display progress information. The heartbeat mechanism means that the terminal device can send a request data packet to the server at regular intervals, the server can return a response data packet to the terminal device after receiving the request data packet sent by the terminal device, and the response data packet can include real-time progress information corresponding to the target service. It should be understood that, the data type and the data structure of the heartbeat data packet, and information such as a data result that should be returned in the response data packet may be preset in the heartbeat mechanism, for example, the heartbeat data packet in the embodiment of the present application is used to obtain real-time progress information of the target service from the server, that is, the response data packet returned by the server may include the real-time progress information of the target service.
The real-time progress information and the display progress information can each be represented by a progress seq (sequence) value, i.e., a number that increases according to a certain rule. The real-time progress information refers to the real-time progress of the target service, and the display progress information refers to the display progress of the service display page corresponding to the local terminal user. Due to reasons such as time delay, the display progress of the service display page may lag behind the real-time progress of the target service, in which case the terminal device needs to update the content of the service display page according to the real-time progress. For example, if the real-time progress seq value of the target service is 8 (indicating that the second main interactive user in the target service is performing interaction in the main area) while the display progress seq value of the service display page is 7 (indicating that the video picture of the first main interactive user is still shown in the main area of the service display page), it may be determined that the real-time progress information of the target service is inconsistent with the display progress information of the service display page, and the terminal device may pull the video picture of the second main interactive user and display it in the main area.
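The heartbeat-based progress synchronization can be sketched as a polling loop that compares the two seq values and updates the page when they differ (the callback names, interval, and round count are illustrative assumptions, not from this application):

```python
import time

def heartbeat_sync(get_server_seq, get_display_seq, update_page,
                   interval_s=1.0, rounds=3, sleep=time.sleep):
    """Periodically request the real-time progress seq from the server
    and update the service display page when the page lags behind."""
    for _ in range(rounds):
        real_seq = get_server_seq()        # from the response data packet
        if real_seq != get_display_seq():  # page lags the real progress
            update_page(real_seq)
        sleep(interval_s)
```

The injectable `sleep` parameter is only there so the loop can be exercised without real delays; a production client would run this on a timer for the lifetime of the service.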
After the target service is finished, the terminal device may detect that the target service is in a service-finished state. The terminal device may then obtain the interaction behavior data corresponding to the main interactive user, count the interaction success results respectively corresponding to each interactive user in the target service (such as the number of successful answers given by each interactive user in a quiz game), generate a result ranking table according to the interaction success results, and display the interaction behavior data and the result ranking table in a service result page; the service result page may further include a sharing function control. For example, when the target service is a quiz game, after each game ends, the terminal device may obtain the answer video recorded while the local terminal user acted as the main interactive user during the game, and count the score of each interactive user in that game (such as the number of correctly answered questions); the scores corresponding to each interactive user can then be sorted to obtain a result ranking table, and the result ranking table and the answer video are displayed in the result page of the quiz game. By triggering the sharing function control, the local terminal user can make the terminal device share the result ranking table and the answer video displayed in the result page to an information publishing platform or a friend terminal in the instant messaging application.
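Generating the result ranking table amounts to sorting the counted interaction success results in descending order. A minimal sketch (the user names and scores are illustrative):

```python
def build_result_ranking(scores):
    """Sort interactive users by their interaction success results,
    descending, to produce the result ranking table for the result page."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# e.g. number of correctly answered questions per interactive user
ranking = build_result_ranking({"user 1": 3, "user 2": 5, "user 3": 1})
# ranking[0] is the top scorer: ("user 2", 5)
```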
In the embodiment of the application, a service display page for a target service in an instant messaging application may include a main area and an auxiliary area, the auxiliary area may be used to display an interactive user participating in the target service, the main area may be used to display interaction behavior data of a selected user (i.e., a main interactive user), and after the main interactive user is determined from the target service, the main interactive user may be displayed to the main area, that is, the main interactive user may interact with other users in the main area, so that an interaction manner in the instant messaging application may be enriched.
Referring to fig. 5, fig. 5 is a schematic flowchart of a service data processing method according to an embodiment of the present application. As shown in fig. 5, the service data processing method may include the following steps:
step S201, responding to a trigger operation for a target service in an instant messaging application, displaying a service display page matched with the target service, and displaying at least two interactive users in an auxiliary area of the service display page.
Specifically, in a service interaction scene of the instant messaging application, the local terminal user may start a target service in the instant messaging application. When the local terminal user selects the target service and starts it, the terminal device may respond to the trigger operation for the target service in the instant messaging application, obtain description information of the target service (such as the name of the target service and the service execution rule), determine the service type of the target service according to the description information, and display a service presentation page matched with the service type; different service types may correspond to different service presentation pages. In the embodiment of the present application, the processing procedure of a target service is specifically described by taking the turn-by-turn show type as an example of the service type.
It can be understood that the target service is an interactive service, and the target service may have a limitation on the number of members, and when the number of users participating in the target service reaches a minimum limit, the local terminal user may start the target service to achieve the purpose of interacting with other users in the target service, so that the local terminal user needs to invite other users to participate in the target service before starting the target service, and a process of inviting a user to join in the target service may refer to step S101 in the embodiment corresponding to fig. 3, which is not described herein again.
When the service type of the target service is the turn-by-turn show type, the service presentation page may include an auxiliary region and a main region (such as the auxiliary region 20f and the main region 20e in the embodiment corresponding to fig. 2). After the target service is started, the terminal device may display at least two interactive users in the auxiliary area of the service display page, where the at least two interactive users may be displayed in the form of video pictures. The auxiliary area can have a fixed display position in the service display page, and all video pictures displayed in the auxiliary area can have the same display size. The display position of an interactive user's video picture in the auxiliary area can be determined by the time when the interactive user joined the target service: the earlier an interactive user joins, the earlier its display position in the auxiliary area. For example, if the local terminal user is the first member of the target service, i.e., the first interactive user to join the target service, the video picture of the local terminal user can be displayed at the first position in the auxiliary area. Of course, the display position of an interactive user's video picture in the auxiliary area may also be random, which is not specifically limited here. The main area is used only for displaying the interactive behavior data of the main interactive user (which can be understood as an interactive user selected from the at least two interactive users) in the target service; before a main interactive user is selected, no interactive user's video picture is displayed in the main area of the service display page. The main area can also have a fixed display position in the service display page, and the main area and the auxiliary area can be two non-overlapping independent areas in the service display page.
Step S202, responding to the confirmation operation aiming at the auxiliary area, determining a main interactive user from at least two interactive users, and displaying the main interactive user to a main area in the service display page.
Specifically, for the video pictures respectively corresponding to the at least two interactive users displayed in the auxiliary area, the terminal device may respond to the confirmation operation for the auxiliary area, and select a first interactive user in the auxiliary area as the main interactive user from the at least two interactive users. The confirmation operation may be a confirmation instruction sent by the server to the terminal device, for example, after the target service is started, the server may detect that the state of the target service is switched from an un-started state to a started state, and further may send a confirmation instruction to the terminal device, and after the terminal device receives the confirmation instruction sent by the server, the terminal device may determine the main interactive user from at least two interactive users of the target service according to the confirmation instruction. It can be understood that at least two interactive users in the target service may be sequentially selected as the main interactive users according to the display order in the auxiliary area, for example, a first interactive user in the auxiliary area may be used as a first main interactive user in the target service, and after the first interactive user completes the interaction in the target service, a second interactive user in the auxiliary area may be used as a next main interactive user, and so on.
After the main interactive user is determined from the at least two interactive users, the terminal device can switch and display the video picture of the main interactive user in the auxiliary area as the head portrait corresponding to the main interactive user, and switch and display the video picture of the main interactive user from the auxiliary area to the main area in the service display page, wherein the size of the video picture displayed in the main area is larger than that of the video picture displayed in the auxiliary area, namely, both the main area and the auxiliary area in the service display page can display the main interactive user at the moment. It is understood that when the first interactive user displayed in the auxiliary area is selected as the main interactive user and the video picture of the main interactive user is displayed to the main area, the main interactive user can still be displayed in the auxiliary area in the form of an avatar, and the display position remains unchanged.
Optionally, the display positions of the at least two interactive users of the target service in the auxiliary area may be continuously updated. For example, before the main interactive user is selected, the display positions of the at least two interactive users in the auxiliary area may be determined by the time when each interactive user joined the target service. After the terminal device selects the first interactive user as the main interactive user and switches the video picture of the main interactive user from the auxiliary area to the main area, the main interactive user may temporarily not be displayed in the auxiliary area, and the interactive users located behind the main interactive user in the auxiliary area move forward in order; when the main interactive user finishes the interaction and returns to the auxiliary area, the video picture of the main interactive user can be displayed at the tail of the auxiliary area, and so on, thereby updating the display positions of the at least two interactive users in the auxiliary area. For example, if the target service includes 4 interactive users, namely user 1, user 2, user 3, and user 4, then before the main interactive user is selected, the display order of the 4 users in the auxiliary area is: user 1-user 2-user 3-user 4. When user 1 is selected as the main interactive user, the video picture of user 1 can be switched from the auxiliary area to the main area, and the display order of the users in the auxiliary area becomes: user 2-user 3-user 4. After user 1's interaction is completed, user 1 can exit the main area and return to the auxiliary area, and the display order in the auxiliary area is then: user 2-user 3-user 4-user 1.
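This queue-like update of display positions in the auxiliary area can be sketched with a double-ended queue (an illustrative sketch, not the application's implementation):

```python
from collections import deque

# Auxiliary area ordered by join time.
aux = deque(["user 1", "user 2", "user 3", "user 4"])

# User 1 is selected as the main interactive user: the video picture
# leaves the auxiliary area, and the users behind move forward in order.
main_user = aux.popleft()  # "user 1"

# When the main interactive user finishes interacting, the video
# picture returns to the tail of the auxiliary area.
aux.append(main_user)
# aux is now: user 2 - user 3 - user 4 - user 1
```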
Step S203, outputting the interactive behavior data of the main interactive user for the task content in the target service, and the feedback behavior data of the auxiliary interactive user for the interactive behavior data.
Specifically, if the main interactive user is the local terminal user, the terminal device may obtain the task content for the local terminal user in the target service and display the task content in the main area; the terminal device can collect, in real time, the interactive behavior data of the local terminal user for the task content and output it in the target service. Meanwhile, the auxiliary interactive users in the target service can give feedback on the interactive behavior data of the local terminal user; the user terminal corresponding to an auxiliary interactive user can likewise collect, in real time, the feedback behavior data of that auxiliary interactive user for the interactive behavior data and send it to the server, and the server can forward the received feedback behavior data to the local terminal device, which can then output the feedback behavior data of the auxiliary interactive user in the target service. An auxiliary interactive user is any interactive user other than the main interactive user among the at least two interactive users in the target service; the task content may be a performance topic for the main interactive user in the target service, and may be text content, image content, audio content, or the like; the interactive behavior data may be at least one of video frame data (such as a picture of the local terminal user's body movements for the task content) and audio data (such as the local terminal user's voice description of the task content); and the feedback behavior data may be audio data (such as an answer to the task content given by an auxiliary interactive user according to the interactive behavior data).
For example, if the target service is a game of "you play and guess me" and the task content is "baby is angry", the local terminal user can perform the task content using expressions, body language, language prompts (without speaking the task content itself), and the like; the auxiliary interactive users can see the expressions, body language, language prompts and the like of the local terminal user in the main area, and guess according to this information, speaking their guesses aloud directly. At this time, the expressions, body language, language prompts and other information of the local terminal user for the task content are the interactive behavior data, and the voice of an auxiliary interactive user when guessing is the feedback behavior data.
And step S204, sending the acquired interactive behavior data to a server, and receiving an interactive result determined by the server according to the interactive behavior data and the feedback behavior data.
Specifically, the terminal device can send the collected interaction behavior data for the task content to the server, the server can receive the interaction behavior data for the task content of the local terminal user and the feedback behavior data for the interaction behavior data of the auxiliary interaction user, and according to the interaction behavior data and the feedback behavior data, the interaction result corresponding to the task content can be obtained, and the finally obtained interaction result is sent to the terminal device. When the interaction behavior data contains the task content, the server can determine that the interaction result corresponding to the task content is a replacement task, namely the task content is invalidated and new task content needs to be acquired; when the feedback behavior data corresponding to the auxiliary interactive user is the same as the task content, the server can determine that the interactive result corresponding to the task content is an interactive success result, namely the auxiliary interactive user already provides a correct result of the task content; when the feedback behavior data of the auxiliary interactive user is not the same as the task content in the target service, the server can determine that the interactive result corresponding to the task content is an interactive failure result, that is, the auxiliary interactive user does not give a correct result of the task content within a specified time.
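The server-side decision among the three interaction results can be sketched as follows. This is a hypothetical simplification: speech is assumed to be already converted to text, and exact string matching stands in for the real matching logic of the embodiment.

```python
def judge_interaction(task_content: str, performer_text: str,
                      feedback_texts: list, time_expired: bool) -> str:
    # Replacement task: the main interactive user said the task content.
    if task_content in performer_text:
        return "replace_task"
    # Interaction success: an auxiliary interactive user gave the answer.
    if any(task_content == text for text in feedback_texts):
        return "interaction_success"
    # Interaction failure: no correct answer within the specified time.
    if time_expired:
        return "interaction_failure"
    return "pending"
```

The function names and the "pending" state are assumptions; the embodiment only specifies the three result branches.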
As in the foregoing example, if the local terminal user carelessly says the phrase "baby is angry", the interaction result corresponding to the task content is a replacement task; if an auxiliary interactive user in the target service says the phrase "baby is angry", the interaction result corresponding to the task content is an interaction success result, that is, the auxiliary interactive user has answered successfully; if no auxiliary interactive user in the target service says the phrase "baby is angry", the interaction result corresponding to the task content is an interaction failure result, that is, none of the auxiliary interactive users in the target service answered successfully.
Step S205, displaying the prompt information associated with the interaction result in the service display page, and updating the task content in the target service according to the prompt information.
When the terminal device receives the interaction result returned by the server, the prompt information associated with the interaction result can be displayed in the service display page, and the task content displayed in the main area can be updated. If the interaction result received by the terminal device is an interaction success result, it indicates that an auxiliary interactive user gave the correct answer to the task content within the specified time (each task content may have a time limit, and an answer only counts as successful if the correct answer is given within that limit); the prompt information associated with the interaction success result can be displayed in the service display page, and may include a cheering effect, the correct answer corresponding to the task content, user information of the auxiliary interactive user who said the correct answer, and the like; the task content can then be updated, and interaction with the auxiliary interactive users continues in the main area. If the interaction result received by the terminal device is an interaction failure result, it indicates that no auxiliary interactive user in the target service gave the correct answer to the task content within the specified time; the prompt information associated with the interaction failure result can be displayed in the service display page, and may include the correct answer corresponding to the task content, that is, when no auxiliary interactive user gives the correct answer, the terminal device may present it; the task content can then be updated, and interaction with the auxiliary interactive users continues in the main area.
If the interaction result received by the terminal device is a replacement task, it indicates that the main interactive user accidentally said the correct answer to the task content during the interaction; the prompt information associated with the replacement task can be displayed in the service display page, for example "xx accidentally said the answer, skipping to the next question"; the task content can then be updated, and interaction with the auxiliary interactive users continues in the main area.
It can be understood that the at least two interactive users included in the target service can take turns being selected as the main interactive user, and the interaction time of each main interactive user can be preset, for example, 120s per main interactive user. When the time for which the local terminal user has interacted in the main area reaches 120s, the next main interactive user can be selected from the auxiliary area, and the video picture of the next main interactive user is switched from the auxiliary area to the main area. When all of the at least two interactive users in the target service have been selected as the main interactive user, the prompt message "service ended" can be displayed in the service display page, that is, the target service is then in a service-ended state.
Optionally, after the target service is started, the terminal device may start a timer (e.g., a Timer), use the timer to check for updates to the service data in the target service, and drive the service display page to display different scenes and page information, where the service data may include information such as the task content, function control icons, prompt information, and cheering animations in the target service. For example, when the time for which the local terminal user has interacted in the main area reaches 120s, the video picture of the local terminal user needs to be switched from the main area to the auxiliary area, and the terminal device, driven by the timer, obtains the video picture of the next main interactive user and displays it in the main area; if, after the video picture of the local terminal user is switched to the auxiliary area, the terminal device does not obtain the interactive behavior data of the next main interactive user within the specified time, this indicates that the data acquisition failed, and a network abnormality of the local terminal user can be prompted in the service display page.
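The timer-driven update check can be sketched as a polling loop. The class below is an illustrative assumption; the embodiment only requires that a timer periodically checks the service data and drives the page when it changes.

```python
class ServiceDataWatcher:
    """Compares the latest service data against the last seen snapshot on
    each timer tick and fires a callback when the data has changed."""
    def __init__(self, poll_fn, on_update):
        self.poll_fn = poll_fn      # fetches the current service data
        self.on_update = on_update  # drives the service display page
        self.last_seen = None

    def tick(self):
        # Called by the timer on every interval.
        data = self.poll_fn()
        if data != self.last_seen:
            self.last_seen = data
            self.on_update(data)

updates = []
snapshots = iter([{"task": "topic 1"}, {"task": "topic 1"}, {"task": "topic 2"}])
watcher = ServiceDataWatcher(lambda: next(snapshots), updates.append)
for _ in range(3):
    watcher.tick()
# updates -> [{"task": "topic 1"}, {"task": "topic 2"}]
```

In a real client the `tick` method would be scheduled on a recurring timer rather than called in a loop.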
Optionally, in the target service, when the local terminal user interacts in the main area as the main interactive user, the terminal device may record the interactive behavior data output by the local terminal user in the target service as video, read the Frame Buffer Object (FBO) content in a shared-memory manner during the recording, convert the prompt information corresponding to each interaction success result during the local terminal user's interaction into a bitmap texture, and fuse the interactive behavior data with the bitmap texture to obtain an interactive video associated with the local terminal user. After the target service ends, the interactive video recorded during the local terminal user's interaction can be obtained, a result ranking list associated with the at least two interactive users in the target service can be obtained, and the interactive video and the result ranking list can be displayed in a service result page, which can also be shared with friends in the instant messaging application.
Please refer to fig. 6a to 6c together; fig. 6a to 6c are schematic diagrams of an interface for processing service data according to an embodiment of the present application. Taking the target service being a game of "you play and guess me" as an example, the data processing process in a target service of the alternate-showing type is described in detail. As shown in fig. 6a, when user A creates a game room (also referred to as a service session) in the instant messaging application of terminal device 40a (user A may be referred to as the local terminal user corresponding to terminal device 40a), and invites user B, user C, user D, user E, user F, user G, and user H, who successfully join the game room, user A may click the "start game" function control in the game room, so that terminal device 40a can respond to the trigger operation for the "start game" function control and display a service display page matching the "you play and guess me" game in the instant messaging application, which may include a main area 40b and an auxiliary area 40c; terminal device 40a may obtain the video picture corresponding to each user in the game room and display it in the auxiliary area 40c. When user A clicks the "start game" function control, the background server of the instant messaging application can detect that the "you play and guess me" game has switched from the not-started state to the started state, and can send a confirmation instruction to the terminal device to instruct terminal device 40a to select user A from the auxiliary area 40c in sequence as the main interactive user, and switch the display of the video picture of user A from the auxiliary area 40c to the main area 40b.
At this time, the video picture of user A can be displayed in the main area 40b, and the avatar 40g of user A can be displayed at the display position of user A in the auxiliary area 40c. It can be understood that the process of selecting user A from all users as the main interactive user and switching the display of the video picture of user A to the main area 40b is completed by terminal device 40a in a very short time.
After the video picture of user A is pulled into the main area 40b, the game may enter a countdown (e.g., a 5-second countdown), and the prompt information 40d (e.g., "A performs first, guess after the 5-second countdown") and countdown numbers such as 5, 4, 3, 2, 1 are displayed in the main area 40b of terminal device 40a. During the countdown, user A may first preview his or her video picture in the main area 40b of terminal device 40a and get ready before performing. After the countdown ends, a topic 40f (which may also be referred to as task content) for user A may be displayed in the main area 40b of terminal device 40a; the topic 40f may be "smile-to-life". Each main interactive user may have an interaction time (also referred to as a performance time) of 120s, and each topic may also be time-limited, for example, an answer time of 15 seconds per topic, within which the users in the game other than user A (also referred to as auxiliary interactive users) need to answer; the remaining performance time of the main interactive user may therefore also be displayed in the main area 40b.
It should be understood that the topic content in the game is visible only to the main interactive user; the remaining users cannot see the topic content. That is, the main area 40b corresponding to user A can display the topic 40f, while the main areas corresponding to the other users do not display the topic 40f. As shown in fig. 6b, the service display pages of the other users also include a main area 40b and an auxiliary area 40c; when user A is selected as the first main interactive user in the game and the game enters the countdown, the avatar 40g of user A may be displayed at the display position of user A in the auxiliary area 40c of the remaining users, but only the prompt information 40d (e.g., "A performs first, guess after the 5-second countdown") and countdown numbers such as 5, 4, 3, 2, 1 are displayed in their main area 40b, and the video picture of user A is not displayed for the time being. After the countdown ends, the video picture of user A and the performance time of user A may be displayed in the main area 40b.
As shown in fig. 6a, performance data (i.e., interactive behavior data) of user A for the topic 40f may be output in the main area 40b of terminal device 40a, and the other users may guess during the performance of user A. The guessing voices of the other users can be recognized in real time using speech recognition technology to detect whether they contain a keyword of the topic 40f. If it is detected that another user hits a keyword of the topic 40f within the specified time, the guess is successful, and prompt information 40j indicating the successful guess may be displayed in the service display page, where the prompt information 40j may include: the correct answer of the topic 40f, the avatar or video picture of user F who guessed successfully, a cheering effect, and other information; the user who guesses successfully can obtain a score, for example, 100 points for one successful guess. If it is detected that no other user hits a keyword of the topic 40f within the specified time, the guess fails; prompt information 40k for the failed guess can be displayed in the service display page, and the correct answer of the topic 40f is given in the prompt information 40k.
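The keyword-hit check on the recognized guesses can be sketched as follows. The function names are assumptions, and plain strings stand in for the output of the real speech recognizer.

```python
def guess_hits_topic(transcript: str, keywords: list) -> bool:
    # A guess succeeds if the recognized speech contains any topic keyword.
    return any(keyword in transcript for keyword in keywords)

def score_guess(current_score: int, hit: bool, points: int = 100) -> int:
    # e.g. 100 points for one successful guess, as in the example above.
    return current_score + points if hit else current_score
```

A real system would run this check against each recognized utterance until the topic's time limit expires.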
Terminal device 40a may also collect the voice data of user A and send it to the background server; the background server may recognize the voice of user A in real time, and if user A accidentally says the correct answer to the topic 40f, the prompt information 40h may be displayed in the main area 40b (for example, "you accidentally said the answer, jumping to the next topic") to notify user A to jump to the next topic 40i (for example, "gazelle"), and the other users may continue to guess the topic 40i according to the performance of user A.
When the performance of user A ends, that is, when the performance time of user A reaches 120s, the video picture of user A can be pulled back into the auxiliary area 40c for display, user B is then selected from the auxiliary area 40c as the next main interactive user, the video picture of user B is switched from the auxiliary area 40c to the main area 40b, the avatar 40m of user B is displayed at the position of user B in the auxiliary area 40c, the performance data of user B can be output in the main area 40b of terminal device 40a, and user A can now guess according to the performance data of user B.
In the game, the users in the auxiliary area 40c may be selected in turn as the main interactive user. As shown in fig. 6c, after the last user in the auxiliary area 40c finishes performing, a prompt message 40n (e.g., "game over") may be displayed in the main area 40b, and the page jumps from the service display page to the service result page 40p, in which a personal highlight clip video 40q (i.e., the above-mentioned interactive video, the highlight moments of each user while performing) and a game ranking list 40r (i.e., a user list ranked according to each user's score, such as first place user A with a score of 12000, second place user E with a score of 900, third place user D with a score of 800, etc.) may be presented; user A may download the clip video 40q to terminal device 40a locally. The service result page 40p further includes a "share" function control, and user A can trigger the "share" function control in the service result page 40p, so that terminal device 40a shares the service result page 40p with a friend in the form of an H5 dynamic web page.
Optionally, in a game of "you play and guess me", the topics given to the main interactive user may be various expression package pictures; the main interactive user may imitate the expression represented by the expression package picture in the topic, and each expression package picture may correspond to an expression description text, for example, the expression description text corresponding to a certain expression package picture is "smile". The other users can guess according to the performance of the main interactive user, and the background server can recognize the voices of the other users in real time; if it is detected that another user hits the expression description text corresponding to the expression package picture within the specified time, the guess is successful, prompt information for the successful guess can be displayed in the service display page, and the user who guessed successfully can obtain a score; if it is detected that no other user hits the expression description text corresponding to the expression package picture within the specified time, the guess fails, prompt information for the failed guess can be displayed in the service display page, and the expression description text corresponding to the expression package picture in the topic can be given.
Optionally, in a target service of the alternate-showing type, the topics given to the main interactive user can be various expression package pictures; the main interactive user can imitate the expression represented by the expression package picture in the topic, and the background server can perform real-time expression recognition on the video picture of the main interactive user. If it is detected that the video picture of the main interactive user matches the expression package picture in the topic, the main interactive user has performed successfully and can obtain a score; if it is detected within the specified time that the video picture of the main interactive user does not match the expression package picture in the topic, the main interactive user has failed to perform and cannot obtain a score.
Optionally, based on the service display page corresponding to the alternate-showing type, an interactive video game can be played: all interactive users in the interactive video game vote by voice on the video being played, and the plot direction of the video can be determined according to the number of votes cast by the interactive users on the played video. Please refer to fig. 7; fig. 7 is a schematic diagram of an interface for video interaction according to an embodiment of the present disclosure. As shown in fig. 7, when user A (who may be referred to as the local terminal user corresponding to terminal device 50a), user B, user C, user D, user E, user F, user G, and user H are included in the game room, user A may start an interactive video game in the instant messaging application of terminal device 50a; terminal device 50a may respond to the trigger operation of user A and display a service display page associated with the interactive video game in the instant messaging application, where the service display page may include a main area and an auxiliary area 50b, and the auxiliary area 50b may be used to display the video pictures corresponding to user A, user B, user C, user D, user E, user F, user G, and user H, respectively. A video 50c to be played may be obtained and played in the main area; when the video 50c is played to a scenario selection node, a plurality of scenario options may be displayed in the area 50d of the service display page, for example option A: use a super power; option B: call a friend for help; option C: get injured.
The area 50d may belong to the main area. The scenario selection nodes may be preset; for example, with a duration of 5 minutes as the scenario selection period, the 5-minute point may be used as one scenario selection node and the 10-minute point as another, that is, when the video 50c has played for 5 minutes, playback of the video 50c may be paused and a plurality of scenario options displayed in the area 50d of the service display page, where different scenario options lead to different subsequent plots of the video 50c.
The users shown in the auxiliary area 50b may discuss together and vote by voice, and the scenario option with the largest number of votes is selected as the subsequent scenario direction of the video 50c. For example, user F, user E, user H, and user C in the auxiliary area 50b all select scenario option A as the subsequent scenario direction of the video 50c, user A and user D both select scenario option B, and user B and user G both select scenario option C. The scenario option with the highest vote count (scenario option A has 4 votes, scenario option B has 2 votes, and scenario option C has 2 votes) is taken as the subsequent scenario direction of the video 50c, that is, the video 50c continues to be played in the main area according to the development in scenario option A. Discussing the video scenario together by video chat can enhance the users' sense of immersion.
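The vote tally can be sketched directly from the example above (illustrative structure and names; in the embodiment the votes come from real-time voice recognition rather than a prepared mapping):

```python
from collections import Counter

def winning_option(votes: dict) -> str:
    # votes maps each interactive user to the scenario option they chose;
    # the option with the most votes decides the subsequent scenario.
    tally = Counter(votes.values())
    return tally.most_common(1)[0][0]

votes = {"user F": "A", "user E": "A", "user H": "A", "user C": "A",
         "user A": "B", "user D": "B", "user B": "C", "user G": "C"}
# scenario option A wins with 4 votes
```

Note that `Counter.most_common` breaks ties by insertion order, so a real implementation would need an explicit tie-breaking rule.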
In the embodiment of the present application, a service display page for a target service in an instant messaging application may include a main area and an auxiliary area; the auxiliary area may be used to display the interactive users participating in the target service, and the main area may be used to display the interactive behavior data of the selected user (i.e., the main interactive user). After the main interactive user is determined in the target service, the main interactive user may be displayed in the main area, that is, the main interactive user may interact with the other users in the main area, thereby enriching the interaction modes in the instant messaging application. After the main interactive user is selected from the interactive users, the video picture of the main interactive user can be switched from the auxiliary area to the main area, and the size of the video picture displayed in the main area is larger than that of the video picture displayed in the auxiliary area, so that the interactive users in the target service can quickly locate the video picture of the main interactive user and watch the interactive content output in it; this can reduce the probability of requiring the main interactive user to provide the same interactive content again, and thus save data traffic.
Referring to fig. 8, fig. 8 is a schematic flowchart of a service data processing method according to an embodiment of the present application. As shown in fig. 8, the service data processing method may include the following steps:
Step S301, responding to the trigger operation for the target service in the instant messaging application, and displaying a service display page matched with the target service.
Specifically, in the embodiment of the present application, the service type is taken to be the sit-around type, and the processing procedure of the target service is described in detail.
When the service type of the target service is the sit-around type, the service display page may include an operation area and an auxiliary area; the operation area may be used to display the operation controls in the target service, and the auxiliary area may be used to display the video pictures corresponding to the at least two interactive users in the target service. The operation area and the auxiliary area are two independent, non-overlapping areas in the service display page, and the display sizes of the operation area and the auxiliary area in the service display page are associated with the number of interactive users in the target service.
Step S302, determining the display size corresponding to the auxiliary area in the service display page according to the total number of the users of the at least two interactive users, dividing the auxiliary area into at least two unit auxiliary areas according to the total number of the users and the display size corresponding to the auxiliary area, and respectively displaying the at least two interactive users in the at least two unit auxiliary areas.
Specifically, the terminal device can obtain the total number of users of the at least two interactive users in the target service, and the display sizes corresponding respectively to the auxiliary area and the operation area can be determined according to the total number of users. The auxiliary area can be divided into at least two unit auxiliary areas according to the total number of users and the display size corresponding to the auxiliary area, where the number of unit auxiliary areas is the same as the total number of users. The terminal device can display the operation area in the service display page according to the display size corresponding to the operation area, and display the video pictures of the at least two interactive users in the at least two unit auxiliary areas respectively, that is, the video picture of one interactive user can be displayed in each unit auxiliary area.
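The division of the auxiliary area into equal unit auxiliary areas can be sketched as a simple grid computation. The two-column layout and the pixel sizes are assumptions for illustration; the embodiment only specifies one unit area per interactive user.

```python
import math

def split_auxiliary_area(width: float, height: float,
                         total_users: int, columns: int = 2) -> list:
    # One unit auxiliary area per interactive user, laid out on a grid;
    # each cell is (x, y, cell_width, cell_height).
    rows = math.ceil(total_users / columns)
    cell_w, cell_h = width / columns, height / rows
    cells = []
    for i in range(total_users):
        row, col = divmod(i, columns)
        cells.append((col * cell_w, row * cell_h, cell_w, cell_h))
    return cells

cells = split_auxiliary_area(360, 480, 5)
# 5 users -> 5 unit areas on a 2-column, 3-row grid
```

With 5 users the last row is only half filled; fig. 9b suggests the actual layout may distribute the odd cell differently, which this sketch does not attempt to reproduce.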
It can be understood that the target service may have a limit on the number of members; for example, the lower member limit of the target service may be 4 and the upper member limit may be 8. That is, when the total number of interactive users in the target service has not reached 4, the target service cannot be started and must wait for other users to join; when the total number of interactive users reaches 4, the target service can be started; and after the total number of interactive users reaches 8, the number of interactive users contained in the target service has reached the upper limit, and no other users can join the target service. When the total number of interactive users is 8, reference may be made to the game display page corresponding to the sit-around type in the embodiment corresponding to fig. 2, and details are not repeated here.
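The member limits can be sketched as two small checks; the limit values 4 and 8 come from the example above, while the helper names are assumptions.

```python
MIN_MEMBERS, MAX_MEMBERS = 4, 8

def can_start_service(total_users: int) -> bool:
    # The target service can only start once the lower member limit is met.
    return total_users >= MIN_MEMBERS

def can_join_service(total_users: int) -> bool:
    # No further users may join once the upper member limit is reached.
    return total_users < MAX_MEMBERS
```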
Please refer to fig. 9a to 9d together, and fig. 9a to 9d are schematic views of a service page provided in an embodiment of the present application. As shown in fig. 9a, when the total number of the interactive users in the target service is 4, the service presentation page displayed in the terminal device 60a may include: region 60b, region 60c, region 60d, region 60e, and operation region 60 f. The area 60b, the area 60c, the area 60d, and the area 60e may collectively form an auxiliary area in a service display page, the area 60b, the area 60c, the area 60d, and the area 60e may all be referred to as a unit auxiliary area, the area 60b is configured to display a video picture of an interactive user 1 in a target service, the area 60c is configured to display a video picture of an interactive user 2 in the target service, the area 60d is configured to display a video picture of an interactive user 3 in the target service, the area 60e is configured to display a video picture of an interactive user 4 in the target service, and the operation area 60f is configured to display an operation control in the target service, such as a "start" operation control, which may be used to trigger a terminal device to determine to select a main interactive user from all interactive users in the target service.
Optionally, as shown in fig. 9b, when the total number of the interactive users in the target service is 5, the service display page displayed in the terminal device 60a may include: region 60g, region 60h, region 60i, region 60j, region 60k, and operation region 60 f. The area 60g, the area 60h, the area 60i, the area 60j, and the area 60k may collectively form an auxiliary area in the service display page, the area 60g, the area 60h, the area 60i, the area 60j, and the area 60k may all be referred to as a unit auxiliary area, the area 60g is configured to display a video picture of the interactive user 1 in the target service, the area 60h is configured to display a video picture of the interactive user 2 in the target service, the area 60i is configured to display a video picture of the interactive user 3 in the target service, the area 60j is configured to display a video picture of the interactive user 4 in the target service, the area 60k is configured to display a video picture of the interactive user 5 in the target service, and the operation area 60f may be configured to display an operation control in the target service.
Optionally, as shown in fig. 9c, when the total number of interactive users in the target service is 6, the service display page displayed in the terminal device 60a may include: region 60m, region 60n, region 60p, region 60q, region 60r, region 60s, and an operation region. The area 60m, the area 60n, the area 60p, the area 60q, the area 60r, and the area 60s may together form the auxiliary area in the service presentation page, and each of them may be referred to as a unit auxiliary area. The areas 60m, 60n, 60p, 60q, 60r, and 60s are used to display the video pictures of interactive users 1 through 6 in the target service, respectively, and the operation area may be used to display an operation control in the target service.
Optionally, as shown in fig. 9d, when the total number of interactive users in the target service is 7, the service display page displayed in the terminal device 60a may include: region 60t, region 60u, region 60v, region 60w, region 60x, region 60y, region 60z, and an operation region. The area 60t, the area 60u, the area 60v, the area 60w, the area 60x, the area 60y, and the area 60z may jointly form the auxiliary area in the service display page, and each of them may be referred to as a unit auxiliary area. The areas 60t, 60u, 60v, 60w, 60x, 60y, and 60z are used to display the video pictures of interactive users 1 through 7 in the target service, respectively, and the operation area can be used to display operation controls in the target service.
Step S303, responding to a starting instruction aiming at the operation control, and performing polling traversal on at least two interactive users in the auxiliary area.
Specifically, the operation area of the service presentation page includes an operation control, and the confirmation operation may include a start instruction and a stop instruction. When the local terminal user triggers the operation control, the terminal device may respond to the start instruction for the operation control and perform polling traversal on the at least two interactive users in the auxiliary area. The start instruction and the stop instruction may both be instructions generated according to a user operation, instructions generated according to user voice, instructions sent by the server, and the like; the polling traversal mode may be clockwise rotation traversal or counterclockwise rotation traversal.
Optionally, the operation control may include a "start" function control, and the manner of triggering the "start" function control may be click triggering, long-press triggering (that is, the terminal device detects, through a pressure sensor, that the duration for which the local terminal user presses the terminal screen reaches a duration threshold, such as 2 seconds), voice triggering (such as detecting that the local terminal user shouts "start"), and the like. For example, when the target service includes 8 interactive users, the operation region may be located in the central region of the service presentation page, and the auxiliary region is the remaining region outside the operation region; in the auxiliary region, the video frames of the 8 interactive users may be displayed around the operation region. When the local terminal user clicks and triggers the "start" function control, the terminal device may respond to the click trigger operation for the "start" function control and perform polling traversal on the at least two interactive users in the auxiliary region in a clockwise rotation manner.
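The clockwise polling traversal described above can be sketched as a simple cycle over the users in the auxiliary area; the function and user names below are illustrative, not taken from the patent.

```python
from itertools import cycle

def poll_traverse(users, clockwise=True):
    """Yield interactive users one at a time, looping around the
    auxiliary area until the traversal is stopped."""
    order = list(users) if clockwise else list(reversed(users))
    yield from cycle(order)

# Highlight six users in turn over a 4-user auxiliary area.
it = poll_traverse(["user1", "user2", "user3", "user4"])
first_six = [next(it) for _ in range(6)]   # wraps around after user4
```

Reversing the list before cycling would model the counterclockwise variant mentioned above.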
Step S304, responding to a stop instruction aiming at the operation control, stopping the polling traversal of the at least two interactive users in the auxiliary area, and determining the interactive user pointed to when the polling traversal stops as the main interactive user.
Specifically, in the process of performing polling traversal on at least two interactive users in the auxiliary area, the terminal device may respond to a stop instruction for the operation control to stop performing polling traversal on the at least two interactive users in the auxiliary area, and the interactive user indicated when the polling traversal is stopped is used as the main interactive user. For example, when the interactive user indicated when the polling traversal stops is the interactive user 5 in the target service, the interactive user 5 may be determined as the primary interactive user.
Optionally, the operation control may include a "stop" function control, and the manner of triggering the "stop" function control may likewise be click triggering, long-press triggering, voice triggering (for example, detecting that the local terminal user shouts "stop"), and the like. As in the foregoing example, in the auxiliary area, the video frames of the 8 interactive users may be displayed around the operation area; when the local terminal user triggers the "stop" function control by voice, the terminal device may respond to the voice trigger operation for the "stop" function control, stop the polling traversal of the at least two interactive users in the auxiliary area, and determine the interactive user 5 indicated when the polling traversal stops as the main interactive user.
Optionally, when the operation controls in the service presentation page include a "start" function control and a "stop" function control, the two controls may be simultaneously displayed in the operation area; that is, in the target service, the "start" function control and the "stop" function control may both be displayed in the operation area at all times. Alternatively, the "start" function control and the "stop" function control may be displayed in the operation area in a staggered manner. For example, the "start" function control is displayed in the operation area first; after the terminal device responds to a start instruction for the "start" function control and performs polling traversal on the at least two interactive users in the auxiliary area, the "start" function control may be hidden and the "stop" function control displayed in the operation area. The terminal device may then respond to a stop instruction for the "stop" function control, determine the interactive user pointed to when the polling traversal stops as the main interactive user, and determine the video picture corresponding to the main interactive user as the target video picture. Optionally, neither the "start" function control nor the "stop" function control may be displayed in the operation area, with the user triggering both in a voice-triggered manner; or only the "start" function control is displayed in the operation area while the "stop" function control is hidden and triggered by voice, and the like, which is not specifically limited herein.
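The staggered display of the "start" and "stop" controls can be modeled as a small state machine; this is a minimal sketch under assumed names, not the patent's implementation.

```python
class OperationArea:
    """Tracks which operation control is shown and whether polling
    traversal is running (staggered-display variant)."""
    def __init__(self):
        self.visible = {"start"}   # only "start" is shown initially
        self.polling = False

    def trigger(self, control):
        if control == "start" and "start" in self.visible:
            self.polling = True        # begin polling traversal
            self.visible = {"stop"}    # hide "start", show "stop"
        elif control == "stop" and "stop" in self.visible:
            self.polling = False       # traversal stops; main user chosen
            self.visible = {"start"}   # ready for the next round

area = OperationArea()
area.trigger("start")   # traversal starts, "stop" replaces "start"
area.trigger("stop")    # traversal halts, "start" is shown again
```

The always-visible variant would simply skip the `visible` bookkeeping and gate only on `polling`.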
Step S305, highlighting the display area of the main interaction user in the auxiliary area, and determining the highlighted display area as the main area in the service display page.
Specifically, the terminal device may highlight a display area of the main interactive user in the auxiliary area, that is, highlight a unit auxiliary area corresponding to the video image of the main interactive user, and determine the highlighted unit auxiliary area as a main area in the service display page. It can be understood that, in the service display page, the video images of the interactive users displayed in the auxiliary region may be located around the operation region, and after the main interactive user is selected in a polling traversal manner, the display region of the main interactive user in the auxiliary region may be used as the main region in the target service, and when the main interactive user in the target service is changed, the main region in the service display page may also be changed accordingly. At this time, the main area in the service display page is a part of the auxiliary area, and when the video image of the main interactive user is displayed in the main area, visually, the display form of the main interactive user in the service display page is changed (highlighted), but the display position of the main interactive user in the service display page and the display size corresponding to the video image of the main interactive user are not changed.
Step S306, displaying the task content in the main area, and outputting the interaction behavior data of the main interactive user for the task content.
Specifically, after the main interactive user is displayed in the main area, the terminal device may obtain task content for the main interactive user and display the task content in the main area. The terminal equipment can acquire the interactive behavior data of the main interactive user aiming at the task content and output the interactive behavior data corresponding to the main interactive user in the target service. The task content may refer to a performance topic of the main interactive user in the target service, and the interactive behavior data may include body motions, voice data, expression motions, and the like of the main interactive user for the task content.
Optionally, when the main interactive user is the local terminal user, the terminal device may use a camera and a microphone to collect the interactive behavior data of the local terminal user for the task content and output that interactive behavior data in the target service. The terminal device may send the interactive behavior data to the server, and the server transmits the interactive behavior data corresponding to the local terminal user to the user terminals corresponding to the auxiliary interactive users, where an auxiliary interactive user is an interactive user other than the main interactive user among the at least two interactive users of the target service. When the main interactive user is not the local terminal user, the terminal device may receive the interactive behavior data of the main interactive user for the task content sent by the server, and output that interactive behavior data in the target service.
Step S307, obtaining the number of votes of the auxiliary interactive users for the interaction behavior data; when the number of votes is greater than or equal to the number threshold, reselecting the main interactive user from the auxiliary area; and when the number of votes is smaller than the number threshold, outputting the updated interactive behavior data of the main interactive user for the task content in the main area.
Specifically, the auxiliary interactive users in the target service can vote for the main interactive users according to the interactive behavior data of the main interactive users, that is, the auxiliary interactive users can be used as judges of the main interactive users, and when the auxiliary interactive users are satisfied with the interactive behavior data of the main interactive users aiming at the task content, the main interactive users can vote; when the auxiliary interactive user is not satisfied with the interactive behavior data of the main interactive user for the task content, the main interactive user may not be voted.
Optionally, the operation area corresponding to an auxiliary interactive user may include a voting function control, and the auxiliary interactive user may vote for the interactive behavior data of the main interactive user for the task content by triggering the voting function control. When the local terminal user is an auxiliary interactive user, if the local terminal user is satisfied with the interactive behavior data of the main interactive user for the task content, the voting function control in the operation area may be triggered; the terminal device can respond to the triggering operation for the voting function control and send the voting information of the local terminal user to the server. The server can count the number of votes for the main interactive user according to the received voting information of the auxiliary interactive users and return the vote count to the terminal corresponding to each interactive user in the target service.
Therefore, the terminal device may receive the number of votes for the main interactive user returned by the server. When the number of votes is greater than or equal to the number threshold (for example, the number threshold may be one half of the number of auxiliary interactive users), it indicates that enough auxiliary interactive users in the target service are satisfied with the interactive behavior data of the main interactive user for the task content; it may be determined that the main interactive user has passed the task content, and the terminal device may reselect a main interactive user from the auxiliary area and continue the interaction in the target service. When the number of votes is smaller than the number threshold, it indicates that fewer auxiliary interactive users than the threshold are satisfied with the interactive behavior data of the main interactive user for the task content; it may be determined that the main interactive user has not passed the task content, and the terminal device may output the updated interactive behavior data of the main interactive user for the task content in the target service, where the updated interactive behavior data may be understood as the behavior data of the main interactive user performing the task content again.
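The pass/retry decision above reduces to a comparison against a threshold; the sketch below uses the example value of half the auxiliary users (the threshold choice is taken from the example, and the names are illustrative).

```python
def vote_decision(votes, num_auxiliary_users):
    """Return the next action after voting: reselect a new main
    interactive user if the vote count reaches the threshold,
    otherwise let the current main user retry the task content."""
    threshold = num_auxiliary_users / 2   # example: half the auxiliary users
    if votes >= threshold:
        return "reselect_main_user"       # task content passed
    return "retry_task"                   # output updated behavior data

# 7 auxiliary users -> threshold 3.5
passed = vote_decision(4, 7)
failed = vote_decision(3, 7)
```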
Referring to fig. 10a and 10b, fig. 10a and 10b are schematic diagrams of an interface for processing service data according to an embodiment of the present application. Taking a "Truth or Dare" game as an example of the target service, the data processing process in the target service is specifically described. As shown in fig. 10a, when the game room corresponding to the "Truth or Dare" game includes user small A (user small A may be referred to as the local terminal user corresponding to the terminal device 70a), user small B, user small C, user small D, user small E, user small F, user small G, and user small H, user small A may click and trigger a "start game" function control in the game room, so that the terminal device 70a displays a service presentation page matching the "Truth or Dare" game in the instant messaging application. The service presentation page may include an operation area and an auxiliary area, and the auxiliary area may display the video picture 70b of user small A, the video picture 70c of user small B, the video picture 70d of user small C, the video picture 70e of user small D, the video picture 70f of user small E, the video picture 70g of user small F, the video picture 70h of user small G, and the video picture 70i of user small H.
Since user small A is the user who created the game room, that is, the first user to join the game, user small A may be the user who selects the first player in the game; the "start" function control may thus be displayed in the operation area corresponding to user small A. When user small A clicks (or uses voice) to trigger the "start" function control, the terminal device 70a may respond to the triggering operation for the "start" function control and perform polling traversal on the 8 interactive users in the auxiliary area, that is, perform polling traversal on the video frames of the 8 interactive users displayed in the auxiliary area, such as sequentially traversing the video frame 70b, the video frame 70c, the video frame 70d, the video frame 70e, the video frame 70f, the video frame 70g, the video frame 70h, and the video frame 70i. In the polling traversal process, a prompt message such as "shout 'stop' to stop the rotation immediately" may be displayed in the operation area; that is, user small A may trigger the terminal device to stop the polling traversal by saying "stop", and the interactive user indicated when the polling traversal stops is used as the main interactive user. Optionally, a duration may also be predefined for the polling traversal in the game, for example, 10 seconds: when user small A has not shouted "stop" after 10 seconds, a countdown may be entered and a countdown number, such as 5, 4, 3, 2, 1, is displayed in the service display page; after the countdown ends, user small G (i.e., the interactive user corresponding to the video frame 70h) indicated when the polling traversal stops is used as the main interactive user.
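The timeout-plus-countdown fallback can be modeled by simulating the traversal pointer over discrete steps; the step granularity (one step per second) and the function name are assumptions made for illustration only.

```python
def traversal_stop_index(num_users, stop_step=None, timeout=10, countdown=5):
    """Return the index of the user pointed at when traversal halts.
    If nobody shouts "stop" within `timeout` steps, the countdown runs
    and the pointer keeps advancing until the countdown ends."""
    steps = stop_step if stop_step is not None else timeout + countdown
    return steps % num_users

# 8 users; the host shouts "stop" at step 6 -> pointer on index 6.
idx_stopped = traversal_stop_index(8, stop_step=6)
# Nobody stops: 10 + 5 = 15 steps over 8 users -> index 7.
idx_timeout = traversal_stop_index(8)
```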
The terminal device 70a may determine the display area of the video image 70h of user small G in the auxiliary area as the main area; that is, when user small G serves as the main interactive user, the display area corresponding to the video image 70h is the main area in the service display page at this time, and the video image 70h may be displayed prominently, for example highlighted, in the main area.
The terminal device 70a may acquire the task content 70j (e.g., "who likes best?") for user small G, display it in the main area, and output user small G's answer there. User small A serves as an auxiliary interactive user in the game at this time: the "pass" function control 70k (i.e., the voting function control) may be displayed in the operation area corresponding to user small A, and prompt information such as "small G is answering" may also be displayed in the operation area. User small A may determine whether to vote for user small G according to the performance data of user small G. If user small A is satisfied with user small G's answer to the task content 70j "who likes best?", the "pass" function control 70k may be triggered to vote for user small G; if user small A is not satisfied with the answer, the "pass" function control 70k is not triggered. When more than half of the auxiliary interactive users vote for user small G, that is, more than half of the auxiliary interactive users are satisfied with user small G's answer, the next round of the game may be entered. It will be appreciated that the next round may be started by the main interactive user of the previous round triggering the "start" function control in the operation area to select the next player, i.e., user small G triggers the "start" function control in the next round to select the main interactive user of the next round. When user small G has not yet triggered the "start" function control, the prompt message "waiting for start…" may be displayed in the operation area of user small A; in other words, when user small G has not triggered the "start" function control, the "start" function control may be displayed in the operation area of user small G, and the prompt message "waiting for start…" may be displayed in the operation areas of the users other than user small G.
Optionally, when user small G is selected as the main interactive user in the game, user small G may have the chance to change the task content multiple times (the number of allowed changes may be preset; for example, when it is set to 3, each main interactive user has 3 chances to change the task content). As shown in fig. 10b, the terminal device 70m is the user terminal held by user small G. After user small G is selected as the main interactive user, the task content 70p "sing a song" may be displayed in the display area (i.e., the main area) of the video screen 70h of user small G, and the "change question" function control 70n may be displayed in the operation area corresponding to user small G. If user small G does not want to perform the task content 70p "sing a song", the "change question" function control 70n may be triggered; when user small G clicks the "change question" function control 70n, the terminal device 70m may respond to the click operation for the "change question" function control 70n, change the task content for user small G, and switch the displayed task content 70p "sing a song" to the task content 70j "who likes best?". If user small G still does not want to answer the task content 70j "who likes best?", the "change question" function control 70n may be triggered again to change the task content; when user small G's allowance of changes is used up, the task content cannot be changed again.
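The limited "change question" allowance can be sketched as a small task deck; the class name and the sample task strings are hypothetical, and the allowance of 3 follows the example above.

```python
class TaskDeck:
    """Each main interactive user may swap the task content a preset
    number of times (3 in the example above)."""
    def __init__(self, tasks, max_changes=3):
        self._tasks = list(tasks)
        self.changes_left = max_changes
        self.current = self._tasks.pop(0)

    def change_question(self):
        """Swap to the next task if an allowance remains."""
        if self.changes_left == 0 or not self._tasks:
            return False            # allowance used up; keep current task
        self.changes_left -= 1
        self.current = self._tasks.pop(0)
        return True

deck = TaskDeck(["sing a song", "who likes best?", "tell a joke", "dance"])
deck.change_question()              # swap away from "sing a song"
```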
In the embodiment of the application, the service display page for the target service in the instant messaging application may include a main area and an auxiliary area: the auxiliary area may be used to display the interactive users participating in the target service, and the main area may be used to display the interaction behavior data of the selected user (i.e., the main interactive user). After the main interactive user is determined in the target service, the main interactive user may be displayed in the main area, that is, the main interactive user may interact with other users in the main area, which enriches the interaction modes in the instant messaging application. After the main interactive user is selected from the interactive users, the display area of the main interactive user in the auxiliary area can be determined as the main area and highlighted, so that the interactive users in the target service can quickly locate the video picture of the main interactive user and watch the interactive content output in that video picture; this reduces the probability of asking the main interactive user to provide the same interactive content again, thereby saving data traffic.
Referring to fig. 11, fig. 11 is a schematic flowchart of a service data processing method according to an embodiment of the present application. As shown in fig. 11, the service data processing method may include the following steps:
step S401, responding to the trigger operation aiming at the target service in the instant communication application, and displaying a service display page matched with the target service.
Specifically, in the embodiment of the present application, the service type is taken as an example of a packet countermeasure type, and a processing procedure of the target service is specifically described.
When the service type of the target service is the group countermeasure type, the service presentation page may include a first auxiliary region, a second auxiliary region, a first main region, and a second main region (such as the auxiliary region 20h, the auxiliary region 20g, the main region 20g, and the main region 20i in the embodiment corresponding to fig. 2). The group countermeasure type target service may include at least two teams (also referred to as user groups), and each team may correspond to an auxiliary region and a main region: the auxiliary region may be configured to display the video pictures of the interactive users included in the corresponding team, and the main region may be configured to output only the interactive behavior data of the main interactive user in the corresponding team in the target service. For example, if team A corresponds to the first auxiliary region and the first main region, the first auxiliary region may be configured to display the video pictures of the interactive users included in team A, and the first main region may be configured to output only the interactive behavior data of the main interactive user in team A in the target service.
Step S402, grouping at least two interactive users according to the group identifications respectively corresponding to the at least two interactive users to obtain a first user group and a second user group.
Specifically, in the target service of the group countermeasure type, when the interactive user joins the target service, the interactive user may select a team (i.e., a user group) to be joined by himself or may allocate teams to the interactive user by the terminal device, so that the terminal device may divide at least two interactive users in the target service according to a group identifier (e.g., information such as a team name, a team number, and the like) corresponding to the interactive user, and the interactive users having the same group identifier may be divided into the same user group. When the group identifiers corresponding to the at least two interactive users are of two types, the terminal device may divide the at least two users into two user groups, which are a first user group and a second user group respectively.
Optionally, when there are more than two types of group identifiers corresponding to the at least two interactive users, the interactive users corresponding to each type of group identifier may be divided into one user group; that is, the number of user groups may be greater than 2. If there are three types of group identifiers, the terminal device may divide the at least two interactive users into three user groups according to the group identifiers; when there are four types of group identifiers, the terminal device may divide the at least two interactive users into four user groups according to the group identifiers. In the service display page, each user group may correspond to an auxiliary region and a main region, and the page display form when the target service includes more than two user groups is similar to the page display form when the target service includes two user groups.
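The grouping described in steps S402 and above amounts to bucketing users that share a group identifier; a sketch with illustrative user names and identifiers:

```python
from collections import defaultdict

def group_users(users):
    """Divide interactive users into user groups: users sharing a
    group identifier (team name/number) land in the same group."""
    groups = defaultdict(list)
    for name, group_id in users:
        groups[group_id].append(name)
    return dict(groups)

# Two identifier types -> a first and a second user group.
teams = group_users([("user1", "red"), ("user2", "blue"),
                     ("user3", "red"), ("user4", "blue")])
```

With three or four identifier types, the same function yields three or four user groups, matching the optional case above.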
Step S403, displaying the interactive users included in the first user group in the first auxiliary area, and displaying the interactive users included in the second user group in the second auxiliary area.
Specifically, after the terminal device divides the at least two interactive users into two user groups, the video pictures corresponding to the interactive users included in the first user group can be displayed in the first auxiliary area of the service display page, and the video pictures corresponding to the interactive users included in the second user group can be displayed in the second auxiliary area of the service display page. For example, when the first user group includes user 1, user 2, and user 3, the video pictures corresponding to user 1, user 2, and user 3 may be displayed in the first auxiliary area; when the second user group includes user 4, user 5, and user 6, the video pictures corresponding to user 4, user 5, and user 6 may be displayed in the second auxiliary area.
Optionally, in the service display page, the first auxiliary area and the second auxiliary area may have the same display size, and the video frames of the interactive users displayed in the first auxiliary area and the second auxiliary area may also have the same display size.
Step S404, determining a first interactive user from the first user group and a second interactive user from the second user group in response to the confirmation operation for the auxiliary area.
Specifically, in the target service of the group countermeasure type, one main interactive user may be selected from each of the first user group and the second user group; that is, the main interactive users in the target service may include a first interactive user in the first user group and a second interactive user in the second user group. The terminal device may determine, according to the confirmation operation, the first interactive user from the first user group displayed in the first auxiliary area, and the second interactive user from the second user group displayed in the second auxiliary area. The confirmation operation may be a confirmation instruction sent by the server to the terminal device. It can be understood that the interactive users included in the first user group may be selected in turn as the main interactive user of the first user group (i.e., the first interactive user), and the interactive users included in the second user group may be selected in turn as the main interactive user of the second user group (i.e., the second interactive user).
Step S405, display the first interactive user to the first main area, and display the second interactive user to the second main area.
Specifically, the terminal device may pull the video image of the first interactive user from the first auxiliary region into the first main region for display, and pull the video image of the second interactive user from the second auxiliary region into the second main region for display. It should be noted that the display positions in the first auxiliary region of the interactive users included in the first user group may change. Before the first interactive user is selected, the display positions of the interactive users of the first user group in the first auxiliary region may be determined by the time at which each interactive user joined the target service, or may be randomly determined; the display positions at this time may be referred to as the initial display positions of the interactive users of the first user group in the first auxiliary region. After the terminal device selects the main interactive user (i.e., the first interactive user) from the first user group and switches the display of the first interactive user's video picture from the first auxiliary area to the first main area, the first interactive user may temporarily not be displayed in the first auxiliary area, and the interactive users positioned behind the first interactive user in the first auxiliary area move up in sequence; when the first interactive user finishes the interaction and returns to the first auxiliary area, the video picture of the first interactive user can be displayed at the tail of the first auxiliary area, and so on, so that the display positions of the interactive users in the first auxiliary area are updated.
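The display-position update just described behaves like a queue rotation: the selected user leaves the head of the auxiliary area for the main region, the users behind move up, and the finished user rejoins at the tail. A sketch (function and user names assumed):

```python
from collections import deque

def take_turn(aux_positions):
    """Pull the head of the auxiliary-area queue into the main region;
    the users behind move up, and the finished user rejoins at the tail."""
    queue = deque(aux_positions)
    main_user = queue.popleft()     # switched to the main area for display
    queue.append(main_user)         # re-displayed at the tail afterwards
    return main_user, list(queue)

main_user, updated = take_turn(["user1", "user2", "user3"])
```

Calling `take_turn` repeatedly yields each group member as the main interactive user in turn, consistent with step S404's rotation.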
Similarly, the display position of the interactive user included in the second user group in the second auxiliary region may also be transformed, and the transformation process is the same as the transformation process of the interactive user included in the first auxiliary region, and is not described herein again.
Step S406, displaying the task content in the service display page, and outputting the interaction behavior data of the first interactive user and the second interactive user for the task content.
Specifically, the terminal device may obtain the task content for the first interactive user and the second interactive user in the target service and display the task content in the service display page, where the task content may be displayed independently in the first main area and the second main area. The terminal device may also obtain the interaction behavior data of the first interactive user for the task content and the interaction behavior data of the second interactive user for the task content, and output both in the target service. For example, if the task content is the question "Is a whale a fish?", the interaction behavior data of the first interactive user for the task content may be the voice answer given by the first interactive user, and the interaction behavior data of the second interactive user for the task content may be the voice answer given by the second interactive user.
Optionally, the interaction behavior data output by the terminal device in the target service may be user behavior data collected locally by the terminal device, or behavior data received from the server. When the local terminal user is a main interactive user in the target service, for example the first interactive user, the terminal device may collect the interaction behavior data of the first interactive user for the task content in real time through a camera and a microphone, and output the interaction behavior data corresponding to the first interactive user; this interaction behavior data may be video data of the first interactive user (e.g., a video picture containing the first interactive user's expression) or voice data of the first interactive user (e.g., an answer spoken by the first interactive user that is associated with the task content). The terminal device may send the interaction behavior data corresponding to the first interactive user to the server, so that the server may forward it to the interactive users in the target service other than the first interactive user. Of course, the terminal device may also receive, from the server, the interaction behavior data of the second interactive user for the task content, and after receiving it, output the interaction behavior data corresponding to the second interactive user in the target service. Optionally, when the local terminal user is an auxiliary interactive user in the target service, the terminal device may receive, from the server, both the interaction behavior data of the first interactive user for the task content and the interaction behavior data of the second interactive user for the task content.
Step S407, receiving the interaction result determined by the server according to the interaction behavior data corresponding to the first interactive user and the second interactive user respectively.
Specifically, after receiving the interaction behavior data corresponding to the first interactive user and the interaction behavior data corresponding to the second interactive user, the server may determine the interaction result for the task content according to the two sets of interaction behavior data, and send the interaction result to the terminal device. When the interaction behavior data corresponding to the first interactive user contains the answer information of the task content, the interaction result for the task content may be determined as: an interaction success result for the first interactive user; when the interaction behavior data corresponding to the second interactive user contains the answer information of the task content, the interaction result for the task content may be determined as: an interaction success result for the second interactive user; when neither set of interaction behavior data contains the answer information of the task content, the interaction result is: an interaction failure result. Continuing the previous example, where the task content is the question "Is a whale a fish?": when the first interactive user says the answer "no" first, the interaction result is an interaction success result for the first interactive user; when the second interactive user says the answer "no" first, the interaction result is an interaction success result for the second interactive user; and when neither the first interactive user nor the second interactive user says the answer "no" within the specified time, the interaction result is an interaction failure result.
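The server-side decision above can be sketched as follows; the `(answer text, buzz-in timestamp)` tuples are an assumed representation of the interaction behavior data, not one specified in the text:

```python
def decide_interaction_result(first_answer, second_answer, correct_answer):
    """Whichever main interactive user's behavior data contains the
    correct answer earliest wins; if neither does, the result is an
    interaction failure. Inputs are (text, timestamp) tuples; text may
    be None when no answer was spoken."""
    hits = []
    for user, (text, ts) in (("first", first_answer), ("second", second_answer)):
        if text is not None and text == correct_answer:
            hits.append((ts, user))
    if not hits:
        return "interaction_failure"
    _, winner = min(hits)  # earliest correct answer wins
    return f"interaction_success:{winner}"
```

For instance, with the question "Is a whale a fish?" and correct answer "no", `decide_interaction_result(("no", 3.2), ("no", 4.0), "no")` yields an interaction success result for the first interactive user.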
Step S408, updating the display of the first interactive user in the first main area and the second interactive user in the second main area respectively according to the interaction result.
Specifically, the terminal device may update the main interactive user of the first user group and the main interactive user of the second user group according to the interaction result returned by the server. Further, when the interaction result is the interaction success result corresponding to the first interactive user, the terminal device may switch the video picture of the second interactive user from the second main area back to the second auxiliary area, select the next interactive user from the second auxiliary area as the updated second interactive user, and display the updated second interactive user's video picture in the second main area; when the interaction result is the interaction success result corresponding to the second interactive user, the terminal device may switch the video picture of the first interactive user from the first main area back to the first auxiliary area, select the next interactive user from the first auxiliary area as the updated first interactive user, and display the updated first interactive user's video picture in the first main area. In other words, in the target service, the video picture of a main interactive user who interacts successfully remains displayed in the main area, while a main interactive user who fails must return to the auxiliary area so that a new main interactive user can be selected.
Optionally, when the interaction result is an interaction failure result corresponding to the task content, the terminal device may switch the video picture of the first interactive user from the first main area back to the first auxiliary area, select the next interactive user from the first auxiliary area as the updated first interactive user, and display the updated first interactive user's video picture in the first main area; at the same time, it may switch the video picture of the second interactive user from the second main area back to the second auxiliary area, select the next interactive user from the second auxiliary area as the updated second interactive user, and display the updated second interactive user's video picture in the second main area. In other words, when both the first interactive user and the second interactive user fail to interact in the target service, both need to exit the main areas and return to their corresponding auxiliary areas.
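The main-area updates in steps S407 and S408 reduce to a per-group rotation: a losing main interactive user (or both, on a failure result) returns to the tail of its auxiliary area and the head user there is promoted. A minimal sketch, modeling display order with plain lists (an illustrative data model, not the patent's):

```python
def update_main_areas(result, main1, aux1, main2, aux2):
    """Update both groups' main interactive users per the interaction
    result. aux1/aux2 are mutated in place; the (possibly new) main
    interactive users are returned."""
    def rotate(main, aux):
        aux.append(main)   # loser returns to the tail of its auxiliary area
        return aux.pop(0)  # next user in display order becomes main

    if result in ("success_first", "failure"):
        main2 = rotate(main2, aux2)  # second group's main user is replaced
    if result in ("success_second", "failure"):
        main1 = rotate(main1, aux1)  # first group's main user is replaced
    return main1, main2
```

On `"success_first"`, the first main interactive user keeps the main area while the second group rotates; on `"failure"`, both groups rotate.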
Referring to fig. 12a and 12b together, fig. 12a and 12b are schematic diagrams of an interface for processing service data according to an embodiment of the present application. Taking a "question-and-answer PK" game as the target service, the data processing process in a group confrontation type target service is described in detail. As shown in fig. 12a, the game room 80b corresponding to the "question-and-answer PK" game includes two teams, team A (i.e., the first user group) and team B (i.e., the second user group). Team A members may include user small A, user small B, user small C, and user small D, and the video pictures of team A members may be displayed in area 80d of game room 80b; team B members may include user small E, user small F, user small G, and user small H, and the video pictures of team B members may be displayed in area 80e of game room 80b.
The user small A may click the "start game" function control in the game room, so that the terminal device 80a responds to the trigger operation on the "start game" function control and displays the service display page matched with the "question-and-answer PK" game in the instant messaging application. The service display page may include a main area 80h, a main area 80i, an auxiliary area 80f, and an auxiliary area 80g; the terminal device 80a may display the video pictures of all members of team A in the auxiliary area 80f and the video pictures of all members of team B in the auxiliary area 80g. When the user small A clicks the "start game" function control, the background server of the instant messaging application can detect that the "question-and-answer PK" game has switched from the not-started state to the started state, and can send a confirmation instruction to the terminal device instructing the terminal device 80a to select, in order, the user small A from the auxiliary area 80f as a main interactive user and switch the video picture of the user small A from the auxiliary area 80f to the main area 80h; at this time, the user small A may temporarily not be displayed in the auxiliary area 80f, and the user small B temporarily becomes the first member in the auxiliary area 80f. Similarly, through the confirmation instruction sent to the terminal device, the background server may further instruct the terminal device 80a to select, in order, the user small E from the auxiliary area 80g as a main interactive user and switch the video picture of the user small E from the auxiliary area 80g to the main area 80i; at this time, the user small E may temporarily not be displayed in the auxiliary area 80g, and the user small F temporarily becomes the first member in the auxiliary area 80g.
After the video picture of the user small A is pulled into the main area 80h for display, the score obtained by the user small A in this game may be displayed in the main area 80h; before the game formally starts, the score of the user small A is 0. After the video picture of the user small E is pulled into the main area 80i for display, the score obtained by the user small E in this game may be displayed in the main area 80i; before the game formally starts, the score of the user small E is also 0. Of course, each game may have a duration limit, and the duration information of the game may be displayed in the main area 80h or the main area 80i; for example, the duration of each game is 120 seconds.
The terminal device 80a may obtain a question 80j (e.g., "How does a dandelion spread its seeds?") in the "question-and-answer PK" game and display the question 80j in the service display page, where the question 80j may be displayed independently in the main area 80h and the main area 80i, or displayed at the junction of the main area 80h and the main area 80i. Both the user small A and the user small E may race to answer (buzz in on) the question 80j; the buzz-in voice data corresponding to each of them may be collected and recognized in real time through a speech recognition technology. If the buzz-in voice data of the user small A hits the correct answer "wind" to the question 80j, i.e., the answer spoken by the user small A is the correct answer, it indicates that the user small A has answered successfully; prompt information 80k may be displayed in the main area 80h, where the prompt information 80k may be reward prompt information after a successful answer. At this time, the score of the user small A in this game may be updated to 100, and as the winning party the user small A may continue answering in the main area 80h.
Because the user small E failed to answer, the terminal device 80a may switch the video picture of the user small E from the main area 80i back to the auxiliary area 80g; after the user small E returns to the auxiliary area 80g, the video picture of the user small E may be displayed at the tail of the auxiliary area 80g. The terminal device 80a may select the user small F from the team B members as the next main interactive user, and switch the video picture of the user small F from the auxiliary area 80g to the main area 80i. The terminal device 80a may then obtain a new question 80m (e.g., "Guess the song title from its prelude") and display the question 80m in the service display page. Both the user small A and the user small F may race to answer the question 80m; their buzz-in voice data may be collected and recognized in real time through the speech recognition technology. Each question may have a time limit; for example, the answering time of each question is 15 seconds, so the user small A and the user small F need to answer within 15 seconds, and exceeding the time limit means the answer fails. If within the specified time neither the buzz-in voice data of the user small A nor that of the user small F hits the correct answer to the question 80m, i.e., both the user small A and the user small F fail to answer, prompt information 80n may be displayed in the service display page, where the prompt information 80n may be the correct answer to the question 80m. At this time, both the user small A and the user small F need to exit the main areas, new members answer instead, and so on, until the answering time exceeds 120s and the game ends.
Optionally, during the game, the audio data of the auxiliary interactive users (who may also be referred to as spectating users) in the auxiliary area 80f and the auxiliary area 80g may be output, but the background server does not perform speech recognition on the audio data of the auxiliary interactive users; that is, even if an auxiliary interactive user speaks the correct answer, it is not counted.
As shown in fig. 12b, when the answering time of this game reaches 120s, prompt information 80p (e.g., "The game is over") may be displayed in the service display page, and the page jumps from the service display page to the service result page 80q. The total scores of team A and team B in this game (e.g., the final total score of team A: 9600, and the final total score of team B: 4000) and a user ranking list 80r may be displayed in the service result page 80q; the ranking may be ordered by the number of questions each user answered correctly in this game, such as: first place, user small A, 24 correct answers; second place, user small E, 18 correct answers; third place, user small D, 16 correct answers; and so on. The service result page 80q further includes a "share" function control; the user small A may trigger the "share" function control in the service result page 80q, so that the terminal device 80a shares the service result page 80q with friends as an H5 dynamic web page.
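The user ranking list 80r described above can be produced by a simple sort on per-user correct-answer counts; the input mapping (user name to count) is an assumed representation for illustration:

```python
def build_ranking(correct_counts):
    """Order users by the number of questions answered correctly in
    this game, highest first, as in the user ranking list 80r."""
    return sorted(correct_counts.items(), key=lambda kv: kv[1], reverse=True)
```

With the example data from the text, `build_ranking({"small A": 24, "small E": 18, "small D": 16})` places user small A first with 24 correct answers.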
Optionally, in a group confrontation type target service, the question in the target service may be an expression picture or words describing an expression. The first interactive user and the second interactive user may make expressions according to the question; the interaction behavior data corresponding to the first interactive user (which may include the expression made by the first interactive user) may be output in the first main area, and the interaction behavior data corresponding to the second interactive user (which may include the expression made by the second interactive user) may be output in the second main area. By performing expression recognition on the interaction behavior data corresponding to the first interactive user and the second interactive user, the interactive user first recognized as matching the question is the winner.
Referring to fig. 13, fig. 13 is an interface display diagram of expression interaction according to an embodiment of the present application. As shown in fig. 13, the user small A, the user small B, the user small C, and the user small D form team 1, and the user small E, the user small F, the user small G, and the user small H form team 2. The service display page of the terminal device 90a includes a main area 90d, a main area 90e, an auxiliary area 90b, and an auxiliary area 90c. The user small A is the main interactive user selected from team 1, and the user small E is the main interactive user selected from team 2; at this time, the video picture of the user small A may be displayed in the main area 90d, the video picture of the user small E may be displayed in the main area 90e, the video pictures of the users in team 1 other than the user small A may be displayed in the auxiliary area 90b, and the video pictures of the users in team 2 other than the user small E may be displayed in the auxiliary area 90c.
The terminal device 90a may obtain a question 90f in the target service, where the question 90f is an expression picture. The user small A and the user small E may make expressions according to the question 90f; the interaction behavior data of the user small A (i.e., video frame data containing the expression of the user small A) is output in the main area 90d, and the interaction behavior data of the user small E (i.e., video frame data containing the expression of the user small E) is output in the main area 90e. The server may obtain the interaction behavior data corresponding to the user small A and the user small E in real time, and obtain real-time expression recognition results for the user small A and the user small E through an expression recognition technology. When a real-time expression recognition result matches the expression shown in the question 90f, the answer is successful; the user first recognized as matching the question 90f is the winning user and may obtain a score reward, while the user who failed needs to exit the main area and return to the auxiliary area, and the next main interactive user is reselected for answer interaction, and so on until the game ends.
In the embodiment of the present application, the service display page for a target service in an instant messaging application may include a plurality of main areas and a plurality of auxiliary areas; the auxiliary areas may be used to display the interactive users in each user group, and the main areas may be used to display the interaction behavior data of a selected user (i.e., a main interactive user). After a main interactive user is determined in the target service, the main interactive user may be displayed in a main area and may interact with other users there, so that the interaction modes in the instant messaging application can be enriched. Because the video picture displayed in the main area is larger than the video picture displayed in the auxiliary area, the interactive users in the target service can quickly locate the video picture of the main interactive user and watch the interaction content output in it, which can reduce the probability that the main interactive user is asked to provide the same interaction content again and thus save data traffic.
Referring to fig. 14, fig. 14 is a timing diagram of a service data processing method according to an embodiment of the present application. As shown in fig. 14, the service data processing method may include the following steps:
Step S501, multimedia data corresponding to at least two interactive users in the target service is obtained.
Specifically, in a service interaction scenario using video chat, when a target service includes at least two interactive users and the target service is in a start state, a server may obtain multimedia data corresponding to the at least two interactive users in the target service, where the multimedia data includes but is not limited to: video pictures, head portraits, and user nicknames.
When the multimedia data is a head portrait or a user nickname, the server may obtain the head portrait or user nickname corresponding to each of the at least two interactive users in the target service from a background database of the instant messaging application. When the multimedia data is a video picture, the server may render the video frame data corresponding to each interactive user to obtain the video picture corresponding to each interactive user. The video frame data may refer to the video data, associated with the corresponding interactive user, that is uploaded in the target service by the user terminal to which that interactive user belongs; for example, the user terminal to which user 1 belongs uploads the video data of user 1 collected in real time to the server. After receiving the video data uploaded by the user terminal to which user 1 belongs, the server converts the received video data into video frame data, renders the video frame data through an open graphics library, and generates the video picture to be displayed on the terminal screen. Similarly, the server may render the video frame data corresponding to each interactive user to obtain the video picture corresponding to each interactive user in the target service.
Step S502, multimedia data corresponding to at least two interactive users are sent.
Specifically, the server may send the multimedia data corresponding to each interactive user to the user terminal to which each interactive user belongs, so that the user terminal corresponding to each interactive user may receive the multimedia data of all interactive users in the target service.
Step S503, displaying multimedia data corresponding to at least two interactive users in the auxiliary area of the service display page.
Specifically, after receiving the multimedia data respectively corresponding to the at least two interactive users sent by the server, the terminal device may respectively display the multimedia data of the at least two interactive users in the auxiliary area of the service display page, and the display form of the multimedia data corresponding to the interactive users in the auxiliary area may refer to step S102 in the embodiment corresponding to fig. 3, which is not described herein again.
Step S504, a main interactive user is determined from at least two interactive users.
Specifically, the server may select the main interactive user from the at least two interactive users according to the type of the target service and the display order of the multimedia data of the at least two interactive users in the auxiliary area. For example, when the target service is a take-turns performance type service, the server may select main interactive users in sequence from the at least two interactive users according to their display positions in the auxiliary area; when a main interactive user is selected for the first time in the target service, the server may determine the first interactive user in the auxiliary area as the main interactive user.
When the target service is a sit-in-a-circle type service, the server may obtain the audio data in the target service; if it is detected that an interactive user shouts "stop", the polling traversal over the at least two interactive users in the auxiliary area is stopped, and the interactive user indicated by the position the polling traversal has reached in the auxiliary area at that moment is selected from the at least two interactive users and determined as the main interactive user.
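The sit-in-a-circle selection above can be sketched as a cyclic traversal that halts on a recognized "stop"; the per-tick `transcripts` iterable (recognition result per polling step, `None` when nothing was recognized) is an assumed interface for illustration:

```python
import itertools

def poll_until_stop(users, transcripts):
    """Cycle over the interactive users in display order; when a
    recognized transcript equals "stop", the user currently indicated
    by the traversal becomes the main interactive user."""
    pointer = itertools.cycle(users)
    current = next(pointer)
    for text in transcripts:
        if text == "stop":
            return current  # user indicated when polling stops
        current = next(pointer)
    return None  # traversal never stopped
```

For example, `poll_until_stop(["A", "B", "C"], [None, None, "stop"])` returns `"C"`, the user the traversal is pointing at when "stop" is recognized.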
Referring to fig. 15, fig. 15 is a flowchart illustrating a process of selecting a main interactive user according to an embodiment of the present application. As shown in fig. 15, in a sit-in-a-circle type target service, the selection process of the main interactive user may be implemented through steps S11-S13. In the target service, the interactive user who created the service session may trigger the operation control in the operation area, so that the terminal device performs a polling traversal over the at least two interactive users in the auxiliary area. During the polling traversal, the server may perform step S11, i.e., receive in real time the audio data corresponding to each interactive user in the target service and perform noise reduction processing on the received audio data, such as eliminating background music in the audio data. The server may then continue to perform step S12, i.e., invoke the speech recognition service and perform text conversion processing on the obtained audio data to obtain a text conversion result corresponding to the audio data. When the text conversion result is "stop", step S13 may be performed to stop the polling traversal over the at least two interactive users in the auxiliary area and select the interactive user indicated when the polling traversal stopped as the main interactive user. For the specific implementation process of the speech recognition, reference may be made to step S21 in the embodiment corresponding to fig. 16, which is not described here again. The speech recognition service may be a function integrated in the server, or a speech recognition service called from an artificial intelligence cloud service through an Application Programming Interface (API), which is not specifically limited here.
The artificial intelligence cloud service is also generally called AI as a Service (AIaaS). It is a service mode of an artificial intelligence platform; specifically, an AIaaS platform splits several types of common AI services and provides independent or packaged services in the cloud. This service mode is similar to opening an AI-themed mall: all developers can access, through an API, one or more of the artificial intelligence services provided by the platform, and some experienced developers can also use the AI framework and AI infrastructure provided by the platform to deploy, operate, and maintain their own dedicated cloud artificial intelligence services.
Step S505, sending an area replacement instruction.
Specifically, the server may send an area replacement instruction to the terminal device, where the area replacement instruction is used to instruct the terminal device to switch the display of the main interactive user from the auxiliary area to the main area.
Step S506, displaying the main interactive user in the main area of the service display page according to the area replacement instruction.
The specific implementation process of step S506 may refer to step S104 in the embodiment corresponding to fig. 3, which is not described herein again.
Furthermore, after the video picture of the main interactive user is pulled into the main area for display, the task content for the main interactive user may be displayed in the service display page, and the terminal device may collect the interaction behavior data of the main interactive user for the task content and upload it to the server. The server may obtain the interaction behavior data of the main interactive user for the task content, generate an interaction result corresponding to the task content according to the interaction behavior data, and send the interaction result to the terminal device, so that the terminal device can display prompt information associated with the interaction result, such as answer success prompt information or answer failure prompt information, in the service display page.
Optionally, when the target service is a take-turns performance type service and the task content is text task content, the server may obtain the interaction behavior data of the main interactive user for the text task content, perform speech recognition on the first audio data contained in the interaction behavior data, and obtain a first converted text corresponding to the interaction behavior data; it may also obtain the second audio data of the auxiliary interactive users (the interactive users other than the main interactive user among the at least two interactive users) for the interaction behavior data, and perform speech recognition on the second audio data to obtain a second converted text corresponding to the second audio data. When the first converted text matches the text task content, the interaction result corresponding to the task content is determined as a task replacement result; when the first converted text does not match the text task content and the second converted text matches the text task content, the interaction result corresponding to the task content is determined as an interaction success result; and when neither the first converted text nor the second converted text matches the text task content, the interaction result corresponding to the task content is determined as an interaction failure result. In other words, the server can receive the audio data uploaded in the target service by the user terminal to which each interactive user belongs, invoke the speech recognition service to perform text conversion on the audio data of each interactive user to obtain the converted text corresponding to the audio data, and determine the interaction result corresponding to the text task content according to the matching result between the text task content and the converted text.
It should be noted that the text task content in the embodiment of the present application refers to a task that requires an interactive user to perform voice answering in a target service.
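The three-way decision described above for the take-turns performance type service can be sketched as follows; matching is simplified to substring containment purely for illustration (the patent does not specify the matching rule):

```python
def judge_text_task(task_text, performer_text, guesser_texts):
    """Decide the interaction result for text task content: if the main
    interactive user's (first) converted text matches the task, the task
    is replaced (the performer revealed the answer); if an auxiliary
    user's (second) converted text matches, the interaction succeeds;
    otherwise it fails."""
    if task_text in performer_text:
        return "replace_task"  # performer spoke the task content
    if any(task_text in t for t in guesser_texts):
        return "interaction_success"
    return "interaction_failure"
```

For example, if the task word is "apple" and a guesser's converted text contains it while the performer's does not, the result is an interaction success result.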
Referring to fig. 16, fig. 16 is a schematic flowchart of a process of generating an interaction result according to an embodiment of the present application. As shown in fig. 16, taking a take-turns performance type target service as an example, the generation process of the interaction result can be implemented through steps S20-S23.
Step S20: in the target service, the user terminal to which each interactive user belongs may upload audio and video data (including the interaction behavior data corresponding to the main interactive user and the second audio data corresponding to the auxiliary interactive users) to the server, and the server may receive the audio data corresponding to each interactive user. Noise such as background music may exist in the target service, so the server may perform noise reduction processing on the received audio data to eliminate noise such as background music.
In step S21, the server may invoke a speech recognition service to perform text conversion on the noise-reduced audio data, so as to obtain a converted text corresponding to the audio data.
The specific process of speech recognition may include: the server can perform framing processing on the received audio data to divide it into multiple frames of audio data, and then perform feature extraction on each frame to obtain the acoustic features corresponding to each frame of audio data, that is, each frame of audio signal (waveform) is converted into a multi-dimensional vector containing sound information; the acoustic features may include features such as linear prediction cepstral coefficients (LPCC) and Mel-frequency cepstral coefficients (MFCC).
The server may obtain an acoustic model, where the acoustic model may be obtained through speech data training, and the acoustic model may be used to characterize a mapping relationship between acoustic features and phoneme information, that is, the acoustic model has an input of acoustic feature vectors and an output of phoneme information. The server can input the acoustic features corresponding to each frame of audio data into the acoustic model, and the target phoneme information corresponding to each frame of audio data can be obtained through the acoustic model. Among other things, the acoustic models may include, but are not limited to: hidden Markov Models (HMM), Long Short Term Memory (LSTM), Recurrent Neural Networks (RNN).
The server can obtain the unit characters corresponding to the target phoneme information from a dictionary, input the unit characters corresponding to each frame of audio data into a language model, obtain the probability of each unit character or of adjacent unit characters through the language model, and determine the converted text corresponding to the audio data according to the probabilities output by the language model. The dictionary may be used to represent the correspondence between characters or words and phonemes; in brief, the dictionary may be regarded as the correspondence between pinyin and Chinese characters. The language model may be obtained by training on a large amount of text data; its input is text information, and its output is the probability that individual characters or words occur together. The language model may include, but is not limited to: Hidden Markov Models (HMM), Long Short-Term Memory (LSTM) networks, Recurrent Neural Networks (RNN), and n-gram language models.
For example, suppose the audio data is a speech signal of "I am a robot", and the acoustic features obtained by feature extraction are: [1, 2, 3, 4, 5, 6 ...]. Inputting the acoustic features [1, 2, 3, 4, 5, 6 ...] into the acoustic model outputs the target phoneme information matched with the acoustic features, such as: wosijiqirn. The unit characters matched with the target phoneme information can then be obtained through the dictionary, which yields homophone candidates such as: nest-wo, me-wo, is-si, machine-ji, machine-qi, human-rn, grade-ji, ninja-rn. Further, when the unit characters are input into the language model, the output probabilities may be: "I": 0.0786; "is": 0.0546; "I am": 0.0898; "machine": 0.0967; "robot": 0.6785. Thus the converted text of the audio data "I am a robot" can be obtained: I am a robot.
The server can obtain the conversion text corresponding to the audio data of each interactive user through the acoustic model, the dictionary and the language model.
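The dictionary and language-model steps above can be sketched as a toy decoder. Everything below, including the phoneme tables and probabilities, is an illustrative stand-in rather than a real trained model; a production system would use the HMM/LSTM/RNN models named above.

```python
def decode(phoneme_seq, dictionary, language_model):
    """For each phoneme, pick the homophone candidate that the language
    model scores highest, mirroring the dictionary + language-model step."""
    words = []
    for phoneme in phoneme_seq:
        candidates = dictionary[phoneme]  # homophone candidates from the dictionary
        words.append(max(candidates, key=lambda w: language_model.get(w, 0.0)))
    return " ".join(words)

# Hypothetical tables echoing the "wosijiqirn" example in the text.
dictionary = {"wo": ["nest", "i"], "si": ["is"], "jiqirn": ["grade ninja", "robot"]}
language_model = {"i": 0.0786, "is": 0.0546, "robot": 0.6785,
                  "nest": 0.01, "grade ninja": 0.02}

text = decode(["wo", "si", "jiqirn"], dictionary, language_model)
```

A real decoder would score whole word sequences jointly rather than picking each word independently; this sketch keeps only the "highest-probability candidate" idea.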
Step S22, the server can match the recognition results with the answer to the text task content: when a converted text of an auxiliary interactive user is the same as the answer to the text task content, the server can determine the interaction result of the text task content to be an interaction success result; when the converted text corresponding to the main interactive user is the same as the answer to the text task content, the interaction result of the text task content can be determined to be a replacement task, that is, the main interactive user has spoken the answer, and the text task content is invalidated; and when no auxiliary interactive user produces a converted text that is the same as the answer within the specified time, the interaction result of the text task content can be determined to be an interaction failure result.
In step S23, the server may send the interaction result of the text task content to the terminal device, and notify each interactive user (i.e., game member) in the target service of the interaction result of the text task content. For example, when the interaction result of the text task content is an interaction success result, the interaction success result may be sent to the terminal device, and a prompt message may be displayed in the terminal device to notify each interactive user in the target service that the response is successful.
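The decision rule applied in steps S20-S23 can be written as a single function. This is a minimal sketch; the result strings are assumed labels for illustration, not names taken from the source.

```python
def judge_round(answer, main_text, auxiliary_texts):
    """Decide the interaction result of one text task (step S22 above)."""
    if main_text == answer:
        # The main interactive user spoke the answer: the task is invalidated.
        return "replacement_task"
    if any(text == answer for text in auxiliary_texts):
        return "interaction_success"
    # No auxiliary user matched the answer within the specified time.
    return "interaction_failure"
```

Note the ordering: the main interactive user's text is checked first, so a host who blurts out the answer invalidates the task even if an auxiliary user also answered correctly.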
Optionally, when the target service is a group confrontation type service and the task content is text task content, the server may obtain the interaction behavior data of the first interactive user and the second interactive user for the task content respectively; acquire the third audio data in the interaction behavior data corresponding to the first interactive user, and perform voice recognition on the third audio data to obtain a third converted text corresponding to the third audio data; acquire the fourth audio data in the interaction behavior data corresponding to the second interactive user, and perform voice recognition on the fourth audio data to obtain a fourth converted text corresponding to the fourth audio data; when the third converted text matches the answer information corresponding to the task content and the acquisition time of the third audio data is earlier than that of the fourth audio data, determine the interaction result corresponding to the task content to be an interaction success result for the first interactive user; when the fourth converted text matches the answer information and the acquisition time of the fourth audio data is earlier than that of the third audio data, determine the interaction result corresponding to the task content to be an interaction success result for the second interactive user; and when neither the third converted text nor the fourth converted text matches the answer information, determine the interaction result corresponding to the task content to be an interaction failure result.
In other words, the server may obtain the audio data corresponding to each main interactive user in the target service, invoke the voice recognition service to convert the audio data into text, and determine the interaction result of the text task content according to whether the converted texts match the answer to the task content; for the specific process of generating the interaction result, refer to the description in the embodiment corresponding to fig. 16, which is not repeated here.
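A hedged sketch of this first-correct-answer-wins rule for the group confrontation type: the `(converted_text, capture_timestamp)` tuple layout and the result labels are assumptions made for illustration. Ties in capture time are not covered by the description and fall through to failure here.

```python
def judge_versus(answer, first, second):
    """first, second: (converted_text, capture_timestamp) for the two hosts."""
    first_ok = first[0] == answer
    second_ok = second[0] == answer
    if first_ok and (not second_ok or first[1] < second[1]):
        return "success_first_user"   # first team answered correctly, earlier
    if second_ok and (not first_ok or second[1] < first[1]):
        return "success_second_user"
    return "interaction_failure"      # neither converted text matched
```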
Optionally, when the task content is the image task content, the server may obtain interactive behavior data of the main interactive user for the image task content, and perform expression recognition on interactive video data included in the interactive behavior data to obtain an expression recognition result corresponding to the interactive behavior data; when the expression recognition result is matched with the image task content, determining that an interaction result corresponding to the task content is an interaction success result; and when the expression recognition result is not matched with the image task content, determining that the interaction result corresponding to the task content is an interaction failure result. The image task content may include an emoticon or a word describing an emotion.
The server can invoke an expression recognition service to perform expression recognition on the interactive video data corresponding to the main interactive user and obtain the expression recognition result corresponding to the interactive video data. The expression recognition service may be a function integrated in the server, or an expression recognition service invoked from an artificial intelligence cloud service through an Application Programming Interface (API), which is not specifically limited herein.
The specific process of expression recognition may include the following. The server can perform framing processing on the interactive video data of the main interactive user to obtain at least two frames of video data, where each frame of video data can be regarded as image data. The server can obtain an expression recognition model, which can be trained on a large number of expression images and used to characterize the mapping relationship between images and expression labels; the expression recognition model may be a convolutional neural network, a deep neural network, or the like. Taking a convolutional neural network as an example, the server can select any frame of video data from the at least two frames and input it into the expression recognition model, perform feature extraction on the input frame through the convolutional layers to obtain the expression attribute features corresponding to the input frame, input the expression attribute features into a classifier to obtain the matching degree between the expression attribute features and each attribute feature in the classifier, and determine the label information corresponding to the attribute feature with the maximum matching degree as the expression label of the input frame. When the expression label matches the description word corresponding to the image task content, the interaction result of the image task content can be determined to be an interaction success result, and the remaining frames do not need expression recognition; when the expression label does not match, another frame can be selected from the at least two frames and recognized through the expression recognition model, and if no frame matching the image task content is recognized within the set time, the interaction result of the image task content can be determined to be an interaction failure result. For example, the image task content is a smile expression image; when the expression label "smile" is recognized in the interactive video data corresponding to the main interactive user, the main interactive user performs successfully, that is, the interaction result of the task content is an interaction success result; when the expression label "smile" is not recognized, the main interactive user fails to perform, that is, the interaction result of the task content is an interaction failure result.
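The frame-by-frame loop can be sketched as follows. Here `classify` stands in for the convolutional expression recognition model and is an assumption of this sketch; the set-time deadline is approximated by a frame-count limit.

```python
def recognize_expression(frames, target_label, classify, max_frames=None):
    """Scan video frames until one yields the target expression label;
    stop early on a match so remaining frames need no recognition."""
    limit = len(frames) if max_frames is None else min(max_frames, len(frames))
    for frame in frames[:limit]:
        if classify(frame) == target_label:
            return "interaction_success"
    return "interaction_failure"  # no matching frame within the set limit
```

Stopping at the first match is what lets the server skip expression recognition on the remaining frames, as the text describes.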
Optionally, when the target service is a sit-around type service, the server may obtain the voting information for the interaction behavior data sent by the devices to which the auxiliary interactive users belong; the voting information is generated by a device to which an auxiliary interactive user belongs in response to a voting trigger operation in the operation area, and the auxiliary interactive users refer to the interactive users other than the main interactive user among the at least two interactive users. The server counts the number of votes for the interaction behavior data according to the voting information, generates an interaction result according to the number of votes, and sends the interaction result to the terminal device, so that the terminal device displays prompt information associated with the interaction result in the service display page. In a sit-around type service, the auxiliary interactive users can judge the interaction behavior data corresponding to the main interactive user. When an auxiliary interactive user is satisfied with the interaction behavior data of the main interactive user for the task content, the auxiliary interactive user can trigger the voting function control in the operation area, so that the terminal device obtains the voting information of the auxiliary interactive user and sends it to the server. The server can count the received voting information; the number of pieces of voting information is the number of votes for the interaction behavior data. When the number of votes reaches a number threshold (such as half of the number of auxiliary interactive users), the interaction result corresponding to the main interactive user can be determined to be an interaction success result; when the number of votes does not reach the number threshold, the interaction result corresponding to the main interactive user can be determined to be an interaction failure result.
The server can send the interaction result to the terminal device, namely, the server informs the terminal device of the interaction result corresponding to the main interaction user.
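The vote tally for the sit-around type can be sketched like this; the half-the-audience threshold follows the example in the text, and deduplicating voter IDs is an assumption of this sketch.

```python
def tally_votes(voter_ids, num_auxiliary_users):
    """Count distinct votes and compare against the threshold
    (e.g. half of the auxiliary interactive users)."""
    threshold = num_auxiliary_users / 2  # assumed threshold rule
    if len(set(voter_ids)) >= threshold:
        return "interaction_success"
    return "interaction_failure"
```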
Please refer to fig. 17a and fig. 17b, which are schematic diagrams of a service processing framework according to an embodiment of the present application. As shown in fig. 17a, taking the target service being an interactive game as an example, when the server detects that the interactive game is switched from an inactive state to an active state, that is, when the game starts, the server may pull game content from the interactive game, where the game content may include task content, task pictures, interface pictures, in-game animation effects, videos played in the game, and the like. The server may also pull game content when it detects a game heartbeat abnormality 100a, where the game heartbeat abnormality 100a may mean that the real-time progress information in the game is inconsistent with the display progress information in the service display page; for example, due to a poor network, the content displayed in the service display page of the terminal device may lag behind the real-time progress of the game, that is, the background progress is inconsistent with the progress of the client.
The server may aggregate the pulled game content to obtain the game data 100c. Of course, the game data 100c may further include data corresponding to game events 100b pushed by the server; for example, after the interaction time of a certain user in the game is over, the game event 100b pushed by the server may be an instruction pushed to the terminal device to switch to the next user for interaction, or an instruction pushed to the terminal device indicating that the game is over. The server may perform data processing in the game state machine 100d based on the game data 100c. The game state machine 100d may include at least three types: taking turns to perform 100e, sitting around playing games 100f (which may also be referred to as the sit-around type), and multi-team confrontation 100g (which may also be referred to as the group confrontation type).
As shown in fig. 17b, the data processing procedure in the game state machine 100d may include the following steps S40-S44:
in step S40, the server may obtain game data in the interactive game through the game content pulled when the game starts, the game content pulled when the game heartbeat is abnormal 100a, and the data generated when the server pushes the game event 100 b.
In step S41, the server may perform logic processing according to the game data, where the service logic may be used to implement functions in the game, and the service logic may include service judgment conditions, execution sequence, execution mode, and the like in the game data processing process.
Step S42, in the game process, the server needs to process a notification event, where the notification event may be a notification instruction sent by the server to the terminal device, such as outputting a prompt message in a service display page of the game, notifying the main interactive user to switch to the next question for performing, or notifying each member in the game that the current question has been answered successfully.
In step S43, the server may compare the game scene with the rendered scene and draw the scene for display on the terminal screen according to the comparison result. The game scene can refer to the environment, props, and the like in a game; different game types have different game scenes, and the three game types of taking turns to perform, sitting around playing games, and multi-player team confrontation each correspond to different game scenes. The rendered scene can refer to the scene currently being rendered. When the game scene matches the rendered scene, the two belong to the same type of game, and content can be drawn on the basis of the existing rendered scene.
In step S44, when the game scene differs from the rendering scene, it indicates that the game scene and the rendering scene belong to different types of games, and the already drawn content may be cleared and a new scene may be redrawn.
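Steps S43-S44 reduce to a compare-then-draw rule. In the sketch below a Python list stands in for the real render canvas; that substitution, and the function name, are assumptions for illustration.

```python
def render(game_scene, rendered_scene, canvas):
    """Reuse the canvas when the scene types match (S43); clear the drawn
    content and redraw when they differ (S44)."""
    if game_scene != rendered_scene:
        canvas.clear()          # drawn content belongs to another game type
    canvas.append(game_scene)   # draw on the (possibly freshly cleared) scene
    return canvas
```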
In the embodiment of the present application, a service display page for a target service in an instant messaging application may include a main area and an auxiliary area. The auxiliary area may be used to display the interactive users participating in the target service, and the main area may be used to display the interaction behavior data of a selected user (i.e., the main interactive user). After the main interactive user is determined in the target service, the main interactive user may be displayed in the main area, that is, the main interactive user may interact with other users in the main area, thereby enriching the interaction modes in the instant messaging application. After the main interactive user is selected from the interactive users, the video picture of the main interactive user can be switched from the auxiliary area to the main area for display, so that the interactive users in the target service can quickly locate the video picture of the main interactive user and watch the interactive content output in that video picture, which can reduce the probability of requiring the main interactive user to provide the same interactive content again and thus save data traffic. In the target service, the interactive users can complete the interaction process directly through voice communication without additional click operations; the operation is simple, the execution efficiency in the target service can be improved, and the interest of the target service is enhanced.
Referring to fig. 18, fig. 18 is a schematic structural diagram of a service data processing apparatus according to an embodiment of the present application. As shown in fig. 18, the service data processing apparatus 1 may correspond to any terminal device in the embodiment corresponding to fig. 1, and the service data processing apparatus 1 may include: a first display module 101, a second display module 102, a first determining module 103, and a third display module 104;
the first display module 101 is configured to respond to a trigger operation for a target service in an instant messaging application and display a service display page matched with the target service;
the second display module 102 is configured to display at least two interactive users in an auxiliary area of a service display page;
a first determining module 103, configured to determine a primary interactive user from the at least two interactive users in response to a confirmation operation for the secondary area;
the third display module 104 is configured to display the main interactive user to a main area in the service display page; the main area is used for displaying the interactive behavior data of the main interactive user in the target service.
For specific functional implementation manners of the first display module 101, the second display module 102, the first determining module 103, and the third display module 104, reference may be made to steps S101 to S104 in the embodiment corresponding to fig. 3, which is not described herein again.
Referring to fig. 18, the service data processing apparatus 1 may further include: a creation module 105, an invitation module 106, and an addition module 107;
the creating module 105 is configured to respond to a session creating operation in the instant messaging application and create a service session corresponding to a target service for a target user;
the invitation module 106 is configured to respond to an invitation operation in a service session and send invitation information corresponding to a target service to a server;
and the adding module 107 is configured to determine, when a confirmation participation request that the server returns the invitation information is received, the target user and the candidate user associated with the confirmation participation request as at least two interactive users.
The specific functional implementation manners of the creating module 105, the inviting module 106, and the adding module 107 may refer to step S101 in the embodiment corresponding to fig. 3, which is not described herein again.
Referring to fig. 18, the service data processing apparatus 1 may further include: a first data output module 109, a behavior data sending module 110, a first result display module 111;
the first data output module 109 is configured to output interaction behavior data of the primary interaction user for task content in the target service, and feedback behavior data of the secondary interaction user for the interaction behavior data; the auxiliary interactive user refers to an interactive user except the main interactive user in at least two interactive users;
the behavior data sending module 110 is configured to send the acquired interaction behavior data to a server, and receive an interaction result determined by the server according to the interaction behavior data and the feedback behavior data; the interaction result is associated with the task content;
and the first result display module 111 is configured to display prompt information associated with the interaction result in the service display page, and update task content in the target service according to the prompt information.
For specific functional implementation manners of the first data output module 109, the behavior data sending module 110, and the first result display module 111, reference may be made to steps S203 to S205 in the embodiment corresponding to fig. 5, which is not described herein again.
Referring to fig. 18, the second display module 102 may include: a size determination unit 1021, an area division unit 1022, a screen display unit 1023;
a size determining unit 1021, configured to determine, according to the total number of users of the at least two interactive users, a display size corresponding to the auxiliary area in the service display page;
the region dividing unit 1022 is configured to divide the auxiliary region into at least two unit auxiliary regions according to the total number of users and the display size corresponding to the auxiliary region; the number corresponding to at least two unit auxiliary areas is the same as the total number of users;
and a screen display unit 1023 for displaying at least two interactive users in at least two unit auxiliary areas respectively.
The specific functional implementation manners of the size determining unit 1021, the area dividing unit 1022, and the image display unit 1023 can refer to step S302 in the embodiment corresponding to fig. 8, which is not described herein again.
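The division performed by the size determining unit 1021 and the area dividing unit 1022 can be sketched as an equal split, one unit auxiliary area per interactive user. A single horizontal strip is assumed here for simplicity; a real layout might use a grid.

```python
def split_auxiliary_area(width, height, total_users):
    """Return one unit auxiliary area (x, y, w, h) per interactive user,
    so the unit-area count equals the total number of users."""
    unit_width = width / total_users
    return [(i * unit_width, 0.0, unit_width, float(height))
            for i in range(total_users)]
```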
Referring to fig. 18, when the service presentation page includes an operation control, and the confirmation operation includes a start instruction and a stop instruction, the first determining module 103 may include: a traversal unit 1031, a determination unit 1032;
a traversal unit 1031, configured to perform polling traversal on at least two interactive users in the auxiliary area in response to a start instruction for the operation control;
a determining unit 1032, configured to stop performing polling traversal on at least two interactive users in the auxiliary area in response to a stop instruction for the operation control, and determine an interactive user pointed when the polling traversal is stopped as a primary interactive user;
the third display module 104 is specifically configured to:
and highlighting the display area of the main interactive user in the auxiliary area, and determining the highlighted display area as the main area in the service display page.
The specific functional implementation manners of the traversal unit 1031, the determination unit 1032, and the third display module 104 may refer to steps S303 to S305 in the embodiment corresponding to fig. 8, which is not described herein again.
Referring to fig. 18, the service data processing apparatus 1 may further include: a second data output module 113, a vote number acquisition module 114, a reselect user module 115, and an update data output module 116;
the second data output module 113 is configured to display task content in the main area and output interaction behavior data of a main interaction user for the task content;
a vote number obtaining module 114, configured to obtain the number of votes of the auxiliary interactive users for the interaction behavior data; the auxiliary interactive users refer to the interactive users other than the main interactive user among the at least two interactive users;
a reselect user module 115, configured to reselect the primary interactive user from the secondary area when the number of votes is greater than or equal to the number threshold;
and the update data output module 116 is further configured to output the update interaction behavior data of the main interaction user for the task content in the main area when the number of votes is less than the number threshold.
The specific functional implementation manners of the second data output module 113, the vote number obtaining module 114, the reselecting user module 115, and the updated data output module 116 may refer to steps S306 to S307 in the embodiment corresponding to fig. 8, which is not described herein again.
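The reselect-or-continue rule implemented by modules 115 and 116 can be sketched as one function; the candidate ordering and names below are assumptions for illustration.

```python
def decide_host(vote_count, threshold, current_host, auxiliary_users):
    """Reselect the main interactive user from the auxiliary area when the
    vote count reaches the threshold; otherwise the current host continues
    (and updated interaction behavior data is output in the main area)."""
    if vote_count >= threshold and auxiliary_users:
        return auxiliary_users[0]  # assumed: first auxiliary user is next
    return current_host
```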
Referring to fig. 18 together, when the auxiliary area includes a first auxiliary area and a second auxiliary area, the second display module 102 may include: a user dividing unit 1024, a grouping display unit 1025;
the user dividing unit 1024 is configured to group the at least two interactive users according to the group identifiers respectively corresponding to the at least two interactive users, so as to obtain a first user group and a second user group;
the grouping display unit 1025 is configured to display the interactive users included in the first user group in the first auxiliary area and display the interactive users included in the second user group in the second auxiliary area.
The specific functional implementation manners of the user dividing unit 1024 and the grouping display unit 1025 may refer to steps S402 to S403 in the embodiment corresponding to fig. 11, which is not described herein again.
Referring to fig. 18, the main interactive user includes a first interactive user in a first user group and a second interactive user in a second user group, and the main area includes a first main area and a second main area;
the third display module 104 is specifically configured to:
and displaying the first interactive user to the first main area, and displaying the second interactive user to the second main area.
The specific function implementation manner of the third display module 104 may refer to step S405 in the embodiment corresponding to fig. 11, which is not described herein again.
Referring to fig. 18, the service data processing apparatus 1 may further include: a task display module 117, a packet data output module 118, a result receiving module 119, and a second result display module 120;
a task display module 117, configured to display task content in a service presentation page; the task content is independently displayed in the first main area and the second main area;
the packet data output module 118 is configured to output interactive behavior data of the first interactive user and the second interactive user for the task content respectively;
a result receiving module 119, configured to receive an interaction result determined by the server according to interaction behavior data corresponding to the first interaction user and the second interaction user, respectively; the interaction result is associated with the task content;
and a second result display module 120, configured to update and display the first interactive user in the first main area and the second interactive user in the second main area according to the interactive result.
The specific functional implementation manners of the task display module 117, the packet data output module 118, the result receiving module 119, and the second result display module 120 may refer to steps S406 to S408 in the embodiment corresponding to fig. 11, which is not described herein again.
Referring to fig. 18, the second result display module 120 may include: a first replacement subscriber unit 1201, a second replacement subscriber unit 1202;
the first replacement user unit 1201 is configured to, when the interaction result is an interaction success result corresponding to the first interactive user, move the display of the second interactive user from the second main area back to the second auxiliary area, select the next interactive user from the second auxiliary area as the updated second interactive user, and display the updated second interactive user in the second main area;
and the second replacement user unit 1202 is configured to, when the interaction result is an interaction success result corresponding to the second interactive user, move the display of the first interactive user from the first main area back to the first auxiliary area, select the next interactive user from the first auxiliary area as the updated first interactive user, and display the updated first interactive user in the first main area.
For specific functional implementation manners of the first replacement user unit 1201 and the second replacement user unit 1202, reference may be made to step S408 in the embodiment corresponding to fig. 11, which is not described herein again.
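The host rotation performed by units 1201 and 1202 can be sketched with two queues, one per team; the team keys and queue ordering are assumptions of this sketch.

```python
def rotate_opponent_host(winning_team, hosts, waiting):
    """After one team's host wins, the opposing team returns its host to the
    auxiliary area and promotes the next waiting user as the updated host."""
    loser = "team2" if winning_team == "team1" else "team1"
    waiting[loser].append(hosts[loser])   # old host goes back to the aux area
    hosts[loser] = waiting[loser].pop(0)  # next user becomes the updated host
    return hosts
```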
Referring to fig. 18, the service data processing apparatus 1 may further include: a result counting module 121, a result page display module 122, a progress obtaining module 123 and a page updating module 124;
a result counting module 121, configured to obtain interaction behavior data corresponding to a main interaction user when a target service is in a service end state, and count interaction success results corresponding to at least two interaction users respectively;
and the result page display module 122 is configured to generate a result ranking list according to the interaction success result, and display the interaction behavior data and the result ranking list in the service result page.
The progress acquiring module 123 is configured to acquire, in real time, real-time progress information corresponding to the target service through a heartbeat mechanism, and acquire display progress information corresponding to a service display page;
and the page updating module 124 is configured to update the service display page according to the real-time progress information when the real-time progress information is inconsistent with the display progress information.
The specific functional implementation manners of the result counting module 121, the result page displaying module 122, the progress obtaining module 123, and the page updating module 124 may refer to step S104 in the embodiment corresponding to fig. 3, which is not described herein again.
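The progress obtaining module 123 and the page updating module 124 together implement a compare-and-refresh loop driven by the heartbeat. A minimal sketch, assuming the heartbeat response and the page state are plain values (the field names are hypothetical):

```python
def heartbeat_sync(fetch_server_progress, page_state):
    # One heartbeat tick: fetch the real-time progress of the target service
    # and refresh the service display page only when it differs from the
    # progress currently shown.
    real_time = fetch_server_progress()
    if real_time != page_state["display_progress"]:
        page_state["display_progress"] = real_time
        page_state["updated"] = True
    else:
        page_state["updated"] = False
    return page_state
```

Updating only on mismatch keeps the page consistent with the server while avoiding redundant redraws on every heartbeat.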
In the embodiments of the present application, a service display page for a target service in an instant messaging application may include a main area and an auxiliary area. The auxiliary area may be used to display the interactive users participating in the target service, and the main area may be used to display the interaction behavior data of a selected user (i.e., the main interactive user). After the main interactive user is determined in the target service, the main interactive user may be displayed in the main area, that is, the main interactive user may interact with the other users in the main area, thereby enriching the interaction modes in the instant messaging application. Because the video picture of the main interactive user is displayed in the main area, the interactive users in the target service can quickly locate the video picture of the main interactive user and watch the interactive content output in that video picture, which reduces the probability of requiring the main interactive user to provide the same interactive content again and saves data traffic.
Referring to fig. 19, fig. 19 is a schematic structural diagram of a service data processing apparatus according to an embodiment of the present application. As shown in fig. 19, the service data processing apparatus 2 may correspond to the server 10d in the embodiment corresponding to fig. 1, and the service data processing apparatus 2 may include: an obtaining module 21, a first sending module 22, a second determining module 23, and a second sending module 24;
an obtaining module 21, configured to obtain multimedia data corresponding to at least two interactive users in a target service respectively;
the first sending module 22 is configured to send the multimedia data corresponding to the at least two interactive users to the terminal device, so that the terminal device displays the multimedia data corresponding to the at least two interactive users in the auxiliary area of the service display page;
a second determining module 23, configured to determine a primary interactive user from at least two interactive users;
the second sending module 24 is configured to send a region replacement instruction to the terminal device, instructing the terminal device to display the main interactive user in the main area of the service display page according to the region replacement instruction; the main area is used for displaying the interaction behavior data of the main interactive user in the target service.
For specific functional implementation manners of the obtaining module 21, the first sending module 22, the second determining module 23, and the second sending module 24, reference may be made to steps S501 to S506 in the embodiment corresponding to fig. 14, which is not described herein again.
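The cooperation of modules 21-24 amounts to a short server-side sequence: push every user's stream, select a main interactive user, then instruct the terminal to change regions. The sketch below is a minimal illustration; `send` and `pick_main` are injected stand-ins for the network layer and the selection policy, neither of which is prescribed by the disclosure.

```python
def run_selection_round(interactive_users, send, pick_main):
    # Push each user's multimedia stream to the terminal (displayed in the
    # auxiliary area), pick a main interactive user, then issue the region
    # replacement instruction (modules 21-24 above).
    for user in interactive_users:
        send({"type": "multimedia", "user": user})
    main_user = pick_main(interactive_users)
    send({"type": "region_replacement", "user": main_user})
    return main_user
```

Keeping the selection policy injectable mirrors the fact that the embodiments allow several ways of determining the main interactive user (e.g., random selection or polling traversal).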
Referring to fig. 19, when the service display page includes task content, the service data processing apparatus 2 may further include: a result generation module 25 and a result sending module 26;
the result generation module 25 is configured to obtain interaction behavior data of the main interaction user for the task content, and generate an interaction result corresponding to the task content according to the interaction behavior data;
and the result sending module 26 is configured to send the interaction result to the terminal device, so that the terminal device displays a prompt message associated with the interaction result in the service display page.
The specific functional implementation manners of the result generating module 25 and the result sending module 26 may refer to step S506 in the embodiment corresponding to fig. 14, which is not described herein again.
Referring also to fig. 19, when the task content includes text task content, the result generation module 25 may include: a first identifying unit 2501, a second identifying unit 2502, a first comparing unit 2503, a second comparing unit 2504 and a third comparing unit 2505;
the first recognition unit 2501 is configured to obtain interaction behavior data of a main interaction user for text task content, perform voice recognition on first audio data included in the interaction behavior data, and obtain a first conversion text corresponding to the interaction behavior data;
the second identification unit 2502 is configured to acquire second audio data of the auxiliary interactive user for the interaction behavior data, perform voice recognition on the second audio data, and obtain a second conversion text corresponding to the second audio data; the auxiliary interactive user refers to an interactive user other than the main interactive user among the at least two interactive users;
a first comparing unit 2503, configured to determine that an interaction result corresponding to the task content is a replacement task when the first conversion text matches the text task content;
the second comparing unit 2504 is configured to determine that an interaction result corresponding to the task content is an interaction success result when the first conversion text is not matched with the text task content and the second conversion text is matched with the text task content;
the third comparing unit 2505 is configured to determine that the interaction result corresponding to the task content is an interaction failure result when neither the first conversion text nor the second conversion text matches the text task content.
For specific functional implementation manners of the first identifying unit 2501, the second identifying unit 2502, the first comparing unit 2503, the second comparing unit 2504, and the third comparing unit 2505, reference may be made to step S506 in the embodiment corresponding to fig. 14, which is not described herein again.
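The three comparing units above encode a three-way decision over the recognized texts. A minimal sketch of that decision, assuming voice recognition has already produced the conversion texts and using exact string equality as a stand-in for the unspecified matching rule:

```python
def evaluate_text_task(first_text, second_text, task_text, matches=None):
    # first_text: recognized speech of the main interactive user
    # second_text: recognized speech of an auxiliary interactive user
    # Exact equality stands in for the matching rule left open above.
    if matches is None:
        matches = lambda a, b: a.strip() == b.strip()
    if matches(first_text, task_text):
        return "replace_task"          # main user spoke the answer aloud
    if matches(second_text, task_text):
        return "interaction_success"   # auxiliary user guessed correctly
    return "interaction_failure"
```

A fuzzier comparison (e.g., normalized edit distance) could be supplied through `matches` without changing the decision structure.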
Referring to fig. 19 together, when the task content includes image task content, the result generation module 25 may include: a third identifying unit 2506, a fourth comparing unit 2507, a fifth comparing unit 2508;
a third identifying unit 2506, configured to obtain interaction behavior data of the main interaction user for the image task content, and perform expression identification on interaction video data included in the interaction behavior data to obtain an expression identification result corresponding to the interaction behavior data;
a fourth comparing unit 2507, configured to determine that an interaction result corresponding to the task content is an interaction success result when the expression recognition result matches the image task content;
a fifth comparing unit 2508, configured to determine that the interaction result corresponding to the task content is an interaction failure result when the expression recognition result does not match the image task content.
For specific functional implementation manners of the third identifying unit 2506, the fourth comparing unit 2507 and the fifth comparing unit 2508, reference may be made to step S506 in the embodiment corresponding to fig. 14, which is not described herein again.
Referring to fig. 19, when the main interactive user includes a first interactive user and a second interactive user, and both the first interactive user and the second interactive user belong to the at least two interactive users, the result generation module 25 may include: a behavior data acquisition unit 2509, a fourth recognition unit 2510, a fifth recognition unit 2511, a sixth comparison unit 2512, a seventh comparison unit 2513, and an eighth comparison unit 2514;
a behavior data acquiring unit 2509, configured to acquire interaction behavior data of the first interaction user and the second interaction user for task content respectively;
a fourth identifying unit 2510, configured to obtain third audio data in the interactive behavior data corresponding to the first interactive user, and perform voice identification on the third audio data to obtain a third conversion text corresponding to the third audio data;
a fifth identifying unit 2511, configured to obtain fourth audio data in the interaction behavior data corresponding to the second interaction user, perform voice recognition on the fourth audio data, and obtain a fourth conversion text corresponding to the fourth audio data;
a sixth comparing unit 2512, configured to determine that the interaction result corresponding to the task content is an interaction success result for the first interactive user when the third conversion text matches the answer information corresponding to the task content and the acquisition time of the third audio data is earlier than that of the fourth audio data;
a seventh comparing unit 2513, configured to determine that the interaction result corresponding to the task content is an interaction success result for the second interactive user when the fourth conversion text is matched with the answer information and the acquisition time of the fourth audio data is earlier than that of the third audio data;
an eighth comparing unit 2514, configured to determine that the interaction result corresponding to the task content is an interaction failure result when neither the third conversion text nor the fourth conversion text matches the answer information.
The specific functional implementation manners of the behavior data obtaining unit 2509, the fourth identifying unit 2510, the fifth identifying unit 2511, the sixth comparing unit 2512, the seventh comparing unit 2513, and the eighth comparing unit 2514 may refer to step S506 in the embodiment corresponding to fig. 14, which is not described herein again.
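Units 2512-2514 decide a race between the two competing users: the earliest correct answer wins. A minimal sketch, where the answer records (recognized text plus audio acquisition time) are an assumed data shape:

```python
def judge_answer_race(first_answer, second_answer, correct_answer):
    # first_answer / second_answer: recognized text and audio acquisition
    # time for the first and second interactive users. The earliest correct
    # answer wins; if neither text matches, the interaction fails.
    first_ok = first_answer["text"] == correct_answer
    second_ok = second_answer["text"] == correct_answer
    if first_ok and (not second_ok
                     or first_answer["time"] < second_answer["time"]):
        return "first_user_success"
    if second_ok and (not first_ok
                      or second_answer["time"] < first_answer["time"]):
        return "second_user_success"
    return "interaction_failure"
```

The case where both users answer correctly at exactly the same time is not specified in the disclosure; the sketch arbitrarily reports a failure there.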
Referring to fig. 19, when the service display page includes an operation area, the service data processing apparatus 2 may further include: a vote information acquisition module 27, a vote count statistic module 28;
a voting information obtaining module 27, configured to obtain voting information for the interaction behavior data sent by the device to which the auxiliary interactive user belongs; the voting information is generated by the device to which the auxiliary interactive user belongs in response to a vote trigger operation in the operation area, and the auxiliary interactive user refers to an interactive user other than the main interactive user among the at least two interactive users;
and the vote counting module 28 is configured to count the votes of the interaction behavior data according to the vote information, generate an interaction result according to the votes, and send the interaction result to the terminal device, so that the terminal device displays prompt information associated with the interaction result in the service display page.
The specific functional implementation manners of the vote information obtaining module 27 and the vote counting module 28 may refer to step S506 in the embodiment corresponding to fig. 14, and are not described herein again.
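The tallying performed by the vote counting module 28 can be sketched as follows; the message format is an assumption, and the threshold-based replacement decision follows the rule stated later in claim 12.

```python
def tally_votes(vote_messages, threshold):
    # Count "replace" votes from the auxiliary interactive users and decide
    # whether the main interactive user should be reselected (cf. claim 12:
    # reselect when the number of votes reaches the threshold).
    votes = sum(1 for msg in vote_messages if msg.get("vote") == "replace")
    return {"votes": votes, "replace_main_user": votes >= threshold}
```

The returned dictionary is one possible shape for the interaction result that the server then pushes to the terminal device for display.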
In the embodiments of the present application, a service display page for a target service in an instant messaging application may include a main area and an auxiliary area. The auxiliary area may be used to display the interactive users participating in the target service, and the main area may be used to display the interaction behavior data of a selected user (i.e., the main interactive user). After the main interactive user is determined in the target service, the main interactive user may be displayed in the main area, that is, the main interactive user may interact with the other users in the main area, thereby enriching the interaction modes in the instant messaging application. Once the main interactive user is displayed in the main area, the interactive users in the target service can quickly locate the video picture of the main interactive user and watch the interactive content output in that video picture, which reduces the probability of requiring the main interactive user to provide the same interactive content again and saves data traffic. Moreover, in the target service, the interactive users can complete the interaction directly through voice communication without additional click operations; the operation is simple, the execution efficiency of the target service can be improved, and the target service becomes more interesting.
Referring to fig. 20, fig. 20 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 20, the computer device 1000 may correspond to any terminal device in the embodiment corresponding to fig. 1, and the computer device 1000 may include: a processor 1001, a network interface 1004, and a memory 1005. The computer device 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to implement connection communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and optionally, the user interface 1003 may also include a standard wired interface and a standard wireless interface. Optionally, the network interface 1004 may include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory, or a non-volatile memory, such as at least one disk memory. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 20, the memory 1005, which is a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 1000 shown in fig. 20, the network interface 1004 may provide a network communication function, and the user interface 1003 is mainly used to provide an input interface for a user; the processor 1001 may be configured to invoke the device control application stored in the memory 1005 to implement:
responding to a trigger operation aiming at a target service in the instant messaging application, and displaying a service display page matched with the target service;
displaying at least two interactive users in an auxiliary area of a service display page;
determining a main interactive user from the at least two interactive users in response to the confirmation operation for the auxiliary area;
displaying the main interactive user to a main area in a service display page; the main area is used for displaying the interactive behavior data of the main interactive user in the target service.
It should be understood that the computer device 1000 described in this embodiment may perform the description of the service data processing method in the embodiment corresponding to fig. 3, and may also perform the description of the service data processing apparatus 1 in the embodiment corresponding to fig. 18, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, here, it is to be noted that: an embodiment of the present application further provides a computer-readable storage medium, where a computer program executed by the aforementioned service data processing apparatus 1 is stored in the computer-readable storage medium, and the computer program includes program instructions, and when the processor executes the program instructions, the description of the service data processing method in any one of the embodiments corresponding to fig. 3, fig. 5, fig. 8, and fig. 11 can be executed, so that details are not repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of embodiments of the method of the present application. As an example, the program instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network, which may constitute a block chain system.
Referring to fig. 21, fig. 21 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 21, the computer device 2000 may correspond to the server 10d in the embodiment corresponding to fig. 1, and the computer device 2000 may include: a processor 2001, a network interface 2004, and a memory 2005. The computer device 2000 may further include: a user interface 2003 and at least one communication bus 2002. The communication bus 2002 is used to implement connection communication between these components. The user interface 2003 may include a display (Display) and a keyboard (Keyboard), and optionally, the user interface 2003 may also include a standard wired interface and a standard wireless interface. Optionally, the network interface 2004 may include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 2005 may be a high-speed RAM memory, or a non-volatile memory, such as at least one disk memory. Optionally, the memory 2005 may also be at least one storage device located remotely from the processor 2001. As shown in fig. 21, the memory 2005, which is a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 2000 shown in fig. 21, the network interface 2004 may provide a network communication function, and the user interface 2003 is mainly used to provide an input interface for a user; the processor 2001 may be configured to invoke the device control application stored in the memory 2005 to implement:
acquiring multimedia data corresponding to at least two interactive users in a target service respectively;
sending the multimedia data respectively corresponding to at least two interactive users to the terminal equipment so that the terminal equipment displays the multimedia data respectively corresponding to at least two interactive users in the auxiliary area of the service display page;
determining a main interactive user from at least two interactive users;
sending a region replacement instruction to the terminal equipment, and indicating the terminal equipment to display the main interactive user to a main region in the service display page according to the region replacement instruction; the main area is used for displaying the interactive behavior data of the main interactive user in the target service.
It should be understood that the computer device 2000 described in this embodiment may perform the description of the service data processing method in the embodiment corresponding to fig. 14, and may also perform the description of the service data processing apparatus 2 in the embodiment corresponding to fig. 19, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, here, it is to be noted that: an embodiment of the present application further provides a computer-readable storage medium, where the computer program executed by the aforementioned service data processing apparatus 2 is stored in the computer-readable storage medium, and the computer program includes program instructions, and when the processor executes the program instructions, the description of the service data processing method in the embodiment corresponding to fig. 14 can be performed, so that details are not repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of embodiments of the method of the present application. As an example, the program instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network, which may constitute a block chain system.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is merely illustrative of the preferred embodiments of the present application and is not intended to limit the scope of the claims of the present application; therefore, equivalent variations made in accordance with the claims of the present application still fall within the scope of the present application.

Claims (19)

1. A method for processing service data is characterized by comprising the following steps:
responding to starting operation aiming at a target service in the instant messaging application, and displaying a service display page matched with the target service;
displaying video pictures respectively corresponding to at least two interactive users in an auxiliary area of the service display page; the at least two interactive users perform interaction and service control in the target service through voice data;
determining a main interactive user from the at least two interactive users in response to a confirmation operation for the auxiliary area;
switching and displaying the video picture corresponding to the main interactive user from the auxiliary area to a main area in the service display page; the size of the video picture displayed in the main area is larger than that of the video picture displayed in the auxiliary area;
outputting interaction behavior data of the main interaction user aiming at the task content in the target service in the main area, and outputting feedback behavior data of the auxiliary interaction user aiming at the interaction behavior data in the target service; the auxiliary interactive user refers to an interactive user except the main interactive user in the at least two interactive users, and the feedback behavior data comprises voice data of the auxiliary interactive user.
2. The method of claim 1, further comprising:
responding to the session establishing operation in the instant communication application, and establishing a service session corresponding to the target service for a target user;
responding to the invitation operation in the service session, and sending invitation information corresponding to the target service to a server;
and when a confirmation participation request of the server returning the invitation information is received, determining the target user and the candidate user associated with the confirmation participation request as the at least two interactive users.
3. The method of claim 1, further comprising:
sending the acquired interaction behavior data to a server, and receiving an interaction result determined by the server according to the interaction behavior data and the feedback behavior data; the interaction result is associated with the task content;
and displaying prompt information associated with the interaction result in the service display page, and updating the task content in the target service according to the prompt information.
4. The method according to claim 1, wherein the displaying, in the auxiliary area of the service display page, video pictures respectively corresponding to at least two interactive users comprises:
determining a display size corresponding to the auxiliary area in the service display page according to the total number of the at least two interactive users;
dividing the auxiliary area into at least two unit auxiliary areas according to the total number of the users and the display size corresponding to the auxiliary area; the number corresponding to the at least two unit auxiliary areas is the same as the total number of the users;
and respectively displaying the video pictures of the at least two interactive users in the at least two unit auxiliary areas.
5. The method of claim 1, wherein the secondary region comprises a first secondary region and a second secondary region;
the displaying, in the auxiliary area of the service display page, video pictures respectively corresponding to at least two interactive users comprises:
grouping the at least two interactive users according to the group identifications respectively corresponding to the at least two interactive users to obtain a first user group and a second user group;
and displaying a video picture corresponding to the interactive user included in the first user group in the first auxiliary area, and displaying a video picture corresponding to the interactive user included in the second user group in the second auxiliary area.
6. The method of claim 5, wherein the primary interactive user comprises a first interactive user in the first group of users and a second interactive user in the second group of users; the main area comprises a first main area and a second main area, and the auxiliary area comprises a first auxiliary area and a second auxiliary area;
the switching and displaying the video picture corresponding to the main interactive user from the auxiliary area to the main area in the service display page comprises:
and switching and displaying the video picture corresponding to the first interactive user from the first auxiliary area to the first main area, and switching and displaying the video picture corresponding to the second interactive user from the second auxiliary area to the second main area.
7. The method of claim 6, further comprising:
displaying task content in the service display page; the task content is independently displayed in the first main area and the second main area;
outputting interactive behavior data of the first interactive user and the second interactive user respectively aiming at the task content;
receiving an interaction result determined by the server according to interaction behavior data respectively corresponding to the first interaction user and the second interaction user; the interaction result is associated with the task content;
and according to the interaction result, updating and displaying the first interaction user in the first main area and the second interaction user in the second main area respectively.
8. The method of claim 7, wherein the updating and displaying the first interactive user in the first main area and the second interactive user in the second main area according to the interactive result respectively comprises:
when the interaction result is an interaction success result corresponding to the first interaction user, displaying the second interaction user from the second main area to the second auxiliary area, selecting a next interaction user from the second auxiliary area as an updated second interaction user, and displaying the updated second interaction user to the second main area;
and when the interaction result is an interaction success result corresponding to the second interaction user, displaying the first interaction user from the first main area to the first auxiliary area, selecting the next interaction user from the first auxiliary area as an updated first interaction user, and displaying the updated first interaction user to the first main area.
9. The method of claim 1, further comprising:
when the target service is in a service ending state, acquiring the interaction behavior data corresponding to the main interaction user, and counting interaction success results corresponding to the at least two interaction users respectively;
and generating a result ranking list according to the interaction success result, and displaying the interaction behavior data and the result ranking list in a service result page.
10. The method of claim 1, further comprising:
acquiring real-time progress information corresponding to the target service in real time through a heartbeat mechanism, and acquiring display progress information corresponding to the service display page;
and when the real-time progress information is inconsistent with the display progress information, updating the service display page according to the real-time progress information.
11. A method for processing service data is characterized by comprising the following steps:
responding to starting operation aiming at a target service in the instant messaging application, and displaying a service display page matched with the target service;
displaying video pictures respectively corresponding to at least two interactive users in an auxiliary area of the service display page; the at least two interactive users perform interaction and service control in the target service through voice data;
responding to a starting instruction aiming at an operation control in the service display page, and performing polling traversal on the at least two interactive users in the auxiliary area;
responding to a stopping instruction aiming at the operation control, stopping performing polling traversal on the at least two interactive users in the auxiliary area, and determining an interactive user pointed when the polling traversal is stopped as a main interactive user; the start instruction and the stop instruction are instructions generated according to user voice data;
highlighting the video picture of the main interactive user in a display area in the auxiliary area, and determining the highlighted display area as a main area in the service display page; the main area is used for displaying the interactive behavior data of the main interactive user in the target service.
12. The method of claim 11, further comprising:
displaying task content in the main area, and outputting interaction behavior data of the main interaction user aiming at the task content;
obtaining the vote number of the auxiliary interactive users aiming at the interactive behavior data; the auxiliary interactive user refers to an interactive user except the main interactive user in the at least two interactive users;
reselecting a primary interactive user from the secondary area when the number of votes is greater than or equal to a number threshold;
when the number of votes is smaller than the number threshold, outputting updated interactive behavior data of the main interactive user for the task content in the main area.
13. A method for processing service data is characterized by comprising the following steps:
acquiring video pictures respectively corresponding to at least two interactive users in a target service; the at least two interactive users perform interaction and service control in the target service through voice data;
sending the video pictures respectively corresponding to the at least two interactive users to terminal equipment so that the terminal equipment displays the video pictures respectively corresponding to the at least two interactive users in an auxiliary area of a service display page;
determining a main interactive user from the at least two interactive users;
sending a region replacement instruction to the terminal equipment, instructing the terminal equipment to switch and display the video picture corresponding to the main interactive user from the auxiliary region to the main region in the service display page according to the region replacement instruction, and outputting interactive behavior data of the main interactive user aiming at task content in the target service and feedback behavior data of the auxiliary interactive user aiming at the interactive behavior data; the size of the video picture displayed in the main area is larger than that of the video picture displayed in the auxiliary area, the auxiliary interactive user refers to an interactive user except the main interactive user in the at least two interactive users, and the feedback behavior data comprises voice data of the auxiliary interactive user.
14. The method of claim 13, wherein the service display page includes task content;
the method further comprises the following steps:
acquiring interaction behavior data of the main interactive user for the task content, and generating an interaction result corresponding to the task content according to the interaction behavior data;
and sending the interaction result to the terminal equipment so that the terminal equipment displays prompt information associated with the interaction result in the service display page.
15. The method of claim 14, wherein the task content comprises textual task content;
the acquiring interaction behavior data of the main interactive user for the task content and generating an interaction result corresponding to the task content according to the interaction behavior data comprises:
acquiring interaction behavior data of the main interactive user for the text task content, and performing speech recognition on first audio data contained in the interaction behavior data to obtain a first converted text corresponding to the interaction behavior data;
acquiring second audio data of the auxiliary interactive user for the interaction behavior data, and performing speech recognition on the second audio data to obtain a second converted text corresponding to the second audio data; the auxiliary interactive user refers to an interactive user among the at least two interactive users other than the main interactive user;
when the first converted text matches the text task content, determining that the interaction result corresponding to the task content is a task replacement result;
when the first converted text does not match the text task content and the second converted text matches the text task content, determining that the interaction result corresponding to the task content is an interaction success result;
and when neither the first converted text nor the second converted text matches the text task content, determining that the interaction result corresponding to the task content is an interaction failure result.
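Claim 15 defines a three-way outcome rule over the recognized speech: if the main user's speech matches the text task content (e.g. the describer says the answer aloud), the task is replaced; otherwise a match by any auxiliary user counts as success, and no match at all as failure. A sketch of that rule follows; the function name and the exact-match comparison are assumptions, since the claim does not define what "matches" means:

```python
def interaction_result(main_text, aux_texts, task_content):
    """Outcome rule sketched from claim 15 (names hypothetical):
    - main user's converted text matches the text task content -> task replacement;
    - otherwise, any auxiliary user's converted text matches -> interaction success;
    - otherwise -> interaction failure."""
    def matches(text):
        # The claim leaves the matching rule open; exact comparison
        # after trimming whitespace is an assumption.
        return text.strip() == task_content.strip()

    if matches(main_text):
        return "task_replacement"
    if any(matches(t) for t in aux_texts):
        return "interaction_success"
    return "interaction_failure"
```

For instance, with task content "apple": the main user saying "apple" triggers a task replacement, an auxiliary user saying "apple" yields success, and no one saying it yields failure.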
16. A service data processing apparatus, comprising:
the first display module is used for responding to a start operation for a target service in the instant messaging application and displaying a service display page matched with the target service;
the second display module is used for displaying video pictures respectively corresponding to at least two interactive users in the auxiliary area of the service display page; the at least two interactive users perform interaction and service control in the target service through voice data;
a first determining module, configured to determine a main interactive user from the at least two interactive users in response to a confirmation operation for the auxiliary area;
the third display module is used for switching and displaying the video picture corresponding to the main interactive user from the auxiliary area to the main area in the service display page; the size of the video picture displayed in the main area is larger than that of the video picture displayed in the auxiliary area;
the first data output module is used for outputting, in the main area, interaction behavior data of the main interactive user for the task content in the target service, and outputting feedback behavior data of the auxiliary interactive user for the interaction behavior data in the target service; the auxiliary interactive user refers to an interactive user among the at least two interactive users other than the main interactive user, and the feedback behavior data comprises voice data of the auxiliary interactive user.
17. A service data processing apparatus, comprising:
the acquisition module is used for acquiring video pictures respectively corresponding to at least two interactive users in the target service; the at least two interactive users perform interaction and service control in the target service through voice data;
the first sending module is used for sending the video pictures respectively corresponding to the at least two interactive users to the terminal equipment so that the terminal equipment displays the video pictures respectively corresponding to the at least two interactive users in an auxiliary area of a service display page;
the second determining module is used for determining a main interactive user from the at least two interactive users;
the second sending module is used for sending a region replacement instruction to the terminal equipment, instructing the terminal equipment to switch the display of the video picture corresponding to the main interactive user from the auxiliary area to the main area in the service display page according to the region replacement instruction, and to output interaction behavior data of the main interactive user for task content in the target service and feedback behavior data of the auxiliary interactive user for the interaction behavior data; the size of the video picture displayed in the main area is larger than the size of the video picture displayed in the auxiliary area, the auxiliary interactive user refers to an interactive user among the at least two interactive users other than the main interactive user, and the feedback behavior data comprises voice data of the auxiliary interactive user.
18. A computer device comprising a memory and a processor;
the memory stores a computer program that, when executed by the processor, causes the processor to perform the method of any of claims 1-15.
19. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, perform the method of any one of claims 1 to 15.
CN202010512719.9A 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium Active CN111870935B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210467805.1A CN114797094A (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium
CN202010512719.9A CN111870935B (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010512719.9A CN111870935B (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210467805.1A Division CN114797094A (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111870935A CN111870935A (en) 2020-11-03
CN111870935B true CN111870935B (en) 2022-04-01

Family

ID=73153840

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210467805.1A Pending CN114797094A (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium
CN202010512719.9A Active CN111870935B (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210467805.1A Pending CN114797094A (en) 2020-06-08 2020-06-08 Business data processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (2) CN114797094A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112684889B (en) * 2020-12-29 2023-06-30 上海掌门科技有限公司 User interaction method and device
CN112717422B (en) * 2020-12-30 2022-05-03 北京字跳网络技术有限公司 Real-time information interaction method and device, equipment and storage medium
CN113051473B (en) * 2021-03-23 2024-03-08 上海哔哩哔哩科技有限公司 Data processing method and device
CN113163223B (en) * 2021-04-26 2023-04-28 广州繁星互娱信息科技有限公司 Live interaction method, device, terminal equipment and storage medium
CN113453033B (en) * 2021-06-29 2023-01-20 广州方硅信息技术有限公司 Live broadcasting room information transmission processing method and device, equipment and medium thereof

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6388688B1 (en) * 1999-04-06 2002-05-14 Vergics Corporation Graph-based visual navigation through spatial environments
AU2002345337A1 (en) * 2001-07-09 2003-01-29 Ad4Ever Inc. Method and system for allowing cross-communication between first and second areas of a primary web page
US9498711B2 (en) * 2008-11-04 2016-11-22 Quado Media Inc. Multi-player, multi-screens, electronic gaming platform and system
CN104645614A (en) * 2015-03-02 2015-05-27 郑州三生石科技有限公司 Multi-player video on-line game method
CN109005417B (en) * 2018-08-06 2021-05-04 广州方硅信息技术有限公司 Live broadcast room entering method, system, terminal and device for playing game based on live broadcast
CN110337023B (en) * 2019-07-02 2022-05-13 游艺星际(北京)科技有限公司 Animation display method, device, terminal and storage medium
CN110225412B (en) * 2019-07-05 2022-05-17 腾讯科技(深圳)有限公司 Video interaction method, device and storage medium

Also Published As

Publication number Publication date
CN114797094A (en) 2022-07-29
CN111870935A (en) 2020-11-03

Similar Documents

Publication Publication Date Title
CN111870935B (en) Business data processing method and device, computer equipment and storage medium
US9614969B2 (en) In-call translation
US20150347399A1 (en) In-Call Translation
US20080070697A1 (en) Social interaction games and activities
CN112328142B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN113301358B (en) Content providing and displaying method and device, electronic equipment and storage medium
CN113032542B (en) Live broadcast data processing method, device, equipment and readable storage medium
CN113409778A (en) Voice interaction method, system and terminal
EP4054180A1 (en) Integrated input/output (i/o) for a three-dimensional (3d) environment
CN111294606A (en) Live broadcast processing method and device, live broadcast client and medium
JP4077656B2 (en) Speaker specific video device
US20230351661A1 (en) Artificial intelligence character models with goal-oriented behavior
US11318373B2 (en) Natural speech data generation systems and methods
CN112820265A (en) Speech synthesis model training method and related device
CN114338573A (en) Interactive data processing method and device and computer readable storage medium
JP7445938B1 (en) Servers, methods and computer programs
US20230351118A1 (en) Controlling generative language models for artificial intelligence characters
US11960983B1 (en) Pre-fetching results from large language models
CN115309304A (en) Session message display method, device, storage medium and computer equipment
CN112752159A (en) Interaction method and related device
WO2023212258A1 (en) Relationship graphs for artificial intelligence character models
CN115623133A (en) Online conference method and device, electronic equipment and readable storage medium
CN111836113A (en) Information processing method, client, server and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40030642

Country of ref document: HK

GR01 Patent grant