CN113784151B - Data processing method, device, computer equipment and storage medium - Google Patents

Data processing method, device, computer equipment and storage medium

Info

Publication number
CN113784151B
CN113784151B (application CN202010525775.6A)
Authority
CN
China
Prior art keywords
user
list
task state
target
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010525775.6A
Other languages
Chinese (zh)
Other versions
CN113784151A (en)
Inventor
舒润民
陈科科
陈琦钿
吴歆婉
匡皓琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010525775.6A priority Critical patent/CN113784151B/en
Publication of CN113784151A publication Critical patent/CN113784151A/en
Application granted granted Critical
Publication of CN113784151B publication Critical patent/CN113784151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4882Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the application discloses a data processing method, an apparatus, a computer device, and a storage medium, wherein the method comprises the following steps: in response to a first operation on the video client, outputting a virtual room associated with the first operation to a first display interface of the video client, the virtual room comprising a first user in a first task state and a second user in a second task state, the second task state being different from the first task state; in response to a second operation on the first display interface, acquiring a first user list associated with the first task state and a second user list associated with the second task state; and determining, from the first user list and the second user list, a target user list to be output to a list sub-interface independent of the first display interface. The embodiment of the application can provide a new online study-companion mode and enrich the display effect of the user data in the virtual room.

Description

Data processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, a data processing device, a computer device, and a storage medium.
Background
Currently, with the development of multimedia technology, more and more users choose to study or work online in order to improve their efficiency. However, due to technical barriers in the prior art, a large number of users cannot carry out an online live broadcast service in the same virtual room at the same time.
For example, if 8 users currently wish to carry out online self-study, an existing online live service would group the 8 users in the order of their live broadcast requests, allowing some of them (e.g., 4 users) to enter virtual room 1 for online self-study and the others (e.g., the remaining 4 users) to enter virtual room 2. Clearly, existing online live services have difficulty ensuring that these 8 users study together in the same virtual room. In addition, an existing online live service usually requires the users distributed in virtual room 1 and virtual room 2 to carry out the live service with their cameras on, so that, for the users in either room, the display effect of the user data in the same virtual room is relatively monotonous.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing apparatus, a computer device, and a storage medium, which can provide a new online study-companion mode and enrich the display effect of user data in a virtual room.
In one aspect, an embodiment of the present application provides a data processing method, where the method includes:
in response to a first operation on the video client, outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is different from the first task state;
in response to a second operation on the first display interface, acquiring a first user list associated with the first task state, and acquiring a second user list associated with the second task state; the first user list comprises the first user; the second user list comprises the second user;
determining, from the first user list and the second user list, a target user list to be output to a list sub-interface independent of the first display interface.
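The three-step method above lends itself to a compact client-side sketch. The following Python fragment is purely illustrative (the patent does not specify any implementation language or data layout); it models the two task states and the selection of the target list for the list sub-interface:

```python
from enum import Enum

class TaskState(Enum):
    FIRST = "studying"       # first task state (e.g., an in-study state)
    SECOND = "not_studying"  # second task state (e.g., a not-in-study state)

def build_state_lists(room_users):
    """Split the users of a virtual room into the two per-state lists."""
    first_list = [u for u in room_users if u["state"] is TaskState.FIRST]
    second_list = [u for u in room_users if u["state"] is TaskState.SECOND]
    return first_list, second_list

def pick_target_list(first_list, second_list, selected_state):
    """Return the target list shown on the list sub-interface."""
    return first_list if selected_state is TaskState.FIRST else second_list
```

In practice the two lists would come from the server in response to the second operation; here they are derived locally only to keep the sketch self-contained.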
In one aspect, an embodiment of the present application provides a data processing apparatus, including:
A first output module for outputting a virtual room associated with a first operation to a first display interface of the video client in response to the first operation for the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
The first acquisition module is used for responding to a second operation aiming at the first display interface, acquiring a first user list associated with a first task state and acquiring a second user list associated with a second task state; the first user list comprises first users; the second user list comprises second users;
And the first determining module is used for determining, from the first user list and the second user list, a target user list to be output to a list sub-interface independent of the first display interface.
Wherein the first output module comprises:
The first sending unit is used for responding to a first operation triggered by the video client and sending a room acquisition request to a server corresponding to the video client; the room acquisition request is used for instructing the server to configure a virtual room for a target user accessing the video client; the target user is a user who executes a first operation;
The first receiving unit is used for receiving the virtual room returned by the server, and initializing the task state of the target user into a second task state when the target user enters the virtual room;
and the updating unit is used for updating the second user in the virtual room according to the target user with the second task state and outputting the updated virtual room to the first display interface of the video client.
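As a sketch of the room-acquisition flow described by these units (all names are hypothetical, including the `FakeServer` stand-in for the real server):

```python
class FakeServer:
    """Illustrative stand-in for the server that configures virtual rooms."""
    def assign_room(self, user_id):
        # The real server would pick or create a room for the requesting user.
        return {"room_id": 1, "first_users": [], "second_users": []}

def join_room(server, user_id):
    """Send a room acquisition request, enter the returned room with the task
    state initialized to the second task state, and update the room's
    second-user group accordingly."""
    room = server.assign_room(user_id)                 # room acquisition request
    target_user = {"id": user_id, "state": "second"}   # initial task state
    room["second_users"].append(target_user)           # update second users
    return room
```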
The first display interface comprises a first display area, a second display area and a third display area; the first display area has the function of displaying live broadcast data of a first type of user; the second display area comprises a first subarea and a second subarea; the first subarea has a function of displaying user image data of a target user; the second sub-area has a function of displaying image data of the second type of user; the first type user and the second type user belong to a first user in a first task state in the virtual room; the third display area has a function of displaying text auxiliary information in the virtual room;
the apparatus further comprises:
The first sending module is used for responding to a first service starting control triggered by a target user in the first subarea, sending a first starting request associated with a live service corresponding to the first service starting control to the server, so that the server responds to the first starting request and generates first starting prompt information associated with the live service;
The second acquisition module is used for acquiring first starting prompt information returned by the server based on the first starting request and outputting the first starting prompt information to the third display area as text auxiliary information;
The first adjusting module is used for adjusting the task state of the target user from the second task state to the first task state and updating the first type user in the first user according to the target user in the first task state;
the second output module is used for outputting the updated live broadcast data of the first type user to the first display area, deleting the first subarea in the second display area, and outputting the image data of the second type user according to the second display area and the second subarea after deleting the first subarea.
The first starting request is also used for instructing the server to determine, in the first display area, a target video display area for executing the live broadcast service; the updated first type users comprise the target user and the original first type users;
the second output module includes:
The acquisition unit is used for acquiring a target video display area returned by the server based on the first starting request and starting a shooting task for shooting a target user;
The region expansion unit is used for expanding the region of the second subarea in the second display area when the first subarea is deleted in the second display area when live broadcast data of the target user shot during the execution of the shooting task is output to the target video display area, so as to obtain the expanded second subarea; the area size of the expanded second sub-area is equal to the area size of the second display area;
and the output unit is used for outputting the image data of the second type user in the expanded second subarea when the live broadcast data of the first type user is synchronously output in the first display area to which the target video display area belongs.
Wherein the apparatus further comprises:
The third acquisition module is used for acquiring a starting time stamp recorded by the server when the first starting request is acquired, starting a timer corresponding to the video client when the shooting task starts to be executed, and recording a timing time stamp of the timer;
The second determining module is used for taking the video live broadcast duration of the live broadcast data of the target user, the first task state of the target user and the user image data of the target user as service data information; the live video time length is determined by the difference between the timing time stamp and the starting time stamp;
And the third output module is used for outputting the business data information in the target video display area.
Wherein the apparatus further comprises:
The second sending module is used for responding to the triggering operation of the first service ending control and sending a first ending request associated with the first service ending control to the server so that the server responds to the first ending request and generates first ending prompt information;
the fourth acquisition module is used for acquiring first end prompt information returned by the server based on the first end request, adjusting the task state of the target user from the first task state to the second task state, and updating the second type user in the first user according to the target user in the second task state;
The ending module is used for acquiring an ending time stamp recorded by the server when the first ending request is acquired, and stopping the timer corresponding to the video client when the shooting task ends;
The third determining module is used for determining a feedback sub-interface independent of the first display interface, and outputting, to the feedback sub-interface, the service duration determined by the server for the target user's execution of the shooting task together with the configuration text information associated with the historical behavior data of the target user; the service duration is determined by the difference between the start time stamp and the end time stamp.
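The duration bookkeeping described by the ending module and the third determining module reduces to a timestamp difference. A minimal sketch, assuming epoch-second timestamps (function names are illustrative):

```python
def service_duration_seconds(start_ts, end_ts):
    """Duration of the shooting task, from the server-recorded start and end
    timestamps (epoch seconds)."""
    if end_ts < start_ts:
        raise ValueError("end timestamp precedes start timestamp")
    return end_ts - start_ts

def format_duration(seconds):
    """Render a duration as H:MM:SS, e.g., for the feedback sub-interface."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"
```

The same difference-of-timestamps idea underlies the live-video duration shown while the shooting task is still running, with the timer's current timestamp in place of the end timestamp.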
The first display interface comprises a first display area, a second display area and a third display area; the first display area has the function of displaying live broadcast data of a first type of user; the second display area comprises a first subarea and a second subarea; the first subarea has a function of displaying user image data of a target user; the second sub-area has a function of displaying image data of the second type of user; the first type user and the second type user belong to a first user in a first task state in the virtual room; the third display area has a function of displaying text auxiliary information in the virtual room;
the apparatus further comprises:
The third sending module is used for responding to a second service starting control triggered by a target user in the first subarea, sending a second starting request associated with the live service corresponding to the second service starting control to the server, so that the server responds to the second starting request and generates second starting prompt information associated with the live service;
The fifth acquisition module is used for acquiring second starting prompt information returned by the server based on the second starting request and outputting the second starting prompt information to the third display area as text auxiliary information;
The second adjusting module is used for adjusting the task state of the target user from the second task state to the first task state, and recording the image display duration of the target user with the first task state through a timer corresponding to the video client;
The configuration module is used for configuring virtual animation data for the user image data of the target user with the first task state, taking the virtual animation data, the image display duration and the user image data of the target user in the first task state as the image data of the target user, synchronously outputting the image data of the second type user in the second subarea when the image data of the target user is output in the first subarea, and outputting the live broadcast data of the first type user in the first display area.
Wherein the apparatus further comprises:
The first receiving module is used for receiving service guide information configured by the server for the cold start user if the target user belongs to the cold start user in the video client; the cold start user is a user without history access information in the video client;
A fourth output module, configured to output service guidance information in a guidance sub-interface independent of the first display interface; the traffic guidance information is used to instruct the target user to adjust the task state in the virtual room.
Wherein, this first acquisition module includes:
The second sending unit is used for responding to a second operation aiming at the first display interface and sending a list pulling request to a server corresponding to the video client; the list pulling request is used for indicating the server to acquire a first initial user list associated with a first task state and a second initial user list associated with a second task state;
The second receiving unit is used for receiving the first initial user list and the second initial user list returned by the server; the first initial user list comprises a first user, and the first user comprises a first type user and a second type user; the second initial user list comprises a second user;
The first ordering processing unit is used for taking the live video time length of the first type user and the image display time length of the second type user as the service time length of the first user in the first initial user list, ordering the first user in the first initial user list, and determining the ordered first initial user list as the first user list;
The second sorting processing unit is used for acquiring access time stamps of the second users in the second initial user list entering the virtual room, configuring corresponding time attenuation factors for the access time stamps corresponding to the second users, sorting the second users in the second initial user list based on the time attenuation factors, and determining the sorted second initial user list as the second user list.
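The two sorting units can be sketched as follows. Note that the patent specifies a "time attenuation factor" for the second list but does not fix its functional form; the exponential decay below is an assumption for illustration, as are all names:

```python
import math

def sort_first_list(first_users):
    """Order first users by service duration, longest first. 'duration' is the
    live video duration for first-type (on-camera) users and the image display
    duration for second-type (off-camera) users."""
    return sorted(first_users, key=lambda u: u["duration"], reverse=True)

def sort_second_list(second_users, now, half_life=600.0):
    """Order second users by a time decay factor applied to their room-entry
    (access) timestamp: more recent entrants score higher."""
    def decay(u):
        age = now - u["enter_ts"]
        return math.exp(-age * math.log(2) / half_life)
    return sorted(second_users, key=decay, reverse=True)
```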
Wherein the apparatus further comprises:
A sixth obtaining module, configured to obtain a list update notification sent by a server corresponding to the video client and used for updating the first user list and the second user list; the list update notification is generated by the server upon detecting a task state change request sent by a user in the virtual room;
The updating module is used for respectively updating the first user list and the second user list based on the list updating notification to obtain an updated first user list and an updated second user list;
And the fourth determining module is used for updating the target user list based on the updated first user list and the updated second user list, and outputting the updated target user list to the list sub-interface.
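A list update notification effectively moves a user between the two lists when that user's task state changes. An illustrative sketch (the notification payload shape is assumed, not specified by the patent):

```python
def apply_state_change(first_list, second_list, user_id, new_state):
    """Apply a task-state change by moving the user between the two lists,
    mutating both in place; returns the updated pair."""
    source, target = ((second_list, first_list) if new_state == "first"
                      else (first_list, second_list))
    moved = [u for u in source if u["id"] == user_id]
    for u in moved:
        u["state"] = new_state
    source[:] = [u for u in source if u["id"] != user_id]
    target.extend(moved)
    return first_list, second_list
```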
Wherein the apparatus further comprises:
the first interface switching module is used for switching the display interface of the video client from the first display interface to the second display interface when the target user exits the virtual room;
The second interface switching module is used for responding to the triggering operation aiming at the second display interface and switching the display interface of the video client from the second display interface to a third display interface; the third display interface comprises a business ranking control for acquiring a ranking list associated with the target user;
The fourth sending module is used for responding to the triggering operation for the business ranking control and sending a ranking query request to a server corresponding to the video client; the ranking query request is used for instructing the server to acquire a ranking list; the ranking list comprises a first ranking list and a second ranking list; the first ranking list comprises users located in the same geographic area as the target user; the second ranking list comprises users having an interactive relationship with the target user;
The fifth output module is used for acquiring a first ranking list and a second ranking list returned by the server, and determining a target ranking list on a fourth display interface for outputting to the video client from the first ranking list and the second ranking list;
And the sixth output module is used for switching the display interface of the video client from the third display interface to the fourth display interface, determining the target ranking of the target user in the target ranking list, and outputting the target ranking list containing the target ranking to the fourth display interface.
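Once the target ranking list (regional or relationship-based) has been chosen, determining the target user's ranking within it is a simple lookup. An illustrative sketch:

```python
def find_target_ranking(ranking_list, target_user_id):
    """Return the 1-based rank of the target user in the chosen ranking list,
    or None if the user is absent from it."""
    for rank, entry in enumerate(ranking_list, start=1):
        if entry["id"] == target_user_id:
            return rank
    return None
```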
In one aspect, the application provides a computer device comprising: a processor, a memory, a network interface;
The processor is connected to the memory and the network interface; the network interface is configured to provide data communication functions, the memory is configured to store a computer program, and the processor is configured to invoke the computer program to perform the method according to the above aspect of the embodiments of the application.
An aspect of the present application provides a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, perform a method according to the above aspect of the embodiments of the present application.
In an embodiment of the application, the computer device may output the virtual room associated with the first operation to a first display interface of the video client in response to the first operation for the video client. It should be appreciated that the embodiments of the present application may collectively refer to users who are in the first task state in the virtual room (e.g., users who are self-studying, whether or not they are live broadcasting) as first users, and collectively refer to users who are in the virtual room but neither self-studying nor performing the live service (e.g., users who are watching) as second users. A friendly human-machine interaction interface can thus be provided for the user operating the video client, helping the user quickly view the task states of different users in the virtual room, thereby enriching the display effect of the user data in the virtual room.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1a is a schematic structural diagram of a network architecture according to an embodiment of the present application;
FIG. 1b is a block diagram of an embodiment of the present application;
FIG. 1c is a timing diagram for obtaining a user list according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a scenario for data interaction according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 4a is an interface display diagram of a first display interface of a video client according to an embodiment of the present application;
fig. 4b is an interface display diagram of a first display interface of a video client according to an embodiment of the present application;
FIG. 4c is an interface display diagram of a guide sub-interface provided by an embodiment of the present application;
fig. 5 is a schematic view of a scenario for executing a live service associated with live data according to an embodiment of the present application;
fig. 6 is a schematic view of a scenario for ending a live service associated with live data according to an embodiment of the present application;
FIG. 7 is a schematic view of a scenario for obtaining a ranking list according to an embodiment of the present application;
FIG. 8 is a flow chart of a data processing method according to an embodiment of the present application;
Fig. 9 is a timing diagram of a target user executing a live service according to an embodiment of the present application;
Fig. 10 is a schematic view of a scenario in which a live service associated with image data is executed according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1a, fig. 1a is a schematic structural diagram of a network architecture according to an embodiment of the present application. As shown in fig. 1a, the network architecture may comprise a server 10 and a cluster of user terminals. The cluster of user terminals may comprise one or more user terminals, the number of which will not be limited here. As shown in fig. 1a, the user terminals 100a, 100b, 100c, …, 100n may be specifically included. As shown in fig. 1a, the user terminals 100a, 100b, 100c, …, 100n may respectively be connected to the above-mentioned server 10 through a network, so that each user terminal may interact with the server 10 through the network connection.
It should be appreciated that the network architecture of embodiments of the present application may be a common client/server architecture (i.e., C/S architecture), wherein the server may be the server 10 shown in fig. 1a. The server 10 may include interface services, logic processing, and data storage, and the server 10 may provide an efficient computing environment, data storage, and various functional services for user terminals having a network connection with the server 10. The server 10 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and basic cloud computing services such as big data and artificial intelligence platforms.
It should be appreciated that each user terminal in the cluster of user terminals shown in fig. 1a may be provided with a target application (i.e. a client, e.g. a video client) that, when running in each user terminal, may interact with the server 10 shown in fig. 1a, respectively, as described above. The video client may be an independent client, or may be an embedded sub-client integrated in a client (for example, a social client, an educational client, and a multimedia client), which is not limited herein.
For easy understanding, the embodiment of the present application may select one user terminal from the plurality of user terminals shown in fig. 1a as a target user terminal, where the target user terminal may be used for interface display, data transmission with the server 10, and service experience for a user. The target user terminal may include: smart terminals with data processing functions such as smart phones, tablet computers, notebook computers, desktop computers, wearable devices, smart home and head-mounted devices. For example, in the embodiment of the present application, the user terminal 100a shown in fig. 1a may be used as a target user terminal, where the video client may be integrated in the target user terminal, and at this time, the target user terminal may implement data interaction between the service data platform corresponding to the video client and the server 10.
It should be appreciated that the live service performed in the virtual room of the video client may be an online video conference in a conference scenario. The virtual room may contain first users and second users. The first type of users among the first users may be enterprise administrators who participate in the online video conference on camera, the second type of users among the first users may be enterprise employees who participate in the online video conference off camera, and the second users in the virtual room may be enterprise employees who are in the online video conference but do not perform the live service (e.g., other enterprise employees watching the video conference, etc.).
Alternatively, the live service performed in the virtual room of the video client may be an online variety show in an entertainment scenario. The virtual room may contain first users and second users. The first type of users among the first users may be the host and artist users participating in the online variety show on camera, the second type of users among the first users may be fan users participating off camera (for example, fans of the artist users), and the second users may be users who are in the online variety show but do not perform the live service (for example, users who are watching the online variety show, etc.).
Alternatively, the live service performed in the virtual room of the video client may be an online self-study function in a learning scenario or an online education scenario. The virtual room may contain a first user and a second user. The first type of users among the first users in the virtual room may be users participating in the online self-study in an on-mic manner (i.e., on-mic self-study users, e.g., a class officer of class A), the second type of users among the first users may be users participating in the online self-study in an off-mic manner (i.e., audience self-study users, e.g., students of class A), and the second user may be an onlooker user who is present in the online self-study session but does not perform the live service (e.g., parents or teachers of class A who observe the current study session). The live service performed in the virtual room of the video client may also be a live service in another scenario, which is not limited herein.
Taking the online self-study function as an example, the video client in the embodiment of the present application may be used to show a target user list (i.e., either the first user list or the second user list) in a virtual room (e.g., an online self-study room), process a user's operation of starting or ending a service (e.g., a self-study live service), provide a timer for the user to view, process notifications sent by the server 10, exchange data with the server 10, and so on. The server 10 in the embodiment of the present application may be used to process requests from the video client; according to a task state change request (for example, a start request for starting a service or an end request for ending a service) sent by a user in the virtual room, the server 10 may record the task state of the user, count the service duration of the user, push a list update notification to the video client, store user service data, and the like.
The first user refers to a user group in a first task state (e.g., a self-study-in-progress state) in the virtual room, and the first user list may include list data information such as the user image data (e.g., user avatar) of the first user, the account name of the first user, the gender icon of the first user, and the service duration of the first user in the virtual room. The second user refers to another user group in the virtual room in a second task state (e.g., a not-self-studying state), and the second user list may include list data information such as the user image data (e.g., user avatar) of the second user, the account name of the second user, and the gender icon of the second user.
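By way of illustration only, the list data information described above could be modeled as follows. This is a minimal sketch; the field names (`avatar_url`, `account_name`, `gender_icon`, `service_duration`) are assumptions for readability, not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserListEntry:
    """One row of the first or second user list (field names are hypothetical)."""
    avatar_url: str     # user image data, e.g. the user's avatar
    account_name: str   # account name shown in the list
    gender_icon: str    # gender icon identifier
    # Service duration in seconds; present only for first users (first task state).
    service_duration: Optional[int] = None

# A first-user entry carries a service duration; a second-user entry does not.
first_entry = UserListEntry("http://example.com/a.png", "user_a", "female", 1800)
second_entry = UserListEntry("http://example.com/b.png", "user_b", "male")
```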
Further, referring to fig. 1b, fig. 1b is a schematic diagram of a module structure according to an embodiment of the present application. As shown in fig. 1b, the module architecture 1 may include a presentation layer, a logic layer, and a service layer. It will be appreciated that the video client (e.g. an online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated in the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
It should be understood that the presentation layer in the embodiments of the present application may be used to present data, receive data input by user A, and provide an interactive interface for user A. For example, the presentation layer in the embodiment of the present application may present a virtual room (e.g., an online self-study room) of the video client on a first display interface of the user terminal. The virtual room may contain a first user and a second user. The first user may be a user performing the live service (e.g., a self-study user) and the second user may be a user not performing the live service (e.g., an onlooker user). The first user may comprise a first type of user and a second type of user. It will be appreciated that a first type of user (e.g., a user who turns on a camera to go live, i.e., an on-mic self-study user) may be a first user performing a live service associated with live data, and a second type of user (e.g., a user who does not turn on a camera to go live, i.e., an audience self-study user) may be a first user performing a live service associated with image data.
As shown in fig. 1b, the module architecture 1 may include a self-study state module and a self-study list module in the presentation layer. The self-study state module may be configured to display the task states of users in the virtual room of the video client, where the task states may include a not-self-studying state, a self-study start state, a self-study end state, and the like. It should be understood that when user A enters the virtual room in the video client, a triggering operation may be performed on a service start control (for example, a first service start control for joining the on-mic self-study service or a second service start control for joining the audience self-study service) on the first display interface of the video client, so that user A may perform the live service corresponding to the service start control. In the embodiment of the present application, the task state at the time (e.g., time t1) when user A performs the triggering operation on the service start control may be referred to as the self-study start state. Further, when user A finishes the live service, a triggering operation may be performed at a certain time (e.g., time t2) on a service ending control (for example, a first service ending control for ending the on-mic self-study service or a second service ending control for ending the audience self-study service) on the first display interface, so that user A ends the live service corresponding to the service ending control. The task state of user A between time t1 and time t2 may be referred to as the self-study-in-progress state, i.e., the first task state. The task state of user A before time t1 and after time t2 may be referred to as the not-self-studying state, i.e., the second task state.
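The transitions between the two task states around times t1 and t2 can be sketched as a small state machine. The class and method names below are illustrative assumptions; only the transition logic (start control at t1 enters the first task state, end control at t2 returns to the second task state) follows the description above.

```python
from enum import Enum

class TaskState(Enum):
    NOT_SELF_STUDY = "second task state"  # before t1 and after t2
    SELF_STUDY = "first task state"       # between t1 and t2

class ClientTaskState:
    """Tracks one user's task state across the t1 (start) / t2 (end) triggers."""
    def __init__(self):
        self.state = TaskState.NOT_SELF_STUDY
        self.start_time = None

    def on_start_control(self, t1: float) -> None:
        # Triggering a service start control puts the user into the first task state.
        self.state = TaskState.SELF_STUDY
        self.start_time = t1

    def on_end_control(self, t2: float) -> float:
        # Triggering a service ending control returns the user to the second task
        # state and yields the elapsed service duration (t2 - t1).
        duration = t2 - self.start_time
        self.state = TaskState.NOT_SELF_STUDY
        self.start_time = None
        return duration
```

For example, starting at t1 = 100 and ending at t2 = 160 leaves the user back in the second task state with a 60-second service duration.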
The self-study list module may be used to present the first user list and the second user list of the virtual room (e.g., an online self-study room). The users in the first user list may be users associated with the first task state, and the first user list may contain the first user; the users in the second user list may be users associated with the second task state, and the second user list may contain the second user.
As shown in fig. 1b, the logic layer of the module architecture 1 is a bridge between the presentation layer and the service layer: it can realize data connection and instruction transmission between the layers, perform logic processing on received data so as to implement functions such as modifying, acquiring, and deleting data, and feed the processing results back to the presentation layer. The logic layer may include a local management module and a local service module. It will be appreciated that the local management module may be responsible for some of the most basic operations of the video client, such as network requests, receiving background push notifications, and database management. The local service module may be used to provide update operations for the user lists, a local timer, task state management (e.g., self-study state management), and the like.
As shown in fig. 1b, the service layer of the module architecture 1 may provide a data service interface for the presentation layer and the logic layer; the service layer may include a network service interface, logic processing, and data storage. The network service interface may receive various requests (e.g., a room acquisition request, a start request, an end request, a list pull request, a rank query request, etc.) from the video client, may return processing results corresponding to the various requests to the video client, and also provides the capability of sending push notifications (e.g., a list update notification) to the video client. The logic processing may be used to process business logic after receiving the various requests from the video client; for example, the logic processing may categorize users in the virtual room into the first user list and the second user list, calculate service durations for users, change task states based on video-client request data, and the like. The data storage may be used to record a user's start timestamp, end timestamp, number of times the live service has been performed, service duration, task state, and the like.
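The record keeping described for the data storage (start/end timestamps, execution count, accumulated service duration, task state) could look like the following sketch. The record schema and method names are assumptions; the patent does not prescribe a storage layout.

```python
class ServerDataStore:
    """Per-user records kept by the service layer (schema is an assumption)."""
    def __init__(self):
        self.records = {}  # user_id -> record dict

    def handle_start_request(self, user_id: str, now: float) -> None:
        # A start request moves the user into the first task state and
        # stamps the start time.
        rec = self.records.setdefault(user_id, {
            "service_duration": 0.0,  # accumulated across sessions
            "times_performed": 0,     # number of times the live service ran
            "task_state": "second",
            "start_ts": None, "end_ts": None,
        })
        rec.update(task_state="first", start_ts=now)

    def handle_end_request(self, user_id: str, now: float) -> None:
        # An end request accumulates the session duration, bumps the
        # execution count, and returns the user to the second task state.
        rec = self.records[user_id]
        rec["service_duration"] += now - rec["start_ts"]
        rec["times_performed"] += 1
        rec.update(task_state="second", end_ts=now)
```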
Further, referring to fig. 1c, fig. 1c is a timing chart of acquiring a user list according to an embodiment of the present application. As shown in fig. 1c, the video client (e.g., online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated into the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
It should be appreciated that, upon entering the virtual room of the video client, user A (i.e., the target user) may perform a second operation with respect to the first display interface of the video client, so that the user terminal may perform step S111 in response to the second operation for the first display interface. At this time, the user terminal may execute step S112 to trigger the logic layer to pull the user lists, thereby generating a list acquisition request. Further, the server may perform step S113 to generate an initial user list based on the list acquisition request sent by the user terminal calling the service layer. The first initial user list and the second initial user list may be collectively referred to as the initial user list in the embodiment of the present application.
It will be appreciated that the server may perform step S114 after generating the initial user list, and then may send the initial user list to the service layer, so that the user terminal may perform step S115 to obtain the initial user list through the logic layer. Further, when the user terminal acquires the initial user list, step S116 may be executed, so that the initial user list may be sorted to obtain the user list. In other words, the user terminal performs the sorting process on the first initial user list to obtain the first user list. Meanwhile, the user terminal may perform sorting processing on the second initial user list to obtain a second user list. Wherein the first user list and the second user list may be collectively referred to as a user list.
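The sorting in step S116 can be sketched as below. The sort keys are not specified at this point in the description, so this sketch assumes, for illustration only, that the first initial user list is ranked by descending service duration and the second initial user list by room entry time.

```python
def sort_first_list(initial):
    # Rank first users by accumulated service duration, longest first
    # (assumed key; entries are dicts with hypothetical field names).
    return sorted(initial, key=lambda e: e["service_duration"], reverse=True)

def sort_second_list(initial):
    # Order second users by the time they entered the room, earliest first
    # (assumed key).
    return sorted(initial, key=lambda e: e["enter_ts"])

first_initial = [{"name": "a", "service_duration": 120},
                 {"name": "b", "service_duration": 300}]
second_initial = [{"name": "c", "enter_ts": 5}, {"name": "d", "enter_ts": 1}]
```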
Further, the user terminal may execute step S117 to send the first user list and the second user list obtained after the sorting process to the presentation layer, so that the user terminal may execute step S118 to determine, from the first user list and the second user list, a target user list for output to a list sub-interface independent of the first display interface. The target user list may be either of the first user list and the second user list. The list sub-interface is a floating window displayed superimposed on the first display interface; it can be understood that the data displayed on the list sub-interface and the data displayed on the first display interface are independent of each other.
It should be understood that when the user in the virtual room transmits a task state change request (for example, a start request or an end request) to the server, the server may generate a list update notification for updating the first user list and the second user list, and further may perform step S119 of transmitting the list update notification for updating the first user list and the second user list to the service layer. At this time, the user terminal may perform step S120 to acquire a list update notification transmitted from the server through the logical layer. Further, the user terminal may perform step S121 to perform an update operation on the first user list and the second user list based on the list update notification, so as to obtain an updated first user list and an updated second user list, respectively.
The user terminal may perform update operation on the first user list and the second user list through the logic layer based on the list update notification, that is, trigger an operation of pulling the first user list and the second user list once, so as to obtain an updated first initial user list and an updated second initial user list returned by the server, and further perform sorting processing on the updated first initial user list, so as to obtain an updated first user list. Meanwhile, the user terminal may perform sorting processing on the updated second initial user list to obtain an updated second user list. The embodiment of the application can collectively refer to the updated first user list and the updated second user list as the updated user list.
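The update flow above (one list update notification triggers one re-pull of both initial lists, followed by sorting) can be sketched as a logic-layer handler. The class name and the callable parameters are illustrative assumptions.

```python
class ListUpdater:
    """Logic-layer handler: a list update notification triggers one re-pull."""
    def __init__(self, pull_initial_lists, sort_first, sort_second):
        self.pull = pull_initial_lists  # callable returning both initial lists
        self.sort_first = sort_first    # sorting for the first initial list
        self.sort_second = sort_second  # sorting for the second initial list

    def on_list_update_notification(self):
        # One pull per notification: fetch the updated initial lists from
        # the server, then sort each to obtain the updated user lists.
        first_initial, second_initial = self.pull()
        return self.sort_first(first_initial), self.sort_second(second_initial)
```

For example, wiring it with a stub pull function returns the updated, sorted pair of lists in a single call.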
Further, the user terminal may execute step S122 to send the updated user list to the presentation layer through the logic layer, and then may execute step S123 to determine an updated target user list for outputting to the list sub-interface from the updated first user list and the updated second user list. The updated target user list may be any one of the updated first user list and the updated second user list.
Further, referring to fig. 2, fig. 2 is a schematic view of a scenario for data interaction according to an embodiment of the present application. As shown in fig. 2, the video client (e.g., online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated into the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
It should be understood that, in the embodiment of the present application, user A corresponding to the user terminal may perform a triggering operation (i.e., the first operation) with respect to the video client. The triggering operation in the present application may include touch operations such as clicking and long pressing, or non-touch operations such as voice and gestures, which are not limited herein. At this time, the user terminal may output the virtual room associated with the first operation onto a first display interface (e.g., display interface 200a shown in fig. 2) of the video client in response to the first operation.
The virtual room may contain a first user and a second user (e.g., an onlooker user). The first user may comprise a first type of user (e.g., an on-mic self-study user) and a second type of user (e.g., an audience self-study user). The first user may be a user in a first task state (e.g., a self-study-in-progress state) in the virtual room, and the second user may be a user in a second task state (e.g., a not-self-studying state) in the virtual room. The display interface 200a shown in fig. 2 may include display area 1, display area 2, and display area 3. Display area 1 (i.e., the first display area) may have a function of displaying the live data of the first user, display area 2 (i.e., the second display area) may have a function of displaying the image data of the second user, and display area 3 (i.e., the third display area) may have a function of displaying text auxiliary information.
It is to be appreciated that the virtual room associated with the first operation can be recommended by the server based on the recommendation level of the plurality of virtual rooms of the video client. In other words, the user a may directly perform the triggering operation on the video client, and the triggering operation of the user a directly on the video client may be referred to as a first operation in the embodiments of the present application. At this time, the server shown in fig. 2 may determine a virtual room having the highest recommendation degree among the K virtual rooms in the video client in response to the first operation, and call the virtual room having the highest recommendation degree the virtual room associated with the first operation, and may output the virtual room associated with the first operation to the display interface 200a shown in fig. 2. Wherein K is a positive integer. A virtual room may correspond to an identification number (e.g., a room identification, ID).
The recommendation level may be determined by the number of users in each virtual room. It is understood that the server may determine the recommendation level of each virtual room based on the number of first type users and the number of second type users in the room. When determining the recommendation level of a virtual room, the server gives the number of first type users priority over the number of second type users. For example, the server may first compare the numbers of first type users across the virtual rooms: the fewer first type users a virtual room has, the higher its recommendation level, and vice versa. When two virtual rooms have the same number of first type users, the server may determine their recommendation levels based on the number of second type users in each room: the fewer second type users a virtual room has, the higher its recommendation level, and vice versa.
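The two-level comparison above amounts to an ordering by the tuple (number of first type users, number of second type users), smaller ranking higher. A minimal sketch, with hypothetical dictionary keys:

```python
def recommend_room(rooms):
    """Pick the room with the highest recommendation level.

    Fewer first type users ranks higher; ties are broken by fewer
    second type users, as described above.
    """
    return min(rooms, key=lambda r: (r["first_type_count"], r["second_type_count"]))

rooms = [
    {"room_id": 1, "first_type_count": 3, "second_type_count": 9},
    {"room_id": 2, "first_type_count": 2, "second_type_count": 5},
    {"room_id": 3, "first_type_count": 2, "second_type_count": 4},
]
# Rooms 2 and 3 tie on first type users (2 each); room 3 wins the tie-break.
```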
Alternatively, the virtual room associated with the first operation may be selected by user A from the plurality of virtual rooms contained in the home page. It will be appreciated that user A may perform a triggering operation on the display interface 200a for exiting the virtual room recommended by the server, so that the user terminal is switched from the display interface 200a to the home page of the video client. The home page may include cover data corresponding to each of the plurality of virtual rooms; for example, the home page may include the cover data of virtual room 1, the cover data of virtual room 2, the cover data of virtual room 3, and the cover data of virtual room 4. Further, user A may perform a triggering operation on the home page in the area where the cover data of a certain virtual room (for example, the cover data of virtual room 1) is located; in this case, the triggering operation performed by user A directly on the area where the cover data of virtual room 1 is located may be referred to as the first operation in the embodiment of the present application. Further, the server may determine virtual room 1 corresponding to that cover data as the virtual room associated with the first operation in response to the first operation, and may output virtual room 1 to the display interface 200a shown in fig. 2.
It should be appreciated that user a may perform a second operation with respect to the display interface 200a such that the user terminal may respond to the second operation to obtain a first list of users associated with a first task state and a second list of users associated with a second task state in the virtual room. The first user list may include a first user, and the second user list may include a second user.
Wherein the first user list and the second user list may be obtained by performing a triggering operation for the user a with respect to the control 20 shown in fig. 2. It will be appreciated that the user a may perform a triggering operation (e.g., a clicking operation) on the control 20, and at this time, the user terminal may respond to the triggering operation to obtain the first user list and the second user list. The triggering operation performed by the user a on the control 20 may be referred to as a second operation in the embodiment of the present application.
Alternatively, the first user list and the second user list may be obtained through a triggering operation performed by user A on the second type of users in the presentation area 2 shown in fig. 2. It will be appreciated that user A may perform a triggering operation (e.g., a left-sliding operation) on the second type of users in the presentation area 2; when a threshold number (e.g., 10) of second type users have been displayed through sliding, the user terminal may respond to the triggering operation and thereby obtain the first user list and the second user list. In the embodiment of the present application, the triggering operation performed by user A on the second type of users may be referred to as the second operation.
Further, the user terminal may determine a list sub-interface (e.g., sub-interface 200b shown in fig. 2) independent of the display interface 200a, and may determine, from the first user list and the second user list, a target user list for output onto the sub-interface 200b. It will be appreciated that after user A performs the second operation on the display interface 200a, the sub-interface 200b of the user terminal may output the first user list as the target user list. At this time, user A may also perform a triggering operation with respect to the sub-interface 200b (e.g., clicking the "onlooker" tab in the sub-interface 200b or performing a left-sliding operation in the sub-interface 200b), thereby causing the sub-interface 200b to output the second user list as the target user list.
It should be understood that if user A in the embodiment of the present application has not performed the live service in the virtual room when performing the second operation on the control 20 in the display interface 200a, it may be understood that user A is a second user and the task state of user A is the second task state (i.e., the not-self-studying state); accordingly, user A belongs to the users in the second user list.
Optionally, if user A has performed the live service associated with live data in the virtual room when performing the second operation on the control 20 in the display interface 200a, it may be understood that user A is a first type of user among the first users and the task state of user A is the first task state (i.e., the self-study-in-progress state); accordingly, user A belongs to the users in the first user list.
Optionally, if user A has performed the live service associated with image data in the virtual room when performing the second operation on the control 20 in the display interface 200a, it may be understood that user A is a second type of user among the first users and the task state of user A is the first task state (i.e., the self-study-in-progress state); accordingly, user A belongs to the users in the first user list.
It can be seen that, in the embodiment of the present application, outputting the target user list in the sub-interface 200b can improve the service participation (for example, self-study participation) of user A performing the live service, and can encourage the second users in the virtual room to perform the live service, so as to change the task state of the second users.
For a specific implementation in which the second operation is performed on the first display interface of the video client to obtain the first user list and the second user list, generated respectively for the first users (for example, the first type users and the second type users) in the first task state and the second users in the second task state in the virtual room, and a target user list output on a list sub-interface independent of the first display interface is determined therefrom, reference may be made to the embodiments corresponding to fig. 3-10 below.
Further, referring to fig. 3, fig. 3 is a flow chart of a data processing method according to an embodiment of the application. As shown in fig. 3, the method may be performed by a computer device, e.g., a user terminal, integrated with a video client. The user terminal may be any one of the user terminals (e.g. user terminal 100 a) in the user terminal cluster shown in fig. 1a, and the method may at least comprise the following steps S101-S103:
Step S101, in response to a first operation for the video client, outputting the virtual room associated with the first operation to a first display interface of the video client.
Specifically, the user terminal may send a room acquisition request to the server corresponding to the video client in response to a first operation triggered for the video client. The room acquisition request may be used to instruct the server to configure a virtual room for a target user accessing the video client; the target user is the user performing the first operation, i.e., the user corresponding to the user terminal. At this time, the user terminal may receive the virtual room returned by the server, and may initialize the task state of the target user to the second task state (e.g., the not-self-studying state) when the target user enters the virtual room. Further, the user terminal may update the second users (e.g., onlooker users) in the virtual room according to the target user having the second task state, and output the updated virtual room to the first display interface of the video client.
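The room-entry flow above (receive the room, initialize the target user to the second task state, add the user to the room's second-user group) can be sketched as follows. Function name and data shapes are assumptions for illustration.

```python
def enter_room(server_rooms, room_id, user_id):
    """Room entry: the target user always starts in the second task state."""
    room = server_rooms[room_id]  # virtual room returned by the server
    # Initialize the target user's task state to the second task state
    # (not-self-studying) on entry.
    state = {"user_id": user_id, "task_state": "second"}
    # Update the room's second users with the newly entered target user.
    room["second_users"].append(user_id)
    return room, state
```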
The first display interface of the video client may include a first display area, a second display area and a third display area; the first display area may have a function of displaying live data of the first type of user; the second display area may comprise a first sub-area and a second sub-area; the first sub-region may have a function of displaying user image data of the target user; the second sub-area may have a function of presenting image data of the second type of user; the first type user and the second type user belong to a first user in a first task state in the virtual room; the third presentation area may have a function of presenting text auxiliary information in the virtual room.
For ease of understanding, further, please refer to fig. 4a, fig. 4a is an interface display diagram of a first display interface of a video client according to an embodiment of the present application. As shown in fig. 4a, the video client (e.g., online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated into the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be a target user. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
As shown in fig. 4a, the first display interface (display interface 400a) of the video client may include a control 1, a control 2, and a control 3. Control 1 may be used to obtain the first user list and the second user list; control 2 (i.e., the first service start control) may be used to perform the live service associated with live data (e.g., the on-mic self-study service); and control 3 (i.e., the second service start control) may be used to perform the live service associated with image data (e.g., the audience self-study service). Display interface 400a may further include display area 1 (i.e., the first display area), display area 2 (i.e., the second display area), and display area 3 (i.e., the third display area).
The first display area may have a function of displaying the live data of the first type of users (e.g., on-mic self-study users). As shown in fig. 4a, the display area 1 in the display interface 400a may include a threshold number of video display areas (e.g., 4), namely video display area 1, video display area 2, video display area 3, and video display area 4. Video display area 1, video display area 2, and video display area 3 all contain live data of first type users, while video display area 4 is temporarily empty. It should be understood that the target user may perform a triggering operation on the control 2 in the display area 3, so that the live data of the target user performing the live service can be displayed in video display area 4.
It should be understood that the number of second type users in the virtual room is not limited in the embodiment of the present application, whereas the number of first type users is: if the number of first type users in the first display area reaches the threshold, every video display area already contains the live data of a corresponding first type user. In other words, when the number of first type users reaches the number threshold, a target user entering the virtual room cannot perform the live service associated with live data (e.g., the on-mic self-study service). At this time, the target user may choose to perform the live service associated with image data (e.g., the audience self-study service), which frees the target user from the difficulty of scrambling for a video display area (e.g., scrambling for a slot) and thus improves the user experience.
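The availability rule above can be sketched as a simple check against the slot threshold. The threshold value of 4 matches the example in fig. 4a, and the service identifiers are hypothetical labels.

```python
SLOT_THRESHOLD = 4  # number of video display areas in the first display area (example)

def available_services(first_type_count: int) -> list:
    """Which live services a newly entered target user may start."""
    services = ["image_data_service"]      # audience self-study: always available
    if first_type_count < SLOT_THRESHOLD:  # a video display area is still free
        services.append("live_data_service")  # on-mic self-study
    return services
```

When all four video display areas are occupied, only the image-data service remains available to the entering user.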
As shown in fig. 4a, when second type users currently exist in the virtual room of the video client, the second presentation area of the first display interface of the video client may include a first sub-area and a second sub-area. It will be appreciated that presentation area 2 in the display interface 400a (i.e., the first display interface) may comprise sub-area 1 (i.e., the first sub-area) and sub-area 2 (i.e., the second sub-area). Sub-area 1 may have a function of displaying the user image data of the target user; sub-area 2 may have a function of presenting the image data of the second type of users (e.g., audience self-study users).
The third presentation area may have a function of presenting text auxiliary information in the virtual room. The presentation area 3 in the display interface 400a shown in fig. 4a may present the text auxiliary information, which may be notification information about a user in the virtual room entering the room, for example, "user XX entered the room". The text auxiliary information may also be a system notification message sent by the server, for example, "System announcement: to maintain a healthy live environment, sending any information related to advertising or similar content will be severely punished." The text auxiliary information may also be a real-time message sent by a user in the virtual room, e.g., "Quiet, please!". The text auxiliary information may also be information in other forms, which is not exhaustively illustrated here.
Optionally, if the number of second type users in the virtual room of the video client is zero, the display interface of the virtual room on the video client may be as shown in fig. 4b; fig. 4b is an interface display diagram of a first display interface of a video client according to an embodiment of the present application. As shown in fig. 4b, presentation area 4 may be the second presentation area of display interface 400b (i.e., the first display interface).
It should be appreciated that when the target user enters the virtual room and the number of second type users in the virtual room is zero, the presentation area 4 of the video client's display interface 400b may contain a sub-area 3 (i.e., a first sub-area) in which the target user's user image data is displayed. Text information with a guiding function may be output in sub-area 3 to prompt the target user to execute the live service in the virtual room, for example, "You can go on mic together with your friends."
It should be understood that if the target user belongs to a cold start user in the video client, the user terminal may receive service guide information configured by the server for the cold start user; a cold start user is a user without history access information in the video client, namely, a user accessing the video client for the first time. At this time, the user terminal may output the service guide information in a guide sub-interface independent of the first display interface. The guide sub-interface may be a floating window displayed on the first display interface in a superimposed manner, and it is understood that the data displayed on the guide sub-interface and the data displayed on the first display interface are independent of each other. The service guide information may be used to instruct the target user to adjust the task state in the virtual room. For example, the service guide information may be "Even without going on mic, you can join a team studying together; come and join now", thereby guiding a cold-start user who enters the virtual room to perform the live service.
Further, referring to fig. 4c, fig. 4c is an interface display diagram of a guiding sub-interface according to an embodiment of the present application. The display interface 400c in fig. 4c may be a first display interface of a virtual room of the video client.
It should be appreciated that if the target user belongs to the users accessing the video client for the first time (i.e., cold start users), the server may determine the number of first users (e.g., 15 people) of the virtual room and the service guide information configured for the target user. For example, the service guide information may be "Even without going on mic, you can join a team studying together; come and join now". Further, the server may transmit the service guide information and the number of first users to the user terminal. At this time, the user terminal may determine a guide sub-interface independent of the first display interface (e.g., sub-interface 400d independent of display interface 400c). Sub-interface 400d may be a floating window displayed on display interface 400c in a superimposed manner, and it is understood that the data displayed on sub-interface 400d and the data displayed on display interface 400c are independent of each other. At this time, the user terminal may output, in sub-interface 400d, the service guide information and text information generated based on the number of first users of the virtual room (e.g., "15 people are studying together"). The service guide information is used to instruct the target user to adjust the task state in the virtual room.
Further, the target user corresponding to the user terminal may perform the live service (e.g., an on-mic self-study service) associated with the live data. It should be appreciated that, in response to the target user triggering the first service initiation control in the first sub-area, the user terminal may send to the server a first initiation request associated with the live service corresponding to the first service initiation control, so that the server may generate first start prompt information associated with the live service in response to the first initiation request. Further, the user terminal may obtain the first start prompt information returned by the server, and output the first start prompt information as text auxiliary information to the third display area. At this time, the user terminal may adjust the task state of the target user from the second task state to the first task state, update the first type users among the first users according to the target user in the first task state, output live broadcast data of the updated first type users to the first display area, delete the first sub-area in the second display area, and output image data of the second type users according to the second display area and the second sub-area after deleting the first sub-area.
It can be understood that the first initiation request may also be used to instruct the server to determine, in the first display area, a target video display area for executing the live service; the updated first type users may include the target user together with the original first type users.
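A minimal sketch of this client-side state adjustment, under assumed names (`TASK_STUDYING` and `TASK_IDLE` standing in for the first and second task states; the patent does not name these constants):

```python
from dataclasses import dataclass

TASK_STUDYING = "first_task_state"   # executing the live service (on mic)
TASK_IDLE = "second_task_state"      # not executing the live service

@dataclass
class RoomUser:
    name: str
    task_state: str = TASK_IDLE

def start_live_service(target: RoomUser, first_type_users: list) -> list:
    """Adjust the target user from the second task state to the first task
    state and return the updated first type user list (target included)."""
    target.task_state = TASK_STUDYING
    return first_type_users + [target]
```

After the call, the target user appears at the end of the first type user list, matching the description that the updated list includes the target user and the existing first type users.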
It should be appreciated that the server may determine the target video display area based on the first initiation request, and may then return the target video display area to the user terminal, so that the user terminal may initiate a camera task for capturing the target user. Further, the user terminal may output the live broadcast data of the target user, captured while the camera task is executed, to the target video display area. At this time, the user terminal may delete the first sub-area in the second display area and expand the second sub-area in the second display area, so as to obtain the expanded second sub-area. The area size of the expanded second sub-area may be equal to the area size of the second display area. It can be understood that, while synchronously outputting the live broadcast data of the first type users in the first display area to which the target video display area belongs, the user terminal may output the image data of the second type users in the expanded second sub-area.
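Server-side selection of the target video display area might look like the following sketch. The first-free-slot policy is an assumption; the patent only states that the server determines the area:

```python
def assign_target_video_area(occupied: set, total_areas: int):
    """Return the index of the first free video display area in the first
    display area, or None when all areas already hold live data."""
    for area in range(total_areas):
        if area not in occupied:
            return area
    return None
```

For example, with video presentation areas 1 to 3 occupied out of four, the fourth area would be assigned, as in the scenario of fig. 5.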
Further, the server may record the start time stamp when the first start request is obtained, and send the start time stamp to the user terminal, so that the user terminal may start a timer corresponding to the video client when the user terminal starts to perform the image capturing task, and record the timing time stamp of the timer. It should be understood that the user terminal may take the live video duration of the live video data of the target user, the first task state of the target user, and the user image data of the target user as service data information, and may further output the service data information in the target video display area. Wherein the live video duration is determined by the difference between the timed timestamp and the start timestamp.
For ease of understanding, further, please refer to fig. 5, which is a schematic diagram of a scenario for executing a live service associated with live data according to an embodiment of the present application. As shown in fig. 5, the video client (e.g., an online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, a client integrated into user terminal 100a. The user corresponding to the user terminal in the embodiment of the present application may be user A. The server in the embodiment of the present application may be the server 10 shown in fig. 1a.
The first presentation area in display interface 500a (i.e., the first display interface) shown in fig. 5 may include a plurality of video presentation areas: video presentation area 1, video presentation area 2, video presentation area 3, and video presentation area 4. Video presentation areas 1, 2, and 3 may be used to present live data of first type users in the virtual room: video presentation area 1 may present the live data of user 50a, video presentation area 2 the live data of user 50b, and video presentation area 3 the live data of user 50c. At this time, the first type users (e.g., on-mic self-study users) among the first users in the virtual room may be user 50a, user 50b, and user 50c.
It should be understood that user A (i.e., the target user) corresponding to the user terminal may perform a trigger operation on the service initiation control (i.e., the first service initiation control) in display interface 500a shown in fig. 5. The user terminal may then invoke, in response to the trigger operation, a start-service protocol (for example, a start self-study protocol) in the service layer through the logic layer shown in fig. 1b, and send to the server a first initiation request associated with the live service corresponding to the service initiation control. When the server receives the first initiation request, it may respond to the request and generate first start prompt information associated with the live service. For example, the first start prompt information may be "User A has gone on mic to study."
Further, the server may send the first start prompt information to the user terminal, so that the user terminal may obtain the first start prompt information, use it as text auxiliary information, and output it to the third display area (for example, display area 3 in display interface 500b shown in fig. 5). At this time, the user terminal may adjust the task state of user A from the second task state (e.g., a not-studying state) to the first task state (e.g., a studying state) through the logic layer, and update the first type users among the first users according to user A in the first task state. At this time, the updated first type users among the first users in the virtual room may be user 50a, user 50b, user 50c, and user A.
It should be appreciated that the user terminal may output updated live data of the first type of user to the first presentation area (e.g., presentation area 1 in display interface 500b shown in fig. 5), delete the first sub-area in the second presentation area (e.g., presentation area 2 in display interface 500b shown in fig. 5), and output image data of the second type of user based on the second presentation area and the second sub-area after deleting the first sub-area.
It can be appreciated that upon receipt of the first initiation request, the server can determine a target video presentation area (e.g., video presentation area 4) for executing the live service in presentation area 1. It should be appreciated that the server may return the target video presentation area to the user terminal, so that the user terminal may initiate a camera task for capturing user A. Further, the user terminal may output the live broadcast data of user A, captured while the camera task is performed, to the target video presentation area. At this time, the user terminal may delete the first sub-area in display area 2 and expand the second sub-area in display area 2, so as to obtain the expanded second sub-area, whose area size may be equal to that of the second display area. It will be appreciated that in presentation area 1, to which the target video presentation area belongs, the user terminal may output the image data of second type users (e.g., viewer self-study users) in the expanded second sub-area while synchronously outputting the live data of the first type users. The user terminal where the target user is located may synchronize the live broadcast data of the target user to other terminals through the server, and may likewise receive, through the server, the live broadcast data of other first type users synchronized by other terminals and display it in the first display area (display area 1 of display interface 500b shown in fig. 5).
Further, the server may record a start timestamp (e.g., 12:28:00) when the first initiation request is obtained and send it to the user terminal, so that the user terminal may start a timer corresponding to the video client when it starts to perform the camera task and record the timing timestamp of the timer. It should be understood that the user terminal may take the live video duration of the live broadcast data of user A, the first task state of user A, and the user image data of user A as service data information, and output the service data information in the target video display area. It can be understood that if the timer records a timing timestamp of 12:48:00, the live video duration output in the target video presentation area may be the difference between the timing timestamp and the start timestamp, that is, 00:20:00. In other words, the target user has performed the live service for 20 minutes in the virtual room.
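The duration arithmetic in this example (12:48:00 minus 12:28:00 giving 00:20:00) can be reproduced directly; the helper name is illustrative:

```python
from datetime import datetime

def live_video_duration(start_ts: str, timing_ts: str) -> str:
    """Difference between the timer's timing timestamp and the start
    timestamp recorded by the server, formatted as HH:MM:SS."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(timing_ts, fmt) - datetime.strptime(start_ts, fmt)
    total = int(delta.total_seconds())
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"
```

The same subtraction yields the 2-hour-30-minute service duration in the later end-of-service example (14:58:00 minus 12:28:00).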
Further, the target user corresponding to the user terminal may end the live service (e.g., the on-mic self-study service) associated with the live data. It should be appreciated that the user terminal may respond to a trigger operation on the first service end control by sending a first end request associated with the first service end control to the server, so that the server generates first end prompt information in response to the first end request. At this time, the server may return the first end prompt information to the user terminal. When the user terminal obtains the first end prompt information, it may adjust the task state of the target user from the first task state (e.g., a studying state) to the second task state (a not-studying state), and may update the second type users among the first users according to the target user in the second task state.
Meanwhile, when the server obtains the first end request, it may record an end timestamp and determine the service duration of the camera task executed by the target user based on the difference between the start timestamp and the end timestamp. When the user terminal obtains the first end prompt information, it may stop executing the camera task and stop the timer corresponding to the video client. Further, the user terminal may determine a feedback sub-interface independent of the first display interface, and output to the feedback sub-interface the service duration obtained from the server and the configuration text information associated with the historical behavior data of the target user. The feedback sub-interface may be a floating window displayed on the first display interface in a superimposed manner, and it is understood that the data displayed on the feedback sub-interface is independent of the data displayed on the first display interface.
Further, the user terminal may send a heartbeat message packet to the server at regular intervals (e.g., every five minutes), so that the server may return a response packet corresponding to the message packet to the user terminal; the server may thereby determine the network state (e.g., online state) of the target user corresponding to the user terminal in the virtual room. If the server does not receive a message packet from the user terminal within five minutes, the server may determine that the user terminal corresponding to the target user has exited the virtual room, and may then automatically end the live service on behalf of the target user, adjusting the task state of the target user from the first task state to the second task state and recording the service duration of the live service executed by the target user. Optionally, in other scenarios (for example, a call scenario), if the display interface of the user terminal stays on the call interface rather than the first display interface of the virtual room for a period of time (for example, ten minutes), the server may likewise actively end the live service of the target user, causing the target user to exit the virtual room, and record the service duration of the live service executed by the target user.
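The heartbeat timeout above can be sketched as follows; the five-minute interval comes from the text, while the function and constant names are assumptions:

```python
HEARTBEAT_INTERVAL_S = 5 * 60  # client sends a message packet every five minutes

def user_has_exited(last_packet_time_s: float, now_s: float,
                    timeout_s: float = HEARTBEAT_INTERVAL_S) -> bool:
    """The server treats the user terminal as having exited the virtual
    room when no message packet arrives within the timeout window."""
    return (now_s - last_packet_time_s) > timeout_s
```

When this check fires, the server would adjust the target user's task state to the second task state and record the service duration, as described above.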
For ease of understanding, further, please refer to fig. 6, which is a schematic diagram of a scenario for ending a live service associated with live data according to an embodiment of the present application. As shown in fig. 6, the video client (e.g., an online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, a client integrated into user terminal 100a. The user corresponding to the user terminal in the embodiment of the present application may be user A. The server in the embodiment of the present application may be the server 10 shown in fig. 1a.
The display interface 600a shown in fig. 6 may be the display interface 500b shown in fig. 5. It is appreciated that an end of business control (i.e., a first end of business control) may be included in the third presentation area of the display interface 600a as shown in fig. 6.
It should be understood that user A (i.e., the target user) corresponding to the user terminal may perform a trigger operation on the service end control (i.e., the first service end control) in display interface 600a shown in fig. 6. The user terminal may then invoke, in response to the trigger operation, an end-service protocol (for example, an end self-study protocol) in the service layer through the logic layer shown in fig. 1b, and send to the server a first end request associated with the live service corresponding to the service end control. When the server receives the first end request, it may respond to the request and generate first end prompt information associated with the live service. For example, the first end prompt information may be "User A has gone off mic and ended studying."
At this time, the server may return the first end prompt information to the user terminal. When the user terminal obtains the first end prompt information, it may adjust the task state of user A from the first task state (e.g., a studying state) to the second task state (a not-studying state) through the logic layer, and the first type users and second type users among the first users may be updated according to the target user in the second task state.
Meanwhile, when the server obtains the first end request, it may record an end timestamp (for example, 14:58:00), determine the service duration (i.e., the live video duration, for example, 2 hours and 30 minutes) of the camera task executed by the target user based on the difference between the start timestamp (for example, 12:28:00) and the end timestamp, and at the same time configure associated configuration text information for user A based on the historical behavior data of user A. For example, if user A self-studies for more than an hour per day, the configuration text information of user A may be "You keep running hard, just to catch up with the self that was once given high hopes." If more than a certain time threshold (e.g., 3 days) has passed since user A last executed the live service, the configuration text information of user A may be "Only by persisting can you win." The configuration text information of user A is not listed exhaustively here.
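A sketch of this rule-based selection of configuration text. The thresholds follow the examples in the text; the text strings are loose re-translations, and the precedence between the two rules (and the fallback wording) is an assumption:

```python
def pick_configuration_text(avg_daily_study_h: float,
                            days_since_last_service: int) -> str:
    """Choose encouragement text from the user's historical behavior data.
    Rule order and fallback are assumptions, not specified in the text."""
    if days_since_last_service > 3:
        return "Only by persisting can you win."
    if avg_daily_study_h > 1.0:
        return ("You keep running hard, just to catch up with the self "
                "that was once given high hopes.")
    return "Keep it up!"  # hypothetical fallback wording
```

The selected text is then shown alongside the service duration in the feedback sub-interface (e.g., sub-interface 600c).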
Further, when the user terminal obtains the first ending prompt information, the user terminal can end executing the image capturing task and end the timer corresponding to the video client. At this time, the user terminal may determine a feedback sub-interface (for example, sub-interface 600c shown in fig. 6) independent of the first display interface, and further may output, to sub-interface 600c, the live video duration of the live video service executed by the user a at the present time and the configuration text information of the user a obtained from the server.
In addition, when the target user exits the virtual room, the user terminal may switch the display interface of the video client from the first display interface to the second display interface (the home page, i.e., the interface containing the cover data corresponding to the plurality of virtual rooms). Further, the user terminal may switch the display interface of the video client from the second display interface to a third display interface (e.g., a personal center page) in response to a triggering operation for the second display interface. Wherein the third display interface may include a business ranking control for retrieving a ranking list associated with the target user.
Further, the user terminal may respond to the trigger operation of the target user on the business ranking control by sending a ranking query request to the server corresponding to the video client. At this time, the server may, based on the ranking query request, acquire a first ranked list having a first association with the target user and a second ranked list having a second association with the target user. Users in the first ranked list may include users in the same geographic location area as the target user (e.g., the same city or the same province), and users in the second ranked list may include users having an interactive relationship (e.g., a bidirectional friend relationship) with the target user. The server may collectively refer to the first ranked list and the second ranked list as the ranked lists associated with the target user. The ranking of each user in a ranked list is determined based on the total service duration of each user within a certain time threshold (e.g., one week or one month). Further, the server may send the ranked lists to the user terminal, so that the user terminal may determine, from the first ranked list and the second ranked list returned by the server, a target ranked list for output to a fourth display interface of the video client. The target ranked list may be either of the first ranked list and the second ranked list. At this time, the user terminal may switch the display interface of the video client from the third display interface to the fourth display interface, determine the target ranking of the target user in the target ranked list, and output the target ranked list containing the target ranking to the fourth display interface.
Further, referring to fig. 7, fig. 7 is a schematic view of a scenario for obtaining a ranked list according to an embodiment of the present application. As shown in fig. 7, the video client (e.g., an online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, a client integrated into user terminal 100a. The user corresponding to the user terminal in the embodiment of the present application may be the target user. The server in the embodiment of the present application may be the server 10 shown in fig. 1a.
When the target user exits the virtual room, the user terminal may switch the display interface of the video client from the first display interface (e.g., display interface 200a shown in fig. 2) to the second display interface (e.g., home page, display interface 700a shown in fig. 7). The display interface 700a shown in fig. 7 may include a control 70 and a plurality of pieces of cover data corresponding to the virtual rooms, for example, cover data of the virtual room 1, cover data of the virtual room 2, cover data of the virtual room 3, and cover data of the virtual room 4. Wherein the control 70 may be used to access a personal center page of the target user.
Further, the user terminal may enter the personal center page of the target user in response to a trigger operation on control 70 in display interface 700a, i.e., the user terminal may switch the display interface of the video client from display interface 700a to display interface 700b (i.e., the third display interface). Display interface 700b may contain a control 71, which may be the business ranking control used to obtain the ranked list associated with the target user.
Further, the user terminal may respond to the trigger operation of the target user on control 71 by sending a ranking query request to the server corresponding to the video client. At this time, the server may, based on the ranking query request, acquire a first ranked list having a first association with the target user and a second ranked list having a second association with the target user. Users in the first ranked list (e.g., a same-city weekly list) may include users in the same geographic location area as the target user (e.g., the same city or the same province), and users in the second ranked list (e.g., a friends weekly list) may include users having an interactive relationship (e.g., a bidirectional friend relationship) with the target user. The server may collectively refer to the first ranked list and the second ranked list as the ranked lists associated with the target user. The ranking of each user in a ranked list is determined based on the total service duration of each user within a certain time threshold (e.g., within one week). For example, in display interface 700c shown in fig. 7, the weekly total service duration of user A may be a1 hours b1 minutes, for example, 30 hours 53 minutes, and the weekly total service duration of user B may be a2 hours b2 minutes, for example, 28 hours 16 minutes. It can thus be understood that the weekly total service duration of user A is longer than that of user B, that is, user A is ranked before user B. The remaining user rankings in display interface 700c follow similarly and will not be described in detail.
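Ranking by total service duration within the time window, as in the user A / user B example, reduces to a descending sort; the function name and dictionary shape are illustrative:

```python
def rank_by_total_duration(total_seconds: dict) -> list:
    """Sort users by total service duration within the time threshold,
    longest first, so 30h53m ranks before 28h16m."""
    return sorted(total_seconds, key=total_seconds.__getitem__, reverse=True)
```

The same routine serves both the first ranked list (same-city users) and the second ranked list (friends), since only the candidate user set differs.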
Further, the server may send the ranked lists to the user terminal, so that the user terminal may determine, from the first ranked list and the second ranked list returned by the server, a target ranked list for output to a fourth display interface (e.g., display interface 700c shown in fig. 7) of the video client. The target ranked list may be either of the first ranked list and the second ranked list. At this time, the user terminal may switch display interface 700b to display interface 700c, determine the target ranking of the target user in the target ranked list, and output the target ranked list containing the target ranking to display interface 700c, so that the target user obtains a greater sense of achievement when querying the target ranking.
Step S102, in response to a second operation for the first display interface, a first user list associated with a first task state is acquired, and a second user list associated with a second task state is acquired.
Specifically, the user terminal may send a list pull request to the server corresponding to the video client in response to the second operation for the first display interface. The list pull request may be used to instruct the server to obtain a first initial user list associated with the first task state and a second initial user list associated with the second task state. The first initial user list may include the first users, and the first users may include first type users and second type users; the second initial user list may include the second users. At this time, the server may return the first initial user list and the second initial user list to the user terminal. Further, the user terminal may take the live video duration of the first type users and the image display duration of the second type users as the service durations of the first users in the first initial user list, sort the first users in the first initial user list accordingly, and determine the sorted first initial user list as the first user list. Meanwhile, the user terminal may also obtain the access timestamp at which each second user in the second initial user list entered the virtual room, configure a corresponding time decay factor for the access timestamp of each second user, and sort the second users in the second initial user list based on the time decay factors, so that the sorted second initial user list may be determined as the second user list.
It should be appreciated that, as shown in fig. 2, the user terminal may invoke the logic layer in response to the second operation for display interface 200a, sending a list pull request to the server corresponding to the video client. At this time, the server may, based on the list pull request, filter the first users having the first task state (i.e., the first type users and the second type users) and the second users having the second task state in the virtual room displayed on display interface 200a, and may thus generate a first initial user list based on the first users in the virtual room and a second initial user list based on the second users in the virtual room.
Further, the server may send the first initial user list and the second initial user list to the user terminal. At this time, the user terminal may take the live video duration of the first type users and the image display duration of the second type users as the service durations of the first users in the first initial user list, sort the first users in the first initial user list accordingly, and determine the sorted first initial user list as the first user list. This enhances the first users' sense of participation in executing the live service, and at the same time may also motivate the second users to execute the live service and thus convert into first users.
Meanwhile, the user terminal may also obtain the access timestamp at which each second user in the second initial user list entered the virtual room, configure a corresponding time decay factor for the access timestamp of each second user, and sort the second users in the second initial user list based on the time decay factors, so that the sorted second initial user list may be determined as the second user list. It will be appreciated that the later the access timestamp of a second user, the smaller the time decay factor configured for that user. For example, among user A and user B of the second users, the access timestamp of user A is 12:05:00 and the access timestamp of user B is 13:32:09; in this case, the user terminal ranks user B before user A when sorting user A and user B. In other words, the later a second user accesses the virtual room, the earlier that user ranks in the second user list.
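Because a later access timestamp gets a smaller time decay factor, sorting the second users by ascending decay factor is equivalent to sorting by descending access timestamp. The sketch below assumes `HH:MM:SS` strings, which compare correctly as text; the function name is illustrative:

```python
def sort_second_users(access_timestamps: dict) -> list:
    """Order the second initial user list so that later entrants to the
    virtual room (smaller time decay factor) rank earlier."""
    return sorted(access_timestamps,
                  key=access_timestamps.__getitem__, reverse=True)
```

With the example timestamps, user B (13:32:09) is placed before user A (12:05:00).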
It will be appreciated that if the target user triggers the second operation on the first display interface while the target user has not yet performed the live task in the virtual room, the target user belongs to the second users, i.e., the target user is in the second user list. If the target user triggers the second operation on the first display interface while the target user is executing a live task in the virtual room, the target user belongs to the first users, i.e., the target user is in the first user list.
Step S103, determining a target user list for outputting to a list sub-interface independent of the first display interface from the first user list and the second user list.
Specifically, the user terminal may determine, according to the first user list and the second user list, a target user list for outputting to a list sub-interface independent of the first display interface, and may further output the target user list to the list sub-interface. The target user list may be any one of the first user list and the second user list. The list sub-interface may be a floating window that is displayed on the first display interface in a superimposed manner, and it is understood that the data displayed on the list sub-interface is independent of the data displayed on the first display interface.
Furthermore, in order to enhance the sense of online companionship among the first-type users, the second-type users, and the second users in the virtual room, each user in the virtual room may interact with the other users in the virtual room.
For example, each user in the virtual room may input real-time information in a third presentation area on the first display interface of the virtual room to be presented as text auxiliary information in the third presentation area of the first display interface.
For example, the target user may encourage a user in the virtual room: the target user may perform a trigger operation on the region of the first display interface where the avatar data of that user (e.g., user a) is displayed, so that user a is encouraged. At this time, auxiliary text information such as "the target user encourages user a" may be displayed in the third display area of the first display interface. The target user may also perform a trigger operation on a user (e.g., user b) in the target user list displayed on the list sub-interface independent of the first display interface, thereby encouraging user b. At this time, auxiliary text information such as "the target user encourages user b" may be displayed in the third display area of the first display interface.
In addition, additional tools can be added to the video client so that a user accessing the video client is more immersed in the live service. For example, a timing tool for the live task may be displayed full screen on a display interface of the video client. Optionally, the user terminal may add a learning-plan area or a wrong-question recording area in the video client. At this time, the target user corresponding to the user terminal may enter, in the learning-plan area, the planned service duration for executing the live task, and the user terminal may remind the target user to end the live task when the elapsed time reaches the planned service duration. The target user may also enter, in the wrong-question recording area, the error-prone questions recorded while executing the live task, which helps the target user master the error-prone knowledge points.
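A minimal sketch of the planned-duration reminder, assuming a simple timer-based mechanism (`schedule_end_reminder` is a hypothetical helper; the patent does not specify how the reminder is implemented):

```python
import threading

def schedule_end_reminder(planned_minutes, notify):
    """Remind the target user to end the live task once the planned
    service duration entered in the learning-plan area has elapsed.
    This is an illustrative sketch, not the patented mechanism."""
    timer = threading.Timer(planned_minutes * 60.0, notify)
    timer.daemon = True
    timer.start()
    return timer
```

In a real client the `notify` callback would raise the reminder in the interface; here it can be any callable.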
In an embodiment of the application, the computer device may output the virtual room associated with the first operation to a first display interface of the video client in response to the first operation for the video client. It should be appreciated that the embodiments of the present application may collectively refer to users who are performing a live task in the virtual room (e.g., users who are self-studying) as first users, and collectively refer to users who are not performing a live task in the virtual room (e.g., users who are watching) as second users. A friendly human-machine interaction interface can thus be provided for a user operating the video client, helping the user quickly check the task states of different users in the virtual room, thereby enriching the display effect of user data in the virtual room.
Further, referring to fig. 8, fig. 8 is a flow chart of a data processing method according to an embodiment of the application. As shown in fig. 8, the method may be performed by a computer device integrated with a video client, e.g., a user terminal. The user terminal may be any one of the user terminals (e.g., user terminal 100a) in the user terminal cluster shown in fig. 1a described above. The method may comprise the following steps:
Step S201, responding to a first operation aiming at the video client, and outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
wherein the target user corresponding to the user terminal may execute a live service (e.g., an on-mic self-study service) associated with the live data; in this case, the method may at least include the following steps S202-S205:
Step S202, in response to a first service start control triggered by the target user in the first sub-area, sending a first start request associated with the live service corresponding to the first service start control to the server, so that the server responds to the first start request and generates first start prompt information associated with the live service;
Step S203, acquiring the first start prompt information returned by the server based on the first start request, and outputting the first start prompt information as text auxiliary information to the third display area;
Step S204, the task state of the target user is adjusted from the second task state to the first task state, and the first type user in the first user is updated according to the target user in the first task state;
Step S205, outputting the updated live broadcast data of the first type user to the first display area, deleting the first subarea in the second display area, and outputting the image data of the second type user according to the second display area and the second subarea after deleting the first subarea.
It should be understood that, when the user terminal finishes step S205, the user terminal may jump to execute the following steps S206 to S207:
Step S206, responding to a second operation aiming at the first display interface, acquiring a first user list associated with a first task state, and acquiring a second user list associated with a second task state; the first user list comprises first users; the second user list comprises second users;
Step S207, determining a target user list which is used for being output to a list sub-interface independent of the first display interface from the first user list and the second user list.
The specific implementation of the steps S201 to S207 may be referred to the description of the steps S101 to S103 in the embodiment corresponding to fig. 3, and will not be repeated here.
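The task-state adjustment in step S204 (and likewise step S210) can be sketched as a simple set update; the class and attribute names below are illustrative, not from the patent:

```python
class VirtualRoomState:
    """Minimal sketch: moving the target user from the second task
    state (not performing the live task) to the first task state
    (performing it) and updating the user lists accordingly."""

    def __init__(self, first_users=None, second_users=None):
        self.first_users = set(first_users or [])    # first task state
        self.second_users = set(second_users or [])  # second task state

    def start_live_task(self, target_user):
        # Adjust the task state: the target user leaves the second
        # users and joins the first users.
        self.second_users.discard(target_user)
        self.first_users.add(target_user)

room = VirtualRoomState(first_users={"user 90a"}, second_users={"target"})
room.start_live_task("target")
# "target" is now among the first users and no longer a second user
```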
Alternatively, the target user corresponding to the user terminal may perform a live service (e.g., a viewer self-study service) associated with the image data. The method may include at least the following steps S208-S211:
Step S208, responding to a second service starting control triggered by a target user in the first subarea, and sending a second starting request associated with a live service corresponding to the second service starting control to the server so that the server responds to the second starting request and generates second starting prompt information associated with the live service;
Step S209, acquiring the second start prompt information returned by the server based on the second start request, and outputting the second start prompt information as text auxiliary information to the third display area;
Step S210, adjusting the task state of the target user from the second task state to the first task state, and recording, through a timer corresponding to the video client, the image display duration of the target user having the first task state;
Step S211, configuring virtual animation data for the user image data of the target user having the first task state, taking the virtual animation data, the image display duration, and the user image data of the target user in the first task state as the image data of the target user, synchronously outputting the image data of the second-type users in the second sub-area when the image data of the target user is output in the first sub-area, and outputting the live broadcast data of the first-type users in the first display area.
When the user terminal finishes executing step S211, the user terminal may jump to execute step S206-step S207, which will not be described further herein.
It should be understood that, after the target user corresponding to the user terminal obtains the first user list and the second user list, the live broadcast service in the virtual room may be executed. In other words, the user terminal may perform step S201, and further jump to step S206-step S207 to acquire the first user list and the second user list. Further, the user terminal may perform steps S202-S205 to enable the target user to perform a live service associated with live data in the virtual room. Alternatively, the user terminal may also perform steps S208-S211 to enable the target user to perform live services associated with the image data in the virtual room.
For ease of understanding, further, please refer to fig. 9, which is a timing chart of a target user performing a live service according to an embodiment of the present application. As shown in fig. 9, the video client (e.g., an online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a; for example, the video client may be a client integrated into the user terminal 100a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1a.
It should be appreciated that, as shown in fig. 9, in a first display interface in a virtual room of a video client, user a (i.e., a target user) may perform live services in the virtual room. It will be appreciated that the user a may perform the triggering operation for the service initiation control (e.g., the first service initiation control or the second service initiation control), at which time the user terminal may perform step S911 in response to the triggering operation for the service initiation control. The triggering operation may include a touch operation such as clicking or long pressing, or may include a non-touch operation such as voice or gesture. Further, the user terminal may execute step S912, call the start service protocol in the service layer through the start live service logic in the logic layer, and send a start request to the server through the service layer. The first start request and the second start request may be collectively referred to as start requests in the embodiments of the present application.
At this time, the server may execute step S913 to receive the start request forwarded by the service layer, and may generate the start prompt message. The generated start prompt information may be referred to as a first start prompt information based on the first start request by the server, and the generated start prompt information may be referred to as a second start prompt information based on the second start request by the server. The embodiment of the application can collectively refer to the first start prompt information and the second start prompt information as start prompt information. Further, the server may execute step S914 to send the startup prompt message to the user terminal through the service layer.
It should be understood that the user terminal may perform step S915 and receive, through the logic layer, the start prompt information sent by the server. The user terminal may then perform step S916 based on the start prompt information, so that a timer corresponding to the video client may be started, and perform step S917 when the user terminal receives the start prompt information, so that the task state of the target user may be adjusted from the second task state to the first task state. Meanwhile, the user terminal may perform step S918 to output, through the presentation layer, the service data (live data or image data) of the target user performing the live service on the first display interface of the video client. If the live service executed by the target user is a live service (e.g., an on-mic self-study service) associated with the live data, the first display interface may be as shown in display interface 500b in fig. 5; if the live service performed by the target user is a live service (e.g., a viewer self-study service) associated with the image data, the first display interface may be as shown in display interface 900b in fig. 10 below.
Further, in a first display interface in a virtual room of a video client, user a (i.e., a target user) may end a live service performed in the virtual room. It should be understood that, the target user may perform a triggering operation for a service end control (a first service end control or a second service end control) in the first display interface of the video client, at this time, the user terminal may perform step S919, respond to the triggering operation for the service end control, and may further perform step S920, call an end service protocol in the service layer through an end live service logic in the logic layer, and send an end request to the server. The first end request and the second end request may be collectively referred to as end requests in the embodiments of the present application.
At this time, the server may execute step S921, receive the end request forwarded by the service layer, and generate the end prompt information. The generated end prompt information may be referred to as a first end prompt information based on the first end request, and the generated end prompt information may be referred to as a second end prompt information based on the second end request. The embodiment of the application can collectively refer to the first ending prompt information and the second ending prompt information as ending prompt information. Further, the server may perform step S922 to send the end prompt message to the user terminal through the service layer.
Further, the user terminal may execute step S923 and receive, through the logic layer, the end prompt information sent by the server. It should be understood that the user terminal may perform step S924 based on the end prompt information, thereby ending the timer corresponding to the video client. Meanwhile, when the user terminal receives the end prompt information, step S925 may be executed to adjust the task state of user a from the first task state to the second task state. Step S926 may further be executed, in which the user terminal outputs, through the presentation layer, the service data after user a finishes the live service (for example, the service duration of executing the live service and the configured text information) on a feedback sub-interface of the video client that is independent of the first display interface. If the live service executed by user a is a live service (e.g., an on-mic self-study service) associated with the live data, the feedback sub-interface may be as shown in the display interface 600c shown in fig. 6; if the live service performed by user a is a live service (e.g., a viewer self-study service) associated with the image data, the feedback sub-interface may be as shown in the display interface 900d shown in fig. 10 below.
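The start/end sequence of fig. 9 can be sketched with stub classes; the prompt wording loosely follows the examples in the text, and everything else (class names, state strings) is an illustrative assumption rather than the patented implementation:

```python
class StubServer:
    """Stand-in for the server side of fig. 9 (steps S913/S921):
    generate start and end prompt information from the requests."""
    def handle_start(self, user):
        return f"{user} starts audience self-learning"
    def handle_end(self, user):
        return f"{user} ends audience self-learning"

class UserTerminal:
    """Stand-in for the terminal side of fig. 9."""
    def __init__(self, server):
        self.server = server
        self.timer_running = False
        self.task_state = "second"  # not yet performing the live task

    def trigger_start(self, user):
        prompt = self.server.handle_start(user)  # steps S912-S915
        self.timer_running = True                # S916: start the timer
        self.task_state = "first"                # S917: adjust task state
        return prompt

    def trigger_end(self, user):
        prompt = self.server.handle_end(user)    # steps S920-S923
        self.timer_running = False               # S924: end the timer
        self.task_state = "second"               # S925: restore task state
        return prompt

terminal = UserTerminal(StubServer())
start_prompt = terminal.trigger_start("user a")
end_prompt = terminal.trigger_end("user a")
```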
It should be appreciated that the target user corresponding to the user terminal may perform a live service (e.g., a viewer self-study service) associated with the image data on the first display interface of the video client. It can be appreciated that the user terminal can respond to the second service start control triggered by the target user in the first sub-area and send a second start request associated with the live service corresponding to the second service start control to the server, so that the server can respond to the second start request and generate second start prompt information associated with the live service. At this time, the server may return the second start prompt information to the user terminal. When the user terminal obtains the second start prompt information, the user terminal can output the second start prompt information as text auxiliary information to the third display area.
Meanwhile, the user terminal can adjust the task state of the target user from the second task state (for example, a state without self-learning) to the first task state (for example, a state in self-learning), and record the image display duration of the target user with the first task state through a timer corresponding to the video client. Further, the user terminal may configure the virtual animation data (for example, a flip small animation) for the user image data of the target user having the first task state, and may further use the virtual animation data, the image display duration, and the user image data of the target user in the first task state as the image data of the target user, and may further output the image data of the second type user in synchronization in the second sub-area and output the live broadcast data of the first type user in the first presentation area when the image data of the target user is output in the first sub-area.
For ease of understanding, further, please refer to fig. 10, which is a schematic diagram of a scenario in which a live service associated with image data is executed according to an embodiment of the present application. As shown in fig. 10, the video client (e.g., an online study room) in the embodiment of the present application may be integrated into any one of the user terminals in the user terminal cluster shown in fig. 1a; for example, the video client may be a client integrated into the user terminal 100a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1a.
A first sub-region (e.g., sub-region 1) and a second sub-region (e.g., sub-region 2) may be contained in the second presentation area in the display interface 900a shown in fig. 10. Sub-region 1 may include the user image data of user a and a service start control (i.e., the second service start control). Sub-region 2 may be used for presenting the image data of the second-type users. In the virtual room of the video client, the second-type users among the first users may be user 90a, user 90b, user 90c, and user 90d.
It should be understood that user a (i.e., the target user) corresponding to the user terminal may perform a triggering operation for the service start control (i.e., the second service start control) in the display interface 900a shown in fig. 10, and the user terminal may then, in response to the triggering operation, invoke a start service protocol (for example, a start self-study protocol) in the service layer through the logic layer shown in fig. 1b, and send a second start request associated with the live service corresponding to the service start control to the server. At this time, when the server receives the second start request, the server may respond to the second start request and generate second start prompt information associated with the live service. For example, the second start prompt information may be "user a starts audience self-learning".
Further, the server may send the second start prompt information to the user terminal, so that the user terminal may obtain the second start prompt information, use it as text auxiliary information, and output it to the third display area in the display interface 900b. At this time, the user terminal may adjust the task state of user a from the second task state (for example, a state of not self-studying) to the first task state (for example, a state of self-studying) through the logic layer, and record, through a timer corresponding to the video client, the image display duration of the target user having the first task state. Further, the user terminal may update the second-type users among the first users according to user a in the first task state. At this time, the updated second-type users among the first users in the virtual room may be user 90a, user 90b, user 90c, user 90d, and user a. The method for determining the image display duration may refer to the method for determining the video live broadcast duration and will not be described in detail herein.
Further, the user terminal may configure the virtual animation data (for example, a small page-turning animation) for the user image data of user a having the first task state, and may thus use the virtual animation data, the image display duration, and the user image data of user a in the first task state as the image data of user a. The user terminal may then output the image data of the second-type users in sub-area 4 simultaneously when the image data of user a is output in sub-area 3 of the display interface 900b, and output the live broadcast data of the first-type users in the first presentation area of the display interface 900b. In the embodiment of the application, configuring the virtual animation data for the second-type users enhances the dynamic sense of the second-type users executing the live service, and viewing the virtual animation data in the display interface 900b improves the experience of users executing the live service.
Sub-area 3 in the second presentation area of the display interface 900b shown in fig. 10 may contain a service end control (i.e., the second service end control). It should be understood that when user a finishes the live service associated with the image data, a triggering operation may be performed for the service end control in the display interface 900b shown in fig. 10, and the user terminal may then, in response to the triggering operation, call an end service protocol (for example, an end self-study protocol) in the service layer through the logic layer shown in fig. 1b, and send a second end request associated with the live service corresponding to the service end control to the server. At this time, when the server receives the second end request, the server may respond to the second end request and generate second end prompt information associated with the live service. For example, the second end prompt information may be "user a ends audience self-learning".
At this time, the server may return the second end prompt information to the user terminal. When the user terminal acquires the second end prompt information, the user terminal can adjust the task state of user a from the first task state (for example, a state of self-studying) to the second task state (a state of not self-studying) through the logic layer, and can update the second-type users among the first users and the second users according to the target user in the second task state. Meanwhile, when the server acquires the second end request, the server may record an end timestamp, determine the service duration (i.e., the image display duration) of the target user executing the live service based on the difference between the start timestamp and the end timestamp, and configure associated configuration text information for user a based on the historical behavior data of user a. Further, when the user terminal obtains the second end prompt information, the user terminal can end executing the live task and end the timer corresponding to the video client. At this time, the user terminal may determine a feedback sub-interface independent of the first display interface (for example, sub-interface 900d independent of display interface 900c shown in fig. 10), and may further output, to sub-interface 900d, the image display duration of user a executing the live service and the configuration text information of user a acquired from the server.
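The server-side duration computation described above (difference between the recorded start and end timestamps) can be sketched as follows; the `HH:MM:SS` timestamp format reuses the example timestamps from the sorting discussion and is an assumption:

```python
from datetime import datetime

def service_duration_seconds(start_ts, end_ts, fmt="%H:%M:%S"):
    """Service duration (e.g., the image display duration) as the
    difference between the start timestamp recorded at the start
    request and the end timestamp recorded at the end request."""
    start = datetime.strptime(start_ts, fmt)
    end = datetime.strptime(end_ts, fmt)
    return int((end - start).total_seconds())

duration = service_duration_seconds("12:05:00", "13:32:09")  # 5229 seconds
```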
In an embodiment of the application, the computer device may output the virtual room associated with the first operation to a first display interface of the video client in response to the first operation for the video client. It should be appreciated that the embodiments of the present application may collectively refer to users who are performing a live task in the virtual room (e.g., users who are self-studying) as first users, and collectively refer to users who are not performing a live task in the virtual room (e.g., users who are watching) as second users. A friendly human-machine interaction interface can thus be provided for a user operating the video client, helping the user quickly check the task states of different users in the virtual room, thereby enriching the display effect of user data in the virtual room.
Further, referring to fig. 11, fig. 11 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 11, the data processing apparatus 1 may be a computer program (including program code) running in a computer device, for example, the data processing apparatus 1 is an application software; the data processing device 1 may be adapted to perform the respective steps of the method provided by the embodiments of the application. As shown in fig. 11, the data processing apparatus 1 may be operated in a user terminal, which may be any one of the user terminals in the user terminal cluster in the embodiment corresponding to fig. 1a, for example, the user terminal 100a. The data processing apparatus 1 may include: the first output module 11, the first acquisition module 12, the first determination module 13, the first transmission module 14, the second acquisition module 15, the first adjustment module 16, the second output module 17, the third acquisition module 18, the second determination module 19, the third output module 20, the second transmission module 21, the fourth acquisition module 22, the ending module 23, the third determination module 24, the third transmission module 25, the fifth acquisition module 26, the second adjustment module 27, the configuration module 28, the first receiving module 29, the fourth output module 30, the sixth acquisition module 31, the updating module 32, the fourth determination module 33, the first interface switching module 34, the second interface switching module 35, the fourth transmission module 36, the fifth output module 37, and the sixth output module 38.
The first output module 11 is configured to output, in response to a first operation for the video client, a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state.
Wherein the first output module 11 comprises: a first transmitting unit 111, a first receiving unit 112, and an updating unit 113.
The first sending unit 111 is configured to send a room acquisition request to a server corresponding to the video client in response to a first operation triggered by the video client; the room acquisition request is used for instructing the server to configure a virtual room for a target user accessing the video client; the target user is a user who executes a first operation;
The first receiving unit 112 is configured to receive the virtual room returned by the server, and initialize a task state of the target user to a second task state when the target user enters the virtual room;
the updating unit 113 is configured to update a second user in the virtual room according to the target user having the second task state, and output the updated virtual room to the first display interface of the video client.
The specific implementation manner of the first transmitting unit 111, the first receiving unit 112, and the updating unit 113 may be referred to the description of step S101 in the embodiment corresponding to fig. 3, and the detailed description will not be repeated here.
The first obtaining module 12 is configured to obtain a first user list associated with a first task state and obtain a second user list associated with a second task state in response to a second operation for the first display interface; the first user list comprises first users; the second user list comprises second users;
Wherein the first acquisition module 12 comprises: a second transmitting unit 121, a second receiving unit 122, a first sorting processing unit 123, and a second sorting processing unit 124.
The second sending unit 121 is configured to send a list pulling request to a server corresponding to the video client in response to a second operation for the first display interface; the list pulling request is used for indicating the server to acquire a first initial user list associated with a first task state and a second initial user list associated with a second task state;
The second receiving unit 122 is configured to receive a first initial user list and a second initial user list returned by the server; the first initial user list comprises a first user, and the first user comprises a first type user and a second type user; the second initial user list comprises a second user;
The first sorting processing unit 123 is configured to sort the first users in the first initial user list by using the live video time length of the first type user and the image display time length of the second type user as the service time length of the first users in the first initial user list, and determine the first initial user list after the sorting processing as the first user list;
The second sorting unit 124 is configured to obtain access time stamps of the second users in the second initial user list entering the virtual room, configure a corresponding time attenuation factor for the access time stamp corresponding to the second users, perform sorting processing on the second users in the second initial user list based on the time attenuation factor, and determine the sorted second initial user list as the second user list.
The specific implementation manner of the second sending unit 121, the second receiving unit 122, the first sorting unit 123 and the second sorting unit 124 may refer to the description of step S102 in the embodiment corresponding to fig. 3, and the detailed description will not be repeated here.
The first determining module 13 is configured to determine, from the first user list and the second user list, a target user list for outputting to a list sub-interface independent of the first display interface.
The first display interface comprises a first display area, a second display area and a third display area; the first display area has the function of displaying live broadcast data of a first type of user; the second display area comprises a first subarea and a second subarea; the first subarea has a function of displaying user image data of a target user; the second sub-area has a function of displaying image data of the second type of user; the first type user and the second type user belong to a first user in a first task state in the virtual room; the third display area has a function of displaying text auxiliary information in the virtual room;
The first sending module 14 is configured to respond to a first service start control triggered by a target user in the first sub-area, and send a first start request associated with a live service corresponding to the first service start control to the server, so that the server responds to the first start request and generates first start prompt information associated with the live service;
The second obtaining module 15 is configured to obtain first start prompt information returned by the server based on the first start request, and output the first start prompt information as text auxiliary information to the third display area;
The first adjustment module 16 is configured to adjust the task state of the target user from the second task state to the first task state, and update the first type user among the first users according to the target user in the first task state;
The second output module 17 is configured to output the updated live broadcast data of the first type user to the first display area, delete the first sub-area in the second display area, and output the image data of the second type user according to the second display area and the second sub-area after deleting the first sub-area.
The first starting request is also used for instructing the server to determine a target video display area for executing the live broadcast service in the first display area; the updated first type users comprise the target user and the original first type users;
The second output module 17 includes: an acquisition unit 171, an area expansion unit 172, and an output unit 173.
The acquiring unit 171 is configured to acquire a target video display area returned by the server based on the first start request, and start an image capturing task for capturing a target user;
The region expansion unit 172 is configured to, when outputting live broadcast data of the target user captured during execution of the image capturing task to the target video display area and deleting the first sub-area in the second display area, perform region expansion on the second sub-area in the second display area to obtain an expanded second sub-area; the area size of the expanded second sub-area is equal to the area size of the second display area;
the output unit 173 is configured to output, in the expanded second sub-area, image data of the second type user when live broadcast data of the first type user is synchronously output in the first display area to which the target video display area belongs.
The specific implementation manner of the obtaining unit 171, the area expanding unit 172 and the output unit 173 may refer to the description of the live broadcast data of the output target user in the embodiment corresponding to fig. 5, and will not be further described herein.
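As an illustrative sketch (the `Rect` type, coordinates, and function name are assumptions, not from the disclosure), the region expansion described above amounts to letting the second sub-area inherit the full bounds of the second display area once the first sub-area is deleted:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def expand_second_subarea(second_display: Rect, deleted_first_sub: Rect) -> Rect:
    """After the first sub-area is deleted from the second display area,
    expand the second sub-area so that its size equals the size of the
    second display area, as the source specifies.
    """
    # The expanded second sub-area simply takes over the full bounds of
    # the second display area; `deleted_first_sub` is shown for context.
    return Rect(second_display.x, second_display.y,
                second_display.w, second_display.h)

second_display = Rect(0, 0, 400, 300)
first_sub = Rect(0, 0, 400, 150)
expanded = expand_second_subarea(second_display, first_sub)
# expanded now spans the whole second display area (400 x 300).
```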
The third obtaining module 18 is configured to obtain a start time stamp recorded by the server when the first start request is obtained, start a timer corresponding to the video client when the image capturing task starts to be executed, and record a timing time stamp of the timer;
The second determining module 19 is configured to take, as service data information, a live video duration of live broadcast data of the target user, a first task state of the target user, and user image data of the target user; the live video time length is determined by the difference between the timing time stamp and the starting time stamp;
The third output module 20 is configured to output service data information in the target video display area.
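A minimal sketch of the timestamp bookkeeping described above, assuming server-recorded timestamps in seconds (the class and attribute names are illustrative, not from the disclosure):

```python
class LiveSessionTimer:
    """Tracks a live session's durations from recorded timestamps."""

    def __init__(self, start_ts):
        # `start_ts` is the start timestamp the server records when it
        # obtains the first start request; the client timer mirrors it.
        self.start_ts = start_ts
        self.end_ts = None

    def tick(self, now):
        # Live video duration so far: the difference between the
        # timing timestamp and the start timestamp.
        return now - self.start_ts

    def stop(self, end_ts):
        # Service duration: the difference between the end timestamp
        # and the start timestamp.
        self.end_ts = end_ts
        return self.end_ts - self.start_ts

timer = LiveSessionTimer(start_ts=100.0)
elapsed = timer.tick(now=160.0)   # live video duration so far
total = timer.stop(end_ts=400.0)  # service duration for the session
```

The same difference computation covers both the live video duration output in the target video display area and the service duration later shown in the feedback sub-interface.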
The second sending module 21 is configured to send, to the server, a first end request associated with the first service end control in response to a triggering operation for the first service end control, so that the server responds to the first end request and generates first end prompt information;
The fourth obtaining module 22 is configured to obtain first end prompt information returned by the server based on the first end request, adjust a task state of the target user from the first task state to a second task state, and update a second type user in the first user according to the target user in the second task state;
The ending module 23 is configured to obtain an ending timestamp recorded by the server when the server obtains the first ending request, and end a timer corresponding to the video client when ending executing the image capturing task;
The third determining module 24 is configured to determine a feedback sub-interface independent of the first display interface, and output, to the feedback sub-interface, a service duration determined by the server for the target user to perform the camera task and configuration text information associated with historical behavior data of the target user; the duration of the service is determined by the difference between the start time stamp and the end time stamp.
The first display interface comprises a first display area, a second display area and a third display area; the first display area has the function of displaying live broadcast data of a first type of user; the second display area comprises a first subarea and a second subarea; the first subarea has a function of displaying user image data of a target user; the second sub-area has a function of displaying image data of the second type of user; the first type user and the second type user belong to a first user in a first task state in the virtual room; the third display area has a function of displaying text auxiliary information in the virtual room;
the third sending module 25 is configured to send, to the server, a second start request associated with a live service corresponding to a second service start control in response to a second service start control triggered by a target user in the first sub-area, so that the server responds to the second start request and generates second start prompt information associated with the live service;
The fifth obtaining module 26 is configured to obtain second start prompt information returned by the server based on the second start request, and output the second start prompt information as text auxiliary information to the third display area;
the second adjusting module 27 is configured to adjust the task state of the target user from the second task state to the first task state, and record, by using a timer corresponding to the video client, an image display duration of the target user having the first task state;
the configuration module 28 is configured to configure virtual animation data for user image data of a target user having a first task state, take the virtual animation data, an image display duration, and the user image data of the target user in the first task state as the image data of the target user, output image data of a second type of user synchronously in a second sub-area when the image data of the target user is output in the first sub-area, and output live broadcast data of the first type of user in the first display area.
The first receiving module 29 is configured to receive service guide information configured by the server for the cold start user if the target user belongs to the cold start user in the video client; the cold start user is a user without history access information in the video client;
The fourth output module 30 is configured to output the service guiding information in a guiding sub-interface independent of the first display interface; the traffic guidance information is used to instruct the target user to adjust the task state in the virtual room.
The sixth obtaining module 31 is configured to obtain a list update notification sent by a server corresponding to the video client and used for updating the first user list and the second user list; the list update notification is generated by the server upon detecting a task state change request sent by a user in the virtual room;
the updating module 32 is configured to perform an updating operation on the first user list and the second user list based on the list update notification, to obtain an updated first user list and an updated second user list;
The fourth determining module 33 is configured to update the target user list based on the updated first user list and the updated second user list, and output the updated target user list to the list sub-interface.
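The notification-driven list refresh described above can be sketched as follows (the notification payload shape and list representation are assumptions for illustration):

```python
def apply_list_update(notification, first_list, second_list):
    """Apply a server list-update notification to the two local lists.

    `notification` is assumed to carry the user whose task state
    changed and the new state ("first" or "second"); the server
    generates it upon detecting a task state change request sent by a
    user in the virtual room.
    """
    user = notification["user"]
    # Remove the user from whichever list currently holds them.
    first_list = [u for u in first_list if u != user]
    second_list = [u for u in second_list if u != user]
    # Re-insert according to the new task state.
    if notification["new_state"] == "first":
        first_list.append(user)
    else:
        second_list.append(user)
    # The target user list shown in the list sub-interface is rebuilt
    # from the two updated lists.
    target_list = first_list + second_list
    return first_list, second_list, target_list

updated_first, updated_second, target = apply_list_update(
    {"user": "alice", "new_state": "first"},
    first_list=["bob"], second_list=["alice", "carol"],
)
```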
The first interface switching module 34 is configured to switch the display interface of the video client from the first display interface to the second display interface when the target user exits the virtual room;
The second interface switching module 35 is configured to switch the display interface of the video client from the second display interface to a third display interface in response to a triggering operation for the second display interface; the third display interface comprises a business ranking control for acquiring a ranking list associated with the target user;
The fourth sending module 36 is configured to send a ranking query request to a server corresponding to the video client in response to a triggering operation for the business ranking control; the ranking query request is used for instructing the server to acquire the ranking list; the ranking list comprises a first ranking list and a second ranking list; the users in the first ranking list comprise users located in the same geographic location area as the target user; the users in the second ranking list comprise users having an interactive relation with the target user;
The fifth output module 37 is configured to obtain a first ranking list and a second ranking list returned by the server, and determine a target ranking list on a fourth display interface for outputting to the video client from the first ranking list and the second ranking list;
The sixth output module 38 is configured to switch the display interface of the video client from the third display interface to the fourth display interface, determine the target ranking of the target user in the target ranking list, and output the target ranking list including the target ranking to the fourth display interface.
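As a hedged illustration (the function name and list contents are invented), determining the target ranking of the target user within an already-ordered target ranking list reduces to a positional lookup:

```python
def target_ranking(ranking_list, target_user):
    """Return the 1-based rank of `target_user` in `ranking_list`,
    or None if the user is not ranked.

    `ranking_list` is assumed to be already ordered by the server,
    e.g. the geographic (first) or interactive (second) ranking list.
    """
    for rank, user in enumerate(ranking_list, start=1):
        if user == target_user:
            return rank
    return None

rank = target_ranking(["dora", "eve", "frank"], "eve")
# `rank` is the target ranking output to the fourth display interface.
```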
The specific implementation manners of the first output module 11, the first obtaining module 12, the first determining module 13, the first sending module 14, the second obtaining module 15, the first adjusting module 16, the second output module 17, the third obtaining module 18, the second determining module 19, the third output module 20, the second sending module 21, the fourth obtaining module 22, the ending module 23, the third determining module 24, the third sending module 25, the fifth obtaining module 26, the second adjusting module 27, the configuring module 28, the first receiving module 29, the fourth output module 30, the sixth obtaining module 31, the updating module 32, the fourth determining module 33, the first interface switching module 34, the second interface switching module 35, the fourth sending module 36, the fifth output module 37 and the sixth output module 38 may refer to the description of steps S201 to S211 in the embodiment corresponding to fig. 8, and will not be repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Further, referring to fig. 12, fig. 12 is a schematic diagram of a computer device according to an embodiment of the application. As shown in fig. 12, the computer device 1000 may be any one of the user terminals in the user terminal cluster in the embodiment corresponding to fig. 1a, for example, the user terminal 100a. The computer device 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the aforementioned processor 1001. As shown in fig. 12, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application.
In the computer device 1000 shown in fig. 12, the network interface 1004 is mainly used for network communication with a server; the user interface 1003 is mainly used to provide an input interface for a user; and the processor 1001 may be used to invoke the device control application stored in the memory 1005 to implement:
responsive to a first operation for the video client, outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
Responding to a second operation aiming at the first display interface, acquiring a first user list associated with a first task state, and acquiring a second user list associated with a second task state; the first user list comprises first users; the second user list comprises second users;
a target user list is determined from the first user list and the second user list for output to a list sub-interface that is independent of the first display interface.
It should be understood that the computer device 1000 described in the embodiment of the present application may perform the description of the data processing method in the embodiment corresponding to fig. 3 and 8, and may also perform the description of the data processing apparatus 1 in the embodiment corresponding to fig. 11, which is not repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that: the embodiment of the present application further provides a computer-readable storage medium, in which the aforementioned computer program executed by the data processing apparatus 1 is stored. The computer program includes program instructions which, when executed by a processor, can perform the data processing method described in the embodiment corresponding to fig. 3 or fig. 8, and the description will therefore not be repeated here. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium of the present application, please refer to the description of the method embodiments of the present application. As an example, program instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network, where the multiple computing devices distributed across multiple sites and interconnected by a communication network may constitute a blockchain system.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (13)

1. A method of data processing, comprising:
Responsive to a first operation for a video client, outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state; the second user comprises a target user for executing the first operation, and the task state of the target user is the second task state;
responsive to a second operation for the first display interface, obtaining a first list of users associated with the first task state and obtaining a second list of users associated with the second task state; the first user list comprises the first user; the second user list comprises the second user;
Determining a target user list from the first user list and the second user list for output to a list sub-interface independent of the first display interface;
The first display interface comprises a first display area, a second display area and a third display area; the first display area has the function of displaying live broadcast data of a first type of user; the second display area comprises a first subarea and a second subarea; the first sub-region has a function of displaying user image data of the target user; the second sub-area has the function of displaying image data of a second type of user; the first type of user and the second type of user both belong to the first user in the virtual room in a first task state; the third display area has a function of displaying text auxiliary information in the virtual room;
The method further comprises the steps of:
Responding to a first service starting control triggered by the target user in the first subarea, and sending a first starting request associated with a live service corresponding to the first service starting control to a server so that the server responds to the first starting request and generates first starting prompt information associated with the live service;
acquiring first starting prompt information returned by the server based on the first starting request, and outputting the first starting prompt information to the third display area as the text auxiliary information;
the task state of the target user is adjusted from the second task state to the first task state, and a first type user in the first user is updated according to the target user in the first task state;
and outputting the updated live broadcast data of the first type user to the first display area, deleting the first subarea in the second display area, and outputting the image data of the second type user according to the second display area and the second subarea after deleting the first subarea.
2. The method of claim 1, wherein the outputting the virtual room associated with the first operation to the first display interface of the video client in response to the first operation for the video client comprises:
Responding to a first operation triggered by a video client, and sending a room acquisition request to a server corresponding to the video client; the room acquisition request is used for instructing the server to configure a virtual room for the target user accessing the video client;
receiving the virtual room returned by the server, and initializing the task state of the target user into the second task state when the target user enters the virtual room;
And updating the second user in the virtual room according to the target user with the second task state, and outputting the updated virtual room to a first display interface of the video client.
3. The method of claim 1, wherein the first initiation request is further for instructing the server to determine a target video presentation area in the first presentation area for executing the live service; the updated first type of users includes the target user and the first type of users;
The step of outputting the updated live broadcast data of the first type user to the first display area, deleting the first subarea in the second display area, and outputting the image data of the second type user according to the second display area and the second subarea after deleting the first subarea, wherein the step of outputting comprises the following steps:
Acquiring the target video display area returned by the server based on the first starting request, and starting a shooting task for shooting the target user;
When live broadcast data of the target user shot during the execution of the camera shooting task is output to the target video display area, when the first subarea is deleted in the second display area, carrying out area expansion on the second subarea in the second display area to obtain an expanded second subarea; the area size of the expanded second sub-area is equal to the area size of the second display area;
and when the live broadcast data of the first type user is synchronously output in the first display area to which the target video display area belongs, outputting the image data of the second type user in the expanded second subarea.
4. A method according to claim 3, characterized in that the method further comprises:
Acquiring a starting time stamp recorded by the server when the first starting request is acquired, starting a timer corresponding to the video client when the shooting task starts to be executed, and recording a timing time stamp of the timer;
Taking the video live time length of the live data of the target user, the first task state of the target user and the user image data of the target user as service data information; the live video duration is determined by the difference between the timing timestamp and the start timestamp;
and outputting the business data information in the target video display area.
5. The method according to claim 4, wherein the method further comprises:
Responding to triggering operation aiming at a first service end control, and sending a first end request associated with the first service end control to the server so that the server responds to the first end request to generate first end prompt information;
Acquiring first end prompt information returned by the server based on the first end request, adjusting the task state of the target user from the first task state to the second task state, and updating a second type user in the first user according to the target user in the second task state;
Acquiring an ending time stamp recorded by the server when the first ending request is acquired, and ending a timer corresponding to the video client when the execution of the shooting task is ended;
Determining a feedback sub-interface independent of the first display interface, and outputting the service duration determined by the server for the target user to execute the camera shooting task and configuration text information associated with historical behavior data of the target user to the feedback sub-interface; the duration of the service is determined by the difference between the start time stamp and the end time stamp.
6. The method according to claim 1, wherein the method further comprises:
Responding to a second service starting control triggered by the target user in the first subarea, and sending a second starting request associated with a live service corresponding to the second service starting control to the server so that the server responds to the second starting request and generates second starting prompt information associated with the live service;
Acquiring second starting prompt information returned by the server based on the second starting request, and outputting the second starting prompt information to the third display area as the text auxiliary information;
the task state of the target user is adjusted from the second task state to the first task state, and the image display duration of the target user with the first task state is recorded through a timer corresponding to the video client;
Configuring virtual animation data for user image data of a target user in the first task state, taking the virtual animation data, the image display duration and the user image data of the target user in the first task state as the image data of the target user, synchronously outputting the image data of the second type user in the second subarea when the image data of the target user is output in the first subarea, and outputting live broadcast data of the first type user in the first display area.
7. The method according to claim 1 or 2, characterized in that the method further comprises:
If the target user belongs to the cold start user in the video client, receiving service guide information configured by the server for the cold start user; the cold start user is a user without history access information in the video client;
Outputting the business guiding information in a guiding sub-interface independent of the first display interface; the business guide information is used for indicating the target user to adjust the task state in the virtual room.
8. The method of claim 1, wherein the obtaining a first list of users associated with the first task state and obtaining a second list of users associated with the second task state in response to a second operation of the first display interface comprises:
responding to a second operation aiming at the first display interface, and sending a list pulling request to a server corresponding to the video client; the list pulling request is used for indicating the server to acquire a first initial user list associated with the first task state and a second initial user list associated with the second task state;
Receiving the first initial user list and the second initial user list returned by the server; the first initial user list comprises the first user, and the first user comprises a first type user and a second type user; the second initial user list comprises the second user;
the video live broadcast time length of the first type user and the image display time length of the second type user are used as the service time length of the first user in the first initial user list, the first user in the first initial user list is subjected to sorting, and the first initial user list after sorting is determined to be the first user list;
And acquiring access time stamps of the second users in the second initial user list entering the virtual room, configuring corresponding time attenuation factors for the access time stamps corresponding to the second users, sorting the second users in the second initial user list based on the time attenuation factors, and determining the sorted second initial user list as the second user list.
9. The method according to claim 1, wherein the method further comprises:
Acquiring list update notification sent by a server corresponding to the video client and used for updating the first user list and the second user list; the list update notification is generated by the server upon detecting a task state change request sent by a user in the virtual room;
based on the list update notification, respectively performing update operation on the first user list and the second user list to obtain an updated first user list and an updated second user list;
updating the target user list based on the updated first user list and the updated second user list, and outputting the updated target user list to the list sub-interface.
10. The method according to claim 1 or 2, characterized in that the method further comprises:
when the target user exits the virtual room, switching a display interface of the video client from the first display interface to a second display interface;
Responding to a triggering operation aiming at the second display interface, and switching the display interface of the video client from the second display interface to a third display interface; the third display interface comprises a business ranking control for acquiring a ranking list associated with the target user;
Responding to a triggering operation aiming at the business ranking control, and sending a ranking query request to a server corresponding to the video client; the ranking query request is used for instructing the server to acquire the ranking list; the ranking list comprises a first ranking list and a second ranking list; the users in the first ranking list comprise users located in the same geographic location area as the target user; the users in the second ranking list comprise users having an interactive relation with the target user;
Acquiring the first ranking list and the second ranking list returned by the server, and determining a target ranking list on a fourth display interface for outputting to the video client from the first ranking list and the second ranking list;
And switching the display interface of the video client from the third display interface to the fourth display interface, determining the target ranking of the target user in the target ranking list, and outputting the target ranking list containing the target ranking to the fourth display interface.
11. A data processing apparatus, comprising:
A first output module for outputting a virtual room associated with a first operation to a first display interface of a video client in response to the first operation; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state; the second user comprises a target user for executing the first operation, and the task state of the target user is the second task state;
A first obtaining module, configured to obtain a first user list associated with the first task state and obtain a second user list associated with the second task state in response to a second operation for the first display interface; the first user list comprises the first user; the second user list comprises the second user;
A first determining module for determining a target user list for output to a list sub-interface independent of the first display interface from the first user list and the second user list;
The first display interface comprises a first display area, a second display area and a third display area; the first display area has the function of displaying live broadcast data of a first type of user; the second display area comprises a first subarea and a second subarea; the first sub-region has a function of displaying user image data of the target user; the second sub-area has the function of displaying image data of a second type of user; the first type of user and the second type of user both belong to the first user in the virtual room in a first task state; the third display area has a function of displaying text auxiliary information in the virtual room;
The apparatus further comprises:
The first sending module is used for responding to a first service starting control triggered by the target user in the first subarea, sending a first starting request associated with a live service corresponding to the first service starting control to a server, so that the server responds to the first starting request and generates first starting prompt information associated with the live service;
The second acquisition module is used for acquiring first starting prompt information returned by the server based on the first starting request and outputting the first starting prompt information to the third display area as the text auxiliary information;
the first adjusting module is used for adjusting the task state of the target user from the second task state to the first task state, and updating a first type user in the first user according to the target user in the first task state;
And the second output module is used for outputting the updated live broadcast data of the first type user to the first display area, deleting the first subarea in the second display area, and outputting the image data of the second type user according to the second display area and the second subarea after deleting the first subarea.
12. A computer device, comprising: a processor, a memory, a network interface;
the processor is connected to the memory and the network interface, wherein the network interface is configured to provide data communication functions, the memory is configured to store a computer program, and the processor is configured to invoke the computer program to perform the method of any of claims 1-10.
13. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method of any of claims 1-10.
CN202010525775.6A 2020-06-10 2020-06-10 Data processing method, device, computer equipment and storage medium Active CN113784151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525775.6A CN113784151B (en) 2020-06-10 2020-06-10 Data processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113784151A CN113784151A (en) 2021-12-10
CN113784151B true CN113784151B (en) 2024-05-17

Family

ID=78834886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010525775.6A Active CN113784151B (en) 2020-06-10 2020-06-10 Data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113784151B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000072429A (en) * 2000-06-03 2000-12-05 조철 Realtime, interactive multimedia education system and method in online environment
WO2002010946A1 (en) * 2000-07-28 2002-02-07 Idiil International, Inc. Remote instruction and communication system
JP2004118506A (en) * 2002-09-26 2004-04-15 Victor Co Of Japan Ltd Conference network system
JP2006323467A (en) * 2005-05-17 2006-11-30 Nippon Telegr & Teleph Corp <Ntt> Exercise support system, its exercise support management device and program
CN101346949A (en) * 2005-10-21 2009-01-14 捷讯研究有限公司 Instant messaging device/server protocol
CN107682752A (en) * 2017-10-12 2018-02-09 广州视源电子科技股份有限公司 Method, apparatus, system, terminal device and the storage medium that video pictures are shown
CN109753501A (en) * 2018-12-27 2019-05-14 广州市玄武无线科技股份有限公司 A kind of data display method of off-line state, device, equipment and storage medium
CN110581975A (en) * 2019-09-03 2019-12-17 视联动力信息技术股份有限公司 Conference terminal updating method and video networking system

Also Published As

Publication number Publication date
CN113784151A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN110945840B (en) Method and system for providing embedded application associated with messaging application
CN107710197B (en) Sharing images and image albums over a communication network
US11171893B2 (en) Methods and systems for providing virtual collaboration via network
CN107924372B (en) Information processing system and information processing method
CN106953935B (en) Media information pushing method and device and storage medium
US20170358321A1 (en) Methods and systems for altering video clip objects
EP4096126A1 (en) Communication method and apparatus based on avatar interaction interface, and computer device
CN110570698A (en) Online teaching control method and device, storage medium and terminal
US10880398B2 (en) Information updating/exchange method, apparatus, and server
WO2018045979A1 (en) Message transmission method and device for media file, and storage medium
CN109074555A (en) One step task is completed
CN110609970B (en) User identity identification method and device, storage medium and electronic equipment
KR20170002485A (en) Connecting current user activities with related stored media collections
JP2010211569A (en) Evaluation device, program and information processing system
US20200104092A1 (en) Group Slideshow
CN111557014A (en) Method and system for providing multiple personal data
CN105808231A (en) System and method for recording script and system and method for playing script
JP6354750B2 (en) Community service system and community service method
CN113784151B (en) Data processing method, device, computer equipment and storage medium
US20170171333A1 (en) Method and electronic device for information pushing
WO2017165253A1 (en) Modular communications
CN114726818B (en) Network social method, device, equipment and computer readable storage medium
CN112162961B (en) Message processing method, device, computer equipment and storage medium
CN111885139A (en) Content sharing method, device and system, mobile terminal and server
CN108897801B (en) User behavior determination method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant