CN113784151A - Data processing method and device, computer equipment and storage medium - Google Patents

Data processing method and device, computer equipment and storage medium

Info

Publication number
CN113784151A
CN113784151A (application CN202010525775.6A)
Authority
CN
China
Prior art keywords
user
list
task state
target
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010525775.6A
Other languages
Chinese (zh)
Other versions
CN113784151B (en)
Inventor
舒润民
陈科科
陈琦钿
吴歆婉
匡皓琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010525775.6A priority Critical patent/CN113784151B/en
Priority claimed from CN202010525775.6A external-priority patent/CN113784151B/en
Publication of CN113784151A publication Critical patent/CN113784151A/en
Application granted granted Critical
Publication of CN113784151B publication Critical patent/CN113784151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04N PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof › H04N21/21 Server components or server architectures › H04N21/218 Source of audio or video content, e.g. local disk arrays › H04N21/2187 Live feed
    • H04N21/20 Servers specifically adapted for the distribution of content › H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware › H04N21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests › H04N21/2393 Interfacing the upstream path involving handling client requests
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof › H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations; Client middleware › H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering › H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content › H04N21/47 End-user applications › H04N21/485 End-user interface for client configuration
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content › H04N21/47 End-user applications › H04N21/488 Data services, e.g. news ticker › H04N21/4882 Data services for displaying messages, e.g. warnings, reminders

Abstract

The embodiments of this application disclose a data processing method and apparatus, a computer device, and a storage medium. The method includes: in response to a first operation on the video client, outputting a virtual room associated with the first operation to a first display interface of the video client, where the virtual room contains a first user in a first task state and a second user in a second task state, the second task state being a task state different from the first task state; in response to a second operation on the first display interface, acquiring a first user list associated with the first task state and a second user list associated with the second task state; and determining, from the first user list and the second user list, a target user list to be output to a list sub-interface independent of the first display interface. The embodiments of this application can provide a new mode of online companionship and enrich the display effect of user data in the virtual room.

Description

Data processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computer device, and a storage medium.
Background
At present, with the development of multimedia technology, more and more users choose to study or work online in the company of others in order to improve their efficiency. However, due to technical barriers in the prior art, existing online live broadcast services cannot support a large number of users performing live broadcast in the same virtual room at the same time.
For example, if 8 users currently want to study online together, an existing online live broadcast service would group the 8 users in the order in which their live broadcast requests arrive: some of them (e.g., 4 users) would be admitted to virtual room 1 for online study, and the others (e.g., the remaining 4 users) would be admitted to virtual room 2. Clearly, such a service can hardly guarantee that these 8 users accompany one another online in the same virtual room. In addition, existing online live broadcast services generally require the users distributed across virtual room 1 and virtual room 2 to turn on video in order to perform the live broadcast service, so the way user data is displayed to the users within virtual room 1 or virtual room 2 is monotonous.
Disclosure of Invention
The embodiments of this application provide a data processing method and apparatus, a computer device, and a storage medium, which can provide a new mode of online companionship and enrich the display effect of user data in a virtual room.
An embodiment of the present application provides a data processing method, including:
in response to a first operation on the video client, outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
in response to a second operation on the first display interface, acquiring a first user list associated with the first task state and acquiring a second user list associated with the second task state; the first user list comprises a first user; the second user list comprises a second user;
determining, from the first user list and the second user list, a target user list to be output to a list sub-interface independent of the first display interface.
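The three method steps above can be sketched as a minimal client-side model. This is only an illustration of the described flow, not the patent's implementation; all names here (`TaskState`, `User`, `VirtualRoom`, `target_user_list`) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class TaskState(Enum):
    FIRST = 1   # performing the live broadcast service in the room
    SECOND = 2  # in the room but not performing the service

@dataclass
class User:
    name: str
    state: TaskState

@dataclass
class VirtualRoom:
    users: List[User] = field(default_factory=list)

    def user_list(self, state: TaskState) -> List[User]:
        """Return the user list associated with one task state."""
        return [u for u in self.users if u.state == state]

def target_user_list(room: VirtualRoom, selected: TaskState) -> List[User]:
    """Determine the list to output to the list sub-interface; here the
    choice is driven by which task state the viewer selected."""
    return room.user_list(selected)

# A room with one user in each task state (second operation then picks a list).
room = VirtualRoom([User("a", TaskState.FIRST), User("b", TaskState.SECOND)])
first_list = target_user_list(room, TaskState.FIRST)
second_list = target_user_list(room, TaskState.SECOND)
```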
An aspect of an embodiment of the present application provides a data processing apparatus, including:
a first output module, configured to output, in response to a first operation for the video client, a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
the first acquisition module is used for responding to a second operation aiming at the first display interface, acquiring a first user list associated with the first task state and acquiring a second user list associated with the second task state; the first user list comprises a first user; the second user list comprises a second user;
and the first determining module is used for determining a target user list which is used for being output to a list sub-interface independent from the first display interface from the first user list and the second user list.
The first output module includes:
the first sending unit is used for responding to a first operation triggered by the video client and sending a room acquisition request to a server corresponding to the video client; the room acquisition request is used for indicating the server to configure a virtual room for a target user accessing the video client; the target user is a user executing a first operation;
the first receiving unit is used for receiving the virtual room returned by the server, and initializing the task state of the target user into a second task state when the target user enters the virtual room;
and the updating unit is used for updating the second user in the virtual room according to the target user with the second task state and outputting the updated virtual room to the first display interface of the video client.
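The room-acquisition flow handled by these units can be sketched on the server side as follows. This is a minimal illustration of the described behavior (one shared room, entrants initialized to the second task state); `RoomAcquisitionRequest`, the `"shared"` key, and the state strings are hypothetical names, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class RoomAcquisitionRequest:
    user_id: str  # the target user who executed the first operation

def handle_room_acquisition(req: RoomAcquisitionRequest, rooms: dict) -> dict:
    """Configure (or reuse) a virtual room for the target user. The point
    of the scheme is that all users share one room rather than being
    split into separate rooms, so a single shared room is returned."""
    room = rooms.setdefault("shared", {"members": {}})
    # On entry, the target user's task state is initialized to the
    # second task state (not yet performing the live broadcast service).
    room["members"][req.user_id] = "SECOND"
    return room
```

Successive requests from different users therefore resolve to the same room object, which is what allows the client to update the second users in the room and re-render the first display interface.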
The first display interface includes a first display area, a second display area, and a third display area; the first display area has the function of displaying live broadcast data of first type users; the second display area includes a first sub-area and a second sub-area; the first sub-area has the function of displaying user image data of the target user; the second sub-area has the function of displaying image data of second type users; the first type users and the second type users both belong to the first users in the first task state in the virtual room; and the third display area has the function of displaying text auxiliary information in the virtual room;
the device also includes:
the first sending module is used for responding to a first service starting control triggered by a target user in a first subregion and sending a first starting request associated with the live broadcast service corresponding to the first service starting control to the server so that the server responds to the first starting request and generates first starting prompt information associated with the live broadcast service;
the second acquisition module is used for acquiring first starting prompt information returned by the server based on the first starting request and outputting the first starting prompt information serving as text auxiliary information to the third display area;
the first adjusting module is used for adjusting the task state of the target user from the second task state to the first task state and updating a first type user in the first user according to the target user in the first task state;
and the second output module is used for outputting the updated live broadcast data of the first type of user to the first display area, deleting the first sub-area in the second display area, and outputting the image data of the second type of user according to the second display area and the second sub-area from which the first sub-area is deleted.
The first starting request is also used for indicating the server to determine a target video display area for executing the live broadcast service in the first display area; the updated first type users comprise target users and first type users;
the second output module includes:
the acquisition unit is used for acquiring a target video display area returned by the server based on the first starting request and starting a shooting task for shooting a target user;
the area expansion unit is used for performing area expansion on a second subarea in the second display area when the live broadcast data of the target user shot when the shooting task is executed is output to the target video display area and the first subarea is deleted in the second display area to obtain an expanded second subarea; the size of the expanded second sub-area is equal to that of the second display area;
and the output unit is used for outputting the image data of the second type of users in the expanded second sub-area when the live broadcast data of the first type of users are synchronously output in the first display area to which the target video display area belongs.
The apparatus further includes:
the third acquisition module is used for acquiring a starting timestamp recorded by the server when the server acquires the first starting request, starting a timer corresponding to the video client when the camera shooting task is started, and recording a timing timestamp of the timer;
the second determining module is used for taking the video live broadcast time length of the live broadcast data of the target user, the first task state of the target user and the user image data of the target user as service data information; the live video time length is determined by the difference between the timing timestamp and the starting timestamp;
and the third output module is used for outputting the service data information in the target video display area.
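The duration bookkeeping described above (the server records a start timestamp on receiving the first start request, the client's timer supplies a timing timestamp, and the live video duration is their difference) can be illustrated with a small sketch; the function name is hypothetical.

```python
def live_duration_seconds(start_ts: float, timing_ts: float) -> float:
    """Live video duration = the timer's timing timestamp minus the start
    timestamp recorded by the server when it received the start request."""
    if timing_ts < start_ts:
        raise ValueError("timing timestamp precedes the start timestamp")
    return timing_ts - start_ts

# e.g., live broadcast started at t = 1000 s, the timer now reads t = 1125 s
duration = live_duration_seconds(1000.0, 1125.0)  # 125 seconds of live video
```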
The apparatus further includes:
the second sending module is used for responding to the triggering operation aiming at the first service ending control, and sending a first ending request associated with the first service ending control to the server so that the server responds to the first ending request and generates first ending prompt information;
the fourth acquisition module is used for acquiring first end prompt information returned by the server based on the first end request, adjusting the task state of the target user from the first task state to a second task state, and updating a second type user in the first user according to the target user in the second task state;
the ending module is used for acquiring an ending timestamp recorded by the server when the first ending request is acquired, and ending a timer corresponding to the video client when the shooting task is finished;
the third determining module is used for determining a feedback sub-interface independent of the first display interface, and outputting, to the feedback sub-interface, the service duration determined by the server for the camera task executed by the target user together with configuration text information associated with the historical behavior data of the target user; the service duration is determined by the difference between the end timestamp and the start timestamp.
The first display interface includes a first display area, a second display area, and a third display area; the first display area has the function of displaying live broadcast data of first type users; the second display area includes a first sub-area and a second sub-area; the first sub-area has the function of displaying user image data of the target user; the second sub-area has the function of displaying image data of second type users; the first type users and the second type users both belong to the first users in the first task state in the virtual room; and the third display area has the function of displaying text auxiliary information in the virtual room;
the device also includes:
the third sending module is used for responding to a second service starting control triggered by a target user in the first sub-area and sending a second starting request associated with the live broadcast service corresponding to the second service starting control to the server so that the server responds to the second starting request and generates second starting prompt information associated with the live broadcast service;
the fifth acquisition module is used for acquiring second starting prompt information returned by the server based on the second starting request and outputting the second starting prompt information serving as text auxiliary information to the third display area;
the second adjusting module is used for adjusting the task state of the target user from the second task state to the first task state, and recording the image display duration of the target user with the first task state through a timer corresponding to the video client;
the configuration module is used for configuring virtual animation data for user image data of a target user in a first task state, taking the virtual animation data, the image display duration and the user image data of the target user in the first task state as the image data of the target user, synchronously outputting the image data of a second type of user in a second subregion when the image data of the target user is output in the first subregion, and outputting live broadcast data of the first type of user in the first display region.
The apparatus further includes:
the first receiving module is used for receiving the service guide information configured by the server for the cold start user if the target user belongs to the cold start user in the video client; the cold start user is a user without historical access information in the video client;
the fourth output module is used for outputting the service guide information in a guide sub-interface independent of the first display interface; the service guide information is used for instructing the target user to adjust the task state in the virtual room.
The first acquisition module includes:
the second sending unit is used for responding to a second operation aiming at the first display interface and sending a list pulling request to a server corresponding to the video client; the list pull request is used for instructing the server to acquire a first initial user list associated with the first task state and a second initial user list associated with the second task state;
the second receiving unit is used for receiving the first initial user list and the second initial user list returned by the server; the first initial user list comprises a first user, and the first user comprises a first type of user and a second type of user; the second initial user list comprises a second user;
the first sequencing processing unit is used for taking the video live broadcast time length of the first type user and the image display time length of the second type user as the service time length of the first user in the first initial user list, sequencing the first user in the first initial user list, and determining the first initial user list after sequencing as the first user list;
and the second sorting processing unit is used for acquiring an access timestamp of a second user in the second initial user list entering the virtual room, configuring a corresponding time attenuation factor for the access timestamp corresponding to the second user, sorting the second user in the second initial user list based on the time attenuation factor, and determining the sorted second initial user list as the second user list.
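The two sorting units can be sketched as follows. The patent states only that the first list is ordered by service duration and that a time attenuation factor is configured per access timestamp for the second list; the exponential half-life form of that factor, and all names here, are assumptions for illustration.

```python
import math

def sort_first_list(first_users):
    """First list: order first users by service duration (video live time
    for on-camera users, image display time for the others), longest first."""
    return sorted(first_users, key=lambda u: u["service_duration"], reverse=True)

def sort_second_list(second_users, now, half_life=600.0):
    """Second list: score each second user's room-entry (access) timestamp
    with a time attenuation factor and sort by the score, so that more
    recent entrants rank higher. The exponential half-life decay is an
    assumption, not specified by the patent."""
    def score(user):
        age = max(now - user["access_ts"], 0.0)
        return math.exp(-math.log(2) * age / half_life)
    return sorted(second_users, key=score, reverse=True)

first = sort_first_list([{"id": "u1", "service_duration": 300},
                         {"id": "u2", "service_duration": 900}])
second = sort_second_list([{"id": "u3", "access_ts": 100.0},
                           {"id": "u4", "access_ts": 500.0}], now=600.0)
```

With these inputs, `u2` (longer service) heads the first list and `u4` (more recent entry, hence less attenuation) heads the second list.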
The apparatus further includes:
a sixth obtaining module, configured to obtain a list update notification sent by a server corresponding to the video client and used to update the first user list and the second user list; the list update notification is generated by the server when a task state change request sent by a user in the virtual room is detected;
the updating module is used for respectively updating the first user list and the second user list based on the list updating notice to obtain an updated first user list and an updated second user list;
and the fourth determining module is used for updating the target user list based on the updated first user list and the updated second user list and outputting the updated target user list to the list sub-interface.
The apparatus further includes:
the first interface switching module is used for switching the display interface of the video client from the first display interface to the second display interface when the target user exits the virtual room;
the second interface switching module is used for responding to the triggering operation aiming at the second display interface and switching the display interface of the video client from the second display interface to a third display interface; the third display interface comprises a business ranking control used for acquiring a ranking list associated with the target user;
the fourth sending module is used for responding to the triggering operation aiming at the business ranking control and sending a ranking query request to a server corresponding to the video client; the ranking query request is used for indicating the server to obtain a ranking list; the ranking list comprises a first ranking list and a second ranking list; the users in the first ranking list comprise users in the same geographical location area as the target user; the users in the second ranking list comprise users having interaction relation with the target user;
the fifth output module is used for acquiring the first ranking list and the second ranking list returned by the server and determining a target ranking list for outputting to a fourth display interface of the video client from the first ranking list and the second ranking list;
and the sixth output module is used for switching the display interface of the video client from the third display interface to the fourth display interface, determining the target rank of the target user in the target rank list, and outputting the target rank list containing the target rank to the fourth display interface.
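The last step above — locating the target user's own rank inside the server-returned ranking list — can be sketched as a small helper; the list contents and names are hypothetical.

```python
from typing import List, Optional

def target_rank(ranking: List[str], target_user: str) -> Optional[int]:
    """Return the 1-based rank of the target user in the chosen ranking
    list (assumed already sorted by the server), or None if absent."""
    for rank, user in enumerate(ranking, start=1):
        if user == target_user:
            return rank
    return None

# Hypothetical second ranking list: users having an interaction
# relation with the target user, as returned by the ranking query.
interaction_ranking = ["carol", "target_user", "dave"]
rank = target_rank(interaction_ranking, "target_user")  # 2
```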
One aspect of the present application provides a computer device, comprising: a processor, a memory, a network interface;
the processor is connected to a memory and a network interface, wherein the network interface is used for providing a data communication function, the memory is used for storing a computer program, and the processor is used for calling the computer program to execute the method in the above aspect in the embodiment of the present application.
An aspect of the present application provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, perform the method of the above-mentioned aspect of the embodiments of the present application.
In the embodiments of this application, a computer device may, in response to a first operation on a video client, output a virtual room associated with the first operation to a first display interface of the video client. It should be understood that, in these embodiments, users performing the online live broadcast service (or simply the live broadcast service) in the virtual room, whether on camera (e.g., on-camera self-study users) or off camera (e.g., audience self-study users), may be collectively referred to as first users, and users not performing the live broadcast service in the virtual room (e.g., watching self-study users) may be collectively referred to as second users. The first task state and the second task state are two different task states in the virtual room. By dividing the users in the virtual room according to task state, the embodiments of this application can provide a new mode of online companionship and ensure that different users accompany one another online in the same virtual room. For ease of understanding, the live broadcast service is exemplified here by online self-study in an education scenario. After the task states of different users in the same virtual room have been divided, the user lists corresponding to the different task states can be obtained. For example, the computer device may obtain a first user list corresponding to the first task state and a second user list corresponding to the second task state, and may further determine, from the first user list and the second user list, a target user list to be output to a list sub-interface independent of the first display interface, where the target user list may be either of the first user list and the second user list.
By outputting the target user list to the list sub-interface of the video client, the method and apparatus provide a friendly human-computer interaction interface for the user operating the video client, helping the user quickly view the task states of different users in the virtual room, thereby enriching the display effect of user data in the virtual room.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1a is a schematic structural diagram of a network architecture according to an embodiment of the present application;
FIG. 1b is a block diagram according to an embodiment of the present disclosure;
fig. 1c is a timing diagram of acquiring a user list according to an embodiment of the present application;
fig. 2 is a schematic view of a scenario for performing data interaction according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 4a is an interface display diagram of a first display interface of a video client according to an embodiment of the present application;
fig. 4b is an interface display diagram of a first display interface of a video client according to an embodiment of the present application;
FIG. 4c is an interface display diagram of a guide sub-interface according to an embodiment of the present disclosure;
fig. 5 is a schematic view of a scenario for performing a live service associated with live data according to an embodiment of the present application;
fig. 6 is a schematic view of a scenario in which a live service associated with live data is ended according to an embodiment of the present application;
fig. 7 is a schematic diagram of a scenario for obtaining a ranking list according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 9 is a timing diagram illustrating a target user performing a live broadcast service according to an embodiment of the present application;
fig. 10 is a schematic view of a scenario for performing a live service associated with image data according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1a, fig. 1a is a schematic structural diagram of a network architecture according to an embodiment of the present disclosure. As shown in fig. 1a, the network architecture may include a server 10 and a user terminal cluster. The user terminal cluster may comprise one or more user terminals, and the number of user terminals is not limited here. As shown in fig. 1a, it may specifically include a user terminal 100a, a user terminal 100b, a user terminal 100c, …, and a user terminal 100n. As shown in fig. 1a, the user terminal 100b, the user terminal 100c, …, and the user terminal 100n may each be connected to the server 10 via a network, so that each user terminal may interact with the server 10 through the network.
It should be understood that the network architecture of the embodiments of this application may be a common client/server (C/S) architecture, where the server may be the server 10 shown in fig. 1a. The server 10 may include an interface service, logic processing, and data storage; it may provide an efficient computing environment and data storage, and offer various functional services to user terminals having a network connection with it. The server 10 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
It should be understood that each user terminal in the user terminal cluster shown in fig. 1a may be installed with a target application (i.e. a client, for example, a video client), and when the video client runs in each user terminal, data interaction may be performed with the server 10 shown in fig. 1 a. The video client may be an independent client, or may be an embedded sub-client integrated in a certain client (e.g., a social client, an educational client, a multimedia client, etc.), which is not limited herein.
For ease of understanding, in the embodiments of this application one user terminal may be selected from the user terminals shown in fig. 1a as a target user terminal, which may be used for interface display, data transmission with the server 10, and providing a service experience for the user. The target user terminal may be an intelligent terminal with a data processing function, such as a smartphone, a tablet computer, a notebook computer, a desktop computer, a wearable device, a smart home device, or a head-mounted device. For example, the user terminal 100a shown in fig. 1a may serve as the target user terminal, with the video client integrated in it; the target user terminal may then exchange data with the server 10 through the service data platform corresponding to the video client.
It should be appreciated that the live service performed in the virtual room of the video client may be an online video conference in a conference scenario. The virtual room may contain a first user and a second user. The first type of user among the first users may be an enterprise manager who participates in the online video conference in an on-mic manner, the second type of user among the first users may be an enterprise employee who participates in the online video conference in a non-on-mic manner, and the second user of the virtual room may be an enterprise employee who is in the online video conference but does not perform the live service (for example, another enterprise employee who watches the video conference).
Optionally, the live broadcast service executed in the virtual room of the video client may be an online variety program in an entertainment scenario. The virtual room may contain a first user and a second user. The first type of users among the first users in the virtual room may be host and artist users who participate in the online variety program in an on-mic manner, the second type of users among the first users may be fan groups who participate in the online variety program in a non-on-mic manner (for example, fan users of the artist users), and the second user may be a user who is in the online variety program but does not perform the live broadcast service (for example, a user who watches the online variety program).
Optionally, the live broadcast service executed in the virtual room of the video client may be an online learning function in a learning scenario or an online education scenario. The virtual room may contain a first user and a second user. The first type of users among the first users in the virtual room may be users who participate in the online study in an on-mic manner (i.e., on-mic users, such as the class monitor of a certain class A), the second type of users among the first users may be users who participate in the online study in a non-on-mic manner (i.e., audience users, such as the students of class A), and the second user may be a watching user who is in the online study but does not perform the live service (for example, a parent or teacher of a student of class A who watches the study). The live broadcast service executed in the virtual room of the video client may also be a live broadcast service in other scenarios, which is not limited herein.
Taking the online study function as an example, the video client in the embodiment of the present application may be used to display a target user list (i.e., any one of the first user list and the second user list) in a virtual room (e.g., an online study room), process an operation of starting/ending a service (e.g., a live study service) by a user, provide a timer for the user to view, process a notification sent by the server 10, exchange data with the server 10, and the like. The server 10 in this embodiment may be configured to process a request from a video client, record a task state of a user, count service duration of the user, push a list update notification to the video client according to a task state change request (for example, a start request for starting a service or an end request for ending a service) sent by the user in a virtual room, and store user service data.
The first user refers to a user group in the virtual room in a first task state (e.g., a self-learning state), where the first user list may include list data information such as user image data (e.g., a user avatar) of the first user, an account name of the first user, a gender icon of the first user, and a service duration of the first user in the virtual room. The second user refers to another user group in a second task state (e.g., a state of not learning) in the virtual room, and the second user list may include list data information such as user image data (e.g., a user avatar) of the second user, an account name of the second user, and a gender icon of the second user.
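The list data information described above can be sketched as a small record type. This is only an illustration: the field names are assumptions, since this passage enumerates only the kinds of data (user image data, account name, gender icon, and, for first users, service duration):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for one entry of the first or second user list.
@dataclass
class UserListEntry:
    avatar_url: str                         # user image data (user avatar)
    account_name: str                       # account name of the user
    gender_icon: str                        # gender icon of the user
    # Service duration only exists for entries of the first user list
    # (users in the first task state); second-list entries leave it unset.
    service_duration_s: Optional[int] = None

first_list_entry = UserListEntry("a.png", "user_a", "female.png", 3600)
second_list_entry = UserListEntry("b.png", "user_b", "male.png")
```

The optional duration field mirrors the asymmetry between the two lists: only the first user list carries a service duration.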
Further, please refer to fig. 1b, where fig. 1b is a block diagram of a module architecture according to an embodiment of the present application. As shown in fig. 1b, the module architecture 1 may include a presentation layer, a logic layer, and a service layer. It is understood that the video client (e.g., an online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a; for example, the video client may be a client integrated on the user terminal 100a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1a.
It should be understood that the presentation layer in the embodiment of the present application may be used for presenting data, receiving data input by the user a, and providing an interactive interface for the user a. For example, the presentation layer in the embodiment of the present application may present a virtual room (e.g., an online study room) of the video client on the first display interface of the user terminal. The virtual room may contain a first user and a second user. The first user may be a user performing a live service (e.g., a self-study user), and the second user may be a user not performing a live service (e.g., a watching user). The first user may comprise a first type of user and a second type of user. It is to be appreciated that a first type of user (e.g., a user who has started the camera for live streaming, such as an on-mic self-study user) may be a first user who performs a live service associated with live data, and a second type of user (e.g., a user who has not started the camera for live streaming, such as an audience self-study user) may be a first user who performs a live service associated with image data.
As shown in fig. 1b, the presentation layer of the module architecture 1 may include a study status module and a study list module. The study status module may be used for displaying task states of users in a virtual room of the video client, where the task states may include a not-studying state, a study-start state, a studying state, and a study-end state. It should be understood that, when entering a virtual room in the video client, the user a may perform a trigger operation on a service start control (e.g., a first service start control for joining a microphone self-study service or a second service start control for joining an audience self-study service) on a first display interface of the video client, so that the user a may perform the live broadcast service corresponding to the service start control; at this time, the task state at the time when the user a performs the trigger operation on the service start control (e.g., at time t1) may be referred to as the study-start state. Further, when the user a ends the live service, a trigger operation may be performed on a service end control (e.g., a first service end control for ending the microphone self-study service or a second service end control for ending the audience self-study service) on the first display interface, so that the user a may end the live broadcast service corresponding to the service end control; at this time, the task state at the time when the user a performs the trigger operation on the service end control (e.g., at time t2) may be referred to as the study-end state. The task state of the user a between time t1 and time t2 may be referred to as the studying state, i.e., the first task state. The task state of the user a before time t1 and after time t2 may be referred to as the not-studying state, i.e., the second task state.
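The four task states and the t1/t2 timestamps above can be sketched as a minimal state tracker. This is an assumption-laden illustration, not the patented method: it collapses the instantaneous start/end states into the transitions themselves, and all names are hypothetical:

```python
# Hypothetical task-state tracker: the second task state holds before t1
# and after t2, the first task state holds between t1 and t2.
class TaskState:
    NOT_STUDYING = "not_studying"   # second task state
    STUDYING = "studying"           # first task state

class UserTaskTracker:
    def __init__(self):
        self.state = TaskState.NOT_STUDYING
        self.start_ts = None        # time t1 (trigger on a service start control)
        self.end_ts = None          # time t2 (trigger on a service end control)

    def start_service(self, now):
        self.state = TaskState.STUDYING
        self.start_ts = now

    def end_service(self, now):
        self.state = TaskState.NOT_STUDYING
        self.end_ts = now

    def service_duration(self):
        # time spent in the first task state, i.e. t2 - t1
        return self.end_ts - self.start_ts

tracker = UserTaskTracker()
tracker.start_service(100)          # t1
tracker.end_service(160)            # t2
print(tracker.service_duration())   # 60
```

In the embodiment itself the server records the timestamps and counts the duration; the client-side tracker here only mirrors that bookkeeping.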
The study list module can be used to present a first user list and a second user list of a virtual room (e.g., an online study room). Wherein the users in the first user list may be users associated with the first task state, and the first user list may contain the first user; the user in the second user list may be a user associated with the second task state, and the second user list may contain the second user.
As shown in fig. 1b, the logic layer of the module architecture 1 is a bridge between the presentation layer and the service layer; it can implement data connection and instruction transmission between the two layers, perform logic processing on received data, further implement functions such as modification, acquisition, and deletion of data, and feed a processing result back to the presentation layer. The logic layer may include a local management module and a local service module. It will be appreciated that the local management module may be responsible for some of the most basic operations of the video client, such as network requests, receiving background push notifications, and database management. The local service module may be used to provide user list update operations, local timers, task state management (e.g., self-study state management), and the like.
As shown in fig. 1b, the service layer of the module architecture 1 may provide data service interfaces for the presentation layer and the logic layer, and the service layer may include a network service interface, a logic process, and a data store. The network service interface may receive various requests (e.g., a room acquisition request, a start request, an end request, a list pull request, a ranking query request, and the like) from the video client, may return processing results corresponding to the various requests to the video client, and also provides a capability of sending a notification push (e.g., a list update notification) to the video client. The logic processing may be configured to process service logic after various requests of the video client are received, for example, the logic processing may classify users in the virtual room into a first user list and a second user list, calculate service duration of the user, change task state according to data requested by the video client, and the like. The data store may be used to record user start timestamps, end timestamps, times of performing live services, service durations, task status, and the like.
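The classification step of the logic processing described above can be sketched as follows, assuming a hypothetical dict layout for the users of a virtual room:

```python
# Sketch of the logic-processing step that classifies the users of a virtual
# room into the first user list (first task state) and the second user list
# (second task state). The dict keys are illustrative assumptions.
def classify_users(room_users):
    first_list, second_list = [], []
    for user in room_users:
        if user["task_state"] == "studying":   # first task state
            first_list.append(user)
        else:                                  # second task state
            second_list.append(user)
    return first_list, second_list

users = [
    {"name": "a", "task_state": "studying"},
    {"name": "b", "task_state": "not_studying"},
    {"name": "c", "task_state": "studying"},
]
first_list, second_list = classify_users(users)
print([u["name"] for u in first_list])   # ['a', 'c']
```

The data store would then associate each first-list entry with the recorded start timestamp, end timestamp, and service duration.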
Further, please refer to fig. 1c, where fig. 1c is a timing chart of acquiring a user list according to an embodiment of the present application. As shown in fig. 1c, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
It should be understood that the user a (i.e., the target user) may perform the second operation with respect to the first display interface of the video client when entering the virtual room of the video client, and may further enable the user terminal to perform step S111, thereby responding to the second operation with respect to the first display interface. At this time, the user terminal may execute step S112 to trigger the logic layer to pull the user list, so as to generate the list acquisition request. Further, the server may execute step S113 to generate an initial user list based on the list acquisition request sent by the user terminal invoking the service layer. The first initial user list and the second initial user list may be collectively referred to as the initial user list in the embodiment of the present application.
It is understood that the server may perform step S114 after generating the initial user list, and further may send the initial user list to the service layer, so that the user terminal may perform step S115 to obtain the initial user list through the logic layer. Further, when acquiring the initial user list, the user terminal may execute step S116, so that the initial user list may be sorted to obtain the user list. In other words, the user terminal performs sorting processing on the first initial user list to obtain the first user list. Meanwhile, the user terminal may perform sorting processing on the second initial user list to obtain a second user list. The first user list and the second user list may be collectively referred to as a user list.
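The sorting processing of step S116 can be sketched as below. The sorting keys are assumptions, since this passage does not fix a sorting rule; here the first initial user list is ordered by service duration (longest first) and the second by account name:

```python
# Hypothetical sorting processing that turns the initial user lists
# returned by the server into the first and second user lists.
def sort_first_list(initial):
    # assumed rule: longest service duration first
    return sorted(initial, key=lambda u: u["service_duration_s"], reverse=True)

def sort_second_list(initial):
    # assumed rule: alphabetical by account name
    return sorted(initial, key=lambda u: u["account_name"])

first_initial = [
    {"account_name": "a", "service_duration_s": 120},
    {"account_name": "b", "service_duration_s": 300},
]
first_user_list = sort_first_list(first_initial)
print(first_user_list[0]["account_name"])   # 'b'
```

Whatever rule is actually used, the same sorting step applies to both the initial lists and the updated initial lists described later.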
Further, the user terminal may perform step S117 to send the first user list and the second user list obtained after the sorting process to the presentation layer, so that the user terminal may perform step S118 to determine a target user list for outputting to a list sub-interface independent of the first display interface from the first user list and the second user list. The target user list may be any one of a first user list and a second user list. The list sub-interface refers to a floating window which is displayed on the first display interface in an overlapping mode, and it can be understood that data displayed on the list sub-interface and data displayed on the first display interface are independent of each other.
It should be appreciated that when a user in the virtual room sends a task state change request (e.g., a start request or an end request) to the server, the server may generate a list update notification for updating the first user list and the second user list, and may further execute step S119 to send a list update notification for updating the first user list and the second user list to the service layer. At this time, the user terminal may perform step S120 to acquire the list update notification transmitted by the server through the logical layer. Further, the user terminal may execute step S121, based on the list update notification, to perform an update operation on the first user list and the second user list respectively, so as to obtain an updated first user list and an updated second user list.
The user terminal may respectively perform an update operation on the first user list and the second user list through the logic layer based on the list update notification, that is, trigger an operation of pulling the first user list and the second user list once, so that an updated first initial user list and an updated second initial user list returned by the server may be obtained, and then the updated first initial user list may be sorted to obtain the updated first user list. Meanwhile, the user terminal may perform sorting processing on the updated second initial user list to obtain an updated second user list. The updated first user list and the updated second user list can be collectively referred to as an updated user list in the embodiment of the application.
Further, the user terminal may perform step S122, send the updated user list to the presentation layer through the logic layer, and further may perform step S123, and determine an updated target user list for outputting to the list sub-interface from the updated first user list and the updated second user list. The updated target user list may be any one of the updated first user list and the updated second user list.
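Steps S119 to S123 can be condensed into a single handler: a list update notification triggers one pull of both initial lists, followed by the same sorting processing. The pull function and the sorting key below are assumptions for illustration:

```python
# Sketch of the update path: on receiving a list update notification, the
# logic layer pulls the updated initial lists once and re-sorts them.
def handle_list_update_notification(pull_initial_lists, sort_key):
    first_initial, second_initial = pull_initial_lists()
    updated_first = sorted(first_initial, key=sort_key, reverse=True)
    updated_second = sorted(second_initial, key=sort_key, reverse=True)
    return updated_first, updated_second

# Stand-in for the server round trip of steps S119/S120.
def fake_pull():
    return ([{"name": "a", "duration": 10}, {"name": "b", "duration": 30}],
            [{"name": "c", "duration": 0}])

updated_first, updated_second = handle_list_update_notification(
    fake_pull, lambda u: u["duration"])
print([u["name"] for u in updated_first])   # ['b', 'a']
```

The returned pair corresponds to the updated user list handed to the presentation layer in step S122.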
Further, please refer to fig. 2, and fig. 2 is a schematic view of a scenario for performing data interaction according to an embodiment of the present application. As shown in fig. 2, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
It should be understood that, in the embodiment of the present application, the user a corresponding to the user terminal may perform a trigger operation (i.e., a first operation) for the video client. The trigger operation in the present application may include a contact operation such as a click or a long press, and may also include a non-contact operation such as a voice or a gesture, which is not limited herein. At this time, the user terminal may output the virtual room associated with the first operation onto a first display interface (e.g., the display interface 200a shown in fig. 2) of the video client terminal in response to the first operation.
The virtual room may contain a first user and a second user (e.g., a watching user). The first user may include a first type of user (e.g., an on-mic self-study user) and a second type of user (e.g., an audience self-study user). The first user may be a user in a first task state (e.g., a studying state) in the virtual room, and the second user may be a user in a second task state (e.g., a not-studying state) in the virtual room. The display interface 200a shown in fig. 2 may include a display area 1, a display area 2, and a display area 3. The display area 1 (i.e., the first display area) may have a function of displaying live data of the first user, the display area 2 (i.e., the second display area) may have a function of displaying image data of the second user, and the display area 3 (i.e., the third display area) may have a function of displaying text auxiliary information.
It is to be appreciated that the virtual room associated with the first operation can be recommended by the server based on a recommendation level of a plurality of virtual rooms of the video client. In other words, the user a may directly perform the trigger operation on the video client, and the trigger operation of the user a directly on the video client may be referred to as a first operation in the embodiment of the present application. At this time, the server shown in fig. 2 may determine, in response to the first operation, a virtual room having the highest recommendation degree among the K virtual rooms in the video client, and refer to the virtual room having the highest recommendation degree as a virtual room associated with the first operation, and may output the virtual room associated with the first operation onto the display interface 200a shown in fig. 2. Wherein K is a positive integer. A virtual room may correspond to an identification number (e.g., a room identification, ID).
The recommendation degree may be determined by the number of users in each virtual room. It is to be appreciated that the server may determine the recommendation degree of each virtual room based on the number of the first type of users and the number of the second type of users in that virtual room. When determining the recommendation degree of a virtual room, the server gives the number of the first type of users a higher priority than the number of the second type of users. For example, the server may determine the number of first type users in each virtual room; the smaller the number of first type users in a virtual room, the higher the recommendation degree of that virtual room, and vice versa. When two virtual rooms have the same number of first type users, the server may determine their recommendation degrees based on the number of second type users in each of them; the smaller the number of second type users in a virtual room, the higher the recommendation degree of that virtual room, and vice versa.
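The recommendation rule above amounts to ordering rooms by the pair (number of first-type users, number of second-type users), smallest first. A worked sketch with hypothetical room counts:

```python
# Rooms with fewer first-type users rank higher; ties are broken by
# fewer second-type users. Python's tuple comparison expresses exactly
# this two-level priority.
def recommend_rooms(rooms):
    return sorted(rooms, key=lambda r: (r["n_first_type"], r["n_second_type"]))

rooms = [
    {"room_id": 1, "n_first_type": 3, "n_second_type": 10},
    {"room_id": 2, "n_first_type": 2, "n_second_type": 50},
    {"room_id": 3, "n_first_type": 2, "n_second_type": 7},
]
best = recommend_rooms(rooms)[0]
print(best["room_id"])   # 3: ties with room 2 on first-type count, fewer second-type users
```

The first element of the sorted list is the room with the highest recommendation degree, i.e., the one output in response to the first operation.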
Alternatively, the virtual room associated with the first operation may be selected by the user a from a home page containing a plurality of virtual rooms. It is understood that the user a may perform a triggering operation on the display interface 200a for exiting the virtual room recommended by the server, so that the user terminal may switch from the display interface 200a to the home page of the video client. The home page may include cover data corresponding to a plurality of virtual rooms; for example, the home page may include the cover data of the virtual room 1, the cover data of the virtual room 2, the cover data of the virtual room 3, and the cover data of the virtual room 4. Further, the user a may perform a trigger operation on the home page for the region where the cover data of one virtual room (for example, the cover data of the virtual room 1) is located; at this time, the trigger operation that the user a directly performs on the region where the cover data of the virtual room 1 is located may be referred to as a first operation in this embodiment of the application. Further, the server may determine, in response to the first operation, the virtual room 1 corresponding to the cover data of the virtual room 1 as the virtual room associated with the first operation, and may output the virtual room 1 to the display interface 200a shown in fig. 2.
It should be appreciated that user a may perform a second operation with respect to display interface 200a such that the user terminal may respond to the second operation to obtain a first list of users associated with a first task state and a second list of users associated with a second task state in the virtual room. The first user list may include a first user, and the second user list may include a second user.
The first user list and the second user list may be obtained by user a performing a triggering operation on control 20 shown in fig. 2. It is understood that the user a may perform a trigger operation (e.g., a click operation) on the control 20, and at this time, the user terminal may respond to the trigger operation to obtain the first user list and the second user list. The embodiment of the present application may refer to the triggering operation performed by the user a on the control 20 as a second operation.
Optionally, the first user list and the second user list may be obtained by the user a performing a trigger operation with respect to the second type of users in the presentation area 2 shown in fig. 2. It is understood that the user a may perform a trigger operation (e.g., a leftward sliding operation) with respect to the second type of users in the presentation area 2, and when a threshold number (e.g., 10) of second-type users have been displayed in a sliding manner, the user terminal may respond to the trigger operation to further obtain the first user list and the second user list. In the embodiment of the present application, the triggering operation performed by the user a with respect to the second type of users may be referred to as a second operation.
Further, the user terminal may determine a list sub-interface (e.g., the sub-interface 200b shown in fig. 2) independent of the display interface 200a, and may determine a list of target users for output to the sub-interface 200b from the first user list and the second user list. It is understood that the sub-interface 200b of the user terminal may output a target user list of the first user list after the user a performs the second operation in the display interface 200 a. At this time, the user a may also perform a trigger operation with respect to the sub-interface 200b (e.g., click on "watch" operation in the sub-interface 200b or perform a leftward sliding operation in the sub-interface 200b), so that the sub-interface 200b outputs a target user list, which is a second user list.
It should be understood that, when the user a performs the second operation on the control 20 in the display interface 200a, the user a does not perform the live service in the virtual room, and at this time, it may be understood that the user a is the second user, and the task state of the user a is the second task state (i.e., the state of not learning by oneself), and it should be understood that the user a belongs to the users in the second user list.
Optionally, when the user a performs the second operation on the control 20 in the display interface 200a, the user a has performed a live broadcast service associated with live broadcast data in the virtual room, and at this time, it may be understood that the user a is a first type user in the first users, and the task state of the user a is a first task state (i.e., a state in self-study), and it should be understood that the user a belongs to users in the first user list.
Optionally, when the user a performs the second operation on the control 20 in the display interface 200a, the user a has performed the live broadcast service associated with the image data in the virtual room, and at this time, it may be understood that the user a is the second type user in the first user, and the task state of the user a is the first task state (i.e., the state in self-study), and it should be understood that the user a belongs to the users in the first user list.
Therefore, in the embodiment of the present application, outputting the target user list in the sub-interface 200b can improve the service participation (for example, the self-study participation) of the user a who performs the live service, and can also encourage the second user in the virtual room to perform the live service, so as to convert the task state of the second user.
For a specific implementation manner of determining the target user list output on the list sub-interface independent of the first display interface, reference may be made to the embodiments corresponding to fig. 3 to fig. 10 below.
Further, please refer to fig. 3, where fig. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application. As shown in fig. 3, the method may be performed by a computer device integrated with a video client, e.g., a user terminal. The user terminal may be any user terminal (e.g., the user terminal 100a) in the user terminal cluster shown in fig. 1a, and the method may include at least the following steps S101 to S103:
step S101, responding to a first operation aiming at a video client, and outputting a virtual room associated with the first operation to a first display interface of the video client.
Specifically, the user terminal may send a room acquisition request to a server corresponding to the video client in response to a first operation triggered for the video client. The room obtaining request can be used for instructing the server to configure a virtual room for a target user accessing the video client; the target user is a user executing the first operation, namely a user corresponding to the user terminal. At this time, the user terminal may receive the virtual room returned by the server, and when the target user enters the virtual room, the task state of the target user may be initialized to a second task state (e.g., a state of no self-study). Further, the user terminal may update a second user (e.g., a watching user) in the virtual room according to the target user having the second task state, and output the updated virtual room to the first display interface of the video client.
The first display interface of the video client can comprise a first display area, a second display area and a third display area; the first display area may have a function of displaying live data of a first type of user; the second presentation zone may comprise a first sub-zone and a second sub-zone; the first sub-area may have a function of presenting user image data of a target user; the second sub-area may have a function of presenting image data of a second type of user; the first type user and the second type user both belong to a first user in a first task state in the virtual room; the third presentation area may have a function of presenting the text supplementary information in the virtual room.
For easy understanding, please refer to fig. 4a, where fig. 4a is an interface display diagram of a first display interface of a video client according to an embodiment of the present application. As shown in fig. 4a, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the application may be a target user. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
As shown in fig. 4a, the first display interface (display interface 400a) of the video client may include a control 1, a control 2, and a control 3. The control 1 may be used to obtain the first user list and the second user list, the control 2 (i.e., the first service start control) may be used to perform the live service (e.g., the microphone self-study service) associated with live data, and the control 3 (i.e., the second service start control) may be used to perform the live service (e.g., the audience self-study service) associated with image data. The display interface 400a may further include a display area 1 (i.e., a first display area), a display area 2 (i.e., a second display area), and a display area 3 (i.e., a third display area).
The first display area may have a function of displaying live data of a first type of user (e.g., an on-mic self-study user). As shown in fig. 4a, the display area 1 in the display interface 400a may include a threshold number of video display areas (e.g., 4 video display areas), and the 4 video display areas may specifically include a video display area 1, a video display area 2, a video display area 3, and a video display area 4. The video display area 1, the video display area 2, and the video display area 3 all contain live data of a first type of user, while the video display area 4 is temporarily vacant; it should be understood that the target user can therefore perform a trigger operation on the control 2 in the display area 3, so that the live data of the live service performed by the target user can be displayed in the video display area 4.
It should be understood that the number of the second type of users in the virtual room is not limited in the embodiment of the present application, whereas the number of first type users is bounded by the threshold number of video display areas: if the number of the first type of users in the first display area reaches the threshold, every video display area already holds live data of a corresponding first type of user. In other words, when the number of the first type of users reaches the number threshold, a target user entering the virtual room cannot perform the live service (e.g., the microphone self-study service) associated with live data. At this time, the target user may choose to perform the live service (e.g., the audience self-study service) associated with image data, so that the target user is spared the difficulty of competing for a video display area (e.g., competing for a mic slot), thereby improving the user experience.
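The threshold check implied above can be sketched as follows. The constant mirrors the four video display areas of fig. 4a, and the function name is an assumption:

```python
# The number of first-type (on-mic) users is capped by the number of video
# display areas, while the second-type (audience) user count is unlimited.
VIDEO_DISPLAY_AREAS = 4   # threshold in the example of fig. 4a

def can_join_on_mic(n_first_type_users, threshold=VIDEO_DISPLAY_AREAS):
    # True while at least one video display area is still vacant
    return n_first_type_users < threshold

assert can_join_on_mic(3)       # one video display area still vacant
assert not can_join_on_mic(4)   # full: fall back to the audience service
```

A client would run this check before enabling the first service start control, and offer only the second service start control once the room is full.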
As shown in fig. 4a, when a second type of user currently exists in the virtual room of the video client, the second presentation area of the first display interface of the video client may include a first sub-area and a second sub-area. It is understood that the presentation area 2 in the display interface 400a (i.e., the first display interface) may include sub-area 1 (i.e., the first sub-area) and sub-area 2 (i.e., the second sub-area). The subarea 1 may have a function of displaying user image data of a target user; the subarea 2 may have a function of presenting image data of a second type of user (e.g., a viewer-study user).
The third presentation area may have a function of presenting the text auxiliary information in the virtual room. As shown in fig. 4a, the presentation area 3 in the display interface 400a may present the text auxiliary information, which may be notification information that a user has entered the virtual room, for example, "XX user has entered the room". The text auxiliary information may also be a system notification message sent by the server, such as "System announcement: in order to create a healthy live environment, sending any information related to advertising, luring, etc. will be dealt with seriously." The text auxiliary information may also be a real-time message sent by a user in the virtual room, e.g., "Quiet, please!". The text auxiliary information may also be information in other forms, which is not illustrated one by one herein.
Optionally, if the number of the second type of users in the virtual room of the video client is zero, the display interface of the virtual room at the video client may refer to fig. 4b described below, where fig. 4b is an interface display diagram of the first display interface of the video client provided in the embodiment of the present application. As shown in fig. 4b, presentation area 4 may be a second presentation area of display interface 400b (i.e., the first display interface).
It should be appreciated that when the target user enters the virtual room and the number of the second type users in the virtual room is zero, the presentation area 4 of the display interface 400b of the video client may include the sub-area 3 (i.e. the first sub-area) displaying the user image data of the target user. Wherein, text information with a guiding function can be output in the sub-area 3 to prompt the target user to execute the live service in the virtual room, for example, "You can study together with your partners even without taking the mic".
It should be understood that if the target user belongs to a cold-start user in the video client, the user terminal may receive service guide information configured by the server for the cold-start user; the cold-start user is a user without history access information in the video client, that is, a user accessing the video client for the first time. At this time, the user terminal may output the service guide information in a guidance sub-interface independent of the first display interface. The guidance sub-interface may be a floating window displayed on the first display interface in an overlapping manner, and it can be understood that the data displayed on the guidance sub-interface and the data displayed on the first display interface are independent of each other. Wherein the service guide information may be used to instruct the target user to adjust the task state in the virtual room. For example, the service guide message may be "Here is a team that studies together without taking the mic, come and join!", thereby guiding the cold-start user entering the virtual room to perform the live service.
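A minimal sketch of the cold-start check and the resulting service guide information; the data shapes, function names, and message wording here are assumptions for illustration only.

```python
from typing import List, Optional

def is_cold_start_user(history_access_info: List[str]) -> bool:
    # A cold-start user has no history access information in the video
    # client, i.e. this is the user's first access.
    return len(history_access_info) == 0

def build_guide_message(history_access_info: List[str],
                        num_first_users: int) -> Optional[str]:
    """Server-side configuration of service guide information for a
    cold-start user; non-cold-start users receive no guide message."""
    if not is_cold_start_user(history_access_info):
        return None
    return (f"Study together, {num_first_users} people are studying. "
            "Here is a team that studies together without taking the mic, "
            "come and join!")
```

Under these assumptions, only a user with an empty access history is sent the guide text, which embeds the current number of first users in the room.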
Further, please refer to fig. 4c, where fig. 4c is an interface display diagram of a guidance sub-interface according to an embodiment of the present application. Among other things, the display interface 400c in fig. 4c may be the first display interface of the virtual room of the video client.
It should be appreciated that if the target user belongs to a user who accesses the video client for the first time (i.e., a cold-start user), the server may determine the number of the first users of the virtual room (e.g., 15 people) and the service guide information configured for the target user. For example, the service guide message may be "Here is a team that studies together without taking the mic, come and join!". Further, the server may send the service guide information and the number of the first users to the user terminal. At this time, the user terminal may determine a guidance sub-interface independent of the first display interface (e.g., the sub-interface 400d independent of the display interface 400c). The sub-interface 400d may be a floating window displayed on the display interface 400c in an overlapping manner, and it is understood that the data displayed on the sub-interface 400d and the data displayed on the display interface 400c are independent of each other. At this time, the user terminal may output, in the sub-interface 400d, the service guide information and text information generated based on the number of the first users of the virtual room (e.g., "Study together, 15 people are studying"). The service guide information is used for instructing the target user to adjust the task state in the virtual room.
Further, the target user corresponding to the user terminal may perform the live service (e.g., the on-mic study service) associated with the live data. It should be understood that the user terminal may respond to a trigger operation performed by the target user on the first service start control in the first sub-area, and may further send a first start request associated with the live service corresponding to the first service start control to the server, so that the server generates, in response to the first start request, first start prompt information associated with the live service. Further, the user terminal may obtain the first start prompt information returned by the server, and output the first start prompt information as text auxiliary information to the third display area. At this time, the user terminal may adjust the task state of the target user from the second task state to the first task state, and update the first type user among the first users according to the target user in the first task state. The user terminal may further output the updated live broadcast data of the first type user to the first display area, delete the first sub-area in the second display area, and output the image data of the second type user according to the second display area and the second sub-area from which the first sub-area is deleted.
The first start request can be used for instructing the server to determine a target video display area for executing the live broadcast service in the first display area; the updated first type of user may include the target user and the first type of user.
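The task-state adjustment and user-type update described above can be sketched as a small state transition; the class, state names, and list shapes are assumed for illustration and are not defined by the embodiment.

```python
from dataclasses import dataclass, field
from typing import List

FIRST_TASK_STATE = "in_study"       # e.g. the on-mic study state
SECOND_TASK_STATE = "not_in_study"  # e.g. the audience state

@dataclass
class VirtualRoom:
    first_type_users: List[str] = field(default_factory=list)   # on-mic users
    second_type_users: List[str] = field(default_factory=list)  # audience users

def start_live_service(room: VirtualRoom, target_user: str) -> str:
    """Move the target user from the second task state to the first task
    state and update the first type users accordingly."""
    if target_user in room.second_type_users:
        room.second_type_users.remove(target_user)
    if target_user not in room.first_type_users:
        room.first_type_users.append(target_user)
    return FIRST_TASK_STATE
```

After the transition, the updated first type users include both the target user and the previous first type users, matching the update rule stated above.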
It should be understood that the server may determine the target video presentation area based on the first start request, and may return the target video display area to the user terminal so that the user terminal may start a shooting task for photographing the target user. Further, the user terminal may output the live broadcast data of the target user captured when the shooting task is executed to the target video display area. At this time, the user terminal may delete the first sub-area in the second display area, and perform area expansion on the second sub-area in the second display area, so that the expanded second sub-area may be obtained. Wherein, the expanded area size of the second sub-area may be equal to the area size of the second display area. It is understood that, in the first presentation area to which the target video presentation area belongs, the user terminal may output the image data of the second type of user in the expanded second sub-area while synchronously outputting the live data of the first type of user.
Further, the server may record a start timestamp when acquiring the first start request, and send the start timestamp to the user terminal, so that the user terminal may start a timer corresponding to the video client when starting to execute the shooting task, and record a timing timestamp of the timer. It should be understood that the user terminal may use the video live broadcast duration of the live broadcast data of the target user, the first task state of the target user, and the user image data of the target user as the service data information, and may further output the service data information in the target video display area. Wherein the live video time duration is determined by the difference between the timing timestamp and the start timestamp.
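The duration computation above (timing timestamp minus start timestamp) can be sketched as below; the HH:MM:SS string format mirrors the timestamps used in the later examples and is an assumption, as is the function name.

```python
from datetime import datetime

def live_video_duration(start_ts: str, timing_ts: str) -> str:
    """Difference between the timer's timing timestamp and the server's
    start timestamp, both given as HH:MM:SS on the same day."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(timing_ts, fmt) - datetime.strptime(start_ts, fmt)
    total_seconds = int(delta.total_seconds())
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"
```

For instance, a start timestamp of 12:28:00 and a timing timestamp of 12:48:00 yield a live video duration of 00:20:00.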
For ease of understanding, please refer to fig. 5, where fig. 5 is a schematic view of a scenario in which a live service associated with live data is performed according to an embodiment of the present application. As shown in fig. 5, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
The first display area in the display interface 500a (i.e., the first display interface) shown in fig. 5 may include a plurality of video display areas, and the plurality of video display areas may specifically include a video display area 1, a video display area 2, a video display area 3, and a video display area 4. The video display area 1, the video display area 2 and the video display area 3 can be respectively used for displaying live broadcast data of a first type of user in the virtual room. The video display area 1 may be used to display live data of the user 50a, the video display area 2 may be used to display live data of the user 50b, and the video display area 3 may be used to display live data of the user 50c. At this time, a first type of user (e.g., an on-mic study user) of the first user in the virtual room may be the user 50a, the user 50b, and the user 50c.
It should be understood that user A (i.e., the target user) corresponding to the user terminal may perform a trigger operation with respect to the service start control (i.e., the first service start control) in the display interface 500a shown in fig. 5. In response to the trigger operation, the user terminal may invoke a service start protocol (e.g., a self-study start protocol) in the service layer through the logic layer shown in fig. 1b, and may send a first start request associated with the live service corresponding to the service start control to the server. At this time, when receiving the first start request, the server may respond to the first start request, and may further generate the first start prompt information associated with the live service. For example, the first start prompt message may be "user A has started the on-mic study".
Further, the server may send the first start prompt message to the user terminal, so that the user terminal may obtain the first start prompt message, use it as text auxiliary information, and output the text auxiliary information to a third display area (for example, the display area 3 in the display interface 500b shown in fig. 5). At this time, the user terminal may adjust the task state of user A from the second task state (e.g., a state not in study) to the first task state (e.g., a state in study) through the logic layer, and update the first type user among the first users according to user A in the first task state. At this time, the updated first type users among the first users in the virtual room may be the user 50a, the user 50b, the user 50c, and user A.
It should be understood that the user terminal may output the updated live data of the first type of user to a first presentation area (for example, presentation area 1 in the display interface 500b shown in fig. 5), delete the first sub-area in a second presentation area (for example, presentation area 2 in the display interface 500b shown in fig. 5), and output image data of the second type of user according to the second presentation area and the second sub-area after deleting the first sub-area.
Therein, it is understood that the server, upon receiving the first start request, may determine a target video presentation area (e.g., the video presentation area 4) for performing the live service in the presentation area 1. It should be understood that the server may return the target video display area to the user terminal so that the user terminal can start a shooting task for photographing user A. Further, the user terminal may output the live data of user A captured when the shooting task is executed to the target video display area. At this time, the user terminal may delete the first sub-area in the display area 2, and perform area expansion on the second sub-area in the display area 2, so that the expanded second sub-area may be obtained. Wherein, the expanded area size of the second sub-area may be equal to the area size of the second display area. It is understood that, in the presentation area 1 to which the target video presentation area belongs, the user terminal may output the image data of the second type of user (for example, the audience study user) in the expanded second sub-area while synchronously outputting the live data of the first type of user. In addition, the user terminal where the target user is located may also receive, through the server, live broadcast data of other first type users synchronized by other terminals, and display the live broadcast data in the first display area (e.g., the display area 1 of the display interface 500b shown in fig. 5).
Further, the server may record a start timestamp (e.g., 12:28:00) when acquiring the first start request, and send the start timestamp to the user terminal, so that the user terminal may start a timer corresponding to the video client when starting to execute the shooting task, and record a timing timestamp of the timer. It should be understood that the user terminal may use the video live broadcast duration of the live broadcast data of user A, the first task state of user A, and the user image data of user A as the service data information, and may further output the service data information in the target video display area. It is understood that if the timing timestamp recorded by the timer is 12:48:00, the live video time length output in the target video display area may be the difference between the timing timestamp and the start timestamp, that is, 00:20:00. In other words, the target user has performed the live service in the virtual room for 20 minutes.
Further, the target user corresponding to the user terminal may end the live service (e.g., the on-mic study service) associated with the live data. It should be appreciated that the user terminal may respond to a trigger operation directed to the first service end control to send a first end request associated with the first service end control to the server, so that the server generates the first end prompt information in response to the first end request. At this time, the server may return the first end prompt information to the user terminal. When the user terminal acquires the first end prompt information, the user terminal may adjust the task state of the target user from the first task state (e.g., a state in study) to the second task state (e.g., a state not in study), and may update the second type user among the first users according to the target user in the second task state.
Meanwhile, when the server acquires the first end request, the end timestamp may be recorded, and the service duration of the target user for executing the shooting task may be determined based on a difference between the start timestamp and the end timestamp. When the user terminal acquires the first end prompt message, the user terminal may end execution of the shooting task and end the timer corresponding to the video client. Further, the user terminal may determine a feedback sub-interface independent of the first display interface, and output the configuration text information, which is obtained from the server and is associated with the historical behavior data of the target user, to the feedback sub-interface. The feedback sub-interface may be a floating window displayed on the first display interface in an overlapping manner, and it can be understood that data displayed on the feedback sub-interface and data displayed on the first display interface are independent of each other.
In addition, the user terminal may send a message packet to the server at regular intervals (e.g., every five minutes), so that the server may return a response packet corresponding to the message packet to the user terminal. In this way, the server may determine the network status (e.g., the online status) of the target user corresponding to the user terminal in the virtual room. If the server does not receive a message packet sent by the user terminal within five minutes, the server may determine that the target user corresponding to the user terminal has exited the virtual room, and may automatically end the live service on behalf of the target user, thereby adjusting the task state of the target user from the first task state to the second task state and recording the service duration for which the target user executed the live service. Optionally, in another scenario (e.g., a call scenario), if the display interface of the user terminal stays on the call interface rather than the first display interface of the virtual room for a period of time (e.g., more than ten minutes), the server may likewise actively end the live service of the target user, so that the target user exits the virtual room, and record the service duration for which the target user executed the live service.
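The heartbeat mechanism described above (a message packet every five minutes, with a server-side timeout that auto-ends the live service) might be sketched as follows; the interval constant, function name, and timestamp representation are assumptions.

```python
HEARTBEAT_INTERVAL_SECONDS = 300.0  # the five-minute message-packet interval

def is_user_online(last_packet_ts: float, now_ts: float,
                   timeout: float = HEARTBEAT_INTERVAL_SECONDS) -> bool:
    """Server-side network-status check: if no message packet has arrived
    within the timeout, the target user is treated as having exited the
    virtual room and the live service is ended on the user's behalf."""
    return (now_ts - last_packet_ts) <= timeout
```

When this check returns False, the server would adjust the user's task state to the second task state and record the accumulated service duration, as stated above.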
For ease of understanding, please refer to fig. 6, where fig. 6 is a schematic view of a scenario in which a live service associated with live data is ended according to an embodiment of the present application. As shown in fig. 6, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
The display interface 600a shown in fig. 6 may be the display interface 500b shown in fig. 5. It is understood that the third display area of the display interface 600a shown in fig. 6 may include a service end control (i.e., a first service end control).
It should be understood that user A (i.e., the target user) corresponding to the user terminal may perform a trigger operation with respect to the service end control (i.e., the first service end control) in the display interface 600a shown in fig. 6. In response to the trigger operation, the user terminal may invoke an end-service protocol (e.g., an end self-study protocol) in the service layer through the logic layer shown in fig. 1b, and may send a first end request associated with the live service corresponding to the service end control to the server. At this time, the server may respond to the first end request when receiving the first end request, and may further generate the first end prompt information associated with the live service. For example, the first end prompt message may be "user A has ended the on-mic study".
At this time, the server may return the first end prompt message to the user terminal. When the user terminal acquires the first end prompt message, the user terminal may adjust the task state of user A from the first task state (e.g., a state in study) to the second task state (e.g., a state not in study) through the logic layer, and according to the target user in the second task state, the first type user and the second type user among the first users may be updated.
Meanwhile, the server may record an end timestamp (e.g., 14:58:00) when acquiring the first end request, and determine the service duration (i.e., the video live broadcast duration, e.g., 2 hours and 30 minutes) for which the target user performed the shooting task based on the difference between the start timestamp (e.g., 12:28:00) and the end timestamp. At the same time, the server may configure associated configuration text information for user A based on the historical behavior data of user A. For example, if user A studies more than an hour a day, the configuration text message of user A may be "Keep running hard to catch up with the self that was once full of promise". If the time since user A last performed the live service exceeds a certain threshold (e.g., 3 days), the configuration text message of user A may be "Those who are willing to persist will win". Other configuration text information of user A is not enumerated here.
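The behavior-based selection of configuration text information might look like the sketch below; the thresholds follow the examples above, while the function name, rule ordering, and default message are assumptions.

```python
def configure_text(daily_study_hours: float,
                   days_since_last_service: int,
                   inactivity_threshold_days: int = 3) -> str:
    """Pick configuration text information from historical behavior data."""
    if days_since_last_service > inactivity_threshold_days:
        # The user has not performed the live service for too long.
        return "Those who are willing to persist will win"
    if daily_study_hours > 1:
        # The user studies more than an hour a day.
        return "Keep running hard to catch up with the self that was once full of promise"
    return "Keep it up"  # assumed default for all other cases
```

The server would return the selected message alongside the computed service duration, and the user terminal would then render both in the feedback sub-interface.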
Further, when the user terminal acquires the first end prompt message, the user terminal may end execution of the shooting task and end the timer corresponding to the video client. At this time, the user terminal may determine a feedback sub-interface (for example, the sub-interface 600c shown in fig. 6) independent of the first display interface, and may further output, to the sub-interface 600c, the video live broadcast duration of this live service performed by user A as obtained from the server, together with the configuration text information of user A.
In addition, when the target user exits the virtual room, the user terminal may switch the display interface of the video client from the first display interface to a second display interface (a home page, i.e., an interface containing cover page data corresponding to a plurality of virtual rooms). Further, the user terminal may switch the display interface of the video client from the second display interface to a third display interface (e.g., a personal hub page) in response to a trigger operation for the second display interface. Wherein the third display interface may contain a business ranking control for obtaining a ranked list associated with the target user.
Further, the user terminal may respond to a trigger operation of the target user for the service ranking control, so as to send a ranking query request to the server corresponding to the video client. At this time, the server may acquire, based on the ranking query request, a first ranking list having a first association relationship with the target user and a second ranking list having a second association relationship with the target user. The users in the first ranking list may include users in the same geographical location area (e.g., the same city or the same province) as the target user, and the users in the second ranking list may include users having an interactive relationship (e.g., a two-way friend relationship) with the target user. At this time, the server may collectively refer to the first ranking list and the second ranking list as the ranked list associated with the target user. Wherein the ranking of each user in the ranked list is determined based on the total service duration of each user within a certain time threshold (e.g., a week or a month). Further, the server may send the ranked list to the user terminal, so that the user terminal may determine, from the first ranking list and the second ranking list returned by the server, a target ranking list for outputting to a fourth display interface of the video client. The target ranking list may be any one of the first ranking list and the second ranking list. At this time, the user terminal may switch the display interface of the video client from the third display interface to the fourth display interface, and may further determine the target rank of the target user in the target ranking list, and output the target ranking list including the target rank to the fourth display interface.
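Ranking users by total service duration within the time threshold reduces to a simple sort; the minute-based duration unit and function name below are assumptions for illustration.

```python
from typing import Dict, List

def build_ranked_list(total_service_minutes: Dict[str, int]) -> List[str]:
    """Rank users by total service duration within the time threshold
    (e.g. the past week), longest duration first."""
    return sorted(total_service_minutes,
                  key=total_service_minutes.get,
                  reverse=True)
```

For example, a user with 30 hours and 53 minutes of total service duration would be ranked above a user with 28 hours and 16 minutes, matching the leaderboard example discussed below.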
Further, please refer to fig. 7, and fig. 7 is a schematic view of a scenario for obtaining a ranking list according to an embodiment of the present application. As shown in fig. 7, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the application may be a target user. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
When the target user exits the virtual room, the user terminal may switch the display interface of the video client from the first display interface (e.g., display interface 200a shown in fig. 2) to the second display interface (e.g., the home page, display interface 700a shown in fig. 7). The display interface 700a shown in fig. 7 may include a control 70 and cover page data corresponding to a plurality of virtual rooms, for example, the cover page data of the virtual room 1, the cover page data of the virtual room 2, the cover page data of the virtual room 3, and the cover page data of the virtual room 4. Wherein the control 70 may be used to access the target user's personal hub page.
Further, the user terminal may respond to the trigger operation for the control 70 in the display interface 700a, so as to enter the personal hub page of the target user; that is, the user terminal may switch the display interface of the video client from the display interface 700a to the display interface 700b (i.e., the third display interface). The display interface 700b may include a control 71, where the control 71 may be a service ranking control used to obtain a ranked list associated with the target user.
Further, the user terminal may respond to a triggering operation of the target user on the control 71, so as to send the ranking query request to the server corresponding to the video client. At this time, the server may acquire, based on the ranking query request, a first ranking list having a first association relationship with the target user and a second ranking list having a second association relationship with the target user. Among other things, users in the first ranking list (e.g., a city leaderboard) may include users in the same geographic location area (e.g., the same city or the same province) as the target user, and users in the second ranking list (e.g., a friend leaderboard) may include users having an interactive relationship (e.g., a two-way friend relationship) with the target user. At this time, the server may collectively refer to the first ranking list and the second ranking list as the ranked list associated with the target user. Wherein the ranking of each user in the ranked list is determined based on the total service duration of each user within a certain time threshold (e.g., within a week). For example, the total service duration of user A for the week in the display interface 700c shown in fig. 7 may be a1 hours and b1 minutes, e.g., 30 hours and 53 minutes, and the total service duration of user B for the week may be a2 hours and b2 minutes, e.g., 28 hours and 16 minutes. It can thus be understood that the total service duration of user A in the week is greater than that of user B, i.e., user A is ranked before user B. By analogy, the remaining user rankings in the display interface 700c are not described one by one.
Further, the server may send the ranking list to the user terminal, so that the user terminal may determine a target ranking list for outputting to a fourth display interface (e.g., display interface 700c shown in fig. 7) of the video client from the first ranking list and the second ranking list returned by the server. The target ranking list may be any one of the first ranking list and the second ranking list. At this time, the user terminal may switch the display interface 700b to the display interface 700c, and further may determine the target rank of the target user in the target rank list, and output the target rank list including the target rank to the display interface 700c, so that the target user may obtain a greater sense of achievement when querying the target rank.
Step S102, responding to a second operation aiming at the first display interface, acquiring a first user list associated with the first task state, and acquiring a second user list associated with the second task state.
Specifically, the user terminal may send a list pull request to a server corresponding to the video client in response to the second operation for the first display interface. Wherein the list pull request may be used to instruct the server to obtain a first initial user list associated with the first task state and a second initial user list associated with the second task state. The first initial user list may include the first user, and the first user may include the first type user and the second type user; the second initial user list may include the second user. At this time, the server may return the first initial user list and the second initial user list to the user terminal. Further, the user terminal may use the video live broadcast duration of the first type user and the image display duration of the second type user as the service duration of the first user in the first initial user list, may further perform sorting processing on the first user in the first initial user list, and determine the first initial user list after the sorting processing as the first user list. Meanwhile, the user terminal may further obtain an access timestamp at which the second user in the second initial user list entered the virtual room, configure a corresponding time decay factor for the access timestamp corresponding to the second user, and perform sorting processing on the second user in the second initial user list based on the time decay factor, so that the second initial user list after the sorting processing may be determined as the second user list.
It should be understood that, as shown in fig. 2, the user terminal may respond to the second operation on the display interface 200a to invoke the logic layer and send the list pull request to the server corresponding to the video client. At this time, the server may, based on the list pull request, filter and screen out the first user (i.e., the first type user and the second type user) having the first task state and the second user having the second task state in the virtual room displayed on the display interface 200a, and may further generate the first initial user list based on the first user in the virtual room and the second initial user list based on the second user in the virtual room.
Further, the server may transmit the first initial user list and the second initial user list to the user terminal. At this time, the user terminal may use the video live broadcast duration of the first type user and the image display duration of the second type user as the service duration of the first user in the first initial user list, so as to perform sorting processing on the first user in the first initial user list, and determine the sorted first initial user list as the first user list. This improves the sense of participation of the first user in executing the live service, and at the same time motivates the second user to execute the live service, thereby converting the second user into the first user.
Meanwhile, the user terminal may further obtain the access timestamp at which the second user in the second initial user list entered the virtual room, configure a corresponding time decay factor for the access timestamp corresponding to the second user, and perform sorting processing on the second user in the second initial user list based on the time decay factor, so that the sorted second initial user list may be determined as the second user list. It should be appreciated that the later the access timestamp of the second user, the smaller the time decay factor configured for the second user. For example, among user A and user B of the second users, the access timestamp of user A is 12:05:00 and the access timestamp of user B is 13:32:09; at this time, the user terminal arranges user B before user A when performing the sorting processing for user A and user B. In other words, the later a second user enters the virtual room, the earlier the ranking in the second user list.
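The time-decay ordering of the second user list (a later access timestamp yields a smaller decay factor, hence an earlier rank) might be sketched as follows; the exact decay formula is an assumption, since the description only fixes its monotonicity.

```python
from typing import Dict, List

def order_second_user_list(access_timestamps: Dict[str, float]) -> List[str]:
    """Sort second users so that the latest entrant ranks first."""
    # Assumed decay: the factor shrinks as the access timestamp grows, so
    # sorting ascending by factor places later entrants earlier in the list.
    decay = {u: 1.0 / (1.0 + ts) for u, ts in access_timestamps.items()}
    return sorted(decay, key=decay.get)
```

Any strictly decreasing function of the access timestamp would produce the same ordering; the reciprocal form is chosen here only for concreteness.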
It can be understood that, if the target user triggers the second operation on the first display interface and the target user has not performed the live task in the virtual room, the target user belongs to the second user, that is, the target user is in the second user list. And if the target user triggers the second operation on the first display interface and the target user is executing the live task in the virtual room, the target user belongs to the first user, namely the target user is in the first user list.
Step S103, determining a target user list for outputting to a list sub-interface independent of the first display interface from the first user list and the second user list.
Specifically, the user terminal may determine, according to the first user list and the second user list, a target user list for outputting to a list sub-interface independent of the first display interface, and may further output the target user list to the list sub-interface. The target user list may be any one of a first user list and a second user list. The list sub-interface may be a floating window displayed on the first display interface in an overlapping manner, and it can be understood that data displayed on the list sub-interface and data displayed on the first display interface are independent from each other.
In addition, in order to enhance the sense of real online companionship among the users, each of the first users and second users in the virtual room may interact with the other users in the virtual room.
For example, each user in the virtual room may input real-time information in a third presentation area on the first display interface of the virtual room to be presented as textual auxiliary information in the third presentation area of the first display interface.
For example, the target user may encourage another user in the virtual room: the target user may trigger an operation in the area where the user avatar data of a user (e.g., user A) is displayed on the first display interface, thereby encouraging user A. At this time, the text auxiliary information "the target user encourages user A" may be presented in the third presentation area of the first display interface. Similarly, the target user may trigger an operation on a user (e.g., user B) in the target user list displayed on the list sub-interface independent of the first display interface, thereby encouraging user B. At this time, the text auxiliary information "the target user encourages user B" may be presented in the third presentation area of the first display interface.
In addition, an additional tool may be added to the video client, so that users accessing the video client are more immersed in the live broadcast service. For example, a timing tool for the live broadcast task may be displayed full screen on a display interface of the video client. Optionally, the user terminal may add a learning planning area or a wrong-question recording area in the video client. The target user corresponding to the user terminal may then enter, in the learning planning area, the scheduled service duration for executing the live broadcast task, and is reminded to end the live broadcast task at the time corresponding to the scheduled service duration. The target user may also enter, in the wrong-question recording area, error-prone questions recorded during the live broadcast task, which helps the target user master the error-prone knowledge points.
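The learning-planning reminder described above can be sketched as a simple client-side timer; the function name and callback interface are hypothetical, not part of the patent:

```python
import threading

def schedule_end_reminder(planned_seconds, notify):
    """Remind the target user to end the live broadcast task once the
    scheduled service duration entered in the learning planning area elapses."""
    timer = threading.Timer(planned_seconds, notify)
    timer.start()
    return timer  # the caller may cancel() this if the task ends early
```

A client would call `schedule_end_reminder(planned, show_reminder)` when the target user saves the plan, and cancel the returned timer if the live broadcast task is ended manually first.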
In an embodiment of the application, a computer device may, in response to a first operation for a video client, output a virtual room associated with the first operation to a first display interface of the video client. It should be understood that, in the present embodiment, users who execute the online live broadcast service (or simply live broadcast service) in the virtual room by way of live video (e.g., on-mic self-study users) and users who execute the live broadcast service by way of image display (e.g., audience self-study users) may be collectively referred to as first users, and users who do not execute the live broadcast service in the virtual room (e.g., watching users) may be collectively referred to as second users. Here, the first task state and the second task state are two different task states in the virtual room. By dividing the task states of different users in the virtual room, the present embodiment can provide a new online accompanying mode and ensure online accompaniment between different users in the same virtual room. For ease of understanding, the live broadcast service of the present embodiment is exemplified by online self-study in an educational scenario.

After the task states of different users in the same virtual room are divided, the user lists corresponding to the different task states can be obtained. For example, the computer device may obtain a first user list corresponding to the first task state and a second user list corresponding to the second task state, and may further determine, from the first user list and the second user list, a target user list for output to a list sub-interface independent of the first display interface, where the target user list may be either of the first user list and the second user list.
According to the method and the device, by outputting the target user list to the list sub-interface of the video client, a friendly human-computer interaction interface can be provided for the user operating the video client, helping that user quickly check the task states of different users in the virtual room and thereby enriching the display effect of user data in the virtual room.
Further, please refer to fig. 8, where fig. 8 is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in fig. 8, the method may be performed by a computer device integrated with a video client, e.g., a user terminal. The user terminal may be any user terminal (e.g., user terminal 100a) in the user terminal cluster shown in fig. 1a. The method may comprise the following steps:
step S201, responding to a first operation aiming at a video client, and outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
a target user corresponding to the user terminal may execute a live broadcast service associated with live data (e.g., an on-mic self-study service), and the method may include at least the following steps S202 to S205:
step S202, responding to a first service starting control triggered by a target user in a first sub-area, and sending a first starting request associated with a live broadcast service corresponding to the first service starting control to a server, so that the server responds to the first starting request and generates first starting prompt information associated with the live broadcast service;
step S203, acquiring first start prompting information returned by the server based on the first start request, and outputting the first start prompting information to a third display area as text auxiliary information;
step S204, adjusting the task state of the target user from the second task state to the first task state, and updating the first type users among the first users according to the target user in the first task state;
step S205, outputting the updated live data of the first type user to the first display area, deleting the first sub-area in the second display area, and outputting the image data of the second type user according to the second display area and the second sub-area from which the first sub-area is deleted.
It should be understood that, when the user terminal finishes step S205, the user terminal may jump to perform the following steps S206 to S207:
step S206, responding to a second operation aiming at the first display interface, acquiring a first user list associated with the first task state, and acquiring a second user list associated with the second task state; the first user list comprises a first user; the second user list comprises a second user;
step S207, determining a target user list for output to a list sub-interface independent of the first display interface from the first user list and the second user list.
For specific implementation of steps S201 to S207, reference may be made to the description of steps S101 to S103 in the embodiment corresponding to fig. 3, which will not be described again here.
Alternatively, the target user corresponding to the user terminal may execute a live broadcast service associated with image data (e.g., an audience self-study service). The method may include at least the following steps S208 to S211:
step S208, responding to a second service starting control triggered by a target user in the first sub-area, and sending a second starting request associated with the live broadcast service corresponding to the second service starting control to the server, so that the server responds to the second starting request and generates second starting prompt information associated with the live broadcast service;
step S209, acquiring second start prompting information returned by the server based on the second start request, and outputting the second start prompting information to a third display area as text auxiliary information;
step S210, adjusting the task state of the target user from the second task state to the first task state, and recording the image display duration of the target user with the first task state through a timer corresponding to the video client;
step S211, configuring virtual animation data for the user image data of the target user in the first task state, taking the virtual animation data, the image display duration, and the user image data of the target user in the first task state as the image data of the target user, and, when outputting the image data of the target user in the first sub-area, synchronously outputting the image data of the second type users in the second sub-area and outputting the live broadcast data of the first type users in the first display area.
When the user terminal finishes step S211, it may jump to perform step S206-step S207, which will not be described again.
It should be understood that, after the target user corresponding to the user terminal obtains the first user list and the second user list, the live broadcast service in the virtual room may be executed. In other words, the user terminal may execute step S201, and then jump to execute step S206-step S207 to obtain the first user list and the second user list. Further, the user terminal may perform steps S202-S205 to enable the target user to perform a live service associated with live data in the virtual room. Optionally, the user terminal may also perform steps S208 to S211 to enable the target user to perform a live service associated with the image data in the virtual room.
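The task-state transitions walked through in steps S201 to S211 can be summarized in a small sketch; the state names and class interface are illustrative assumptions, not taken from the patent:

```python
# Starting a live broadcast service (step S204/S210) moves the target user from
# the second task state to the first; ending it (step S925) moves them back.

SECOND_TASK_STATE = "not_in_self_study"
FIRST_TASK_STATE = "in_self_study"

class TargetUser:
    def __init__(self, name):
        self.name = name
        # Initialized to the second task state on entering the virtual room.
        self.task_state = SECOND_TASK_STATE

    def start_live_service(self):
        self.task_state = FIRST_TASK_STATE

    def end_live_service(self):
        self.task_state = SECOND_TASK_STATE
```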
For ease of understanding, please refer to fig. 9, and fig. 9 is a timing diagram illustrating a live service executed by a target user according to an embodiment of the present application. As shown in fig. 9, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
It should be appreciated that in a first display interface in a virtual room of a video client, as shown in fig. 9, user a (i.e., the target user) may perform a live service in the virtual room. It is understood that the user a may perform a triggering operation for a service initiation control (e.g., a first service initiation control or a second service initiation control), and at this time, the user terminal may perform step S911 in response to the triggering operation for the service initiation control. The trigger operation may include a contact operation such as a click or a long press, or may also include a non-contact operation such as a voice or a gesture. Further, the user terminal may execute step S912, invoke a start service protocol in the service layer through the live service start logic in the logic layer, and send a start request to the server through the service layer. The first initiation request and the second initiation request may be collectively referred to as initiation requests in the embodiments of the present application.
At this time, the server may execute step S913, receive the start request forwarded by the service layer, and may generate start prompt information. The server may refer to the generated start prompt information as first start prompt information based on the first start request, and the server may refer to the generated start prompt information as second start prompt information based on the second start request. The first start prompt information and the second start prompt information can be collectively referred to as start prompt information in the embodiment of the application. Further, the server may execute step S914 to send the start prompting message to the user terminal through the service layer.
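Steps S913 and S921 both map a request to a human-readable prompt message. A hedged sketch of such a mapping follows; the message wording echoes the examples elsewhere in this description, and everything else (keys, function name) is assumed:

```python
# Hypothetical server-side prompt generation for start/end requests.
PROMPTS = {
    ("start", "live"):  "{user} starts on-mic self-study",
    ("start", "image"): "{user} starts audience self-study",
    ("end", "live"):    "{user} ends on-mic self-study",
    ("end", "image"):   "{user} ends audience self-study",
}

def make_prompt(action, service_kind, user_name):
    """Build the start/end prompt information returned to the user terminal."""
    return PROMPTS[(action, service_kind)].format(user=user_name)

print(make_prompt("start", "image", "user A"))  # user A starts audience self-study
```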
It should be understood that the user terminal may execute step S915 to receive the start prompt information sent by the server through the logic layer, may then execute step S916 based on the start prompt information to start the timer corresponding to the video client, and, upon receiving the start prompt information, may execute step S917 to adjust the task state of the target user from the second task state to the first task state. Meanwhile, the user terminal may execute step S918 to output, through the presentation layer, the service data (live data or image data) of the target user executing the live broadcast service on the first display interface of the video client. If the live broadcast service executed by the target user is a live broadcast service associated with live data (e.g., an on-mic self-study service), the first display interface may be as shown in the display interface 500b of fig. 5; if the live broadcast service executed by the target user is a live broadcast service associated with image data (e.g., an audience self-study service), the first display interface may be as shown in the display interface 900b of fig. 10 described below.
Further, in the first display interface in the virtual room of the video client, the user a (i.e., the target user) may end the live service executed in the virtual room. It should be understood that the target user may perform a triggering operation on a service end control (a first service end control or a second service end control) in the first display interface of the video client, at this time, the user terminal may perform step S919, respond to the triggering operation on the service end control, and further may perform step S920, invoke an end service protocol in the service layer through end live broadcast service logic in the logic layer, and send an end request to the server. The first end request and the second end request may be collectively referred to as an end request in the embodiments of the present application.
In this case, the server may execute step S921 to receive the end request forwarded by the service layer, and further generate end prompt information. The server may refer to the end prompt information generated based on the first end request as first end prompt information, and may refer to the end prompt information generated based on the second end request as second end prompt information. The first end prompt information and the second end prompt information may be collectively referred to as end prompt information in the embodiments of the present application. Further, the server may execute step S922 to send the end prompt information to the user terminal through the service layer.
Further, the user terminal may execute step S923 to receive the end prompt information sent by the server through the logic layer. It should be understood that the user terminal may execute step S924 based on the end prompt information, thereby stopping the timer corresponding to the video client. Meanwhile, when the user terminal receives the end prompt information, step S925 may be executed to adjust the task state of user A from the first task state to the second task state. Further, step S926 may be executed to output, through the presentation layer, the service data after user A ends the live broadcast service (for example, the service duration of the live broadcast service just executed, and the configuration text information) on a feedback sub-interface of the video client that is independent of the first display interface. If the live broadcast service executed by user A is a live broadcast service associated with live data (e.g., an on-mic self-study service), the feedback sub-interface may be as shown in the display interface 600c of fig. 6; if the live broadcast service executed by user A is a live broadcast service associated with image data (e.g., an audience self-study service), the feedback sub-interface may be as shown in the display interface 900d of fig. 10 described below.
It should be appreciated that the target user corresponding to the user terminal may execute, on the first display interface of the video client, a live broadcast service associated with image data (e.g., an audience self-study service). It can be understood that the user terminal may, in response to the second service start control triggered by the target user in the first sub-area, send to the server a second start request associated with the live broadcast service corresponding to the second service start control, and the server may, in response to the second start request, generate second start prompt information associated with the live broadcast service. At this time, the server may return the second start prompt information to the user terminal. When the user terminal acquires the second start prompt information, it may use the second start prompt information as the text auxiliary information and output it to the third display area.
Meanwhile, the user terminal may adjust the task state of the target user from a second task state (e.g., a state without self-study) to a first task state (e.g., a state in self-study), and record, by a timer corresponding to the video client, an image display duration of the target user having the first task state. Further, the user terminal may configure virtual animation data (for example, a page turning animation) for user image data of a target user in a first task state, and may further use the virtual animation data, an image display duration and the user image data of the target user in the first task state as image data of the target user, and may further output image data of a second type of user in synchronization in a second sub-region and output live broadcast data of a first type of user in a first display region when the image data of the target user is output in the first sub-region.
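The composition of the "image data" in this step can be sketched as a simple record; the field names and default animation value are illustrative assumptions:

```python
def build_image_data(user_image, display_seconds, animation="page_turning"):
    """Bundle the virtual animation, the recorded image display duration, and
    the user's avatar into the image data output in the first sub-area."""
    return {
        "avatar": user_image,
        "duration": display_seconds,
        "animation": animation,
    }

data = build_image_data("user_a_avatar.png", 120)
print(data["animation"])  # page_turning
```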
For ease of understanding, please refer to fig. 10, where fig. 10 is a schematic view of a scenario for performing a live service associated with image data according to an embodiment of the present application. As shown in fig. 10, the video client (e.g., online study room) in the embodiment of the present application may be integrated in any one of the user terminals in the user terminal cluster shown in fig. 1a, for example, the video client may be a client integrated on the user terminal 100 a. The user corresponding to the user terminal in the embodiment of the present application may be user a. The server in the embodiment of the present application may be the server 10 shown in fig. 1 a.
The second presentation area in the display interface 900a shown in fig. 10 may include a first sub-area (e.g., sub-area 1) and a second sub-area (e.g., sub-area 2). The sub-area 1 may contain user image data of the user a and a service initiation control (i.e., a second service initiation control). The subarea 2 may be used for presenting image data of a second type of user. Wherein the second type of user of the first user may be user 90a, user 90b, user 90c, and user 90d in the virtual room of the video client.
It should be understood that user A (i.e., the target user) corresponding to the user terminal may perform a trigger operation on the service start control (i.e., the second service start control) in the display interface 900a shown in fig. 10. In response to the trigger operation, the user terminal may invoke a service start protocol (e.g., a start self-study protocol) in the service layer through the logic layer shown in fig. 1b, and may send to the server a second start request associated with the live broadcast service corresponding to the service start control. At this time, when receiving the second start request, the server may respond to the second start request and further generate second start prompt information associated with the live broadcast service. For example, the second start prompt information may be "user A starts audience self-study".
Further, the server may send the second start prompt information to the user terminal, so that the user terminal may obtain the second start prompt information, use it as the text auxiliary information, and output the text auxiliary information to the third display area in the display interface 900b. At this time, the user terminal may adjust the task state of user A from the second task state (e.g., the not-in-self-study state) to the first task state (e.g., the in-self-study state) through the logic layer, and record, through the timer corresponding to the video client, the image display duration of the target user having the first task state. Further, the user terminal may update the second type users among the first users according to user A in the first task state. At this time, the updated second type users among the first users in the virtual room may be user 90a, user 90b, user 90c, user 90d, and user A. For the method of determining the image display duration, reference may be made to the method of determining the video live broadcast duration, which will not be repeated here.
Further, the user terminal may configure virtual animation data (for example, a page-turning animation) for the user image data of user A in the first task state, take the virtual animation data, the image display duration, and the user image data of user A in the first task state as the image data of user A, and then output the image data of user A in sub-area 3 of the display interface 900b, synchronously output the image data of the second type users in sub-area 4, and output the live broadcast data of the first type users in the first display area of the display interface 900b. In the embodiment of the application, configuring the virtual animation data for the second type users enhances the dynamic feel of the second type users executing the live broadcast service; when a second type user views the virtual animation data in the display interface 900b, the experience of executing the live broadcast service can be improved.
Here, sub-area 3 in the second display area of the display interface 900b shown in fig. 10 may contain a service end control (i.e., a second service end control). It should be understood that, when user A ends the live broadcast service associated with the image data, a trigger operation may be performed on the service end control in the display interface 900b shown in fig. 10. In response to the trigger operation, the user terminal may invoke an end service protocol (e.g., an end self-study protocol) in the service layer through the logic layer shown in fig. 1b, and may send to the server a second end request associated with the live broadcast service corresponding to the service end control. At this time, when receiving the second end request, the server may respond to the second end request and further generate second end prompt information associated with the live broadcast service. For example, the second end prompt information may be "user A ends audience self-study".
At this time, the server may return the second end prompt information to the user terminal. When the user terminal acquires the second end prompt information, it may adjust the task state of user A from the first task state (e.g., the in-self-study state) to the second task state (e.g., the not-in-self-study state) through the logic layer, and update the second type users among the first users and the second users according to the target user in the second task state. Meanwhile, when the server acquires the second end request, it may record the end timestamp, determine the service duration (i.e., the image display duration) of the live broadcast service executed by the target user based on the difference between the start timestamp and the end timestamp, and configure associated configuration text information for user A based on the historical behavior data of user A. Further, when the user terminal acquires the second end prompt information, it may end the execution of the live broadcast task and stop the timer corresponding to the video client. At this time, the user terminal may determine a feedback sub-interface independent of the first display interface (for example, the sub-interface 900d independent of the display interface 900c shown in fig. 10), and may output to the sub-interface 900d the image display duration of this execution of the live broadcast service obtained from the server, together with the configuration text information of user A.
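The server-side duration computation described above is simply the difference between the two recorded timestamps; a minimal sketch (timestamp values are illustrative):

```python
def service_duration(start_ts, end_ts):
    """Image display duration = end timestamp - start timestamp, in seconds."""
    if end_ts < start_ts:
        raise ValueError("end timestamp precedes start timestamp")
    return end_ts - start_ts

# e.g. an audience self-study started at 12:05:00 and ended at 13:32:09
print(service_duration(12 * 3600 + 5 * 60, 13 * 3600 + 32 * 60 + 9))  # 5229
```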
Further, please refer to fig. 11, where fig. 11 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 11, the data processing apparatus 1 may be a computer program (including program code) running in a computer device; for example, the data processing apparatus 1 may be application software. The data processing apparatus 1 may be configured to perform the corresponding steps in the method provided by the embodiments of the present application. As shown in fig. 11, the data processing apparatus 1 may run in a user terminal, which may be any user terminal (e.g., user terminal 100a) in the user terminal cluster in the embodiment corresponding to fig. 1a. The data processing apparatus 1 may include: a first output module 11, a first obtaining module 12, a first determining module 13, a first sending module 14, a second obtaining module 15, a first adjusting module 16, a second output module 17, a third obtaining module 18, a second determining module 19, a third output module 20, a second sending module 21, a fourth obtaining module 22, an ending module 23, a third determining module 24, a third sending module 25, a fifth obtaining module 26, a second adjusting module 27, a configuring module 28, a first receiving module 29, a fourth output module 30, a sixth obtaining module 31, an updating module 32, a fourth determining module 33, a first interface switching module 34, a second interface switching module 35, a fourth sending module 36, a fifth output module 37 and a sixth output module 38.
The first output module 11 is configured to, in response to a first operation for the video client, output a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state.
Wherein, the first output module 11 includes: a first sending unit 111, a first receiving unit 112 and an updating unit 113.
The first sending unit 111 is configured to send a room obtaining request to a server corresponding to the video client in response to a first operation triggered by the video client; the room acquisition request is used for indicating the server to configure a virtual room for a target user accessing the video client; the target user is a user executing a first operation;
the first receiving unit 112 is configured to receive the virtual room returned by the server, and initialize the task state of the target user to a second task state when the target user enters the virtual room;
the updating unit 113 is configured to update a second user in the virtual room according to the target user with the second task state, and output the updated virtual room to the first display interface of the video client.
For specific implementation manners of the first sending unit 111, the first receiving unit 112, and the updating unit 113, reference may be made to the description of step S101 in the embodiment corresponding to fig. 3, and details will not be further described here.
The first obtaining module 12 is configured to, in response to a second operation on the first display interface, obtain a first user list associated with the first task state, and obtain a second user list associated with the second task state; the first user list comprises a first user; the second user list comprises a second user;
wherein, the first obtaining module 12 includes: a second sending unit 121, a second receiving unit 122, a first sorting processing unit 123 and a second sorting processing unit 124.
The second sending unit 121 is configured to send, in response to a second operation on the first display interface, a list pull request to a server corresponding to the video client; the list pull request is used for instructing the server to acquire a first initial user list associated with the first task state and a second initial user list associated with the second task state;
the second receiving unit 122 is configured to receive the first initial user list and the second initial user list returned by the server; the first initial user list comprises a first user, and the first user comprises a first type of user and a second type of user; the second initial user list comprises a second user;
the first sorting processing unit 123 is configured to sort the first users in the first initial user list by using the video live broadcast time length of the first type user and the image display time length of the second type user as service time lengths of the first users in the first initial user list, and determine the first initial user list after the sorting processing as the first user list;
the second sorting processing unit 124 is configured to obtain an access timestamp of a second user in the second initial user list entering the virtual room, configure a corresponding time attenuation factor for the access timestamp corresponding to the second user, perform sorting processing on the second user in the second initial user list based on the time attenuation factor, and determine the sorted second initial user list as the second user list.
For specific implementation of the second sending unit 121, the second receiving unit 122, the first sorting processing unit 123, and the second sorting processing unit 124, reference may be made to the description of step S102 in the embodiment corresponding to fig. 3, and details will not be further described here.
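As a non-limiting illustration, the two sorting strategies above — the first sorting processing unit 123 ranking first users by a unified service duration, and the second sorting processing unit 124 weighting second users by a time attenuation factor on their room-entry timestamp — may be sketched as follows. The exponential decay form, the half-life constant, and all field names are assumptions of this sketch, not details disclosed by the embodiment:

```python
import math
import time

def sort_first_list(first_users):
    # First-type users carry a video live broadcast duration; second-type
    # users carry an image display duration. Both are treated uniformly
    # as a "service duration" and ranked in descending order.
    return sorted(first_users, key=lambda u: u["service_seconds"], reverse=True)

def sort_second_list(second_users, now=None, half_life=600.0):
    # Configure a time attenuation factor for each second user's access
    # timestamp (room-entry time), so that more recent entrants rank higher.
    now = time.time() if now is None else now

    def decay(user):
        age = now - user["access_ts"]
        return math.exp(-age * math.log(2) / half_life)

    return sorted(second_users, key=decay, reverse=True)
```

Any monotonically decreasing function of the entry-time age would serve equally as the attenuation factor; the exponential half-life form is merely one common choice.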
The first determining module 13 is configured to determine a target user list from the first user list and the second user list, for outputting to a list sub-interface independent of the first display interface.
The first display interface comprises a first display area, a second display area and a third display area; the first display area has a function of displaying live broadcast data of a first type of user; the second display region comprises a first sub-region and a second sub-region; the first sub-area has a function of displaying user image data of a target user; the second sub-area has a function of presenting image data of a second type of user; the first type user and the second type user both belong to a first user in a first task state in the virtual room; the third display area has the function of displaying the auxiliary information of the text in the virtual room;
the first sending module 14 is configured to send, in response to a first service start control triggered by a target user in a first sub-area, a first start request associated with a live service corresponding to the first service start control to a server, so that the server responds to the first start request and generates first start prompt information associated with the live service;
the second obtaining module 15 is configured to obtain first start prompting information returned by the server based on the first start request, and output the first start prompting information to the third display area as text auxiliary information;
the first adjusting module 16 is configured to adjust the task state of the target user from the second task state to the first task state, and update the first type user in the first user according to the target user in the first task state;
the second output module 17 is configured to output the updated live data of the first type of user to the first display area, delete the first sub-area in the second display area, and output image data of the second type of user according to the second display area and the second sub-area from which the first sub-area is deleted.
The first starting request is also used for indicating the server to determine a target video display area for executing the live broadcast service in the first display area; the updated first type users comprise target users and first type users;
the second output module 17 includes: an acquisition unit 171, an area expansion unit 172, and an output unit 173.
The acquiring unit 171 is configured to acquire a target video display area returned by the server based on the first start request, and start a shooting task for shooting a target user;
the area expansion unit 172 is configured to, when live data of a target user captured during execution of a shooting task is output to a target video display area, delete a first sub-area in a second display area, perform area expansion on the second sub-area in the second display area, and obtain an expanded second sub-area; the size of the expanded second sub-area is equal to that of the second display area;
the output unit 173 is configured to output the image data of the second type of user in the expanded second sub-area when the live data of the first type of user is synchronously output in the first display area to which the target video display area belongs.
For specific implementation manners of the obtaining unit 171, the area expanding unit 172 and the output unit 173, reference may be made to the description of outputting the live data of the target user in the embodiment corresponding to fig. 5, and details will not be further described here.
The third obtaining module 18 is configured to obtain a start timestamp recorded by the server when the server obtains the first start request, start a timer corresponding to the video client when the server starts to execute the shooting task, and record a timing timestamp of the timer;
the second determining module 19 is configured to use a video live broadcast duration of live broadcast data of a target user, a first task state of the target user, and user image data of the target user as service data information; the live video time length is determined by the difference between the timing timestamp and the starting timestamp;
the third output module 20 is configured to output the service data information in the target video display area.
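The duration bookkeeping performed by the third obtaining module 18 and the second determining module 19 — the live broadcast duration being the difference between the running timer's timing timestamp and the server-recorded start timestamp — can be illustrated with a minimal sketch; the class and method names are hypothetical:

```python
class BroadcastTimer:
    # Minimal sketch: the server records a start timestamp when it receives
    # the first start request; the client-side timer then reports the video
    # live broadcast duration as (timing timestamp - start timestamp).
    def __init__(self, start_ts: float):
        self.start_ts = start_ts

    def live_duration(self, timing_ts: float) -> float:
        # Clamp to zero to guard against clock skew between client and server.
        return max(0.0, timing_ts - self.start_ts)
```

The service data information output in the target video display area would then combine this duration with the target user's first task state and user image data.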
The second sending module 21 is configured to send, in response to a trigger operation for the first service end control, a first end request associated with the first service end control to the server, so that the server generates first end prompt information in response to the first end request;
the fourth obtaining module 22 is configured to obtain first end prompt information returned by the server based on the first end request, adjust the task state of the target user from the first task state to the second task state, and update the second type user in the first user according to the target user in the second task state;
the ending module 23 is configured to acquire an ending timestamp recorded by the server when the server acquires the first ending request, and end a timer corresponding to the video client when the server finishes executing the shooting task;
the third determining module 24 is configured to determine a feedback sub-interface independent of the first display interface, and output, to the feedback sub-interface, a service duration determined by the server for the target user executing the shooting task and configuration text information associated with historical behavior data of the target user; the service duration is determined by the difference between the end timestamp and the start timestamp.
The first display interface comprises a first display area, a second display area and a third display area; the first display area has a function of displaying live broadcast data of a first type of user; the second display region comprises a first sub-region and a second sub-region; the first sub-area has a function of displaying user image data of a target user; the second sub-area has a function of presenting image data of a second type of user; the first type user and the second type user both belong to a first user in a first task state in the virtual room; the third display area has the function of displaying the auxiliary information of the text in the virtual room;
the third sending module 25 is configured to send, in response to a second service start control triggered by the target user in the first sub-area, a second start request associated with the live service corresponding to the second service start control to the server, so that the server responds to the second start request and generates second start prompt information associated with the live service;
the fifth obtaining module 26 is configured to obtain second start-up prompt information returned by the server based on the second start-up request, and output the second start-up prompt information to the third display area as text auxiliary information;
the second adjusting module 27 is configured to adjust the task state of the target user from the second task state to the first task state, and record the image display duration of the target user with the first task state through a timer corresponding to the video client;
the configuration module 28 is configured to configure virtual animation data for user image data of a target user in a first task state, use the virtual animation data, an image display duration and the user image data of the target user in the first task state as image data of the target user, output image data of a second type of user synchronously in a second sub-area when the image data of the target user is output in a first sub-area, and output live broadcast data of a first type of user in a first display area.
The first receiving module 29 is configured to receive service guiding information configured by the server for the cold-start user if the target user belongs to the cold-start user in the video client; the cold start user is a user without historical access information in the video client;
the fourth output module 30 is configured to output the service guidance information in a guidance sub-interface independent from the first display interface; the service guide information is used for instructing the target user to adjust the task state in the virtual room.
The sixth obtaining module 31 is configured to obtain a list update notification, which is sent by a server corresponding to the video client and used to update the first user list and the second user list; the list update notification is generated by the server when a task state change request sent by a user in the virtual room is detected;
the updating module 32 is configured to perform an updating operation on the first user list and the second user list respectively based on the list updating notification, so as to obtain an updated first user list and an updated second user list;
the fourth determining module 33 is configured to update the target user list based on the updated first user list and the updated second user list, and output the updated target user list to the list sub-interface.
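As an illustrative sketch of the update operation performed by the updating module 32 — moving a user between the first user list and the second user list when the server's list update notification reports a task state change — the following minimal function may be considered; the tuple shape of the notification and the state labels are assumptions of this sketch:

```python
def apply_list_update(first_list, second_list, change):
    # change = (user_id, new_state), derived from the server's list update
    # notification. The user is removed from both lists and re-inserted
    # into the list matching the new task state.
    user_id, new_state = change
    first = [u for u in first_list if u != user_id]
    second = [u for u in second_list if u != user_id]
    if new_state == "first":
        first.append(user_id)
    else:
        second.append(user_id)
    return first, second
```

The fourth determining module 33 would then re-derive the target user list from the two updated lists before refreshing the list sub-interface.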
The first interface switching module 34 is configured to switch the display interface of the video client from the first display interface to the second display interface when the target user exits the virtual room;
the second interface switching module 35 is configured to respond to a trigger operation for the second display interface, and switch the display interface of the video client from the second display interface to a third display interface; the third display interface comprises a business ranking control used for acquiring a ranking list associated with the target user;
the fourth sending module 36 is configured to send a ranking query request to a server corresponding to the video client in response to a trigger operation for the service ranking control; the ranking query request is used for indicating the server to obtain a ranking list; the ranking list comprises a first ranking list and a second ranking list; the users in the first ranking list comprise users in the same geographical location area as the target user; the users in the second ranking list comprise users having interaction relation with the target user;
the fifth output module 37 is configured to obtain the first ranking list and the second ranking list returned by the server, and determine a target ranking list for outputting to a fourth display interface of the video client from the first ranking list and the second ranking list;
the sixth output module 38 is configured to switch the display interface of the video client from the third display interface to the fourth display interface, determine a target rank of the target user in the target rank list, and output the target rank list including the target rank to the fourth display interface.
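The two ranking lists described above — a first ranking list of users in the target user's geographical location area and a second ranking list of users having an interaction relation with the target user — can be sketched as simple filters followed by a score sort. The `score` field, the `friends` set, and the 1-based rank helper are hypothetical details for illustration only:

```python
def build_ranking_lists(users, target):
    # First ranking list: users sharing the target user's geographic region.
    # Second ranking list: users with an interaction relation (modeled here
    # as membership in the target's friends set). Both sorted by score.
    same_region = [u for u in users if u["region"] == target["region"]]
    interacted = [u for u in users if u["id"] in target["friends"]]
    by_score = lambda lst: sorted(lst, key=lambda u: u["score"], reverse=True)
    return by_score(same_region), by_score(interacted)

def target_rank(ranking, target_id):
    # 1-based position of the target user within a ranking list,
    # or None if the target user does not appear in it.
    for i, user in enumerate(ranking, start=1):
        if user["id"] == target_id:
            return i
    return None
```

In the embodiment, the server performs this computation in response to the ranking query request; the client merely selects the target ranking list and locates the target rank for display on the fourth display interface.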
For specific implementation manners of the first output module 11, the first obtaining module 12, the first determining module 13, the first sending module 14, the second obtaining module 15, the first adjusting module 16, the second output module 17, the third obtaining module 18, the second determining module 19, the third output module 20, the second sending module 21, the fourth obtaining module 22, the ending module 23, the third determining module 24, the third sending module 25, the fifth obtaining module 26, the second adjusting module 27, the configuring module 28, the first receiving module 29, the fourth output module 30, the sixth obtaining module 31, the updating module 32, the fourth determining module 33, the first interface switching module 34, the second interface switching module 35, the fourth sending module 36, the fifth output module 37, and the sixth output module 38, reference may be made to the descriptions of steps S201 to S211 in the embodiment corresponding to fig. 8 above, and details will not be further described here. In addition, the beneficial effects of the same method will not be described in detail.
Further, please refer to fig. 12, which is a schematic diagram of a computer device according to an embodiment of the present application. As shown in fig. 12, the computer device 1000 may be any one of the user terminals in the user terminal cluster in the embodiment corresponding to fig. 1a, for example, the user terminal 100 a. The computer device 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 is used to enable communication connections among these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the aforementioned processor 1001. As shown in fig. 12, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer apparatus 1000 shown in fig. 12, the network interface 1004 is mainly used for network communication with a server; the user interface 1003 is an interface for providing a user with input; and the processor 1001 may be used to invoke a device control application stored in the memory 1005 to implement:
in response to a first operation on the video client, outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
in response to a second operation on the first display interface, acquiring a first user list associated with the first task state and acquiring a second user list associated with the second task state; the first user list comprises a first user; the second user list comprises a second user;
from the first user list and the second user list, a list of target users for output to a list sub-interface separate from the first display interface is determined.
It should be understood that the computer device 1000 described in this embodiment of the present application may perform the description of the data processing method in the embodiment corresponding to fig. 3 and fig. 8, and may also perform the description of the data processing apparatus 1 in the embodiment corresponding to fig. 11, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, it is to be noted that: an embodiment of the present application further provides a computer-readable storage medium, in which the computer program executed by the aforementioned data processing apparatus 1 is stored. The computer program includes program instructions, and when the processor executes the program instructions, the description of the data processing method in the embodiment corresponding to fig. 3 or fig. 8 can be performed, and details will not be repeated here. In addition, the beneficial effects of the same method will not be described in detail. For technical details not disclosed in the embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of the method embodiments of the present application. As an example, the program instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network, where the communication network may comprise a blockchain system.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application, so that the present application is not limited thereto, and all equivalent variations and modifications can be made to the present application.

Claims (14)

1. A data processing method, comprising:
in response to a first operation for a video client, outputting a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprises a first user in a first task state and a second user in a second task state; the second task state is a different task state than the first task state;
in response to a second operation on the first display interface, acquiring a first user list associated with the first task state and acquiring a second user list associated with the second task state; the first user list comprises the first user; the second user list comprises the second user;
determining a list of target users for output to a list sub-interface independent of the first display interface from the first user list and the second user list.
2. The method of claim 1, wherein outputting, in response to a first operation for a video client, a virtual room associated with the first operation to a first display interface of the video client comprises:
responding to a first operation triggered by a video client, and sending a room acquisition request to a server corresponding to the video client; the room acquisition request is used for indicating the server to configure a virtual room for a target user accessing the video client; the target user is a user executing the first operation;
receiving the virtual room returned by the server, and initializing the task state of the target user to the second task state when the target user enters the virtual room;
and updating the second user in the virtual room according to the target user with the second task state, and outputting the updated virtual room to the first display interface of the video client.
3. The method according to claim 2, wherein the first display interface comprises a first display area, a second display area and a third display area; the first display area has a function of displaying live broadcast data of a first type of user; the second display region comprises a first sub-region and a second sub-region; the first sub-area has a function of presenting user image data of the target user; the second sub-area has a function of presenting image data of a second type of user; the first type of user and the second type of user both belong to the first user in a first task state in the virtual room; the third display area has a function of displaying the text auxiliary information in the virtual room;
the method further comprises the following steps:
responding to a first service starting control triggered by the target user in the first sub-area, and sending a first starting request associated with the live broadcast service corresponding to the first service starting control to the server, so that the server responds to the first starting request and generates first starting prompt information associated with the live broadcast service;
acquiring first starting prompt information returned by the server based on the first starting request, and outputting the first starting prompt information serving as the text auxiliary information to the third display area;
adjusting the task state of the target user from the second task state to the first task state, and updating a first type user in the first user according to the target user in the first task state;
and outputting the updated live broadcast data of the first type of users to the first display area, deleting the first sub-area in the second display area, and outputting the image data of the second type of users according to the second display area and the second sub-area after deleting the first sub-area.
4. The method of claim 3, wherein the first initiation request is further used to instruct the server to determine a target video presentation area in the first presentation area for executing the live broadcast service; the updated first type user comprises the target user and the first type user;
the outputting the updated live broadcast data of the first type of user to the first display area, deleting the first sub-area in the second display area, and outputting the image data of the second type of user according to the second display area and the second sub-area from which the first sub-area is deleted includes:
acquiring the target video display area returned by the server based on the first starting request, and starting a shooting task for shooting the target user;
when live broadcast data of the target user shot when the shooting task is executed is output to the target video display area, when the first sub-area is deleted in the second display area, performing area expansion on the second sub-area in the second display area to obtain an expanded second sub-area; the area size of the expanded second sub-area is equal to the area size of the second display area;
and when the live broadcast data of the first type of users are synchronously output in the first display area to which the target video display area belongs, outputting the image data of the second type of users in the expanded second sub-area.
5. The method of claim 4, further comprising:
acquiring a starting timestamp recorded by the server when the server acquires the first starting request, starting a timer corresponding to the video client when the shooting task is started, and recording a timing timestamp of the timer;
taking the video live broadcast duration of the live broadcast data of the target user, the first task state of the target user and the user image data of the target user as service data information; the live video time length is determined by the difference between the timing timestamp and the starting timestamp;
and outputting the service data information in the target video display area.
6. The method of claim 5, further comprising:
sending a first end request associated with a first business end control to the server in response to a triggering operation for the first business end control, so that the server generates first end prompt information in response to the first end request;
acquiring first ending prompt information returned by the server based on the first ending request, adjusting the task state of the target user from the first task state to the second task state, and updating a second type user in the first user according to the target user in the second task state;
acquiring an end timestamp recorded by the server when the first end request is acquired, and ending a timer corresponding to the video client when the shooting task is finished;
determining a feedback sub-interface independent of the first display interface, and outputting, to the feedback sub-interface, a service duration determined by the server for the target user executing the shooting task and configuration text information associated with historical behavior data of the target user; the service duration is determined by the difference between the end time stamp and the start time stamp.
7. The method according to claim 2, wherein the first display interface comprises a first display area, a second display area and a third display area; the first display area has a function of displaying live broadcast data of a first type of user; the second display region comprises a first sub-region and a second sub-region; the first sub-area has a function of presenting user image data of the target user; the second sub-area has a function of presenting image data of a second type of user; the first type of user and the second type of user both belong to the first user in a first task state in the virtual room; the third display area has a function of displaying the text auxiliary information in the virtual room;
the method further comprises the following steps:
responding to a second service starting control triggered by the target user in the first sub-area, and sending a second starting request associated with the live service corresponding to the second service starting control to the server, so that the server responds to the second starting request and generates second starting prompt information associated with the live service;
acquiring second starting prompt information returned by the server based on the second starting request, and outputting the second starting prompt information serving as the text auxiliary information to the third display area;
adjusting the task state of the target user from the second task state to the first task state, and recording the image display duration of the target user with the first task state through a timer corresponding to the video client;
configuring virtual animation data for user image data of a target user in the first task state, taking the virtual animation data, the image display duration and the user image data of the target user in the first task state as the image data of the target user, synchronously outputting the image data of the second type of user in the second sub-area when the image data of the target user is output in the first sub-area, and outputting live broadcast data of the first type of user in the first display area.
8. The method of claim 2, further comprising:
if the target user belongs to a cold-start user in the video client, receiving service guide information configured by the server for the cold-start user; the cold start user is a user without historical access information in the video client;
outputting the service guide information in a guide sub-interface independent of the first display interface; the service guiding information is used for indicating the target user to adjust the task state in the virtual room.
9. The method of claim 1, wherein obtaining a first list of users associated with the first task state and obtaining a second list of users associated with the second task state in response to the second operation on the first display interface comprises:
responding to a second operation aiming at the first display interface, and sending a list pulling request to a server corresponding to the video client; the list pull request is used for instructing the server to acquire a first initial user list associated with the first task state and a second initial user list associated with the second task state;
receiving the first initial user list and the second initial user list returned by the server; the first initial user list comprises the first user, and the first user comprises a first type of user and a second type of user; the second initial user list comprises the second user;
taking the video live broadcast time length of the first type user and the image display time length of the second type user as the service time length of the first user in the first initial user list, sequencing the first user in the first initial user list, and determining the sequenced first initial user list as the first user list;
acquiring an access timestamp of the second user in the second initial user list entering the virtual room, configuring a corresponding time attenuation factor for the access timestamp corresponding to the second user, performing sorting processing on the second user in the second initial user list based on the time attenuation factor, and determining the sorted second initial user list as the second user list.
10. The method of claim 1, further comprising:
acquiring a list updating notification which is sent by a server corresponding to the video client and used for updating the first user list and the second user list; the list update notification is generated by the server upon detecting a task state change request sent by a user in the virtual room;
respectively performing updating operation on the first user list and the second user list based on the list updating notification to obtain an updated first user list and an updated second user list;
and updating the target user list based on the updated first user list and the updated second user list, and outputting the updated target user list to the list sub-interface.
11. The method of claim 2, further comprising:
when the target user exits the virtual room, switching the display interface of the video client from the first display interface to a second display interface;
responding to the trigger operation aiming at the second display interface, and switching the display interface of the video client from the second display interface to a third display interface; the third display interface comprises a business ranking control used for obtaining a ranking list associated with the target user;
responding to the triggering operation aiming at the business ranking control, and sending a ranking query request to a server corresponding to the video client; the ranking query request is used for indicating the server to acquire the ranking list; the ranking list comprises a first ranking list and a second ranking list; the users in the first ranking list comprise users in the same geographical location area as the target user; the users in the second ranking list comprise users having an interaction relation with the target user;
acquiring the first ranking list and the second ranking list returned by the server, and determining a target ranking list on a fourth display interface for outputting to the video client from the first ranking list and the second ranking list;
switching the display interface of the video client from the third display interface to the fourth display interface, determining the target rank of the target user in the target rank list, and outputting the target rank list containing the target rank to the fourth display interface.
12. A data processing apparatus, comprising:
a first output module, configured to output, in response to a first operation for a video client, a virtual room associated with the first operation to a first display interface of the video client; the virtual room comprising a first user in a first task state and a second user in a second task state; the second task state being a task state different from the first task state;
a first obtaining module, configured to obtain, in response to a second operation on the first display interface, a first user list associated with the first task state and a second user list associated with the second task state; the first user list comprises the first user; the second user list comprises the second user;
a first determining module, configured to determine, from the first user list and the second user list, a target user list for outputting to a list sub-interface independent of the first display interface.
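The three modules of the apparatus claim can be sketched as plain classes. The class names mirror the claim wording, but the data shapes (dicts keyed by `"users"` and `"state"`) are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the claim-12 apparatus as three cooperating modules.

class FirstOutputModule:
    def output_virtual_room(self, rooms, first_operation):
        # Resolve the virtual room associated with the first operation; a
        # real client would then render it on the first display interface.
        return rooms[first_operation]

class FirstObtainingModule:
    def obtain_lists(self, room):
        # Split the room's users into a first user list (first task state)
        # and a second user list (second task state).
        first = [u for u in room["users"] if u["state"] == "first"]
        second = [u for u in room["users"] if u["state"] == "second"]
        return first, second

class FirstDeterminingModule:
    def determine_target_list(self, first_list, second_list):
        # Derive the target user list shown on the list sub-interface;
        # here the two lists are simply concatenated.
        return first_list + second_list
```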
13. A computer device, comprising: a processor, a memory, a network interface;
the processor is connected to the memory and the network interface, wherein the network interface is configured to provide data communication functions, the memory is configured to store a computer program, and the processor is configured to call the computer program to perform the method of any one of claims 1 to 11.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, perform the method of any one of claims 1 to 11.
CN202010525775.6A 2020-06-10 Data processing method, device, computer equipment and storage medium Active CN113784151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525775.6A CN113784151B (en) 2020-06-10 Data processing method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113784151A true CN113784151A (en) 2021-12-10
CN113784151B CN113784151B (en) 2024-05-17


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000072429A (en) * 2000-06-03 2000-12-05 조철 Realtime, interactive multimedia education system and method in online environment
WO2002010946A1 (en) * 2000-07-28 2002-02-07 Idiil International, Inc. Remote instruction and communication system
JP2004118506A (en) * 2002-09-26 2004-04-15 Victor Co Of Japan Ltd Conference network system
JP2006323467A (en) * 2005-05-17 2006-11-30 Nippon Telegr & Teleph Corp <Ntt> Exercise support system, its exercise support management device and program
CN101346949A (en) * 2005-10-21 2009-01-14 捷讯研究有限公司 Instant messaging device/server protocol
CN107682752A (en) * 2017-10-12 2018-02-09 广州视源电子科技股份有限公司 Method, apparatus, system, terminal device and the storage medium that video pictures are shown
CN109753501A (en) * 2018-12-27 2019-05-14 广州市玄武无线科技股份有限公司 A kind of data display method of off-line state, device, equipment and storage medium
CN110581975A (en) * 2019-09-03 2019-12-17 视联动力信息技术股份有限公司 Conference terminal updating method and video networking system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant