CN113905265A - Video data processing method and device and storage medium - Google Patents

Video data processing method and device and storage medium

Info

Publication number
CN113905265A
CN113905265A (application number CN202111151931.8A)
Authority
CN
China
Prior art keywords
user
type
video
area
virtual
Prior art date
Legal status
Granted
Application number
CN202111151931.8A
Other languages
Chinese (zh)
Other versions
CN113905265B (en)
Inventor
王静
马越
李倩芸
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority: CN202111151931.8A
Publication of CN113905265A
Application granted
Publication of CN113905265B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43076 Synchronising the rendering of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/482 End-user interface for program selection
    • H04N21/4826 End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted out according to their score
    • H04N21/485 End-user interface for client configuration
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133 Additional data specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a video data processing method, apparatus, and storage medium. The method comprises: in response to a trigger operation performed on a task-initiating control in a video detail page, outputting a data transfer page corresponding to the service attribute indicated by the task-initiating control; after a data transfer operation is performed on the data transfer page, determining a virtual interaction interface and outputting first user image data of a first-type user in a virtual seating area of the virtual interaction interface; in response to a trigger operation performed on an invitation control in the virtual interaction interface, outputting interaction invitation information to the public service broadcast group in which the first-type user is located; outputting received second user image data of second-type users to the virtual seating area; and playing the target video data for the first-type and second-type users in the virtual seating area. The method and device provide rich interface display effects and enable playback control for synchronized online viewing by multiple users.

Description

Video data processing method and device and storage medium
The present application is a divisional application of the Chinese patent application filed with the Chinese Patent Office on 03/08/2020 under application number 202010769097.8, entitled "A video data processing method, apparatus, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video data processing method and apparatus, and a storage medium.
Background
Currently, when a user requests a video program (for example, a pay program) in a video client, the user pays for the program he or she wants to watch, so any two users watching the same program are independent of each other. For example, for a video program A, after two users (e.g., user B and user C) pay for the program, the service server configures two independent viewing channels for them, so that user B may watch program A through viewing channel 1 at one time (e.g., time T1) while user C watches program A through viewing channel 2 at another time (e.g., time T2). Moreover, because any two users are independent of each other, their user terminals present the same interface display effect during payment or playback, so the display interfaces of different user terminals are monotonous.
Disclosure of Invention
The embodiments of the present application provide a video data processing method, apparatus, and storage medium, which can provide rich interface display effects and enable playback control for synchronized online viewing by multiple users.
An embodiment of the present application provides a video data processing method, where the method includes:
acquiring a video detail page corresponding to target video data in an application client, and, in response to a trigger operation performed by a first-type user on a task-initiating control in the video detail page, outputting a data transfer page corresponding to the service attribute indicated by the task-initiating control; the video detail page includes a browsing auxiliary area associated with the target video data;
when a data transfer operation is completed on the data transfer page, returning the display interface of the application client from the data transfer page to the video detail page, rotating the browsing auxiliary area in the video detail page, taking the display interface containing the rotated browsing auxiliary area as a virtual interaction interface, and outputting first user image data of the first-type user in a virtual seating area of the virtual interaction interface;
in response to a trigger operation performed on an invitation control in the virtual interaction interface, outputting interaction invitation information associated with the target video data to the public service broadcast group in which the first-type user is located; the public service broadcast group includes second-type users, i.e., users in the group who respond to the interaction invitation information;
receiving second user image data of a second-type user sent by the user terminal corresponding to that user, and outputting the second user image data to the virtual seating area; and
playing the target video data for the first-type and second-type users in the virtual seating area, in a virtual play room corresponding to the public service broadcast group.
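The client-side steps above can be read as a small state machine: payment unlocks the seating area, the initiator takes a seat, and each responder fills the next seat. The following sketch illustrates that sequence; all class and method names are hypothetical, and this is an illustrative model rather than the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ViewingParty:
    """Hypothetical model of the client-side flow described above."""
    initiator: str          # the first-type user
    group_size: int         # total members of the public service broadcast group
    seats: list = field(default_factory=list)

    def complete_data_transfer(self):
        # After the data transfer operation, the browsing auxiliary area is
        # "rotated" into a virtual interaction interface and the initiator
        # occupies the first seat of the virtual seating area.
        self.seats = [None] * self.group_size
        self.seats[0] = self.initiator

    def accept_invitation(self, user: str) -> int:
        # A second-type user who responds to the invitation takes the
        # next free seat; the seat index is returned for display.
        idx = self.seats.index(None)
        self.seats[idx] = user
        return idx

party = ViewingParty(initiator="user_1", group_size=4)
party.complete_data_transfer()
seat = party.accept_invitation("user_4")
print(seat, party.seats)  # prints: 1 ['user_1', 'user_4', None, None]
```

The seating list directly mirrors what the virtual seating area renders: occupied entries show user image data, empty entries show free seats.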
An aspect of an embodiment of the present application provides a video data processing apparatus, including:
a detail page acquisition module, configured to acquire a video detail page corresponding to target video data in an application client, and, in response to a trigger operation performed by a first-type user on a task-initiating control in the video detail page, output a data transfer page corresponding to the service attribute indicated by the task-initiating control; the video detail page includes a browsing auxiliary area associated with the target video data;
an area rotation module, configured to return the display interface of the application client from the data transfer page to the video detail page when a data transfer operation is completed on the data transfer page, rotate the browsing auxiliary area in the video detail page, take the display interface containing the rotated browsing auxiliary area as a virtual interaction interface, and output first user image data of the first-type user in a virtual seating area of the virtual interaction interface;
an invitation information sending module, configured to, in response to a trigger operation performed on an invitation control in the virtual interaction interface, output interaction invitation information associated with the target video data to the public service broadcast group in which the first-type user is located; the public service broadcast group includes second-type users, i.e., users in the group who respond to the interaction invitation information;
an image data receiving module, configured to receive second user image data of a second-type user sent by the corresponding user terminal and output the second user image data to the virtual seating area; and
a video data playing module, configured to play the target video data for the first-type and second-type users in the virtual seating area, in the virtual play room corresponding to the public service broadcast group.
Wherein the apparatus further includes:
a session interface control module, configured to acquire a session interface corresponding to the public service broadcast group in the application client, and, in response to a trigger operation on the session interface, display a target service program of the application client in the session interface; the target service program is an embedded subprogram of the application client;
a video data request module, configured to, in response to a trigger operation performed on the target service program, send a video acquisition request to the service server corresponding to the application client; the video acquisition request instructs the service server to select K video data items from a video recommendation system to form a video recommendation list, K being a positive integer; and
a recommendation list receiving module, configured to receive the video recommendation list returned by the service server and output the K video data items in the list to a video recommendation interface of the target service program in the application client.
Wherein the K video data items include the target video data;
the detail page acquisition module includes:
a detail page output unit, configured to, in response to a trigger operation performed on the target video data in a list display interface, output the video detail page corresponding to the target video data; the video detail page includes a task-initiating control through which the first-type user initiates an interaction task;
an authentication request sending unit, configured to, in response to the trigger operation performed by the first-type user on the task-initiating control in the video detail page, send an authentication request to the service server; the authentication request instructs the service server to acquire an order configuration page associated with the service attribute of the target video data once the first-type user's task-initiating authority is successfully authenticated;
an order configuration unit, configured to receive the order configuration page returned by the service server, and, in response to a trigger operation performed on the order configuration page, output order configuration information associated with the service attribute of the target video data on the page and request from the service server the data transfer page associated with the order configuration information; and
a transfer interface output unit, configured to receive and output the data transfer page returned by the service server.
Wherein the area rotation module includes:
a transfer operation acquisition unit, configured to acquire the data transfer operation performed on the service auxiliary information in the data transfer page, receive a data transfer voucher returned by the service server once the data transfer operation is confirmed complete, and output a transfer-success prompt on the data transfer page; the data transfer voucher indicates that the first-type user has the authority to initiate the interaction task;
a detail page return unit, configured to, in response to a return operation performed on the data transfer page carrying the transfer-success prompt, return the display interface of the application client from the data transfer page to the video detail page;
a rotation area determining unit, configured to determine a first area and a second area within the browsing auxiliary area of the video detail page, take the direction perpendicular to the plane of the browsing auxiliary area as the rotation direction, rotate the first and second areas in that direction, and treat the browsing auxiliary area containing the rotated first and second areas as the rotated browsing auxiliary area; the rotated browsing auxiliary area has the same size as before rotation; and
a seating area unit, configured to take the display interface containing the rotated browsing auxiliary area as the virtual interaction interface for initiating the interaction task, determine the rotated second area as the virtual seating area of the virtual interaction interface, and output the first user image data of the first-type user in the virtual seating area; the number of seats in the virtual seating area is determined by the total number of members of the public service broadcast group in which the first-type user is located.
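Since the seat count is determined by the group's total membership, the seating area unit only needs the group size to lay out its seats. A minimal sketch of that derivation follows; the function name, the row-based layout, and the default of five seats per row are assumptions for illustration, not details claimed in the patent.

```python
import math

def seating_layout(group_total: int, per_row: int = 5) -> list:
    """Hypothetical helper: the number of seats in the virtual seating
    area equals the total membership of the public service broadcast
    group, here arranged into display rows of at most `per_row` seats."""
    rows = math.ceil(group_total / per_row)
    # Each entry is the number of seats rendered in that row.
    return [min(per_row, group_total - r * per_row) for r in range(rows)]

print(seating_layout(12))  # prints: [5, 5, 2]
```

The sum of the row sizes always equals the group total, so every group member has exactly one seat available.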
Wherein the rotated first area is used to display video auxiliary information associated with the target video data, the video auxiliary information being either picture information or short-video information of the target video data;
the area rotation module further includes:
an auxiliary information switching unit, configured to, when the video auxiliary information displayed in the rotated first area is picture information, respond to a trigger operation performed on the short-video play control corresponding to the short-video information by playing the short-video information in the rotated first area.
Wherein the invitation information sending module includes:
an invitation information generating unit, configured to, in response to a trigger operation performed on the invitation control in the virtual interaction interface, acquire the user name of the first-type user and the video name of the target video data, and generate, from the user name and video name, interaction invitation information associated with the service attribute of the target video data;
an invitation information sending unit, configured to send the interaction invitation information associated with the target video data to the service server, so that the service server broadcasts the corresponding interaction prompt in the public service broadcast group in which the first-type user is located; and
a prompt output unit, configured to receive the interaction prompt broadcast by the service server and output it to the public service broadcast group in which the first-type user is located.
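The invitation information is built from just two inputs, the inviter's user name and the video name, and its wording can vary with the service attribute. The sketch below shows one way that could look; the message templates, attribute keys, and payload shape are all hypothetical, not taken from the patent.

```python
def build_invitation(user_name: str, video_name: str, service_attribute: str) -> dict:
    """Hypothetical sketch of the interaction invitation payload:
    derived from the inviter's user name, the video name, and the
    service attribute of the target video data."""
    templates = {
        # "single" and "whole_venue" are assumed labels for the two
        # service types the document later calls first and second task.
        "single": f'{user_name} invites you to watch "{video_name}" together',
        "whole_venue": f'{user_name} has booked the whole venue for "{video_name}"',
    }
    return {"type": service_attribute, "text": templates[service_attribute]}

msg = build_invitation("user_1", "Program X", "single")
print(msg["text"])
```

The server would then broadcast a prompt derived from this payload to every member of the public service broadcast group.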
Wherein the image data receiving module includes:
a response information receiving unit, configured to, when a second-type user in the public service broadcast group obtains the interaction invitation information via the interaction prompt, receive the interaction response information returned by that user's terminal; the interaction response information includes the second user image data of the second-type user and the interaction sequence number under which the second-type user joined the interaction task indicated by the invitation; and
a user image output unit, configured to determine the virtual seat corresponding to the interaction sequence number in the virtual seating area and output the second user image data to that virtual seat.
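Because each response carries an interaction sequence number, placing a responder's image is a direct index into the seating area, so users appear in the order they joined. A hedged sketch of that placement step (function and variable names are assumptions):

```python
def place_user_image(seating: list, interaction_seq: int, image_data: bytes) -> list:
    """Hypothetical placement step: the interaction sequence number from
    the response directly selects a virtual seat for the user's image."""
    if not (0 <= interaction_seq < len(seating)):
        raise ValueError("no such seat in the virtual seating area")
    if seating[interaction_seq] is not None:
        raise ValueError("seat already occupied")
    seating[interaction_seq] = image_data
    return seating

# Seat 0 holds the initiator's image; a responder with sequence number 2
# lands in seat 2, leaving earlier-numbered seats for earlier responders.
seats = [b"img_user1", None, None, None]
place_user_image(seats, 2, b"img_user4")
```

Keeping the sequence number server-assigned would avoid two responders racing for the same seat.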
Wherein the interaction response information includes the user name of the second-type user; the rotated browsing auxiliary area includes a third area used to display the task state of the interaction task corresponding to the video service data;
the image data receiving module further includes:
a task state updating unit, configured to, when the user name of the second-type user is output to the virtual interaction interface, receive the number of interacting users counted by the service server from the interaction response information and update the task state of the interaction task in the third area accordingly.
Wherein, if the service attribute is a first service type, the interaction task corresponding to the target video data is a first task;
the video data playing module includes:
a play room creating unit, configured to receive, when the first-type user completes the data transfer operation, a first virtual play room that the service server creates for the public service broadcast group based on the first service type;
a first interface output unit, configured to, within the task invitation duration corresponding to the first task, respond to a trigger operation performed on the sharing start control associated with the first virtual play room by outputting the video sharing interface corresponding to the play room; and
a first playing unit, configured to invoke the video player in the video sharing interface when the task invitation duration reaches the invitation duration threshold, and play the target video data for the first-type and second-type users in the virtual seating area.
Wherein, if the service attribute is a second service type, the interaction task corresponding to the target video data is a second task;
the video data playing module includes:
a task state detection unit, configured to receive, when the task state corresponding to the second task is detected to be a completion state within the task invitation duration of the second task, a second virtual play room that the service server creates for the public service broadcast group based on the second service type; the completion state means that the number of interacting users counted by the service server as having responded to the second task has reached the interacting-user threshold, which is less than or equal to the total number of users in the public service broadcast group;
a second interface output unit, configured to, within the task invitation duration, respond to a trigger operation performed on the sharing start control associated with the second virtual play room by outputting the video sharing interface corresponding to that play room; and
a second playing unit, configured to invoke the video player in the video sharing interface when the task invitation duration reaches the invitation duration threshold, and play the target video data for the first-type and second-type users in the virtual seating area.
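The second task completes only if enough group members respond before the invitation window elapses. That condition can be sketched as a pure function of the responder count, the threshold, and the elapsed time; the function name, the state labels, and the example durations are illustrative assumptions.

```python
def second_task_state(responders: int, threshold: int, started: float,
                      invite_window: float, now: float) -> str:
    """Hypothetical check for the second task described above: the task
    reaches its completion state only if the number of responding users
    meets the interacting-user threshold within the invitation window."""
    if now - started > invite_window:
        return "expired"          # invitation duration exceeded
    return "complete" if responders >= threshold else "pending"

# With a 600-second window: 3 of 3 required responders at t=120s completes
# the task, which is the condition for creating the second virtual play room.
print(second_task_state(3, 3, 0.0, 600.0, 120.0))  # prints: complete
```

Since the threshold is at most the group's total membership, the task is always satisfiable if every member responds in time.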
An embodiment of the present application provides a video data processing method, where the method includes:
receiving an authentication request initiated by a first user terminal, authenticating the first-type user corresponding to the first user terminal based on the request, and returning a data transfer page to the first user terminal; the authentication request is obtained after the first-type user performs a trigger operation on a task-initiating control in a video detail page; the video detail page includes a browsing auxiliary area associated with the target video data;
receiving a data transfer request sent by the first user terminal, and returning a data transfer page to the first user terminal based on the data transfer request; the data transfer request is obtained by the first user terminal in response to a trigger operation performed on the target video data in a list display interface;
receiving interaction invitation information sent by the first user terminal, and outputting it to the public service broadcast group in which the first-type user is located; the public service broadcast group includes second-type users, i.e., users in the group who respond to the interaction invitation information; the interaction invitation information is obtained by the first user terminal in response to a trigger operation performed on an invitation control in a virtual interaction interface; the virtual interaction interface is determined after the first user terminal rotates the browsing auxiliary area in the video detail page; the rotated browsing auxiliary area includes a virtual seating area used to display first user image data of the first-type user;
when a second-type user responds to the interaction invitation information, receiving second user image data of that user sent by the corresponding second user terminal, and sending it to the first user terminal so that the first user terminal outputs it to the virtual seating area; and
receiving a virtual-play-room creation instruction initiated by the first user terminal, and creating the virtual play room corresponding to the public service broadcast group; the virtual play room is used to play the target video data for the first-type and second-type users in the virtual seating area.
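On the server side, the steps above reduce to three responsibilities: broadcast the invitation to the group, relay responder image data to the initiator, and create the play room on instruction. The class below sketches those responsibilities; its name, methods, and data shapes are hypothetical and stand in for whatever the service server actually implements.

```python
class PartyServer:
    """Hedged sketch of the server-side role described above."""

    def __init__(self, group_members):
        self.group = set(group_members)   # the public service broadcast group
        self.rooms = {}                   # virtual play rooms by group id

    def broadcast_invitation(self, inviter: str, invitation: str) -> dict:
        # Every group member except the inviter receives the
        # interaction prompt derived from the invitation information.
        return {m: invitation for m in self.group if m != inviter}

    def create_play_room(self, group_id: str, members: list) -> str:
        # Triggered by the first user terminal's creation instruction;
        # the room later drives synchronized playback for its members.
        self.rooms[group_id] = {"members": list(members), "playing": False}
        return group_id

srv = PartyServer(["u1", "u2", "u3"])
sent = srv.broadcast_invitation("u1", "watch Program X?")
room = srv.create_play_room("g1", ["u1", "u3"])
```

Relaying image data (the fourth step) would follow the same pattern: the server forwards each responder's payload to the initiator's terminal rather than storing it.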
An aspect of an embodiment of the present application provides a video data processing apparatus, including:
an authentication request receiving module, configured to receive an authentication request initiated by a first user terminal, authenticate the first-type user corresponding to the terminal based on the request, and then return a data transfer page to the first user terminal; the authentication request is obtained after the first-type user performs a trigger operation on a task-initiating control in a video detail page; the video detail page includes a browsing auxiliary area associated with the target video data;
a transfer request receiving module, configured to receive a data transfer request sent by the first user terminal and return a data transfer page based on it; the data transfer request is obtained by the first user terminal in response to a trigger operation performed on the target video data in a list display interface;
an invitation information pushing module, configured to receive the interaction invitation information sent by the first user terminal and push it to the public service broadcast group in which the first-type user is located; the public service broadcast group includes second-type users, i.e., users in the group who respond to the interaction invitation information; the interaction invitation information is obtained by the first user terminal in response to a trigger operation performed on an invitation control in a virtual interaction interface; the virtual interaction interface is determined after the first user terminal rotates the browsing auxiliary area in the video detail page; the rotated browsing auxiliary area includes a virtual seating area used to display first user image data of the first-type user;
an image data forwarding module, configured to, when a second-type user responds to the interaction invitation information, receive second user image data of that user sent by the corresponding second user terminal and forward it to the first user terminal so that the first user terminal outputs it to the virtual seating area; and
a virtual play room creating module, configured to receive a virtual-play-room creation instruction initiated by the first user terminal and create the virtual play room corresponding to the public service broadcast group; the virtual play room is used to play the target video data for the first-type and second-type users in the virtual seating area.
An aspect of an embodiment of the present application provides a computer device, where the computer device includes: a processor and a memory;
the processor is coupled to the memory, wherein the memory is configured to store program code and the processor is configured to call the program code to perform a method according to an aspect of an embodiment of the present application.
An aspect of the embodiments of the present application provides a computer storage medium, in which a computer program is stored, where the computer program includes program instructions, and when a processor executes the program instructions, the method in the aspect of the embodiments of the present application is performed.
An aspect of an embodiment of the present application provides a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method in one aspect of the embodiments of the present application.
In the embodiments of the present application, when the video detail page corresponding to target video data is displayed in the application client, a computer device can receive a trigger operation performed by a first-type user on a task-initiating control (for example, a single-order control or a whole-venue booking control) in the page and output the data transfer page corresponding to the service attribute indicated by that control. It should be understood that the video detail page includes a browsing auxiliary area associated with the target video data; for example, the browsing auxiliary area may be the specific area of the video detail interface that undergoes 3D rotation. Further, after detecting that the data transfer operation (i.e., completing the order or booking the whole venue) has been performed on the data transfer page, the computer device may return the display interface of the application client from the data transfer page to the video detail page, rotate the browsing auxiliary area in the video detail page, take the display interface containing the rotated area as a virtual interaction interface, and output first user image data of the first-type user in the virtual seating area of that interface. The virtual interaction interface may include an invitation control for inviting friends in a public service broadcast group of the application client.
Further, the computer device may respond to a trigger operation executed for the invitation control in the virtual interactive interface by outputting interaction invitation information associated with the target video data to the public service broadcast group in which the first type user (for example, user 1) is located. The public service broadcast group may include second type users (e.g., user 2 and user 3), where a second type user is a user in the public service broadcast group who participates in responding to the interaction invitation information. It should be understood that the virtual interactive interface may be a display interface obtained by performing 3D rotation processing on a specific area in the video detail page, so that the virtual seating area in the virtual interactive interface can display not only the user image data of the inviting user who initiates the invitation, but also the user image data of each invited user who has accepted the invitation. In this way, when another invited user (e.g., user 4) obtains the interaction invitation information, that user can view, in the virtual seating area of the virtual interactive interface, the user image data of each user currently intending to watch the target video data (e.g., program X), which helps the user (e.g., user 4) decide whether to join in watching video program X together; if the user (e.g., user 4) also chooses to watch video program X together, user 4 can be regarded as a second type user.
At this time, the computer device may receive second user image data of the second type user sent by the user terminal corresponding to the second type user (e.g., user 4), and then output the second user image data to the virtual seating area. This means that as the number of invited users increases, the virtual seating area in the virtual interactive interface can present different user image data, thereby providing rich interface display effects. In addition, the computer device may further play the target video data for the first type user and the second type user in the virtual seating area in a virtual play room corresponding to the public service broadcast group. It should be understood that, in the embodiment of the present application, the virtual play room can provide an online synchronous viewing function for multiple people in the same public service broadcast group, and play control of online synchronous viewing for multiple people can be implemented in the virtual play room.
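The seating-area update described above can be sketched as follows. This is an illustrative sketch only; the class and method names (`VirtualSeatingArea`, `add_viewer`, `render`) are assumptions for illustration and are not taken from this application.

```python
# Illustrative sketch: as invited (second-type) users respond, their image
# data is appended to the virtual seating area and the interface re-renders
# with one seat per user, which is how the display effect grows richer.

class VirtualSeatingArea:
    def __init__(self, inviter_name, inviter_image):
        # The inviting (first-type) user always occupies the first seat.
        self.seats = [(inviter_name, inviter_image)]

    def add_viewer(self, user_name, user_image):
        # Called when a second-type user's image data arrives from its terminal.
        self.seats.append((user_name, user_image))

    def render(self):
        # Returns the seat labels shown in the virtual interactive interface.
        return [name for name, _ in self.seats]

area = VirtualSeatingArea("user 1", "img_1.png")
area.add_viewer("user 4", "img_4.png")
```

After the second call, `render()` lists both the inviter and the newly joined viewer in seating order.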
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a network architecture according to an embodiment of the present application;
FIG. 2 is a scene interaction diagram for data interaction according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a video data processing method according to an embodiment of the present application;
fig. 4 is a schematic view of a scene for acquiring target video data according to an embodiment of the present application;
fig. 5 is a schematic view of a scenario for acquiring a data transfer page according to an embodiment of the present application;
fig. 6 is a schematic view of a scenario of an order configuration page provided in an embodiment of the present application;
fig. 7 is a scene schematic diagram for acquiring a virtual interactive interface according to an embodiment of the present disclosure;
fig. 8 is a schematic view of a scene for adding second user image data to a virtual seating area according to an embodiment of the present application;
fig. 9 is a schematic diagram of a video data processing method according to an embodiment of the present application;
fig. 10 is a technical architecture diagram of a front end interacting with a back end in data according to an embodiment of the present application;
FIG. 11 is a diagram illustrating a qualification review scenario provided by an embodiment of the present application;
fig. 12 is a schematic diagram of a scenario for performing state transition according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure;
fig. 17 is a schematic diagram of a video data processing system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Please refer to fig. 1, which is a schematic structural diagram of a network architecture according to an embodiment of the present application. As shown in fig. 1, the network architecture may include a service server 1000, a first user terminal cluster, and a second user terminal cluster. In the network media data system corresponding to the network architecture, the first user terminal cluster and the second user terminal cluster may be collectively referred to as a user terminal cluster having an association relationship with the service server 1000. The network media data system corresponding to the network architecture may include a social networking system, a video playing system, and other systems with audio/video processing functions.
It is to be understood that the first user terminal cluster may include one or more user terminals, and the number of user terminals in the first user terminal cluster is not limited herein. As shown in fig. 1, the user terminals in the first user terminal cluster may specifically include user terminals 3000a and 3000b. As shown in fig. 1, the user terminals 3000a and 3000b may each be in network connection with the service server 1000, so that each first user terminal in the first user terminal cluster may perform data interaction with the service server 1000 through the network connection.
Similarly, the second user terminal cluster may include one or more user terminals, and the number of user terminals in the second user terminal cluster is not limited herein. As shown in fig. 1, the user terminals in the second user terminal cluster may specifically include user terminals 2000a, ..., 2000n. As shown in fig. 1, the user terminals 2000a, ..., 2000n may each be in network connection with the service server 1000, so that each user terminal in the second user terminal cluster may perform data interaction with the service server 1000 through the network connection.
The service server 1000 shown in fig. 1 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
In the network media data system described in this embodiment of the present application, in order to distinguish the user terminals in the two user terminal clusters (i.e., the first user terminal cluster and the second user terminal cluster), the user terminals in the first user terminal cluster may be collectively referred to as first user terminals, and the first type of user who uses a first user terminal to initiate an interaction task (e.g., a task with a group property, such as a group order or a venue booking) may be collectively referred to as an interaction initiator (which may also be referred to as an inviting user); in addition, the user terminals in the second user terminal cluster may be collectively referred to as second user terminals, and the second type of user who uses a second user terminal to respond to the interaction task may be collectively referred to as an interaction responder (which may also be referred to as an invited user). It is understood that the first user terminal and the second user terminal herein may each include a smart terminal carrying video data processing functions (e.g., a video data playing function), such as a smart phone, tablet computer, notebook computer, desktop computer, wearable device, or smart home device (e.g., a smart television).
For convenience of understanding, in the embodiment of the present application, one user terminal may be selected as the first user terminal in the first user terminal cluster shown in fig. 1, for example, the user terminal 3000a shown in fig. 1 may be used as the first user terminal in the embodiment of the present application, and an application client having a video data processing function (for example, a video data loading and playing function) may be integrated in the first user terminal. The application client may specifically include a social client, a multimedia client (e.g., a video client), an entertainment client (e.g., a song requesting client), an education client, and other clients having a frame sequence (e.g., a frame animation sequence) loading and playing function. The first user terminal (e.g., user terminal 3000a) may be a user terminal used by the first type of user. For convenience of understanding, the video data (e.g., video programs or movies, etc.) selected by the first type of user in the application client to fit their interests may be collectively referred to as target video data in the embodiments of the present application.
In the embodiment of the present application, a user terminal may be selected as the second user terminal in the second user terminal cluster shown in fig. 1; for example, the user terminal 2000a shown in fig. 1 may be used as the second user terminal, and an application client having a video data processing function (for example, a video data loading and playing function) may be integrated in the second user terminal. The application client here may specifically include a social client, a multimedia client (e.g., a video client), an entertainment client (e.g., a song-ordering client), an education client, and the like having a frame sequence (e.g., a frame animation sequence) loading and playing function. The second user terminal (e.g., user terminal 2000a) may be a user terminal used by the second type of user. Therefore, when the second type user receives the interaction invitation information sent by the first type user through the first user terminal, the second type user can participate in responding to the interaction invitation information, so that the second type user can subsequently watch, in the same virtual play room, the video data of common interest with the first type user.
It can be understood that the service scenarios applicable to the network media data system may specifically include: a video program on-demand scene, an online cinema viewing scene, an online concert song-listening scene, an online classroom lesson-listening scene, an online recording-studio singing scene, and the like; the service scenes applicable to the network media data system are not listed here one by one.
For example, in a video program on-demand scene, the target video data may be a video program selected by the first type user in a video recommendation interface (i.e., a video program recommendation list) that fits the interest of the first type user; for example, the video program may be a television program, an entertainment program, or the like recorded with the participation of a public figure of interest selected by the first type user in the video recommendation list, where the public figure may be a movie star, an entertainment star, or the like. For another example, in an online cinema viewing scene, the target video data may be a movie selected by the first type user in the video recommendation interface (i.e., the movie recommendation list) that fits the interest of the first type user; for example, the movie may be a movie recorded with the participation of a public figure of interest selected by the first type user in the video recommendation list.
For convenience of understanding, the embodiment of the present application takes an online cinema viewing scene as an example to illustrate how to create a collective atmosphere for multiple people viewing a video synchronously together and how to perform play control for online synchronous viewing by multiple people. Please refer to fig. 2, which is a scene interaction diagram for performing data interaction according to an embodiment of the present application. The server 20a shown in fig. 2 may be the service server 1000 in the embodiment corresponding to fig. 1, and the terminal 10a shown in fig. 2 may be the user terminal used by the user A1 shown in fig. 2. When the user A1 orders the target video data (for example, the video data whose video name may be "XXX in rumor" shown in fig. 2) by way of group purchase (for example, a group order or a venue booking) in the online cinema viewing scene, and the payment succeeds (that is, the user A1 completes the data transfer operation), the display interface of the application client (for example, a social client) running in the terminal 10a may be returned from the payment page (that is, the data transfer page) to the video detail page of the target video data, and a rotation operation (for example, a 3D virtual rotation operation) may then be performed on a specific area in the video detail page (that is, the browsing auxiliary area in the video detail page) to obtain the virtual interactive interface 100a shown in fig. 2.
It should be understood that, in the process in which the user A1 orders the target video data shown in fig. 2 by way of group purchase (e.g., a group order or a venue booking) in the online cinema viewing scene, the group-purchase qualification of the user A1 needs to be reviewed by the server 20a of fig. 2. For example: 1. the server 20a may determine whether the user A1 is in a group (e.g., whether the user A1 is in a group created by the user A1); 2. if the server 20a determines that the user A1 is indeed in the current group, it may further determine whether a group session associated with the target video data already exists in the current group. It is to be appreciated that when the server 20a determines that no other group session associated with the target video data exists in the current group, the server 20a may determine that the user A1 is currently qualified to initiate a group session in the group, and may return an order configuration page for the target video data to the terminal 10a used by the user A1. At this time, the user A1 may determine on the order configuration page whether an order needs to be placed; if so, the server 20a may return a payment page (i.e., the data transfer page described above) to the terminal 10a. On the contrary, when the server 20a determines that a group session associated with the target video data already exists in the current group, the server 20a may determine that the user A1 is not currently qualified to initiate a group session in the group, and may issue to the terminal 10a the reason why the user A1 is not qualified (for example, a group session initiated by another user for the target video data already exists in the group), so as to prompt the user A1.
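The two-step qualification review described above can be sketched as follows. This is a hedged illustration only; the function name and the dictionary fields (`members`, `group_sessions`, `video_id`) are assumptions, not structures defined by this application.

```python
# Illustrative sketch of server 20a's qualification review:
# (1) is the requesting user a member of the group, and
# (2) does a group session for the same video already exist in the group?

def review_group_qualification(user_id, group, target_video_id):
    if user_id not in group["members"]:
        return False, "user is not in the group"
    for session in group["group_sessions"]:
        if session["video_id"] == target_video_id:
            return False, "a group session for this video already exists"
    return True, "qualified; return the order configuration page"

group = {"members": {"A1", "B1"}, "group_sessions": []}
ok, reason = review_group_qualification("A1", group, "v_100")
```

On success the server would return the order configuration page; on failure it would return the reason string so the terminal can prompt the user.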
For convenience of understanding, the embodiment of the present application takes the example that the user A1 orders the target video data shown in fig. 2 through a group order, to illustrate that when the current service is a group-order service, data interaction can be performed between the user terminal 10a and the user terminal 30a through the server 20a shown in fig. 2. For example, as shown in fig. 2, after the user A1 performs a trigger operation on the invitation control in the virtual interactive interface 100a shown in fig. 2, the terminal 10a shown in fig. 2 may output the interaction invitation information associated with the target video data, through the server 20a shown in fig. 2, to the public service broadcast group (e.g., group 1) in which the first type user is located. It should be understood that, in the embodiment of the present application, the users initiating the invitation request in group 1 (i.e., the public service broadcast group), such as the user A1 shown in fig. 2, may be collectively referred to as first type users, and the user terminals used by the first type users (e.g., the terminal 10a shown in fig. 2) may be collectively referred to as first user terminals. Similarly, the users responding to the invitation request in group 1 (e.g., the user B1 shown in fig. 2) may be collectively referred to as second type users, and the user terminals used by the second type users (e.g., the terminal 30a shown in fig. 2) may be collectively referred to as second user terminals.
As shown in fig. 2, when the user A1 performs a trigger operation on the invitation control for inviting group friends shown in fig. 2, the user name of the first type user (i.e., user A1) and the video name of the target video data (i.e., "XXX in rumor" shown in fig. 2) may be obtained, and interaction invitation information associated with the service attribute of the target video data may then be generated based on the user name of the user A1 and the video name (e.g., the interaction invitation information may be "user A1 invites you to watch 'XXX in rumor' together").
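Assembling the invitation text from the two fields above is straightforward; the following is a minimal sketch, with the function name and message wording chosen for illustration only.

```python
# Minimal sketch: build the interaction invitation information from the
# inviter's user name and the target video's name, as described above.

def build_invitation(user_name, video_name):
    return f'{user_name} invites you to watch "{video_name}" together'

msg = build_invitation("user A1", "XXX in rumor")
```

The resulting string is what the server would push into the public service broadcast group for other members to click.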
Further, as shown in fig. 2, after receiving the interaction invitation information, the server 20a may push the interaction invitation information to the group 1 where the user A1 is currently located, so that other users in the group 1 can view, on their corresponding user terminals, the interaction invitation information pushed in the current group (e.g., the group 1). At this time, when one of these users (for example, the user B1 shown in fig. 2) clicks the interaction invitation information, a detail page of the target video data may be displayed in the terminal 30a shown in fig. 2; for example, the detail page in the terminal 30a may be the virtual interactive interface 200a shown in fig. 2. As shown in fig. 2, the user B1 may view the user image data displayed in area 30d in the virtual interactive interface 200a.
It should be understood that, in the case that the interactive task initiated by the user A1 is a two-person group task, the user B1 may view, in the area 30d of the virtual interactive interface 200a shown in fig. 2, the user image data of the user A1 who initiated the invitation; for example, the user image data of the user A1 seen by the user B1 may be the user image data 10b shown in fig. 2. Thus, after performing a trigger operation on the invitation response control shown in fig. 2, the user B1 may participate in the interactive task (i.e., the two-person group task) initiated by the user A1, and the virtual interactive interface 200a may be switched to a new data transfer page in the terminal 30a, so that the user B1 may perform a data transfer operation (i.e., a payment operation) on the new data transfer page. After the user B1 completes the payment operation, the data transfer page is returned to the virtual interactive interface 200a shown in fig. 2, and the user image data of the user B1 (i.e., the second user image data) can be output to the area 30d in the virtual interactive interface 200a for display. It should be understood that after the user B1 performs the data transfer operation with the server 20a through the terminal 30a shown in fig. 2, the terminal 30a corresponding to the user B1 may push interactive response information to the terminal 10a shown in fig. 2 through the server 20a, so that the terminal 10a may add the user image data of the user B1 (i.e., the second user image data) carried in the interactive response information to the area 10d in the virtual interactive interface 100a shown in fig. 2, thereby achieving the perception effect of online multi-person synchronous viewing.
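The response-propagation step above can be sketched as follows. The function name, the callback-style `server_push` parameter, and the payload fields are all hypothetical, chosen only to illustrate the relay of response information through the server.

```python
# Illustrative sketch: once user B1's payment completes, the responder's
# terminal builds interactive response information (carrying B1's image data)
# and hands it to the server, which relays it to the inviter's terminal 10a.

def on_payment_complete(server_push, responder, image_data):
    response_info = {"user": responder, "image": image_data}
    server_push(response_info)  # server 20a relays this to terminal 10a
    return response_info

received = []  # stands in for terminal 10a's inbox of relayed messages
info = on_payment_complete(received.append, "user B1", "img_b1.png")
```

Terminal 10a would then take the `image` field from the relayed message and add it to area 10d of its virtual interactive interface.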
In other words, with the embodiment of the present application, a synchronous viewing atmosphere can be created across different terminals by simulating, in the online cinema viewing scene, virtual cinema activities such as buying tickets, entering the theater, being seated, and watching the movie. At this moment, any two users who want to watch the target video data can learn from the virtual interactive interface which other people are accompanying them in watching online, which provides a brand-new online companionship mode in the online cinema viewing scene.
Further, it is understood that when the user A1 and the user B1 shown in fig. 2 complete the interactive task (e.g., the two-person group task), the server 20a shown in fig. 2 may allocate a virtual room to the two users in the group 1 (i.e., the user A1 and the user B1); in the online cinema viewing scene, the virtual rooms allocated by the server 20a based on the group 1 may be collectively referred to as virtual play rooms. It should be understood that, in the embodiment of the present application, the user A1 who initiated the interactive task may start the virtual play room (e.g., virtual play room 1) within the task invitation duration (e.g., 4 hours) of the interactive task by triggering a sharing start control (e.g., a control corresponding to "go to watch together") in the virtual interactive interface of the terminal 10a, so as to output, on the terminal 10a, a video sharing interface corresponding to virtual play room 1. It is understood that when the task invitation duration reaches the invitation duration threshold (e.g., 4 hours) and the virtual play room is started, a video player may be invoked in the video sharing interface corresponding to the virtual play room to play the target video data for the first type user and the second type user in the virtual seating area (e.g., the area 10d shown in fig. 2) shown in fig. 2. It can be understood that, by introducing the virtual play room, the embodiment of the present application can implement play control over video data played on different devices in the online cinema viewing scene, and can thereby ensure that the play progress of the video data seen by each user in the virtual play room is consistent; that is, the embodiment of the present application can implement play control of online multi-user synchronous viewing.
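One simple way to realize the consistent-progress property described above is to keep a single authoritative playback clock in the virtual play room and have every terminal query it instead of trusting its local player. The sketch below illustrates that idea; the class and method names are assumptions, not taken from this application.

```python
import time

# Illustrative sketch of play control in a virtual play room: the room holds
# one authoritative start time, so every member's player computes the same
# playback position and stays in sync across devices.

class VirtualPlayRoom:
    def __init__(self, video_id, members):
        self.video_id = video_id
        self.members = list(members)
        self.started_at = None  # set when the sharing start control is hit

    def start(self, now=None):
        self.started_at = now if now is not None else time.time()

    def position(self, now=None):
        # Each terminal asks the room (not its local player) for the current
        # position, which keeps the viewing progress of all users consistent.
        if self.started_at is None:
            return 0.0
        now = now if now is not None else time.time()
        return now - self.started_at

room = VirtualPlayRoom("v_100", ["A1", "B1"])
room.start(now=1000.0)
```

Injecting `now` explicitly makes the sketch deterministic; a real implementation would also have to handle pause/seek commands and clock skew between terminals.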
In addition, by introducing the virtual seating area, the inviting user and the invited users can be helped to view, in the virtual seating area, the user image data of different users having the same viewing interest; then, as the number of invited users increases, the user image data in the virtual seating area is continuously updated, thereby enriching the display effect of the interface.
For the specific implementation in which the first user terminal returns from the data transfer page to the video detail page, performs 3D rotation on a specific area in the video detail page (i.e., the browsing auxiliary area to which the area 10c and the area 10d of fig. 2 belong) to obtain the virtual interactive interface, and sends the interaction invitation information based on the virtual interactive interface, reference may be made to the following embodiments corresponding to fig. 3 to fig. 12.
Further, please refer to fig. 3, where fig. 3 is a schematic flowchart of a video data processing method according to an embodiment of the present application. As shown in fig. 3, the method may be executed by the user terminal corresponding to the first type user (i.e., the first user terminal, which may be, for example, the terminal 10a shown in fig. 2), by a service server (e.g., the server 20a shown in fig. 2), or by the user terminal and the service server together. For convenience of understanding, this embodiment is described by taking the method as being executed by the user terminal corresponding to the first type user (i.e., the first user terminal), so as to describe the specific process of outputting the virtual interactive interface in the first user terminal and implementing online multi-user synchronous viewing based on the virtual interactive interface. The method includes at least the following steps S101-S105:
step S101, acquiring a video detail page corresponding to target video data in an application client, responding to a trigger operation executed by a first type user aiming at a task initiating control in the video detail page, and outputting a data transfer page corresponding to a service attribute indicated by the task initiating control;
specifically, when the first type user selects a service program related to the video (i.e. a target service program, for example, the target service program may be an embedded sub-program integrated in the social client with "look together") in the session interface corresponding to the current social group, a video acquisition request may be sent to a service server (e.g., a backend server corresponding to the target service program, e.g., the backend server 1), so that the service server (e.g., the backend server 1) may filter K video data from the video data recommendation system according to the video acquisition request, and returns a video recommendation list consisting of the K video data to the first user terminal, so that the display interfaces for displaying the video recommendation list in the target service program of the application client are collectively called video recommendation interfaces. In this way, the first type user using the first user terminal can select certain video data which is of interest to the first type user from the video recommendation list displayed on the video recommendation interface. It can be understood that, in the embodiment of the present application, this video data selected by the first type of user may be collectively referred to as target video data, and then, a display interface for displaying detail information of the target video data may be collectively referred to as a video detail page corresponding to the target video data. Further, the first user terminal may respond to a trigger operation executed by the first type user for the task initiation control in the video detail page, so as to output a data transfer page corresponding to the service attribute indicated by the task initiation control, so as to further execute the following step S102 in the data transfer page.
For ease of understanding, please refer to fig. 4, and fig. 4 is a schematic view of a scene for acquiring target video data according to an embodiment of the present application. The session interface 300a, the video recommendation interface 300b, and the video details page 300c shown in fig. 4 may be display interfaces presented in the application client (e.g., social client) at different times. The session interface 300a may be a session interface of a group (i.e., group a shown in fig. 4) displayed in the first user terminal.
As shown in fig. 4, the group A in the session interface 300a may include the N users shown in fig. 4, where N is the total number of people in group A and may be a positive integer greater than 1; that is, group A may include at least two users. For example, as shown in fig. 4, group A may include at least the two users user 1 and user 2 shown in fig. 4. It can be understood that, when there is currently no group task for certain video data (e.g., video data 1) in group A, either user 1 or user 2 shown in fig. 4 may send a video acquisition request to the service server corresponding to the application client through the embedded sub-program control 40b shown in fig. 4.
For convenience of understanding, the embodiment of the present application takes the first user terminal as the user terminal used by the user 1 shown in fig. 4, to illustrate that the user 1 sends the video acquisition request to the service server (not shown in the figure) through the embedded sub-program control 40b shown in fig. 4. It is to be understood that the sub-program display area shown in fig. 4 may specifically include one or more embedded sub-programs embedded in the application client (e.g., a social client), and the number of embedded sub-programs embedded in the application client is not limited herein. The plurality of embedded sub-programs shown in fig. 4 may specifically include the embedded sub-program 40a, the embedded sub-program 40b, the embedded sub-program 40c, and the embedded sub-program 40d shown in fig. 4. For example, the embedded sub-program 40a shown in fig. 4 may be a service program for listening to songs together, the embedded sub-program 40b may be a service program for watching videos together, the embedded sub-program 40c may be a service program for attending lessons together, and the embedded sub-program 40d may be a service program for singing songs together.
It is to be understood that, in the embodiment of the present application, the embedded sub-programs embedded in the application client may be collectively referred to as service programs, so that when the user 1 in fig. 4 selects a certain service program from these service programs, the embodiment of the present application may refer to the service program selected by the user 1 (e.g., the embedded sub-program 40b shown in fig. 4) as the target service program.
It can be understood that if the embedded sub-program 40a is for listening to songs together, then with the embodiment of the present application, multiple people having the same song-listening interest can be gathered, and synchronous song listening by multiple people in a scene simulating an offline concert can be achieved; similarly, if the embedded sub-program 40b is for watching videos together, multiple people with the same viewing interest can be gathered, and synchronous viewing by multiple people as in an offline cinema can be simulated; similarly, if the embedded sub-program 40c is for attending lessons together, multiple people with the same learning interest can be gathered, and synchronous lesson attendance by multiple people as in an offline classroom can be simulated; similarly, if the embedded sub-program 40d is for singing songs together, multiple people with the same singing interest can be gathered, and synchronous singing by multiple people as in an offline recording studio can be simulated. It should be understood that, in the embodiment of the present application, the online concert song-listening scene, the online cinema viewing scene, the online classroom lesson-listening scene, and the online recording-studio singing scene may be collectively referred to as the above service scenes, and play control of target video data (e.g., concert videos, TV series videos, movie videos, learning videos, short music videos, etc.) watched synchronously by multiple people can then be implemented in these service scenes.
For the convenience of understanding, the embodiment of the present application takes the service scene being an offline cinema viewing scene as an example, that is, as shown in fig. 4, the user 1 may take the embedded sub-program 40b (i.e., watching together) selected in the sub-program display area as the target service program, so as to send a video acquisition request to a backend server corresponding to the target service program. Since the target service program is integrated or embedded in the application client, the video acquisition request acquired by the backend server corresponding to the target service program from the first user terminal may be equivalent to the video acquisition request acquired by the service server corresponding to the application client from the first user terminal. The service server may be the service server 1000 in the embodiment corresponding to fig. 1.
As shown in fig. 4, after the first user terminal (i.e., the user terminal used by the user 1 shown in fig. 4) sends the video acquisition request to the service server corresponding to the application client, the service server may further screen, from a video data recommendation system having an association relationship with the service server, K pieces of video data that fit the interest of the user 1, and may further form the screened K pieces of video data into a video recommendation list to be pushed to the user 1; K here may be a positive integer. As shown in fig. 4, after the first user terminal receives the video recommendation list returned by the service server, the K pieces of video data in the video recommendation list may be output to a video recommendation interface corresponding to the target service program in the application client, where the video recommendation interface may be the video recommendation interface 300b shown in fig. 4.
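The screening of K pieces of video data described above can be sketched as follows. This is a minimal, hypothetical illustration assuming a simple interest-tag overlap score; the actual recommendation system associated with the service server is not specified in the embodiment, and the function and field names are illustrative assumptions.

```python
def screen_recommendations(candidate_videos, user_interest_tags, k):
    """Return the names of at most K candidate videos that fit the user's interest,
    scored here (as an assumption) by the number of shared interest tags."""
    scored = []
    for video in candidate_videos:
        # Score each candidate by how many of the user's interest tags it matches.
        overlap = len(set(video["tags"]) & set(user_interest_tags))
        if overlap > 0:
            scored.append((overlap, video["name"]))
    # Highest-overlap candidates first; keep at most K of them.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [name for _, name in scored[:k]]
```

The returned list then corresponds to the K entries shown in the video recommendation interface 300b.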
It can be understood that, as shown in fig. 4, the video recommendation interface 300b may be used to display the video recommendation list returned by the service server, where the K pieces of video data in the video recommendation list may specifically include the video data 50a, the video data 50b, the video data 50c, and the video data 50d shown in fig. 4. As shown in fig. 4, the user 1 may select video data of interest among the K pieces of video data as target video data, where the target video data may be the video data 50a shown in fig. 4. As shown in fig. 4, the first user terminal may respond to a trigger operation performed by the user 1 (i.e., a first type of user) on the target video data (i.e., the video data 50a shown in fig. 4), so as to send a detail page acquisition request to the service server corresponding to the application client (e.g., the backend server corresponding to the target service program), so that the service server can query the video detail information of the target video data from a multimedia content database and return the queried video detail information to the first user terminal. In this way, when the first user terminal acquires the trigger operation performed on the target video data in the video recommendation interface 300b shown in fig. 4, the received video detail information may be further output to a video detail page corresponding to the target video data, where the video detail page may be the video detail page 300c shown in fig. 4. It is understood that, in the embodiment of the present application, when the first type user (for example, the user 1 shown in fig. 4) enters the video detail page 300c at time T1, the video detail page may present, in a plan view, the current video (i.e., the target video data selected by the user 1) in a waiting-for-opening atmosphere state, so as to attract the users in the public service broadcast group (i.e., the group A shown in fig. 4) where the user 1 is located to initiate an interactive task of order-combining or package-yard purchase in the offline cinema viewing scene.
For example, as shown in fig. 4, the video detail page 300c includes a task initiation control for instructing a first type of user to initiate an interactive task; the task initiation control here may be either of the control 1 and the control 2 shown in fig. 4. The control 1 may be a control for initiating an order-combining task, and the control 2 may be a control for initiating a package-yard task.
For easy understanding, please refer to fig. 5, which is a schematic view of a scenario of acquiring a data transfer page according to an embodiment of the present application. For convenience of understanding, the embodiment of the present application may still take the video detail page 300c in the embodiment corresponding to fig. 4 as an example, that is, the video detail page shown in fig. 5 and the video detail page shown in fig. 4 may be the same display interface of the first user terminal. The task initiation control shown in fig. 5 may be the control 1 shown in fig. 5. When the user 1 performs a trigger operation on the control 1 shown in fig. 5, the first user terminal may respond to the trigger operation performed by the user 1 on the control 1 (i.e., the task initiation control) in the video detail page 300c shown in fig. 5, so as to send an authentication request (for example, an authentication request 1) to the service server corresponding to the application client. The authentication request 1 here may be used to instruct the service server to authenticate the task initiation permission of the first type user; when the authentication succeeds, it may be determined that the user 1 is qualified to initiate an order-combining task, and then a data transfer page corresponding to the service attribute (e.g., a 10-person order-combining attribute) indicated by the control 1 (i.e., the task initiation control) may be returned to the first user terminal. The data transfer page returned by the service server may be the data transfer page 400a shown in fig. 5, and at this time, the first user terminal may further perform the following step S102 based on the service assistance information in the data transfer page 400a (i.e., the service assistance information 1 shown in fig. 5), so as to complete a data transfer operation for the 5th episode of the video named "XXX in Rumor".
The service assistance information 1 may specifically include the number of single-point tickets involved in the order-combining task shown in fig. 5 (i.e., 1), the value of a single-point ticket (for example, 6 yuan or 600 diamonds shown in fig. 5), the account balance in a default payment channel (for example, the remaining resource amount when the virtual asset shown in fig. 5 is virtual diamonds), and, optionally, other payment channels (for example, payment methods such as payment A, payment B, and payment C shown in fig. 5).
Optionally, the user 1 (i.e., the first type user) may also perform a trigger operation on the control 2 shown in fig. 5 in the video detail page 300c in the embodiment corresponding to fig. 5; at this time, the first user terminal may respond to the trigger operation performed by the user 1 on the control 2 (i.e., the task initiation control) in the video detail page 300c shown in fig. 5, so as to send another authentication request (e.g., an authentication request 2) to the service server corresponding to the application client. The authentication request 2 may be used to instruct the service server to authenticate the task initiation permission of the first type user; when the authentication succeeds, it may be determined that the user 1 is qualified to initiate a package-yard task, and then an order configuration page corresponding to the service attribute (for example, a 10-person package-yard attribute) indicated by the control 2 (i.e., the task initiation control) may be returned to the first user terminal.
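The two authentication flows above (control 1 returning a data transfer page directly, control 2 first returning an order configuration page) can be summarized in a small routing sketch; the function name, page identifiers, and the qualification flag are illustrative assumptions rather than details of the embodiment.

```python
def handle_task_initiation(control_id, user_is_qualified):
    """Route an authenticated task-initiation request to the page described
    in the embodiment: control 1 (order-combining) returns a data transfer
    page, control 2 (package-yard) returns an order configuration page."""
    if not user_is_qualified:
        return "authentication_failed"
    if control_id == 1:
        return "data_transfer_page"        # order-combining task
    if control_id == 2:
        return "order_configuration_page"  # package-yard task
    return "unknown_control"
```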
For ease of understanding, please refer to fig. 6, which is a schematic view of a scenario of an order configuration page according to an embodiment of the present application. As shown in fig. 6, when the user 1 shown in fig. 5 performs a trigger operation on the control 2 (i.e., the control for initiating a package-yard task), the first user terminal may receive the order configuration page returned by the service server after authenticating the first type user (i.e., the user 1 shown in fig. 6), where the order configuration page may be the order configuration page 500a shown in fig. 6. As shown in fig. 6, the first type user (i.e., the user 1) can select, in a purchase quantity selection area of the order configuration page 500a, the number of single-point tickets that need to be purchased for the package yard (e.g., 10 tickets shown in fig. 6); since one person corresponds to one single-point ticket, the first type user can select, in the purchase quantity selection area shown in fig. 6, a number of single-point tickets smaller than or equal to the total number of people in the group (e.g., N people shown in fig. 6). Further, after the user 1 determines, in the purchase quantity selection area, the number of single-point tickets required for the package yard, a trigger operation may be further performed on a payment control (i.e., the immediate payment control shown in fig. 6) in the order configuration page shown in fig. 6, so as to send the package-yard information (which may also be referred to as order configuration information) selected by the first type user to the service server through the first user terminal; in this way, when determining that the first type user is qualified for the package yard, the service server obtains a data transfer page corresponding to the service attribute (i.e., the 10-person package-yard attribute) indicated by the control 2 and returns the data transfer page to the first user terminal. As shown in fig. 6, the data transfer page returned by the service server may be the data transfer page 500b shown in fig. 6, and at this time, the first user terminal may further perform the following step S102 based on the service assistance information 2 in the data transfer page 500b, so as to complete the data transfer operation for the 5th episode of the video named "XXX in Rumor" in the data transfer page 500b. The service assistance information 2 may specifically include the number of single-point tickets involved in the package-yard task shown in fig. 6 (for example, 10), the value of the single-point tickets (for example, 60 yuan or 6000 diamonds shown in fig. 6), the account balance in a default payment channel (for example, the remaining asset amount when the virtual asset shown in fig. 6 is virtual diamonds), and, optionally, other payment channels (for example, payment methods such as payment A, payment B, and payment C shown in fig. 6).
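Given the figures in the example (1 ticket costs 6 yuan or 600 diamonds, and 10 tickets cost 60 yuan or 6000 diamonds), the total payable amount shown in the service assistance information can be sketched as below; the 100-diamonds-per-yuan conversion rate is an assumption inferred from those numbers, not a stated parameter of the embodiment.

```python
def order_total(ticket_count, yuan_per_ticket=6, diamonds_per_yuan=100):
    """Compute the total payable amount for an order, in yuan and in
    virtual diamonds, mirroring the example figures (1 -> 6 yuan / 600
    diamonds, 10 -> 60 yuan / 6000 diamonds)."""
    yuan = ticket_count * yuan_per_ticket
    return yuan, yuan * diamonds_per_yuan
```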
It can be understood that the video detail page according to the embodiment of the present application may include a browsing auxiliary area (for example, the browsing auxiliary area in the embodiment corresponding to fig. 4 described above) associated with the target video data. It should be understood that the browsing auxiliary area may specifically include three key areas, namely a first area, a second area, and a third area. The first area may specifically be the area where a projection screen of a movie theater is located in the corresponding service scene (for example, the offline cinema viewing scene of a social client), and similarly, the second area may specifically be the area where the seats of the movie theater are located in the service scene. In addition, the third area may specifically be configured to show the task state, within a task invitation duration (e.g., 4 hours), of an interactive task (e.g., a task with a grouping property such as order-combining or package-yard purchase) initiated by the first type user with respect to the target video data selected by the first type user in the service scene. For example, for an order-combining task, the task state may include an order-combining success state or an order-combining failure state. The task state when the first type user has invited the specified number of interactive users (for example, the above 10 people) within the task invitation duration may be determined as the order-combining success state, and the task state when the first type user has not invited the specified number of interactive users (for example, 10 people) within the task invitation duration may be determined as the order-combining failure state.
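The task-state rule above can be sketched as a small function; the 10-person requirement and the 4-hour duration are taken from the example, while the intermediate "pending" state before the duration expires is an assumption added for completeness.

```python
def task_state(invited_count, required_count=10, elapsed_hours=0,
               invitation_duration_hours=4):
    """Determine the interactive task's state shown in the third area:
    success once the required number of users have joined, failure once
    the task invitation duration elapses without them."""
    if invited_count >= required_count:
        return "success"
    if elapsed_hours >= invitation_duration_hours:
        return "failure"
    return "pending"  # assumed intermediate state within the duration
```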
It can be understood that, with the embodiment provided in the present application, a virtual movie theater can be simulated and restored in the social client through the embedded target service program with the "watch together" function, without users who need to watch a movie having to go to the location of a real movie theater in the real world. Therefore, these users can choose, at any time and any place, whether to start a specific function of the target service program through the application client (e.g., the social client) in the embodiment of the present application; for example, the specific function of the target service program may be a function of "watching together" a certain TV series or a certain movie. This provides better viewing convenience for users who need to watch movies, so as to enhance user stickiness.
And S102, when the data transfer operation is completed on the data transfer page, returning the display interface of the application client from the data transfer page to the video detail page, rotating the browsing auxiliary area in the video detail page, taking the display interface where the rotated browsing auxiliary area is located as a virtual interaction interface, and outputting first user image data of a first type of user in the virtual seating area of the virtual interaction interface.
Specifically, the first user terminal may acquire a data transfer operation performed on the service assistance information in the data transfer page, receive a data transfer credential returned by the service server when it is determined that the data transfer operation is completed, and output transfer success prompt information on the data transfer page; the data transfer credential is used for characterizing that the first type user has the permission to initiate the interactive task. Further, the first user terminal may respond to a return operation performed on the data transfer page where the transfer success prompt information is located, and return the display interface of the application client from the data transfer page to the video detail page. Further, the first user terminal may determine the first area and the second area in the browsing auxiliary area in the video detail page, rotate the first area and the second area based on a rotation direction perpendicular to the plane where the browsing auxiliary area is located, and determine the browsing auxiliary area where the rotated first area and the rotated second area are located as the rotated browsing auxiliary area; the size of the rotated browsing auxiliary area is the same as the size of the browsing auxiliary area before rotation. Further, the first user terminal may take the display interface where the rotated browsing auxiliary area is located as a virtual interaction interface for initiating the interactive task, determine the rotated second area as a virtual seating area on the virtual interaction interface, and output the first user image data of the first type user in the virtual seating area; the number of seats in the virtual seating area is determined by the total number of people in the public service broadcast group where the first type user is located.
It can be understood that, before the first type user selects a certain payment channel to pay in the data transfer page corresponding to the corresponding service (e.g., the data transfer page 400a shown in fig. 5 for the order-combining service in the embodiment shown in fig. 5), the total amount of virtual assets to be paid in the current payment channel (e.g., the 6 yuan or 600 diamonds shown in fig. 5) may be calculated by a billing platform for providing a billing service, and whether a single-point ticket for the target video data already exists in a ticket list bar corresponding to the first type user may be determined by a service platform for providing a general single-point ticket service; if not, it indicates that the first type user is currently qualified to purchase the single-point ticket, that is, the number of single-point tickets (for example, 1) may be displayed in the data transfer page 400a shown in fig. 5. Then, when the first type user selects such a payment channel in the data transfer page 400a, a data transfer operation may be performed based on the service assistance information currently displayed in the data transfer page (i.e., the service assistance information 1 shown in fig. 5); further, when it is confirmed that the data transfer operation is completed, a data transfer credential returned by the service server corresponding to the order-combining service may be received, and transfer success prompt information may be output on the data transfer page (e.g., the data transfer page 400a shown in fig. 5). It will be appreciated that the data transfer credential may be used to instruct a shipping system associated with the service server to provide a shipping service of the corresponding service attribute for the first type user. In other words, the data transfer credential here can be used to characterize that the first type user has the permission to initiate the corresponding interactive task.
In this way, after the first type user performs a return operation on the data transfer page where the transfer success prompt information is located, the first user terminal may return the display interface of the application client from the data transfer page to the video detail page. It is understood that, when detecting that the first type user has completed the data transfer operation, the first user terminal may determine that the first type user has currently completed the interactive task for the target video data (for example, the interactive task may be a successful order-combining).
Based on this, when the first type user completes the order-combining task or the package-yard task, the first user terminal may further perform a rotation operation on a specific area (for example, the browsing auxiliary area) in the currently displayed video detail page, and may further take the display interface where the rotated browsing auxiliary area is located as the virtual interaction interface.
For easy understanding, please refer to fig. 7, which is a schematic view of a scenario of acquiring a virtual interaction interface according to an embodiment of the present application. As shown in fig. 7, when it is determined that the user 1 shown in fig. 7 has activated an interactive task (for example, the user 1 has the permission to initiate the interactive task when the order-combining succeeds or the package-yard purchase succeeds), the first user terminal may return the display interface of the application client from the data transfer page to the video detail page, so as to obtain the video detail page 600a shown in fig. 7. At this time, when the current time is the time T2 (the time T2 is a time after the time T1), the first user terminal may determine the first area and the second area in the browsing auxiliary area shown in fig. 7 in the video detail page 600a. At this point, the first area and the second area in the video detail page are still shown in the above-mentioned plan view manner.
It can be understood that, in order to truly simulate the virtual scenes of buying tickets, entering the theater, taking a seat, and watching a movie in the offline cinema viewing scene, the embodiment of the present application proposes to pull the whole atmosphere of the virtual cinema backward into depth: taking the direction perpendicular to the plane where the browsing auxiliary area shown in fig. 7 is located (i.e., the above-mentioned plan view) as the rotation direction, a rotation operation (e.g., a 3D rotation) may be performed on the first area (e.g., the area where the projection screen is located) and the second area (e.g., the area where the seats are located) shown in fig. 7 based on the rotation direction, so as to enter a virtual 3D mode. Therefore, the browsing auxiliary area where the rotated first area and the rotated second area are located can be determined as the rotated browsing auxiliary area (specifically, see the schematic diagram of the rotated browsing auxiliary area shown in fig. 7). It can be understood that, in the 3D rotation operation on the browsing auxiliary area, the embodiment of the present application can ensure that the size of the browsing auxiliary area after the rotation is the same as the size of the browsing auxiliary area before the rotation.
It is understood that, for example, as shown in fig. 7, the embodiment of the present application may adaptively reduce the first area shown in fig. 7 through the 3D rotation operation (that is, the area where the projection screen is located may be reduced through the 3D rotation of the screen), and adaptively enlarge the second area shown in fig. 7 (that is, the area where the seats are located may be enlarged through the 3D rotation of the screen), so that the rotated browsing auxiliary area shown in fig. 7 may be determined according to the rotated first area and the rotated second area. It can be understood that, in the embodiment of the present application, the display interface where the rotated browsing auxiliary area is located may be collectively referred to as a virtual interaction interface, and the virtual interaction interface may be the virtual interaction interface 600b shown in fig. 7, so as to simulate the virtual reality function of entering a virtual movie theater in the above application client. As shown in fig. 7, the virtual interaction interface 600b may include the virtual seating area shown in fig. 7, and the virtual seating area may be an area obtained after performing the 3D rotation operation on the second area in the video detail page 600a. It is understood that the user 1 (i.e., the first type user) shown in fig. 7 may view, in a 3D sliding manner (e.g., sliding forward, backward, leftward, or rightward) in the virtual seating area, the image data of the users who have currently selected seats in the virtual theater. For example, at this time, the user 1 may browse, in the 3D sliding manner in the scene of the virtual cinema, the user image data 1 shown in fig. 7.
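The adaptive resizing above — shrinking the first (projection-screen) area while enlarging the second (seating) area so that the overall browsing auxiliary area keeps its size — can be sketched as follows; the shrink ratio and the one-dimensional height model are illustrative assumptions.

```python
def rotate_browsing_area(first_area_h, second_area_h, shrink_ratio=0.3):
    """Shrink the first (projection-screen) area and enlarge the second
    (seating) area by the same amount, so the rotated browsing auxiliary
    area keeps the same overall size as before rotation."""
    delta = first_area_h * shrink_ratio
    return first_area_h - delta, second_area_h + delta
```

For example, rotating areas of heights 100 and 200 yields a smaller screen area and a larger seating area whose heights still sum to 300, matching the size-preservation requirement above.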
Therefore, in the embodiment of the present application, when the above order-combining succeeds or the package-yard purchase succeeds, the user image data of the first type user may be output to the virtual seating area shown in fig. 7 for display, so as to help the user 1 know that he or she has entered the virtual movie theater and can quickly take a seat in the virtual movie theater according to a preset mapping relationship. For example, here, the user 1 is the first type user in the public service broadcast group who initiates the interactive task with respect to the target video data with the video name "XXX in Rumor" shown in fig. 7; therefore, when it is determined that the first type user has the permission to initiate the interactive task, the first user image data of the first type user (i.e., the user image data 1 shown in fig. 7) can be placed on a designated seat (e.g., a seat with a homeowner identifier) based on the user role type of the first type user (i.e., the inviting user of the interactive task), and then the user image data 1 shown in fig. 7 can be currently displayed on the designated seat.
Therefore, in the embodiment of the present application, the service server can find, for the users currently entering the virtual cinema, the seat numbers having a one-to-one mapping relationship with their orders of participation (i.e., their interaction serial numbers for participating in the interactive task), and can further determine the virtual seats in the scene of the virtual cinema based on the found seat numbers, so that the image data of the users with the corresponding orders of participation (i.e., the corresponding interaction serial numbers) can be respectively placed on the virtual seats corresponding to the corresponding seat numbers. In other words, in the embodiment of the present application, the number of seats in the virtual seating area is determined by the total number of people in the public service broadcast group where the first type user is located. For example, if there are N (e.g., 30) people in the user group A, the number of seats in the virtual seating area in the offline cinema viewing scene may be N (i.e., 30).
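The one-to-one mapping between interaction serial numbers and seat numbers can be sketched as below. The identity mapping, capped by the group size N, is an assumption — the embodiment only requires that the relation be one-to-one, not that it take this particular form.

```python
def seat_number_for(interaction_serial, total_seats=30):
    """Map an interaction serial number (the order in which a user joins
    the interactive task) to a seat number in the virtual seating area,
    whose capacity equals the group size N (30 in the example)."""
    if not 1 <= interaction_serial <= total_seats:
        raise ValueError("no seat available for this serial number")
    return interaction_serial  # assumed identity mapping; any bijection works
```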
Further, as shown in fig. 7, the first type user may trigger the invite control of "invite group friend" shown in fig. 7 in the virtual interactive interface 600b shown in fig. 7, so as to further perform the following step S103.
In addition, as shown in fig. 7, the rotated first area may be used to present video auxiliary information associated with the target video data (i.e., the video data with the video name "XXX in Rumor" shown in fig. 7). The video auxiliary information here may include either of picture information and short video information of the target video data. The picture information may include a video cover or a poster picture of the target video data, and the short video information may include a trailer or a highlight clip of the target video data, etc. That is, the embodiment of the present application can change, in a manual switching manner, the showing content displayed on the projection screen in the rotated first area, so as to provide a friendly human-computer interaction interface.
At this time, optionally, when the video auxiliary information displayed in the rotated first region is picture information (that is, image data of one frame shown in fig. 7), the first user terminal may further respond to a trigger operation executed for a short video playing control (for example, a trailer control shown in fig. 7) corresponding to the short video information, so as to play the short video information of the target video data in the rotated first region.
Similarly, optionally, when the video auxiliary information displayed in the rotated first area is short video information, the first user terminal may further respond to a trigger operation performed on a picture preview control (for example, the picture control shown in fig. 7) corresponding to the picture information, so as to display the picture information of the target video data in the rotated first area.
Step S103, responding to a trigger operation executed aiming at an invitation control in a virtual interactive interface, and outputting interactive invitation information associated with target video data to a public service broadcast group where a first type of user is located;
specifically, the first user terminal may respond to a trigger operation executed for an invitation control (e.g., the control of the invitation group friend shown in fig. 7) in the virtual interaction interface, obtain a user name (e.g., zhang san) of the first type user and a video name (e.g., "XXX in rumours" shown in fig. 7) of the target video data, and may further generate interaction invitation information associated with a service attribute of the target video data based on the obtained user name and video name; further, the first user terminal may send interaction invitation information associated with the target video data to the service server, so that the service server may broadcast interaction prompt information corresponding to the interaction invitation information in a public service broadcast group where the first type user is located. Further, the first user terminal may receive the interaction prompt information broadcast by the service server, and output the interaction prompt information to the public service broadcast group where the first type of user is located. For an implementation manner in which the first type user sends the interaction invitation information through the service server in the virtual 3D mode, reference may be made to the description of a specific process in which the user a1 sends the interaction invitation information in the embodiment corresponding to fig. 2, and details will not be further described here.
It can be understood that, in the embodiment of the present application, the service server may display the interaction prompt information in the public service broadcast group in the manner of a bar pinned to the top, so that other users in the public service broadcast group can quickly see the interaction prompt information currently pinned in the public service broadcast group, and can further identify the interaction invitation information sent by the first user terminal by triggering the interaction prompt information. It is understood that, in the embodiment of the present application, the user who initiates the interaction invitation information (i.e., the first type user) may be referred to as an interaction initiator, and the users who receive the interaction invitation information in the public service broadcast group may be collectively referred to as interaction responders. That is, the interaction initiator and the interaction responder here are role classifications for different users in the same group in the virtual reality scene (e.g., the offline cinema viewing scene).
Optionally, it may be understood that, when acquiring the interaction invitation information associated with the target video data, the first user terminal may package the interaction invitation information as an invitation card, and then send the interaction invitation information to the service server in the form of the invitation card, so that the service server broadcasts the invitation card corresponding to the interaction invitation information in the public service broadcast group where the first type user is located. In this way, the other users (i.e., the users to be invited) located in the public service broadcast group can quickly see the invitation card currently presented (e.g., pinned to the top) in the public service broadcast group, and can then identify the interaction invitation information sent by the first user terminal by triggering the invitation card. It can be understood that, at this time, the users to be invited who trigger the invitation card may be collectively referred to as second type users; when the user terminal corresponding to a second type user (i.e., the second user terminal) acquires the interaction invitation information by identifying the invitation card, the video detail page corresponding to the target video data is output. At this time, the second type user may enter the virtual 3D mode through the video detail page, so as to decide whether to participate in the order-combining by referring to the user image data currently located in the virtual seating area in the second user terminal. The terminal display interface in the virtual 3D mode corresponding to the second type user may be the virtual interaction interface 200a in the embodiment corresponding to fig. 2.
It can be understood that, when the second type user performs a trigger operation on the invitation response control (i.e., the control for accepting the invitation and watching together in the embodiment corresponding to fig. 2) in the virtual interaction interface 200a, a new data transfer page may be quickly output in the second user terminal. At this time, the second type user can perform a data transfer operation on the new data transfer page, and can further receive the data transfer credential returned by the service server when the data transfer operation is successfully completed in the data transfer page, and output transfer success prompt information on the data transfer page; at this point, the data transfer credential may be used to characterize that the second type user currently has the permission to participate in this interactive task together. It can be seen that the second type user may be a user in the public service broadcast group (e.g., the group A described above) who participates in responding to the interaction invitation information; at this time, the first user terminal may further perform step S104, in which the second user image data of the second type user sent by the user terminal corresponding to the second type user (i.e., the second user terminal) may be received.
Step S104, receiving second user image data of a second type user, which is sent by a user terminal corresponding to the second type user, and outputting the second user image data to a virtual seating area;
specifically, when a second type of user in the public service broadcast group obtains the interaction invitation information based on the interaction prompt information, the first user terminal may receive interaction response information returned, based on the interaction invitation information, by the user terminal corresponding to the second type of user (i.e., the second user terminal). The interaction response information here may include the second user image data of the second type of user and the interaction serial number of the second type of user in the interactive task indicated by the interaction invitation information. Further, the first user terminal may determine the virtual seat corresponding to the interaction serial number in the virtual seating area, and may then output the second user image data to that virtual seat in the virtual seating area. Optionally, the interaction response information may further include the user name of the second type of user; thus, when the second type user successfully joins the group order, a prompt message indicating that the second type user (e.g., a given group member) has chosen to watch the series together may be output in the first user terminal.
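The seat assignment described in this step can be sketched as follows. This is a minimal illustration, assuming the interaction initiator holds serial number 1 and each responder receives the next serial number; the class and method names are hypothetical and do not appear in this application:

```python
# Hypothetical sketch: place user image data on the virtual seat that
# corresponds to an interaction serial number (serial 1 -> first seat).
class VirtualSeatingArea:
    def __init__(self, seat_count):
        self.seat_count = seat_count
        self.seats = {}  # seat index -> user image data

    def place(self, interaction_serial, user_image_data):
        seat_index = interaction_serial - 1
        if not 0 <= seat_index < self.seat_count:
            raise ValueError("no seat for serial %d" % interaction_serial)
        self.seats[seat_index] = user_image_data
        return seat_index

area = VirtualSeatingArea(seat_count=2)
assert area.place(1, "first user image data") == 0   # interaction initiator
assert area.place(2, "second user image data") == 1  # interaction responder
```

A real implementation would render the image data into the seat's display region; here the dictionary merely records which seat each participant occupies.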
For ease of understanding, please refer to fig. 8, where fig. 8 is a schematic view of a scene in which second user image data is added to a virtual seating area according to an embodiment of the present application. The area 70b shown in fig. 8 is the above-mentioned virtual seating area, which can currently be used to display the user image data of each user participating in the interactive task: for example, it can display the user image data of the interaction initiator who initiates the interactive task, as well as the user image data of each interaction responder (i.e., invited user) who participates in responding to the interactive task. For example, the user image data 1 shown in fig. 8 may belong to the interaction initiator currently initiating a group-order task (e.g., user 1 shown in fig. 8). Similarly, the user image data 2 shown in fig. 8 may belong to an interaction responder currently participating in the group-order task (e.g., user 2 shown in fig. 8).
Furthermore, it is understood that the browsing auxiliary area in fig. 8 may further comprise a third area (not shown in the figure), which may be used to show the task state of the interactive task initiated by user 1. It can be understood that, in this embodiment of the application, when the first user terminal outputs the user name of the second type of user to the virtual interactive interface, it further receives the number of interactive users counted by the service server based on the interaction response information, so that the task state of the interactive task in the third area can be updated based on that number. For example, taking a two-person group order as an example, when user 2 shown in fig. 8 successfully participates in the interactive task (i.e., the 2-person group-order task), the service server may count that, with user 2, the minimum number of interactive users is met; thus, as shown in fig. 8, the task state displayed in the third area may be adjusted from the previous "1 more person needed for group success" state to the "group success" state shown in fig. 8.
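The third-area update described above amounts to comparing the counted number of interactive users against the minimum required for the task. A minimal sketch, assuming a two-person group task; the state strings are illustrative, not the application's actual UI text:

```python
# Sketch: derive the third-area task state from the interactive user
# count returned by the service server (2-person group task assumed).
def task_state(interactive_user_count, min_users=2):
    remaining = min_users - interactive_user_count
    if remaining <= 0:
        return "group success"
    return "%d more needed for group success" % remaining

assert task_state(1) == "1 more needed for group success"
assert task_state(2) == "group success"
```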
Further, as shown in fig. 8, when the task state of the group-order task initiated by user 1 for the target video data is the group-success state, the service server may allocate a virtual studio (which may also be referred to as a virtual room) to the group-order task (e.g., the 2-person group-order task) in the public service broadcast group. In this way, when the first type user performs a trigger operation on the "enter and watch" control (i.e., the shared start control) in the virtual interactive interface 700a shown in fig. 8 through the first user terminal, the virtual studio may be started so as to output the video playing interface 700b shown in fig. 8 in the first user terminal. The following step S105 may then be performed: within the effective playing duration (e.g., 24 hours) of the virtual studio, the target video data may be played, in the virtual studio corresponding to the public service broadcast group, for the first type user (i.e., user 1 shown in fig. 8) and the second type user (i.e., user 2 shown in fig. 8) in the area 70b shown in fig. 8 (i.e., the virtual seating area).
Step S105, in the virtual playing room corresponding to the public service broadcast group, playing the target video data for the first type user and the second type user in the virtual seating area.
Specifically, if the service attribute is a first service type, that is, if the interactive task corresponding to the target video data is a first task (for example, the above-mentioned package-yard task), the first user terminal may receive, when the first type user completes the data transfer operation, a first virtual studio corresponding to the public service broadcast group created by the service server based on the first service type. Further, the first user terminal may respond, within the task invitation duration corresponding to the first task, to the trigger operation executed on the shared start control associated with the first virtual studio, and output a video sharing interface corresponding to the first virtual studio. Further, the first user terminal may invoke a video player in the video sharing interface when the task invitation duration reaches the invitation duration threshold, and play the target video data for the first type user and the second type user in the virtual seating area.
Optionally, if the service attribute is a second service type, that is, if the interactive task corresponding to the target video data is a second task (for example, the above-mentioned group-order task), the first user terminal may receive, upon detecting that the task state corresponding to the second task reaches the completion state within the task invitation duration corresponding to the second task, a second virtual studio corresponding to the public service broadcast group created by the service server based on the second service type. The completion state means that the service server has counted that the number of interactive users participating in responding to the second task reaches an interactive user threshold (for example, 2 people); the interactive user threshold is less than or equal to the total number of users in the public service broadcast group (e.g., N, where N may be a positive integer greater than or equal to 2). Furthermore, the first user terminal may respond, within the task invitation duration, to the trigger operation executed on the shared start control associated with the second virtual studio, and output a video sharing interface corresponding to the second virtual studio. Further, the first user terminal may invoke a video player in the video sharing interface when the task invitation duration reaches the invitation duration threshold, and play the target video data for the first type user and the second type user in the virtual seating area.
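The two branches above can be summarized as follows: the studio for a first-type (package-yard) task is created as soon as the initiator's data transfer completes, while the studio for a second-type (group-order) task is created only once the counted responders reach the interactive user threshold. The function below is an illustrative sketch under those assumptions, not the server's actual interface:

```python
# Sketch: decide whether the service server creates a virtual studio,
# branching on the service type as described in the two paragraphs above.
def create_studio(service_type, paid, responder_count, threshold, group_size):
    assert threshold <= group_size  # threshold never exceeds group membership
    if service_type == "first":     # package-yard task: paying is enough
        return "studio" if paid else None
    if service_type == "second":    # group-order task: need enough responders
        return "studio" if responder_count >= threshold else None
    return None

assert create_studio("first", paid=True, responder_count=0,
                     threshold=2, group_size=5) == "studio"
assert create_studio("second", paid=True, responder_count=1,
                     threshold=2, group_size=5) is None
assert create_studio("second", paid=True, responder_count=2,
                     threshold=2, group_size=5) == "studio"
```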
The computer device in the embodiment of the application can, when obtaining the video detail page corresponding to the target video data in the application client, receive a trigger operation executed by a first type user on a task initiation control (for example, a group-order control or a package-yard control) in the video detail page, so as to output a data transfer page corresponding to the service attribute indicated by the task initiation control. It should be understood that the video detail page includes a browsing auxiliary area associated with the target video data; for example, the browsing auxiliary area may be a specific area in the video detail interface that needs to be rotated in 3D. Further, after detecting that the data transfer operation (i.e., completing the group order or the package-yard booking) has been executed on the data transfer page, the computer device may return the display interface of the application client from the data transfer page to the video detail page, perform a rotation operation on the browsing auxiliary area in the video detail page, use the display interface where the rotated browsing auxiliary area is located as a virtual interactive interface, and output the first user image data of the first type of user in the virtual seating area of the virtual interactive interface. The virtual interactive interface may include an invitation control for inviting group friends in a public service broadcast group of the application client.
Further, the computer device may respond to a trigger operation executed for the invitation control in the virtual interactive interface to output interactive invitation information associated with the target video data to a public service broadcast group in which the first type user (for example, user 1) is located; the public service broadcast group may include users of a second type (e.g., user 2 and user 3); the second type of user may be a user participating in the response to the interactive invitation information in the public service broadcast group. It should be understood that the virtual interactive interface may be a display interface obtained by performing 3D rotation processing on a specific area in the video detail page, so that the virtual seating area in the virtual interactive interface can be used for displaying not only the user image data of the inviting user who initiates the invitation, but also the user image data of the invited user who has received the invitation. In this way, when the invited other users (e.g., user 4) obtain the interaction invitation information, the user image data of each user currently needing to view the target video data (e.g., program X) can be further viewed in the virtual seating area of the virtual interaction interface, so that the other users (e.g., user 4) can be helped to decide whether to join in viewing the video program X together, and if the other users (e.g., user 4) also choose to view the video program X together, the user 4 can be regarded as a second type user. 
At this time, the computer device may receive the second user image data of the second type user sent by the user terminal corresponding to the second type user (e.g., user 4), and may then output the second user image data to the virtual seating area. This means that, as the number of invited users increases, the virtual seating area in the virtual interactive interface can present different user image data, thereby providing rich interface display effects. In addition, the computer device may further play the target video data for the first type user and the second type user in the virtual seating area in the virtual play room corresponding to the public service broadcast group. It should be understood that, in the embodiment of the present application, the function of online synchronous film watching may be provided for multiple people in the same public service broadcast group through the virtual play room, and playback control for online synchronous multi-user viewing may thus be implemented in the virtual play room.
Further, please refer to fig. 9, where fig. 9 is a schematic diagram of a video data processing method according to an embodiment of the present application. As shown in fig. 9, the method may be performed by a user terminal (e.g., the terminal 10a shown in fig. 2 described above), by a service server (e.g., the server 20a shown in fig. 2 described above), or by the user terminal and the service server together. For ease of understanding, this embodiment takes joint execution by the user terminal and the service server as an example, where the terminal 10a serves as the first user terminal. The method may specifically include the following steps:
step S201, a first user terminal obtains a video detail page corresponding to target video data in an application client;
step S202, a first user terminal responds to a trigger operation executed by a first type user aiming at a task initiating control in a video detail page, and initiates an authentication request to a service server;
step S203, the service server authenticates the first type user corresponding to the first user terminal based on the authentication request, and then returns a data transfer page to the first user terminal;
step S204, the first user terminal outputs a data transfer page corresponding to the service attribute indicated by the task initiating control;
the video detail page comprises a browsing auxiliary area associated with the target video data;
step S205, when the data transfer operation is completed on the data transfer page, the first user terminal returns the display interface of the application client from the data transfer page to the video detail page, performs rotation operation on the browsing auxiliary area in the video detail page, takes the display interface where the rotated browsing auxiliary area is located as a virtual interaction interface, and outputs first user image data of a first type of user in a virtual seating area of the virtual interaction interface;
step S206, the first user terminal responds to the triggering operation executed aiming at the invitation control in the virtual interactive interface and sends the interactive invitation information associated with the target video data to the service server;
step S207, the service server outputs the interaction invitation information to the public service broadcast group where the first type user is located;
wherein the public service broadcast group comprises a second type of user; the second type user is a user who participates in responding to the interaction invitation information in the public service broadcast group.
Step S208, when the second type user participates in the response of the interaction invitation information, the service server receives second user image data of the second type user, which is sent by a second user terminal corresponding to the second type user, and sends the second user image data to the first user terminal;
step S209, the first user terminal outputs the second user image data to the virtual seating area;
step S210, the first user terminal sends a virtual playing room establishing instruction to the service server;
step S211, the service server creates a virtual playing room corresponding to the public service broadcasting group based on the virtual playing room creating instruction, and sends the virtual playing room to the first user terminal;
in step S212, in the virtual studio, the target video data is played for the first type user and the second type user in the virtual seating area.
For specific implementations of steps S201 to S212, reference may be made to the description of steps S101 to S105 in the embodiment corresponding to fig. 3; details are not repeated here.
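Steps S201 to S212 can be condensed into the following message sequence between the first user terminal and the service server; the function and the strings are illustrative labels keyed to the step numbers above, not part of any actual protocol:

```python
# Sketch: the end-to-end exchange of steps S201-S212 as a message log.
def flow():
    log = []
    log.append("terminal->server: authentication request (S202)")
    log.append("server->terminal: data transfer page (S203-S204)")
    log.append("terminal: rotate browsing area, show seating area (S205)")
    log.append("terminal->server: interaction invitation (S206)")
    log.append("server->group: broadcast invitation (S207)")
    log.append("server->terminal: second user image data (S208-S209)")
    log.append("terminal->server: create virtual play room (S210)")
    log.append("server->terminal: virtual play room (S211)")
    log.append("play room: play target video data (S212)")
    return log

assert len(flow()) == 9
```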
For ease of understanding, please refer to fig. 10, where fig. 10 is a diagram illustrating a technical architecture of a front end interacting with a back end according to an embodiment of the present application. As shown in fig. 10, the background may be the above service server; that is, the service server may specifically involve the background server corresponding to the pay-together front end shown in fig. 10 (e.g., background server 1) and the background server corresponding to the hand Q (i.e., mobile QQ) front end shown in fig. 10 (e.g., background server 2, which may include the hand Q background and the player background shown in fig. 10). As shown in fig. 10, the front end may run in the first user terminal, and may specifically include the hand Q front end shown in fig. 10 and the pay-together front end integrated in the hand Q front end. That is, the hand Q front end may be the application client in the embodiment corresponding to fig. 3, and the pay-together front end may be the target service program in the embodiment corresponding to fig. 3. As shown in fig. 10, after a user (for example, user 1 in the embodiment corresponding to fig. 3) clicks "watch together" (i.e., the target service program) in a hand Q chat group of the hand Q front end shown in fig. 10, the user may enter the pay-together front end shown in fig. 10.
The background server 1 may involve a plurality of service modules shown in fig. 10, such as a movie service, a single-point ordering service, a general single-point ticket service, a group-sharing service, and a qualification service. The movie service may be configured to recommend, through the configuration system shown in fig. 10 (i.e., a video data recommendation system), K pieces of video data matching the interests of a first type of user (e.g., user 1), and then return a movie list formed by the K recommended pieces of video data to the first user terminal, so as to present the movie list page shown in fig. 10 in the target service program of the first user terminal; the movie list page may be the video recommendation interface in the embodiment corresponding to fig. 3. At this time, user 1 may select, on the movie list page, a piece of video data matching his or her interests as the target video data; the video detail information of the target video data may then be queried (i.e., the movie details are queried) from the multimedia content database shown in fig. 10 through the movie service shown in fig. 10, and the queried movie details may be displayed on the movie detail (grouping) page corresponding to the target service program.
Further, when user 1 enters the payment service process through the movie detail (grouping) page shown in fig. 10, step 1 is performed by the group-order service shown in fig. 10 to query the group-order details for user 1. That is, before the payment page is entered, the background server 1 in the service server determines whether there is already an ongoing group purchase in the public service broadcast group where user 1 is currently located; if not, user 1 (i.e., the first type of user) is currently qualified to initiate a group purchase, and the pay-together front end may be allowed to present the video payment page shown in fig. 10. The video payment page may be the order configuration page in the embodiment corresponding to fig. 3. As shown in fig. 10, the background server may also execute step 2 to query whether user 1 currently has available voucher amounts to be presented on the video payment page. Thus, when user 1 performs the ordering of step 3 shown in fig. 10 on the video payment page, a single-point ordering service may be provided for user 1 through the charging platform shown in fig. 10, so as to obtain the data transfer page in the embodiment corresponding to fig. 3. At this time, the background server 1 may execute step 5 and step 6 shown in fig. 10 through the single-point ordering service, and may further execute step 7 to call back the shipment when the payment succeeds. It can be understood that the background server 1 may ship through the nine-layer shipment system (a universal service system for shipment) shown in fig. 10, may then determine, through the group-sharing service shown in fig. 10, which interactive task user 1 is currently qualified to initiate, and may issue a single-point ticket according to user 1's current qualification.
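The step-1 pre-payment check described above, in which the background determines whether the user's group already has an ongoing group purchase before presenting the payment page, can be sketched with an in-memory lock table; the dictionary and function names below are hypothetical stand-ins for the background server's records:

```python
# Sketch: at most one open group purchase per public service broadcast group.
open_group_purchases = {}  # group id -> video id of the purchase in progress

def initiate(group_id, video_id):
    # Refuse if the group already holds the grouping lock.
    if group_id in open_group_purchases:
        raise PermissionError("group already has an open group purchase")
    open_group_purchases[group_id] = video_id

def release(group_id):
    # Called when the purchase completes, times out, or the room ends.
    open_group_purchases.pop(group_id, None)

initiate("group-a", "video-x")
release("group-a")
initiate("group-a", "video-y")  # allowed again once the lock is released
assert open_group_purchases["group-a"] == "video-y"
```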
As shown in fig. 10, when user 1 initiates an interactive task (e.g., a group-order task or a package-yard task), a grouping message can be pushed, through the background server corresponding to the pay-together front end, to the background server 2 corresponding to the hand Q front end (e.g., the hand Q background shown in fig. 10). The grouping message may be the interaction invitation information in the embodiment corresponding to fig. 3. In this way, when other users in the public service broadcast group see the grouping message corresponding to the interaction invitation information and join the group midway, the participants currently responding to the grouping message can be checked for qualification by the qualification service and the group-sharing service shown in fig. 10, that is: 1) it is determined whether the current participant (i.e., the second type user, for example, user 2) belongs to the public service broadcast group, and 2) whether another group purchase for the target video data already exists in the current group. Thus, after the second type user confirms successful participation in the group purchase, the first type user can enter the room through the hand Q front end in fig. 10 to watch the video data in the hand Q viewing room (i.e., the virtual play room).
It can be understood that, as shown in fig. 10, when user 1 requests to enter the room, a qualification review is performed through the hand Q background shown in fig. 10 and the qualification service and group-sharing service provided by the background server 1, and the user can enter the virtual play room when the review succeeds. As shown in fig. 10, the player background can perform authentication processing on the currently requested video data so as to compare the qualification data of each participant in the online multi-user synchronous viewing, and can then, when the comparison succeeds, play the corresponding video data for the first type user and the second type user in the virtual room, thereby implementing playback control for online multi-user synchronous viewing. It can be understood that, since the pay-together front end shown in fig. 10 is embedded in the hand Q front end, the background server 1 and the background server 2 may be regarded as the same server in the embodiment of the present application; optionally, the background server 1 and the background server 2 may also be equivalent to two sub-servers in the service server. The implementation forms of these two types of servers are not limited here.
For ease of understanding, please refer to fig. 11, where fig. 11 is a schematic diagram of a qualification verification scenario provided in an embodiment of the present application. As shown in fig. 11, the embodiment of the present application may seek a payment increment in a social scenario by integrating the service function of the "watch together" service program with an application client (for example, a social client such as hand Q). In addition, the embodiment of the present application can provide reliable payment authentication capability by providing a qualification ticket on the video side to hand Q (i.e., to the watch-together service program in hand Q). In addition, as shown in fig. 11, the application client (e.g., hand Q) provides a virtual room (i.e., the above virtual play room) to meet the user's viewing needs in a manner analogous to an offline cinema viewing scene.
As shown in fig. 11, in the process in which a user participating in the interaction enters the hand Q viewing room to watch the video, a review of payment qualification is required; for example, a specific review is performed to check whether the user is entitled to pay for or enter the current virtual play room. That is, the application can authenticate the participants by querying the qualification flow during playback authentication, and a user whose viewing qualification is confirmed can be permitted to enter the room (i.e., enter the virtual play room) to watch. As shown in fig. 11, it can be understood that, if the current interactive task is a group-order task and the current room is already opened, the embodiment of the present application may perform a qualification review on participants newly joining the group. For example, when a newly joined participant clicks to pay immediately, whether the participant is qualified to join the group is first queried; ordering is allowed if the participant is qualified, and the participant is informed of the reason if not.
The grouping qualification needs to meet the following conditions: 1) the current user is in this group, and 2) if initiating a group purchase, the current group has no other movie being group-purchased; a group is only allowed to initiate one group purchase at a time. In other words, in the embodiment of the present application, after a group initiates a group purchase corresponding to a certain piece of target video data, a locked state is entered; that is, a new group purchase corresponding to the same target video data is not allowed to be initiated, and the same user is not allowed to initiate multiple group purchases in the group. Therefore, before each payment, it is necessary to determine whether the user has initiated other group purchases in the current group.
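The two qualification conditions listed above reduce to a simple predicate; the membership set and the per-group open-purchase record below are illustrative stand-ins for the qualification service's data:

```python
# Sketch: grouping qualification = (user in group) AND (no other open purchase).
def qualified_to_initiate(user, group_members, group_open_purchase):
    in_group = user in group_members         # condition 1: user is in this group
    no_other = group_open_purchase is None   # condition 2: no purchase under way
    return in_group and no_other

assert qualified_to_initiate("user1", {"user1", "user2"}, None) is True
assert qualified_to_initiate("user1", {"user1", "user2"}, "video-y") is False
assert qualified_to_initiate("user9", {"user1", "user2"}, None) is False
```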
It can be understood that, in the embodiment of the present application, when it is determined that the current user is qualified to initiate a group order or a package-yard booking, a state transition at the corresponding time may be performed. For example, please refer to fig. 12, where fig. 12 is a schematic view of a scenario for performing state transition according to an embodiment of the present application. The current user shown in fig. 12 may be the above first type user. When the first type user has not initiated a group order, the first type user may become qualified to initiate one by means of data transfer; that is, for the group order or package-yard booking, the first type user is divided into pre-purchase (unqualified) and post-purchase (qualified, not yet verified). As shown in fig. 12, the group order initiated by the first type user may include a group-failure state. For example, since a task invitation duration applies when the first type user initiates the group order, the group order is considered to have timed out when the task invitation duration exceeds an invitation duration threshold (e.g., 4 hours); in the case of such a timeout, the group order expires as shown in fig. 12, and the service server may then release the grouping lock for the first type user in the background and refund the virtual assets paid when initiating the group order (e.g., diamonds may be reissued). Alternatively, as shown in fig. 12, in the case that the number of users within the task invitation duration has reached the minimum threshold (e.g., 2 people), if the first type user has not opened the room when the task invitation duration reaches the invitation duration threshold (e.g., 4 hours), the group order is also considered to have timed out, and a group-timeout-failure state results.
As shown in fig. 12, if the number of people within the task invitation duration has reached the minimum threshold (e.g., 2 people) and the first type user opens the room before the task invitation duration reaches the invitation duration threshold (e.g., 4 hours), the state of group success with the room opened is deemed to be satisfied; at this time, the first type user is qualified but the qualification has not yet been verified. It is understood that, as shown in fig. 12, the first type user may manually close the room in the state of group success with the room opened. It should be understood that, at this time, the embodiment of the present application may regard the virtual play room as being in a temporarily closed state; that is, the first type user has the right to choose to re-open the room within the effective duration (for example, 8 hours) of the virtual play room, and may again manually close the room in the state of group success with the room opened.
Further, optionally, as shown in fig. 12, after manually closing the room, the first type user may leave the room temporarily closed; when the duration of the temporary closure exceeds the effective duration of the virtual play room, the room state of the current room is determined as the end state, and if the first type user's qualification has not been verified, the grouping lock may be released and the diamonds refunded to the first type user who has paid. Optionally, as shown in fig. 12, if it is detected, in the state of group success with the room opened, that the duration for which the room has been open exceeds the effective duration of the virtual play room, the state of the current room is likewise regarded as the room end state.
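The timeout rules of fig. 12 can be approximated by the following sketch, using the example values from the text (a 4-hour invitation threshold and a 2-person minimum); the function assumes that a true room_opened flag means the room was opened before the threshold, and the state names are illustrative:

```python
# Sketch of the fig. 12 timeout rules for a group order.
INVITE_LIMIT_H = 4  # example invitation duration threshold from the text

def group_state(hours_elapsed, users_joined, room_opened, min_users=2):
    if hours_elapsed > INVITE_LIMIT_H:
        # Past the threshold the group either succeeded in time, or the
        # lock is released and the paid virtual assets are refunded.
        if users_joined >= min_users and room_opened:
            return "group success, room open"
        return "group timeout failure"
    if users_joined >= min_users and room_opened:
        return "group success, room open"
    return "inviting"

assert group_state(5, users_joined=1, room_opened=False) == "group timeout failure"
assert group_state(5, users_joined=2, room_opened=False) == "group timeout failure"
assert group_state(3, users_joined=2, room_opened=True) == "group success, room open"
```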
Optionally, when the first type user succeeds in the group order, opens the room, and starts playback, the qualification of the qualified first type user may be further verified, after which the current virtual play room may be ended.
It can be understood that the first user terminal may play the target video data for the first type user and the second type user in the virtual seating area in the virtual play room corresponding to the public service broadcast group. It should be understood that, in the embodiment of the present application, the function of online synchronous film viewing may be provided for multiple users in the same public service broadcast group through the virtual play room, and playback control for online synchronous multi-user viewing may thus be implemented in the virtual play room. For beneficial effects of the same method, details are not repeated here.
Further, please refer to fig. 13, fig. 13 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present application. Wherein the video data processing apparatus 1 may comprise: a detail page acquisition module 10, an area rotation module 20, an invitation information sending module 30, an image data receiving module 40 and a video data playing module 50; further, the video data processing apparatus 1 may further include: a session interface control module 60, a video data request module 70, a recommendation list receiving module 80;
a detail page obtaining module 10, configured to obtain a video detail page corresponding to target video data in an application client, respond to a trigger operation executed by a first type of user for a task initiation control in the video detail page, and output a data transfer page corresponding to a service attribute indicated by the task initiation control; the video detail page comprises a browsing auxiliary area associated with the target video data;
the K pieces of video data comprise target video data;
the detail sheet acquisition module 10 includes: a detail page output unit 101, an authentication request sending unit 102, an order configuration unit 103 and a transfer interface output unit 104;
a detail page output unit 101, configured to respond to a trigger operation executed on target video data in the list display interface, and output a video detail page corresponding to the target video data; the video detail page comprises a task initiating control used for indicating a first type of user to initiate an interactive task;
the authentication request sending unit 102 is configured to send an authentication request to the service server in response to a trigger operation executed by the first type of user for initiating a control for a task in the video detail page; the authentication request is used for indicating the service server to acquire an order configuration page associated with the service attribute of the target video data when the task initiating authority of the first type user is successfully authenticated;
the order configuration unit 103 is configured to receive an order configuration page returned by the service server, respond to a trigger operation executed for the order configuration page, output order configuration information associated with a service attribute of the target video data on the order configuration page, and send a data transfer request associated with the order configuration information to the service server;
and the transfer interface output unit 104 is used for receiving the data transfer page returned by the service server and outputting the data transfer page.
For specific implementation manners of the detail page output unit 101, the authentication request sending unit 102, the order configuration unit 103, and the transfer interface output unit 104, reference may be made to the description of step S101 in the embodiment corresponding to fig. 3, which will not be further described here.
The area rotating module 20 is configured to, when the data transfer operation is completed on the data transfer page, return the display interface of the application client from the data transfer page to the video detail page, perform a rotating operation on a browsing auxiliary area in the video detail page, use the display interface where the rotated browsing auxiliary area is located as a virtual interaction interface, and output first user image data of a first type of user in a virtual seating area of the virtual interaction interface;
wherein, the region rotating module 20 includes: a transfer operation acquisition unit 201, a detail sheet returning unit 202, a rotation area determination unit 203, a seating area unit 204, and an auxiliary information switching unit 205;
a transfer operation obtaining unit 201, configured to obtain a data transfer operation performed on the service auxiliary information in the data transfer page, receive a data transfer credential returned by the service server when it is determined that the data transfer operation is completed, and output a transfer success prompt message on the data transfer page; the data transfer voucher is used for representing that the first type user has the authority of initiating the interaction task;
a detail page returning unit 202, configured to respond to a return operation executed on the data transfer page where the transfer success prompt information is located, and return the display interface of the application client from the data transfer page to the video detail page;
a rotation region determining unit 203, configured to determine, in the video detail page, a first region and a second region in the browsing auxiliary region, perform a rotation operation on the first region and the second region based on a rotation direction with a direction perpendicular to a plane where the browsing auxiliary region is located as the rotation direction, and determine, as the browsing auxiliary region after rotation, the browsing auxiliary region where the first region and the second region after rotation are located; the size of the browsing auxiliary area after rotation is the same as that of the browsing auxiliary area before rotation;
a seating area unit 204, configured to determine that a display interface where the rotated browsing assistance area is located is used as a virtual interaction interface for initiating an interaction task, determine, on the virtual interaction interface, a second area after rotation as a virtual seating area, and output first user image data of a first type of user in the virtual seating area; the number of seats in the virtual seating area is determined by the total number of people in the public service broadcast group in which the first type of user is located.
Wherein the rotated first area is used for displaying video auxiliary information associated with the target video data; the video auxiliary information includes any one of picture information and short video information of the target video data;
optionally, the auxiliary information switching unit 205 is configured to, when the video auxiliary information displayed in the rotated first area is picture information, respond to a trigger operation executed for a short video playing control corresponding to the short video information, and play the short video information in the rotated first area.
For a specific implementation manner of the transfer operation obtaining unit 201, the detail page returning unit 202, the rotation area determining unit 203, the seating area unit 204, and the auxiliary information switching unit 205, reference may be made to the description of step S102 in the embodiment corresponding to fig. 3, and details will not be further described here.
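For ease of understanding, the construction of the virtual seating area described above — whose seat count is determined by the total number of people in the public service broadcast group, and in which the first type of user's image data is output — may be sketched as follows. The function names, the list-based data shape, and the choice of the first seat for the first type of user are illustrative assumptions of this sketch, not part of the disclosed implementation:

```python
def build_virtual_seating_area(group_member_total):
    """Create an empty virtual seating area; the number of seats equals the
    total number of people in the public service broadcast group."""
    if group_member_total < 1:
        raise ValueError("a public service broadcast group has at least one member")
    return [None] * group_member_total

def seat_first_type_user(seating_area, first_user_image_data):
    """Output the first type of user's image data in the virtual seating
    area (placing it in the first seat is an assumption of this sketch)."""
    seating_area[0] = first_user_image_data
    return seating_area
```

A seating area built this way has one slot per group member, with unoccupied seats left empty until second type users respond to the interaction invitation.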
The invitation information sending module 30 is configured to respond to a trigger operation executed for an invitation control in the virtual interactive interface, and output interaction invitation information associated with the target video data to a public service broadcast group where the first type of user is located; the public service broadcast group comprises a second type of user; the second type user is a user participating in responding to the interactive invitation information in the public service broadcast group;
the invitation information sending module 30 includes: an invitation information generating unit 301, an invitation information transmitting unit 302, a prompt information output unit 303;
an invitation information generating unit 301, configured to respond to a trigger operation executed for an invitation control in a virtual interaction interface, acquire a user name of a first type of user and a video name of target video data, and generate interaction invitation information associated with a service attribute of the target video data based on the user name and the video name;
an invitation information sending unit 302, configured to send interaction invitation information associated with the target video data to the service server, so that the service server broadcasts interaction prompt information corresponding to the interaction invitation information in a public service broadcast group where the first type of user is located;
the prompt information output unit 303 is configured to receive the interactive prompt information broadcast by the service server, and output the interactive prompt information to the public service broadcast group where the first type of user is located.
For a specific implementation manner of the invitation information generating unit 301, the invitation information sending unit 302, and the prompt information outputting unit 303, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, which will not be described again here.
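The generation step performed by the invitation information generating unit 301 — combining the user name of the first type of user with the video name of the target video data — may be sketched as follows. The field names and the invitation wording are illustrative assumptions, not part of the claimed implementation:

```python
def generate_interaction_invitation(user_name, video_name, service_attribute):
    """Generate interaction invitation information associated with the
    service attribute of the target video data, based on the user name of
    the first type of user and the video name."""
    return {
        "service_attribute": service_attribute,
        "text": f'{user_name} invites you to watch "{video_name}" together',
    }
```

The resulting structure is what the invitation information sending unit 302 would forward to the service server for broadcast in the public service broadcast group.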
The image data receiving module 40 is configured to receive second user image data of a second type of user, which is sent by a user terminal corresponding to the second type of user, and output the second user image data to the virtual seating area;
wherein, the image data receiving module 40 includes: a response information receiving unit 401, a user image output unit 402, and a task state updating unit 403;
a response information receiving unit 401, configured to receive, when a second-type user in the public service broadcast group obtains interaction invitation information based on the interaction prompt information, interaction response information returned by a user terminal corresponding to the second-type user based on the interaction invitation information; the interactive response information comprises second user image data of a second type of user and an interactive sequence number of an interactive task indicated by the interactive invitation information participated by the second type of user;
a user image output unit 402, configured to determine a virtual seat corresponding to the interaction sequence number in the virtual seating area, and output the second user image data to the virtual seat in the virtual seating area.
Optionally, the interactive response information includes a user name of the second type of user; the rotated browsing auxiliary area comprises a third area; the third area is used for displaying the task state of the interactive task corresponding to the video service data;
a task state updating unit 403, configured to receive the number of interactive users counted by the service server based on the interactive response information when outputting the user name including the second type of user to the virtual interactive interface, and update the task state of the interactive task in the third area based on the number of interactive users.
For specific implementation manners of the response information receiving unit 401, the user image output unit 402, and the task state updating unit 403, reference may be made to the description of step S104 in the embodiment corresponding to fig. 3, and details will not be further described here.
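The seat determination performed by the user image output unit 402 — mapping a responding user's interaction sequence number to a virtual seat and outputting the second user image data there — may be sketched as follows. One-based seat numbering and the occupied-seat check are assumptions of this sketch:

```python
def place_second_type_user(seating_area, interaction_sequence_number, user_image_data):
    """Determine the virtual seat corresponding to the interaction sequence
    number and output the second user image data to that seat."""
    index = interaction_sequence_number - 1  # assume seats are numbered from 1
    if not 0 <= index < len(seating_area):
        raise ValueError("interaction sequence number outside the seating area")
    if seating_area[index] is not None:
        raise ValueError("virtual seat already occupied")
    seating_area[index] = user_image_data
    return seating_area
```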
And the video data playing module 50 is configured to play the target video data for the first type of user and the second type of user in the virtual seating area in the virtual playing room corresponding to the public service broadcast group.
If the service attribute is a first service type; the interactive task corresponding to the target video data is a first task;
the video data playing module 50 includes: a studio creating unit 501, a first interface output unit 502, and a first playback unit 503; optionally, the video data playing module 50 further includes: a task state detection unit 504, a second interface output unit 505 and a second playing unit 506;
a studio creating unit 501, configured to receive a first virtual studio corresponding to a public service broadcast group created by a service server based on a first service type when a first type of user completes data transfer operation;
a first interface output unit 502, configured to respond to a trigger operation executed for a sharing start control associated with the first virtual studio within a task invitation duration corresponding to the first task, and output a video sharing interface corresponding to the first virtual studio;
the first playing unit 503 is configured to invoke a video player in the video sharing interface when the task invitation duration reaches the invitation duration threshold, and play the target video data for the first type of user and the second type of user in the virtual seating area.
Optionally, if the service attribute is a second service type; the interactive task corresponding to the target video data is a second task;
a task state detection unit 504, configured to receive a second virtual studio corresponding to a public service broadcast group created by the service server based on the second service type when detecting that a task state corresponding to the second task is a completion state within a task invitation duration corresponding to the second task; the completion state means that the service server counts that the number of the interactive users participating in the response of the second task reaches the interactive user threshold; the interactive user threshold is less than or equal to the total number of users in the public service broadcast group;
a second interface output unit 505, configured to respond to a trigger operation executed for a sharing start control associated with the second virtual studio within the task invitation duration, and output a video sharing interface corresponding to the second virtual studio;
and a second playing unit 506, configured to invoke the video player in the video sharing interface when the task invitation duration reaches the invitation duration threshold, and play the target video data for the first type of user and the second type of user in the virtual seating area.
For specific implementation manners of the studio creating unit 501, the first interface output unit 502, the first playing unit 503, the task state detection unit 504, the second interface output unit 505, and the second playing unit 506, reference may be made to the description of step S105 in the embodiment corresponding to fig. 3, and details will not be further described here.
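The completion-state condition detected by the task state detection unit 504 — the second task enters the completion state once the counted number of interactive users reaches the interactive user threshold, which is less than or equal to the total number of users in the public service broadcast group — may be sketched as follows (function and state names are illustrative):

```python
def second_task_state(interactive_user_count, interactive_user_threshold, group_user_total):
    """Return the task state of the second task: the completion state is
    reached once the number of users participating in the response reaches
    the interactive user threshold."""
    if interactive_user_threshold > group_user_total:
        raise ValueError("threshold may not exceed the group's total user count")
    if interactive_user_count >= interactive_user_threshold:
        return "completed"
    return "in_progress"
```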
Optionally, the session interface control module 60 is configured to obtain a session interface corresponding to a public service broadcast group in the application client, respond to a trigger operation for the session interface, and display a target service program in the application client in the session interface; the target service program is an embedded subprogram embedded in the application client;
a video data request module 70, configured to send a video acquisition request to a service server corresponding to an application client in response to a trigger operation executed for a target service program; the video acquisition request is used for indicating the service server to screen K video data from the video data recommendation system so as to form a video recommendation list; k is a positive integer;
and the recommendation list receiving module 80 is configured to receive the video recommendation list returned by the service server, and output the K video data in the video recommendation list to a video recommendation interface corresponding to the target service program in the application client.
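The screening step indicated by the video acquisition request — the service server selecting K pieces of video data from the video data recommendation system to form a video recommendation list — may be sketched as follows. The document does not fix the screening criterion, so ranking by a per-video score is an assumption of this sketch:

```python
def screen_video_recommendation_list(candidate_videos, k):
    """Screen K pieces of video data from the recommendation pool to form a
    video recommendation list (K is a positive integer)."""
    if k < 1:
        raise ValueError("K must be a positive integer")
    ranked = sorted(candidate_videos, key=lambda video: video.get("score", 0.0), reverse=True)
    return ranked[:k]
```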
For specific implementation manners of the detail page obtaining module 10, the area rotating module 20, the invitation information sending module 30, the image data receiving module 40, the video data playing module 50, the session interface control module 60, the video data requesting module 70, and the recommendation list receiving module 80, reference may be made to the description of step S101 to step S1051 in the embodiment corresponding to fig. 3, and details will not be further described here.
It is to be understood that the video data processing apparatus 1 in this embodiment of the application can perform the description of the video data processing method in the embodiment corresponding to fig. 3 or fig. 9, which is not repeated herein. In addition, the beneficial effects of the same method are not described in detail.
Further, please refer to fig. 14, which is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 14, the computer device 4000 may be a user terminal, which may be the user terminal 3000a in the embodiment corresponding to fig. 1; optionally, the computer device 4000 may also be a service server, which may be the service server 1000 in the embodiment corresponding to fig. 1. For convenience of understanding, the embodiment of the present application takes a user terminal as an example. The computer device 4000 may include: a processor 1001, a network interface 1004, and a memory 1005; the computer device 4000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to enable connection communication between these components. The user interface 1003 may include a Display screen (Display) and a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory). The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 14, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 4000 shown in fig. 14, the network interface 1004 may provide a network communication function; the user interface 1003 is mainly used to provide an interface for user input; and the processor 1001 may be used to invoke the device control application stored in the memory 1005 to implement:
acquiring a video detail page corresponding to target video data in an application client, responding to a trigger operation executed by a first type user aiming at a task initiating control in the video detail page, and outputting a data transfer page corresponding to a service attribute indicated by the task initiating control; the video detail page comprises a browsing auxiliary area associated with the target video data;
when the data transfer operation is completed on the data transfer page, returning a display interface of the application client from the data transfer page to the video detail page, rotating a browsing auxiliary area in the video detail page, taking the display interface where the rotated browsing auxiliary area is located as a virtual interaction interface, and outputting first user image data of a first type of user in a virtual seating area of the virtual interaction interface;
responding to a triggering operation executed aiming at an invitation control in a virtual interactive interface, and outputting interactive invitation information associated with target video data to a public service broadcast group where a first type of user is located; the public service broadcast group comprises a second type of user; the second type user is a user participating in responding to the interactive invitation information in the public service broadcast group;
receiving second user image data of a second type user, which is sent by a user terminal corresponding to the second type user, and outputting the second user image data to the virtual seating area;
and playing target video data for the first type users and the second type users in the virtual seating area in a virtual playing room corresponding to the public service broadcasting group.
It should be understood that the computer device 4000 described in this embodiment may perform the description of the video data processing method in the embodiment corresponding to fig. 3 or fig. 9, and may also perform the description of the video data processing apparatus 1 in the embodiment corresponding to fig. 13, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, it is to be noted here that an embodiment of the present application further provides a computer storage medium, which stores the aforementioned computer program executed by the video data processing apparatus 1. The computer program includes program instructions which, when executed by a processor, perform the video data processing method described in the embodiment corresponding to fig. 3 or fig. 9; details will therefore not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium referred to in the present application, reference is made to the description of the method embodiments of the present application.
Further, please refer to fig. 15, fig. 15 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present application. The video data processing apparatus 2 may include: an authentication request receiving module 100, a transfer request receiving module 200, an invitation information pushing module 300, an image data forwarding module 400 and a virtual studio creating module 500;
an authentication request receiving module 100, configured to receive an authentication request initiated by a first user terminal, authenticate a first type of user corresponding to the first user terminal based on the authentication request, and return a data transfer page to the first user terminal; the authentication request is obtained after a first type of user initiates a trigger operation executed by a control aiming at a task in a video detail page; the video detail page comprises a browsing auxiliary area associated with the target video data;
a transfer request receiving module 200, configured to receive a data transfer request sent by a first user terminal, and return a data transfer page to the first user terminal based on the data transfer request; the data transfer request is obtained by the first user terminal responding to a trigger operation executed aiming at the target video data in the list display interface;
the invitation information pushing module 300 is configured to receive interaction invitation information sent by a first user terminal, and push the interaction invitation information to a public service broadcast group in which a first type of user is located; the public service broadcast group comprises a second type of user; the second type user is a user participating in responding to the interactive invitation information in the public service broadcast group; the interaction invitation information is obtained by the first user terminal responding to the triggering operation executed aiming at the invitation control in the virtual interaction interface; the virtual interaction interface is determined after the first user terminal performs rotation operation on a browsing auxiliary area in the video detail page; the rotated browsing auxiliary area comprises a virtual seating area; the virtual seating area is used for displaying first user image data of a first type of user;
the image data forwarding module 400 is configured to receive second user image data of a second type user sent by a second user terminal corresponding to the second type user when the second type user participates in responding to the interaction invitation information, and forward the second user image data to the first user terminal, so that the first user terminal outputs the second user image data to the virtual seating area;
a virtual studio creating module 500, configured to receive a virtual studio creating instruction initiated by a first user terminal, and create a virtual studio corresponding to a public service broadcast group; the virtual playing room is used for playing the target video data for the first type users and the second type users in the virtual seating area.
For a specific implementation manner of the authentication request receiving module 100, the transfer request receiving module 200, the invitation information pushing module 300, the image data forwarding module 400, and the virtual studio creating module 500, reference may be made to the description of the service server in the embodiment corresponding to fig. 9, which will not be further described here.
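The push step performed by the invitation information pushing module 300 on the service server side — delivering the interaction prompt information to the public service broadcast group where the first type of user is located — may be sketched as follows. Skipping the initiating first type of user, and returning the delivery map, are assumptions of this sketch:

```python
def push_interaction_prompt(group_members, first_type_user, prompt_information):
    """Push the interaction prompt information corresponding to the
    interaction invitation to the members of the public service broadcast
    group (the initiating user is skipped in this sketch)."""
    return {
        member: prompt_information
        for member in group_members
        if member != first_type_user
    }
```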
Further, please refer to fig. 16, which is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 16, the computer device 5000 may be a user terminal, which may be the user terminal 3000a in the embodiment corresponding to fig. 1; optionally, the computer device 5000 may also be a service server, which may be the service server 1000 in the embodiment corresponding to fig. 1. For convenience of understanding, the embodiment of the present application takes a user terminal as an example. The computer device 5000 may include: a processor 5001, a network interface 5004, and a memory 5005; the computer device 5000 may further include: a user interface 5003 and at least one communication bus 5002. The communication bus 5002 is used to implement connection communication between these components. The optional user interface 5003 may also include a standard wired interface and a wireless interface. The network interface 5004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 5005 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory). The memory 5005 may optionally be at least one storage device located remotely from the processor 5001. As shown in fig. 16, the memory 5005, which is a type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 5000 shown in fig. 16, the network interface 5004 may provide a network communication function; the user interface 5003 is mainly used to provide an interface for user input; and the processor 5001 may be used to invoke the device control application stored in the memory 5005 to implement:
receiving an authentication request initiated by a first user terminal, authenticating a first type of user corresponding to the first user terminal based on the authentication request, and returning a data transfer page to the first user terminal; the authentication request is obtained after a first type of user initiates a trigger operation executed by a control aiming at a task in a video detail page; the video detail page comprises a browsing auxiliary area associated with the target video data;
receiving a data transfer request sent by a first user terminal, and returning a data transfer page to the first user terminal based on the data transfer request; the data transfer request is obtained by the first user terminal responding to a trigger operation executed aiming at the target video data in the list display interface;
receiving interaction invitation information sent by a first user terminal, and outputting the interaction invitation information to a public service broadcast group where a first type of user is located; the public service broadcast group comprises a second type of user; the second type user is a user participating in responding to the interactive invitation information in the public service broadcast group; the interaction invitation information is obtained by the first user terminal responding to the triggering operation executed aiming at the invitation control in the virtual interaction interface; the virtual interaction interface is determined after the first user terminal performs rotation operation on a browsing auxiliary area in the video detail page; the rotated browsing auxiliary area comprises a virtual seating area; the virtual seating area is used for displaying first user image data of a first type of user;
when a second type user participates in response to the interaction invitation information, receiving second user image data of the second type user, which is sent by a second user terminal corresponding to the second type user, and sending the second user image data to the first user terminal so that the first user terminal outputs the second user image data to the virtual seating area;
receiving a virtual playing room creating instruction initiated by a first user terminal, and creating a virtual playing room corresponding to a public service broadcast group; the virtual playing room is used for playing the target video data for the first type users and the second type users in the virtual seating area.
It will be appreciated that embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and executes the computer instruction, so that the computer device executes the description of the video data processing method in the embodiment corresponding to fig. 3 or fig. 9, which is described above, and therefore, the description of this embodiment will not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the computer storage medium referred to in the present application, reference is made to the description of the embodiments of the method of the present application.
Further, please refer to fig. 17, where fig. 17 is a schematic structural diagram of a video data processing system according to an embodiment of the present application. The video data processing system 3 may comprise a video data processing apparatus 1a and a video data processing apparatus 2a. The video data processing apparatus 1a may be the video data processing apparatus 1 in the embodiment corresponding to fig. 13, and it can be understood that the video data processing apparatus 1a may be integrated in the terminal 10a in the embodiment corresponding to fig. 2; therefore, details thereof will not be repeated here. The video data processing apparatus 2a may be the video data processing apparatus 2 in the embodiment corresponding to fig. 15, and it is understood that the video data processing apparatus 2a may be integrated in the server 20a in the embodiment corresponding to fig. 2; therefore, details thereof will not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in the embodiments of the video data processing system to which the present application relates, reference is made to the description of the method embodiments of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, and the program can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application, so that the present application is not limited thereto, and all equivalent variations and modifications can be made to the present application.

Claims (20)

1. A method of processing video data, comprising:
responding to a trigger operation for target video data in an application client, outputting a video detail page corresponding to the target video data, responding to a trigger operation executed by a first type user for a package field control in the video detail page, and outputting an order configuration page corresponding to a package field attribute indicated by the package field control;
responding to a trigger operation for order configuration information in the order configuration page, and outputting a data transfer page corresponding to the package field attribute;
when a data transfer operation is completed for service auxiliary information in the data transfer page, returning a display interface of the application client from the data transfer page to the video detail page, and outputting first user image data of the first type user in a browsing auxiliary area in the video detail page;
responding to a trigger operation executed for an invitation control, and pushing interaction invitation information associated with the target video data to a user terminal corresponding to a second type user;
when the second type user participates in responding to the interaction invitation information, outputting second user image data of the second type user to the browsing auxiliary area in the video detail page;
and playing the target video data for the first type user and the second type user.
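For readers tracing the claimed flow, the page transitions recited in claim 1 can be summarized as a small state machine. The sketch below is purely illustrative and is not the claimed implementation; every name in it (`Page`, `ViewingSession`, the method names, the image-data strings) is a hypothetical stand-in.

```python
from enum import Enum, auto

class Page(Enum):
    """Hypothetical page states for the claim-1 flow."""
    VIDEO_DETAIL = auto()
    ORDER_CONFIG = auto()
    DATA_TRANSFER = auto()

class ViewingSession:
    """Minimal sketch of the claim-1 interaction flow (illustrative only)."""
    def __init__(self):
        self.page = None
        self.seated_users = []   # image data shown in the browsing auxiliary area
        self.playing = False

    def open_video_detail(self):          # trigger operation on target video data
        self.page = Page.VIDEO_DETAIL

    def tap_package_field_control(self):  # first type user initiates the package task
        self.page = Page.ORDER_CONFIG

    def confirm_order(self):              # trigger on the order configuration info
        self.page = Page.DATA_TRANSFER

    def complete_transfer(self, first_user_image):
        self.page = Page.VIDEO_DETAIL     # return to the video detail page
        self.seated_users.append(first_user_image)

    def accept_invitation(self, second_user_image):
        self.seated_users.append(second_user_image)

    def play(self):
        self.playing = True
```

A walk through the happy path visits the three pages in order and ends back on the video detail page with both users' image data in the browsing auxiliary area before playback starts.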
2. The method according to claim 1, wherein the outputting a video detail page corresponding to target video data in response to a trigger operation on the target video data in an application client, and outputting an order configuration page corresponding to a package field attribute indicated by a package field control in response to a trigger operation performed by a first type of user on the package field control in the video detail page, comprises:
when a session interface corresponding to a public service broadcast group in the application client displays a target service program, responding to a trigger operation executed for the target service program, and outputting K pieces of video data on a video recommendation interface corresponding to the target service program;
responding to trigger operation aiming at target video data in the K pieces of video data, and outputting a video detail page corresponding to the target video data;
and responding to the triggering operation executed by the first type user for the package field control in the video detail page, and outputting an order configuration page corresponding to the package field attribute indicated by the package field control.
3. The method according to claim 2, wherein, when the session interface corresponding to the public service broadcast group in the application client displays the target service program, the outputting K pieces of video data on the video recommendation interface corresponding to the target service program in response to a trigger operation executed for the target service program includes:
acquiring a session interface corresponding to a public service broadcast group in an application client;
when the grouping task aiming at the target video data does not exist in the session interface, responding to the triggering operation of a first type user in the public service broadcasting group aiming at the session interface, and displaying a target service program in the application client in a subprogram display area of the session interface; the target business program is an embedded subprogram embedded in the application client;
responding to a trigger operation executed aiming at the target service program, and sending a video acquisition request to a service server corresponding to the target service program; the video acquisition request is used for indicating the service server to screen K video data fitting the first type of user interest from a video data recommendation system; k is a positive integer;
and receiving the K video data returned by the service server, and outputting the K video data to a video recommendation interface corresponding to the target service program in the application client.
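Claim 3 has the service server screen, from a video data recommendation system, K videos fitting the first type user's interests. The claims do not specify the ranking rule, so the sketch below assumes a simple tag-overlap score; the function name, the `tags` field, and the scoring are all illustrative assumptions.

```python
def screen_top_k(candidates, user_interests, k):
    """Illustrative sketch of claim-3 screening: rank candidate videos by
    how many tags they share with the first type user's interest tags and
    return the top K. The scoring rule is an assumption, not claimed."""
    def score(video):
        return len(set(video["tags"]) & set(user_interests))
    # sorted() is stable, so equal-score candidates keep their input order
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:k]
```

In practice the recommendation system would use a far richer model; this only shows the shape of "screen K videos fitting the user's interest".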
4. The method of claim 3, wherein, when the application client is a social client, the subprogram display area contains a plurality of embedded subprograms; the target service program is an embedded subprogram, selected by the first type user from the plurality of embedded subprograms, for synchronous film viewing in an online cinema viewing scene.
5. The method according to claim 2, wherein the outputting a video detail page corresponding to a target video data in the K video data in response to a trigger operation for the target video data comprises:
responding to the triggering operation of the first type user on target video data in the K video data, and sending a detail page acquisition request to a service server corresponding to the target service program, so that the service server inquires video detail information of the target video data from a multimedia content database based on the detail page acquisition request;
and receiving the video detail information of the target video data returned by the service server, and outputting the video detail information to a video detail page corresponding to the target video data.
6. The method according to claim 5, wherein when the service scene corresponding to the application client is an offline cinema viewing scene, the method further comprises:
displaying a first area and a second area of the browsing auxiliary area in a plan view manner in the video detail page; the plan view is the plane in which the browsing auxiliary area of the video detail page lies; the first area is an area where a projection screen of a simulated cinema is located in the offline cinema viewing scene; the second area is an area where seats of the simulated cinema are located in the offline cinema viewing scene.
7. The method according to claim 6, wherein the browsing assistance area in the video detail page further comprises a third area, and the third area is used for showing task states within a task invitation duration corresponding to a package task with a grouping property initiated by the first type user for the target video data in the offline cinema viewing scene; wherein the task state comprises a grouping success state or a grouping failure state; the grouping success state refers to a task state when the first type user invites a specified number of interactive users within a task invitation duration corresponding to the package field task; the grouping failure state refers to a task state when the first type user does not invite to a specified number of interactive users within the task invitation duration corresponding to the package field task.
8. The method according to claim 2, wherein the outputting the order configuration page corresponding to the package field attribute indicated by the package field control in response to the triggering operation performed by the first type user for the package field control in the video detail page comprises:
responding to the triggering operation executed by the first type user aiming at the package field control in the video detail page, and sending an authentication request to a service server corresponding to the target service program; the authentication request is used for indicating the service server to acquire an order configuration page corresponding to the package field attribute indicated by the package field control when the first type user is determined to have the qualification of initiating the package field task; the package field control is a control used for initiating the package field task in the video detail page;
and receiving the order configuration page returned by the service server, and outputting the order configuration page in the application client.
9. The method of claim 1, wherein the order configuration page includes a purchase quantity selection area; the purchase quantity selection area is used for indicating the quantity of the single-point tickets needing to be packaged by the first type of users; the number of the single-point tickets selected in the purchase number selection area is less than or equal to N; n is the total number of people in the public service broadcast group where the first type of user is located, and N is a positive integer greater than or equal to 2;
the responding to the trigger operation aiming at the order configuration information in the order configuration page and outputting the data transfer page corresponding to the package field attribute comprises the following steps:
after the number of the single-point coupons is determined in the purchase number selection area, taking the determined number of the single-point coupons as order configuration information in the order configuration page;
responding to a trigger operation executed by a payment control in the order configuration page, and sending the order configuration information to a service server corresponding to the application client, so that the service server acquires a data transfer page corresponding to a package field attribute indicated by the package field control when determining that the first type user has the qualification of initiating a package field task;
and receiving the data transfer page returned by the service server, and outputting the data transfer page in the application client.
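Claim 9 bounds the number of single-point tickets the first type user may package: it must not exceed N, the total membership of the public service broadcast group, with N at least 2. A minimal validation sketch; the function name, the lower bound of 1, and the error messages are assumptions.

```python
def validate_ticket_count(count, group_size):
    """Illustrative check of the claim-9 constraint: the selected number of
    single-point tickets is at most N, the total number of members of the
    public service broadcast group, where N >= 2. The lower bound of 1 and
    the error messages are assumptions."""
    if group_size < 2:
        raise ValueError("group must have at least 2 members")
    if not (1 <= count <= group_size):
        raise ValueError(f"ticket count must be in [1, {group_size}]")
    return count
```

A client could run this check before sending the order configuration information to the service server.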
10. The method of claim 9, further comprising:
when the data transfer operation is completed aiming at the service auxiliary information in the data transfer page, receiving a virtual playing room corresponding to the public service broadcast group created by the service server based on the package yard attribute; the business auxiliary information at least comprises the number of the single-point coupons related to the package yard task, the value corresponding to the number of the single-point coupons and the account balance in a default payment channel;
responding to a trigger operation executed by a sharing starting control associated with the virtual playing room within an invitation duration corresponding to the package field task, starting the virtual playing room, and outputting a video sharing interface corresponding to the virtual playing room; the video sharing interface is used for playing the target video data for the first type of users and the second type of users in the browsing auxiliary area.
11. The method of claim 1, wherein the package field control is a control in the video details page for initiating a package field task;
when the data transfer operation is completed for the business auxiliary information in the data transfer page, returning the display interface of the application client from the data transfer page to the video detail page, and outputting the first user image data of the first type of user in the browsing auxiliary area in the video detail page, including:
when data transfer operation is completed aiming at the business auxiliary information in the data transfer page, returning the display interface of the application client from the data transfer page to the video detail page;
and rotating the browsing auxiliary area in the video detail page, taking a display interface where the rotated browsing auxiliary area is located as a virtual interaction interface, designating a virtual seat with a house owner identifier for the first type of user in a virtual seating area of the virtual interaction interface, and outputting first user image data of the first type of user on the virtual seat with the house owner identifier.
12. The method of claim 11, wherein returning the display interface of the application client from the data transfer page to the video details page when the data transfer operation is completed for the business assistance information in the data transfer page comprises:
when the data transfer operation is completed aiming at the business auxiliary information in the data transfer page, receiving a data transfer certificate returned by a business server corresponding to the application client, and outputting transfer success prompt information on the data transfer page; the data transfer voucher is used for representing that the first type user has the authority to initiate the package yard task, and the data transfer voucher is used for indicating a nine-layer delivery system associated with the business server to provide delivery service of the package yard attribute for the first type user;
and responding to a return operation executed on the data transfer page where the transfer success prompt information is located, and returning the display interface of the application client from the data transfer page to the video detail page.
13. The method according to claim 11, wherein the browsing assistance area in the video detail page comprises a first area and a second area which are displayed in a flat view manner;
the rotating the browsing auxiliary area in the video detail page, taking a display interface where the rotated browsing auxiliary area is located as a virtual interaction interface, designating a virtual seat with a house owner identifier for the first type of user in a virtual seating area of the virtual interaction interface, and outputting first user image data of the first type of user on the virtual seat with the house owner identifier includes:
determining the first area and the second area displayed in a flat view mode in the browsing auxiliary area of the video detail page, taking a direction perpendicular to a flat view of the browsing auxiliary area in the video detail page as a rotation direction, performing rotation operation on the first area and the second area based on the rotation direction, and determining the browsing auxiliary area where the rotated first area and the rotated second area are located as the rotated browsing auxiliary area; the size of the browsing auxiliary area after rotation is the same as that of the browsing auxiliary area before rotation;
and taking a display interface where the rotated browsing auxiliary area is located as a virtual interaction interface corresponding to the package yard task, determining the rotated second area as a virtual seating area in the virtual interaction interface, designating a virtual seat with a house owner identifier for the first type of user in the virtual seating area, and outputting first user image data of the first type of user on the virtual seat with the house owner identifier.
14. The method of claim 13, wherein the first area is an area where a projection screen is located and the second area is an area where a seat is located;
the performing a rotation operation on the first area and the second area based on the rotation direction includes:
and adaptively reducing the area where the projection screen is located based on the rotation direction, and adaptively increasing the area where the seat is located based on the rotation direction.
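Claims 13 and 14 rotate the browsing auxiliary area while keeping its overall size unchanged, adaptively reducing the projection-screen area and adaptively enlarging the seating area. The sketch below makes the simplifying assumption that a single height offset moves from one area to the other, which is one way (not the claimed way) to satisfy both conditions at once.

```python
def rotate_browsing_area(screen_h, seat_h, shrink):
    """Illustrative sketch of claims 13-14: after rotation, the browsing
    auxiliary area keeps its total size, while the projection-screen area
    shrinks and the seating area grows by the same amount. The single
    'shrink' offset is a simplifying assumption."""
    if not 0 < shrink < screen_h:
        raise ValueError("shrink must be positive and leave a visible screen")
    return screen_h - shrink, seat_h + shrink
```

Because the same offset is subtracted from one area and added to the other, the rotated browsing auxiliary area's total height equals the pre-rotation total, matching the claim-13 size condition.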
15. The method of claim 1, wherein the first type user is a user in a public service broadcast group of the application client; the display interface on which the first user image data is located is a virtual interaction interface, and the virtual interaction interface is obtained after a selection operation is performed on the browsing auxiliary area in the video detail page;
the step of pushing the interaction invitation information associated with the target video data to a user terminal corresponding to a second type of user in response to the triggering operation executed by the invitation control comprises the following steps:
responding to a triggering operation executed aiming at an invitation control in the virtual interaction interface, and acquiring a user name of the first type user and a video name of the target video data;
generating interaction invitation information associated with the package field attribute of the target video data based on the user name and the video name, and sending the interaction invitation information to a service server so that the service server broadcasts interaction prompt information corresponding to the interaction invitation information to user terminals corresponding to users of a second type in the public service broadcast group where the first type of user is located; and the interaction prompt information is used for displaying in a mode of a ceiling bar in the user terminal corresponding to the second type user.
16. The method of claim 1, wherein the first user image data is output for display in a virtual seating area of the rotated browsing aid area; the rotated browsing auxiliary area is obtained by rotating the browsing auxiliary area in the video detail page;
the outputting the second user image data to the browsing auxiliary area in the video detail page when the second type user participates in responding to the interaction invitation information comprises:
when a user terminal corresponding to a second type user in the public service broadcast group receives the interaction invitation information broadcasted by a service server, receiving interaction response information returned by the user terminal corresponding to the second type user aiming at the interaction invitation information; the interactive response information comprises second user image data of the second type of user and an interactive sequence number of the package field task indicated by the interactive invitation information participated by the second type of user; the interaction sequence number is determined by the service server based on the participation order of the user terminal corresponding to the second type user, and the interaction sequence number and the seat number in the virtual seating area have a mapping relation;
and determining a virtual seat corresponding to a seat number mapped by the interaction sequence number in the virtual seating area based on the mapping relation, and outputting the second user image data to the virtual seat corresponding to the seat number in the virtual seating area.
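Claim 16 has the service server assign each second type user an interaction sequence number by join order, with a mapping relation taking that number to a seat number in the virtual seating area. The claims leave the mapping itself unspecified, so the sketch below assumes a simple index-based one; the function name, seat labels, and duplicate-seat error are all illustrative.

```python
def assign_virtual_seat(seq_no, seat_numbers, occupied):
    """Illustrative sketch of the claim-16 mapping: an interaction sequence
    number (1-based, assigned by participation order) maps to a seat number
    in the virtual seating area. The index-based rule is an assumption."""
    seat = seat_numbers[(seq_no - 1) % len(seat_numbers)]
    if seat in occupied:
        raise ValueError(f"seat {seat} already occupied")
    occupied.add(seat)
    return seat
```

The second type user's image data would then be output on the virtual seat this mapping selects.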
17. A method of processing video data, comprising:
responding to a trigger operation aiming at target video data in an application client, outputting a video detail page corresponding to the target video data, responding to a trigger operation executed by a first type user aiming at a task initiating control in the video detail page, and outputting a data transfer page corresponding to a service attribute indicated by the task initiating control;
acquiring a data transfer operation executed for the service auxiliary information in the data transfer page, and outputting transfer success prompt information on the data transfer page when completion of the data transfer operation is confirmed;
returning the display interface of the application client from the data transfer page to the video detail page;
outputting first user image data for the first type of user in a browse assist area in the video details page.
18. A computer device, comprising: a processor and a memory;
the processor is coupled to the memory, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program to cause the computer device to perform the method of any one of claims 1-17.
19. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause a computer device having the processor to perform the method of any of claims 1-17.
20. A computer program product, characterized in that it comprises computer instructions stored in a computer readable storage medium, which computer instructions are adapted to be read and executed by a processor to cause a computer device having said processor to perform the method of any of claims 1-17.
CN202111151931.8A 2020-08-03 2020-08-03 Video data processing method and device and storage medium Active CN113905265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111151931.8A CN113905265B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010769097.8A CN111741351B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium
CN202111151931.8A CN113905265B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010769097.8A Division CN111741351B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium

Publications (2)

Publication Number Publication Date
CN113905265A true CN113905265A (en) 2022-01-07
CN113905265B CN113905265B (en) 2022-10-14

Family

ID=72657166

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010769097.8A Active CN111741351B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium
CN202111151931.8A Active CN113905265B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010769097.8A Active CN111741351B (en) 2020-08-03 2020-08-03 Video data processing method and device and storage medium

Country Status (1)

Country Link
CN (2) CN111741351B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114584838B (en) * 2020-11-28 2024-05-17 腾讯科技(北京)有限公司 Multimedia data progress control method, device and readable storage medium
CN112565657B (en) * 2020-11-30 2023-09-15 百果园技术(新加坡)有限公司 Call interaction method, device, equipment and storage medium
CN112911368A (en) 2021-01-15 2021-06-04 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN115460233A (en) * 2021-05-20 2022-12-09 华为技术有限公司 Application-based equipment connection relation establishing method and related device
CN113596560B (en) * 2021-07-26 2023-03-24 北京达佳互联信息技术有限公司 Resource processing method, device, terminal and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080201196A1 (en) * 2007-02-13 2008-08-21 Bed Bath & Beyond Procurement Co. Inc. Method and system for event planning
US20110225518A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Friends toolbar for a virtual social venue
CN108234295A (en) * 2017-12-29 2018-06-29 努比亚技术有限公司 Display control method, terminal and the computer readable storage medium of group's functionality controls
CN108415552A (en) * 2017-02-09 2018-08-17 南宁富桂精密工业有限公司 Virtual cinema interaction system and method
CN109525902A (en) * 2018-11-15 2019-03-26 贵阳语玩科技有限公司 A kind of method and device of more people's Real-Time Sharing videos
CN109819276A (en) * 2017-11-20 2019-05-28 腾讯科技(深圳)有限公司 Method, apparatus, computer equipment and the storage medium of video playing
CN111343476A (en) * 2020-03-06 2020-06-26 北京达佳互联信息技术有限公司 Video sharing method and device, electronic equipment and storage medium
CN111405321A (en) * 2020-04-22 2020-07-10 聚好看科技股份有限公司 Video acquisition method, display device and server
CN111414565A (en) * 2020-03-27 2020-07-14 北京字节跳动网络技术有限公司 Information display method and device, electronic equipment and storage medium

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453285B (en) * 2007-11-30 2010-10-27 华为终端有限公司 System and method for viewing program together
CN101710968B (en) * 2009-12-04 2013-12-18 深圳创维数字技术股份有限公司 Method for sharing and watching requested programmes through bidirectional set-top box and digital television broadcasting system thereof
US8893022B2 (en) * 2010-04-01 2014-11-18 Microsoft Corporation Interactive and shared viewing experience
US8825809B2 (en) * 2010-05-19 2014-09-02 Microsoft Corporation Asset resolvable bookmarks
CN105407071A (en) * 2014-08-29 2016-03-16 阿里巴巴集团控股有限公司 Information displaying method, client, server, and system
CN104519391A (en) * 2014-12-09 2015-04-15 常璨 Social system based on Internet television programs and working method of social system
US9998434B2 (en) * 2015-01-26 2018-06-12 Listat Ltd. Secure dynamic communication network and protocol
CN104902295B (en) * 2015-06-19 2018-03-23 腾讯科技(北京)有限公司 Intelligent television service implementation method, terminal device and system
CN105933790A (en) * 2016-04-29 2016-09-07 乐视控股(北京)有限公司 Video play method, device and system based on virtual movie theater
CN106303590B (en) * 2016-08-08 2020-08-18 腾讯科技(深圳)有限公司 Method and device for inviting to watch video film
CN108200458A (en) * 2018-02-02 2018-06-22 优酷网络技术(北京)有限公司 Video interaction method, subscription client, server and storage medium
CN108667798A (en) * 2018-03-27 2018-10-16 上海临奇智能科技有限公司 A kind of method and system of virtual viewing
CN108768832B (en) * 2018-05-24 2022-07-12 腾讯科技(深圳)有限公司 Interaction method and device between clients, storage medium and electronic device
CN109068168A (en) * 2018-08-03 2018-12-21 深圳市环球数码科技有限公司 A method of the spelling field formula film play mode based on movie theater is realized
CN109246448A (en) * 2018-08-21 2019-01-18 姜天鹏 Offline content of copyright distribution system and method
CN109523336A (en) * 2018-09-13 2019-03-26 北京三快在线科技有限公司 Data processing method, device, electronic equipment and readable storage medium storing program for executing
CN111027995A (en) * 2018-10-10 2020-04-17 人人好做商品交易中心股份有限公司 Community group-buying system capable of automatically spreading, promoting and operating
CN111385632B (en) * 2020-03-06 2021-08-13 腾讯科技(深圳)有限公司 Multimedia interaction method, device, equipment and medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080201196A1 (en) * 2007-02-13 2008-08-21 Bed Bath & Beyond Procurement Co. Inc. Method and system for event planning
US20110225518A1 (en) * 2010-03-10 2011-09-15 Oddmobb, Inc. Friends toolbar for a virtual social venue
CN108415552A (en) * 2017-02-09 2018-08-17 南宁富桂精密工业有限公司 Virtual cinema interaction system and method
CN109819276A (en) * 2017-11-20 2019-05-28 腾讯科技(深圳)有限公司 Method, apparatus, computer equipment and the storage medium of video playing
CN108234295A (en) * 2017-12-29 2018-06-29 努比亚技术有限公司 Display control method, terminal and the computer readable storage medium of group's functionality controls
CN109525902A (en) * 2018-11-15 2019-03-26 贵阳语玩科技有限公司 A kind of method and device of more people's Real-Time Sharing videos
CN111343476A (en) * 2020-03-06 2020-06-26 北京达佳互联信息技术有限公司 Video sharing method and device, electronic equipment and storage medium
CN111414565A (en) * 2020-03-27 2020-07-14 北京字节跳动网络技术有限公司 Information display method and device, electronic equipment and storage medium
CN111405321A (en) * 2020-04-22 2020-07-10 聚好看科技股份有限公司 Video acquisition method, display device and server

Also Published As

Publication number Publication date
CN113905265B (en) 2022-10-14
CN111741351A (en) 2020-10-02
CN111741351B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN111741351B (en) Video data processing method and device and storage medium
RU2427090C2 (en) System and method of organising group presentations of content and group communication during said presentations
RU2637461C2 (en) Method of electronic commerce through public broadcasting environment
US8943141B2 (en) Social networking system and methods of implementation
EP2680198A1 (en) Social networking system and methods of implementation
AU2016221369A1 (en) System and method for video communication
CN105051778A (en) System and method for interactive remote movie viewing, scheduling and social connections
CN111773667A (en) Live game interaction method and device, computer readable medium and electronic equipment
US20090064245A1 (en) Enhanced On-Line Collaboration System for Broadcast Presentations
JP2004248145A (en) Multi-point communication system
EP2722806A1 (en) System and method for advertising
CN111711528A (en) Network conference control method and device, computer readable storage medium and equipment
US11178461B2 (en) Asynchronous video conversation systems and methods
CN117278770A (en) Video interaction method, apparatus, device, storage medium and computer program product
CN115794012A (en) Method and apparatus for content recording and streaming
CN117255207A (en) Live broadcast interaction method and related products
CN116939285A (en) Video dubbing method and related products
KR20240019044A (en) Videoconferencing meeting slots via specific secure deep links
KR20240019045A (en) Videoconferencing meeting slots via specific secure deep links
KR20240019043A (en) Videoconferencing meeting slots via specific secure deep links
CN117560518A (en) Interaction method, device, electronic equipment and storage medium
CN117939179A (en) Live broadcast interaction method, device, equipment and storage medium
CN117560520A (en) Interaction method, device, electronic equipment and storage medium
CN117278810A (en) Video interaction method based on message and related equipment
TWM572516U (en) Video stream display device integrated into instant messaging platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40065968

Country of ref document: HK

GR01 Patent grant