WO2019080720A1 - Method for displaying and providing a co-performance map, client terminal, and server - Google Patents

Method for displaying and providing a co-performance map, client terminal, and server

Info

Publication number
WO2019080720A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
user
user identifier
video
node
Prior art date
Application number
PCT/CN2018/109964
Other languages
English (en)
Chinese (zh)
Inventor
王媛
张迪
蔡林
Original Assignee
优酷网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 优酷网络技术(北京)有限公司
Publication of WO2019080720A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782Web browsing, e.g. WebTV
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Definitions

  • The present application relates to the field of Internet technologies, and in particular to a method for displaying a co-performance map, a method for providing a co-performance map, a client, and a server.
  • On current video playing websites, a video uploader typically uploads a video, which is posted to the website after review. Users can then watch videos of interest on the website.
  • In terms of interaction on current video playing websites, a user can usually follow a video uploader, and when a followed uploader releases a new video, the user receives push information about the update.
  • Users can also communicate with video uploaders of interest by sending them private messages. Users typically interact with one another through the comments on a video, and can add each other as friends on the comment page.
  • The purpose of the embodiments of the present application is to provide a method for displaying a co-performance map, a method for providing a co-performance map, a client, and a server, which can offer users a convenient mode of interaction.
  • An embodiment of the present application provides a method for displaying a co-performance map. The method includes: obtaining, from a server, a co-performance video and a set of user identifiers associated with the co-performance video, and displaying the co-performance video and the user identifiers in the set in a current interface, where each user pointed to by a user identifier in the set participates in recording part of the content of the co-performance video; when a target user identifier in the current interface is triggered, sending a co-performance map acquisition request containing the target user identifier to the server; and receiving and displaying a co-performance map corresponding to the target user identifier fed back by the server, where the co-performance map includes a map node representing the target user identifier and map nodes representing other user identifiers, and each user pointed to by the other user identifiers participates with the user pointed to by the target user identifier in recording at least one co-performance video.
  • An embodiment of the present application further provides a client. The client includes a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the steps of the above display method.
  • An embodiment of the present application further provides a method for providing a co-performance map. The method includes: receiving a video acquisition request sent by a client, and feeding back to the client a co-performance video and a set of user identifiers associated with the co-performance video, where each user pointed to by a user identifier in the set participates in recording part of the content of the co-performance video; receiving, from the client, a co-performance map acquisition request containing a target user identifier; and generating a co-performance map corresponding to the target user identifier and providing it to the client.
  • The co-performance map includes a map node representing the target user identifier and map nodes representing other user identifiers; each user pointed to by the other user identifiers participates with the user pointed to by the target user identifier in recording at least one co-performance video.
  • An embodiment of the present application further provides a server. The server includes a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the steps of the above providing method.
  • With the technical solution provided by the present application, when viewing a co-performance video recorded by multiple users, a user can select a target user identifier of interest from the identifiers of the participating users displayed in the current interface.
  • The server may then feed back a co-performance map corresponding to the target user identifier.
  • The co-performance map displays, in the form of map nodes in the current interface, the user pointed to by the target user identifier together with the users who have participated with that user in recording videos, in a visual and intuitive manner.
  • The technical solution provided by the present application can thus not only intuitively display information about the multiple users who jointly recorded a co-performance video, but also simplify the process of adding friends, making interaction between users of a video playing website very convenient.
  • FIG. 1 is a flowchart of a method for displaying a co-performance map in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a current interface in an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a co-performance map in an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a co-performance map with connection lines added in an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a co-performance map in a practical application scenario;
  • FIG. 6 is a schematic diagram of triggering a map node in an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a client in an embodiment of the present application;
  • FIG. 8 is a flowchart of a method for providing a co-performance map in an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a server in an embodiment of the present application.
  • The present application provides a method for displaying a co-performance map, which can be applied in a system architecture comprising a client and a server.
  • The client may be a terminal device used by the user.
  • For example, the client may be an electronic device such as a smartphone, a notebook computer, a desktop computer, a tablet computer, a smart television, or a smart wearable device (a smart watch, a virtual reality device, or the like).
  • The client may also be software running on such an electronic device.
  • For example, the client may be an iQiyi APP (Application), a Tencent Video APP, a Youku APP, or the like running on a smartphone.
  • The server may be a service server of a video playing website. The server may store user information and video data, and may receive a video loading request sent by the client and feed back the corresponding video data to the client.
  • The server may be a single server or a server cluster composed of multiple servers.
  • The server may also be an edge node in a Content Delivery Network (CDN), or may refer to the entire content delivery network.
  • Referring to FIG. 1, the client may be the execution body of the display method provided by the present application; the method may include the following steps.
  • S11: Obtain, from a server, a co-performance video and a set of user identifiers associated with the co-performance video, and display the co-performance video and the user identifiers in the set in a current interface; each user pointed to by a user identifier in the set participates in recording part of the content of the co-performance video.
  • In this embodiment, the user can browse pages of the video playing website in the client, and the covers of multiple co-performance videos can be displayed on a page.
  • A co-performance video may be a video that multiple users participate in recording together.
  • For example, user A may record a video A and upload it to the video playing website.
  • User B sees video A and finds its content interesting, so user B records a video B that interacts with the content of video A, and the content of video B is integrated into video A to form a video C.
  • Video C is then a co-performance video jointly recorded by user A and user B.
  • In this embodiment, when a user uploads a video to the server, the user's user identifier can be uploaded together with it.
  • When the server sends the video data to the client, the user identifiers associated with the video can be sent along with it.
  • The client may add the identifier of the current user to the user identifiers associated with the loaded video, thereby forming a new set of user identifiers.
  • The user identifier mentioned above may be the user name registered by the user on the video playing website, or may be a string encoding of that user name in the background.
  • For example, the server can associate video A with user identifier A.
  • User B loads video A from the server and simultaneously obtains user identifier A.
  • User B can record a current video B and add its content to video A to form video C.
  • When integrating video C, the client of user B can add user B's user identifier B on the basis of user identifier A, thereby forming a user identifier set of "user identifier A + user identifier B", and can send the formed set of user identifiers to the server together with video C. In this way, the server can associate the formed set of user identifiers with video C.
  • Each time a co-performance video is further integrated, the corresponding user identifier set gains a new user identifier.
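  • The accumulation of user identifiers described above can be sketched as follows. This is a minimal illustration only; the function and identifier names are hypothetical, as the application does not specify an implementation:

```python
# Hypothetical sketch: a user identifier set grows each time another
# user integrates content into a co-performance video.

def integrate_video(base_id_set, new_user_id):
    """Return the user identifier set for the newly integrated video.

    Identifiers are kept in the order in which users joined the
    production, since the map later uses this order to draw connections.
    """
    if new_user_id in base_id_set:
        return list(base_id_set)          # a user is only listed once
    return list(base_id_set) + [new_user_id]

# User A records video A, user B integrates it, user C integrates the result.
ids_a = integrate_video([], "user_A")     # -> ["user_A"]
ids_b = integrate_video(ids_a, "user_B")  # -> ["user_A", "user_B"]
ids_c = integrate_video(ids_b, "user_C")  # -> ["user_A", "user_B", "user_C"]

print(ids_c)
```

The set formed for video C would then be uploaded to the server together with video C, so that the server can associate them.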
  • In this way, each user pointed to by a user identifier in the associated set participates in recording part of the content of the co-performance video.
  • When the current user wants to view a co-performance video, the client can send a loading request pointing to that video to the server of the video playing website.
  • The loading request may carry an identifier of the co-performance video, which may, for example, be a numerical number of the video in the server.
  • After receiving the loading request, the server may feed back the co-performance video and its associated set of user identifiers to the client.
  • After receiving the data, the client may display the co-performance video in the current interface. Meanwhile, referring to FIG. 2, each user identifier in the set can be displayed below the co-performance video, so that the user knows the identity of each user who participated in recording it.
  • A user identifier displayed in the current interface may be the user's nickname, the user's avatar, or, as shown in FIG. 2, a combination of the user's avatar and a prompt control, on which text such as "View TA's co-performance map" can be displayed.
  • In this embodiment, the displayed user identifiers can be interacted with by a user viewing the co-performance video. For example, the user can click a user's nickname or avatar, or click the prompt control above. In this way, a user identifier can be regarded as triggered once it is clicked by the user.
  • S13: When a target user identifier in the current interface is triggered, the client may send a co-performance map acquisition request containing the target user identifier to the server. After receiving the request, the server extracts the target user identifier carried in it, so as to know whose co-performance map the client is currently requesting.
  • In this embodiment, the server may generate, in response to the acquisition request, a co-performance map corresponding to the target user identifier.
  • The co-performance map corresponding to the target user identifier may include a map node representing the target user identifier and map nodes representing other user identifiers.
  • Each user pointed to by the other user identifiers participates with the user pointed to by the target user identifier in recording at least one co-performance video. For example, if the target user identifier points to user A, and both user B and user C have participated in recording videos together with user A, then the co-performance map corresponding to user A displays a map node corresponding to user A as well as map nodes corresponding to user B and user C.
  • In this embodiment, when generating the co-performance map, the server may first filter out, from the user identifier sets associated with the co-performance videos, the target user identifier sets that contain the target user identifier.
  • A target user identifier set containing the target user identifier indicates that the user pointed to by the target user identifier participated in recording the co-performance video associated with that set.
  • Then, the user identifiers other than the target user identifier in each target user identifier set may be counted.
  • For example, besides the target user identifier, two target user identifier sets may together include a user identifier A, a user identifier B, and a user identifier C.
  • Map nodes corresponding to the target user identifier and to each counted user identifier may then be constructed separately, and the constructed set of map nodes is used as the co-performance map corresponding to the target user identifier. Referring to FIG. 3, for the target user identifier and user identifiers A, B, and C, four map nodes may be constructed, and these four map nodes may constitute the co-performance map corresponding to the target user identifier.
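  • The filtering and counting steps above can be sketched as follows. This is a minimal illustration under assumed data layouts; the names are hypothetical and not taken from the application:

```python
# Hypothetical sketch: collect the map nodes for a target user identifier
# from the user identifier sets associated with each co-performance video.

def build_map_nodes(target_id, all_id_sets):
    """Return the target identifier plus every other identifier that
    appears in a set containing the target (i.e. every co-performer)."""
    co_performers = []
    for id_set in all_id_sets:
        if target_id not in id_set:
            continue                      # target did not take part in this video
        for uid in id_set:
            if uid != target_id and uid not in co_performers:
                co_performers.append(uid)
    return [target_id] + co_performers    # one map node per identifier

id_sets = [
    ["target", "user_A", "user_B"],
    ["target", "user_C"],
    ["user_A", "user_D"],                 # ignored: target not a participant
]
print(build_map_nodes("target", id_sets))  # -> ['target', 'user_A', 'user_B', 'user_C']
```

Each identifier returned would then be rendered as one map node, for example by drawing the corresponding user avatar in a designated area.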
  • In this embodiment, when constructing the map nodes, the corresponding map nodes may be constructed based on the user avatars corresponding to the respective user identifiers.
  • Specifically, the user avatars corresponding to the target user identifier and to each counted user identifier may be obtained separately; a user avatar may be provided when a user registers information on the video playing website.
  • These user avatars can be stored in the server in association with the user identifiers. In this way, the server can read the corresponding user avatar according to a user identifier.
  • The obtained user avatars may then be displayed in designated areas.
  • A designated area may be an area of a specified size in the co-performance map.
  • The size and shape of the designated area may be set in advance by the server.
  • For example, the designated area may be a circle, and the radius of the circle may range from 10 pixels to 20 pixels.
  • A user avatar displayed in a designated area can serve as a map node.
  • In this way, four map nodes as shown in FIG. 3 can be obtained.
  • In practical applications, a video A may undergo content integration by multiple users, yielding several co-performance videos based on video A.
  • A resulting co-performance video may be further integrated by other users to obtain still more co-performance videos.
  • When uploading a co-performance video and its associated user identifier set, the client can arrange the user identifiers in the set according to the order in which the users participated in producing the video.
  • For example, suppose video A created by user A is subsequently integrated by user B to obtain video B, and video B is integrated by user C to obtain video C. When the client uploads video B, the order of the user identifiers in the uploaded set may be "user identifier A + user identifier B".
  • When video C is uploaded, the user identifiers in the uploaded set may be in the order "user identifier A + user identifier B + user identifier C".
  • In this way, the server can read the user identifiers in a set in turn, and thereby know the order in which the users participated in producing the current co-performance video.
  • In this embodiment, the map nodes in the co-performance map may be connected by lines. Specifically, after the map nodes are constructed, a connection may be established between the map nodes corresponding to each pair of adjacent user identifiers, according to the order of the user identifiers in the target user identifier set. For example, if the order of the user identifiers in the target user identifier set is "target user identifier + user identifier A + user identifier B + user identifier C", then, on the basis of FIG. 3, connecting the map nodes according to this order yields the co-performance map shown in FIG. 4. In FIG. 4, the video corresponding to user identifier A is generated based on the video corresponding to the target user identifier, the video corresponding to user identifier B is generated based on the video corresponding to user identifier A, and so on.
  • In this way, a co-performance map as shown in FIG. 5 can be obtained, in which the association between the original video used to make a co-performance video and the produced co-performance video is reflected.
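  • Connecting adjacent map nodes in the order of the identifier set can be sketched as follows (a minimal illustration with hypothetical names; the application does not prescribe a data structure for the connections):

```python
# Hypothetical sketch: build one connection line per pair of adjacent
# identifiers in an ordered target user identifier set.

def build_connections(ordered_ids):
    """Return the connections (edges) between adjacent map nodes."""
    return [(ordered_ids[i], ordered_ids[i + 1])
            for i in range(len(ordered_ids) - 1)]

order = ["target", "user_A", "user_B", "user_C"]
print(build_connections(order))
# -> [('target', 'user_A'), ('user_A', 'user_B'), ('user_B', 'user_C')]
```

Each connection reflects that the later user's video was generated based on the earlier user's video.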
  • In this embodiment, user information and/or co-performance video information may be associated with the map nodes in the co-performance map.
  • The user information associated with a map node may be the personal information of the user corresponding to that node.
  • The personal information may be entered on the video playing website by the user when registering an account or afterwards.
  • The personal information may include, for example, an account name, a mobile phone number, a gender, a date of birth, and the like.
  • The co-performance video information may refer to all or part of the co-performance videos that the user corresponding to the map node participated in recording. In this way, when a map node in the co-performance map is triggered in the client, the server can feed back the user information and/or co-performance video information associated with the triggered map node to the client.
  • In one embodiment, when constructing the map nodes, the server may also take into account the display permission set by each user, to determine whether a map node corresponding to a counted user identifier is allowed to be constructed and displayed. Specifically, the server may obtain the display permission corresponding to each counted user identifier. The display permission may be set by the user on the video playing website, and may include permission levels such as public display, visible to friends only, hidden user information, and the like.
  • Public display indicates that any user of the video playing website can view the user's information; visible to friends only indicates that only users who are friends with the user can view the user's information; and hidden user information indicates that other users cannot view the user's information.
  • Based on these different permission levels, different processing methods can be used when constructing the map nodes.
  • First, the server may remove, from the counted user identifiers, those whose display permission is hidden user information, and may not generate map nodes for the removed identifiers, thereby protecting those users' privacy.
  • Next, the server may determine the candidate user identifiers whose display permission is visible to friends only, and remove from the candidates those identifiers whose corresponding users are not friends with the user of the current client.
  • Since the users pointed to by the removed identifiers are not friends with the user who currently wants to view the co-performance map, the server does not display those users' information to that user. The server can then construct map nodes corresponding to the remaining counted user identifiers.
  • In practical applications, the display permission may take more forms; for example, it may include visible to some friends only, blacklists, and the like.
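  • The permission-based filtering above can be sketched as follows. This is a minimal illustration; the permission names and function signature are hypothetical, not taken from the application:

```python
# Hypothetical sketch: filter counted user identifiers by display
# permission before constructing map nodes.

PUBLIC, FRIENDS_ONLY, HIDDEN = "public", "friends_only", "hidden"

def filter_by_permission(counted_ids, permissions, viewer_friends):
    """Keep only the identifiers the current viewer is allowed to see."""
    visible = []
    for uid in counted_ids:
        perm = permissions.get(uid, PUBLIC)
        if perm == HIDDEN:
            continue                          # never shown: protects privacy
        if perm == FRIENDS_ONLY and uid not in viewer_friends:
            continue                          # viewer is not a friend
        visible.append(uid)
    return visible

perms = {"user_A": PUBLIC, "user_B": FRIENDS_ONLY, "user_C": HIDDEN}
print(filter_by_permission(["user_A", "user_B", "user_C"], perms, {"user_B"}))
# -> ['user_A', 'user_B']
```

Map nodes would then be constructed only for the identifiers that survive the filter.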
  • Those skilled in the art may appropriately modify and vary the technical solutions of the present application on the premise of understanding their essence. As long as the technical solutions obtained and the technical effects achieved are similar to those of the present application, they should fall within the scope of the present application.
  • S15: Receive and display the co-performance map corresponding to the target user identifier fed back by the server, where the co-performance map includes a map node representing the target user identifier and map nodes representing other user identifiers; each user pointed to by the other user identifiers participates with the user pointed to by the target user identifier in recording at least one co-performance video.
  • After generating the co-performance map, the server can feed it back to the client.
  • The client can then receive and display the co-performance map corresponding to the target user identifier fed back by the server.
  • As described above, the co-performance map may include a map node representing the target user identifier and map nodes representing other user identifiers, where each user pointed to by the other user identifiers participates with the user pointed to by the target user identifier in recording at least one co-performance video.
  • In this embodiment, when the user views the co-performance map, one of the map nodes (a target map node) may be clicked so that detailed information corresponding to it can be viewed.
  • Specifically, when a target map node is triggered, a control for adding a friend and/or a control for viewing co-performance videos may be popped up in the current interface.
  • Referring to FIG. 6, when the map node corresponding to user A is triggered, a control for adding a friend and a control for viewing videos may be popped up beside the map node.
  • The user can then click the add-friend control to initiate a friend application to user A, or click the view-video control to view all or part of the co-performance videos that user A participated in recording.
  • In another embodiment, when a target map node is triggered, a page jump may be performed directly. Specifically, the interface may jump from the page displaying the co-performance map to the page of the user corresponding to the target map node, in which the user's personal information and/or the co-performance videos the user participated in recording may be displayed.
  • In this embodiment, a map node of the co-performance map can be filled with the avatar preset by the corresponding user, so that the map nodes present a personalized appearance.
  • The user avatar may be uploaded by the user to the server.
  • In one embodiment, the content displayed in a map node can also change dynamically.
  • When a map node is not triggered, the corresponding user avatar can be displayed.
  • When a target map node in the co-performance map is triggered, instead of the user avatar, a co-performance video in which the user corresponding to the target map node participated may be played in the area where the avatar was originally displayed.
  • The co-performance video played in the map node, as well as the duration of playback, may be preset by the user. In this way, without a page jump, the user can browse part of a co-performance video of the user corresponding to the target map node, which conveniently provides a reference for judging whether the user is interested in the co-performance videos recorded by that user.
  • In this embodiment, when the co-performance map is displayed in the current interface, one of the map nodes may be set as a focus map node.
  • The focus map node can be the most conspicuous map node in the co-performance map.
  • For example, the focus map node may be displayed at the largest size in the current interface.
  • In one embodiment, the initial focus map node in the co-performance map corresponding to the target user identifier may be the map node corresponding to the target user identifier.
  • The focus map node in the current interface can subsequently be changed in response to the user's operation instructions.
  • In one embodiment, the user can change the focus map node by triggering a map node.
  • Specifically, when a target map node in the co-performance map is triggered, the target map node becomes the focus map node in the current interface.
  • For example, suppose the map node corresponding to the target user identifier is initially the focus map node. When the user clicks the map node corresponding to user identifier A, that node becomes the focus map node of the current interface; at the same time, it can be moved to the position of the original focus map node, and the other map nodes can move synchronously.
  • In another embodiment, the user can change the focus map node by dragging the map.
  • Specifically, the user can swipe a finger across the touch screen; after the co-performance map displayed on the touch screen receives the swipe instruction, the map can be moved correspondingly in the direction of the swipe.
  • After the movement, the map node now displayed at the center of the touch screen replaces the originally centered focus map node, thereby realizing the change of the focus map node.
  • In this embodiment, the map nodes other than the focus map node may be directly or indirectly connected to the focus map node.
  • Directly connected may mean that there are no other map nodes on the line between a map node and the focus map node; indirectly connected may mean that at least one other map node exists on the line between them.
  • Referring to FIG. 4, if the node labeled 1 is the focus map node, the map node labeled 2 is directly connected to the focus map node, and the map node labeled 3 is indirectly connected to it.
  • In this embodiment, each of the other map nodes may be associated with a connection level parameter, which is determined by the number of map nodes on the line connecting that map node to the focus map node.
  • Specifically, the value of the connection level parameter associated with another map node may be the number of map nodes on the line connecting it to the focus map node.
  • For example, the value of the connection level parameter associated with the map node labeled 2 is 0, because there are no other map nodes on the line connecting it to the focus map node.
  • The value of the connection level parameter associated with the map node labeled 3 is 1, because there is one map node on the line connecting it to the focus map node.
  • In practical applications, the value of the connection level parameter may also be defined in other ways.
  • the size displayed by the other map nodes in the current interface may be determined according to the connection level parameter. Specifically, the display size of the map node directly connected to the focus map node may be larger, indicating that the relationship with the focus map node is relatively close; and the display size of the map node indirectly connected to the focus map node may be smaller, indicating The relationship between the focus map nodes is not very close.
  • the size of the other map nodes displayed in the current interface may be inversely proportional to the value of the connection level parameter.
  • the larger the value of the connection level parameter the more map nodes exist between the node and the focus map node, and the farther away from the focus map node, so that the corresponding display size can be smaller. Conversely, the corresponding display size can be larger.
  • a display size of a map node other than the focus map node in the joint map may be determined according to a relationship state with the focus map node; wherein, the focus map is The display size of the map node whose node is in the buddy state may be larger than the display size of the map node that is in the non-friend state with the focus map node.
  • the friend state between map nodes can be understood as the friend status between the users corresponding to the map nodes. For example, if user A and user B are friends, the map node corresponding to user A and the map node corresponding to user B may also be regarded as being in a friend state.
  • the present application further provides a client, which includes a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the following steps are implemented.
  • S11 Obtain a joint video and a set of user identifiers associated with the joint video from a server, and display the joint video and the user identifiers in the set of user identifiers in a current interface; wherein the user pointed to by each user identifier in the user identifier set participates in recording part of the content of the joint video;
  • S13 When a target user identifier in the current interface is triggered, send a joint map acquisition request including the target user identifier to the server;
  • S15 Receive and display a joint map corresponding to the target user identifier fed back by the server, where the joint map includes a map node that represents the target user identifier and map nodes that represent other user identifiers; wherein the user pointed to by each other user identifier and the user pointed to by the target user identifier participate in recording at least one joint video.
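The client-side flow above can be sketched as follows. This is a hypothetical illustration: the server object and its `get_video`/`get_joint_map` method names are stand-ins assumed for the sketch, not taken from the application.

```python
class Client:
    """Minimal sketch of the client steps: load a joint video and, when a
    displayed user identifier is triggered, fetch and show its joint map."""

    def __init__(self, server):
        self.server = server          # stand-in for the video website server
        self.current_interface = {}   # stand-in for what is displayed

    def load_joint_video(self, video_id):
        # S11: obtain the joint video and its associated user identifier set
        # from the server and display both in the current interface.
        video, user_ids = self.server.get_video(video_id)
        self.current_interface = {"video": video, "user_ids": user_ids}
        return user_ids

    def trigger_user_id(self, target_user_id):
        # Fired when the viewer clicks a nickname/avatar/prompt control:
        # send a joint map acquisition request carrying the target user
        # identifier, then (S15) display the joint map fed back by the server.
        joint_map = self.server.get_joint_map({"target_user_id": target_user_id})
        self.current_interface["joint_map"] = joint_map
        return joint_map
```

A fake in-memory server with the same two methods is enough to exercise this flow end to end.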
  • the memory may include physical means for storing information, typically by digitizing the information and then storing it in a medium that utilizes electrical, magnetic or optical methods.
  • the memory according to the embodiment may further include: a device for storing information by using an electric energy method, such as a RAM, a ROM, etc.; a device for storing information by using a magnetic energy method, such as a hard disk, a floppy disk, a magnetic tape, a magnetic core memory, a magnetic bubble memory, and a USB flash drive; A device that optically stores information, such as a CD or a DVD.
  • of course, the memory may also include devices that store information in other ways, such as quantum memory, graphene memory, and the like.
  • the processor can be implemented in any suitable manner.
  • the processor may take the form of, for example, a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • the application also provides a method for providing a joint map, and the execution body of the method may be the server described above. Referring to FIG. 8, the method may include the following steps.
  • S21 Receive a video acquisition request sent by the client, and feed back, to the client, a joint video and a set of user identifiers associated with the joint video; wherein the user pointed to by each user identifier in the user identifier set participates in recording part of the content of the joint video.
  • when a user clicks the cover of a joint video, the corresponding joint video can be loaded.
  • at this point, the client can send a video acquisition request for the joint video to the server of the video playing website.
  • the video acquisition request may carry the identifier of the joint video.
  • the identifier may, for example, be the numerical number of the joint video in the server.
  • the server may feed back the joint video and the set of user identifiers associated with the joint video to the client. The user pointed to by each user identifier in the user identifier set participates in recording part of the content of the joint video.
  • S23 Receive a joint map acquisition request that is sent by the client and includes a target user identifier.
  • the client may display the joint video in the current interface, and may display the user identifier set in the lower part of the joint video.
  • each user identifier may be displayed so that a viewer can know the identity of each user participating in recording the joint video.
  • the user identifier displayed in the current interface may be a user's nickname, or may be a user's avatar, or may be composed of a user's avatar and a prompt control, and the prompt control may display "View TA's joint map".
  • the user identifier can be interacted with by a user viewing the joint video. For example, the user can click on the user's nickname or the user's avatar, or click on the prompt control described above. In this way, the user identifier can be regarded as being triggered after being clicked by the user.
  • when the target user identifier in the current interface is triggered, the client may send a joint map acquisition request including the target user identifier to the server. In this way, after receiving the joint map acquisition request, the server extracts the target user identifier carried therein, so as to know which user's joint map is currently requested by the client.
  • S25 Generate a joint map corresponding to the target user identifier, and provide the joint map to the client; the joint map includes a map node that represents the target user identifier and map nodes that represent other user identifiers; the user pointed to by each other user identifier and the user pointed to by the target user identifier participate in recording at least one joint video.
  • the server may generate a joint map corresponding to the target user identifier in response to the joint map acquisition request.
  • the joint map corresponding to the target user identifier may include a map node that represents the target user identifier, and map nodes that represent other user identifiers.
  • the user pointed to by each other user identifier and the user pointed to by the target user identifier participate in recording at least one joint video. For example, if the target user identifier points to user A, and both user B and user C participated in recording videos together with user A, then the joint map corresponding to user A displays, in addition to the map node corresponding to user A, the map nodes corresponding to user B and user C.
  • when generating the joint map, the target user identifier sets including the target user identifier may be filtered out from the user identifier sets associated with the respective joint videos.
  • a target user identifier set includes the target user identifier, indicating that the user pointed to by the target user identifier participated in recording the joint video associated with that target user identifier set.
  • then, the user identifiers other than the target user identifier in each of the target user identifier sets may be counted.
  • for example, suppose two target user identifier sets are filtered out, and in addition to the target user identifier they include a user identifier A, a user identifier B, and a user identifier C.
  • then, map nodes corresponding to the target user identifier and to each of the counted user identifiers may be constructed separately, and the constructed set of map nodes is used as the joint map corresponding to the target user identifier. Referring to FIG. 3, for the target user identifier and the user identifier A, the user identifier B, and the user identifier C, four map nodes may be constructed, and the four map nodes may constitute the joint map corresponding to the target user identifier.
  • the corresponding map nodes may be constructed based on the user avatars corresponding to the respective user identifiers.
  • the user avatars corresponding to the target user identifier and to each counted user identifier may be obtained separately, where a user avatar may be provided when the user registers information in the video playing website.
  • These user avatars can be stored in the server in association with the user ID. In this way, the server can read the corresponding user avatar according to the user identifier.
  • the obtained user avatar may be separately displayed in the designated area.
  • the designated area may be an area having a specified size in the joint map.
  • the size and shape of the designated area may be previously set by the server.
  • the designated area may take a circle, and the radius of the circle may range from 10 pixels to 20 pixels.
  • the user avatar displayed in the designated area can be used as the map node.
  • four map nodes as shown in FIG. 3 can be obtained.
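The node-construction steps above (filter the identifier sets containing the target, count the other identifiers, then build one node per identifier from its avatar) can be sketched like this. The data shapes and the `avatar://` placeholder are illustrative assumptions, not the application's actual storage format.

```python
from collections import Counter

def avatar_for(user_id):
    # Stand-in for reading the avatar the user registered on the website.
    return f"avatar://{user_id}"

def build_joint_map(target_id, all_user_id_sets):
    # Keep only the user identifier sets that contain the target identifier.
    target_sets = [s for s in all_user_id_sets if target_id in s]
    # Count the other user identifiers appearing in those sets.
    counts = Counter(uid for s in target_sets for uid in s if uid != target_id)
    # Construct one map node per identifier (target node plus the others).
    nodes = {target_id: {"avatar": avatar_for(target_id)}}
    for uid in counts:
        nodes[uid] = {"avatar": avatar_for(uid)}
    return nodes
```

For the FIG. 3 example, `build_joint_map("T", [["T", "A", "B"], ["T", "A", "C"]])` yields four map nodes: one for the target and one each for A, B, and C.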
  • in the present embodiment, a video A may be subjected to content integration processing by multiple users, thereby obtaining a plurality of joint videos based on the video A.
  • the resulting joint videos may be further integrated by other users to obtain more joint videos.
  • when uploading a joint video, the client can arrange the user identifiers in the associated user identifier set according to the order in which the users participated in producing the video. For example, if the video A created by user A is subsequently integrated by user B to obtain a video B, and the video B is integrated by user C to obtain a video C, then when the client uploads the video B, the order of the user identifiers in the user identifier set may be "user identifier A + user identifier B"; and when the client uploads the video C, the order of the user identifiers in the user identifier set uploaded with it may be "user identifier A + user identifier B + user identifier C".
  • the server can read the user identifiers in the user identification set in turn, so that the order of the users participating in the production of the current concert video can be known.
  • the map nodes may be connected by lines in the joint map. Specifically, after the map nodes are constructed, a connection may be established between the map nodes corresponding to two adjacent user identifiers according to the order of the user identifiers in the target user identifier set. For example, if the order of the user identifiers in the target user identifier set is "target user identifier + user identifier A + user identifier B + user identifier C", then, on the basis of FIG. 3, the respective map nodes may be connected according to this arrangement order, and the joint map shown in FIG. 4 can be obtained.
  • in FIG. 4, the video corresponding to the user identifier A is generated based on the video corresponding to the target user identifier,
  • the video corresponding to the user identifier B is generated based on the video corresponding to the user identifier A, and so on.
  • after the map nodes of each target user identifier set are connected in this way, a joint map as shown in FIG. 5 can be obtained, in which the association relationship between the original video used for making the joint videos and the produced joint videos can be reflected.
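The connection rule above (link the map nodes of adjacent identifiers in each ordered target user identifier set) amounts to the following sketch; the adjacency-dict representation of the joint map is an assumption for illustration.

```python
def connect_adjacent_nodes(adjacency, ordered_user_ids):
    # Establish a line between the map nodes of every two adjacent user
    # identifiers, reflecting that each user's video was produced on the
    # basis of the previous user's video.
    for a, b in zip(ordered_user_ids, ordered_user_ids[1:]):
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    return adjacency
```

Applying it once to the order "target + A + B + C" produces the chain of FIG. 4; applying it to every target user identifier set merges the chains into a map like FIG. 5.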
  • the user information and/or the joint video information may be associated with the map nodes in the joint map.
  • the user information associated with the map node may be the personal information of the user corresponding to the map node.
  • the personal information may be entered into the video playing website by the user when registering the account or after registering the account.
  • the personal information may include, for example, an account name, a mobile phone number, a gender, a date of birth, and the like.
  • the joint video information may refer to all or part of the joint videos in which the user corresponding to the map node participated in recording. In this way, when a map node in the joint map is triggered in the client, the server can feed back the user information and/or the joint video information associated with the triggered map node to the client.
  • it should be noted that before constructing the map nodes according to the counted user identifiers, the server may also take into account the display permission set by each user to determine whether the construction and display of the map node corresponding to that user is allowed. Specifically, the server may obtain the display permission corresponding to each counted user identifier. The display permission may be set by the user in the video playing website. The display permissions may include various permission levels such as public display, visible only to friends, hidden user information, and the like.
  • the public display indicates that any user in the video playing website can view the user's information; visible only to friends indicates that only users who are in a friend relationship with the user can view the user's information; and hidden user information indicates that other users cannot view the user's information.
  • based on the different permission levels, different processing methods can be used when constructing the map nodes.
  • for the permission level of hidden user information, the server may remove, from the counted user identifiers, the user identifiers whose display permission is set to hide user information; no corresponding map node is generated for the removed user identifiers, thereby protecting the privacy of these users.
  • for the permission level of visible only to friends, the server may determine the candidate user identifiers whose display permission is set to visible only to friends, and remove from the candidate user identifiers those that are not in a friend relationship with the user corresponding to the client. The users pointed to by the removed user identifiers are not in a friend relationship with the user who currently wants to view the joint map, so the server does not display their information to that user.
  • after the above processing, the server can construct map nodes corresponding to the remaining user identifiers among the counted user identifiers.
  • in practical applications, the display permission may have more forms, for example, visible only to some friends, a blacklist, and the like.
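The permission handling described above can be sketched as follows; the permission-level names and the friends lookup table are illustrative assumptions, not names defined by the application.

```python
PUBLIC, FRIENDS_ONLY, HIDDEN = "public", "friends_only", "hidden"

def filter_by_display_permission(user_ids, permissions, viewer, friends_of):
    # Decide, for each counted user identifier, whether a map node may be
    # constructed and displayed to `viewer`.
    visible = []
    for uid in user_ids:
        level = permissions.get(uid, PUBLIC)
        if level == HIDDEN:
            continue  # never build a node: protects the user's privacy
        if level == FRIENDS_ONLY and viewer not in friends_of.get(uid, set()):
            continue  # viewer is not a friend of this user
        visible.append(uid)
    return visible
```

Only the identifiers returned by this filter go on to the node-construction step; the removed ones simply never appear in the joint map.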
  • those skilled in the art can appropriately modify and change the technical solutions of the present application on the premise of understanding the essence of the technical solutions of the present application.
  • as long as the technical solutions obtained and the technical effects achieved after such modifications and changes are similar to those of the present application, they should fall within the protection scope of the present application.
  • the present application further provides a server including a memory and a processor, wherein the memory stores a computer program, and when the computer program is executed by the processor, the following steps are implemented.
  • S21 Receive a video acquisition request sent by the client, and feed back, to the client, a joint video and a set of user identifiers associated with the joint video; wherein the user pointed to by each user identifier in the user identifier set participates in recording part of the content of the joint video.
  • S23 Receive a joint map acquisition request that is sent by the client and includes a target user identifier.
  • S25 Generate a joint map corresponding to the target user identifier, and provide the joint map to the client; the joint map includes a map node that represents the target user identifier and map nodes that represent other user identifiers; the user pointed to by each other user identifier and the user pointed to by the target user identifier participate in recording at least one joint video.
  • the memory may include physical means for storing information, typically by digitizing the information and then storing it in a medium that utilizes electrical, magnetic or optical methods.
  • the memory according to the embodiment may further include: a device for storing information by using an electric energy method, such as a RAM, a ROM, etc.; a device for storing information by using a magnetic energy method, such as a hard disk, a floppy disk, a magnetic tape, a magnetic core memory, a magnetic bubble memory, and a USB flash drive; A device that optically stores information, such as a CD or a DVD.
  • of course, the memory may also include devices that store information in other ways, such as quantum memory, graphene memory, and the like.
  • the processor can be implemented in any suitable manner.
  • the processor may take the form of, for example, a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • with the technical solution provided by the present application, when a user is watching a joint video recorded by multiple users, the user can select a target user identifier of interest from the displayed user identifiers of the users participating in the joint video.
  • the server may feed back a joint map corresponding to the target user identifier.
  • the joint map visually displays, in the current interface in the form of map nodes, the user pointed to by the target user identifier and the users who participated in recording videos together with the user pointed to by the target user identifier.
  • the technical solution provided by the present application not only can intuitively display the information of the multiple users who jointly participated in recording the joint video, but also can simplify the process of adding friends, so that the interaction between users in the video playing website becomes very convenient.
  • PLD (Programmable Logic Device)
  • FPGA (Field Programmable Gate Array)
  • HDL (Hardware Description Language)
  • a client and a server can be considered a hardware component, and the means for implementing the various functions included therein can also be considered as structures within the hardware component. A device for implementing various functions can even be considered both a software module implementing a method and a structure within a hardware component.
  • the present application can be implemented by means of software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application or in portions of the embodiments.
  • the application can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present invention relate to a method for displaying and providing a joint performance map, a client terminal, and a server. The display method comprises: acquiring, from a server, a joint performance video and a set of user IDs associated with the joint performance video, and displaying the joint performance video and the user IDs of the user ID set on a current interface; when a target user ID on the current interface is triggered, sending to the server a joint performance map acquisition request containing the target user ID; and receiving and displaying a joint performance map corresponding to the target user ID returned by the server, the joint performance map comprising a map node representing the target user ID and map nodes representing other user IDs. The technical solution described in the present invention provides a convenient interaction method for users.
PCT/CN2018/109964 2017-10-27 2018-10-12 Method for displaying and providing a joint performance map, client terminal, and server WO2019080720A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711024799.8 2017-10-27
CN201711024799.8A CN107911749B (zh) 2017-10-27 2017-10-27 Display and providing method for a joint performance map, client and server

Publications (1)

Publication Number Publication Date
WO2019080720A1 true WO2019080720A1 (fr) 2019-05-02

Family

ID=61842008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109964 WO2019080720A1 (fr) 2017-10-27 2018-10-12 Method for displaying and providing a joint performance map, client terminal, and server

Country Status (3)

Country Link
CN (1) CN107911749B (fr)
TW (1) TW201918075A (fr)
WO (1) WO2019080720A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911749B (zh) * 2017-10-27 2020-03-03 Youku Network Technology (Beijing) Co., Ltd. Display and providing method for a joint performance map, client and server
CN109271557B (zh) 2018-08-31 2022-03-22 Beijing ByteDance Network Technology Co., Ltd. Method and apparatus for outputting information
CN114358291B (zh) * 2020-09-30 2024-04-09 Origin Quantum Computing Technology (Hefei) Co., Ltd. Method, apparatus, terminal and storage medium for processing crossing connections of a quantum connectivity graph

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130094831A1 (en) * 2011-10-18 2013-04-18 Sony Corporation Image processing apparatus, image processing method, and program
CN103108248A (zh) * 2013-01-06 2013-05-15 王汝迟 Method and system for implementing interactive video
CN104703056A (zh) * 2013-12-04 2015-06-10 Tencent Technology (Beijing) Co., Ltd. Video playing method, apparatus and system
CN105635129A (zh) * 2015-12-25 2016-06-01 Tencent Technology (Shenzhen) Co., Ltd. Song chorus method, apparatus and system
CN105787087A (zh) * 2016-03-14 2016-07-20 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for matching partners in a joint performance video
CN106303657A (zh) * 2016-08-18 2017-01-04 Beijing Qihoo Technology Co., Ltd. Co-streaming live broadcast method and anchor-side device
CN106488331A (zh) * 2015-09-01 2017-03-08 Tencent Technology (Beijing) Co., Ltd. Multimedia-data-based interaction method, intelligent terminal and server
CN107911749A (zh) * 2017-10-27 2018-04-13 Youku Network Technology (Beijing) Co., Ltd. Display and providing method for a joint performance map, client and server

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110043578A (ko) * 2011-04-06 2011-04-27 Yahoo! Inc. System and method for providing content-linked user feedback in a network
CN104750718B (zh) * 2013-12-29 2018-06-12 China Mobile Communications Corporation Data information search method and device
CN104967902B (zh) * 2014-09-17 2018-10-12 Tencent Technology (Beijing) Co., Ltd. Video sharing method, apparatus and system
JP2017005371A (ja) * 2015-06-05 2017-01-05 Roland Corporation Co-performance video production device and co-performance video production system
CN107104883B (zh) * 2017-04-21 2019-05-03 Tencent Technology (Shenzhen) Co., Ltd. Information sharing method in a social relationship chain, client and server

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130094831A1 (en) * 2011-10-18 2013-04-18 Sony Corporation Image processing apparatus, image processing method, and program
CN103108248A (zh) * 2013-01-06 2013-05-15 王汝迟 Method and system for implementing interactive video
CN104703056A (zh) * 2013-12-04 2015-06-10 Tencent Technology (Beijing) Co., Ltd. Video playing method, apparatus and system
CN106488331A (zh) * 2015-09-01 2017-03-08 Tencent Technology (Beijing) Co., Ltd. Multimedia-data-based interaction method, intelligent terminal and server
CN105635129A (zh) * 2015-12-25 2016-06-01 Tencent Technology (Shenzhen) Co., Ltd. Song chorus method, apparatus and system
CN105787087A (zh) * 2016-03-14 2016-07-20 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for matching partners in a joint performance video
CN106303657A (zh) * 2016-08-18 2017-01-04 Beijing Qihoo Technology Co., Ltd. Co-streaming live broadcast method and anchor-side device
CN107911749A (zh) * 2017-10-27 2018-04-13 Youku Network Technology (Beijing) Co., Ltd. Display and providing method for a joint performance map, client and server

Also Published As

Publication number Publication date
CN107911749A (zh) 2018-04-13
TW201918075A (zh) 2019-05-01
CN107911749B (zh) 2020-03-03

Similar Documents

Publication Publication Date Title
WO2017092257A1 Method and apparatus for simulating co-viewing in live streaming
US10321193B2 (en) Sharing a user-selected video in a group communication
TWI711304B Video processing method, client and server
Sundar et al. Uses and grats 2.0: New gratifications for new media
US20220200938A1 (en) Methods and systems for providing virtual collaboration via network
US8117281B2 (en) Using internet content as a means to establish live social networks by linking internet users to each other who are simultaneously engaged in the same and/or similar content
US9798457B2 (en) Synchronization of media interactions using context
US10965993B2 (en) Video playback in group communications
WO2022111238A1 Live streaming interaction method and device
WO2018126957A1 Virtual reality screen display method and virtual reality device
WO2022156638A1 Interaction method and apparatus, electronic device and storage medium
JP2014131736A System and method for tagging content of mini-games executed in a shared cloud, and controlling tag sharing
CN110462609A Temporary modification of media content metadata
WO2019080720A1 Method for displaying and providing a joint performance map, client terminal, and server
US9560110B1 (en) Synchronizing shared content served to a third-party service
US10698744B2 (en) Enabling third parties to add effects to an application
WO2023273692A1 Information reply method and apparatus, electronic device, computer storage medium, and product
CN114764485B Information display method, apparatus, storage medium and computer device
KR20210051334A Method, system and computer-readable storage medium for providing a content contest and content-sharing rewards
EP3389049B1 (fr) Techniques permettant à des tiers d'ajouter des effets à une application
JP7313641B1 Terminal and computer program
US20240005608A1 (en) Travel in Artificial Reality
CN113545015B Merging message storage data structures
KR20230157692A Method and apparatus for displaying user emotion expressions in video content
CN117354570A Information display method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18869991

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18869991

Country of ref document: EP

Kind code of ref document: A1