CN107911749B - Method for displaying and providing a co-performance graph, client and server - Google Patents


Info

Publication number
CN107911749B
CN107911749B CN201711024799.8A
Authority
CN
China
Prior art keywords
user
graph
performance
video
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711024799.8A
Other languages
Chinese (zh)
Other versions
CN107911749A (en)
Inventor
王媛
张迪
蔡林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201711024799.8A priority Critical patent/CN107911749B/en
Publication of CN107911749A publication Critical patent/CN107911749A/en
Priority to PCT/CN2018/109964 priority patent/WO2019080720A1/en
Priority to TW107137719A priority patent/TW201918075A/en
Application granted granted Critical
Publication of CN107911749B publication Critical patent/CN107911749B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782 Web browsing, e.g. WebTV
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Abstract

Embodiments of the present application disclose a method for displaying and providing a co-performance graph, a client, and a server. The display method includes: acquiring a co-performance video and a user identifier set associated with the co-performance video from a server, and displaying the co-performance video and the user identifiers in the user identifier set in a current interface; when a target user identifier in the current interface is triggered, sending a co-performance graph acquisition request containing the target user identifier to the server; and receiving and displaying the co-performance graph corresponding to the target user identifier fed back by the server, where the co-performance graph includes a graph node representing the target user identifier and graph nodes representing other user identifiers. The technical solution provided by the present application offers users a convenient way to interact.

Description

Method for displaying and providing a co-performance graph, client and server
Technical Field
The present application relates to the field of internet technology, and in particular to a method for displaying and providing a co-performance graph, a client, and a server.
Background
On a typical video playback website today, an uploader records and uploads a video, which is published on the website after review. Users can then watch videos of interest on the website.
At present, users of a video playback website interact mainly by following a video uploader; when a followed uploader publishes a new video, the follower receives a push notification of the update. A user can also communicate with an uploader by sending a private message, and users can interact with one another through a video's comments and add one another as friends on the comment page.
However, these forms of interaction are limited and not intuitive, and cannot satisfy the growing interaction demands of video website users.
Disclosure of Invention
Embodiments of the present application aim to provide a method for displaying and providing a co-performance graph, a client, and a server, which offer users a convenient way to interact.
To achieve the above object, an embodiment of the present application provides a method for displaying a co-performance graph. The method includes: acquiring a co-performance video and a user identifier set associated with the co-performance video from a server, and displaying the co-performance video and the user identifiers in the user identifier set in a current interface, where each user indicated by a user identifier in the user identifier set participated in recording part of the content of the co-performance video; when a target user identifier in the current interface is triggered, sending a co-performance graph acquisition request containing the target user identifier to the server; and receiving and displaying the co-performance graph corresponding to the target user identifier fed back by the server, where the co-performance graph includes a graph node representing the target user identifier and graph nodes representing other user identifiers, and each user indicated by the other user identifiers participated, together with the user indicated by the target user identifier, in recording at least one co-performance video.
To achieve the above object, an embodiment of the present application further provides a client. The client includes a memory and a processor, the memory stores a computer program, and the computer program, when executed by the processor, implements the steps of the display method described above.
To achieve the above object, an embodiment of the present application further provides a method for providing a co-performance graph. The method includes: receiving a video acquisition request sent by a client, and feeding back to the client a co-performance video and a user identifier set associated with the co-performance video, where each user indicated by a user identifier in the user identifier set participated in recording part of the content of the co-performance video; receiving, from the client, a co-performance graph acquisition request containing a target user identifier; and generating the co-performance graph corresponding to the target user identifier and providing it to the client, where the co-performance graph includes a graph node representing the target user identifier and graph nodes representing other user identifiers, and each user indicated by the other user identifiers participated, together with the user indicated by the target user identifier, in recording at least one co-performance video.
To achieve the above object, an embodiment of the present application further provides a server. The server includes a memory and a processor, the memory stores a computer program, and the computer program, when executed by the processor, implements the steps of the providing method described above.
As can be seen from the above, with the technical solution provided by the present application, when a user watches a co-performance video recorded jointly by multiple users, the user can select an interesting target user identifier from the identifiers of the participating users displayed in the current interface. When the target user identifier is triggered, the server feeds back the co-performance graph corresponding to it. The co-performance graph visually presents, as graph nodes in the current interface, the user indicated by the target user identifier together with the users who participated with that user in recording co-performance videos. Furthermore, by examining the graph nodes in the co-performance graph, a user can learn the details of the users interacted with while a co-performance video was being prepared, view other co-performance videos those users recorded, and conveniently add some of them as friends. The technical solution therefore not only presents the users who participated in recording a co-performance video in an intuitive way, but also simplifies the process of adding friends, making interaction between users of a video playback website very convenient.
Drawings
To explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for displaying a co-performance graph in an embodiment of the present application;
Fig. 2 is a schematic view of a current interface in an embodiment of the present application;
Fig. 3 is a schematic diagram of a co-performance graph in an embodiment of the present application;
Fig. 4 is a schematic diagram of a co-performance graph with connecting lines added in an embodiment of the present application;
Fig. 5 is a schematic diagram of a co-performance graph in an actual application scenario;
Fig. 6 is a schematic diagram of a triggered graph node in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a client in an embodiment of the present application;
Fig. 8 is a flowchart of a method for providing a co-performance graph in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive work fall within the scope of protection of the present application.
The present application provides a method for displaying a co-performance graph, which can be applied to a client-server architecture. The client may be a terminal device used by a user, for example an electronic device such as a smartphone, laptop, desktop computer, tablet, smart TV, or smart wearable device (smart watch, virtual reality device). The client may also be software running on such a device, for example a video APP (application) running on a smartphone, such as the Tencent Video APP or the Bilibili APP. The server may be a service server of a video playback website; it can store user information and video data, receive a video loading request sent by the client, and feed back the corresponding video data. In this embodiment, the server may be a single server or a server cluster containing multiple servers; for example, it may be an edge node in a Content Delivery Network (CDN), or refer to the CDN as a whole.
Referring to fig. 1, in the method for displaying a co-performance graph provided by the present application, the execution subject may be the client described above, and the method may include the following steps.
S11: acquiring a co-performance video and a user identifier set associated with the co-performance video from a server, and displaying the co-performance video and the user identifiers in the user identifier set in a current interface; each user indicated by a user identifier in the user identifier set participated in recording part of the content of the co-performance video.
In this embodiment, a user can browse a page of a video playback website in the client, and the page can display the covers of several co-performance videos. A co-performance video is a video that multiple users jointly participated in recording. It may be produced, for example, as follows: user A records a video A and uploads it to the video playback website; user B sees video A, finds its content interesting, records a video B that interacts with the content of video A, and integrates video B into video A to form a video C. Video C is then a co-performance video that user A and user B jointly participated in recording.
In this embodiment, when a user uploads a video to the server, the user's identifier can be uploaded with it, and when the server sends the video data to a client, the user identifier associated with the video can be sent along. In this way, after a user completes content integration on the basis of a loaded video, the client can append the current user's identifier to the identifiers associated with the loaded video, forming a new user identifier set. A user identifier may be the user name registered on the video playback website, or the character-string code behind the user name. For example, after user A records video A and uploads it together with user identifier A, the server associates video A with user identifier A. Later, user B loads video A from the server and obtains user identifier A at the same time. User B records a video, and after its content is added to video A, a video C is formed. When user B's client produces video C by integration, it can append user identifier B to user identifier A, forming the user identifier set "user identifier A + user identifier B", and send this set to the server together with video C. The server then associates the set with video C. If a new user later adds new content on the basis of video C, the corresponding user identifier set likewise gains a new identifier. In this way, for a given co-performance video, the user indicated by each identifier in its associated user identifier set participated in recording part of the video's content.
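The accumulation of the user identifier set described above can be sketched in a few lines. This is an illustrative sketch, not code from the patent; the function name and data shapes are assumptions.

```python
# Hypothetical sketch: how a client might extend the user-identifier set
# when a new user integrates content into an existing co-performance video.

def extend_identifier_set(existing_ids, new_user_id):
    """Append the current user's identifier to the set associated with
    the loaded video, preserving the order in which users joined."""
    if new_user_id in existing_ids:
        return list(existing_ids)          # a user appears at most once
    return list(existing_ids) + [new_user_id]

# User A records video A; user B integrates into it; user C follows.
ids_a = extend_identifier_set([], "user_a")
ids_b = extend_identifier_set(ids_a, "user_b")
ids_c = extend_identifier_set(ids_b, "user_c")
print(ids_c)  # ['user_a', 'user_b', 'user_c']
```

A plain ordered list is used rather than a set so that the join order, which later determines the connecting lines in the graph, is preserved.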
In this way, when a user browses an interesting co-performance video on the video playback website, the user can load it by clicking its cover. When the cover is clicked, the client sends a loading request for the co-performance video to the server of the website. The loading request may carry the identifier of the co-performance video, for example its numerical number on the server. After receiving the loading request, the server feeds back the co-performance video and its associated user identifier set to the client.
In this embodiment, after receiving the co-performance video and the associated user identifier set, the client can display the co-performance video in the current interface and, as shown in fig. 2, display each user identifier in the set below the video, so that the viewer knows the identifiers of the users who participated in recording it.
S13: when a target user identifier in the current interface is triggered, sending a co-performance graph acquisition request containing the target user identifier to the server.
In this embodiment, a user identifier displayed in the current interface may be the user's nickname, the user's avatar, or, as shown in fig. 2, the avatar combined with a prompt control that displays, for example, "view TA's co-performance graph". The user identifier can be interacted with by the user watching the co-performance video: the user may click the nickname, the avatar, or the prompt control, and a clicked identifier is regarded as triggered. If the target user identifier in the current interface is triggered, the client sends a co-performance graph acquisition request containing the target user identifier to the server. After receiving the request, the server extracts the target user identifier carried in it, and thus knows whose co-performance graph the client is currently requesting.
In this embodiment, the server may generate the co-performance graph corresponding to the target user identifier according to the acquisition request. Specifically, the co-performance graph may include a graph node representing the target user identifier and graph nodes representing other user identifiers, where each user indicated by the other identifiers participated, together with the target user, in recording at least one co-performance video. For example, if the target user identifier indicates user A, and user B and user C each recorded a co-performance video together with user A, then the co-performance graph of user A shows, in addition to the graph node of user A, the graph nodes of user B and user C.
In this embodiment, when generating the co-performance graph corresponding to the target user identifier, the server may first screen, from the user identifier sets associated with the co-performance videos, the target user identifier sets that contain the target user identifier. A set containing the target user identifier indicates that the target user participated in recording the co-performance video associated with that set. The server may then count, in each target user identifier set, the user identifiers other than the target user identifier. For example, suppose two target user identifier sets are screened out, and besides the target user identifier they contain user identifier A, user identifier B, and user identifier C. This means the users indicated by identifiers A, B, and C each recorded a co-performance video together with the target user, so these identifiers should appear in the target user's co-performance graph. On this basis, graph nodes can be constructed for the target user identifier and for each counted identifier, and the set of constructed nodes serves as the co-performance graph corresponding to the target user identifier. Referring to fig. 3, for the target user identifier and identifiers A, B, and C, four graph nodes may be constructed, and these four nodes form the co-performance graph.
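The screening and counting steps above can be sketched as follows. This is a minimal illustration of the described procedure, not the patent's implementation; function and field names are assumptions.

```python
# Hypothetical server-side sketch: screen the identifier sets that contain
# the target identifier, collect every other identifier in those sets, and
# build one graph node per identifier (the target node first).

def build_co_performance_graph(target_id, all_id_sets):
    # Keep only the sets in which the target user participated.
    target_sets = [s for s in all_id_sets if target_id in s]
    # Collect the remaining identifiers: each co-recorded with the target.
    co_performers = []
    for id_set in target_sets:
        for uid in id_set:
            if uid != target_id and uid not in co_performers:
                co_performers.append(uid)
    return [{"user_id": uid} for uid in [target_id] + co_performers]

sets = [["target", "a", "b"], ["target", "c"], ["x", "y"]]
graph = build_co_performance_graph("target", sets)
print([n["user_id"] for n in graph])  # ['target', 'a', 'b', 'c']
```

Note that the set `["x", "y"]` contributes nothing: its video has no participation by the target user, so none of its identifiers enter the graph.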
In one embodiment, when constructing the graph node for each user identifier, the node may be built from the user avatar corresponding to the identifier, so that the users in the co-performance graph are displayed visually. Specifically, the avatars corresponding to the target user identifier and the counted identifiers may be obtained; an avatar can be provided by each user when registering on the video playback website and stored in the server in association with the user identifier, so the server can read the avatar from the identifier. After the avatars are obtained, each is displayed in a designated area of the co-performance graph, whose size and shape can be preset by the server; for example, the area may be a circle with a radius of 10 to 20 pixels. The avatar displayed in its designated area then serves as a graph node, yielding, for example, the four graph nodes shown in fig. 3.
In this embodiment, after user A records video A, multiple users may perform content integration on it, producing several co-performance videos based on video A; those videos may in turn be integrated by other users, producing still more. To record systematically which videos each co-performance video was derived from, when the client uploads a co-performance video and its associated user identifier set, the identifiers may be arranged in the set in the order in which the users participated in producing the video. For example, if video A made by user A is integrated by user B to obtain video B, and video B is integrated by user C to obtain video C, then when the client uploads video B the identifier order in the accompanying set may be "user identifier A + user identifier B", and when it uploads video C the order may be "user identifier A + user identifier B + user identifier C". By reading the identifiers in sequence, the server knows the order in which the users joined the current co-performance video.
In this embodiment, to represent intuitively the order in which a co-performance video was passed along, the graph nodes in the co-performance graph may be joined by connecting lines. Specifically, after the graph nodes are constructed, a connecting line may be drawn between the nodes of each pair of adjacent identifiers, following the arrangement order of the identifiers in the target user identifier set. For example, if that order is "target user identifier + user identifier A + user identifier B + user identifier C", then connecting the nodes of fig. 3 in this order yields the co-performance graph shown in fig. 4, from which it is clear that the video of user identifier A was produced on the basis of the target user's video, the video of user identifier B on the basis of user identifier A's video, and so on. In a practical scenario, this method of connection yields a co-performance graph like the one in fig. 5, which embodies the relationship between an original video and the co-performance videos produced from it.
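The connecting-line rule above amounts to linking each pair of adjacent identifiers in the ordered set. A minimal sketch, with assumed names:

```python
# Hypothetical sketch of the connecting-line step: given the ordered
# identifier set of one co-performance video, link each pair of adjacent
# graph nodes so the graph reflects the order in which users joined.

def build_edges(ordered_ids):
    """Return (from, to) pairs for adjacent identifiers in join order."""
    return [(ordered_ids[i], ordered_ids[i + 1])
            for i in range(len(ordered_ids) - 1)]

order = ["target", "a", "b", "c"]
print(build_edges(order))  # [('target', 'a'), ('a', 'b'), ('b', 'c')]
```

A single-identifier set produces no edges, matching the case of an original video that no one has yet integrated.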
In this embodiment, after the server constructs the graph nodes, it may associate user information and/or co-performance video information with the nodes in the co-performance graph, in order to enrich the interaction between the nodes and the user. The user information associated with a node may be the personal information of the corresponding user, entered on the video playback website when or after registering an account, and may include the account name, mobile phone number, gender, date of birth, and so on. The co-performance video information may refer to all or part of the co-performance videos recorded by the corresponding user. In this way, when a graph node in the co-performance graph is triggered in the client, the server can feed back the user information and/or co-performance video information associated with that node.
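The association-and-feedback behavior just described can be sketched as a lookup keyed by user identifier. Field names and the storage shape are illustrative assumptions; the patent only requires that such data be linked to the nodes.

```python
# Hypothetical sketch: profile and video information stored per graph node,
# returned when the node is triggered in the client.

node_data = {
    "user_a": {
        "profile": {"account": "user_a", "gender": "f"},
        "videos": ["video_1", "video_3"],  # this user's co-performance videos
    }
}

def on_node_triggered(user_id):
    """Server-side handler: feed back the info associated with a node."""
    return node_data.get(user_id, {"profile": None, "videos": []})

info = on_node_triggered("user_a")
print(info["videos"])  # ['video_1', 'video_3']
```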
It should be noted that, since constructing graph nodes and associating user information may involve user privacy, before constructing nodes from the counted identifiers the server may first check the display permission set by each user to decide whether that user's node may be built and shown. Specifically, the server obtains the display permission corresponding to each counted identifier. The permission is set by the user on the video playback website and may have several levels, such as public display, visible to friends only, and hidden. Public display means any user of the website can view the user's information; visible to friends only means only users in a friend relationship with the user can view it; hidden means no other user can view it. Based on these levels, different handling applies when the graph nodes are constructed. The server removes, from the counted identifiers, those whose permission is hidden and generates no nodes for them, thereby protecting those users' privacy.
For the friends-only permission, the server determines from the counted identifiers the candidates whose permission is friends-only, and removes those candidates that are not in a friend relationship with the user of the requesting client; the removed users' information is not shown to the user currently viewing the co-performance graph. The server then constructs graph nodes for the identifiers that remain.
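The permission filter described above can be sketched directly. The permission labels and the friendship check are illustrative assumptions, not names from the patent.

```python
# Hypothetical sketch of the display-permission filter: drop identifiers
# whose permission hides all user information, and drop friends-only
# identifiers that are not friends of the requesting user.

def filter_by_permission(user_ids, permissions, friends_of_requester):
    visible = []
    for uid in user_ids:
        level = permissions.get(uid, "public")
        if level == "hidden":
            continue                       # never build a node for this user
        if level == "friends_only" and uid not in friends_of_requester:
            continue                       # not visible to this requester
        visible.append(uid)                # public, or a visible friend
    return visible

perms = {"a": "public", "b": "hidden", "c": "friends_only", "d": "friends_only"}
print(filter_by_permission(["a", "b", "c", "d"], perms, {"c"}))  # ['a', 'c']
```

Only the identifiers that survive the filter go on to the node-construction step, so hidden users never appear in any requester's graph.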
Of course, in a practical scenario the display permission may take more forms, for example visible to only some friends, or a black list. Those skilled in the art may make appropriate changes and modifications to the technical solutions of the present application on the premise of understanding their essence; as long as the technical solutions and the technical effects achieved are similar to those of the present application, they fall within its scope of protection.
S15: receiving and displaying a co-performance graph corresponding to the target user identifier fed back by the server, wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers; and the users to which the other user identifiers point each participate, together with the user to which the target user identifier points, in recording at least one co-performance video.
In this embodiment, after generating the co-performance graph corresponding to the target user identifier, the server may feed the co-performance graph back to the client. The client can then receive and display the co-performance graph corresponding to the target user identifier fed back by the server. The co-performance graph may include a graph node representing the target user identifier and graph nodes representing other user identifiers, where the users to which the other user identifiers point each participate, together with the user to which the target user identifier points, in recording at least one co-performance video.
In this embodiment, when viewing the co-performance graph in the client, the user may click one of the graph nodes as a target graph node, so as to further view the detailed information corresponding to it. Specifically, when a target graph node in the co-performance graph is triggered, a control for adding a friend and/or a control for viewing a co-performance video may pop up in the current interface. For example, referring to fig. 6, after the graph node corresponding to the user A is clicked, a control for adding a friend and a control for viewing a co-performance video may pop up beside the graph node. The user can then click the control for adding a friend to initiate a friend application to the user A, or click the control for viewing a co-performance video to view all or part of the co-performance videos recorded by the user A.
In addition, in this embodiment, when a target graph node in the co-performance graph is triggered, a direct page jump may occur; specifically, the page showing the co-performance graph may jump to the page of the user corresponding to the target graph node. The user's page may show the personal information of the user and/or the co-performance videos recorded by the user.
In this embodiment, the graph nodes of the co-performance graph can be filled with user avatars preset by the corresponding users, so that the graph nodes in the co-performance graph present an individualized display. The user avatar may be uploaded to the server by the user. In an actual application scenario, the content shown in a graph node may also change dynamically. When a graph node is not triggered, the corresponding user avatar may be presented. When a target graph node in the co-performance graph is triggered, the area of the target graph node that originally displayed the user avatar may instead play a co-performance video in which the user corresponding to the target graph node participated. The co-performance video played in the graph node, as well as the duration for which it is played, may be preset by that user. In this way, without any page jump, a user can browse part of a co-performance video recorded by the user corresponding to the target graph node, which conveniently provides a reference for judging whether the viewer is interested in the co-performance videos recorded by that user.
In this embodiment, when the client displays the co-performance graph, one of the graph nodes may be set as a focus graph node. The focus graph node can be the most prominent graph node in the co-performance graph; specifically, the focus graph node is displayed at the largest size in the current interface. For example, in fig. 4, the graph node corresponding to the target user identifier may be the focus graph node. In practical application, since the user requests the co-performance graph corresponding to the target user identifier, the initial focus graph node in that co-performance graph may be the graph node corresponding to the target user identifier. Of course, the focus graph node in the current interface can be changed in response to an operation instruction of the user. In particular, the user may change the focus graph node by triggering a graph node: when a target graph node in the co-performance graph is triggered, the target graph node can become the focus graph node of the current interface. For example, if the graph node corresponding to the target user identifier is originally the focus graph node and the user clicks the graph node corresponding to the user identifier A, the graph node corresponding to the user identifier A becomes the focus graph node of the current interface and may be moved to the position originally occupied by the graph node corresponding to the target user identifier, with the other graph nodes moving synchronously. In addition, the user can also change the focus graph node by dragging the co-performance graph. Specifically, the user can swipe a finger across the touch screen, and after receiving this operation instruction, the co-performance graph displayed in the touch screen can move correspondingly in the direction of the swipe. As a result, the focus graph node originally displayed at the center of the touch screen can be replaced by the graph node moved to that position, thereby realizing the change of the focus graph node.
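The focus-node change described above can be sketched in a few lines. The following Python sketch is illustrative only (the patent specifies no code), assuming graph-node positions are stored in a dictionary keyed by user identifier; the simple position swap stands in for the synchronous movement of all graph nodes.

```python
def change_focus(positions, focus_id, clicked_id):
    """Make the clicked graph node the new focus node by moving it to the
    position occupied by the old focus node (here a plain position swap);
    in a full implementation the other graph nodes would move synchronously."""
    if clicked_id != focus_id:
        positions[focus_id], positions[clicked_id] = (
            positions[clicked_id],
            positions[focus_id],
        )
    return clicked_id
```

After the swap, the triggered node occupies the place of the old focus node, matching the movement behavior described for fig. 4.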
In this embodiment, the graph nodes in the co-performance graph other than the focus graph node may be directly or indirectly connected to the focus graph node. Here, directly connected may mean that there are no other graph nodes on the connecting line to the focus graph node, while indirectly connected may mean that there is at least one other graph node on the connecting line to the focus graph node. For example, in fig. 4, if the graph node numbered 1 is the focus graph node, then the graph node numbered 2 may be directly connected to the focus graph node, and the graph node numbered 3 may be indirectly connected to it.
In the present embodiment, in order to embody this connection state, each graph node other than the focus graph node may be associated with a connection level parameter, which may be determined by the number of graph nodes on its connecting line to the focus graph node. Specifically, the value of the connection level parameter associated with such a graph node may be the number of graph nodes on the connecting line between that graph node and the focus graph node. For example, in fig. 4, the value of the connection level parameter associated with the graph node numbered 2 is 0, because there is no other graph node on its connecting line to the focus graph node; and the value of the connection level parameter associated with the graph node numbered 3 is 1, because there is 1 graph node on its connecting line to the focus graph node. Of course, in practical applications, the value of the connection level parameter may be defined in other ways. In this embodiment, the size at which the other graph nodes are displayed in the current interface may be determined according to the connection level parameter. Specifically, graph nodes directly connected to the focus graph node may be displayed larger, indicating a closer relationship with the focus graph node, while graph nodes indirectly connected to the focus graph node may be displayed smaller, indicating a more distant relationship. Thus, the display size of the other graph nodes in the current interface may be inversely proportional to the value of the connection level parameter: the larger the value, the more graph nodes lie between that node and the focus graph node and the farther away it is, so the display size may be smaller; conversely, the display size may be larger.
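The connection level parameter and the sizing rule can be sketched as follows. This is an illustrative Python sketch under stated assumptions (the patent defines no code or formula): the parameter is computed as the number of intermediate graph nodes on the shortest connecting line to the focus graph node via breadth-first search, and the display size shrinks as that value grows.

```python
from collections import deque

def connection_level(edges, focus, node):
    """Number of graph nodes lying on the connecting line between `node`
    and the focus graph node: 0 for a directly connected node."""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen, queue = {focus}, deque([(focus, 0)])
    while queue:
        current, hops = queue.popleft()
        if current == node:
            return hops - 1  # intermediate nodes = edges on the path - 1
        for neighbor in adjacency.get(current, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None  # no connecting line to the focus graph node

def display_size(level, base=64):
    """Display size decreases as the connection level parameter grows;
    the base size of 64 pixels is an arbitrary illustrative choice."""
    return base // (level + 2)
```

With the chain focus-2-3 of fig. 4, node 2 gets level 0 and node 3 gets level 1, so node 2 is drawn larger than node 3.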
In addition, in one embodiment, the display size of the graph nodes other than the focus graph node in the co-performance graph can be determined according to their friend state with the focus graph node: the display size of a graph node in a friend state with the focus graph node may be larger than that of a graph node in a non-friend state with the focus graph node. It should be noted that the friend state between graph nodes is to be understood as the friend state between the users corresponding to those graph nodes. For example, if the user A and the user B are friends, the graph node corresponding to the user A and the graph node corresponding to the user B may also be in a friend state.
Referring to fig. 7, the present application further provides a client, where the client includes a memory and a processor, and the memory stores a computer program, and the computer program, when executed by the processor, implements the following steps.
S11: acquiring a co-performance video and a user identifier set associated with the co-performance video from a server, and displaying the co-performance video and the user identifiers in the user identifier set in a current interface; wherein the users to which the user identifiers in the user identifier set point each participated in recording part of the content of the co-performance video;
S13: when a target user identifier in the current interface is triggered, sending a co-performance graph acquisition request containing the target user identifier to the server;
S15: receiving and displaying a co-performance graph corresponding to the target user identifier fed back by the server, wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers; and the users to which the other user identifiers point each participate, together with the user to which the target user identifier points, in recording at least one co-performance video.
In this embodiment, the memory may include a physical device for storing information; typically, the information is digitized and then stored in a medium using an electrical, magnetic, or optical method. The memory according to this embodiment may include: devices that store information using electrical energy, such as RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories, and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, the memory may also take other forms, such as quantum memory or graphene memory.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth.
The specific functions of the client, the memory thereof and the processor thereof provided in the embodiments of this specification can be explained in comparison with the foregoing embodiments in this specification, and can achieve the technical effects of the foregoing embodiments, and thus, will not be described herein again.
The present application also provides a method for providing a co-performance graph, and the execution subject of the method may be the server described above. Referring to fig. 8, the method may include the following steps.
S21: receiving a video acquisition request sent by a client, and feeding back a co-performance video and a user identifier set associated with the co-performance video to the client; wherein the users to which the user identifiers in the user identifier set point each participated in recording part of the content of the co-performance video.
In this embodiment, when the user browses a co-performance video of interest on a video playing website, the user can load the co-performance video by clicking its cover. When the user clicks the cover, the client can send a video acquisition request pointing to the co-performance video to the server of the video playing website. The video acquisition request may carry an identifier of the co-performance video, for example, a numerical number of the co-performance video in the server. In this way, after receiving the video acquisition request sent by the client, the server can feed back the co-performance video and the user identifier set associated with it to the client, where the users to which the user identifiers in the user identifier set point each participated in recording part of the content of the co-performance video.
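As a hedged illustration of step S21, the sketch below stands in for the server-side handling of a video acquisition request. The in-memory VIDEO_STORE table, the numerical video identifier, and the field names are assumptions chosen for illustration, not structures defined by the patent.

```python
# Hypothetical in-memory storage standing in for the server's video store.
VIDEO_STORE = {
    101: {"url": "/videos/101.mp4", "user_ids": ["target", "A", "B"]},
}

def handle_video_request(video_id):
    """Feed back the co-performance video and its associated user identifier
    set for the identifier carried in the video acquisition request."""
    record = VIDEO_STORE.get(video_id)
    if record is None:
        return None
    return {"video": record["url"], "user_id_set": record["user_ids"]}
```

The client would display the returned video together with each identifier in the returned user identifier set.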
S23: receiving a co-performance graph acquisition request containing a target user identifier sent by the client.
In this embodiment, after receiving the co-performance video and the associated user identifier set, the client may display the co-performance video in the current interface, and simultaneously display each user identifier in the user identifier set below the co-performance video, so that the user can know the identifier of each user participating in recording the co-performance video.
In this embodiment, the user identifier displayed in the current interface may be a nickname of the user, an avatar of the user, or a combination of the user's avatar and a prompt control, where the prompt control may display text such as "view TA's co-performance graph". The user identifier can interact with the user viewing the co-performance video: for example, the user may click the nickname, the avatar, or the prompt control described above. After being clicked, the user identifier can be regarded as triggered. If a target user identifier in the current interface is triggered by the user, the client can send a co-performance graph acquisition request containing the target user identifier to the server. After receiving the co-performance graph acquisition request, the server extracts the target user identifier carried in the request, so that the server knows whose co-performance graph the client is currently requesting.
S25: generating a co-performance graph corresponding to the target user identifier, and providing the co-performance graph to the client; wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers; and the users to which the other user identifiers point each participate, together with the user to which the target user identifier points, in recording at least one co-performance video.
In this embodiment, the server may generate the co-performance graph corresponding to the target user identifier in response to the co-performance graph acquisition request. Specifically, the co-performance graph corresponding to the target user identifier may include a graph node representing the target user identifier and graph nodes representing other user identifiers, where the users to which the other user identifiers point each participate, together with the user to which the target user identifier points, in recording at least one co-performance video. For example, if the target user identifier points to the user A, and both the user B and the user C have participated in recording a co-performance video together with the user A, then the co-performance graph corresponding to the user A shows, in addition to the graph node corresponding to the user A, graph nodes corresponding to the user B and the user C.
In this embodiment, when the co-performance graph corresponding to the target user identifier is generated, target user identifier sets containing the target user identifier may be screened from the user identifier sets associated with the respective co-performance videos. That a target user identifier set contains the target user identifier indicates that the user to which the target user identifier points participated in recording the co-performance video associated with that set. The user identifiers in each target user identifier set other than the target user identifier may then be counted. For example, suppose two target user identifier sets are screened, and besides the target user identifier they contain a user identifier A, a user identifier B, and a user identifier C. This means that the users to which the user identifier A, the user identifier B, and the user identifier C respectively point have each recorded a co-performance video together with the user to which the target user identifier points, so these user identifiers should be shown in the co-performance graph corresponding to the target user identifier. On this basis, graph nodes corresponding to the target user identifier and to the user identifiers obtained through statistics can be constructed, and the set of constructed graph nodes is used as the co-performance graph corresponding to the target user identifier. Referring to fig. 3, for the target user identifier, the user identifier A, the user identifier B, and the user identifier C, 4 graph nodes may be constructed, and these 4 graph nodes may form the co-performance graph corresponding to the target user identifier.
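The screening and counting step just described can be sketched as follows; this is an illustrative Python sketch under the assumption that each user identifier set is kept as an ordered list.

```python
def screen_co_performers(all_id_sets, target_id):
    """Screen the user identifier sets containing the target user identifier
    and count every other identifier appearing in them: these users recorded
    at least one co-performance video together with the target user."""
    target_sets = [ids for ids in all_id_sets if target_id in ids]
    others = []
    for ids in target_sets:
        for uid in ids:
            if uid != target_id and uid not in others:
                others.append(uid)  # deduplicate while preserving order
    return others
```

The returned identifiers, together with the target user identifier, are exactly the identifiers for which graph nodes are constructed.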
In one embodiment, when the graph node corresponding to each user identifier is built, in order to visually display the users included in the co-performance graph, the graph node may be built based on the user avatar corresponding to each user identifier. Specifically, the user avatars corresponding to the target user identifier and to the user identifiers obtained through statistics may be acquired; a user avatar may be provided when each user registers in the video playing website, and these user avatars may be stored in the server in association with the user identifiers, so that the server can read the corresponding user avatar according to a user identifier. After the user avatars are acquired, each may be displayed in a designated area, which may be a region of specified size in the co-performance graph whose size and shape are preset by the server. For example, the designated area may be a circle with a radius ranging from 10 pixels to 20 pixels. The user avatar displayed in the designated area can then be taken as the graph node; for example, the 4 graph nodes shown in fig. 3 may be obtained in this way.
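The avatar-based node construction can be sketched as below. The GraphNode dataclass, the avatar lookup table, and the default radius are illustrative assumptions; the patent itself only gives an example radius range of 10 to 20 pixels for the circular designated area.

```python
from dataclasses import dataclass

@dataclass
class GraphNode:
    user_id: str
    avatar_url: str
    radius_px: int = 15  # within the example range of 10-20 pixels

# Hypothetical avatar storage keyed by user identifier.
AVATARS = {"target": "/avatars/target.png", "A": "/avatars/a.png"}

def build_nodes(user_ids, default_avatar="/avatars/default.png"):
    """Build one circular graph node per user identifier, filled with the
    user avatar read from storage (falling back to a default avatar)."""
    return [GraphNode(uid, AVATARS.get(uid, default_avatar)) for uid in user_ids]
```

Calling build_nodes with the four identifiers of fig. 3 would yield the four avatar-filled graph nodes described there.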
In this embodiment, after the user A records a video a, the video a may be subjected to content integration processing by a plurality of users, so as to obtain a plurality of co-performance videos based on the video a. The obtained co-performance videos may be further processed by other users through content integration, so as to obtain still more co-performance videos. In order to systematically distinguish which videos each co-performance video was derived from through content integration, when the client uploads a co-performance video and its associated user identifier set, the user identifiers may be arranged in the user identifier set according to the order in which the users participated in making the video. For example, if a video a made by the user A is subsequently integrated by the user B to obtain a video b, and the video b is integrated by the user C to obtain a video c, then when the client uploads the video b, the order of the user identifiers in the accompanying user identifier set may be "user identifier A + user identifier B"; when the client uploads the video c, the order may be "user identifier A + user identifier B + user identifier C". By reading the user identifiers in the user identifier set in sequence, the server can thus know the order in which the users participated in the current co-performance video.
In this embodiment, in order to intuitively represent the derivation order of the co-performance videos, the graph nodes may be connected by connecting lines in the co-performance graph. Specifically, after the graph nodes are constructed, a connecting line may be established between the graph nodes corresponding to every two adjacent user identifiers, following the arrangement order of the user identifiers in the target user identifier set. For example, if the arrangement order in the target user identifier set is "target user identifier + user identifier A + user identifier B + user identifier C", then, on the basis of fig. 3, connecting the graph nodes in this order may yield the co-performance graph shown in fig. 4. From fig. 4 it can be clearly seen that the video corresponding to the user identifier A was produced based on the video corresponding to the target user identifier, the video corresponding to the user identifier B was produced based on the video corresponding to the user identifier A, and so on. In an actual application scenario, the above connection method can produce a co-performance graph as shown in fig. 5, which embodies the association between an original video used for making co-performance videos and the co-performance videos made from it.
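The connecting-line construction amounts to pairing adjacent identifiers; the one-function Python sketch below is illustrative, assuming the user identifier set is an ordered list as described above.

```python
def build_edges(ordered_ids):
    """Establish a connecting line between the graph nodes of every two
    adjacent user identifiers, following the arrangement order of the set."""
    return [
        (ordered_ids[i], ordered_ids[i + 1])
        for i in range(len(ordered_ids) - 1)
    ]
```

Applied to "target + A + B + C", this yields the chain of connecting lines shown in fig. 4.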
In this embodiment, after the server constructs the graph nodes, in order to enhance the interaction between the graph nodes and the user, the server may associate user information and/or co-performance video information with the graph nodes in the co-performance graph. The user information associated with a graph node may be the personal information of the user corresponding to that graph node; the personal information can be entered in the video playing website by the user when registering an account or afterwards, and may include the account name, mobile phone number, gender, date of birth, and the like. The co-performance video information may refer to all or part of the co-performance videos recorded by the user corresponding to the graph node. In this way, when a graph node in the co-performance graph is triggered in the client, the server may feed back the user information and/or co-performance video information associated with the triggered graph node to the client.
It should be noted that, since operations such as constructing a graph node and associating user information may involve the privacy of a user, before constructing the corresponding graph nodes according to the user identifiers obtained through statistics, the server may further determine, in combination with the display permission set by each user, whether constructing and displaying the graph node corresponding to that user is allowed. Specifically, the server may obtain the display permission corresponding to each user identifier obtained through statistics. The display permission can be set by the user in the video playing website, and can comprise multiple permission levels such as public display, visible to friends only, and hidden user information. Public display indicates that any user in the video playing website can view the information of the user; visible to friends only indicates that only users in a friend relationship with the user can view the user's information; and hidden user information indicates that no other user can view the information of the user. Based on these different permission levels, different processing modes can be adopted when the graph nodes are constructed. Specifically, the server may remove, from the user identifiers obtained through statistics, the user identifiers whose display permission is hidden user information, and generate no graph nodes for the removed user identifiers, thereby protecting the privacy of those users.
For the display permission of visible to friends only, the server may determine, from the user identifiers obtained through statistics, the candidate user identifiers whose display permission is visible to friends only, and remove from the candidate user identifiers those whose corresponding users are in a non-friend relationship with the user of the client. Since the users to which the removed user identifiers point are not friends of the user who currently wants to view the co-performance graph, the server does not display the information of those users to that viewer. In this way, the server may construct graph nodes corresponding to the remaining user identifiers among the user identifiers obtained through statistics.
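The permission-based filtering described above can be sketched as follows; this is an illustrative sketch in which the permission levels are encoded as the strings "public", "friends_only", and "hidden" (labels chosen for illustration, not mandated by the patent).

```python
def filter_by_permission(user_ids, permissions, viewer_friends):
    """Remove identifiers whose display permission hides them from the viewer:
    'hidden' identifiers are always removed; 'friends_only' identifiers are
    removed unless the viewer is a friend; 'public' identifiers are kept."""
    kept = []
    for uid in user_ids:
        level = permissions.get(uid, "public")
        if level == "hidden":
            continue
        if level == "friends_only" and uid not in viewer_friends:
            continue
        kept.append(uid)
    return kept
```

Graph nodes are then constructed only for the identifiers that survive this filtering.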
Of course, in an actual application scenario, the display permission may take more forms, for example, visible to only some friends, a blacklist, and the like. Changes and modifications made by those skilled in the art, on the premise of understanding the essence of the technical solutions of the present application, shall fall within the protection scope of the present application as long as the adopted technical means and the achieved technical effects are similar to those of the present application.
Referring to fig. 9, the present application further provides a server, where the server includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, implements the following steps.
S21: receiving a video acquisition request sent by a client, and feeding back a co-performance video and a user identifier set associated with the co-performance video to the client; wherein the users to which the user identifiers in the user identifier set point each participated in recording part of the content of the co-performance video.
S23: receiving a co-performance graph acquisition request containing a target user identifier sent by the client.
S25: generating a co-performance graph corresponding to the target user identifier, and providing the co-performance graph to the client; wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers; and the users to which the other user identifiers point each participate, together with the user to which the target user identifier points, in recording at least one co-performance video.
In this embodiment, the memory may include a physical device for storing information; typically, the information is digitized and then stored in a medium using an electrical, magnetic, or optical method. The memory according to this embodiment may include: devices that store information using electrical energy, such as RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories, and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, the memory may also take other forms, such as quantum memory or graphene memory.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth.
The specific functions implemented by the memory and the processor of the server provided in the embodiments of the present specification may be explained in comparison with the foregoing embodiments in the present specification, and can achieve the technical effects of the foregoing embodiments, and thus, no further description is provided herein.
As can be seen from the above, according to the technical solution provided by the present application, when a user watches a co-performance video recorded jointly by a plurality of users, the user can select a target user identifier of interest from the user identifiers, displayed in the current interface, of the users participating in the co-performance. When the target user identifier is triggered, the server may feed back a co-performance graph corresponding to the target user identifier. The co-performance graph visually displays, in the form of graph nodes in the current interface, the user to which the target user identifier points together with the users who participated in recording co-performance videos with that user. Furthermore, by viewing the graph nodes corresponding to the users in the co-performance graph, the viewer can learn the detailed information of the users interacted with during the making of the co-performance video, or other co-performance videos recorded by those users, and can conveniently add some of the users in the co-performance graph as friends. Therefore, the technical solution provided by the present application not only intuitively displays the information of the plurality of users who participated in recording a co-performance video, but also simplifies the process of adding friends, making interaction among users in the video playing website very convenient.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually making an integrated circuit chip, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the hardware description languages described above and programming it into an integrated circuit.
Those skilled in the art will also appreciate that, in addition to implementing the client and the server purely as computer-readable program code, it is entirely possible, by logically programming the method steps, to cause the client and the server to implement the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a client and server may therefore be regarded as a hardware component, and the means included in them for implementing the various functions may also be regarded as structures within the hardware component. Indeed, the means for implementing the various functions may even be regarded both as software modules for implementing the method and as structures within the hardware component.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the present application may, in essence, be embodied in the form of a software product. The software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the embodiments of the client and of the server, reference may be made to the introduction of the method embodiments described above.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Although the present application has been described by way of embodiments, those of ordinary skill in the art will recognize that numerous variations and modifications of the present application are possible without departing from its spirit, and it is intended that the appended claims encompass such variations and modifications.

Claims (16)

1. A method for displaying a co-performance graph, the method comprising:
acquiring, from a server, a co-performance video and a user identifier set associated with the co-performance video, and displaying the co-performance video and the user identifiers in the user identifier set in a current interface, wherein users indicated by the user identifiers in the user identifier set each participated in recording part of the content of the co-performance video;
when a target user identifier in the current interface is triggered, sending a co-performance graph acquisition request containing the target user identifier to the server; and
receiving and displaying a co-performance graph, corresponding to the target user identifier, fed back by the server, wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers, and the users indicated by the other user identifiers each participated, together with the user indicated by the target user identifier, in recording at least one co-performance video.
2. The method of claim 1, further comprising:
when a target graph node in the co-performance graph is triggered, popping up, in the current interface, a control for indicating addition of a friend and/or a control for indicating watching of a co-performance video;
or
when a target graph node in the co-performance graph is triggered, jumping from the page displaying the co-performance graph to a user page corresponding to the target graph node, and displaying, in the user page, personal information of the user and/or co-performance videos recorded by the user.
3. The method according to claim 1, wherein the graph nodes of the co-performance graph are filled with user avatars preset by the corresponding users; and
correspondingly, when a target graph node in the co-performance graph is triggered, a co-performance video whose recording was participated in by the user corresponding to the target graph node is played in the area of the target graph node that displays the user avatar.
4. The method according to claim 1, wherein the co-performance graph comprises a focus graph node, the focus graph node is displayed at the largest size in the current interface, and the initial focus graph node of the co-performance graph corresponding to the target user identifier is the graph node corresponding to the target user identifier.
5. The method according to claim 4, wherein, when a target graph node in the co-performance graph is triggered, the target graph node becomes the focus graph node in the current interface.
6. The method according to claim 4, wherein the graph nodes in the co-performance graph other than the focus graph node are directly or indirectly connected to the focus graph node, and each of the other graph nodes is associated with a connection level parameter determined by the number of graph nodes on the connecting line between that graph node and the focus graph node; and
correspondingly, the size at which the other graph nodes are displayed in the current interface is determined according to the connection level parameter.
7. The method of claim 6, wherein the value of the connection level parameter associated with one of the other graph nodes is the number of graph nodes on the connecting line between that graph node and the focus graph node; and
correspondingly, the size at which the other graph nodes are displayed in the current interface is inversely proportional to the value of the connection level parameter.
8. The method according to claim 4, wherein the display size of the graph nodes other than the focus graph node in the co-performance graph is determined according to their relationship state with the focus graph node, and the display size of a graph node in a friend state with the focus graph node is larger than that of a graph node in a non-friend state with the focus graph node.
9. A client, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the following steps:
acquiring, from a server, a co-performance video and a user identifier set associated with the co-performance video, and displaying the co-performance video and the user identifiers in the user identifier set in a current interface, wherein users indicated by the user identifiers in the user identifier set each participated in recording part of the content of the co-performance video;
when a target user identifier in the current interface is triggered, sending a co-performance graph acquisition request containing the target user identifier to the server; and
receiving and displaying a co-performance graph, corresponding to the target user identifier, fed back by the server, wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers, and the users indicated by the other user identifiers each participated, together with the user indicated by the target user identifier, in recording at least one co-performance video.
10. A method for providing a co-performance graph, the method comprising:
receiving a video acquisition request sent by a client, and feeding back, to the client, a co-performance video and a user identifier set associated with the co-performance video, wherein users indicated by the user identifiers in the user identifier set each participated in recording part of the content of the co-performance video;
receiving a co-performance graph acquisition request, containing a target user identifier, sent by the client; and
generating a co-performance graph corresponding to the target user identifier and providing the co-performance graph to the client, wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers, and the users indicated by the other user identifiers each participated, together with the user indicated by the target user identifier, in recording at least one co-performance video.
11. The method of claim 10, wherein generating the co-performance graph corresponding to the target user identifier comprises:
acquiring, from among the user identifier sets, at least one target user identifier set containing the target user identifier;
counting the user identifiers, other than the target user identifier, in the at least one target user identifier set; and
constructing graph nodes corresponding to the target user identifier and to each of the counted user identifiers, and taking the set of constructed graph nodes as the co-performance graph corresponding to the target user identifier.
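The three steps of claim 11 can be sketched as a single function. The function name and the dictionary shape of the returned graph are assumptions of this sketch, not specified by the claim:

```python
def build_co_performance_graph(target_id, all_id_sets):
    # Step 1 (claim 11): gather every user identifier set that contains
    # the target user identifier.
    target_sets = [s for s in all_id_sets if target_id in s]
    # Step 2: count the user identifiers other than the target in those sets.
    others = sorted({uid for s in target_sets for uid in s if uid != target_id})
    # Step 3: construct one graph node per identifier; the set of nodes
    # is the co-performance graph for the target identifier.
    return {"focus": target_id, "nodes": [target_id] + others}
```

Each inner list stands for the user identifier set of one co-performance video, so only users who co-recorded at least one video with the target appear in the resulting graph.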
12. The method of claim 11, wherein constructing the graph nodes corresponding to the target user identifier and to the counted user identifiers comprises:
acquiring the user avatars corresponding to the target user identifier and to the counted user identifiers, and displaying the acquired user avatars in designated areas; and
taking the user avatars displayed in the designated areas as the graph nodes.
13. The method according to claim 11 or 12, wherein, after the graph nodes are constructed, the method further comprises:
establishing a connecting line between the graph nodes corresponding to two adjacent user identifiers according to the order in which the user identifiers are arranged in the target user identifier set.
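The connecting step of claim 13 amounts to pairing adjacent identifiers in their arranged order; a minimal sketch (function name assumed):

```python
def link_adjacent(ordered_ids):
    """Establish a connecting line between the graph nodes of each pair of
    adjacent user identifiers, following their order in the target user
    identifier set."""
    return [(ordered_ids[i], ordered_ids[i + 1])
            for i in range(len(ordered_ids) - 1)]
```

A set of n identifiers thus yields n - 1 connecting lines, forming a chain through the graph.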
14. The method of claim 11, wherein, after the graph nodes corresponding to the target user identifier and to the counted user identifiers are constructed, the method further comprises:
associating user information and/or co-performance video information with the graph nodes in the co-performance graph, so that when a graph node in the co-performance graph is triggered in the client, the user information and/or co-performance video information associated with the triggered graph node is fed back to the client.
15. The method of claim 11, wherein, before the graph nodes corresponding to the counted user identifiers are constructed, the method further comprises:
acquiring the display permissions corresponding to the counted user identifiers, and removing, from the counted user identifiers, the user identifiers whose display permission indicates that user information is hidden; and
determining candidate user identifiers whose display permission indicates visibility to friends only, and removing, from the candidate user identifiers, the user identifiers that are in a non-friend relationship with the user corresponding to the client; and
correspondingly, constructing graph nodes corresponding to the remaining counted user identifiers.
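The two-pass permission filter of claim 15 can be sketched as follows. The permission labels (`"hidden"`, `"friends_only"`, `"public"` as the default) are illustrative assumptions; the claim only distinguishes hidden information and friends-only visibility:

```python
def filter_by_permission(candidate_ids, permissions, friends_of_viewer):
    """Drop identifiers whose display permission hides user information,
    and drop friends-only identifiers not in a friend relationship with
    the user of the requesting client."""
    kept = []
    for uid in candidate_ids:
        perm = permissions.get(uid, "public")
        if perm == "hidden":
            continue  # first pass: user information is hidden
        if perm == "friends_only" and uid not in friends_of_viewer:
            continue  # second pass: visible to friends only
        kept.append(uid)
    return kept
```

Graph nodes would then be constructed only for the identifiers this filter keeps, so the served co-performance graph respects each user's display permission.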
16. A server, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the following steps:
receiving a video acquisition request sent by a client, and feeding back, to the client, a co-performance video and a user identifier set associated with the co-performance video, wherein users indicated by the user identifiers in the user identifier set each participated in recording part of the content of the co-performance video;
receiving a co-performance graph acquisition request, containing a target user identifier, sent by the client; and
generating a co-performance graph corresponding to the target user identifier and providing the co-performance graph to the client, wherein the co-performance graph comprises a graph node representing the target user identifier and graph nodes representing other user identifiers, and the users indicated by the other user identifiers each participated, together with the user indicated by the target user identifier, in recording at least one co-performance video.
CN201711024799.8A 2017-10-27 2017-10-27 Method for displaying and providing rehearsal graph, client and server Active CN107911749B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201711024799.8A CN107911749B (en) 2017-10-27 2017-10-27 Method for displaying and providing rehearsal graph, client and server
PCT/CN2018/109964 WO2019080720A1 (en) 2017-10-27 2018-10-12 Method for displaying and providing ensemble performance map, client terminal, and server
TW107137719A TW201918075A (en) 2017-10-27 2018-10-25 Methods for displaying and providing joint-performance atlas, client, and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711024799.8A CN107911749B (en) 2017-10-27 2017-10-27 Method for displaying and providing rehearsal graph, client and server

Publications (2)

Publication Number Publication Date
CN107911749A CN107911749A (en) 2018-04-13
CN107911749B true CN107911749B (en) 2020-03-03

Family

ID=61842008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711024799.8A Active CN107911749B (en) 2017-10-27 2017-10-27 Method for displaying and providing rehearsal graph, client and server

Country Status (3)

Country Link
CN (1) CN107911749B (en)
TW (1) TW201918075A (en)
WO (1) WO2019080720A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911749B (en) * 2017-10-27 2020-03-03 优酷网络技术(北京)有限公司 Method for displaying and providing rehearsal graph, client and server
CN109271557B (en) 2018-08-31 2022-03-22 北京字节跳动网络技术有限公司 Method and apparatus for outputting information
CN114358291B (en) * 2020-09-30 2024-04-09 本源量子计算科技(合肥)股份有限公司 Quantum connectivity graph cross-connection processing method, device, terminal and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110043578A (en) * 2011-04-06 2011-04-27 야후! 인크. System and method for providing user feedback in association with content in a network
JP5845801B2 (en) * 2011-10-18 2016-01-20 ソニー株式会社 Image processing apparatus, image processing method, and program
CN103108248B (en) * 2013-01-06 2016-04-27 王汝迟 A kind of implementation method of interactive video and system
CN104703056B (en) * 2013-12-04 2019-04-12 腾讯科技(北京)有限公司 A kind of video broadcasting method, device and system
CN104750718B (en) * 2013-12-29 2018-06-12 中国移动通信集团公司 The searching method and equipment of a kind of data information
CN104967902B (en) * 2014-09-17 2018-10-12 腾讯科技(北京)有限公司 Video sharing method, apparatus and system
JP2017005371A (en) * 2015-06-05 2017-01-05 ローランド株式会社 Coaction video presentation device and coaction video presentation system
CN106488331A (en) * 2015-09-01 2017-03-08 腾讯科技(北京)有限公司 Interactive approach based on multi-medium data, intelligent terminal and server
CN105635129B (en) * 2015-12-25 2020-04-21 腾讯科技(深圳)有限公司 Song chorusing method, device and system
CN105787087B (en) * 2016-03-14 2019-09-17 腾讯科技(深圳)有限公司 Costar the matching process and device worked together in video
CN106303657A (en) * 2016-08-18 2017-01-04 北京奇虎科技有限公司 A kind of even method that wheat is live and main broadcaster's end equipment
CN107104883B (en) * 2017-04-21 2019-05-03 腾讯科技(深圳)有限公司 Information sharing method, client and server in a kind of social networks chain
CN107911749B (en) * 2017-10-27 2020-03-03 优酷网络技术(北京)有限公司 Method for displaying and providing rehearsal graph, client and server

Also Published As

Publication number Publication date
TW201918075A (en) 2019-05-01
WO2019080720A1 (en) 2019-05-02
CN107911749A (en) 2018-04-13

Similar Documents

Publication Publication Date Title
EP4087258A1 (en) Method and apparatus for displaying live broadcast data, and device and storage medium
US8117281B2 (en) Using internet content as a means to establish live social networks by linking internet users to each other who are simultaneously engaged in the same and/or similar content
CN107920274B (en) Video processing method, client and server
TWI533193B (en) Method,computer-readable storage medium and apparatus for providing chatting service
TWI521455B (en) Computer-implemented method for stimulating user engagement with advertising content and computing device thereof
US20130047123A1 (en) Method for presenting user-defined menu of digital content choices, organized as ring of icons surrounding preview pane
CN112153454B (en) Method, device and equipment for providing multimedia content
US20160004761A1 (en) Person-based display of posts in social network
US20190104325A1 (en) Event streaming with added content and context
WO2015089100A1 (en) Social messaging system and method
CN107911749B (en) Method for displaying and providing rehearsal graph, client and server
US11770357B2 (en) Multi-blockchain proof-of-activity platform
US10042516B2 (en) Lithe clip survey facilitation systems and methods
CN115190366B (en) Information display method, device, electronic equipment and computer readable medium
CN110855557A (en) Video sharing method and device and storage medium
JP7462235B2 (en) Video distribution system, information processing method, and computer program
CN113515336B (en) Live room joining method, creation method, device, equipment and storage medium
CN115510348A (en) Method, apparatus, device and storage medium for content presentation
US20190288972A1 (en) Reveal posts in a content sharing platform
CN114764485B (en) Information display method and device, storage medium and computer equipment
CN112799748B (en) Expression element display method, device, equipment and computer readable storage medium
JP7313641B1 (en) terminal and computer program
WO2023273692A1 (en) Method and apparatus for replying to information, electronic device, computer storage medium, and product
CN110830412B (en) Method and server for sharing membership permission
US11601481B2 (en) Image-based file and media loading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1252713

Country of ref document: HK

GR01 Patent grant