CN110752983B - Interaction method, device, interface, medium and computing equipment - Google Patents

Interaction method, device, interface, medium and computing equipment

Info

Publication number: CN110752983B
Application number: CN201910967378.1A
Authority: CN (China)
Prior art keywords: users, group, interactive, time, user
Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Other languages: Chinese (zh)
Other versions: CN110752983A
Inventor: 胡震宇
Current and original assignee: Hangzhou Netease Cloud Music Technology Co Ltd
Events: application filed by Hangzhou Netease Cloud Music Technology Co Ltd; priority to CN201910967378.1A; publication of application CN110752983A; application granted; publication of CN110752983B

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 — User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52 — ... for supporting social networking services
    • H04L 51/07 — ... characterised by the inclusion of specific contents
    • H04L 51/10 — Multimedia information

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide an interaction method, apparatus, interface, medium, and computing device. The interaction method comprises: dividing users whose audio playback progress falls within the same time interval into a group, and displaying the group's interaction information on a time axis of a shared interface, where the display position of each piece of interaction information corresponds to the posting user's playback progress. With this method, users in the same group can obtain, via the time axis during audio playback, the interaction information that other group members fed back at the current playback position. This addresses two problems of existing music-based social features, namely a weak sense of real-time companionship and a high barrier to participation: it provides real-time social interaction during playback, strengthens companionship and interactivity, increases common topics among users, deepens user engagement, and substantially improves the music social experience.

Description

Interaction method, device, interface, medium and computing equipment
Technical Field
The embodiment of the invention relates to the field of software, in particular to an interaction method, an interaction device, an interaction interface, an interaction medium and a computing device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Currently, music-based social interaction mainly takes the form of a traditional chat room: users listen to music together in the room and can also send text messages or pictures, for example in the group-listening rooms of Xiami Music. Comment pages and message boards in music-player software serve a similar purpose; for instance, users can interact through replies on a song's comment page.
However, with these approaches, users cannot perceive which songs, lyrics, or passages other users with similar taste are enjoying at the same moment, so the sense of real-time companionship is weak. In the traditional chat-room model in particular, a user who joins late cannot see earlier chat content; this further weakens companionship and makes it harder to keep multiple users at the same point in the conversation, raising the barrier to participation.
To address the weak sense of real-time companionship and the high participation barrier of existing music-based social features, a new interactive social scheme is needed.
Disclosure of Invention
Existing music-based social features suffer from a weak sense of real-time companionship and a high barrier to participation, which degrades the user experience. An improved social interaction scheme is therefore highly needed to address these technical problems.
In this context, embodiments of the present invention are intended to provide an interactive method, apparatus, interface, medium, and computing device.
In a first aspect of embodiments of the present invention, there is provided an interaction method, including: dividing users with the audio playing progress in the same time interval into a group, and displaying interactive information of the group of users on a time axis of the same interface, wherein the display position of the interactive information corresponds to the audio playing progress of the users.
In one possible embodiment, the time axis includes a plurality of time intervals, each time interval includes a plurality of time nodes, and the time nodes correspond one-to-one to audio playback positions.
In one possible embodiment, the interactive information includes one or a combination of user identification, audio playing progress and interactive content; and the display position of the interactive information is a time node corresponding to the audio playing progress of the user on a time axis.
In one possible embodiment, the interactive content is text content and/or picture content. Displaying the interaction information of a group of users on a time axis of the same interface comprises: acquiring the text content and/or picture content fed back by a user; and displaying that content at the user's corresponding time node on the time axis.
In one possible embodiment, the interactive content is displayed in a bubble frame, and the interactive content displayed to the same group of users in the bubble frame is the same.
In one possible embodiment, the interactive content displayed by the bubble box is the interactive content with the feedback time closest to the current time.
In one possible embodiment, the user identification is a user avatar. Displaying the interaction information of a group of users on a time axis of the same interface comprises: displaying different users' avatars on the time axis of the same interface in visually distinct display styles.
In one possible embodiment, if the number of at least one group of users on the time axis exceeds a threshold, the at least one group of users is split into corresponding time intervals of at least two time axes.
In one possible embodiment, the method further comprises: switching the interface into a global time-axis browsing mode by shrinking the time axis, wherein in this mode each group of users is displayed together at the time node corresponding to the group's time interval.
In one possible embodiment, groups of users are displayed in the interface in descending order of interaction frequency, where a group's interaction frequency is higher the more interaction information the group has produced.
In one possible embodiment, the method further comprises: selecting a group of users to trigger the current interface to switch to an interface displaying that group's interaction information, and to trigger the audio playback progress to jump into that group's time interval.
In one possible embodiment, the method further comprises: selecting a user identifier to trigger display of a detail floating layer in the interface, wherein the detail floating layer comprises the user's information and interaction information and an operation identifier for contacting or following the user.
In a second aspect of the embodiments of the present invention, there is provided an interactive apparatus, where a processing unit is configured to divide users whose audio playing progresses in a same time interval into a group;
the display unit is configured to display interactive information of a group of users on a time axis of the same interface, wherein the display position of the interactive information corresponds to the audio playing progress of the users.
In one possible embodiment, the time axis includes a plurality of time intervals, each time interval includes a plurality of time nodes, and the time nodes correspond one-to-one to audio playback positions.
In one possible embodiment, the interactive information includes one or a combination of user identification, audio playing progress and interactive content; and the display position of the interactive information is a time node corresponding to the audio playing progress of the user on a time axis.
In one possible embodiment, the interactive content is text content and/or picture content; the presentation unit is specifically configured to: acquiring text content and/or picture content fed back by a user; and displaying the text content and/or the picture content on the corresponding time node of the user on the time axis.
In one possible embodiment, the interactive content is displayed in a bubble frame, and the interactive content displayed to the same group of users in the bubble frame is the same.
In one possible embodiment, the interactive content displayed by the bubble box is the interactive content with the feedback time closest to the current time.
In one possible embodiment, the user identification is a user avatar. The presentation unit is specifically configured to: and differently displaying the head portraits of different users on a time axis of the same interface in different display modes.
In one possible embodiment, if the number of at least one group of users on the time axis exceeds a threshold, the at least one group of users is split into corresponding time intervals of at least two time axes.
In a possible embodiment, the apparatus further comprises a switching unit configured to: switch the interface into a global time-axis browsing mode by shrinking the time axis, wherein in this mode each group of users is displayed together at the time node corresponding to the group's time interval.
In one possible embodiment, groups of users are displayed in the interface in descending order of interaction frequency, where a group's interaction frequency is higher the more interaction information the group has produced.
In a possible embodiment, the apparatus further comprises a switching unit further configured to: in response to selection of a group of users, switch the current interface to an interface displaying that group's interaction information, and position the audio playback progress within that group's time interval.
In a possible embodiment, the processing unit is further configured to: in response to selection of a user identifier, display a detail floating layer in the interface, wherein the detail floating layer comprises the user's information and interaction information and an operation identifier for contacting or following the user.
In a third aspect of embodiments of the present invention, an interface is provided, adapted to the interaction method of any embodiment of the first aspect. The interface comprises a time axis configured to show the interaction information of at least one group of users, wherein the audio playback progress of the users within a group falls in the same time interval, and the display position of each piece of interaction information corresponds to the posting user's playback progress.
In a fourth aspect of embodiments of the present invention, there is provided a medium storing computer-executable instructions for causing a computer to perform the interaction method of any one of the first aspect.
In a fifth aspect of embodiments of the present invention, there is provided a computing device comprising a processing unit, a memory, and an input/output (I/O) interface. The memory is configured to store programs or instructions for execution by the processing unit; the processing unit is configured to execute the interaction method of any embodiment of the first aspect according to the stored programs or instructions; and the I/O interface is configured to receive or transmit data under control of the processing unit.
With the technical scheme provided by the embodiments of the invention, users in the same group can obtain, via the time axis during audio playback, the interaction information that other group members fed back at the current playback position. This addresses the weak sense of real-time companionship and the high participation barrier of existing music-based social features: it provides real-time social interaction during playback, strengthens companionship and interactivity, increases common topics among users, deepens user engagement, and substantially improves the music social experience.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically shows a flow diagram of an interaction method according to an embodiment of the invention;
FIGS. 2a to 2f are schematic interface interaction diagrams illustrating an interaction method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an interaction device according to an embodiment of the invention;
FIG. 4 schematically illustrates an interface interaction diagram of another interaction method according to an embodiment of the invention;
FIG. 5 schematically shows a schematic structural diagram of a medium according to an embodiment of the invention;
FIG. 6 schematically illustrates a structural diagram of a computing device in accordance with an embodiment of the present invention;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, an interaction method, an interaction device, an interaction interface, an interaction medium and a computing device are provided.
In this document, any number of elements in the drawings is by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventor has observed that music-based social interaction currently relies mainly on the traditional chat-room model, in which users cannot perceive which songs, lyrics, or passages other like-minded users are enjoying, so the sense of real-time companionship is weak. Moreover, a user who newly joins a chat room cannot see earlier chat content, which both weakens companionship further and makes it difficult to keep multiple users at the same point in the conversation, raising the barrier to participation. Existing music-based social features therefore suffer from weak real-time companionship and a high participation threshold.
To overcome these problems in the prior art, the invention provides an interaction method, apparatus, interface, medium, and computing device. The interaction method comprises: dividing users whose audio playback progress falls within the same time interval into a group, and displaying the group's interaction information on a time axis of a shared interface, where the display position of each piece of interaction information corresponds to the posting user's playback progress.
By grouping users with similar playback progress and displaying the group's interaction information on a time axis, the method provides real-time social interaction during audio playback. Users in the same group can obtain, via the time axis, the information other members fed back at the current playback position. This avoids the weak real-time companionship of existing music-based social features, keeps users in the same group at the same point in the conversation, increases common topics among users, deepens engagement, and substantially improves the music social experience. The principles of the apparatus, interface, medium, and computing device are similar to those of the method and are not repeated here.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Embodiments of the invention can be applied to interaction scenarios of media-stream playing systems, in particular multi-user interaction in an audio playing system. Applicable scenarios include, but are not limited to: user interaction in a music-playing application during audio playback, leaving messages or comments on a song in a music-playing application, and user interaction in a video-playing application during video playback.
In an application scenario of this interaction scheme, a terminal device collects the interaction information a user enters, for example text and pictures, through its data-acquisition hardware, and displays on its screen an interface that carries the interaction information, for example a time axis, a detail floating layer, or icons of other forms. The interface may be downloaded from a server, with the terminal device itself analyzing the collected data (i.e., executing the interaction scheme), or the analysis may run on the server. In practice the server side may have multiple tiers: a receiving server accepts the data containing interaction information sent by terminal devices and forwards it to a processing server, which processes that data together with the audio playback progress according to the interaction method of the invention, produces a user interface, and returns it to the terminal device for display. The users involved may be online in real time and/or offline; this is not limited here. Specifically, for a given piece of audio, the interaction information shown on the time axis may come from users playing it online in real time or from users who played it offline. For example, interaction information a, fed back by user A while playing a segment of audio offline, and interaction information b, fed back by user B while playing the same segment online in real time, can be shown on the time axis at the same time.
Exemplary method
In the following, a method for user interaction according to an exemplary embodiment of the invention is described with reference to the drawings in connection with an application scenario. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
The embodiment of the invention provides an interaction method suitable for various media-stream playing systems, such as audio playing systems and video playing systems. The method is described in detail below using an audio playing system as an example. As shown in fig. 1, the interaction method includes:
s101, dividing users with audio playing progress in the same time interval into a group;
s102, displaying interactive information of a group of users on a time axis of the same interface, wherein the display position of the interactive information corresponds to the audio playing progress of the users.
The interaction method is described in detail below with reference to the accompanying drawings:
In the embodiment of the invention, the time axis comprises a plurality of time intervals; each time interval further comprises a plurality of time nodes, and the time nodes correspond one-to-one to audio playback positions. The time axis is a linear structure of equally spaced nodes arranged from top to bottom, with the start time of playback as the initial node: nodes from top to bottom represent successively later playback positions, and the distance between adjacent nodes represents the time between them. Taking the interface shown in fig. 2a as an example, the time nodes corresponding to the four users, from top to bottom, are 02:11, 02:30, 02:41, and 02:52. Besides the linear form shown in fig. 2a, the time axis may take various other forms, such as a circular or tree-shaped time axis. Note also that during playback the time axis scrolls with the playback progress, and stops when playback stops.
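The timeline structure described above, intervals containing equally spaced time nodes, can be sketched in Python as follows. This is an illustration only; the names (`TimeInterval`, `build_timeline`, etc.) and data shapes are assumptions, as the patent does not prescribe an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TimeNode:
    seconds: int  # the playback position (in seconds) this node represents

@dataclass
class TimeInterval:
    start: int                       # inclusive start of the interval, in seconds
    end: int                         # exclusive end of the interval, in seconds
    nodes: list = field(default_factory=list)

def build_timeline(duration: int, interval_len: int, node_spacing: int) -> list:
    """Build a timeline: intervals of `interval_len` seconds, each holding
    equally spaced time nodes `node_spacing` seconds apart."""
    timeline = []
    for start in range(0, duration, interval_len):
        interval = TimeInterval(start, min(start + interval_len, duration))
        interval.nodes = [TimeNode(t)
                          for t in range(interval.start, interval.end, node_spacing)]
        timeline.append(interval)
    return timeline

# A 240-second track split into 60-second intervals with nodes every 10 seconds
timeline = build_timeline(duration=240, interval_len=60, node_spacing=10)
```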
In S101, each user is assigned to the group corresponding to the time interval that contains that user's audio playback progress. Taking the interface shown in fig. 2a as an example, assume the audio is titled "mountain-flower porridge" and the time interval is 2:00 to 3:00; S101 then groups together the 4 users whose playback progress lies between 2:00 and 3:00.
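The grouping step S101 amounts to bucketing users by the interval containing their playback progress. A minimal sketch, assuming progress is tracked in seconds and intervals have a fixed length (both assumptions; the patent leaves these unspecified):

```python
def group_users_by_progress(progress_by_user: dict, interval_len: int = 60) -> dict:
    """S101: bucket users by the time interval containing their playback progress.

    `progress_by_user` maps a user id to that user's playback position in
    seconds; the result maps an interval index to the user ids in that group.
    """
    groups: dict = {}
    for uid, progress in progress_by_user.items():
        key = progress // interval_len           # index of the containing interval
        groups.setdefault(key, []).append(uid)
    return groups

# The four users of fig. 2a (02:11, 02:30, 02:41, 02:52) plus one early listener
progress_by_user = {"a": 131, "b": 150, "c": 161, "d": 172, "e": 30}
groups = group_users_by_progress(progress_by_user)
# users a-d all fall in the 2:00-3:00 interval (index 2); e falls in index 0
```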
The number of time intervals on the time axis may be preset, or adjusted dynamically based on the number of users playing the audio, or based on the number of users leaving messages or comments on it. Before S101, the audio playback progress and/or user identifier of each user playing the audio is obtained, and a threshold on group size is preset. If the number of users whose playback progress falls in the same time interval exceeds the threshold, those users are split across the corresponding intervals of at least two time axes, so that each group's interaction information remains clearly visible; the embodiment does not restrict whether the split time axes appear on the same interface or on different ones. Likewise, if the size of at least one group already on the time axis exceeds the threshold, that group is split across the corresponding intervals of at least two time axes. Splitting users across time axes reduces the user density of any single axis, avoids interaction information being crowded out when too many users concentrate in one interval, and preserves a good interactive experience during music-based socializing.
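The threshold-based splitting described above could look like the following sketch. The chunking policy is an assumption: the patent only requires that an oversized group be split across at least two time axes, not how members are distributed.

```python
def split_group(user_ids: list, threshold: int) -> list:
    """Split a group whose size exceeds `threshold` across several time axes.

    Returns one member list per time axis. Simple sequential chunking is used
    here for illustration.
    """
    if len(user_ids) <= threshold:
        return [user_ids]                        # one time axis is enough
    return [user_ids[i:i + threshold]
            for i in range(0, len(user_ids), threshold)]
```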
S102, displaying a group's interaction information on the time axis of a shared interface, can be implemented in several ways. The interaction information includes, but is not limited to, one or a combination of: a user identifier, the audio playback progress, and interactive content. The user identifier may be a user avatar or a user name. The interactive content may be text and/or pictures; for example, the text may be the user's impression of the audio, and a picture may be an emoticon or an image related to the song that expresses that impression. Beyond these three items, the embodiment does not restrict the interaction information from carrying other data. Because the display position of the interaction information corresponds to the user's playback progress, specifically the time node on the axis matching that progress, the user's feedback on the current audio can be shown intuitively.
In one implementation of S102, the text and/or picture content fed back by a user is obtained and displayed at the user's corresponding time node on the time axis. Optionally, the interactive content is displayed in a bubble box, and the bubble shows the same content to all users in the group; further, the bubble shows the content whose feedback time is closest to the current time. In one example, the audio is titled "mountain-flower porridge", the time interval is 2:00 to 3:00, and the interaction information consists of a user avatar, the user's playback progress, and the content the user fed back. After S101 groups the 4 users whose playback progress lies between 2:00 and 3:00, S102 displays the 4 avatars and playback progresses on either side of the corresponding time nodes, with each progress shown below its user's avatar and the interactive content shown in a bubble box on the same side as the avatar, as in fig. 2b. That is, the text "this song is invincible" from the user at 02:11 appears in a bubble box on the same side as that user's avatar, and the picture from the user at 02:30 appears in a bubble box on the same side as that user's avatar.
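Choosing the bubble's content, "the interactive content with the feedback time closest to the current time", reduces to picking the group's most recent message. A sketch, assuming messages are `(timestamp, content)` pairs (an assumed shape):

```python
def bubble_content(messages: list):
    """Return the content whose feedback time is closest to the current time,
    i.e. the group's most recent message. Returns None for a group that has
    no feedback yet."""
    if not messages:
        return None
    # max over timestamps selects the most recently posted pair
    return max(messages, key=lambda m: m[0])[1]
```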
In another implementation of S102, when the user identifier is an avatar, different users' avatars are displayed on the time axis of the same interface in visually distinct styles, for example bubble boxes with different borders, or identifiers rendered in different display forms. Taking the interface in fig. 2b as an example, if the user at 02:30 is the user of the terminal device displaying the interface, that user's avatar is shown with a highlighted border.
Optionally, the interface background is set according to audio attributes, rendering the mood of the audio more intuitively. Audio attributes include, but are not limited to, the audio type, tempo, and atmosphere. For example, if the audio type is electronic music, the background may be set to a visually intense color; or it may be re-rendered as the tempo changes, shifting to a more intense color when the tempo speeds up and to a softer color when it slows down.
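The attribute-driven background rendering might be sketched as a simple mapping. The specific color names and the 120 BPM cutoff are invented for illustration; the patent only says that faster electronic music gets a more visually intense background.

```python
def background_color(audio_type: str, tempo_bpm: int) -> str:
    """Pick an interface background from audio attributes (illustrative
    mapping only; thresholds and colors are assumptions)."""
    if audio_type == "electronic":
        # stronger visual stimulation for a faster tempo
        return "vivid-red" if tempo_bpm >= 120 else "soft-purple"
    return "neutral-grey"                        # default for other audio types
```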
Optionally, selecting a user identifier triggers display of a detail floating layer in the interface, where the detail floating layer includes the user's profile information and interactive information, together with operation identifiers for contacting or following the user. Selection operations in embodiments of the invention may be implemented by, but are not limited to, voice input, gesture capture, or operations such as tapping and sliding on the display screen of the terminal device. In the interface shown in fig. 2c, selecting the avatar of the user whose playing progress is 02:11 triggers a detail floating layer in the form of a rounded rectangle, which contains that user's avatar, profile data, the interactive content the user fed back, and icons for privately messaging or following the user.
Optionally, the interface is switched to a global-timeline browsing mode by shrinking the time axis, for example with a two-finger pinch gesture. In the global-timeline browsing mode, each group of users is displayed collectively at the single time node corresponding to the time interval in which the group lies. Furthermore, the user identifiers of the multiple users in a group may be stacked together on one side of that time node. Multiple groups of users are shown in this mode: in one embodiment, the groups are displayed in chronological order of audio playing progress; in another embodiment, groups with higher interaction frequency are displayed first, in descending order of frequency, where a group's interaction frequency is higher the more interactive information the group has. Taking the interface shown in fig. 2d as an example, the interface is in global-timeline browsing mode and shows the user identifiers of 4 groups of users together with the identifier of the terminal device's own user; the time intervals of the groups are "2:02-2:10", "2:22-2:38", and "2:41-2:58" respectively. Further, selecting the user identifiers of a group triggers display of that group's detail floating layer in the interface, which includes, but is not limited to, the group's time interval, user information, and user identifiers. Taking the interface shown in fig. 2e as an example, selecting the avatars of the group whose time interval is "2:22-2:38" triggers the group's detail floating layer, which shows the group's time interval, gender information, avatars, and user names.
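The two display orders for the global-timeline mode, chronological and by interaction frequency, could be sketched as below. The representation of groups as a mapping from interval index to interaction items, and the function name, are assumptions for illustration.

```python
def order_groups(groups, by="frequency"):
    """Order user groups for the global-timeline view.

    groups: dict mapping interval index -> list of interaction items.
    'time' keeps chronological order of the intervals; 'frequency'
    ranks groups with more interaction items first, matching the rule
    that more interactive information means a higher interaction
    frequency."""
    if by == "time":
        return sorted(groups.items())
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)
```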
Further, selecting a group of users triggers the current interface to switch to an interface displaying that group's interactive information, and triggers the audio playing progress to jump to the group's time interval. Still taking the interface shown in fig. 2d as an example, a switch identifier, "enter", for switching to the interface showing a group's interactive information is displayed beside each of the 4 groups of user identifiers, along with user information for the group, e.g. gender information and the activity level of the users in the group; beside the identifier of the terminal device's own user there is a positioning identifier, "return to my progress", for jumping to the interface corresponding to the time interval in which that user lies.
Optionally, the interface further includes prompt information indicating that the current audio is about to switch to the next audio. Taking the interface shown in fig. 2f as an example, this prompt is displayed in the detail floating layer together with the remaining duration of the current audio; further, via two operation icons in the detail floating layer, the user can choose to pause the song or ignore it.
The interaction method shown in fig. 1 divides users with similar audio playing progress into groups and displays the interactive information of each group on a time axis, so that during audio playback a user can see, via the time axis, the interactive information fed back by same-group users at the current playing progress. This avoids the weak sense of real-time companionship and the high participation threshold of existing music social modes; it enables real-time social interaction during audio playback, strengthens the sense of companionship and interactivity of music-based socializing, increases common topics among users, deepens user participation, and greatly improves the user's music social experience.
Exemplary devices
Having described the interaction method of the exemplary embodiments of the present invention, an interaction apparatus of an exemplary embodiment is described next. The interaction apparatus provided by the present invention can be applied to any of the interaction methods provided by the embodiments corresponding to fig. 1. Referring to fig. 3, the interaction apparatus includes at least:
a processing unit 301 configured to divide users whose audio playing progresses are in the same time interval into a group;
the presentation unit 302 is configured to display interactive information of a group of users on a time axis of the same interface, where a display position of the interactive information corresponds to an audio playing progress of the users.
Optionally, the time axis includes a plurality of time intervals, each time interval includes a plurality of time nodes, and the time nodes correspond one-to-one with audio playing progresses.
Optionally, the interactive information includes one or a combination of a user identifier, an audio playing progress and interactive content; and the display position of the interactive information is a time node corresponding to the audio playing progress of the user on a time axis.
Optionally, the interactive content is text content and/or picture content; the presentation unit 302 is specifically configured to obtain the text content and/or picture content fed back by the user, and display it at the time node corresponding to the user on the time axis.
Accordingly, the interactive content is displayed in a bubble box, and the interactive content shown in the bubble box is the same for all users in the same group.
Accordingly, the interactive content displayed in the bubble box is the content whose feedback time is closest to the current moment.
Accordingly, the user identifier is a user avatar. The presentation unit 302 is specifically configured to visually distinguish the avatars of different users on the time axis of the same interface using different display styles.
Optionally, if the number of users in at least one group on the time axis exceeds a threshold, that group is split across the corresponding time intervals of at least two time axes.
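The over-threshold split could be sketched as follows. The chunking strategy here is a hypothetical choice; the text only requires that an oversized group be spread over at least two time axes.

```python
def split_group(group, threshold):
    """Split a group whose size exceeds the threshold into chunks of at
    most `threshold` users, each chunk to be rendered on its own copy
    of the time interval on a separate time axis."""
    if len(group) <= threshold:
        return [group]
    return [group[i:i + threshold] for i in range(0, len(group), threshold)]
```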
Optionally, the apparatus further includes a switching unit configured to: switch the interface to a global time axis browsing mode by reducing the time axis, wherein the same group of users in the global time axis browsing mode are displayed collectively at the same time node corresponding to their time interval.
Correspondingly, groups with higher interaction frequency are preferentially displayed in the interface, in descending order of frequency, where the more interactive information a group has, the higher its interaction frequency.
Optionally, the apparatus further includes a switching unit, further configured to: trigger the current interface to switch to an interface displaying the interactive information of a group of users when the group is selected; and trigger the audio playing progress to be positioned within the time interval of the group.
Optionally, the processing unit 301 is further configured to: in response to selection of a user identifier, trigger display of a detail floating layer in the interface, where the detail floating layer includes user information and interactive information of the user and operation identifiers for contacting or following the user.
Exemplary interface
Having described the interaction method and apparatus of the exemplary embodiments of the present invention, an interface of an exemplary embodiment is described next. The exemplary interface provided by the present invention may be applied to any of the interaction methods provided by the embodiments corresponding to fig. 1. Referring to fig. 4, the interface 400 includes at least:
the time axis 401, configured to display interactive information of at least one group of users, where the audio playing progresses of the users in the same group fall within the same time interval, and the display position of the interactive information corresponds to each user's audio playing progress.
Optionally, the time axis 401 includes a plurality of time intervals, each time interval includes a plurality of time nodes, and the time nodes correspond one-to-one with audio playing progresses.
Optionally, the interactive information includes one or a combination of a user identifier, an audio playing progress and interactive content; and the display position of the interactive information is a time node corresponding to the audio playing progress of the user on the time axis 401.
Optionally, the interactive content is text content and/or picture content; displaying the interactive information of at least one group of users specifically comprises: displaying the text content and/or picture content fed back by a user at that user's corresponding time node on the time axis 401.
The interactive content is displayed in a bubble box, and the interactive content shown in the bubble box is the same for all users in the same group.
The interactive content displayed in the bubble box is the content whose feedback time is closest to the current moment.
Optionally, the user identifier is a user avatar; displaying the interactive information of at least one group of users specifically comprises: visually distinguishing the avatars of different users on the time axis 401 of the same interface using different display styles.
Optionally, the interface further includes a global timeline browsing mode, and the same group of users in the global timeline browsing mode are collectively displayed on the same time node corresponding to the time interval where the group of users is located.
Optionally, at least one group of users is displayed in the interface 400 in the order of the interaction frequency from high to low, wherein the more the interaction information of a group of users, the higher the interaction frequency of the group of users.
Optionally, the interface 400 further includes a detail floating layer, which includes user information, interactive information, and operation identifiers for contacting or following the user.
It should be noted that, in addition to the implementation form of the time axis 401 shown in fig. 4, the time axis according to embodiments of the present invention may also be implemented as a circular time axis, a tree-shaped time axis, or in another form.
Exemplary Medium
Having described the interaction method, apparatus, and interface of the exemplary embodiments of the present invention, referring next to fig. 5, the present invention provides an exemplary medium storing computer-executable instructions which, when executed, cause a computer to perform the interaction method of any of the exemplary embodiments corresponding to fig. 1.
Exemplary computing device
Having described the interaction methods, apparatus, interfaces, and media of the exemplary embodiments of this invention, an exemplary computing device 60 provided by this invention is described next with reference to fig. 6. The computing device 60 comprises a processing unit 601, a memory 602, a bus 603, an external device 604, an I/O interface 605, and a network adapter 606. The memory 602 comprises a random access memory (RAM) 6021, a cache memory 6022, a read-only memory (ROM) 6023, and a storage unit array 6025 of at least one storage unit 6024. The memory 602 stores programs or instructions executed by the processing unit 601; the processing unit 601 executes, according to the programs or instructions stored in the memory 602, the method of any exemplary embodiment corresponding to fig. 1; and the I/O interface 605 receives or transmits data under the control of the processing unit 601.
It should be noted that although several units/modules or sub-units/modules of the apparatus are mentioned in the detailed description above, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided among, and embodied by, a plurality of units/modules.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, and that the division into aspects is for convenience of description only; features in those aspects may be combined to advantage. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (12)

1. An interactive method, comprising:
acquiring the audio playing progress of a user playing audio;
dividing users with audio playing progress in the same time interval into a group;
displaying interactive information of a group of users on a time axis of the same interface, wherein the display position of the interactive information corresponds to the audio playing progress of the users;
switching the interface to a global time axis browsing mode by reducing the time axis, wherein the same group of users in the global time axis browsing mode are displayed collectively at the same time node corresponding to their time interval;
triggering a current interface to be switched to an interface displaying the interactive information of a group of users by selecting the group of users; and
triggering the audio playing progress to be positioned within the time interval of the group of users.
2. The interactive method of claim 1, wherein the interactive information comprises one or a combination of a user identifier, an audio playing progress and interactive content; and
the display position of the interactive information is a time node on the time axis corresponding to the audio playing progress of the user.
3. The interactive method of claim 2, wherein the interactive content is text content and/or picture content;
the interactive information of a group of users is displayed on a time axis of the same interface, and the method comprises the following steps:
acquiring the text content and/or the picture content fed back by a user;
and displaying the text content and/or the picture content on a corresponding time node of the user on the time axis.
4. The interactive method of claim 3, wherein the interactive contents are displayed in a bubble frame, and the interactive contents displayed to the same group of users in the bubble frame are the same.
5. The interactive method of claim 3 or 4, wherein the interactive content displayed by the bubble box is the interactive content with the feedback time closest to the current time.
6. The interactive method of claim 2, wherein the user identification is a user avatar;
the interactive information of a group of users is displayed on a time axis of the same interface, and the method comprises the following step: displaying the avatars of different users on the time axis of the same interface distinguishably, using different display styles.
7. The interaction method according to any one of claims 1 to 6, wherein if the number of users in at least one group on the time axis exceeds a threshold value, the at least one group of users is split across the corresponding time intervals of at least two time axes.
8. The interactive method of claim 1, wherein at least one group of users having high interaction frequency is preferentially displayed in the interface in an order of the interaction frequency from high to low, wherein the more the interaction information of a group of users, the higher the interaction frequency of the group of users.
9. An interaction method as claimed in any one of claims 1 to 3, further comprising:
and selecting a user identifier to trigger a detail floating layer to be displayed in an interface, wherein the detail floating layer comprises user information and interaction information of the user and an operation identifier for contacting or paying attention to the user.
10. An interactive device, wherein the interactive device is adapted to the interactive method of any one of claims 1 to 9, and the interactive device comprises:
a unit for acquiring an audio playing progress of a user playing audio;
the processing unit is configured to divide users with audio playing progress in the same time interval into a group;
the display unit is configured to display interactive information of a group of users on a time axis of the same interface, wherein the display position of the interactive information corresponds to the audio playing progress of the users, the time axis comprises a plurality of time intervals, the time intervals comprise a plurality of time nodes, and the time nodes are in one-to-one correspondence with the audio playing progress;
a unit for switching the interface to a global time axis browsing mode by reducing the time axis, wherein the same group of users in the global time axis browsing mode are displayed collectively at the same time node corresponding to their time interval;
a unit for triggering the current interface to be switched to an interface displaying the interactive information of a group of users by selecting the group of users; and
a unit for triggering the audio playing progress to be positioned within the time interval of the group of users.
11. A computer-readable storage medium storing program code which, when executed by a processor, implements the interaction method according to one of claims 1 to 9.
12. A computing device comprising a processor and a storage medium storing program code which, when executed by the processor, implements the interactive method as claimed in one of claims 1 to 9.
CN201910967378.1A 2019-10-12 2019-10-12 Interaction method, device, interface, medium and computing equipment Active CN110752983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910967378.1A CN110752983B (en) 2019-10-12 2019-10-12 Interaction method, device, interface, medium and computing equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910967378.1A CN110752983B (en) 2019-10-12 2019-10-12 Interaction method, device, interface, medium and computing equipment

Publications (2)

Publication Number Publication Date
CN110752983A CN110752983A (en) 2020-02-04
CN110752983B true CN110752983B (en) 2022-05-20

Family

ID=69278100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910967378.1A Active CN110752983B (en) 2019-10-12 2019-10-12 Interaction method, device, interface, medium and computing equipment

Country Status (1)

Country Link
CN (1) CN110752983B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014994B (en) * 2021-04-14 2023-04-28 杭州网易云音乐科技有限公司 Multimedia playing control method and device, storage medium and electronic equipment
CN113411680B (en) * 2021-06-18 2023-03-21 腾讯科技(深圳)有限公司 Multimedia resource playing method, device, terminal and storage medium
CN113556611B (en) * 2021-07-20 2022-08-16 上海哔哩哔哩科技有限公司 Video watching method and device
CN115174968A (en) * 2022-06-13 2022-10-11 咪咕文化科技有限公司 Video user interaction method, device and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107479810A (en) * 2016-06-07 2017-12-15 北京三星通信技术研究有限公司 Operating method and terminal device based on secondary display area

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164414B (en) * 2011-12-09 2015-07-01 腾讯科技(深圳)有限公司 Multimedia file comment display method and multimedia file comment display system
US9116596B2 (en) * 2012-06-10 2015-08-25 Apple Inc. Sharing images and comments across different devices
CN103914464B (en) * 2012-12-31 2017-05-24 上海证大喜马拉雅网络科技有限公司 Streaming media based interactive display method and system of accompanied comments
US10055411B2 (en) * 2015-10-30 2018-08-21 International Business Machines Corporation Music recommendation engine
CN105847986A (en) * 2016-02-01 2016-08-10 乐视移动智能信息技术(北京)有限公司 Real-time music commenting method and system
CN106454538B (en) * 2016-11-07 2020-09-25 上海幻电信息科技有限公司 Real-time bullet screen interaction method
CN109428908B (en) * 2017-08-23 2021-06-01 腾讯科技(深圳)有限公司 Information display method, device and equipment
CN107995515B (en) * 2017-11-30 2021-01-29 华为技术有限公司 Information prompting method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107479810A (en) * 2016-06-07 2017-12-15 北京三星通信技术研究有限公司 Operating method and terminal device based on secondary display area

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"The Revival of Commercialization: China Digital Music Industry Research Report, 2019"; Shanghai iResearch Market Consulting Co., Ltd.; iResearch Series Research Reports; 30 April 2019; pp. 1-45 *

Also Published As

Publication number Publication date
CN110752983A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
CN110752983B (en) Interaction method, device, interface, medium and computing equipment
CN109756787B (en) Virtual gift generation method and device and virtual gift presentation system
CN109011574B (en) Game interface display method, system, terminal and device based on live broadcast
CN108833936B (en) Live broadcast room information pushing method, device, server and medium
CN111857923B (en) Special effect display method and device, electronic equipment and computer readable medium
CN110568984A (en) Online teaching method and device, storage medium and electronic equipment
CN113691829B (en) Virtual object interaction method, device, storage medium and computer program product
CN109495427B (en) Multimedia data display method and device, storage medium and computer equipment
CN104243463A (en) Method and device for displaying virtual items
CN110719529B (en) Multi-channel video synchronization method, device, storage medium and terminal
US20160259512A1 (en) Information processing apparatus, information processing method, and program
CN113194349A (en) Video playing method, commenting method, device, equipment and storage medium
CN106375860A (en) Video playing method and device, and terminal and server
CN106878825B (en) Live broadcast-based sound effect display method and device
CN111569436A (en) Processing method, device and equipment based on interaction in live broadcast fighting
CN109756766B (en) Virtual gift display method, storage medium, electronic device and system of live broadcast platform
CN109117053B (en) Dynamic display method, device and equipment for interface content
CN109954276B (en) Information processing method, device, medium and electronic equipment in game
CN117033700A (en) Method, system and storage medium for assisting courseware display based on AI assistant
CN112169319A (en) Application program starting method, device, equipment and storage medium
CN111859869A (en) Questionnaire editing method and device, electronic equipment and storage medium
CN114760274B (en) Voice interaction method, device, equipment and storage medium for online classroom
CN114422843B (en) video color egg playing method and device, electronic equipment and medium
CN110891200B (en) Bullet screen based interaction method, device, equipment and storage medium
CN107864409B (en) Bullet screen display method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant