CN117874280A - Multi-user audio data display method, medium, device and computing equipment - Google Patents


Info

Publication number
CN117874280A
CN117874280A
Authority
CN
China
Prior art keywords
audio, user, preference information, information, audio data
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410057664.5A
Other languages
Chinese (zh)
Inventor
石芳瑜
俞静
杜佳楠
孙玮梓
吴林
曹一豪
周敏
李宁
吕峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202410057664.5A
Publication of CN117874280A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60: Information retrieval of audio data
    • G06F16/63: Querying
    • G06F16/635: Filtering based on additional data, e.g. user or group profiles
    • G06F16/638: Presentation of query results
    • G06F16/68: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the disclosure provide a multi-user audio data display method, medium, apparatus and computing device, relating to the technical field of audio. The method acquires a multi-user audio data analysis result for all users who have established a multi-person relationship, the result comprising audio common preference information and audio difference preference information, generates a multi-user audio sharing page, and displays the analysis result on that page. The sharing page can thus intuitively reflect the commonalities and differences in the users' audio preferences, which improves the interactivity of audio data between users, deepens mutual understanding among the users in the multi-person relationship, and brings a better user experience.

Description

Multi-user audio data display method, medium, device and computing equipment
Technical Field
Embodiments of the present disclosure relate to the field of audio technology, and more particularly, to a multi-user audio data display method, medium, apparatus, and computing device.
Background
This section is intended to provide a background or context for embodiments of the present disclosure. The description herein is not admitted to be prior art merely by its inclusion in this section.
With the rapid development of audio technology and the rapid growth of audio resources, mainstream audio applications can now provide users with a massive catalog of audio, from which users select what to listen to.
In the related art, each user's own audio data, such as frequently played songs and followed artists, is displayed on an interface, but the audio data of different users is independent and does not interact. If users want to get to know each other through audio data, one user generally has to log in to the other's account to view the other's data. Because the data is kept separate, the commonalities and differences in the users' audio preferences cannot be reflected intuitively, so the interactivity of audio data between users is poor, which harms the user experience.
Disclosure of Invention
The disclosure provides a multi-user audio data display method, medium, apparatus and computing device to address the poor interactivity of multi-user audio data.
In a first aspect of embodiments of the present disclosure, there is provided a multi-user audio data display method, including: acquiring a multi-user audio data analysis result for each target user, where the target users are the users in a multi-person relationship established based on an audio playing program; and generating a multi-user audio sharing page and displaying the audio data analysis result on it, where the analysis result comprises the audio common preference information and the audio difference preference information of the target users.
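For concreteness, the sketch below illustrates this two-step flow in Python. It is an interpretation only: all names (AnalysisResult, acquire_analysis_result, render_sharing_page) and the sample preference strings are hypothetical and not part of the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisResult:
    # Per-user audio difference preference information.
    differences: dict[str, list[str]] = field(default_factory=dict)
    # Audio common preference information shared by the target users.
    common: list[str] = field(default_factory=list)

def acquire_analysis_result(target_users: list[str]) -> AnalysisResult:
    # Stand-in for acquisition (from a server, or computed locally).
    return AnalysisResult(
        differences={u: [f"{u} leans towards late-night listening"] for u in target_users},
        common=["both favour the folk genre"],
    )

def render_sharing_page(result: AnalysisResult) -> str:
    # Lay out the shared preferences and each user's difference preferences.
    lines = ["== multi-user audio sharing page =="]
    lines += [f"[shared] {c}" for c in result.common]
    for user, prefs in result.differences.items():
        lines += [f"[{user}] {p}" for p in prefs]
    return "\n".join(lines)

print(render_sharing_page(acquire_analysis_result(["user_a", "user_b"])))
```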
In one embodiment of the present disclosure, the method further comprises: respectively acquiring the historical audio playing data of each target user; and determining the audio data analysis result of each target user based on the historical audio playing data.
In another embodiment of the present disclosure, determining the audio data analysis result of each target user based on the historical audio playing data includes: determining, based on the historical audio data of each target user, at least one piece of audio difference preference information for each target user and at least one piece of audio common preference information shared between the target users; and obtaining the audio data analysis result from each piece of audio difference preference information and each piece of audio common preference information.
In yet another embodiment of the present disclosure, determining the per-user audio difference preference information and the shared audio common preference information based on the historical audio data includes: extracting the audio feature information of each target user from the historical audio data, and computing the preference similarity of the audio feature information between the target users; and determining, based on the preference similarity, at least one piece of audio difference preference information for each target user and at least one piece of audio common preference information between the target users.
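The disclosure does not fix a similarity measure, so the sketch below assumes cosine similarity over per-user feature-count vectors and a hypothetical threshold for splitting common preference information from difference preference information; all names and numbers are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Similarity of two users' feature-count vectors, e.g. plays per genre tag.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(feature: str, a: Counter, b: Counter, threshold: float = 0.6):
    # Assumed rule: above the threshold the feature yields common preference
    # information; below it, each user's top value yields difference information.
    if cosine_similarity(a, b) >= threshold:
        shared = (a + b).most_common(1)[0][0]
        return "common", f"both lean towards {shared} for {feature}"
    return "difference", (max(a, key=a.get), max(b, key=b.get))

print(classify("genre", Counter(folk=12, rock=3), Counter(folk=9, rock=4)))
```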
In still another embodiment of the present disclosure, the audio feature information includes audio playing period preference information, and extracting the audio feature information of each target user from the historical audio data includes: traversing the target users and obtaining the time at which each audio item in the historical audio data was played; determining the period in which each audio item was played according to that time and a preset period mapping table; and determining the audio playing period preference information according to the number of audio items falling in each period.
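As a rough illustration, the period mapping table might look like the following sketch; the concrete period boundaries and names are assumptions, since the disclosure only specifies that a preset mapping table is used.

```python
from collections import Counter
from datetime import datetime

# Assumed period mapping table: (start hour, end hour, period name).
PERIOD_TABLE = [(0, 6, "late night"), (6, 12, "morning"),
                (12, 18, "afternoon"), (18, 24, "evening")]

def period_of(play_time: datetime) -> str:
    return next(name for lo, hi, name in PERIOD_TABLE if lo <= play_time.hour < hi)

def play_period_preference(play_times: list[datetime]) -> str:
    # Count plays per period; the busiest period is the preference.
    return Counter(period_of(t) for t in play_times).most_common(1)[0][0]

print(play_period_preference([datetime(2024, 1, 1, 23), datetime(2024, 1, 2, 22),
                              datetime(2024, 1, 3, 9)]))
```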
In still another embodiment of the present disclosure, the audio feature information includes genre preference information, and extracting the audio feature information of each target user from the historical audio data includes: traversing the target users, obtaining the genre tag of each audio item in the historical audio data, and determining the genre preference information based on the genre tags.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre tags includes: determining first genre preference information based on the number of audio items under each genre tag.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre tags includes: acquiring the playing time corresponding to the genre tag of each audio item in the historical audio data; determining, from the genre tags and their playing times, the per-period genre tag information for audio played in different periods of the day; and determining second genre preference information based on the per-period genre tag information.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre tags includes: acquiring the target user's historical favorite genre tag; determining the genre tag with the largest number of audio items as the current favorite genre tag; and, when the current favorite genre tag is inconsistent with the historical favorite genre tag, treating it as a newly added genre tag and determining third genre preference information based on that newly added tag.
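The three genre-preference rules above could be sketched as follows; the day/night split, the tag names, and reading "favorite" as the most-played tag are assumptions filling in details the disclosure leaves open.

```python
from collections import Counter

def first_genre_preference(genre_counts: Counter) -> str:
    # Rule 1: the genre tag covering the most audio items.
    return genre_counts.most_common(1)[0][0]

def second_genre_preference(plays: list[tuple[str, int]]) -> dict[str, str]:
    # Rule 2: the dominant genre tag per daily period; `plays` holds
    # (genre_tag, hour_of_day) pairs and the day/night split is assumed.
    by_period: dict[str, Counter] = {}
    for tag, hour in plays:
        period = "night" if hour < 6 or hour >= 22 else "day"
        by_period.setdefault(period, Counter())[tag] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_period.items()}

def third_genre_preference(genre_counts: Counter, historical_favorite: str):
    # Rule 3: a current favorite that differs from the historical favorite
    # counts as a newly added genre tag.
    favorite = genre_counts.most_common(1)[0][0]
    return favorite if favorite != historical_favorite else None

counts = Counter({"folk": 15, "rock": 4})
print(first_genre_preference(counts),
      second_genre_preference([("folk", 23), ("rock", 10)]),
      third_genre_preference(counts, "rock"))
```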
In yet another embodiment of the present disclosure, the audio feature information includes singer preference information, and extracting the audio feature information of each target user from the historical audio data includes: traversing the target users, obtaining the singer of each audio item in the historical audio data, and determining the singer preference information based on the singers.
In yet another embodiment of the present disclosure, determining the singer preference information based on the singers includes: determining first singer preference information based on the singers whose number of audio items reaches a first preset threshold.
In yet another embodiment of the present disclosure, determining the singer preference information based on the singers includes: acquiring the follower count of the singer of each audio item in the historical audio data; ranking the singers by the number of audio items each singer accounts for, and taking at least one singer ranked above a preset position as a favorite singer; and determining second singer preference information based on the favorite singers whose follower count does not reach a second preset threshold.
In yet another embodiment of the present disclosure, determining the singer preference information based on the singers includes: acquiring the target user's historical favorite singers whose audio has not been played within a preset period; and, when a singer matches one of those historical favorite singers, determining third singer preference information based on the matched singers.
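A sketch of the three singer-preference rules under assumed thresholds; the third rule is read here as matching current listening data against dormant historical favorites, which is one possible interpretation of the wording above.

```python
from collections import Counter

def first_singer_preference(singer_counts: Counter, threshold: int = 10) -> list[str]:
    # Rule 1: singers whose played-track count reaches the first preset threshold.
    return [s for s, n in singer_counts.items() if n >= threshold]

def second_singer_preference(singer_counts: Counter, followers: dict[str, int],
                             top_k: int = 3, follower_cap: int = 100_000) -> list[str]:
    # Rule 2: favorite singers (top-ranked by play count) whose follower
    # count stays below the second preset threshold.
    favorites = [s for s, _ in singer_counts.most_common(top_k)]
    return [s for s in favorites if followers.get(s, 0) < follower_cap]

def third_singer_preference(current_singers: set[str],
                            dormant_favorites: set[str]) -> set[str]:
    # Rule 3 (one reading): historical favorites with no recent plays that
    # reappear among the singers in the current data.
    return current_singers & dormant_favorites

counts = Counter({"singer_a": 12, "singer_b": 2})
print(first_singer_preference(counts),
      second_singer_preference(counts, {"singer_a": 500, "singer_b": 2_000_000}),
      third_singer_preference({"singer_a"}, {"singer_a", "singer_c"}))
```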
In still another embodiment of the present disclosure, the audio feature information includes audio release period preference information, and extracting the audio feature information of each target user from the historical audio data includes: traversing the target users to obtain the release time of each audio item in the historical audio data; determining the release period of each audio item based on its release time and a preset release period mapping table; and determining the audio release period preference information according to the number of audio items in each release period.
In yet another embodiment of the present disclosure, the audio feature information includes audio popularity preference information, and extracting the audio feature information of each target user from the historical audio data includes: traversing the target users to obtain the play count of each audio item in the historical audio data; determining the popularity of each audio item based on its play count and a preset audio popularity mapping table; and determining the audio popularity preference information according to the number of audio items at each popularity level.
In yet another embodiment of the present disclosure, the audio feature information includes audio language preference information, and extracting the audio feature information of each target user from the historical audio data includes: traversing the target users and obtaining the language tag of each audio item in the historical audio data, so as to determine the audio language preference information based on the number of audio items under each language tag.
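Release period, popularity and language preference all follow the same count-per-bucket pattern; a generic sketch with assumed table boundaries:

```python
from collections import Counter

def bucket_preference(values: list[int], table: list[tuple[int, int, str]]) -> str:
    # Map each raw value through a preset mapping table, then keep the
    # bucket containing the most audio items.
    def bucket(v: int) -> str:
        return next(name for lo, hi, name in table if lo <= v < hi)
    return Counter(bucket(v) for v in values).most_common(1)[0][0]

# Release-period preference from release years (boundaries are assumed).
ERA_TABLE = [(1980, 2000, "classics"), (2000, 2015, "2000s"), (2015, 2100, "recent")]
print(bucket_preference([2018, 2019, 2003], ERA_TABLE))

# Popularity preference from play counts (thresholds are assumed).
HEAT_TABLE = [(0, 10_000, "niche"), (10_000, 1_000_000, "popular"),
              (1_000_000, 10**12, "viral")]
print(bucket_preference([500, 800, 2_000_000], HEAT_TABLE))

# Language preference is a direct count over language tags, with no table.
print(Counter(["Mandarin", "Mandarin", "English"]).most_common(1)[0][0])
```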
In yet another embodiment of the present disclosure, the method further comprises: acquiring a pre-established first mapping relationship between each of a plurality of pieces of defined audio difference preference information and a plurality of first preference documents, and a second mapping relationship between each of a plurality of pieces of defined audio common preference information and a plurality of second preference documents. Obtaining the audio data analysis result from the audio difference preference information and the audio common preference information then includes: for each piece of audio difference preference information that matches a piece of defined audio difference preference information, mapping out the corresponding first preference document based on the first mapping relationship; for each piece of audio common preference information that matches a piece of defined audio common preference information, mapping out the corresponding second preference document based on the second mapping relationship; and obtaining the audio data analysis result from the mapped first and second preference documents.
In yet another embodiment of the present disclosure, displaying the audio data analysis result on the multi-user audio sharing page includes: determining the first preference documents and/or second preference documents to display preferentially in the audio data analysis result, based on a preset display priority for each piece of audio difference preference information and/or audio common preference information; and displaying the determined first preference documents and/or second preference documents on the multi-user audio sharing page.
In still another embodiment of the present disclosure, obtaining the audio data analysis result from the audio difference preference information and the audio common preference information further includes: for each piece of audio difference preference information that matches no defined audio difference preference information, randomly selecting a corresponding first fallback document from a fallback document library; and for each piece of audio common preference information that matches no defined audio common preference information, randomly selecting a corresponding second fallback document from the fallback document library.
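A minimal sketch of the document mapping with the fallback behavior, assuming the mappings are simple key-to-copy dictionaries; all keys and copy strings are invented for illustration.

```python
import random

# Assumed shapes: defined preference information keyed to display documents.
FIRST_MAPPING = {"late_night_listener": "One of you is a night owl"}      # differences
SECOND_MAPPING = {"shared_genre_folk": "You both can't get enough folk"}  # common
FALLBACK_POOL = ["Two tastes, one playlist", "Music says what words can't"]

def resolve_document(key: str, mapping: dict[str, str]) -> str:
    # Use the mapped preference document when the preference information
    # matches a defined entry; otherwise pick a random fallback document.
    return mapping.get(key) or random.choice(FALLBACK_POOL)

print(resolve_document("late_night_listener", FIRST_MAPPING))
print(resolve_document("unmatched_preference", SECOND_MAPPING))
```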
In yet another embodiment of the present disclosure, the method further comprises: displaying a first display area and a second display area on the multi-user audio sharing page; displaying the multi-person relationship information of the target users in the first display area; and displaying the audio data analysis result in the second display area.
In yet another embodiment of the present disclosure, the target users include a first user and a second user, and the method further comprises: dividing the second display area into a first tile for displaying the audio preference information of the first user, a second tile for displaying the audio preference information of the second user, and a third tile for displaying the audio common preference information; displaying the audio data analysis result in the second display area then includes displaying it across the first, second and third tiles of the second display area.
In yet another embodiment of the present disclosure, each of the first tile, the second tile and the third tile includes a preset number of floating modules, and the method further comprises: determining, according to the number of floating modules in the first/second/third tile respectively, a first display quantity for the first user's audio difference preference information, a second display quantity for the second user's audio difference preference information, and a third display quantity for the audio common preference information. Displaying the audio data analysis result across the three tiles then includes: floating display of the first user's corresponding audio difference preference information in the floating modules of the first tile according to the first display quantity; floating display of the second user's corresponding audio difference preference information in the floating modules of the second tile according to the second display quantity; and floating display of the corresponding audio common preference information in the floating modules of the third tile according to the third display quantity.
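A small sketch of how the display quantities might be derived from the floating-module counts, assuming one piece of preference information per floating module; the capping rule is an assumption.

```python
def display_quantities(modules_per_tile: dict[str, int],
                       available: dict[str, int]) -> dict[str, int]:
    # One piece of preference information per floating module, capped by
    # how many pieces are actually available for that tile.
    return {tile: min(n, available.get(tile, 0))
            for tile, n in modules_per_tile.items()}

print(display_quantities(
    modules_per_tile={"first_tile": 3, "second_tile": 3, "third_tile": 4},
    available={"first_tile": 5, "second_tile": 2, "third_tile": 4},
))
```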
In yet another embodiment of the present disclosure, each floating module includes a fixed field area and a floating display area, and floating display of the first user's audio difference preference information, the second user's audio difference preference information and the audio common preference information in each floating module comprises: fixedly displaying identification information in the fixed field area, the identification information being one of a first user identification, a second user identification, or a common identification shared by the first and second users; and floating display of the first user's audio difference preference information, the second user's audio difference preference information or the audio common preference information in the floating display area.
In yet another embodiment of the present disclosure, the multi-person relationship information includes at least one of: the basic information of the users in the multi-person relationship, the time at which the multi-person relationship was established, and a relationship index for the multi-person relationship.
In yet another embodiment of the present disclosure, the method further comprises: displaying user behavior information of any one of the target users in the first display area, the user behavior information including the most recently liked audio.
In yet another embodiment of the present disclosure, the method further comprises: displaying a multi-person audio recommendation icon on the multi-user audio sharing page; and, in response to a trigger operation on that icon, generating and displaying a multi-person audio recommendation page that contains a recommended playlist for any one of the target users.
In yet another embodiment of the present disclosure, the method further comprises: acquiring the multi-user audio data analysis result of each target user in response to a trigger operation on the multi-user function module in the target audio playing program.
In yet another embodiment of the present disclosure, the method further comprises: displaying a sharing function icon on the multi-user audio sharing page; and, in response to a trigger operation on the sharing function icon, generating an information presentation picture corresponding to the multi-user audio sharing page and pushing it to a target sharing user, the target sharing user being any one of the target users.
In still another embodiment of the present disclosure, the information presentation picture carries multi-person relationship description information and/or link information corresponding to the multi-user audio sharing page, the link information being used to display the multi-user audio sharing page once triggered by a target operation.
In a second aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the multi-user audio data display method of any embodiment of the first aspect above.
In a third aspect of embodiments of the present disclosure, there is provided a multi-user audio data display apparatus, comprising: an acquisition module configured to acquire the multi-user audio data analysis result of each target user, the target users being the users in a multi-person relationship established based on an audio playing program; and a generation and display module configured to generate a multi-user audio sharing page and display the audio data analysis result on it, the analysis result comprising the audio common preference information and the audio difference preference information of the target users.
In one embodiment of the present disclosure, the apparatus further comprises an analysis module configured to respectively acquire the historical audio playing data of each target user and to determine the audio data analysis result of each target user based on that data.
In another embodiment of the disclosure, the analysis module is specifically configured to determine, based on the historical audio data of each target user, at least one piece of audio difference preference information for each target user and at least one piece of audio common preference information between the target users, and to obtain the audio data analysis result from those pieces of difference and common preference information.
In yet another embodiment of the present disclosure, the analysis module includes: a feature extraction unit configured to extract the audio feature information of each target user from the historical audio data; a similarity determination unit configured to compute the preference similarity of the audio feature information between the target users; and a preference information determination unit configured to determine, based on the preference similarity, at least one piece of audio difference preference information for each target user and at least one piece of audio common preference information between the target users.
In still another embodiment of the present disclosure, the feature extraction unit is specifically configured to traverse the target users and obtain the time at which each audio item in the historical audio data was played; determine the period in which each audio item was played according to that time and a preset period mapping table; and determine the audio playing period preference information according to the number of audio items in each period.
In still another embodiment of the present disclosure, the audio feature information includes genre preference information, and the feature extraction unit is specifically configured to traverse the target users to obtain the genre tag of each audio item in the historical audio data, and to determine the genre preference information based on the genre tags.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre tags includes: determining first genre preference information based on the number of audio items under each genre tag.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre tags includes: acquiring the playing time corresponding to the genre tag of each audio item in the historical audio data; determining, from the genre tags and their playing times, the per-period genre tag information for audio played in different periods of the day; and determining second genre preference information based on the per-period genre tag information.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre tags includes: acquiring the target user's historical favorite genre tag; determining the genre tag with the largest number of audio items as the current favorite genre tag; and, when the current favorite genre tag is inconsistent with the historical favorite genre tag, treating it as a newly added genre tag and determining third genre preference information based on that newly added tag.
In yet another embodiment of the present disclosure, the audio feature information includes singer preference information, and the feature extraction unit is specifically configured to traverse the target users, obtain the singer of each audio item in the historical audio data, and determine the singer preference information based on the singers.
In yet another embodiment of the present disclosure, determining the singer preference information based on the singers includes: determining first singer preference information based on the singers whose number of audio items reaches a first preset threshold.
In yet another embodiment of the present disclosure, determining the singer preference information based on the singers includes: acquiring the follower count of the singer of each audio item in the historical audio data; ranking the singers by the number of audio items each singer accounts for, and taking at least one singer ranked above a preset position as a favorite singer; and determining second singer preference information based on the favorite singers whose follower count does not reach a second preset threshold.
In yet another embodiment of the present disclosure, determining the singer preference information based on the singers includes: acquiring the target user's historical favorite singers whose audio has not been played within a preset period; and, when a singer matches one of those historical favorite singers, determining third singer preference information based on the matched singers.
In still another embodiment of the present disclosure, the audio feature information includes audio release period preference information, and the feature extraction unit is specifically configured to traverse the target users to obtain the release time of each audio item in the historical audio data; determine the release period of each audio item based on its release time and a preset release period mapping table; and determine the audio release period preference information according to the number of audio items in each release period.
In still another embodiment of the present disclosure, the audio feature information includes audio popularity preference information, and the feature extraction unit is specifically configured to traverse the target users to obtain the play count of each audio item in the historical audio data; determine the popularity of each audio item based on its play count and a preset audio popularity mapping table; and determine the audio popularity preference information according to the number of audio items at each popularity level.
In still another embodiment of the present disclosure, the audio feature information includes audio language preference information, and the feature extraction unit is specifically configured to traverse the target users, obtain the language tag of each audio item in the historical audio data, and determine the audio language preference information based on the number of audio items under each language tag.
In yet another embodiment of the present disclosure, the apparatus further comprises a mapping acquisition module configured to acquire a pre-established first mapping relationship between each of a plurality of pieces of defined audio difference preference information and a plurality of first preference documents, and a second mapping relationship between each of a plurality of pieces of defined audio common preference information and a plurality of second preference documents; obtaining the audio data analysis result then includes: for each piece of audio difference preference information that matches a piece of defined audio difference preference information, mapping out the corresponding first preference document based on the first mapping relationship; for each piece of audio common preference information that matches a piece of defined audio common preference information, mapping out the corresponding second preference document based on the second mapping relationship; and obtaining the audio data analysis result from the mapped first and second preference documents.
In yet another embodiment of the present disclosure, the generation and display module includes: a document determining unit configured to determine the first preference documents and/or second preference documents to display preferentially in the audio data analysis result, based on a preset display priority for each piece of audio difference preference information and/or audio common preference information; and a document display unit configured to display the determined first preference documents and/or second preference documents on the multi-user audio sharing page.
In still another embodiment of the present disclosure, obtaining the audio data analysis result from the audio difference preference information and the audio common preference information further includes: for each piece of audio difference preference information that matches no defined audio difference preference information, randomly selecting a corresponding first fallback document from a fallback document library; and for each piece of audio common preference information that matches no defined audio common preference information, randomly selecting a corresponding second fallback document from the fallback document library.
In yet another embodiment of the present disclosure, the apparatus further comprises a region display module configured to display a first display area and a second display area on the multi-user audio sharing page, to display the multi-person relationship information of the target users in the first display area, and to display the audio data analysis result in the second display area.
In yet another embodiment of the present disclosure, the target users include a first user and a second user, and the apparatus further comprises a tile dividing module configured to divide the second display area into a first tile for displaying the audio preference information of the first user, a second tile for displaying the audio preference information of the second user, and a third tile for displaying the audio common preference information; the region display module is specifically configured to display the audio data analysis result across the first, second and third tiles of the second display area.
In yet another embodiment of the present disclosure, each of the first tile, the second tile and the third tile includes a preset number of floating modules, and the apparatus further comprises a floating quantity determining module configured to determine, according to the number of floating modules in the first/second/third tile respectively, a first display quantity for the first user's audio difference preference information, a second display quantity for the second user's audio difference preference information, and a third display quantity for the audio common preference information; the region display module is specifically configured for floating display of the first user's corresponding audio difference preference information in the floating modules of the first tile according to the first display quantity, floating display of the second user's corresponding audio difference preference information in the floating modules of the second tile according to the second display quantity, and floating display of the corresponding audio common preference information in the floating modules of the third tile according to the third display quantity.
In yet another embodiment of the present disclosure, each floating module includes a fixed field area and a floating display area; the region display module is specifically configured to fixedly display identification information in the fixed field area, the identification information being one of a first user identification, a second user identification, or a common identification shared by the first and second users, and to float-display the first user's audio difference preference information, the second user's audio difference preference information or the audio common preference information in the floating display area.
In yet another embodiment of the present disclosure, the multi-person relationship information includes at least one of: the basic information of the users in the multi-person relationship, the time at which the multi-person relationship was established, and a relationship index for the multi-person relationship.
In yet another embodiment of the present disclosure, the apparatus further comprises a user behavior display module configured to display user behavior information of any one of the target users in the first display area, the user behavior information including the most recently liked audio.
In yet another embodiment of the present disclosure, the apparatus further comprises a first icon display module configured to display a multi-person audio recommendation icon on the multi-user audio sharing page and, in response to a trigger operation on that icon, to generate and display a multi-person audio recommendation page containing a recommended playlist for any one of the target users.
In yet another embodiment of the present disclosure, the apparatus further comprises a response module configured to acquire the multi-user audio data analysis result of each target user in response to a trigger operation on the multi-user function module in the target audio playing program.
In yet another embodiment of the present disclosure, the apparatus further comprises a second icon display module configured to display a sharing function icon on the multi-user audio sharing page and, in response to a trigger operation on the sharing function icon, to generate an information presentation picture corresponding to the multi-user audio sharing page and push it to a target sharing user, the target sharing user being any one of the target users.
In still another embodiment of the present disclosure, the information presentation picture carries multi-person relationship description information and/or link information corresponding to the multi-user audio sharing page, the link information being used to display the multi-user audio sharing page once triggered by a target operation.
In a fourth aspect of embodiments of the present disclosure, there is provided a computing device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the computing device to perform the multi-user audio data display method of any one of the first aspects above.
According to the multi-user audio data display method, medium, apparatus and computing device provided by the embodiments of the present disclosure, a multi-user audio data analysis result comprising audio common preference information and audio difference preference information is acquired for all users who have established a multi-person relationship, a multi-user audio sharing page is generated, and the analysis result is displayed on that page. The sharing page can thus intuitively reflect the commonalities and differences in the users' audio preferences, improving the interactivity of audio data between users, deepening mutual understanding among the users in the multi-person relationship, and bringing a better user experience.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario according to an embodiment of the present disclosure;
fig. 2 schematically illustrates a flowchart of a multi-user audio data display method according to an embodiment of the disclosure;
FIG. 3 schematically illustrates an interface diagram of an invitation page in an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of another multi-user audio data display method provided in accordance with an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram of yet another multi-user audio data display method provided in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates one of the interface diagrams of a multi-user audio sharing page in an embodiment of the present disclosure;
FIG. 7 schematically illustrates a second interface diagram of a multi-user audio sharing page in an embodiment of the present disclosure;
FIG. 8 schematically illustrates a third interface diagram of a multi-user audio sharing page in an embodiment of the present disclosure;
FIG. 9 schematically illustrates a fourth interface diagram of a multi-user audio sharing page in an embodiment of the present disclosure;
FIG. 10 schematically illustrates a fifth interface diagram of a multi-user audio sharing page in an embodiment of the present disclosure;
FIG. 11 schematically illustrates a flowchart of yet another multi-user audio data display method according to an embodiment of the present disclosure;
FIG. 12 schematically illustrates an interface diagram of an information presentation picture in an embodiment of the present disclosure;
FIG. 13 schematically illustrates a schematic diagram of a program product provided by an embodiment of the present disclosure;
FIG. 14 schematically illustrates a schematic diagram of a multi-user audio data display apparatus provided by an embodiment of the present disclosure;
FIG. 15 schematically illustrates a schematic diagram of a computing device provided by an embodiment of the present disclosure;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present disclosure and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that embodiments of the present disclosure may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software.
According to an embodiment of the disclosure, a multi-user audio data display method, medium, device and computing equipment are provided.
In the embodiments of the present disclosure, the terms involved:
multi-person relationship: meaning that one or more users may establish relationships within an audio program (e.g., a music application), which may include a variety of different relationships such as family, friends, lovers, undefined, etc. Optionally, taking a two-person relationship as an example, after the two-person relationship is established successfully, both parties can enjoy the exclusive audio related two-person rights provided by the platform for the user.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and be provided with corresponding operation entries for the user to select authorization or rejection.
Furthermore, any number of elements in the figures is for illustration and not limitation, and any naming is used for distinction only and not for any limiting sense.
The principles and spirit of the present disclosure are explained in detail below with reference to several representative embodiments thereof.
Summary of The Invention
The inventors have found that music applications generally collect and display a user's song-listening data, so that the user can choose what to listen to next based on the displayed data, or so that corresponding songs can be recommended to the user, thereby improving the user experience.
In the related art, the song-listening record function of a music application may, based on a user's historical listening data, track favorite songs, frequently played songs, followed singers and the like, and analyze the user's genre, artist and content preferences from that history, thereby implementing per-user listening data analysis and display.
In another related art, the multi-user interaction function of a music application may establish a multi-person relationship between multiple users on demand and support interaction between them through a shared page, for example interactive features such as shared decoration, a shared favorite playlist, and listening to songs together.
However, in the related art, both the song-listening record function and the multi-user interaction function revolve around each user's own listening data, and the pages displayed for different users' listening data are independent and do not interact. As a result, the commonalities and differences in the users' audio preferences cannot be reflected intuitively, the interactivity of audio data between users is poor, and the user experience suffers.
In view of the above problems, the present disclosure provides a multi-user audio data display method, medium, apparatus and computing device. By acquiring a multi-user audio data analysis result, comprising audio common preference information and audio difference preference information, for each user who has established a multi-person relationship, generating a multi-user audio sharing page, and displaying the analysis result on that page, the sharing page can intuitively reflect the commonalities and differences in the users' audio preferences, improving the interactivity of audio data between users, deepening mutual understanding among the users in the multi-person relationship, and bringing a better user experience.
Having described the basic principles of the present disclosure, various non-limiting embodiments of the present disclosure are specifically described below.
Application scene overview
An application scenario of the solution provided in the present disclosure is first illustrated with reference to FIG. 1, a schematic view of an application scenario provided by an embodiment of the present disclosure. As shown in FIG. 1, a user opens an audio program, such as a music application, on a first terminal 101. The first terminal 101 sends the server 102 a request for a multi-user audio data analysis result, which may carry the user's multi-person relationship information. After receiving the request, the server 102 returns to the first terminal 101 the analysis result it has obtained or generated for the request, and the first terminal 101 generates a multi-user audio sharing page in the display interface of the music application and displays the analysis result on it. Optionally, the sharing page displays a sharing function icon; in response to the user triggering that icon, the first terminal 101 generates and pushes an information presentation picture corresponding to the sharing page, so that the user can save it or push it to the other users in the multi-person relationship, who can then enter the sharing page for shared interaction.
Optionally, the application scenario may further include a second terminal 103. After receiving the information presentation picture on the second terminal 103, the other users trigger the music application on the second terminal 103 and display the audio sharing page, enabling shared interaction between the users on that page, for example browsing their audio common preference information and audio difference preference information.
Terminals may include, but are not limited to, computers, smart phones, tablet computers, e-book readers, Moving Picture Experts Group Audio Layer III (MP3) players, Moving Picture Experts Group Audio Layer IV (MP4) players, portable computers, in-car computers, wearable devices, desktop computers, set-top boxes, smart televisions, and the like.
In the application scenario, the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), big data, and artificial intelligence platforms.
It should be noted that fig. 1 is only a schematic diagram of an application scenario provided by an embodiment of the present disclosure, and the embodiment of the present disclosure does not limit the devices included in fig. 1 or limit the positional relationship between the devices in fig. 1. For example, in the application scenario shown in fig. 1, a data storage device may be an external memory with respect to the server 102, or an internal memory integrated into the server 102.
Exemplary method
A multi-user audio data display method according to an exemplary embodiment of the present disclosure is described below in conjunction with the application scenario of FIG. 1. It should be noted that this application scenario is shown only for the convenience of understanding the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in this respect; rather, they may be applied to any applicable scenario.
Referring to FIG. 2, which is a flowchart of a multi-user audio data display method according to an embodiment of the disclosure, and taking the first terminal 101 as the execution subject as an example, the method includes step S201 and step S202.
It should be noted that the numbering of step S201 and step S202 in this embodiment is only for convenience of description and implies no fixed order between them; step S201 may be performed synchronously with step S202 or after it.
As shown in FIG. 2, in step S201 a multi-user audio data analysis result is acquired for each target user, where the target users are the users in a multi-person relationship established based on an audio playing program.
In the embodiment of the present disclosure, the multi-user audio data analysis result may be obtained from the server side, or computed by the client itself after data processing (for the specific procedure, see the determination of the audio data analysis result below); this embodiment places no particular limitation on this.
As an example, the multi-person relationship may be established as follows. After completing registration and authorization of the member identity associated with the multi-user function module, a user may invite a friend to establish a multi-person relationship, for example by triggering an invite function key on an invite page. As shown in FIG. 3, the invite page may display a relationship theme chosen according to the number of users in the relationship; a two-person relationship may, for example, display the theme "travelling with double music". The invite page may also display candidate relationship types for the two-person relationship, for example four or more options such as friends, family, lovers, or an undefined relationship. After selecting one of them, the user invites the corresponding user to establish the two-person relationship by triggering the invite function key, and both can then experience the multi-user audio sharing page generation and display functions the audio program provides. Alternatively, a user may establish a multi-person relationship with different users, or several multi-person relationships with different users respectively; this embodiment places no particular limitation on this.
As another example, the multi-user audio data analysis result of each target user may be obtained in response to a trigger operation on the multi-user function module in the target audio playing program. Specifically, once the user's multi-person relationship is established, the analysis result can be reached through the entry of the multi-user function module in the audio program (such as the member center entry on the audio program home page); after that entry is triggered, a two-person space page (i.e., a multi-user audio sharing page) is generated and displayed, and the cross analysis result of the two users' listening data (i.e., the audio data analysis result) is displayed on it.
It will be appreciated that "in response to" indicates that the operation performed depends on a condition or state; when the condition or state it depends on is satisfied, the one or more operations performed may happen in real time or with a set delay. Unless specifically stated, there is no limitation on the order in which multiple such operations are performed.
With continued reference to fig. 2, step S202 generates a multi-user audio sharing page, and displays an audio data analysis result including audio common preference information and audio difference preference information of each target user on the multi-user audio sharing page.
In this embodiment, the audio common preference information and the audio differential preference information may be determined according to respective historical audio data (may be a specific time period, for example, audio data within a week is calculated forward based on a time point when the user triggers the multi-user function module), specifically, taking as an example, song listening data of a two-person relationship, the two-person user respectively approving songs (i.e. approving audio), normally listening songs (i.e. audio with the number of audio plays reaching a specific value), paying attention to artists (i.e. paying attention to singers), normally listening artists (i.e. singers with the number of audio plays reaching a specific value), listening to songs at a specific moment (i.e. audio corresponding to a specific audio play time), browsing page information, performing cross-collision analysis on recent listening song data using rights in a relevant station (e.g. audio play is paid audio), and the like, analyzing different points on the listening behaviors of two persons, and displaying in an audio sharing page. Optionally, for the audio common preference information and the audio difference preference information, the partition display may be performed, for the audio difference preference information of each user, the partition display may be performed as a respective partition display for each user, and for the audio common preference information between users, the display partition of the audio difference preference information of each user may be displayed on two sides of the audio sharing page, and the display partition of the audio common preference information may be displayed in the middle of the display partition of the audio difference preference information of each user, which will not be described herein.
In this embodiment, in addition to the audio data analysis result, the audio sharing page may display multi-person relationship information, which may include basic user information corresponding to the multi-person relationship (such as user nicknames), the relationship establishment time (such as the initial establishment date and the established duration), and a relationship index for the multi-person relationship (which may be determined from the number of interactions).
According to the above technical scheme, the multi-user audio data analysis results of the users who established the multi-person relationship are obtained, including the audio common preference information and the audio difference preference information; a multi-user audio sharing page is generated, and the analysis results are displayed on it. The audio sharing page thus intuitively reflects the users' common points and differences in audio preference, improves user interactivity around audio data, strengthens mutual understanding between the users who established the relationship, and brings a better user experience.
Fig. 4 is a flow chart of another multi-user audio data display method according to an embodiment of the present disclosure. Compared with the method of the above embodiment, the present embodiment obtains and analyzes each user's historical audio data to determine a cross-analysis result including the audio difference preference information and the audio common preference information, so the audio data analysis result can be obtained locally without establishing a connection to a server. Specifically, in addition to the above-described steps S201 and S202, the method may further include the following steps S401 and S402.
For the technical principles of steps S201 and S202, reference may be made to the description of the above embodiments, which is not repeated here. Steps S401 and S402 may precede step S201.
Step S401, historical audio playing data of each target user are respectively obtained.
In this embodiment, the historical audio playing data may be all audio playing data under the user account within a specific time period, including, for example, playing data from in-vehicle, TV, PC, and mobile clients. Data from a recent period, such as the most recent week, has higher freshness and better matches the user's current audio playing preferences.
In this embodiment, taking the case where the target users include a first user at the local client and a second user at a remote client, the historical audio playing data of the first user is obtained from the local audio program, and that of the second user is obtained from the remote client by establishing a communication connection with it.
In this embodiment, for the first user and the second user, the acquisition of historical audio playing data is unaffected by the type of multi-person relationship. In some embodiments, to improve the user's operational flexibility and usage experience, the data types for which historical audio playing data is acquired may instead be determined per multi-person relationship, and only data of those types acquired; the data types may include liked songs, frequently played songs, followed singers, frequently played singers, songs played at special moments, browsed related pages, in-station rights used, and the like.
Step S402, based on the historical audio playing data, determining the audio data analysis result of each target user.
From the obtained historical audio playing data, the audio common preference information between users and each user's audio difference preference information can be determined by calculating a preference similarity for each data type in the historical audio playing data. Alternatively, each data type may be analyzed separately per user: analysis results that are the same across users for a data type are taken as audio common preference information, and results that differ are taken as audio difference preference information. For example, when analyzing playing data about singers, if the preferred singer of each user (e.g., the one with the most plays) is the same singer, that singer is determined as audio common preference information; otherwise, each user's preferred singer is determined as that user's audio difference preference information.
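The per-data-type comparison described above can be sketched as follows (a minimal illustration; the function and field names are hypothetical, and the "top item per data type" rule stands in for whichever preference rule each data type actually uses):

```python
from collections import Counter

def cross_analyze(history_a: dict, history_b: dict) -> dict:
    """Compare each data type's top result between two users.

    history_a / history_b map a data type (e.g. "singer", "style") to a
    list of played items of that type; the most frequent item is taken
    as that user's preference for the type.
    """
    common, diff_a, diff_b = {}, {}, {}
    for data_type in history_a.keys() & history_b.keys():
        if not history_a[data_type] or not history_b[data_type]:
            continue  # no plays of this type for one of the users
        top_a = Counter(history_a[data_type]).most_common(1)[0][0]
        top_b = Counter(history_b[data_type]).most_common(1)[0][0]
        if top_a == top_b:
            common[data_type] = top_a      # same result -> common preference
        else:
            diff_a[data_type] = top_a      # different results -> per-user
            diff_b[data_type] = top_b      # difference preferences
    return {"common": common, "diff_a": diff_a, "diff_b": diff_b}
```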
In some embodiments, step S402 of determining the audio data analysis result of each target user based on the historical audio playing data may include the following steps: determining at least one piece of audio difference preference information for each target user and at least one piece of audio common preference information between the target users based on their historical audio data; and obtaining the audio data analysis result from each piece of audio difference preference information and each piece of audio common preference information.
For example, the plurality of pieces of audio difference preference information and audio common preference information may be determined separately for the different data types of the historical audio data. As a further example, a plurality of analysis dimensions (e.g., total audio play duration in a period, period & style, release era, singer, newly added style, niche audio, language, audio popularity, etc.) may be determined for each data type, and for each dimension the corresponding audio difference preference information and audio common preference information may be determined.
In this way, the audio difference preference information and audio common preference information obtained by the analysis are more comprehensive and accurate.
In some embodiments, determining the at least one piece of audio difference preference information of each target user and the at least one piece of audio common preference information between the target users based on their historical audio data may include the following steps: extracting each target user's audio feature information from the historical audio data, and calculating the preference similarity of the audio feature information between target users; then determining the at least one piece of audio difference preference information of each target user and the at least one piece of audio common preference information between them based on that preference similarity.
Illustratively, the audio feature information may include audio play duration information, audio playing period preference information, music style preference information, singer preference information, audio release era preference information, audio popularity preference information, and audio language preference information. In some examples, other audio features may also be included, such as audio interaction habits (audio likes, comments, page views, etc.), which the present disclosure does not particularly limit.
In some embodiments, the audio feature information includes the audio playing period preference information, and extracting each target user's audio feature information from the historical audio data may include the following steps: traversing each target user to obtain the time information of each audio play in the historical audio data; determining the period in which each play falls according to the time information and a preset period mapping table; and determining the audio playing period preference information according to the number of plays contained in each period.
For example, with the historical audio data taken from the user's audio playing data within one week: for each audio played, the play time information is obtained, the period into which it falls is determined from the period mapping table, the number of plays in each period is counted, and the period with the largest number may be determined as the user's audio playing period preference. An example period mapping table is shown in Table 1 below:
TABLE 1
Period           Time range       Counted duration
Early morning    [5:00-8:00)      3 h
Morning          [8:00-11:00)     3 h
Noon             [11:00-14:00)    3 h
Afternoon        [14:00-17:00)    3 h
Evening          [17:00-20:00)    3 h
Before sleep     [20:00-23:00)    3 h
Late night       [23:00-2:00)     3 h
Small hours      [2:00-5:00)      3 h
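As an illustration of the period-preference rule above, the following sketch maps each play's hour to a Table 1 period and takes the most frequent one (the English period names and the handling of the midnight wrap-around are assumptions for illustration):

```python
from collections import Counter

PERIODS = {  # Table 1: period name -> starting hour of its 3-hour window
    "early morning": 5, "morning": 8, "noon": 11, "afternoon": 14,
    "evening": 17, "before sleep": 20, "small hours": 2,
}

def period_of(hour: int) -> str:
    """Map a play hour to its Table 1 period; [23:00-2:00) wraps midnight."""
    if hour >= 23 or hour < 2:
        return "late night"
    for name, start in PERIODS.items():
        if start <= hour < start + 3:
            return name
    raise ValueError(f"hour out of range: {hour}")

def period_preference(play_hours: list[int]) -> str | None:
    """Period with the most plays in the (e.g. one-week) history."""
    if not play_hours:
        return None
    return Counter(period_of(h) for h in play_hours).most_common(1)[0][0]
```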
As a further example, the preference similarity may in this case be the similarity between the users' audio playing period preference information: by checking whether the periods are the same, the similarity is either one hundred percent or zero. For example, if both users' playing period preference is noon, that period preference is determined as audio common preference information; if the first user's preference is the afternoon period and the second user's is the evening period, the afternoon period is determined as the first user's audio difference preference information and the evening period as the second user's.
It will be appreciated that for the other audio feature information exemplified below, the determination of the corresponding preference similarity, audio common preference information, and audio difference preference information may follow the same approach.
Through this scheme, users in a multi-person relationship can intuitively browse the audio playing period preferences of the users they interact with, helping the multiple users quickly understand each other along the listening-time dimension of audio playback.
As a further example, an overall feature similarity may also be calculated over all of the audio feature information, indicating the overall audio playing similarity between the multiple users; each piece of audio feature information carries a corresponding weight in this calculation. Audio common preference information may then be determined and displayed according to the feature similarity, so users can more intuitively judge how similar the two (or more) parties' listening is.
In some embodiments, the audio feature information includes the music style preference information, and extracting each target user's audio feature information from the historical audio data may include: traversing each target user, obtaining the style tag of each audio in the historical audio data, and determining the music style preference information based on the style tags.
It can be appreciated that, when playing audio, current mainstream audio programs can obtain the style tag corresponding to the audio (such as pop, electronic, rock, etc.) from the music library side. A style tag may include a primary tag and a secondary tag (for example, a "pop" primary tag with a "Chinese pop" secondary tag); the secondary tag is used preferentially when it can be identified, and the primary tag is used otherwise.
In an example of this embodiment, determining the music style preference information based on the style tags includes: determining first style preference information based on the number of audio plays under each style tag.
Specifically, from the per-tag play counts, the first style preference information may be determined as: a style tag whose play count reaches a preset ratio (for example, 50%) of the total plays across all tags and whose cumulative play duration within the week (assuming the historical audio playing data covers one week) reaches a preset duration (for example, not less than 20 minutes); or the style tag with the largest play count; or the top three tags by play count; or by another rule over the play counts. The first style preference information indicates the user's overall music style preference.
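A minimal sketch of this "first style preference" rule follows, assuming one (style tag, play seconds) record per effective play in the week; the 50% ratio and 20-minute floor come from the example above, and the fallback to the most-played tag is one of the alternative rules mentioned:

```python
from collections import Counter

def overall_style_preference(plays: list[tuple[str, int]],
                             ratio: float = 0.5,
                             min_seconds: int = 20 * 60) -> str | None:
    """plays: (style_tag, play_seconds) per effective play in the week."""
    counts = Counter(tag for tag, _ in plays)
    if not counts:
        return None
    total = sum(counts.values())
    durations: Counter = Counter()
    for tag, secs in plays:
        durations[tag] += secs
    # Prefer a tag meeting both the play-share and cumulative-duration rules.
    for tag, n in counts.most_common():
        if n / total >= ratio and durations[tag] >= min_seconds:
            return tag
    # Otherwise fall back to the most-played tag.
    return counts.most_common(1)[0][0]
```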
In another example of this embodiment, determining the music style preference information based on the style tags may include the following steps: obtaining the play time corresponding to the style tag of each audio in the historical audio data; determining, based on the style tags and their play times, the per-period style tag information of audio played during different periods of the day; and determining second style preference information based on the per-period style tag information.
To further promote audio data interaction between multiple users, it is considered that users have different style preferences in different periods of the day. The present disclosure therefore determines user audio difference preference information and/or audio common preference information by extracting the user's style preference information per period. It will be appreciated that the second style preference information in this example indicates the user's style preference in different periods.
The different periods may be determined using the period mapping table illustrated in Table 1 above, or in other manners, which is not particularly limited here.
As a further example, when displaying the audio difference preference information or audio common preference information corresponding to per-period style preferences, the play period in which the user is most active (i.e., with the most effective plays) in a day may be taken; for the style, the secondary tag (provided by the music library side) is taken, falling back to the primary tag if no secondary tag is available; the reported style is the most-played style (secondary tag) within that most active period, with an effective play count of at least X (for example, at least 5 for Chinese pop, and at least 3 for other languages or styles); the data range is the user's data for the last 7 days (taking the historical audio data as 7-day data), refreshed daily at 0:00.
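The "period & style" rule just described might look like the following sketch, where each effective play is reduced to a (period, style tag) pair and the per-style thresholds follow the example values in the text (the "Chinese pop" tag name is an assumption):

```python
from collections import Counter

def period_style_preference(plays: list[tuple[str, str]]) -> tuple[str, str] | None:
    """plays: (period, style_tag) per effective play in the last 7 days."""
    if not plays:
        return None
    # Most active period of the day by effective play count.
    period = Counter(p for p, _ in plays).most_common(1)[0][0]
    # Most-played style tag within that period.
    styles = Counter(tag for p, tag in plays if p == period)
    tag, n = styles.most_common(1)[0]
    threshold = 5 if tag == "Chinese pop" else 3   # the X from the text
    return (period, tag) if n >= threshold else None
```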
In yet another example of this embodiment, determining the music style preference information based on the style tags includes: obtaining the target user's historical favorite style tag; determining the style tag with the largest play count as the current favorite style tag; and when the current favorite style tag is inconsistent with the historical favorite style tag, determining it as a newly added style tag and determining third style preference information based on the newly added style tag.
In practical applications, some users restrict their playback to preferred music styles, so over a longer period (for example, half a year) only those styles appear in their play data. Over time, however, a user's style preference may change, and analyzing the user's newly added style (i.e., the newly unlocked style) is significant for improving interactivity between users.
The historical favorite style tag may be the favorite style tag determined from the audio playing data of the past half year. In some examples, the favorite style tag may be the tag with the largest play count, or the tag with the longest cumulative play duration.
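A sketch of the newly added ("unlocked") style rule, assuming the recent window's style tags and the set of tags seen over the past half year are already available:

```python
from collections import Counter

def newly_unlocked_style(recent_tags: list[str],
                         half_year_tags: set[str]) -> str | None:
    """The top style tag of the last 7 days is reported as 'newly unlocked'
    only if it never appears in the past six months of play data."""
    if not recent_tags:
        return None
    top = Counter(recent_tags).most_common(1)[0][0]
    return top if top not in half_year_tags else None
```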
Through this scheme, users in a multi-person relationship can intuitively browse each other's music style preferences, helping the multiple users quickly understand each other along the style dimension of audio playback.
In some embodiments, the audio feature information includes the singer preference information, and extracting each target user's audio feature information from the historical audio data may include: traversing each target user, obtaining the singer of each audio in the historical audio data, and determining the singer preference information based on the singers.
In this embodiment, the singer preference information may be determined from the singer with the most audio plays; in some examples, it may also be determined from the singer's follow information; in other examples, it may be determined jointly from the play amount and the follow information, e.g., via a predetermined coefficient for the follow information and a coefficient for the play amount (those skilled in the art may determine the coefficients from prior data). Similarly, when extracting other audio feature information, the user's liked audio may be taken into account: for example, when determining the music style preference information, whether the audio under a style tag is liked audio may be identified, and if so, that audio may be weighted by a corresponding coefficient.
It can be appreciated that "singer" in the present disclosure is a general concept: it may refer to a singer, an artist, or any singing user, and may also refer to the creator of the audio. For example, if the audio is pure music, the singer refers to the performer; if the audio is speech, the singer is the speaker; if the audio is a recording of natural sounds (rain, sea waves, animal sounds, etc.), the singer is the recorder, and so on.
In an example of this embodiment, determining the singer preference information based on the singers includes: determining first singer preference information, according to the number of audios per singer, based on the singer whose audio count reaches a first preset threshold.
It should be noted that those skilled in the art may adapt the first preset threshold to the practical application; for example, with a total of 200 audios and a first preset threshold of 90, if more than two singers reach 90, the singer with the largest count is selected for the first singer preference information.
In another example of this embodiment, determining the singer preference information based on the singers includes: obtaining the follower count of each singer in the historical audio data; ranking the singers by the number of audios each contains, and determining at least one singer ranked before a preset position as a favorite singer; and determining second singer preference information based on the favorite singers whose follower count does not reach a second preset threshold.
It should be noted that those skilled in the art may adapt the preset position to the practical application, for example determining it as the top five ranked singers (i.e., the five singers with the largest play amounts), and determining one or more of them as favorite singers.
Optionally, the follower count (for example, the number of fans) of each favorite singer is obtained, and favorite singers whose follower count does not reach a second preset threshold (for example, at most 10,000 fans; the threshold may be adapted) are identified. If such favorite singers exist, the second singer preference information is determined based on them; it may be used to indicate that the user prefers to dig up singers with fewer fans. If multiple favorite singers fall below the second preset threshold, the one with the most audios is determined as the dig-up singer; if none do, the process may end and other examples may be used to determine the singer preference information.
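The dig-up singer selection can be sketched as below; the 10,000-follower threshold follows the example above, and `followers` is a hypothetical lookup of each singer's fan count:

```python
from collections import Counter

def dig_up_singer(plays: list[str], followers: dict[str, int],
                  max_fans: int = 10_000) -> str | None:
    """Among the top-3 singers by effective plays in the last 7 days,
    return the most-played one whose follower count is at most max_fans."""
    for singer, _ in Counter(plays).most_common(3):  # ordered by play count
        if followers.get(singer, 0) <= max_fans:
            return singer
    return None
```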
In yet another example of this embodiment, determining the singer preference information based on the singers includes: obtaining historical favorite singers of the target user whose audio has not been played within a preset period; and, when a current singer matches such a historical favorite singer, determining third singer preference information based on the matching singer.
Through this scheme, users in a multi-person relationship can intuitively see each other's singer preferences, helping the multiple users quickly understand each other along the singer dimension of audio playback.
In some embodiments, the audio feature information includes the audio release era preference information, and extracting each target user's audio feature information from the historical audio data may include the following steps: traversing each target user to obtain the release time information of each audio in the historical audio data; determining each audio's release era information based on its release time information and a preset release era mapping table; and determining the audio release era preference information according to the number of audios contained in each release era.
It will be appreciated that the release era information is the period (e.g., the year) corresponding to the audio's release. Illustratively, the release era mapping table is shown in Table 2 below.
TABLE 2
It will be appreciated that the hit conditions in Table 2 above determine the selectable conditions for the release era information corresponding to the audio release era preference information (i.e., the possible instances of the audio release era preference information determined from the number of audios per release era).
As a further example, the audio release era preference information may be analyzed from the play data as the ratio of recently released audio (for example, the sub-era corresponding to the most recent month within the latest era, such as 2020 onward, in the mapping table above) among all audio, for example reaching 50%; or from the user repeatedly listening (for example, 2 or more times) to the same newly released song.
Through this scheme, users in a multi-person relationship can intuitively see each other's audio era preferences, helping the multiple users quickly understand each other along the era dimension of audio playback.
In some embodiments, the audio feature information includes the audio popularity preference information, and extracting each target user's audio feature information from the historical audio data may include the following steps: traversing each target user to obtain the play count of each audio in the historical audio data; determining each audio's popularity information based on its play count and a preset audio popularity mapping table; and determining the audio popularity preference information according to the number of audios contained in each popularity level.
In this embodiment, the popularity information may be a popularity level, and the audio popularity mapping table may be as shown in Table 3 below. It will be appreciated that in other examples the mapping table may be adapted to the practical application, which the present disclosure does not particularly limit.
TABLE 3
It will be appreciated that the play count in Table 3 above may be the number of effective plays within 7 days (determined by the time range of the historical audio playing data); for example, a play reaching 30 s is counted as an effective play, while a play under 30 s is not. This definition of play count applies to every example of the present disclosure.
In this embodiment, audio whose popularity level reaches level C may be determined as hot audio. Taking song listening as an example, the audio popularity preference information may be determined from whether the play share of hot music (i.e., hot audio) in the songs the user listened to within 7 days exceeds 30%; that is, a hot-music share above 30% is used for determining and displaying audio difference preference information or audio common preference information.
As a further example, audio whose popularity level does not reach level C may be determined as niche audio. Again taking the songs listened to within the last 7 days, a niche-music (i.e., niche audio) play share above 30% is used for determining and displaying audio difference preference information or audio common preference information. In some embodiments, hot or niche audio may be identified not only by play count but also by the number of likes and/or comments; for example, niche audio may be audio whose total likes and comments number under 10,000.
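A sketch of the effective-play definition and the 30% share rule for hot versus niche audio follows (the returned labels are placeholders, not the actual presentation captions):

```python
def effective_plays(play_seconds: list[int], min_seconds: int = 30) -> int:
    """A play counts only when it lasted at least 30 seconds."""
    return sum(1 for s in play_seconds if s >= min_seconds)

def popularity_preference(hot_flags: list[bool], ratio: float = 0.3) -> str | None:
    """hot_flags: one flag per effective play in the last 7 days,
    True for hot audio (level C or above), False for niche audio."""
    if not hot_flags:
        return None
    hot = sum(hot_flags) / len(hot_flags)
    niche = 1 - hot
    if max(hot, niche) <= ratio:
        return None    # neither share exceeds 30%: no preference reported
    return "hot-audio listener" if hot >= niche else "niche-audio listener"
```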
Through this scheme, users in a multi-person relationship can intuitively browse each other's audio popularity preferences (such as preferring hot songs or niche songs), helping the multiple users quickly understand each other along the popularity dimension of audio playback.
In some embodiments, the audio feature information includes the audio language preference information, and extracting each target user's audio feature information from the historical audio data may include: traversing each target user and obtaining the language tag of each audio in the historical audio data, so as to determine the audio language preference information based on the number of audios contained under each language tag.
In this embodiment, the language tags may be identified on the music library side, for example by language models identifying each audio's language tag. When determining the audio language preference information, the languages corresponding to multiple language tags may be determined at the same time; the determination may use the audio-count ranking approach described above, which is not repeated here.
Through this scheme, users in a multi-person relationship can intuitively browse each other's audio language preferences (such as Mandarin, Cantonese, etc.), helping the multiple users quickly understand each other along the language dimension of audio playback.
In some embodiments, the audio feature information may further include other information extracted by those skilled in the art according to the practical application, for example audio cumulative play duration preference information, which may be determined by traversing each target user and summing the play duration of each audio in the historical audio playing data. Generally, the probability that two users' total play durations are identical is small, so the total play duration information can be determined as audio difference preference information and displayed per user, letting users quickly learn each other's listening durations. As another example, audio single-play duration preference information may be determined from the play data with the longest single play, which is not repeated here.
The extraction of the audio feature information has been described above with reference to exemplary embodiments; the audio data analysis results of the present disclosure are further described below. In some embodiments, the analyzed content is mapped into preference captions, making it convenient for users to browse and increasing the fun of the two-person relationship.
In this embodiment, the method provided by the present disclosure further includes: obtaining a pre-established first mapping relation between a plurality of pieces of defined audio difference preference information and a plurality of first preference captions, and a second mapping relation between a plurality of pieces of defined audio common preference information and a plurality of second preference captions. Obtaining the audio data analysis result based on each piece of audio difference preference information and each piece of audio common preference information may then include the following steps: for each piece of audio difference preference information that matches the corresponding defined audio difference preference information, mapping it to the corresponding first preference caption via the first mapping relation; for each piece of audio common preference information that matches the corresponding defined audio common preference information, mapping it to the corresponding second preference caption via the second mapping relation; and obtaining the audio data analysis result from the mapped first and second preference captions.
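A minimal sketch of this caption mapping follows; the mapping table contents and the keying by (dimension, value) pairs are assumptions for illustration, with the random template choice matching the behavior described in the examples below:

```python
import random

# Hypothetical first mapping relation:
# defined difference preference -> candidate caption templates.
FIRST_MAPPING: dict[tuple[str, str], list[str]] = {
    ("style", "rock"): ["{user} loves listening to rock", "{user} cycles rock"],
}

def map_caption(preference: tuple[str, str], user: str) -> str | None:
    """When an extracted preference matches a defined entry, pick one of
    its templates at random; return None when nothing hits."""
    templates = FIRST_MAPPING.get(preference)
    return random.choice(templates).format(user=user) if templates else None
```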
It will be appreciated that the defined audio difference preference information may be some of the audio difference preference information described above, or may cover all of it; the same holds for the defined audio common preference information.
For mapping audio difference preference information to its first preference caption, the following implementations may be exemplified:
As one example, the audio difference preference information is determined based on the music style preference information:
For the second style preference information (i.e., the preferred style per period, "period & style"), the presentation templates may be (on a hit, the server may randomly match one template from several candidates according to the mapping relation, i.e., the preference caption): "XX loves listening to XXXX", "XX cycles XXXX", "XX often listens to XXXX"; for example "loves listening to XX in the afternoon" or "cycles fashion rock in the early morning". Optionally, the user's most active play period (most effective plays) of the day is taken; the style's secondary tag (provided by the music library side) is used, falling back to the primary tag if unavailable; the reported style is the most-played style (secondary tag) within that most active period, with an effective play count of at least X (for example, at least 5 for Chinese pop and at least 3 for other languages or styles); the data range is the user's last 7 days, refreshed daily at 0:00. The specific period ranges may be as in Table 1 above and are not repeated here;
For the third style preference information described above, i.e., the newly unlocked style: the presentation template (randomly matched by the server on a hit) is "unlocked new style XXX", for example "unlocked new style skate punk". Optionally, the secondary style tags effectively played by the user in the last 7 days are taken, filtered to those never listened to in the past 6 months, and the top-1 tag by play amount among them is displayed; the data range is the last 7 days, refreshed daily at 0:00.
The first style preference information is the overall favorite-style judgment, i.e., a style analysis of the recently listened songs. Optionally, the trigger logic counts the songs effectively played in the last 7 days and checks whether the conditions below are met; if several styles qualify, one style tag is returned at random, and nothing is displayed if none hit. The data range is the user's last 7 days, refreshed daily at 0:00. The specific case mapping is shown in Table 4 below:
TABLE 4
As another example, the audio difference preference information is determined based on the audio release era preference information, i.e., which era's songs the user likes to listen to: the presentation template (randomly matched by the server from several candidates on a hit) may be "xxx-era fan", "xxx-era devotee", "xxx-era lover", "xxx-era cycler"; for example "1990s lover". Optionally, among the songs effectively played in the last 7 days, the top-1 release era by plays is taken, with a random choice if several eras tie; the data range is the user's last 7 days, refreshed daily at 0:00.
In addition, for the audio release era, a new-song-listening judgment may also be made: the presentation template is "new-song taster", triggered when, among the songs listened to in the last 7 days, the user repeatedly listened (2 or more times) to a recently released song (released within the last month); data refreshed daily at 0:00.
As yet another example, the audio difference preference information is determined based on the singer preference information, i.e., which artists' songs the user likes to listen to. Three variants may be included, with existence priority: revisited artist (corresponding to the third singer preference information) > dig-up artist (corresponding to the second singer preference information) > favorite artist (the first singer preference information); data refreshed daily at 0:00. An exemplary strategy may be as follows:
Revisited artist: the presentation template may be "revisiting XXX", such as "revisiting artist A"; the rule takes artists effectively played in the last 7 days (t-7) who were not listened to in the prior two weeks (t-7, t-21) but were among the top-20 by play amount in the prior two months (t-21, t-60);
Dig-up artist: the presentation template may be "digging up treasure XXX", such as "digging up treasure artist B"; among the top-3 artists by effective play amount in the last 7 days, those with at most 10,000 followers are taken, choosing the highest by play count;
Favorite artist: the presentation template (randomly matched by the server on a hit) is "favorite XXX" or "XX follower" (output at random), such as "favorite Wangfei" or "Wangfei follower"; the rule takes the artist with the largest effective play amount in the last 7 days, whose songs account for over 20% of effective plays in that window and were played at least 10 times.
As yet another example, the audio difference preference information is determined based on the audio popularity preference information. For hot audio, the presentation template (randomly matched by the server on a hit) may be "hot-list listener", "trending-song fan", "trending-music fan", or "hot-trend follower". Optionally, the trigger is a hot-music play share above 30% in the songs the user listened to within the last 7 days; a hot song is defined on the data warehouse side as one whose play level reaches level C or above, with the specific levels as in Table 3 above, not repeated here.
Or, for niche audio, the presentation template (randomly matched by the server on a hit) may be "independent music digger", "niche-music connoisseur", or "music explorer". The trigger is a niche-music play share above 30% in the songs listened to within the last 7 days, where niche music is defined as a single song whose total red hearts (likes) and comments number under 10,000; the data range is the user's last 7 days, refreshed daily at 0:00.
As yet another example, the audio difference preference information is determined based on the language preference information. The language judgment analyzes the languages in the recently listened songs: the trigger logic counts the songs effectively played in the last 7 days and checks whether the conditions below are met; if several languages qualify, one language's presentation template is returned at random, and nothing is displayed if none hit. The data range is the user's last 7 days, refreshed daily at 0:00. The mapping between languages and captions may be as shown in Table 6 below:
TABLE 6
It will be appreciated that the hit conditions determine the selectable conditions for the corresponding language preference information (i.e., the possible instances of the audio language preference information determined from the number of audios under each language tag).
As yet another example, the audio difference preference information is determined based on the audio play duration information; the caption mapping table corresponding to its presentation templates is shown in Table 7:
TABLE 7
For mapping audio common preference information to its second preference caption, the following implementations may be exemplified:
As an example, the feature similarity (which may be calculated from the users' respective audio feature information) is displayed as audio common preference information. The feature similarity may be obtained by the server from the algorithm interface of a two-person song-listening similarity algorithm, and the rule processing may be as follows.
The algorithm side returns the real song-listening similarity score, and rules are applied on that basis:
Model similarity in (0%, 5%]: displayed similarity = 5%
Model similarity X in (5%, 20%]: displayed similarity = X × 1.5
Model similarity X in (20%, 50%]: displayed similarity = X × 1.3
Model similarity X in (50%, 75%]: displayed similarity = X × 1.1
Model similarity X in (75%, 90%]: displayed similarity = X
The calculation above yields an integer, dropping the first decimal place; the coefficients 1.5, 1.3, and 1.1 apply to their respective similarity ranges. Illustratively, the presentation template may be [two-person song-listening similarity X].
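These display rules can be sketched as a small piecewise function; truncation toward zero is assumed for "dropping the decimal place", and scores above 90% return nothing, per the note below that out-of-range similarities are not shown:

```python
def displayed_similarity(model_score: float) -> int | None:
    """model_score: the algorithm's similarity in [0.0, 1.0]."""
    x = model_score * 100                  # work in percentage points
    if x <= 0 or x > 90:
        return None                        # outside the defined ranges: not shown
    if x <= 5:
        return 5                           # (0%, 5%] floors at 5%
    for upper, factor in ((20, 1.5), (50, 1.3), (75, 1.1), (90, 1.0)):
        if x <= upper:
            return int(x * factor)         # truncate: drop the decimal part
    return None                            # unreachable
```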
Optionally, different levels of similarity may use different level-specific templates: when a similarity range is hit, the corresponding title is returned; if the similarity is too low and falls outside the defined ranges, nothing is returned.
As another example, the display may be based on the recorded duration of songs the two users listened to together over the last week, with the presentation template: "listened together for XX minutes in the last week"; the special-case logic is that nothing is displayed if there is no shared listening data.
As yet another example, per the most-active-period rule, the single broad period type with the most effective plays may be taken for each of the two users; XX in the caption corresponds to the statistical period. The presentation template may be [both like listening to songs in XX]. Optionally, the data warehouse updates daily, taking the user's last 7 days of data; the page updates daily at 0:00.
As yet another example, the audio common preference information may be determined from a commonly liked style, such as "both love exploring rock songs". The rule takes the intersection of the two users' top favorite styles over the last 7 days; the secondary style tag is preferred, falling back to the primary tag if unavailable. The presentation template may be [both like XX], where XX is the style. Optionally, statistics update daily, taking the user's last 7 days of data; the page updates daily at 0:00.
As yet another example, the determination may be based on a commonly liked artist, such as "artist XX is our shared cycle". The processing may be: the data warehouse returns each user's most-played artists (song effective-play share over 20% within the last 7 days and at least 10 plays); the top-3 artists of each user are passed to the server, and the server checks the returned data for a shared artist, returning it if present. The presentation template may be [both like XX], where XX is the artist's name. Optionally, statistics update daily; the data range is the user's last 7 days.
As yet another example, regarding the audio difference preference information above: if the users' respective preference captions coincide, the content is presented as audio common preference information instead. For example, the first preference captions of each user are checked daily; if a day's first preference captions are duplicated across users, the duplicates are deleted and the content is displayed as a second preference caption, no longer shown per user. Normally the preference similarity determination already separates out the audio difference preference information, but to further exclude such display cases this example provides an optimization.
For the above examples, the priorities may be set as: duplicated captions shown in common > the level template corresponding to the similarity > [both like listening to songs in XX] by common active period > "listened together for XX in the last week". If all hit, the first three pieces of data may be displayed preferentially; if fewer than three are available, fallback captions may be shown, picked at random without repetition and refreshed daily at 0:00. The fallback captions are random neutral titles, for example [relationship explorer], [Yun Cun roamer], [music idealist], [daytime thinker], [interpersonal adventurer], [Utopian realist], [moonlit music philosopher], [dopamine perceiver], [music energizer], [elegant listener]. It will be appreciated that the fallback captions may be any other neutral phrases; the above is only an example.
In some embodiments, for certain audio common preference information or audio difference preference information, only some items may be mapped; for example, since the feature similarity among the audio common preference information is displayed directly as a similarity value, it need not be mapped into a caption.
In this embodiment, displaying the audio data analysis result on the multi-user audio sharing page may include the following steps: determining, based on preset display priorities corresponding to the pieces of audio difference preference information and/or audio common preference information, the first preference captions and/or second preference captions to be displayed preferentially in the audio data analysis result; and displaying the determined first and/or second preference captions on the multi-user audio sharing page.
Illustratively, combining the audio feature information of the above embodiments, the display priority may be: weekly listening duration (cumulative audio play duration) > period & style > release era > artist > new style unlock > niche music > single-play duration > new-song listening > hot-song listening > favorite style. In other examples, those skilled in the art may determine the display priority otherwise, for example in line with a specific user's habits; the present disclosure does not particularly limit how the display priority is determined.
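A sketch of priority-based caption selection follows; the dimension names mirror the example priority order above, and the limit of three captions is an assumption matching the three floating modules per section illustrated in fig. 6:

```python
# Assumed priority order, highest first, following the example in the text.
PRIORITY = ["weekly listening duration", "period & style", "release era",
            "artist", "new style unlock", "niche music", "single-play duration",
            "new-song listening", "hot-song listening", "favorite style"]
RANK = {name: i for i, name in enumerate(PRIORITY)}

def pick_captions(hits: dict[str, str], limit: int = 3) -> list[str]:
    """hits maps a preference dimension to its mapped caption; the
    `limit` highest-priority captions are selected for display."""
    ordered = sorted(hits, key=lambda dim: RANK.get(dim, len(PRIORITY)))
    return [hits[dim] for dim in ordered[:limit]]
```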
In some embodiments, obtaining the audio data analysis result from each piece of audio difference preference information and each piece of audio common preference information may further include the following steps: for each piece of audio difference preference information that does not match any corresponding defined audio difference preference information, randomly selecting a corresponding first fallback caption from a fallback caption library; and for each piece of audio common preference information that does not match any corresponding defined audio common preference information, randomly selecting a corresponding second fallback caption from the fallback caption library.
In this embodiment, the first and second fallback captions may be the same or different, which the present disclosure does not particularly limit. As mentioned in the examples above, a fallback caption is a neutral phrase describing the audio difference preference information or audio common preference information, i.e., a caption selected at random when no corresponding preference caption hits, such as [relationship explorer], [Yun Cun roamer], [music idealist], [daytime thinker], and the like.
Fig. 5 is a flow chart of another multi-user audio data display method according to an embodiment of the present disclosure, which is more flexible than the method of the above embodiment. To further improve interactivity among multiple users, the multi-user audio sharing page is divided into a first display area and a second display area: the multi-person relationship information (or other information) is displayed in the first display area, and the audio data analysis result in the second. Specifically, as shown in fig. 5, in addition to steps S201 and S202, the method may further include the following steps S501 and S502.
Step S501, displaying a first display area and a second display area on the multi-user audio sharing page.
For example, the first display area may be displayed above the second display area. In some embodiments, it may instead be displayed below, to the left of, or to the right of the second display area; the present disclosure does not particularly limit the specific positions of the two areas.
Step S502, displaying the multi-person relationship information between the target users in the first display area, and displaying the audio data analysis result in the second display area.
Illustratively, taking a two-person relationship as an example, the multi-user audio sharing page is shown in fig. 6 and includes a first display area 100, in which the multi-person relationship information between the users is displayed, and a second display area 200. In this embodiment, the multi-person relationship information may include any one or combination of the following: basic user information corresponding to the multi-person relationship, the relationship establishment time, and the relationship index of the multi-person relationship.
It will be appreciated that the basic user information may include user nicknames; the "Americano no ice & Iced Americano" illustrated in fig. 6 are the two users' nicknames. The "314 days" illustrated in fig. 6 is the relationship establishment time; as a further example, the initial establishment date may also be displayed. The "520" illustrated in fig. 6 is the relationship index, which may be determined from the interaction count and duration between the users, such as the number and duration of songs listened to together, the number of comments between the users, or gifts exchanged. In some embodiments, other information may be included beyond the above, for example the specific relationship type established: the "undefined relationship established" illustrated in fig. 6, i.e., the specific relationship type, may be displayed together with the establishment time. In some embodiments, character images of the two users may also be included, i.e., the two small figures illustrated in fig. 6, each representing one of the users; in some examples, the figures may be further decorated over the course of the two users' interaction, and so on.
In some embodiments, the method may further include the following step: displaying, in the first display area, user behavior information of either target user, the user behavior information including the liked audio with the latest timestamp.
In this embodiment, besides the most recently liked audio, the user behavior information may include the user's most recent comment, or other interactions with the audio program, such as changing the audio program's icon. As a further example, continuing with fig. 6, the multi-person relationship information may be displayed in a first sub-display area 110 of the first display area, and the user behavior information in a second sub-display area 120. A jump link may also be displayed at each user behavior information location, so that the other user of the relationship (or the user themselves) can listen to the song or view details via the link, enhancing interaction between users.
It will be appreciated that the latest timestamp means the latest timestamp within a specified window, e.g., the 2 days preceding generation of the multi-user audio sharing page. If there is no user behavior information within those 2 days, the second sub-display area 120 need not display it, as can be seen in fig. 7. Further, when no user behavior information exists within the window, to promote interaction between the user and the audio program, text noting the recent absence of user activity can be displayed in the second sub-display area, while audio or pages to browse are randomly recommended to the user.
In some embodiments, continuing with fig. 6, the target users include a first user and a second user, and the method may further include: dividing the second display area 200 into a first section 210 for displaying the first user's audio difference preference information, a second section 220 for displaying the second user's audio difference preference information, and a third section 230 for displaying the audio common preference information. Step S502 of displaying the audio data analysis result in the second display area may then include: displaying the audio data analysis result across the first, second, and third sections of the second display area.
For example, the target users may further include a third user, a fourth user, or more; a corresponding number of fourth, fifth, etc. sections are then added, while the third section displaying the audio common preference information is unchanged.
It should be noted that the sizes (i.e., display ranges) of the sections, such as the first, second, and third sections, need not be fixed and may be flexibly adjusted, for example adaptively according to the number of preference captions in each section.
Through this scheme of dividing distinct sections, a user browsing the audio sharing page can quickly locate the users' difference preferences and common preferences, improving the user experience.
In this embodiment, to help the user grasp each piece of difference or common preference information more intuitively, floating modules are used to display each piece of preference information (difference and common alike) in isolation, further improving the user experience. Specifically, each of the first section 210, second section 220, and third section 230 contains a preset number of floating modules; the number of floating modules in the first/second/third sections respectively determines a first display quantity for the first user's audio difference preference information, a second display quantity for the second user's audio difference preference information, and a third display quantity for the audio common preference information. Displaying the audio data analysis result in the first, second, and third sections may then include the following steps: floating-displaying, per the first display quantity, the first user's corresponding audio difference preference information in the floating modules of the first section; floating-displaying, per the second display quantity, the second user's corresponding audio difference preference information in the floating modules of the second section; and floating-displaying, per the third display quantity, the corresponding audio common preference information in the floating modules of the third section.
Illustratively, the floating module may be a sphere for presenting preference information (e.g., the preference documents mentioned above), i.e., a keyword presentation sphere; in some examples, the floating module may also take other shapes. The number of floating modules in each section may be the same or different, and may be determined adaptively. Since the size of a given terminal interface is fixed, the content that can be displayed is limited; the analysis of a user's listening preferences may involve n preference dimensions (the dimensions corresponding to the audio feature information), and when n is too large (for example, 10 or more) and the preference documents corresponding to all n dimensions are displayed on the page at the same time, the fonts displayed inside each sphere become small. To preserve the user's browsing experience, the number of floating modules therefore should not be too large; for example, the number of floating modules displayed in each section may be less than or equal to 4. As illustrated in fig. 6, each section contains 3 floating modules for preference information presentation.
Further by way of example, the area of each floating module may be the same or different. For example, the area of a floating module for displaying audio common preference information may be larger than that of a floating module for displaying audio difference preference information, so that the user is guided to browse the shared audio playing preferences first. As a further example, each floating module may dynamically change size (growing or shrinking) according to the content of its preference information: the higher the listening similarity (i.e., the feature similarity above) and thus the listening similarity percentage, the larger the area of the floating module, with the area reaching its maximum at 100% similarity and its minimum at 0%. In addition, the area of a floating module may be determined by the display priority of its preference information, with a higher priority yielding a larger area and a lower priority a smaller one. Further, the floating modules may be distinguished by color; for example, the floating modules within each of the first, second, and third sections may share one color system, while floating modules in different sections use different color systems. This facilitates the user's browsing of the preference information and further improves user experience.
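As a purely illustrative sketch of the dynamic sizing just described, the similarity-to-area relation might be modeled as follows (linear interpolation and the concrete minimum and maximum areas are assumptions; the disclosure only fixes the endpoints, smallest at 0% and largest at 100%):

```python
def floating_module_area(similarity_percent, min_area=40.0, max_area=120.0):
    """Map a listening-similarity percentage (0-100) to a sphere area.

    0% similarity yields min_area (the smallest sphere) and 100% yields
    max_area (the largest); linear interpolation in between is an assumption.
    """
    similarity_percent = max(0.0, min(100.0, similarity_percent))
    return min_area + (max_area - min_area) * similarity_percent / 100.0

print(floating_module_area(0))    # 40.0 -> smallest floating module
print(floating_module_area(100))  # 120.0 -> largest floating module
```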
It can be understood that the floating display follows the preference information: new preference information is analyzed as the historical audio data is updated, the historical audio data may be updated daily, and the floating modules floatingly display the newly determined preference information according to the update time. The floating display may be refreshed either by replacing the currently displayed floating module with a new floating module, or by replacing the currently displayed preference information within the floating module with the new preference information.
In an example in which the preference information within the floating module is replaced, each floating module may include a fixed field area a and a floating presentation area b. In this case, floatingly displaying the audio difference preference information of the first user, the audio difference preference information of the second user, and the audio common preference information in each floating module may include the following steps: fixedly presenting identification information in the fixed field area, the identification information comprising one of the following identifications: a first user identification, a second user identification, or a common identification between the first user and the second user; and floatingly displaying the audio difference preference information of the first user, the audio difference preference information of the second user, or the audio common preference information in the floating presentation area.
The first user identification and the second user identification are short names of the respective users, and the common identification is a short name representing all of the users together. In an example in which only two users' preference information needs to be displayed, the first user identification and the second user identification may be "me" and "TA"; in an example in which the preference information of more than two users needs to be displayed, the first user identification may be "me", and the identification of each other user may be determined in the form of a label or a user-defined nickname. Whether in a two-person relationship or in a multi-person relationship of more than two persons, the common identification may be "we", or simply "everyone" or the like, to represent the common preference information.
By way of example, continuing to refer to fig. 6 and taking a two-person relationship as an example, a nine-grid display mode for duo preference information is adopted, in which the spheres in each column are arranged sequentially from top to bottom: the audio difference preference information of the first user corresponds to the three leftmost spheres, with the sphere in the first row displaying the field "me"; the audio common preference information between the first user and the second user corresponds to the three middle spheres, with the sphere in the first row displaying the field "we"; and the audio difference preference information of the second user corresponds to the three rightmost spheres, with the sphere in the first row displaying the field "TA".
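A minimal data-structure sketch of this nine-grid arrangement, including the fixed field area a / floating presentation area b split of each floating module, might look as follows (the FloatingModule class, all field names, and the sample keywords are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class FloatingModule:
    fixed_field: str                               # fixed field area a: "me", "we", or "TA"
    keywords: list = field(default_factory=list)   # floating presentation area b

def build_nine_grid(me_prefs, common_prefs, ta_prefs):
    """Arrange three columns of three spheres each: me / we / TA, top to bottom."""
    columns = {"me": me_prefs, "we": common_prefs, "TA": ta_prefs}
    return [[FloatingModule(label, [kw]) for kw in prefs[:3]]
            for label, prefs in columns.items()]

grid = build_nine_grid(
    ["night owl", "rock fan", "loyal listener"],
    ["listening similarity 80%", "relation explorer", "same favorite era"],
    ["early bird", "pop fan", "new genre explorer"],
)
for column in grid:
    print([f"{m.fixed_field}: {m.keywords[0]}" for m in column])
```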
In practical applications, some users in a multi-person relationship may be inactive, so that no preference document can be displayed; in particular, at the stage when a two-person relationship has just been established, both parties may be inactive. With reference to figs. 8 to 10, the floating modules may then be displayed as follows:
For example, if no keyword is returned for a sphere, a [to be mined] document may be shown (for example, when there is no listening data at all), and the sphere's pattern is distinguished from that of a sphere for which a keyword was returned. For example, the outline of a sphere with a returned keyword (including a fallback document) may be a solid line, while the outline of a sphere with no returned keyword may be a dotted line; or the fill color of a sphere with a returned keyword may be white (a lit state), while the fill of a sphere with no returned keyword is reduced in transparency to a gray pattern.
In another example, if one party is active with data (e.g., three keywords are returned) while the other party is inactive (no keyword hits), the display may be as follows: the active party's spheres are displayed normally (filled with the three keywords), while the inactive party's spheres display fallback documents, the three sphere keywords being [Cloud Village diver] [to be mined] [to be mined], where [Cloud Village diver] is in the lit state and [to be mined] may be in the gray state. Alternatively, if the analysis of the historical audio data shows that the inactive party is the local user, a description document may be presented, for example [Duo matching data is loading, go listen to some songs to improve your match].
In yet another example, if both parties are inactive (no keyword hits), the display may be as follows: the three [me] keywords are [Cloud Village diver] [to be mined] [to be mined]; the three [we] keywords are [listening similarity] [relation explorer] [to be mined]; and the three [TA] keywords are [Cloud Village diver] [to be mined] [to be mined], where [Cloud Village diver] and [relation explorer] may be in the lit state and [to be mined] in the gray state. Alternatively, the description document [Duo matching data is loading, go listen to some songs to improve your match] may be presented.
In yet another example, if a party cannot hit all three keywords but hits one or two, the [to be mined] document may be presented in the remaining sphere(s). Alternatively, activity may be calculated over all daily active users who have established relationships, and the corresponding data may be collected and analyzed before the function goes online.
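The fallback behavior running through these examples, padding missing keyword hits with [to be mined] placeholders and distinguishing lit from gray spheres, might be expressed as follows (the three-slot count and the state names come from the examples above; everything else is an assumption):

```python
PLACEHOLDER = "to be mined"

def fill_sphere_keywords(hits, slots=3):
    """Pad the returned keywords up to `slots` spheres.

    Returned keywords (including fallback documents) are shown in the
    "lit" state; [to be mined] placeholders are shown in the "gray" state.
    """
    padded = list(hits[:slots]) + [PLACEHOLDER] * max(0, slots - len(hits))
    return [(kw, "gray" if kw == PLACEHOLDER else "lit") for kw in padded]

print(fill_sphere_keywords(["Cloud Village diver"]))
# [('Cloud Village diver', 'lit'), ('to be mined', 'gray'), ('to be mined', 'gray')]
print(fill_sphere_keywords([]))  # fully inactive party: three gray placeholders
```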
Through this display mode combining the fixed field area and the floating display area, a user can quickly locate, on the multi-user audio sharing page, the difference preference information between himself or herself and other users as well as their common preference information.
In some embodiments, a multi-user audio recommendation icon is also displayed on the multi-user audio sharing page; after the user performs the corresponding interactive operation, a multi-user audio recommendation page is generated and displayed to recommend a shared song list to the user. Specifically, the method may further comprise the following steps: displaying the multi-user audio recommendation icon on the multi-user audio sharing page; and in response to a triggering operation for the multi-user audio recommendation icon, generating and displaying a multi-user audio recommendation page, wherein the multi-user audio recommendation page includes recommended songs for any of the target users.
As an example, continuing with fig. 6 and in connection with the description of the first display area and the second display area in the above embodiments, the multi-person music recommendation icon 300 may be displayed in the second display area, below the sections. Taking a two-person relationship as an example, the multi-person music recommendation icon may include a "listen to songs" function key, and may further include duo music recommendation information, such as the duo recommendation date illustrated in fig. 6 and a description field about the multi-person music recommendation, e.g., 10 extra minutes of listening per day corresponding to affinity +10 (i.e., the relationship index).
Fig. 11 is a flow chart of another multi-user audio data display method according to an embodiment of the present disclosure. Compared with the method provided in the above embodiments, by enabling a user to generate and push an information display picture through operating a sharing function icon, any user who has established a multi-user relationship can efficiently enter the multi-user audio sharing page based on the pushed picture, further enhancing user experience. Specifically, in addition to the above steps S201 and S202, the following steps S111 and S112 may be included.
Step S111, displaying a sharing function icon on the multi-user audio sharing page.
In this embodiment, the sharing function icon may be displayed at the top of the multi-user audio sharing page, for example in its upper right corner.
In connection with the first display area and the second display area of the multi-user audio sharing page, the sharing function icon may be displayed in the first display area, above the user behavior information. Specifically, as shown in fig. 6, the sharing function icon 400 is displayed in the upper right corner of "american no ice & ice american" (the duo users' nicknames).
In some examples, the sharing function icon may also be displayed at any position on the multi-user audio sharing page, which is not particularly limited by the present disclosure.
Step S112, in response to a triggering operation for the sharing function icon, generating an information display picture corresponding to the multi-user audio sharing page and pushing the information display picture to a target sharing user, wherein the target sharing user includes any user among the target users.
For example, after the user clicks or touches the sharing function icon, an information display picture corresponding to the multi-user audio sharing page is generated. After the picture is generated, a page pop-up presents the target programs to which the picture can be pushed or in which it can be stored, such as program A, program B, program … in fig. 12, so as to push the information display picture.
In some embodiments, the information display picture may include thumbnail information of the multi-user audio sharing page; for example, the picture may carry multi-person relationship description information corresponding to the page. Taking a two-person relationship as an example, the information display picture is shown in fig. 12, and the multi-person relationship description information may include the nicknames of the two users, a duo avatar, the relationship establishment time, the listening similarity, and the like. Further, in combination with the description of the floating modules in the above embodiments, the multi-person relationship description information may also include several floating modules selected from all the floating modules; for example, the one or two floating modules with the highest priority may be selected from each section to display preference information (i.e., audio difference preference information or audio common preference information), as sketched below.
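Selecting the highest-priority floating modules from each section for inclusion in the information display picture might be sketched as follows (the (priority, document) shape and the per-section limit are assumptions consistent with the description above):

```python
def pick_modules_for_share(sections, per_section=2):
    """From each section, keep the floating modules with the highest display priority.

    sections: {"me": [(priority, document), ...], "we": [...], "TA": [...]}
    """
    return {
        name: [doc for _, doc in
               sorted(modules, key=lambda m: m[0], reverse=True)[:per_section]]
        for name, modules in sections.items()
    }

sections = {
    "me": [(3, "night owl"), (1, "rock fan")],
    "we": [(5, "listening similarity 80%"), (2, "relation explorer")],
    "TA": [(4, "early bird")],
}
print(pick_modules_for_share(sections, per_section=1))
# {'me': ['night owl'], 'we': ['listening similarity 80%'], 'TA': ['early bird']}
```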
In some embodiments, the information display picture may further carry link information. The link information may take the form of a two-dimensional code or another form (such as a URL), and is used to display the corresponding multi-user audio sharing page after being triggered by a target operation.
In this embodiment, the information display picture may be pushed to other programs on the local device, such as an album program or a social program. By saving the information display picture, the local user can, when needed, quickly open the audio program through the link information carried in the picture and access the multi-user audio sharing page. In addition, the information display picture may be pushed to social programs on other terminals, so as to be pushed to other users.
In this embodiment, the target operation is an operation on the link information and may differ for different forms of link information. For example, when the link information is a two-dimensional code, the user long-presses to scan the code, the client (i.e., the audio program) is invoked, and the multi-user sharing page is displayed; when the link information is a URL, the user clicks the URL, the client is invoked, and the multi-user sharing page is displayed. Further, after the client is invoked, it identifies whether the user has established a multi-user relationship (i.e., enabled the multi-user function): if so, the multi-user sharing page is displayed; otherwise, the client jumps to the activation page of the multi-user function module, facilitating the establishment of a multi-user relationship. If the client version is lower than that required to process the link, the user is directed to upgrade and then perform the operation again.
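The client-side decision flow after the link information is triggered might be sketched as follows (the version tuple, the minimum supported version, and the page names are all hypothetical; the disclosure only describes the overall branching):

```python
MIN_SUPPORTED_VERSION = (9, 0, 0)  # assumed minimum client version able to process the link

def handle_share_link(client_version, has_multi_user_relation):
    """Decide which page to show after the client is invoked by the link information."""
    if client_version < MIN_SUPPORTED_VERSION:
        return "prompt_upgrade_then_retry"          # version too low to process the link
    if has_multi_user_relation:
        return "multi_user_sharing_page"            # relationship already established
    return "multi_user_function_activation_page"    # jump to the activation page

print(handle_share_link((8, 5, 0), True))   # prompt_upgrade_then_retry
print(handle_share_link((9, 1, 0), False))  # multi_user_function_activation_page
print(handle_share_link((9, 1, 0), True))   # multi_user_sharing_page
```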
Based on the above technical solution, multi-person relationships can be established between users; by mining the audio preference data of the users who have established such a relationship and cross-analyzing the audio preference data among them, the users' differences in audio playing preference are displayed. This greatly improves the interactivity of the audio data, guides the audio program in exploring more directions for providing audio content to users, improves user experience, and promotes retention within the audio program.
Exemplary Medium
Having described the method of the exemplary embodiments of the present disclosure, next, a storage medium of the exemplary embodiments of the present disclosure will be described with reference to fig. 13.
Referring to fig. 13, a storage medium 30 is shown, in which a program product for implementing the above-described method according to an embodiment of the present disclosure is stored. The program product may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer; however, the program product of the present disclosure is not limited thereto.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. The readable signal medium may also be any readable medium other than a readable storage medium.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the context of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN).
Exemplary Apparatus
Having described the media of the exemplary embodiments of the present disclosure, a multi-user audio data display apparatus of the exemplary embodiments of the present disclosure will next be described with reference to the drawings. The apparatus is used to implement the method in any of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 14 is a schematic structural diagram of a multi-user audio data display device according to an embodiment of the disclosure, as shown in fig. 14, the device includes: an obtaining module 141 configured to obtain multi-user audio data analysis results of respective target users, wherein the respective target users are used to indicate respective users in a multi-person relationship established based on an audio playing program; and a generation and display module 142 configured to generate a multi-user audio sharing page and display an audio data analysis result including audio common preference information and audio difference preference information of each target user on the multi-user audio sharing page.
In one embodiment of the present disclosure, the apparatus further comprises: an analysis module configured to respectively acquire the historical audio playing data of each target user and determine the audio data analysis result of each target user based on the historical audio playing data.
In another embodiment of the disclosure, the analysis module is specifically configured to determine, based on the historical audio data of each target user, at least one audio difference preference information of each target user, and at least one audio common preference information between each target user, respectively; and obtaining an audio data analysis result based on each audio difference preference information and each audio common preference information. In yet another embodiment of the present disclosure, the analysis module includes: a feature extraction unit configured to extract audio feature information of each target user from the historical audio data; a similarity determination unit configured to computationally determine preference similarity between respective target users with respect to the audio feature information; and a preference information determining unit configured to determine, based on the preference similarity, at least one audio difference preference information of each of the target users, and at least one audio common preference information between each of the target users, respectively.
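The disclosure does not fix a concrete similarity measure for the similarity determination unit; as one hedged illustration, the preference similarity for a single audio feature dimension could be computed as a Jaccard similarity between the two users' label sets (the choice of Jaccard is an assumption):

```python
def preference_similarity(labels_a, labels_b):
    """Jaccard similarity between two users' label sets for one audio feature dimension."""
    a, b = set(labels_a), set(labels_b)
    if not a and not b:
        return 0.0  # no data on either side: treat as no measurable similarity
    return len(a & b) / len(a | b)

# e.g. genre labels extracted from each user's historical audio data
print(preference_similarity({"rock", "folk", "jazz"}, {"rock", "pop"}))  # 0.25
```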
In still another embodiment of the present disclosure, the audio feature information includes audio playing time period preference information, and the feature extraction unit is specifically configured to traverse each target user to obtain the time information of when each audio item in the historical audio data was played; determine the time period in which each audio item was played according to the time information and a preset time period mapping table; and determine the audio playing time period preference information according to the number of audio items contained in each time period.
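A minimal sketch of this time-period extraction follows (the concrete period boundaries are assumptions; the disclosure only requires some preset time period mapping table):

```python
from collections import Counter

# assumed preset time period mapping table: (start_hour, end_hour) -> period name
PERIOD_TABLE = {(0, 6): "late night", (6, 12): "morning",
                (12, 18): "afternoon", (18, 24): "evening"}

def period_of(hour):
    for (start, end), name in PERIOD_TABLE.items():
        if start <= hour < end:
            return name
    raise ValueError(f"hour out of range: {hour}")

def time_period_preference(play_hours):
    """Count plays per period; the most-played period is the preference."""
    counts = Counter(period_of(h) for h in play_hours)
    return counts.most_common(1)[0][0], counts

pref, counts = time_period_preference([23, 23, 22, 8, 14])
print(pref, dict(counts))  # evening {'evening': 3, 'morning': 1, 'afternoon': 1}
```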
In still another embodiment of the present disclosure, the audio feature information includes genre preference information, and the feature extraction unit is specifically configured to traverse each target user to obtain the genre label of each audio item in the historical audio data, and determine the genre preference information based on the genre labels.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre labels includes: determining first genre preference information based on the number of audio items contained under each genre label.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre labels includes: obtaining the playing time corresponding to the genre label of each audio item in the historical audio data; determining, based on the genre labels and their corresponding playing times, time-period genre label information for audio played in different time periods of each day; and determining second genre preference information based on the time-period genre label information.
In yet another embodiment of the present disclosure, determining the genre preference information based on the genre labels includes: obtaining the historical favorite genre label of the target user; determining the genre label containing the largest number of audio items as the favorite genre label; and, when the favorite genre label is inconsistent with the historical favorite genre label, determining the favorite genre label as a newly-added genre label and determining third genre preference information based on the newly-added genre label.
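The counting logic behind the first and third genre preference information might be sketched as follows (the Counter-based counting and the tie-breaking behavior are assumptions):

```python
from collections import Counter

def favorite_genre(genre_labels):
    """First genre preference: the genre label containing the most audio items."""
    return Counter(genre_labels).most_common(1)[0][0]

def newly_added_genre(genre_labels, historical_favorite):
    """Third genre preference: a favorite genre that differs from the historical one."""
    current = favorite_genre(genre_labels)
    return current if current != historical_favorite else None

labels = ["rock", "rock", "jazz", "rock", "folk"]
print(favorite_genre(labels))             # rock
print(newly_added_genre(labels, "folk"))  # rock -> newly-added genre label
print(newly_added_genre(labels, "rock"))  # None -> favorite genre unchanged
```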
In yet another embodiment of the present disclosure, the audio feature information includes singer preference information, and the feature extraction unit is specifically configured to traverse each target user, obtain the singer of each audio item in the historical audio data, and determine the singer preference information based on the singers.
In yet another embodiment of the present disclosure, determining singer preference information based on the singers includes: determining, based on the number of audio items attributed to each singer, first singer preference information from the singers whose audio-item counts reach a first preset threshold.
In yet another embodiment of the present disclosure, determining singer preference information based on the singers includes: obtaining the follower count of the singer of each audio item in the historical audio data; ranking the singers by the number of audio items attributed to each singer and determining at least one singer ranked before a preset position as a favorite singer; and determining second singer preference information based on the favorite singers whose follower counts do not reach a second preset threshold.
In yet another embodiment of the present disclosure, determining singer preference information based on the singers includes: obtaining a historical favorite singer of the target user whose audio has not been played within a preset period; and, when a singer matches the historical favorite singer, determining third singer preference information based on the matching singer.
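Taken together, the three kinds of singer preference information might be sketched as follows (all thresholds, the top-k cut-off, and the data shapes are illustrative assumptions):

```python
from collections import Counter

def first_singer_preference(singers, count_threshold=20):
    """Singers whose audio-item count reaches the first preset threshold."""
    return [s for s, n in Counter(singers).items() if n >= count_threshold]

def second_singer_preference(singers, follower_counts, top_k=3, follower_threshold=100_000):
    """Favorite singers (top-ranked by audio-item count) whose follower count
    does not reach the second preset threshold."""
    top = [s for s, _ in Counter(singers).most_common(top_k)]
    return [s for s in top if follower_counts.get(s, 0) < follower_threshold]

def third_singer_preference(singers, historical_favorites):
    """Singers in the current historical audio data that match historical favorites."""
    return sorted(set(singers) & set(historical_favorites))

plays = ["Singer A"] * 25 + ["Singer B"] * 5
print(first_singer_preference(plays))                            # ['Singer A']
print(second_singer_preference(plays, {"Singer A": 50_000}))     # ['Singer A', 'Singer B']
print(third_singer_preference(plays, ["Singer B", "Singer C"]))  # ['Singer B']
```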
In yet another embodiment of the present disclosure, the audio feature information includes audio release period preference information, and the feature extraction unit is specifically configured to traverse each target user to obtain the release time information of each audio item in the historical audio data; determine the release period information of each audio item based on the release time information and a preset release period mapping table; and determine the audio release period preference information according to the number of audio items contained in each release period.
In still another embodiment of the present disclosure, the audio feature information includes audio popularity preference information, and the feature extraction unit is specifically configured to traverse each target user to obtain the number of times each audio item in the historical audio data has been played; determine the popularity information of each audio item based on the play counts and a preset audio popularity mapping table; and determine the audio popularity preference information according to the number of audio items contained in each popularity level.
In yet another embodiment of the present disclosure, the audio feature information includes audio language preference information, and the feature extraction unit is specifically configured to traverse each target user to obtain the language tag of each audio item in the historical audio data, and determine the audio language preference information based on the number of audio items contained under each language tag.
In yet another embodiment of the present disclosure, the apparatus further comprises: a mapping acquisition module configured to acquire a pre-established first mapping relationship between each of a plurality of defined audio difference preference information and a plurality of first preference documents, and a second mapping relationship between each of a plurality of defined audio common preference information and a plurality of second preference documents. Obtaining the audio data analysis result based on each audio difference preference information and each audio common preference information then includes: for each piece of audio difference preference information, when the audio difference preference information matches the corresponding defined audio difference preference information, mapping out the first preference document corresponding to that audio difference preference information based on the first mapping relationship and the defined audio difference preference information; for each piece of audio common preference information, when the audio common preference information matches the corresponding defined audio common preference information, mapping out the second preference document corresponding to that audio common preference information based on the second mapping relationship and the defined audio common preference information; and obtaining the audio data analysis result based on the mapped first preference documents and second preference documents.
In yet another embodiment of the present disclosure, the generating and displaying module 142 includes: a document determining unit configured to determine a first preference document and/or a second preference document preferentially displayed in the audio data analysis result based on a predetermined presentation priority corresponding to each audio difference preference information and/or each audio common preference information; and a document display unit configured to display the determined first preference document and/or second preference document on the multi-user audio sharing page.
In still another embodiment of the present disclosure, obtaining the audio data analysis result based on each audio difference preference information and each audio common preference information further includes: for each piece of audio difference preference information, when the audio difference preference information does not match the corresponding defined audio difference preference information, randomly selecting a corresponding first fallback document from a fallback document library; and for each piece of audio common preference information, when the audio common preference information does not match the corresponding defined audio common preference information, randomly selecting a corresponding second fallback document from the fallback document library.
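The mapping from matched preference information to preference documents, with random fallback documents for unmatched cases, might be sketched as follows (the mapping contents and the fallback document library are illustrative placeholders):

```python
import random

# assumed first mapping relationship: defined difference preference -> preference document
FIRST_MAPPING = {
    "late-night listener": "night owl",
    "rock lover": "rock fan",
}
FALLBACK_LIBRARY = ["Cloud Village diver", "relation explorer"]  # illustrative fallback documents

def document_for(preference_info, mapping, fallback_library=FALLBACK_LIBRARY):
    """Return the mapped preference document, or a randomly selected fallback document."""
    if preference_info in mapping:
        return mapping[preference_info]
    return random.choice(fallback_library)

print(document_for("rock lover", FIRST_MAPPING))    # rock fan
print(document_for("unknown pref", FIRST_MAPPING))  # a random fallback document
```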
In yet another embodiment of the present disclosure, the apparatus further comprises: a region display module configured to display a first display region and a second display region on a multi-user audio sharing page; the multi-person relationship information between the respective target users is displayed in the first display area, and the audio data analysis result is displayed in the second display area.
In yet another embodiment of the present disclosure, the target users include a first user and a second user, and the apparatus further comprises: a section dividing module configured to divide the second display area into a first section for displaying the audio preference information of the first user, a second section for displaying the audio preference information of the second user, and a third section for displaying the audio common preference information; the area display module is specifically configured to display the audio data analysis result in the first section, the second section, and the third section in the second display area.
In yet another embodiment of the present disclosure, each of the first section, the second section, and the third section includes a preset number of floating modules, and the apparatus further comprises: a floating number determining module configured to determine a first display number for displaying the audio difference preference information of the first user, a second display number for displaying the audio difference preference information of the second user, and a third display number for displaying the audio common preference information, respectively, according to the number of floating modules in the first section, the second section, and the third section. The area display module is specifically configured to floatingly display the corresponding audio difference preference information of the first user in each floating module of the first section according to the first display number; floatingly display the corresponding audio difference preference information of the second user in each floating module of the second section according to the second display number; and floatingly display the corresponding audio common preference information in each floating module of the third section according to the third display number.
In yet another embodiment of the present disclosure, the floating module includes a fixed field area and a floating display area; the area display module is specifically configured to fixedly display identification information in the fixed field area, wherein the identification information includes one of the following identifications: a first user identification, a second user identification, or a common identification between the first user and the second user; and to floatingly display the audio difference preference information of the first user, the audio difference preference information of the second user, or the audio common preference information in the floating display area.
In yet another embodiment of the present disclosure, the multi-person relationship information includes at least one of the following: user basic information corresponding to the multi-person relationship, the establishment time of the multi-person relationship, and a relationship index of the multi-person relationship.
In yet another embodiment of the present disclosure, the apparatus further comprises: a user behavior display module configured to display user behavior information of any of the target users in the first display area, the user behavior information including the liked audio with the latest time stamp.
In yet another embodiment of the present disclosure, the apparatus further comprises: a first icon display module configured to display a multi-user audio recommendation icon on a multi-user audio sharing page; in response to a triggering operation for the multi-person audio recommendation icon, a multi-person audio recommendation page is generated and displayed, the multi-person audio recommendation page including a recommendation song for any one of the target users.
In yet another embodiment of the present disclosure, the apparatus further comprises: a response module configured to obtain the multi-user audio data analysis results of the respective target users in response to a triggering operation on the multi-user function module in the target audio playing program.
In yet another embodiment of the present disclosure, the apparatus further comprises: a second icon display module configured to display a sharing function icon on the multi-user audio sharing page; and, in response to a triggering operation for the sharing function icon, generate an information display picture corresponding to the multi-user audio sharing page and push the information display picture to a target sharing user, wherein the target sharing user includes any user among the target users.
In still another embodiment of the present disclosure, the information display picture carries multi-person relationship description information and/or link information corresponding to the multi-user audio sharing page, wherein the link information is used to display the multi-user audio sharing page after being triggered by a target operation.
Exemplary Computing Device
Having described the methods, media, and apparatus of exemplary embodiments of the present disclosure, a computing device of exemplary embodiments of the present disclosure is next described with reference to fig. 15.
The computing device 40 shown in fig. 15 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 15, computing device 40 is in the form of a general purpose computing device. Components of computing device 40 may include, but are not limited to: at least one processing unit 401, at least one storage unit 402, and a bus 403 connecting the different system components (including the processing unit 401 and the storage unit 402). The at least one storage unit 402 stores computer-executable instructions; the at least one processing unit 401 includes a processor that executes the computer-executable instructions to implement the methods described above.
The bus 403 includes a data bus, a control bus, and an address bus.
The storage unit 402 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 4021 and/or cache memory 4022, and may further include readable media in the form of nonvolatile memory, such as Read Only Memory (ROM) 4023.
The storage unit 402 may also include a program/utility 4025 having a set (at least one) of program modules 4024, such program modules 4024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
Computing device 40 may also communicate with one or more external devices 404 (e.g., a keyboard, a pointing device, etc.). Such communication may occur through an input/output (I/O) interface 405. Moreover, computing device 40 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 406. As shown in fig. 15, the network adapter 406 communicates with the other modules of computing device 40 over the bus 403. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with computing device 40, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the multi-user audio data display apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided into and embodied by a plurality of units/modules.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the present disclosure is not limited to the particular embodiments disclosed, nor does the division into aspects imply that features in those aspects cannot be combined to advantage; this division is adopted only for convenience of description. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A multi-user audio data display method, comprising:
acquiring multi-user audio data analysis results of all target users, wherein all target users are used for indicating all users in a multi-user relationship established based on an audio playing program;
generating a multi-user audio sharing page, and displaying the audio data analysis result on the multi-user audio sharing page, wherein the audio data analysis result comprises audio common preference information and audio difference preference information of each target user.
2. The method as recited in claim 1, further comprising:
respectively acquiring historical audio playing data of each target user;
and determining the audio data analysis result of each target user based on the historical audio play data.
3. The method of claim 2, wherein said determining the audio data analysis result of each of the target users based on the historical audio play data comprises:
determining, based on the historical audio data of each target user, at least one audio difference preference information of each target user and at least one audio common preference information between each target user;
and obtaining the audio data analysis result based on each piece of audio difference preference information and each piece of audio common preference information.
4. A method according to claim 3, wherein said determining, based on said historical audio data of each of said target users, respective at least one audio difference preference information of each of said target users, and at least one audio common preference information between each of said target users, respectively, comprises:
extracting audio feature information of each of the target users from the historical audio data, and calculating and determining preference similarity of the audio feature information among the target users;
at least one audio difference preference information for each of the target users and at least one audio common preference information between each of the target users are determined based on the preference similarities, respectively.
5. The method as recited in claim 1, further comprising:
displaying a first display area and a second display area on the multi-user audio sharing page;
and displaying the multi-person relation information among the target users in the first display area, and displaying the audio data analysis result in the second display area.
6. The method of claim 5, wherein the target user comprises a first user and a second user, the method further comprising:
dividing the second display area into a first section for displaying the audio preference information of the first user, a second section for displaying the audio preference information of the second user, and a third section for displaying the audio common preference information;
wherein the displaying the audio data analysis result in the second display area includes:
displaying the audio data analysis result in the first section, the second section, and the third section in the second display area.
7. The method of any one of claims 1-6, further comprising:
displaying a sharing function icon on the multi-user audio sharing page;
and responding to the triggering operation for the sharing function icon, generating an information display picture corresponding to the multi-user audio sharing page, and pushing the information display picture to a target sharing user, wherein the target sharing user comprises any user in the target users.
8. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the multi-user audio data display method of any one of claims 1 to 7.
9. A multi-user audio data display apparatus, the apparatus comprising:
an obtaining module configured to obtain multi-user audio data analysis results of respective target users, wherein the respective target users are used to indicate respective users in a multi-person relationship established based on an audio playing program;
a generation and display module configured to generate a multi-user audio sharing page and display the audio data analysis result on the multi-user audio sharing page, wherein the audio data analysis result comprises audio common preference information and audio difference preference information of each of the target users.
10. A computing device, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to cause the computing device to perform the multi-user audio data display method of any one of claims 1 to 7.


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination