CN110910917B - Audio clip splicing method and device - Google Patents

Audio clip splicing method and device

Info

Publication number
CN110910917B
Authority
CN
China
Prior art keywords
audio
user
clip
lyrics
target
Prior art date
Legal status
Active
Application number
CN201911080116.XA
Other languages
Chinese (zh)
Other versions
CN110910917A (en)
Inventor
叶聪
Current Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN201911080116.XA
Publication of CN110910917A
Application granted
Publication of CN110910917B
Legal status: Active

Classifications

    • G — PHYSICS
    • G11 — INFORMATION STORAGE
    • G11B — INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 — Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 — Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 — Electronic editing of digitised analogue information signals, e.g. audio or video signals

Abstract

The disclosure provides an audio clip splicing method and apparatus, and relates to the technical field of audio. The method splices a first audio clip and a target audio clip to obtain a target audio file. The target audio clip is synthesized from the accompaniment audio data of a second audio clip and the singing data of the user, and the second audio clip and the first audio clip are audio clips from different songs. The accompaniment audio data of the resulting target audio file therefore comes from different songs, which breaks the fixed pattern in which the accompaniment audio data of a traditional target audio file all comes from the same song, making the user's singing-receiving mode more diversified.

Description

Audio clip splicing method and device
Technical Field
The present disclosure relates to the field of audio technologies, and in particular, to a method and an apparatus for splicing audio segments.
Background
Currently, a terminal can provide a user with a variety of entertainment services, such as a singing-receiving (song relay) service, to enrich the user's life. Through this service, the user can continue singing a song on the terminal for self-entertainment.
In the related art, in the process of providing the singing-receiving service for a user, the terminal first plays a certain audio clip of a song (i.e., the segment to be continued), and then the user sings the next audio clip of the song (i.e., the continuation segment). While the user is singing, the terminal plays the accompaniment of the continuation segment and collects the audio data of the continuation segment sung by the user. Finally, the terminal can play the resulting audio file, which includes: the audio clip played by the terminal and the audio clip sung by the user.
However, in the related art, the user's singing-receiving mode follows a single fixed pattern.
Disclosure of Invention
The disclosure provides an audio clip splicing method and apparatus, which can solve the problem in the related art that the user's singing-receiving mode is limited to a single fixed pattern. The technical solution is as follows:
in one aspect, a method for splicing audio segments is provided, and is applied to a terminal, and the method includes:
acquiring a first audio clip;
determining a second audio clip according to the received first operation of the user, wherein the first audio clip and the second audio clip are audio clips in different songs;
acquiring accompaniment audio data of the second audio clip and lyrics of the second audio clip;
acquiring singing data of a user, and synthesizing the singing data of the user and the accompaniment audio data of the second audio clip into a target audio clip;
and splicing the first audio clip and the target audio clip to obtain a target audio file, wherein the lyrics of the first audio clip are displayed in a first area of a screen of the terminal, the lyrics of the second audio clip are displayed in a second area of the screen, and the second area and the first area are different areas in the screen.
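The claimed flow can be illustrated with a short sketch. The data structure, function names, and the sample-wise mixing below are illustrative assumptions, not identifiers from the disclosure; they only show the ordering of the claimed steps under those assumptions.

```python
from dataclasses import dataclass, field
from typing import List

Samples = List[float]  # mono PCM samples in the range [-1.0, 1.0]

@dataclass
class AudioClip:
    song_id: str                # song this clip is taken from
    lyrics: str
    audio: Samples = field(default_factory=list)          # fully rendered audio (first clip)
    accompaniment: Samples = field(default_factory=list)  # accompaniment only (second clip)

def mix(accompaniment: Samples, vocals: Samples) -> Samples:
    """Crude synthesis of the user's singing data over an accompaniment track."""
    n = min(len(accompaniment), len(vocals))
    return [0.5 * (accompaniment[i] + vocals[i]) for i in range(n)]

def splice_clips(first_clip: AudioClip, second_clip: AudioClip, user_vocals: Samples) -> Samples:
    # The first and second clips must come from different songs (a core condition of the method).
    assert first_clip.song_id != second_clip.song_id
    target_clip = mix(second_clip.accompaniment, user_vocals)  # synthesize the target audio clip
    return first_clip.audio + target_clip                      # splice: the first clip comes first
```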
Optionally, the determining the second audio segment according to the received first operation of the user includes:
displaying a recommendation list comprising lyrics of a plurality of candidate second audio fragments;
receiving the first operation of the user, wherein the first operation is used for indicating the second audio clip selected by the user in the recommendation list.
Optionally, the recommendation list further includes:
and the song, the original singer and the singing times of the user for the accompaniment audio data of the candidate second audio clip of the audio file to which each candidate second audio clip belongs.
Optionally, the determining the second audio segment according to the received first operation of the user includes:
receiving the first operation of the user, wherein the first operation is used for indicating a keyword input by the user;
acquiring a target song of which the lyrics comprise the keywords;
determining one audio clip of the target song as the second audio clip.
Optionally, the determining one audio segment of the target song as the second audio segment includes:
displaying lyrics of the target song in the screen;
receiving a second operation of the user, wherein the second operation is used for indicating the target lyrics determined by the user in the lyrics of the target song;
determining the audio segment indicated by the target lyrics as the second audio segment.
Optionally, after the receiving the second operation of the user, the method further includes:
displaying the target lyrics and lyrics of the target song except the target lyrics in at least one of the following modes:
displaying in different colors;
displaying with different font sizes;
displaying in different font types.
Optionally, before the obtaining the first audio segment, the method further includes:
acquiring a plurality of audio files, wherein each audio file comprises a third audio clip and a fourth audio clip, the fourth audio clip is formed by combining accompaniment audio data of the fourth audio clip and singing data of a user, and the third audio clip and the fourth audio clip are audio clips in different songs;
the obtaining the first audio piece includes:
receiving a third operation of the user, wherein the third operation is used for indicating the initial audio file selected by the user in the plurality of audio files;
determining a third audio segment in the initial audio file as the first audio segment.
Optionally, the terminal is connected to the server, and after the first audio clip and the target audio clip are spliced to obtain the target audio file, the method further includes:
and sending the target audio file to the server.
In another aspect, an apparatus for splicing audio segments is provided, which is applied to a terminal, and includes:
the first acquisition module is used for acquiring a first audio clip;
the determining module is used for determining a second audio clip according to the received first operation of the user, wherein the first audio clip and the second audio clip are audio clips in different songs;
the second obtaining module is used for obtaining the accompaniment audio data of the second audio clip and the lyrics of the second audio clip;
the synthesis module is used for acquiring singing data of a user and synthesizing the singing data of the user and the accompaniment audio data of the second audio clip into a target audio clip;
and the splicing module is used for splicing the first audio clip and the target audio clip to obtain a target audio file, wherein the lyrics of the first audio clip are displayed in a first area of a screen of the terminal, the lyrics of the second audio clip are displayed in a second area of the screen, and the second area and the first area are different areas in the screen.
Optionally, the determining module is configured to:
displaying a recommendation list comprising lyrics of a plurality of candidate second audio fragments;
receiving a first operation of the user, wherein the first operation is used for indicating the second audio clip selected by the user in the recommendation list.
Optionally, the recommendation list further includes: the track and original singer of the song to which each candidate second audio clip belongs, and the number of times users have sung over the accompaniment audio data of the candidate second audio clip.
Optionally, the determining module includes:
the receiving submodule is used for receiving the first operation of the user, and the first operation is used for indicating a keyword input by the user; (ii) a
The obtaining submodule is used for obtaining a target song of which the lyrics comprise the keywords;
a determining sub-module, configured to determine one audio segment of the target song as the second audio segment.
Optionally, the determining sub-module is configured to:
displaying lyrics of the target song in the screen;
receiving a second operation of the user, wherein the second operation is used for indicating the target lyrics determined by the user in the lyrics of the target song;
determining the audio segment indicated by the target lyrics as the second audio segment.
Optionally, the determining module is configured to: displaying the target lyrics and lyrics of the target song except the target lyrics in at least one of the following modes:
displaying in different colors;
displaying with different font sizes;
displaying in different font types.
Optionally, the apparatus further comprises:
the third obtaining module is used for obtaining a plurality of audio files, each audio file comprises a third audio clip and a fourth audio clip, the fourth audio clip is formed by combining accompaniment audio data of the fourth audio clip and singing data of a user, and the third audio clip and the fourth audio clip are audio clips in different songs;
the first obtaining module is configured to:
receiving a third operation of the user, wherein the third operation is used for indicating the initial audio file selected by the user in the plurality of audio files;
determining a third audio segment in the initial audio file as the first audio segment.
Optionally, the terminal is connected to a server, and the apparatus further includes:
and the sending module is used for sending the target audio file to the server.
In yet another aspect, an apparatus for splicing audio segments is provided, the apparatus comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor when executing the computer program implementing the method of splicing audio segments as described in the above aspect.
In yet another aspect, a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of splicing audio segments as described in the above aspect.
In a further aspect, a computer program product comprising instructions is provided, which when run on the computer causes the computer to perform the method of splicing audio segments of the above aspect.
The beneficial effects brought by the technical solutions provided by the present disclosure include at least the following:
the invention provides a method and a device for splicing audio clips. The target audio clip is formed by combining the accompaniment audio data of the second audio frequency band and the singing data of the user, and the second audio clip and the first audio clip are audio clips in different songs, so that the obtained accompaniment audio data of the target audio file is the accompaniment audio data in different songs, the fixed mode that the accompaniment audio data of the traditional target audio file is the accompaniment audio data in the same song is broken, and the singing receiving mode of the user is more diversified.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment to which embodiments of the present disclosure are directed;
fig. 2 is a flowchart of a splicing method of audio segments according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another audio segment splicing method provided by the embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an interface for a user to determine an initial audio file in a terminal according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an interface for a user to determine an initial audio file in another terminal according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of a sing-over interface in a terminal according to an embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a method for obtaining accompaniment audio data of a second audio clip according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating an alternative method for obtaining accompaniment audio data for a second audio clip according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a search interface for audio files in a terminal according to an embodiment of the disclosure;
FIG. 10 is a schematic diagram of a search results interface for audio files in a terminal according to an embodiment of the disclosure;
fig. 11 is a schematic diagram of a screening result interface of an audio file in a terminal according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of a selection interface of a lyric fragment in a terminal according to an embodiment of the disclosure;
fig. 13 is a schematic diagram of another interface for receiving singing in a terminal according to an embodiment of the disclosure;
fig. 14 is a schematic diagram of a singing interface in a terminal provided by an embodiment of the present disclosure;
fig. 15 is a schematic diagram of an interface to be published of a target audio file in a terminal according to an embodiment of the present disclosure;
fig. 16 is a schematic diagram of a release status interface in a terminal according to an embodiment of the present disclosure;
fig. 17 is a schematic structural diagram of an audio segment splicing apparatus provided in an embodiment of the present disclosure;
fig. 18 is a schematic structural diagram of a second obtaining module provided in the embodiment of the present disclosure;
FIG. 19 is a schematic structural diagram of another audio segment splicing apparatus provided by the embodiment of the present disclosure;
fig. 20 is a schematic structural diagram of a splicing apparatus for audio segments according to another embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment to which embodiments of the present disclosure relate. Referring to fig. 1, the implementation environment may include one or more terminals 100 (e.g., 2 terminals 100 are shown in fig. 1) and a server 200. Each terminal 100 may establish a communication connection with the server 200 by wire or wirelessly.
Alternatively, each terminal 100 may be a smart phone, a tablet computer, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, or the like. The server 200 may be one server, a server cluster composed of several servers, or a cloud computing service center.
The embodiment of the present disclosure provides an audio segment splicing method, which may be applied to an audio segment splicing device, which may be disposed on any terminal 100 shown in fig. 1, or the device may be the terminal 100. Referring to fig. 2, the method may include:
step 201, obtaining a first audio clip.
The first audio clip may include accompaniment audio data and original singing audio data, and lyrics of the first audio clip may be displayed in a first region of a screen of the terminal.
Step 202, determining a second audio clip according to the received first operation of the user.
Wherein the first audio clip and the second audio clip are audio clips in different songs. Accordingly, the accompaniment audio data of the first audio clip and the accompaniment audio data of the second audio clip are different. The lyrics of the second audio piece may be displayed in a second region of the screen of the terminal, the second region being a different region of the screen than the first region.
Step 203, obtaining accompaniment audio data of the second audio clip and lyrics of the second audio clip.
In the embodiment of the disclosure, after acquiring the second audio clip, the terminal may directly acquire accompaniment audio data and lyrics of the second audio clip, and may display the lyrics in a second region of the screen, where the second region is a region different from the first region in the screen.
Step 204, obtaining the singing data of the user, and synthesizing the singing data of the user and the accompaniment audio data of the second audio clip into a target audio clip.
Alternatively, the lyrics of the user's singing data may be the same as the lyrics of the second audio piece, or may be different.
And step 205, splicing the first audio clip and the target audio clip to obtain a target audio file.
Optionally, in the target audio file obtained by splicing the first audio segment and the target audio segment by the terminal, the first audio segment may be located before the target audio segment.
In summary, the embodiments of the present disclosure provide an audio clip splicing method, which can splice a first audio clip and a target audio clip to obtain a target audio file. The target audio clip is synthesized from the accompaniment audio data of the second audio clip and the singing data of the user, and the second audio clip and the first audio clip are audio clips from different songs, so the accompaniment audio data of the resulting target audio file comes from different songs. This breaks the fixed pattern in which the accompaniment audio data of a traditional target audio file all comes from the same song, making the user's singing-receiving mode more diversified.
Fig. 3 is a flowchart of another audio segment splicing method provided by the embodiment of the present disclosure. The method can be applied to an apparatus for splicing audio segments, the apparatus can be disposed in any one of the terminals 100 shown in fig. 1, or the apparatus can be the terminal 100. Referring to fig. 3, the method may include:
step 301, obtaining a plurality of audio files.
Each audio file may include a third audio clip and a fourth audio clip, where the fourth audio clip is formed by combining the accompaniment audio data of the fourth audio clip with singing data of a user, and the third audio clip and the fourth audio clip are audio clips from different songs. The third audio clip is generally an audio clip containing original singing audio data and may also be referred to as the segment to be continued, and the fourth audio clip is generally an audio clip containing singing data of a user and may also be referred to as the continuation segment.
Optionally, the lyrics of the singing data of the user included in the fourth audio clip may be the same as or different from the lyrics of the fourth audio clip. Moreover, any two of the plurality of audio files may contain completely different audio clips, that is, their third audio clips have different accompaniment audio data and their fourth audio clips have different accompaniment audio data; or the audio clips of any two audio files may be partially different, that is, their third audio clips have the same accompaniment audio data while their fourth audio clips have different accompaniment audio data.
In the embodiment of the present disclosure, the plurality of audio files may be pre-stored in the terminal, or may be sent to the terminal by the server. If the plurality of audio files are sent by the server, each of them may be an audio file related to the song that the server determines to be clicked most frequently, based on the click frequency of the user of the terminal on different songs. An audio file related to the most frequently clicked song may be an audio file of the same genre as that song (for example, both being folk music), or an audio file whose original singer is the same as that song's original singer.
Because the clicking frequency of the user on the songs can reflect the preference of the user, and each audio file in the plurality of audio files is related to the song with the highest clicking frequency of the user, the preference requirement of the user can be met to a certain extent, personalized service is provided for the user, and the user experience is effectively improved.
Optionally, among the audio files related to the most frequently clicked song, the plurality of audio files may further be those with higher numbers of likes, forwards, and comments. The numbers of likes, forwards, and comments of an audio file reflect, to a certain extent, how interesting the audio file is, so by acquiring the audio files with higher numbers of likes, forwards, and comments, i.e., the more interesting audio files, the terminal can improve the user experience to a certain extent.
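As a rough illustration of the selection logic described above, the following sketch (with assumed field names and an assumed scoring rule) first restricts candidate files to those related to the user's most frequently clicked song and then prefers files with more likes, forwards, and comments.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateFile:
    genre: str
    original_singer: str
    likes: int
    forwards: int
    comments: int

def recommend(files: List[CandidateFile], top_song_genre: str,
              top_song_singer: str, limit: int = 10) -> List[CandidateFile]:
    # Keep only files related to the user's most frequently clicked song.
    related = [f for f in files
               if f.genre == top_song_genre or f.original_singer == top_song_singer]
    # Prefer higher engagement; equal weights are an arbitrary illustrative choice.
    related.sort(key=lambda f: f.likes + f.forwards + f.comments, reverse=True)
    return related[:limit]
```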
Step 302, receiving a third operation of the user, where the third operation is used to indicate an initial audio file selected by the user among the plurality of audio files.
Optionally, the third operation may include: single click or double click, etc.
In the embodiment of the present disclosure, after acquiring the plurality of audio files, the terminal may play the first audio file among them and switch the played audio file whenever a switching operation of the user is received. Then, when a third operation of the user is received, the terminal determines the currently played audio file as the initial audio file, as sketched below. The switching operation may be a click operation or a sliding operation, and the click operation may be a single-click or double-click operation.
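The browse-and-confirm behaviour described above can be sketched as follows; the class and method names are assumptions for illustration only.

```python
from typing import List

class AudioFileBrowser:
    def __init__(self, audio_files: List[str]):
        assert audio_files, "the server provides at least one audio file"
        self.audio_files = audio_files
        self.index = 0  # the first audio file is played by default

    def switch(self, direction: int) -> str:
        """Switching operation: +1 for the right switching key, -1 for the left one."""
        self.index = (self.index + direction) % len(self.audio_files)
        return self.audio_files[self.index]  # the file now being played

    def confirm_initial(self) -> str:
        """Third operation: the currently played file becomes the initial audio file."""
        return self.audio_files[self.index]
```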
Optionally, when the terminal plays a certain audio file, the terminal may display the attribute information of the audio file. The attribute information may include: the number of likes, comments, and forwards for the audio file.
Optionally, the attribute information of the audio file may further include: a serial number of the audio file, the release time of the audio file, and an identifier of the user providing the audio file. The serial number of the audio file may refer to the position, in chronological order, of this user's singing data among all users' singing data recorded over the accompaniment audio data of the fourth audio clip. The user's identifier may be at least one of the user's avatar, nickname, and account number.
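A possible way to derive the serial number described above is sketched below, under the assumption that each take is recorded with a timestamp; the data layout is illustrative only.

```python
from typing import List, Tuple

def serial_number(takes: List[Tuple[str, float]], user_id: str) -> int:
    """takes: (user_id, record_timestamp) pairs over one fourth-clip accompaniment."""
    ordered = sorted(takes, key=lambda take: take[1])  # chronological order
    for position, (uid, _) in enumerate(ordered, start=1):
        if uid == user_id:
            return position
    raise ValueError("this user has no take over the given accompaniment")
```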
The attribute information of each audio file can reflect the interestingness of the audio file to a certain extent, so that the terminal can display the attribute information of the audio file to a user, and can provide certain reference for the user so that the user can determine the initial audio file, and the user experience is improved.
Optionally, the terminal may further display the comment content of the audio file by the other user and the identifier of the user who comments on the audio file, so that the user can know the evaluation of the audio file by the other user.
In the embodiment of the disclosure, when playing a certain audio file, the terminal may display the lyrics of the audio file and may display the playing progress of the audio file. Optionally, the terminal may display the playing progress by adjusting the display effect of the lyrics of the already-played portion of the audio file. For example, the lyrics of the played portion may be displayed in a color different from the lyrics of the unplayed portion; or the lyrics of the played portion and the lyrics of the unplayed portion may be displayed in different font sizes or different font types, thereby presenting the playing progress of the audio file.
In addition, while playing a certain audio file, the terminal can reveal the lyrics of the audio file gradually according to the playing progress. For example, when playing the third audio clip of an audio file, only the lyrics of the third audio clip may be displayed, and the lyrics of the fourth audio clip may be withheld; when the fourth audio clip is played, its lyrics are then displayed. This improves the flexibility of lyric display and leaves some suspense for the user, thereby improving the user experience. Withholding the lyrics of the fourth audio clip may mean hiding them, or displaying them after blurring.
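The lyric display behaviour described in the two paragraphs above can be sketched as follows; the line structure, the placeholder used for hidden lyrics, and the highlight marker are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LyricLine:
    text: str
    start_s: float        # playback time (seconds) at which this line begins
    in_fourth_clip: bool  # True if the line belongs to the continuation (fourth) clip

def render_lyrics(lines: List[LyricLine], position_s: float) -> List[str]:
    rendered = []
    for line in lines:
        if line.in_fourth_clip and position_s < line.start_s:
            rendered.append("*" * len(line.text))     # hidden / blurred placeholder
        elif position_s >= line.start_s:
            rendered.append(f"[played] {line.text}")  # e.g. shown in a different colour or font
        else:
            rendered.append(line.text)
    return rendered
```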
For example, referring to fig. 4 and 5, fig. 4 is a schematic diagram of an interface for a user to determine an initial audio file in a terminal according to an embodiment of the present disclosure, and fig. 5 is a schematic diagram of another such interface. The interfaces (which may be referred to as singing-receiving detail interfaces) Z1 and Z2 shown in fig. 4 and 5 may include play switching keys for the audio file (a left switching key C11 and a right switching key C12), a singing-receiving key C2, a signal identifier C3, a power identifier C4 of the terminal, text indicating that the remaining power of the terminal is "100%", and the text "9:41 AM" indicating that the current time of the terminal is 9:41 AM. Every interface in the terminal may include the signal identifier C3, the power identifier C4, the text indicating the remaining power of the terminal, and the text indicating the current time. In the interfaces Z1 and Z2, if the user clicks the left switching key C11 or the right switching key C12, the terminal switches the currently played audio file; if a click on the singing-receiving key C2 (i.e., the third operation) is received, the terminal may determine the currently playing audio file as the initial audio file. The area where the play switching keys and the singing-receiving key C2 are located may be referred to as the operation area of the terminal.
Referring to fig. 4, the terminal is currently playing the third audio clip of an audio file, and the lyrics of the third audio clip displayed by the terminal are "XXX light XX", which come from song "A1" of singer A; the lyrics of the fourth audio clip are shown blurred, and the avatar (Y11) and nickname (i.e., "brave") of the user providing the audio file are displayed. As can be seen from fig. 4 and 5, the audio file has 1024 likes, 524 comments, and 15 forwards. The comment content of the audio file displayed by the terminal includes: the comment "this reply sounds just like Onhaha, I'll sing one" from the user with avatar Y12, the comment "talented, this is on!" from the user with avatar Y13, the comment "taking gas" from the user with avatar Y14, and so on. When the terminal plays the fourth audio clip of the audio file, referring to fig. 5, the lyrics of the fourth audio clip displayed by the terminal are "xxxxxx", which come from song "B1" of singer B, and it can also be seen that the audio file was released on August 23.
Step 303, determining a third audio segment in the initial audio file as the first audio segment.
After the terminal determines the initial audio file from the plurality of audio files according to the third operation of the user, the terminal may directly determine a third audio clip in the initial audio file as the first audio clip, and display lyrics corresponding to the first audio clip in the first area of the screen of the terminal.
For example, referring to fig. 4 and 5, after the terminal receives the user's click operation on the singing-receiving key C2, the currently playing audio file, i.e., the audio file whose lyrics include "XXX light XX on, xxxxxx", is determined as the initial audio file, and the third audio clip included in that audio file, i.e., the audio clip whose lyrics are "XXX light XX on", is determined as the first audio clip. At this time, the terminal may display the interface (which may be referred to as a singing-receiving interface) Z3 shown in fig. 6, and the interface Z3 may include the text "singing receiving". As can be seen in fig. 6, the lyrics "XXX light XX on" are displayed in the first area 00a of the screen of the terminal.
And step 304, acquiring a second audio clip according to the received first operation of the user.
The second audio clip and the first audio clip are audio clips in different songs, and the first operation of the user can be a click operation or an input operation.
In the embodiment of the present disclosure, there are various ways for the terminal to determine the second audio clip; the implementation process of step 304 is mainly described below through several realizable manners.
In a first implementation manner, referring to fig. 7, the method for obtaining accompaniment audio data of the second audio clip may include:
step 3041a, display the recommendation list.
Wherein the recommendation list may comprise lyrics of a plurality of candidate second audio fragments.
In the embodiment of the present disclosure, the recommendation list may be sent to the terminal by the server. Alternatively, the recommendation list may be pre-stored in the terminal.
Optionally, the first candidate audio clip among the plurality of candidate second audio clips may be the fourth audio clip included in the initial audio file, and each candidate second audio clip may be an audio clip to be merged (spliced) with the first audio clip.
In an embodiment of the present disclosure, the recommendation list may further include: the track and original singer of the song to which each candidate second audio clip belongs, and the number of times users have sung over the accompaniment audio data of that candidate second audio clip. Correspondingly, when the terminal displays the lyrics of the plurality of candidate second audio clips included in the recommendation list, it can also display, for each candidate, the track, the original singer, and the number of times it has been sung, so as to provide the user with more information about the candidate second audio clips and help the user determine the second audio clip from the candidates in the recommendation list.
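One possible shape for an entry of such a recommendation list is sketched below; the field names and the rendering format are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class RecommendationEntry:
    lyrics: str           # lyrics of the candidate second audio clip
    track: str            # song the candidate clip belongs to
    original_singer: str
    times_sung: int       # how many users have sung over this accompaniment

    def render(self) -> str:
        # One display line per candidate, combining the fields listed in the disclosure.
        return f'"{self.lyrics}" ({self.track}, {self.original_singer}, sung {self.times_sung} times)'
```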
For example, referring to fig. 6, the interface Z3 may display the recommendation list; the interface Z3 currently displays the lyrics of two candidate second audio clips included in the recommendation list, where the lyrics of one candidate second audio clip are "open XXXXX", which come from song "B1" of singer B, and the lyrics of another candidate second audio clip are "X light XXXX", which come from song "C1" of singer C.
Step 3042a, receiving a first operation of the user, where the first operation is used to indicate a second audio clip selected by the user in the recommendation list.
The first operation may be a click operation or a circle selection operation.
Optionally, a first selection box may be displayed in the screen of the terminal, and the first selection box may be located in the second area of the screen. Before receiving the first operation of the user, the terminal may receive a sliding operation of the user, where the sliding operation is used for moving the lyrics of the candidate second audio clip determined by the user into the first selection box. Thereafter, upon receiving the first operation of the user, the terminal may determine the audio clip indicated by the lyrics located within the first selection box as the second audio clip.
It should be noted that, in order to make the lyrics located in the first selection box more salient, the terminal may display the lyrics located in the first selection box and the lyrics corresponding to the other candidate second audio clips in the recommendation list in different colors, different font sizes, or different font types.
For example, referring to fig. 6, the lyrics of the candidate second audio clip located within the first selection box K1 are "open XXXXX", and the font of these lyrics is larger than the fonts of the lyrics of the other candidate second audio clips. If the first operation of the user is received at this time, the audio clip indicated by the lyrics "open XXXXX" may be determined as the second audio clip. If the terminal receives a sliding operation by the user at this time, the lyrics located in the first selection box K1 are switched. As can also be seen in fig. 6, the interface Z3 may include a start-singing key C5; when the terminal receives the user's click operation on the start-singing key C5, it can start collecting the singing data of the user.
In a second implementable manner, referring to fig. 8, the method for obtaining accompaniment audio data of the second audio clip may include:
step 3041b, obtain a first operation of the user, where the first operation is used to indicate a keyword input by the user.
Optionally, the keyword may be entered freely by the user, or may be determined by the user according to the lyrics of the first audio clip. For example, the keyword may be a character or a word in the lyrics of the first audio clip, i.e., the lyrics of the first audio clip include the keyword; or the keyword may be a character or a word that rhymes with a character or word in the lyrics of the first audio clip. The present disclosure does not limit the manner of determining the keyword.
It should be noted that, if the keyword input by the user is determined according to the lyrics of the first audio piece, the terminal may further display the lyrics of the first audio piece before acquiring the keyword input by the user.
In the embodiment of the present disclosure, in the process of acquiring the keyword input by the user, the terminal may display a keyword input keyboard, for example, a 9-key input keyboard or a 26-key input keyboard. Then, the terminal may receive the user's click operations on keys of the keyword input keyboard and display corresponding characters or words according to those click operations, and the terminal may then receive a determination operation of the user, where the determination operation is used for determining the keyword from the characters or words displayed by the terminal.
For example, referring to fig. 6, the interface Z3 may also include a search button S1, with the text "search songs/lyrics" to its right. Upon receiving the user's click operation on the search button S1, the terminal may display the audio file search interface Z4 shown in fig. 9. The search interface Z4 may include a search box S2, the text "search songs/lyrics" and a search icon S1 located in the search box S2, a 9-key input keyboard J, a cancel key Q, and default keywords (e.g., "yes", "bar", "know", and "fan") displayed in the 9-key input keyboard J. The default keywords may be determined by the terminal according to the user's input history, for example words the user enters frequently, and may change over time.
Step 3042b, obtain a target song whose lyrics include keywords.
The terminal may first acquire a plurality of songs whose lyrics each include the keyword, and then display the plurality of songs, for example in a list; after receiving the user's operation of determining the target song, the terminal acquires the target song from the plurality of songs. Alternatively, after acquiring the plurality of songs, the terminal may pick the target song according to a pre-stored screening condition. The screening condition may include: the lyrics of the target song include a lyric sentence beginning with the keyword; or the lyrics of the target song include a lyric sentence ending with the keyword; or the lyrics of the target song include a lyric sentence that rhymes with the keyword (i.e., uses the keyword's final as its rhyme).
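The screening conditions can be sketched as follows; the song structure and the last-character rhyme test are simplifying assumptions, since the disclosure does not specify how rhyming is evaluated.

```python
from typing import Dict, List

def sentence_matches(sentence: str, keyword: str, mode: str) -> bool:
    if mode == "starts_with":
        return sentence.startswith(keyword)
    if mode == "ends_with":
        return sentence.endswith(keyword)
    if mode == "rhymes":
        return sentence[-1:] == keyword[-1:]  # placeholder rhyme test: same last character
    raise ValueError(f"unknown screening mode: {mode}")

def filter_target_songs(songs: List[Dict], keyword: str, mode: str) -> List[Dict]:
    """songs: [{'title': ..., 'sentences': [...]}, ...] (illustrative structure)."""
    return [song for song in songs
            if any(sentence_matches(s, keyword, mode) for s in song["sentences"])]
```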
Wherein, the displaying of the plurality of songs by the terminal may refer to: for each song, the terminal displays only the lyric sentences containing the keywords in the song and at least one lyric sentence adjacent to the lyric sentences containing the keywords. Optionally, the terminal may also display the track of each of the plurality of songs, as well as the original artist of the song.
Optionally, after acquiring the keyword input by the user, the terminal may send a second request to the server, where the second request carries the keyword. After receiving the second request, the server may obtain a plurality of songs whose lyrics include the keyword based on the keyword, and transmit the plurality of songs to the terminal. Accordingly, the terminal may retrieve the plurality of songs.
In the embodiment of the present disclosure, in order to make the keyword more salient, the terminal may adjust the display effect of the lyric sentences containing the keyword; for example, for each song, the lyric sentences containing the keyword and the lyric sentences not containing the keyword may be displayed in different font sizes. Here, a lyric sentence not containing the keyword is the at least one lyric sentence adjacent to the lyric sentence containing the keyword.
Optionally, in order to further highlight the saliency of the keyword, the terminal may further adjust the display effect of the keyword, for example, the keyword may be displayed in a color different from other words in the lyric sentence containing the keyword.
For example, assuming the keyword input by the user is "guess", referring to fig. 10, the terminal may present to the user, through the audio file search result interface Z5, a plurality of songs whose lyrics include the keyword. The interface Z5 may include the search box S2, the search icon S1 and the keyword "guess" located within the search box S2, and the cancel key Q. As can be seen from fig. 10, the terminal currently displays 4 songs whose lyrics include "guess": "XXXXX guess" in song "D1" of singer D, "XXX guess XXX" in song "E1" of singer E, "XXX guess X, X" in song "F1" of singer F, and "guess XXXXX" in song "G1" of singer G.
Referring to fig. 9 and 10, the search interface Z4 and the search results interface Z5 may each include three filter buttons (i.e., C6, C7, and C8) and a typeface for prompting the filter condition indicated by each filter button. The user may click on the filter button C6, or C7, or C8, and accordingly, the terminal may select a target song from the plurality of songs that meets the filter condition indicated by the clicked button according to the user's click operation.
Referring to fig. 11, which is a schematic diagram of a screening result interface of an audio file in a terminal, the interface (which may be referred to as a result screening interface) Z6 may display the target song meeting the screening condition. The interface Z6 may include the search box S2, the "guess" keyword and search icon S1 located within the search box S2, the cancel key Q, the filter buttons (C6, C7, and C8), and text prompting the screening condition indicated by each filter button. As can be seen from fig. 11, the terminal has received the user's click operation on the filter button C6, and the determined target song is the song whose matching lyric sentence ends with "guess", i.e., song "D1" of singer D.
Step 3043b, determining one audio segment in the target song as the second audio segment.
In the embodiment of the disclosure, after the terminal acquires the target song, the lyric of the target song may be displayed in a screen of the terminal. Then, a second operation of the user may be received, the second operation being for indicating a target lyric selected by the user among lyrics of the target song. Thereafter, the terminal may determine the audio piece indicated by the target lyric as the second audio piece. Wherein, the second operation can be a click operation or a circle selection operation.
Alternatively, a second selection box may be displayed in the screen of the terminal, and the terminal may first receive a sliding operation by the user, the sliding operation being used to indicate a portion of the lyrics located in the second selection box selected by the user among the lyrics of the target song. Then, the terminal may receive a second operation (e.g., a click operation) of the user, and may determine the lyrics located in the second selection box as the target lyrics. Thereafter, the audio piece indicated by the target lyrics may be determined as the second audio piece.
It should be noted that, in order to make the target lyrics more salient, the terminal may adjust the display effect of the target lyrics after receiving the second operation of the user. For example, the target lyrics and the lyrics of the target song other than the target lyrics may be displayed in at least one of the following manners: displaying in different colors; displaying in different font sizes; displaying in different font types.
For example, in the interface Z6 shown in fig. 11, upon receiving the user's click operation on the target song, the terminal may display the interface Z7 shown in fig. 12, where the interface (which may be referred to as a lyric fragment selection interface) Z7 may include a "confirm" key C9. As can be seen in fig. 12, the interface Z7 may display the lyrics of the target song and a second selection box K2. It can also be seen from fig. 12 that the lyrics currently located in the second selection box K2 are "xxxxxx guess", and their font is larger than that of the other parts of the lyrics of the target song. If the terminal receives the user's click operation on the confirm key C9 at this time, the lyrics "xxxxxx guess" may be determined as the target lyrics, and the audio clip indicated by the target lyrics may be determined as the second audio clip. Thereafter, the terminal may add the second audio clip to the recommendation list, place it within the first selection box K1, and display the interface Z8 shown in fig. 13.
Optionally, the interface Z7 shown in fig. 12 may further include a first trial listening button C10, and the terminal may play an audio clip located in the second selection box K2 when detecting a click operation of the first trial listening button C10 by the user.
Step 305, obtaining accompaniment audio data of the second audio clip and lyrics of the second audio clip.
In the embodiment of the present disclosure, after acquiring the second audio clip, the terminal may directly acquire accompaniment audio data and lyrics of the second audio clip, and may display the lyrics in a second region (i.e., a region where the first selection box is located) of the screen, where the second region is a different region from the first region in the screen.
Optionally, the lyrics of the second audio piece may or may not include keywords.
For example, after the terminal determines the second audio clip, the obtained lyrics of the second audio clip are "open XXXXX"; then, referring to fig. 13, the terminal may display the lyrics of the second audio clip in the first selection box K1, i.e., display the lyrics of the second audio clip in the second area of the screen.
Step 306, obtaining singing data of the user, and synthesizing the singing data and the accompaniment audio data of the second audio clip into a target audio clip.
After acquiring the accompaniment audio data of the second audio clip, the terminal may collect the singing data of the user first, and combine the singing data with the accompaniment audio data of the second audio clip to obtain the target audio clip.
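A minimal sketch of this synthesis step is given below, assuming the accompaniment and the collected vocals are available as mono float sample arrays; real alignment, latency compensation, and loudness handling are not addressed by the disclosure and are omitted here.

```python
import numpy as np

def synthesize_target_clip(accompaniment: np.ndarray, user_vocals: np.ndarray,
                           vocal_gain: float = 1.0) -> np.ndarray:
    """Mix the user's singing data over the second clip's accompaniment audio data."""
    length = max(len(accompaniment), len(user_vocals))
    mixed = np.zeros(length, dtype=np.float32)
    mixed[:len(accompaniment)] += accompaniment
    mixed[:len(user_vocals)] += vocal_gain * user_vocals
    return np.clip(mixed, -1.0, 1.0)  # keep samples within the valid float range
```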
Optionally, when the singing data of the user is collected, the terminal may further display a playing progress of the accompaniment audio data corresponding to the second audio clip. For example, the playing progress of the accompaniment audio data can be displayed by adjusting the display effect of the lyrics of the played accompaniment audio data.
For example, referring to fig. 13, the interface Z8 shown in fig. 13 may include a start-singing key C5. The user starts singing after clicking the start-singing key C5, and upon receiving this click operation the terminal displays the singing interface Z9 shown in fig. 14. The interface Z9 may include the text "singing receiving", may display the lyrics of the first audio clip and the lyrics of the second audio clip spliced together, and may show a sound wave line X indicating that the terminal is currently recording the user's singing data.
And 307, splicing the first audio clip and the target audio clip to obtain a target audio file.
In the spliced target audio file, the first audio clip may be located in front of the target audio clip.
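The splicing step can be sketched as a simple concatenation with the first clip placed ahead of the target clip; the output format below (mono, 16-bit, 44.1 kHz WAV) is an assumption for illustration, not a requirement of the disclosure.

```python
import wave

import numpy as np

def write_target_audio_file(first_clip: np.ndarray, target_clip: np.ndarray,
                            path: str, sample_rate: int = 44100) -> None:
    spliced = np.concatenate([first_clip, target_clip])  # the first clip precedes the target clip
    pcm16 = (np.clip(spliced, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav_file:
        wav_file.setnchannels(1)   # mono
        wav_file.setsampwidth(2)   # 16-bit samples
        wav_file.setframerate(sample_rate)
        wav_file.writeframes(pcm16.tobytes())
```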
For example, when the terminal finishes recording the singing data of the user, it may display the to-be-released interface Z10 of the target audio file shown in fig. 15, and the interface Z10 may display the lyrics of the first audio clip and the lyrics of the second audio clip. The interface Z10 may also include: a re-record key C11, a release key C12, and a second trial-listening key C13. The user can click the second trial-listening key C13 to listen to the target audio clip synthesized by the terminal from the user's singing data and the accompaniment audio data of the second audio clip. If the user considers the target audio clip good, the user can click the release key C12, and the terminal accordingly splices the first audio clip and the target audio clip. If the user considers the target audio clip poor, the re-record key C11 may be clicked, and the terminal accordingly displays the interface Z9 shown in fig. 14 to re-collect the singing data of the user.
And step 308, sending the target audio file to a server.
In the embodiment of the present disclosure, after obtaining the target audio file, the terminal may send the target audio file to the server. The server may then send the target audio file to other terminals, so that users of the other terminals can listen to the target audio file and can perform operations on it, such as liking, commenting, or forwarding. This effectively enhances the interactivity between users.
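A hedged sketch of the upload step is shown below; the endpoint URL and form fields are invented placeholders, since the disclosure only states that the terminal sends the target audio file to the server.

```python
import requests

def upload_target_audio_file(path: str, user_id: str,
                             server_url: str = "https://example.com/api/relay/upload") -> bool:
    """Send the spliced target audio file to the server (endpoint is a placeholder)."""
    with open(path, "rb") as audio_file:
        response = requests.post(server_url,
                                 files={"audio": audio_file},
                                 data={"user_id": user_id},
                                 timeout=10)
    return response.status_code == 200
```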
For example, referring to fig. 15, if the terminal receives the user's click operation on the release key C12, the terminal may splice the first audio clip and the target audio clip to obtain the target audio file, and send the audio file to the server. Thereafter, the terminal may display the release status interface Z11 shown in fig. 16. The interface Z11 may display the lyrics of the audio file just released by the user, the user's identifier (i.e., avatar Y14 and nickname "xian son"), the release time (August 26), and attribute information of the released audio file, which may include the numbers of likes, comments, and forwards from other users. The interface Z11 may also include a share key C14; after receiving the user's click operation on the share key C14, the terminal may share the currently released audio file with friends.
In summary, the embodiments of the present disclosure provide an audio clip splicing method, which can splice a first audio clip and a target audio clip to obtain a target audio file. The target audio clip is synthesized from the accompaniment audio data of the second audio clip and the singing data of the user, and the second audio clip and the first audio clip are audio clips from different songs, so the accompaniment audio data of the resulting target audio file comes from different songs. This breaks the fixed pattern in which the accompaniment audio data of a traditional target audio file all comes from the same song, making the user's singing-receiving mode more diversified.
Referring to fig. 17, an embodiment of the present disclosure provides an apparatus for splicing audio segments, where the apparatus may be applied to a terminal, and the apparatus 400 may include:
a first obtaining module 401, configured to obtain a first audio segment.
A determining module 402, configured to determine a second audio segment according to the received first operation of the user, where the first audio segment and the second audio segment are audio segments in different songs.
A second obtaining module 403, configured to obtain accompaniment audio data of the second audio segment and lyrics of the second audio segment.
And a synthesizing module 404, configured to obtain singing data of the user, and synthesize the singing data of the user and the accompaniment audio data of the second audio clip into a target audio clip.
The splicing module 405 is configured to splice the first audio segment and the target audio segment to obtain a target audio file; the lyrics of the first audio clip are displayed in a first area of a screen of the terminal, the lyrics of the second audio clip are displayed in a second area of the screen, and the second area and the first area are different areas in the screen.
In summary, the present disclosure provides an audio clip splicing apparatus, which can splice a first audio clip and a target audio clip to obtain a target audio file. The target audio clip is synthesized from the accompaniment audio data of the second audio clip and the singing data of the user, and the second audio clip and the first audio clip are audio clips from different songs, so the accompaniment audio data of the resulting target audio file comes from different songs. This breaks the fixed pattern in which the accompaniment audio data of a traditional target audio file all comes from the same song, making the user's singing-receiving mode more diversified.
Optionally, the determining module 402 may be configured to:
displaying a recommendation list, the recommendation list including lyrics of the plurality of candidate second audio fragments; and receiving a first operation of the user, wherein the first operation is used for indicating a second audio clip selected by the user in the recommendation list.
Optionally, the recommendation list further includes: the track and original singer of the song to which each candidate second audio clip belongs, and the number of times users have sung over the accompaniment audio data of the candidate second audio clip.
Fig. 18 is a schematic structural diagram of a determining module 402 according to an embodiment of the present disclosure. Referring to fig. 18, the determining module 402 may include:
the first obtaining sub-module 4021 is configured to receive a first operation of a user, where the first operation is used to indicate a keyword input by the user.
The second obtaining sub-module 4022 is configured to obtain a target song whose lyrics include a keyword.
The determining sub-module 4023 is configured to determine one audio segment in the target song as the second audio segment.
Optionally, the determining sub-module 4023 may be configured to:
displaying lyrics of a target song in a screen; receiving a second operation of the user, wherein the second operation is used for indicating target lyrics determined by the user in the lyrics of the target song; and determining the audio segment corresponding to the target lyric as a second audio segment.
Optionally, the determining sub-module 4023 may be configured to: display the target lyrics and the lyrics of the target song other than the target lyrics in at least one of the following manners: displaying in different colors; displaying in different font sizes; displaying in different font types.
Fig. 19 is a schematic structural diagram of another audio segment splicing apparatus provided in the embodiment of the present disclosure. Referring to fig. 19, the apparatus 400 may further include:
a third obtaining module 406, configured to obtain multiple audio files, where each audio file includes a third audio clip and a fourth audio clip, the fourth audio clip is formed by combining accompaniment audio data of the fourth audio clip and singing data of a user, and the third audio clip and the fourth audio clip are audio clips in different songs;
the first obtaining module 401 may be configured to:
receiving a third operation of the user, wherein the third operation is used for indicating an initial audio file selected by the user in the plurality of audio files; a third audio segment in the initial audio file is determined to be the first audio segment.
Optionally, the terminal is connected to the server, referring to fig. 19, the apparatus may further include:
a sending module 407, configured to send the target audio file to the server.
In summary, the present disclosure provides an audio clip splicing apparatus, which can splice a first audio clip and a target audio clip to obtain a target audio file. The target audio clip is synthesized from the accompaniment audio data of the second audio clip and the singing data of the user, and the second audio clip and the first audio clip are audio clips from different songs, so the accompaniment audio data of the resulting target audio file comes from different songs. This breaks the fixed pattern in which the accompaniment audio data of a traditional target audio file all comes from the same song, making the user's singing-receiving mode more diversified.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus, the modules and the sub-modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 20 is a schematic structural diagram of an audio clip splicing apparatus provided in an embodiment of the present disclosure. Referring to fig. 20, the apparatus 500 may include: a processor 501, a memory 502, and a computer program stored on the memory 502 and executable on the processor 501, wherein the processor 501, when executing the computer program, can implement the audio clip splicing method provided by the above method embodiments, such as the method shown in fig. 2 or fig. 3.
An embodiment of the present disclosure provides a computer-readable storage medium in which instructions are stored; when the instructions are run on a computer, the computer is caused to execute the audio clip splicing method provided by the above method embodiments, for example, the method shown in fig. 2 or fig. 3.
An embodiment of the present disclosure also provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the audio clip splicing method provided by the above method embodiments, for example, the method shown in fig. 2 or fig. 3.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present disclosure and is not intended to limit the present disclosure, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (11)

1. A method for splicing audio clips, applied to a terminal, the method comprising:
acquiring a first audio clip;
determining a second audio clip according to the received first operation of the user, wherein the first audio clip and the second audio clip are audio clips in different songs;
acquiring accompaniment audio data of the second audio clip and lyrics of the second audio clip;
acquiring singing data of a user, and synthesizing the singing data of the user and the accompaniment audio data of the second audio clip into a target audio clip;
and splicing the first audio clip and the target audio clip to obtain a target audio file, wherein the lyrics of the first audio clip are displayed in a first area of a screen of the terminal, the lyrics of the second audio clip are displayed in a second area of the screen, and the second area and the first area are different areas in the screen.
2. The method of claim 1, wherein the determining a second audio clip according to the received first operation of the user comprises:
displaying a recommendation list comprising lyrics of a plurality of candidate second audio clips;
receiving the first operation of the user, wherein the first operation is used for indicating the second audio clip selected by the user in the recommendation list.
3. The method of claim 2, wherein the recommendation list further comprises:
and, for each candidate second audio clip, the song to which the candidate second audio clip belongs, the original singer of that song, and the number of times users have sung over the accompaniment audio data of the candidate second audio clip.
4. The method of claim 1, wherein the determining a second audio clip according to the received first operation of the user comprises:
receiving the first operation of the user, wherein the first operation is used for indicating a keyword input by the user;
acquiring a target song of which the lyrics comprise the keywords;
determining one audio clip of the target song as the second audio clip.
5. The method of claim 4, wherein determining one audio clip of the target song as the second audio clip comprises:
displaying lyrics of the target song in the screen;
receiving a second operation of the user, wherein the second operation is used for indicating the target lyrics determined by the user in the lyrics of the target song;
determining the audio clip indicated by the target lyrics as the second audio clip.
6. The method of claim 5, wherein after the receiving the second operation by the user, the method further comprises:
displaying the target lyrics and the lyrics of the target song other than the target lyrics in at least one of the following ways:
displaying in different colors;
displaying with different font sizes;
displaying in different font types.
7. The method of any of claims 1 to 6, wherein prior to the acquiring a first audio clip, the method further comprises:
acquiring a plurality of audio files, wherein each audio file comprises a third audio clip and a fourth audio clip, the fourth audio clip is formed by combining accompaniment audio data of the fourth audio clip and singing data of a user, and the third audio clip and the fourth audio clip are audio clips in different songs;
the acquiring a first audio clip comprises:
receiving a third operation of the user, wherein the third operation is used for indicating the initial audio file selected by the user in the plurality of audio files;
determining the third audio clip in the initial audio file as the first audio clip.
8. The method according to any one of claims 1 to 6, wherein the terminal is connected to a server, and after the splicing the first audio clip and the target audio clip to obtain a target audio file, the method further comprises:
and sending the target audio file to the server.
9. An apparatus for splicing audio clips, the apparatus comprising:
the first acquisition module is used for acquiring a first audio clip;
the determining module is used for determining a second audio clip according to the received first operation of the user, wherein the first audio clip and the second audio clip are audio clips in different songs;
the second obtaining module is used for obtaining the accompaniment audio data of the second audio clip and the lyrics of the second audio clip;
the synthesis module is used for acquiring singing data of a user and synthesizing the singing data of the user and the accompaniment audio data of the second audio clip into a target audio clip;
and the splicing module is used for splicing the first audio clip and the target audio clip to obtain a target audio file, wherein the lyrics of the first audio clip are displayed in a first area of a screen of a terminal, the lyrics of the second audio clip are displayed in a second area of the screen, and the second area and the first area are different areas in the screen.
10. An apparatus for splicing audio clips, the apparatus comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for splicing audio clips according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to execute the method for splicing audio clips according to any one of claims 1 to 8.
CN201911080116.XA 2019-11-07 2019-11-07 Audio clip splicing method and device Active CN110910917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911080116.XA CN110910917B (en) 2019-11-07 2019-11-07 Audio clip splicing method and device

Publications (2)

Publication Number Publication Date
CN110910917A (en) 2020-03-24
CN110910917B (en) 2021-08-31

Family

ID=69816379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911080116.XA Active CN110910917B (en) 2019-11-07 2019-11-07 Audio clip splicing method and device

Country Status (1)

Country Link
CN (1) CN110910917B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000308B (en) * 2020-09-10 2023-04-18 成都拟合未来科技有限公司 Double-track audio playing control method, system, terminal and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10015546B1 (en) * 2017-07-27 2018-07-03 Global Tel*Link Corp. System and method for audio visual content creation and publishing within a controlled environment
CN110189741A (en) * 2018-07-05 2019-08-30 腾讯数码(天津)有限公司 Audio synthetic method, device, storage medium and computer equipment
CN110211556A (en) * 2019-05-10 2019-09-06 北京字节跳动网络技术有限公司 Processing method, device, terminal and the storage medium of music file
CN110349559A (en) * 2019-07-12 2019-10-18 广州酷狗计算机科技有限公司 Carry out audio synthetic method, device, system, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6502194B1 (en) * 1999-04-16 2002-12-31 Synetix Technologies System for playback of network audio material on demand
JP2001093226A (en) * 1999-09-21 2001-04-06 Sony Corp Information communication system and method, and information communication device and method
CN106486128B (en) * 2016-09-27 2021-10-22 腾讯科技(深圳)有限公司 Method and device for processing double-sound-source audio data
CN107665703A (en) * 2017-09-11 2018-02-06 上海与德科技有限公司 The audio synthetic method and system and remote server of a kind of multi-user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant