CN111583973B - Music sharing method and device and computer readable storage medium - Google Patents


Info

Publication number
CN111583973B
Authority
CN
China
Prior art keywords
audio
music
target
sharing
instruction
Prior art date
Legal status
Active
Application number
CN202010414543.3A
Other languages
Chinese (zh)
Other versions
CN111583973A (en)
Inventor
胡焱华
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010414543.3A
Publication of CN111583973A
Application granted granted Critical
Publication of CN111583973B

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a music sharing method, an apparatus, and a computer-readable storage medium. The music sharing method includes: when a music editing instruction is received at a music playing interface, acquiring audio to be inserted and the original audio corresponding to the currently played music; inserting the audio to be inserted at a target position in the original audio to obtain a target synthesized audio; and sharing the music playing resource associated with the target synthesized audio with a target user according to a music sharing instruction. Through this scheme, the user can edit the currently played music directly in the music playing interface and share the edited music playing resource, which effectively improves the interactivity of music sharing and the expressive power of music.

Description

Music sharing method and device and computer readable storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a music sharing method and apparatus, and a computer-readable storage medium.
Background
With the continuous development of terminal technology, terminals can provide more and more applications to meet users' increasingly diverse needs; among these, the music player, as a major entertainment application, is deeply favored by users.
At present, most music players offer a music sharing function through which a user can share specific music with other users to enhance interactivity. However, the music sharing function in the prior art only supports sharing the music as played, so interactivity and expressive power remain limited.
Disclosure of Invention
The embodiments of the present application provide a music sharing method and apparatus, and a computer-readable storage medium, which can at least solve the problem that the music sharing function in the prior art only supports sharing music as played by the user, leaving interactivity and expressive power limited.
A first aspect of the embodiments of the present application provides a music sharing method, including:
when a music editing instruction is received at a music playing interface, acquiring audio to be inserted and the original audio corresponding to the currently played music;
inserting the audio to be inserted at a target position in the original audio to obtain a target synthesized audio;
and sharing the music playing resource associated with the target synthesized audio with a target user according to a music sharing instruction.
A second aspect of the embodiments of the present application provides a music sharing apparatus, including:
an acquisition module, configured to acquire the audio to be inserted and the original audio corresponding to the currently played music when a music editing instruction is received at the music playing interface;
an insertion module, configured to insert the audio to be inserted at the target position of the original audio to obtain a target synthesized audio;
and a sharing module, configured to share the music playing resource associated with the target synthesized audio with the target user according to the music sharing instruction.
A third aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the music sharing method provided in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the music sharing method provided in the first aspect of the embodiments of the present application.
As can be seen from the above, according to the music sharing method, apparatus, and computer-readable storage medium provided in the present application, when a music editing instruction is received at the music playing interface, the audio to be inserted and the original audio corresponding to the currently played music are acquired; the audio to be inserted is inserted at the target position of the original audio to obtain the target synthesized audio; and the music playing resource associated with the target synthesized audio is shared with the target user according to the music sharing instruction. Through this scheme, the user can edit the currently played music in the music playing interface and share the edited music playing resource, which effectively improves the interactivity of music sharing and the expressive power of music.
Drawings
Fig. 1 is a schematic basic flowchart of a music sharing method according to a first embodiment of the present application;
fig. 2 is a schematic interface diagram of inputting a music editing instruction according to a first embodiment of the present application;
fig. 3 is an interface schematic diagram of a music editing interface according to a first embodiment of the present application;
fig. 4 is a schematic flowchart of a method for acquiring an audio to be inserted according to a first embodiment of the present application;
fig. 5 is a schematic flowchart of a target position determining method according to a first embodiment of the present application;
fig. 6 is a flowchart illustrating an audio insertion method according to a first embodiment of the present application;
fig. 7 is an interface diagram of another music editing interface provided in the first embodiment of the present application;
fig. 8 is an interface diagram of another music editing interface according to the first embodiment of the present application;
fig. 9 is a schematic diagram of a music feedback operation interface according to a first embodiment of the present application;
fig. 10 is a schematic diagram of a music feedback display interface according to a first embodiment of the present application;
fig. 11 is a schematic flowchart illustrating a detailed process of a music sharing method according to a second embodiment of the present application;
fig. 12 is a schematic diagram illustrating program modules of a music sharing device according to a third embodiment of the present application;
fig. 13 is a schematic view of program modules of another music sharing device according to a third embodiment of the present application;
fig. 14 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, features, and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments derivable by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
In order to overcome the defect that the music sharing function in the prior art only supports sharing music as played by the user, leaving interactivity and expressive power relatively limited, a first embodiment of the present application provides a music sharing method. Fig. 1 is a basic flowchart of the music sharing method provided in this embodiment; the method includes the following steps:
Step 101: when a music editing instruction is received at the music playing interface, acquire the audio to be inserted and the original audio corresponding to the currently played music.
Specifically, in this embodiment, the music playing interface is the interface displayed when the user plays music in the music player. The music editing instruction can be triggered by a dedicated music editing control. Fig. 2 is a schematic diagram of an interface for inputting a music editing instruction provided in this embodiment; to improve interaction friendliness and convenience, the music editing instruction may also be input by a long-press operation at a specific or arbitrary position of the music playing interface. It should be noted that the original audio of this embodiment is the audio file corresponding to the currently played music, and the audio to be inserted is used for personalized editing of the original audio.
It should be noted that, in this embodiment, a user may insert audio according to his or her own emotion-expression needs; that is, each audio to be inserted may correspond to an emotion-expression theme, which may include: confession (of love), apology, promise, and the like.
Step 102: insert the audio to be inserted at the target position of the original audio to obtain a target synthesized audio.
Specifically, in this embodiment, the audio to be inserted is inserted as a segment of the whole original audio; that is, an insertion node for the audio to be inserted is determined in the original audio, and mixed-flow processing is then performed on the two according to that node: the audio to be inserted serves as one audio track, the original audio as another, and the two tracks are synthesized to obtain the synthesized audio. It should be noted that the insertion position in this embodiment may be a preset fixed position or a flexibly selected one, and is not limited here. Preferably, the target position may be located by the user manually dragging the music playing progress bar.
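The two-track mixing described above can be sketched in plain Python; this is a minimal illustration only, with audio modeled as lists of float samples and the function name and parameters (`mix_insert`, `offset`) being assumptions rather than anything from the original disclosure:

```python
def mix_insert(original, insert, offset):
    """Mix `insert` into `original` starting at sample index `offset`.

    Audio is modeled as lists of float samples on a single channel; the
    two tracks are summed over the overlap region, mirroring the
    described two-track (mixed-flow) synthesis.
    """
    mixed = list(original)
    # Extend the original if the inserted clip runs past its end.
    end = offset + len(insert)
    if end > len(mixed):
        mixed.extend([0.0] * (end - len(mixed)))
    for i, sample in enumerate(insert):
        mixed[offset + i] += sample
    return mixed
```

A real player would operate on decoded PCM buffers and clamp the summed samples to the valid range; this sketch keeps only the track-summing idea.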
Step 103: share the music playing resource associated with the target synthesized audio with the target user according to the music sharing instruction.
Specifically, in this embodiment, a music sharing control is provided on the music playing interface; it may be a voice control or a touch control. After the music synthesis and editing are completed, if the user triggers the music sharing control, a music sharing instruction is generated so as to share the music playing resource associated with the synthesized audio. It should be understood that sharing in this embodiment may occur within the music player application, i.e., to the opposite user's music player account, or via a third-party application (for example, WeChat), i.e., to the opposite user's third-party account.
It should be further noted that, in practical applications, the shared resource is not limited to the audio file; therefore, the music playing resource of this embodiment may also include text files and picture files, where the text file may include lyrics or personalized custom text, and the picture file may include a playing-interface background image.
In this embodiment, since the music playing resource shared by the user is the resource obtained after audio synthesis and editing, the opposite-end user can play it after receiving the music sharing event, and when playback reaches the node where the audio was inserted, the inserted audio plays as a surprise.
In some implementations of this embodiment, obtaining the audio to be inserted includes: recording the user's voice information in real time during music playback to obtain the audio to be inserted. Correspondingly, inserting the audio to be inserted at the target position of the original audio to obtain the target synthesized audio includes: determining the music playing time corresponding to the start of the voice recording as the target position at which the audio is to be inserted into the original audio; and inserting the audio into the original audio at the target position to obtain the target synthesized audio.
Fig. 3 is a schematic diagram of a music editing interface provided in this embodiment. Specifically, a voice recording control (such as the microphone icon in fig. 3) may be arranged on the music playing interface so that the audio to be inserted can be recorded by the user in real time. In practice, during music playback, when playback reaches a certain position and the user triggers the voice recording control, the user's voice is recorded while the music continues playing, until the user finishes recording; the music playing interval corresponding to the recording process is then the insertion position of the recorded audio within the original audio of the currently played music.
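The mapping from the moment recording begins to an insertion offset in the original audio can be illustrated as follows; a minimal sketch assuming timestamps in seconds and sample-indexed audio, with all names hypothetical:

```python
def target_position(playback_start_ts, record_start_ts, sample_rate):
    """Map the wall-clock moment recording began to a sample offset in
    the original audio.

    `playback_start_ts` is when the song started playing and
    `record_start_ts` is when the user pressed the recording control,
    both in seconds since an arbitrary epoch. The elapsed playback time
    at the moment of the press gives the insertion position.
    """
    elapsed = record_start_ts - playback_start_ts  # seconds into the song
    return int(elapsed * sample_rate)
```

In a real player the elapsed time would come from the playback clock itself (to survive pauses and seeks); the arithmetic is the same.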
Fig. 4 is a schematic flowchart of a method for acquiring the audio to be inserted provided in this embodiment. Further, in another implementation of this embodiment, acquiring the audio to be inserted specifically includes the following steps:
Step 401: determine a target emotion-expression theme according to an emotion-expression theme selection instruction;
Step 402: determine the corresponding target audio template based on the target emotion-expression theme and a preset mapping between emotion-expression themes and audio templates;
Step 403: determine the target audio template as the audio to be inserted.
Specifically, unlike the previous implementation in which the audio to be inserted is recorded manually in real time, this embodiment may provide preset audio templates for the user to choose from; an audio template may be obtained from a network server or recorded and stored by the user in advance. Each audio template is associated with an emotion-expression theme, so in actual use the user can select a specific theme through a selection instruction, and the system then automatically takes the corresponding audio template as the audio to be inserted. This effectively improves the convenience of the music editing operation; moreover, because this embodiment supports editing with audio of different emotion-expression themes, the flexibility of audio editing and the emotion-expression capability of music sharing are further improved.
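The theme-to-template mapping of steps 401-403 amounts to a table lookup; the sketch below is a hypothetical illustration in which the dictionary keys, file paths, and function name are all invented stand-ins for real template assets:

```python
# Hypothetical mapping from emotion-expression themes to pre-stored
# audio templates (file paths here stand in for real template assets).
THEME_TEMPLATES = {
    "confession": "templates/confession.wav",
    "apology": "templates/apology.wav",
    "promise": "templates/promise.wav",
}

def template_for_theme(theme, templates=THEME_TEMPLATES):
    """Resolve the selected emotion-expression theme to its audio
    template, as in steps 401-403."""
    try:
        return templates[theme]
    except KeyError:
        raise ValueError(f"no audio template registered for theme {theme!r}")
```

In practice the mapping would likely be fetched from a server or the user's saved recordings rather than hard-coded.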
Fig. 5 is a schematic flowchart of the target-position determining method provided in this embodiment. In some implementations of this embodiment, before the audio to be inserted is inserted at the target position of the original audio to obtain the target synthesized audio, the method further includes the following steps:
Step 501: obtain the overall spectral features of the original audio;
Step 502: identify, based on the overall spectral features, the audio segment in the original audio that matches the target emotion-expression theme;
Step 503: determine the position of that audio segment as the target position.
Specifically, unlike manual selection of the insertion position by the user, this embodiment can also identify the insertion position automatically, which improves the convenience and accuracy of audio editing. In this embodiment, the frequency spectrum (also called the vibration spectrum, short for spectral density) is a distribution curve over frequency; spectral features include frequency and/or amplitude, and different audio segments have different spectral features. For example, the spectral feature values of a climax segment are significantly higher than those of the intro, the outro, and other segments. Considering that different segments of the currently played music have different emotion-expression capacities, to best serve the user's current expressive purpose, the audio the user wishes to insert should be placed at a suitable position in the original audio.
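As a crude stand-in for the spectral-feature analysis of steps 501-503, the sketch below scores fixed windows by mean absolute amplitude and picks the loudest one (e.g. a proxy for the climax). This is an assumption-laden simplification: a real implementation would compute actual spectral features (e.g. via an FFT) rather than raw amplitude, and all names here are hypothetical:

```python
def loudest_segment(samples, window):
    """Return (start_index, score) of the window with the highest mean
    absolute amplitude, a simple proxy for locating the climax segment
    whose spectral feature values exceed those of intro/outro segments.
    """
    best_start, best_score = 0, float("-inf")
    for start in range(0, max(1, len(samples) - window + 1), window):
        seg = samples[start:start + window]
        score = sum(abs(s) for s in seg) / len(seg)
        if score > best_score:
            best_start, best_score = start, score
    return best_start, best_score
```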
Fig. 6 is a schematic flowchart of an audio insertion method provided in this embodiment. In some implementations of this embodiment, inserting the audio to be inserted at the target position of the original audio to obtain the target synthesized audio specifically includes the following steps:
Step 601: extract the vocal audio in the audio segment at the target position of the original audio;
Step 602: remove that vocal audio from the original audio;
Step 603: insert the audio to be inserted at the target position to obtain the target synthesized audio.
Specifically, in practice, when the audio to be inserted is synthesized with the original audio, the two may be mixed directly, so that the original-audio component and the inserted component are superimposed at the insertion position; however, the vocals in the original audio then generally interfere with the voice of the inserted audio. Alternatively, the original-audio component at the insertion position may be deleted and the audio to be inserted spliced in, so that the insertion position contains only the inserted component, which sounds monotonous. Considering that both approaches easily weaken emotional expression, this embodiment removes only the vocal component of the original audio at the insertion position and retains the accompaniment, so that in the synthesized audio the insertion position contains both the inserted component and the accompaniment component, which effectively improves the emotion-expression capability of the shared music.
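The vocal-removal-plus-insertion scheme of steps 601-603 can be sketched as follows, under the assumption that the song has already been separated into equal-length vocal and accompaniment stems (real systems would use a source-separation model for this; all names here are hypothetical):

```python
def insert_over_accompaniment(vocals, accomp, insert, start):
    """Rebuild the song so that over [start, start + len(insert)) only
    the accompaniment plus the inserted clip is heard, while the rest
    keeps both stems (steps 601-603).

    `vocals` and `accomp` are pre-separated stems of equal length,
    modeled as lists of float samples.
    """
    end = start + len(insert)
    out = []
    for i in range(len(accomp)):
        mixed = accomp[i]
        if start <= i < end:
            mixed += insert[i - start]  # inserted audio over accompaniment
        else:
            mixed += vocals[i]          # untouched region keeps vocals
        out.append(mixed)
    return out
```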
In some implementations of this embodiment, before the music playing resource associated with the target synthesized audio is shared with the target user according to the music sharing instruction, the method further includes: recognizing the audio text corresponding to the audio to be inserted; and synchronizing the interface display node of the audio text with the insertion node of the audio to be inserted. Correspondingly, sharing the music playing resource associated with the target synthesized audio with the target user according to the music sharing instruction includes: sharing the music playing resource, comprising the target synthesized audio and the audio text, with the target user according to the music sharing instruction.
As shown in fig. 7, in this embodiment, to further improve the expressive power of music sharing, speech recognition is performed on the audio to be inserted to obtain its audio text, the interface display node of the audio text is synchronized with the audio insertion node, and the music playing resource carrying the synthesized audio and the audio text is then shared. As a result, when the opposite-end user opens the resource and playback reaches the position of the inserted audio, the audio text is displayed synchronously on the music playing interface.
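Synchronizing the recognized text with the insertion node amounts to attaching a timed cue to the shared resource; the sketch below is a minimal illustration, and the field names (`show_at`, `text`) are assumptions, not a real resource format:

```python
def make_text_cue(insert_start_sec, recognized_text):
    """Bind the recognized audio text to the moment playback reaches
    the inserted clip, so the receiving player can show it on time."""
    return {"show_at": insert_start_sec, "text": recognized_text}

def cue_due(cue, playback_sec):
    """True once playback has reached the cue's display node; the
    receiving player polls this against its playback clock."""
    return playback_sec >= cue["show_at"]
```

The same cue structure could carry the background image described next, since both are keyed to the same insertion node.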
In some implementations of this embodiment, before the music playing resource associated with the target synthesized audio is shared with the target user according to the music sharing instruction, the method further includes: when an interface-background setting instruction is received, acquiring the background image corresponding to the instruction; and synchronizing the interface display node of the background image with the insertion node of the audio to be inserted. Correspondingly, sharing the music playing resource associated with the target synthesized audio with the target user according to the music sharing instruction includes: sharing the music playing resource, comprising the target synthesized audio and the background image, with the target user according to the music sharing instruction.
Specifically, to further deepen the emotion-expression capability of the shared music, this embodiment also allows a personalized background to be set in the music playing interface: the user inputs an interface-background setting instruction through a preset control to select a background image, the time node at which the background image is displayed is synchronized with the insertion node of the inserted audio, and the music playing resource carrying the synthesized audio and the background image is then shared, so that when the opposite-end user opens the resource and playback reaches the position of the inserted audio, the music playing interface switches to the background image. Taking a confession of love as the emotion-expression theme of music sharing, fig. 8 shows another music editing interface provided in this embodiment: the interface background at the time node corresponding to the audio to be inserted may be set to a photo of the target user with whom the resource is shared, giving the opposite user an unexpected surprise and a sense of sincerity during playback.
Further, in some implementations of this embodiment, when the background image is a photo of the target user, sharing the music playing resource comprising the target synthesized audio and the background image with the target user according to the music sharing instruction includes: when the music sharing instruction is received, acquiring the user identifier corresponding to the target-user photo; determining the corresponding target user from the user list based on the user identifier; and sharing the music playing resource comprising the target synthesized audio and the photo with that target user.
Specifically, considering that manually selecting the target user from the user list when sharing music is tedious, in this embodiment, when a trigger instruction input by the user on the music sharing control is received, the user identifier of the target user is recognized from the previously set photo, and the target user is then determined automatically from the user list according to that identifier for sharing, which effectively simplifies the operation and improves convenience.
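Once the photo has been mapped to a user identifier, selecting the sharing target is a lookup in the contact list; the sketch below illustrates only that final step (the face-recognition step that yields `photo_user_id` is out of scope, and all names are hypothetical):

```python
def resolve_share_target(photo_user_id, user_list):
    """Auto-select the sharing target from the contact list using the
    identifier recognized from the background photo, replacing the
    manual pick described above. Returns None if no contact matches."""
    for user in user_list:
        if user["id"] == photo_user_id:
            return user
    return None
```

A real client would fall back to the manual picker when the lookup returns no match.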
In some implementations of this embodiment, after the music playing resource associated with the target synthesized audio is shared with the target user according to the music sharing instruction, the method further includes: when a music feedback instruction sent by the target user is received, acquiring the interface display event corresponding to the instruction; and displaying the interface display event on the music playing interface.
Specifically, in this embodiment, when the opposite-end user plays the emotionally personalized music playing resource shared by the home-end user, he or she may input a music feedback instruction via the music feedback control of the music playing interface to give emotional feedback; the feedback is then displayed on the home-end user's music playing interface as an interface display event. Fig. 9 is a schematic diagram of the music feedback operation interface provided in this embodiment; again taking a confession of love as the example theme, the opposite-end user may tap the music feedback control (see the heart icon in fig. 9) to input the music feedback instruction. As shown in fig. 10, when the home-end user receives the feedback, an interface display event (see the "heart-action" icon in fig. 10) is displayed on the music playing interface; the icon can bounce dynamically on the interface while the home-end user plays the resource, which increases the fun of interaction between users.
According to the technical scheme of the embodiments of the present application, when a music editing instruction is received at the music playing interface, the audio to be inserted and the original audio corresponding to the currently played music are acquired; the audio to be inserted is inserted at the target position of the original audio to obtain the target synthesized audio; and the music playing resource associated with the target synthesized audio is shared with the target user according to the music sharing instruction. Through this scheme, the user can edit the currently played music in the music playing interface and share the edited music playing resource, which effectively improves the interactivity of music sharing and the expressive power of music.
Fig. 11 shows a refined music sharing method according to a second embodiment of the present application; the method includes:
Step 1101: when a music editing instruction is received at the music playing interface, record the user's voice information in real time to obtain the audio to be inserted.
In this embodiment, during music playback, when playback reaches a certain position and the user triggers the voice recording control, the user's voice is recorded while the music continues playing, until the user finishes recording.
Step 1102: determine the music playing time corresponding to the start of the voice recording as the target position at which the audio is to be inserted into the original audio of the currently played music.
In this embodiment, the music playing interval corresponding to the voice recording process is the insertion position of the recorded audio within the original audio of the currently played music.
Step 1103: extract the vocal audio in the audio segment corresponding to the target position of the original audio.
In this embodiment, the original audio may be separated into an overall vocal track and an overall accompaniment track, and the vocal audio of the corresponding segment is then extracted from the overall vocal track based on the located target position.
Step 1104: remove that vocal audio from the original audio.
Step 1105: insert the audio to be inserted at the target position of the vocal-removed original audio to obtain the target synthesized audio.
In this embodiment, the vocal component of the original audio at the insertion position is removed and only the accompaniment is retained, so that in the synthesized audio the insertion position contains both the inserted component and the accompaniment component, which effectively improves the emotion-expression capability of the shared music.
Step 1106: recognize the audio text corresponding to the audio to be inserted, and acquire the corresponding background image according to the interface-background setting instruction.
Step 1107: synchronize the interface display nodes of the audio text and the background image with the insertion node of the audio to be inserted.
Specifically, to further improve the expressive power of the shared music, this embodiment performs speech recognition on the audio to be inserted to obtain its audio text, supports the user in setting a personalized background for the music playing interface, and synchronizes the display nodes of the audio text and the background image with the audio insertion node. Thus, when the opposite-end user opens the music playing resource and playback reaches the position of the inserted audio, the music playing interface switches to the background image and displays the corresponding audio text over it.
Step 1108, according to the music sharing instruction, sharing the music playing resource including the target synthesized audio, the audio text and the background image to the target user.
In this embodiment, the user inserts personalized audio into the music and customizes the music playing interface accordingly, thereby endowing the music with personal emotion. This effectively improves the interactivity and expressiveness of music sharing, and helps users who are not good at expressing emotion directly to express personal emotion in a reserved manner.
And 1109, displaying the music feedback event on a music playing interface when receiving the music feedback event sent by the target user.
In this embodiment, when the opposite-end user plays the music playing resource shared by the local-end user and endowed with personal emotion, the opposite-end user can give music feedback in response to the local-end user's emotional expression. This feedback is displayed on the local-end user's music playing interface as an interface display event, which improves the fun of interaction between users.
It should be understood that the numbering of the steps in this embodiment does not imply their execution order; the execution order of each step is determined by its function and internal logic, and the numbering does not limit the implementation of the embodiments of the present application.
The embodiment of the application discloses a music sharing method: when a music editing instruction is received at the music playing interface, the user's voice information is recorded in real time to obtain the audio to be inserted; the human voice audio in the audio segment at the target position of the original audio of the currently played music is extracted, and the corresponding human voice audio is removed from the original audio; the audio to be inserted is inserted at the target position of the original audio from which the human voice audio has been removed to obtain the target synthesized audio; the audio text corresponding to the audio to be inserted is identified, and a corresponding background image is acquired according to the interface background setting instruction; the interface display nodes of the audio text and the background image are synchronized with the insertion node of the audio to be inserted, and the music playing resource comprising the target synthesized audio, the audio text, and the background image is shared to the target user according to the music sharing instruction. Through the implementation of this scheme, the user is supported in editing the currently played music in the music playing interface and sharing the edited music playing resource, which effectively improves the interactivity of music sharing and the expressiveness of the music; in addition, because the original vocal audio at the insertion position is removed, the emotional expressiveness of the shared music is further optimized.
Fig. 12 shows a music sharing apparatus according to a third embodiment of the present application. The music sharing apparatus can be used to implement the music sharing method in the foregoing embodiments. As shown in fig. 12, the music sharing apparatus mainly includes:
the obtaining module 1201 is configured to obtain, when a music editing instruction is received by a music playing interface, an audio to be inserted and an original audio corresponding to currently played music;
the inserting module 1202 is configured to insert the audio to be inserted into a target position of the original audio to obtain a target synthesized audio;
the sharing module 1203 is configured to share the music playing resource associated with the target synthesized audio to the target user according to the music sharing instruction.
In some embodiments of this embodiment, the obtaining module 1201 is specifically configured to: and recording the voice information of the user in real time in the music playing process to obtain the audio to be inserted. Correspondingly, the insertion module 1202 is specifically configured to: determining the music playing time corresponding to the initial recording time of the user voice information as a target position of the audio to be inserted to the original audio; and inserting the audio to be inserted into the original audio according to the target position to obtain the target synthetic audio.
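The insertion module's mapping of the recording start moment to a playback offset can be sketched as follows (the function name and the use of plain wall-clock seconds are assumptions for illustration):

```python
def target_position(play_started_at, record_started_at):
    """Map the wall-clock moment recording began to a playback offset in seconds.

    The offset serves as the target position at which the audio to be
    inserted is placed in the original audio.
    """
    offset = record_started_at - play_started_at
    if offset < 0:
        raise ValueError("recording began before playback started")
    return offset
```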
In other embodiments of this embodiment, the obtaining module 1201 is specifically configured to: determine a target emotion expression theme according to the emotion expression theme selection instruction; determine a corresponding target audio template based on the target emotion expression theme and a preset mapping relation between emotion expression themes and audio templates; and determine the target audio template as the audio to be inserted.
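The preset mapping between emotion expression themes and audio templates is naturally a lookup table. A minimal sketch (the theme names and template paths are illustrative placeholders, not from the source):

```python
# Hypothetical preset mapping between emotion expression themes and templates.
THEME_TEMPLATES = {
    "confession": "templates/confession.wav",
    "gratitude": "templates/gratitude.wav",
    "commitment": "templates/commitment.wav",
}

def audio_to_insert(theme):
    """Resolve the target audio template for the selected theme."""
    try:
        return THEME_TEMPLATES[theme]
    except KeyError:
        raise ValueError(f"no audio template mapped to theme {theme!r}")
```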
As shown in fig. 13, another music sharing device provided in this embodiment is further provided, in another embodiment of this embodiment, the music sharing device further includes: a determining module 1204 for: before inserting the audio to be inserted into the target position of the original audio to obtain the target synthetic audio, acquiring the integral frequency spectrum characteristic of the original audio; analyzing an audio segment corresponding to the target emotion expression theme in the original audio based on the integral frequency spectrum characteristic; and determining the position of the audio segment as the target position.
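The patent leaves the spectral analysis that matches an audio segment to the target theme unspecified. As a crude stand-in (the highest-energy window is used purely as a proxy; real theme matching would need a trained model), the determining module's segment location could be sketched as:

```python
import numpy as np

def locate_segment(audio, sr=44100, win_sec=5.0):
    """Pick the highest-energy fixed-size window as the target segment.

    Simplified proxy for the patent's spectral-feature analysis: returns
    (start_sec, end_sec) of the chosen window in the mono array `audio`.
    """
    win = int(win_sec * sr)
    n = len(audio) // win
    energies = [float(np.sum(audio[i * win:(i + 1) * win] ** 2)) for i in range(n)]
    best = int(np.argmax(energies))
    return best * win_sec, (best + 1) * win_sec
```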
In other embodiments of this embodiment, the insertion module 1202 is specifically configured to: extracting human voice audio in the audio clip of the original audio target position; removing the human voice audio corresponding to the audio clip in the original audio; and inserting the audio to be inserted into the target position to obtain the target synthetic audio.
Referring to fig. 13 again, in some embodiments of the present embodiment, the music sharing device further includes: a presentation module 1205 for: after music playing resources related to the target synthetic audio are shared to a target user according to a music sharing instruction, acquiring an interface display event corresponding to the music feedback instruction when the music feedback instruction sent by the target user is received; and displaying the interface display event on the music playing interface.
Referring to fig. 13 again, in some embodiments of the present embodiment, the music sharing device further includes: a synchronization module 1206 to: identifying an audio text corresponding to the audio to be inserted before sharing the music playing resource associated with the target synthetic audio to the target user according to the music sharing instruction; and synchronizing the interface display node of the audio text with the insertion node of the audio to be inserted. Correspondingly, the sharing module 1203 is specifically configured to: and sharing the music playing resources comprising the target synthetic audio and the audio text to the target user according to the music sharing instruction.
In some implementations of this embodiment, the synchronization module 1206 is further to: before music playing resources related to a target synthetic audio are shared to a target user according to a music sharing instruction, and when an interface background setting instruction is received, a background image corresponding to the interface background setting instruction is obtained; and synchronizing the interface display node of the background image with the insertion node of the audio to be inserted. Correspondingly, the sharing module 1203 is further configured to: and sharing the music playing resources comprising the target synthetic audio and the background image to the target user according to the music sharing instruction.
Further, in some embodiments of this embodiment, when the background image is a target user photo, the sharing module 1203 is specifically configured to: when a music sharing instruction is received, acquiring a user identifier corresponding to a target user photo; determining a corresponding target user from the user list based on the user identification; and sharing the music playing resource comprising the target synthetic audio and the target user photo to the target user.
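Resolving the target user from the identifier attached to the photo is a list lookup. A minimal sketch (the dictionary keys and function name are assumptions, not from the source):

```python
def share_to_photo_user(user_list, photo_user_id, resource):
    """Find the target user by the identifier carried by the photo and
    return the share payload; raise if the user is not in the list."""
    for user in user_list:
        if user["id"] == photo_user_id:
            return {"to": user["name"], "resource": resource}
    raise LookupError("photo user not found in the user list")
```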
It should be noted that the music sharing methods in the first and second embodiments can be implemented by the music sharing apparatus provided in this embodiment. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the music sharing apparatus described in this embodiment may refer to the corresponding process in the foregoing method embodiments, and details are not repeated here.
According to the music sharing device provided by the embodiment, when a music editing instruction is received by a music playing interface, audio to be inserted is acquired, and original audio corresponding to currently played music is acquired; inserting the audio to be inserted into the target position of the original audio to obtain a target synthetic audio; and sharing the music playing resource associated with the target synthetic audio to the target user according to the music sharing instruction. Through the implementation of the scheme, the user is supported to edit the currently played music in the music playing interface, and the edited music playing resources are shared, so that the interactivity of music sharing and the expression capability of music are effectively improved.
Referring to fig. 14, fig. 14 is an electronic device according to a fourth embodiment of the present disclosure. The electronic device can be used for realizing the music sharing method in the embodiment. As shown in fig. 14, the electronic device mainly includes:
a memory 1401, a processor 1402, a bus 1403, and a computer program stored on the memory 1401 and executable on the processor 1402, the memory 1401 and the processor 1402 being connected by the bus 1403. When the processor 1402 executes the computer program, the music sharing method in the foregoing embodiments is implemented. The number of processors may be one or more.
The memory 1401 may be a high-speed random access memory (RAM) or a non-volatile memory, such as a disk memory. The memory 1401 is used to store executable program code, and the processor 1402 is coupled to the memory 1401.
Further, an embodiment of the present application also provides a computer-readable storage medium, where the computer-readable storage medium may be provided in an electronic device in the foregoing embodiments, and the computer-readable storage medium may be the memory in the foregoing embodiment shown in fig. 14.
The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the music sharing method in the foregoing embodiments. Further, the computer-readable storage medium may be any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The music sharing method, apparatus, and computer-readable storage medium provided by the present application have been described above. Those skilled in the art may make changes to the specific embodiments and application scope based on the ideas of the present application; therefore, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A music sharing method, comprising:
when a music editing instruction is received at a music playing interface, acquiring audio to be inserted and acquiring original audio corresponding to currently played music; wherein each audio to be inserted corresponds to a different emotion expression theme, and the emotion expression themes comprise: confession, gratitude, and commitment; and the original audio comprises a human voice audio component and an accompaniment audio component;
inserting the audio to be inserted into the target position of the original audio to obtain a target synthetic audio;
and sharing the music playing resource associated with the target synthetic audio to the target user according to the music sharing instruction.
2. The music sharing method according to claim 1, wherein the obtaining the audio to be inserted comprises:
recording user voice information in real time in the music playing process to obtain audio to be inserted;
inserting the audio to be inserted into the target position of the original audio to obtain a target synthetic audio, wherein the step of inserting the audio to be inserted into the target position of the original audio comprises the following steps:
determining the music playing time corresponding to the initial recording time of the user voice information as the target position of the audio to be inserted to the original audio;
and inserting the audio to be inserted into the original audio according to the target position to obtain a target synthetic audio.
3. The music sharing method according to claim 1, wherein the obtaining the audio to be inserted comprises:
determining a target emotion expression theme according to the emotion expression theme selection instruction;
determining a corresponding target audio template based on the target emotion expression theme and the mapping relation between a preset emotion expression theme and the audio template;
and determining the target audio template as the audio to be inserted.
4. The music sharing method according to claim 3, wherein before inserting the audio to be inserted into the target position of the original audio to obtain the target synthesized audio, the method further comprises:
acquiring the integral frequency spectrum characteristic of the original audio;
analyzing an audio segment corresponding to the target emotion expression theme in the original audio based on the overall spectral feature;
and determining the position of the audio segment as the target position.
5. The music sharing method according to claim 1, wherein the inserting the audio to be inserted into the target position of the original audio to obtain a target synthesized audio comprises:
extracting the human voice audio in the audio clip of the original audio target position;
removing the human voice audio corresponding to the audio clip in the original audio;
and inserting the audio to be inserted into the target position to obtain a target synthetic audio.
6. The music sharing method according to claim 1, wherein after sharing the music playing resource associated with the target synthesized audio to the target user according to the music sharing instruction, the method further comprises:
when a music feedback instruction sent by the target user is received, acquiring an interface display event corresponding to the music feedback instruction;
and displaying the interface display event on the music playing interface.
7. The music sharing method according to any one of claims 1 to 6, wherein the sharing a music playing resource associated with the target synthesized audio to a target user according to the music sharing instruction further comprises:
identifying an audio text corresponding to the audio to be inserted;
synchronizing the interface display node of the audio text with the insertion node of the audio to be inserted;
the sharing, according to the music sharing instruction, the music playing resource associated with the target synthesized audio to the target user includes:
and sharing the music playing resources comprising the target synthetic audio and the audio text to the target user according to the music sharing instruction.
8. The music sharing method according to any one of claims 1 to 6, wherein the sharing a music playing resource associated with the target synthesized audio to a target user according to the music sharing instruction further comprises:
when an interface background setting instruction is received, acquiring a background image corresponding to the interface background setting instruction;
synchronizing the interface display node of the background image with the insertion node of the audio to be inserted;
the sharing, according to the music sharing instruction, the music playing resource associated with the target synthesized audio to the target user includes:
and sharing the music playing resource comprising the target synthetic audio and the background image to a target user according to the music sharing instruction.
9. The music sharing method according to claim 8, wherein the background image is a photograph of a target user, and the sharing the music playing resource including the target synthesized audio and the background image to the target user according to the music sharing instruction comprises:
when a music sharing instruction is received, acquiring a user identifier corresponding to the target user photo;
determining a corresponding target user from a user list based on the user identification;
and sharing the music playing resource comprising the target synthetic audio and the target user photo to the target user.
10. A music sharing apparatus, comprising:
the acquisition module is used for acquiring the audio to be inserted and acquiring the original audio corresponding to the currently played music when the music playing interface receives a music editing instruction; wherein each audio to be inserted corresponds to a different emotion expression theme, and the emotion expression themes comprise: confession, gratitude, and commitment; and the original audio comprises a human voice audio component and an accompaniment audio component;
the inserting module is used for inserting the audio to be inserted into the target position of the original audio to obtain a target synthetic audio;
and the sharing module is used for sharing the music playing resource related to the target synthetic audio to the target user according to the music sharing instruction.
11. An electronic device, comprising: a memory, a processor, and a bus;
the bus is used for realizing connection communication between the memory and the processor;
the processor is configured to execute a computer program stored on the memory;
the processor, when executing the computer program, performs the steps of the method of any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
CN202010414543.3A 2020-05-15 2020-05-15 Music sharing method and device and computer readable storage medium Active CN111583973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010414543.3A CN111583973B (en) 2020-05-15 2020-05-15 Music sharing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010414543.3A CN111583973B (en) 2020-05-15 2020-05-15 Music sharing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111583973A CN111583973A (en) 2020-08-25
CN111583973B true CN111583973B (en) 2022-02-18

Family

ID=72125000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010414543.3A Active CN111583973B (en) 2020-05-15 2020-05-15 Music sharing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111583973B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837709B (en) * 2021-02-24 2022-07-22 北京达佳互联信息技术有限公司 Method and device for splicing audio files
CN114020958B (en) * 2021-09-26 2022-12-06 天翼爱音乐文化科技有限公司 Music sharing method, equipment and storage medium
CN114697745A (en) * 2022-03-30 2022-07-01 咪咕文化科技有限公司 Video sharing method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107464555A (en) * 2016-06-03 2017-12-12 索尼移动通讯有限公司 Background sound is added to the voice data comprising voice
CN108153831A (en) * 2017-12-13 2018-06-12 北京小米移动软件有限公司 Music adding method and device
CN111079423A (en) * 2019-08-02 2020-04-28 广东小天才科技有限公司 Method for generating dictation, reading and reporting audio, electronic equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7882258B1 (en) * 2003-02-05 2011-02-01 Silver Screen Tele-Reality, Inc. System, method, and computer readable medium for creating a video clip
CN101609703A (en) * 2008-06-20 2009-12-23 索尼爱立信移动通讯有限公司 Music browser device and music browsing method
US8948893B2 (en) * 2011-06-06 2015-02-03 International Business Machines Corporation Audio media mood visualization method and system
US20130178961A1 (en) * 2012-01-05 2013-07-11 Microsoft Corporation Facilitating personal audio productions
US9431004B2 (en) * 2013-09-05 2016-08-30 International Business Machines Corporation Variable-depth audio presentation of textual information
CN104506751A (en) * 2014-12-19 2015-04-08 天脉聚源(北京)科技有限公司 Method and device for generating electronic postcard with voice
CN108536655A (en) * 2017-12-21 2018-09-14 广州市讯飞樽鸿信息技术有限公司 Audio production method and system are read aloud in a kind of displaying based on hand-held intelligent terminal
CN109147745B (en) * 2018-07-25 2020-03-10 北京达佳互联信息技术有限公司 Song editing processing method and device, electronic equipment and storage medium
CN108986843B (en) * 2018-08-10 2020-12-11 杭州网易云音乐科技有限公司 Audio data processing method and device, medium and computing equipment
CN109461462B (en) * 2018-11-02 2021-12-17 王佳一 Audio sharing method and device
CN110636409A (en) * 2019-09-20 2019-12-31 百度在线网络技术(北京)有限公司 Audio sharing method and device, microphone and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107464555A (en) * 2016-06-03 2017-12-12 索尼移动通讯有限公司 Background sound is added to the voice data comprising voice
CN108153831A (en) * 2017-12-13 2018-06-12 北京小米移动软件有限公司 Music adding method and device
CN111079423A (en) * 2019-08-02 2020-04-28 广东小天才科技有限公司 Method for generating dictation, reading and reporting audio, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111583973A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111583973B (en) Music sharing method and device and computer readable storage medium
US11569922B2 (en) System and method for generating an audio file
CN105006234B (en) A kind of K sings processing method and processing device
CN110324718B (en) Audio and video generation method and device, electronic equipment and readable medium
CN106653037B (en) Audio data processing method and device
US10580394B2 (en) Method, client and computer storage medium for processing information
US10861499B2 (en) Method of editing media, media editor, and media computer
CN107978310B (en) Audio processing method and device
CN108053696A (en) A kind of method, apparatus and terminal device that sound broadcasting is carried out according to reading content
CN109618229A (en) Association playback method, device, server and the storage medium of audio-video
CN104681048A (en) Multimedia read control device, curve acquiring device, electronic equipment and curve providing device and method
CN113821189B (en) Audio playing method, device, terminal equipment and storage medium
KR101477492B1 (en) Apparatus for editing and playing video contents and the method thereof
CN111883090A (en) Method and device for making audio file based on mobile terminal
CN112115283A (en) Method, device and equipment for processing picture book data
KR20220118358A (en) How to insert a name or place name into a song
CN116229996A (en) Audio production method, device, terminal, storage medium and program product
CN115695680A (en) Video editing method and device, electronic equipment and computer readable storage medium
CN115623266A (en) Video processing method and device, storage medium and electronic equipment
JP2004266629A (en) Image processing apparatus
IES86526Y1 (en) A system and method for generating an audio file

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant