CN116935817A - Music editing method, apparatus, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN116935817A
Authority
CN
China
Prior art keywords: editing, target, music, track, parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210357396.XA
Other languages
Chinese (zh)
Inventor
邱悦
胡建丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210357396.XA priority Critical patent/CN116935817A/en
Publication of CN116935817A publication Critical patent/CN116935817A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/0008 - Associated control or indicating means
    • G10H1/0025 - Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 - Selection of displayed objects or displayed text elements
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 - Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/101 - Music composition or musical creation; Tools or processes therefor
    • G10H2210/105 - Composing aid, e.g. for supporting creation, edition or modification of a piece of music
    • G10H2210/155 - Musical effects
    • G10H2220/00 - Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/091 - Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
    • G10H2220/101 - Graphical user interface [GUI] specifically adapted for electrophonic musical instruments for graphical creation, edition or control of musical data or parameters
    • G10H2250/00 - Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/311 - Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation

Abstract

The embodiments of this application disclose a music editing method, apparatus, electronic device, and computer-readable storage medium. A music editing interface of music to be edited is displayed for a target object; the interface includes an edit confirmation control and an initial track of the music to be edited, the initial track being at least one track obtained by performing track separation on the music. In response to a confirmation operation on the edit confirmation control, a target track is displayed, the target track being obtained by editing the initial track according to target editing parameters matched to the target object. The embodiments can thus edit the music automatically and conveniently.

Description

Music editing method, apparatus, electronic device, and computer-readable storage medium
Technical Field
The present application relates to the field of music processing technology, and in particular, to a music editing method, apparatus, electronic device, and computer readable storage medium.
Background
With the development of the internet, users have become increasingly interested in music, and more and more of them want to edit music to obtain the result they want.
At present, a user edits music manually through a music editor, which is cumbersome.
Disclosure of Invention
The embodiments of this application provide a music editing method, apparatus, electronic device, and computer-readable storage medium, which can solve the technical problem that manual music editing is cumbersome.
A music editing method comprising:
displaying a music editing interface of music to be edited for a target object, the music editing interface comprising an edit confirmation control and an initial track of the music to be edited, the initial track being at least one track obtained by performing track separation on the music to be edited;
and in response to a confirmation operation on the edit confirmation control, displaying a target track, the target track being obtained by editing the initial track according to target editing parameters matched to the target object.
Accordingly, an embodiment of the present application provides a music editing apparatus, including:
the first display module is used for displaying a music editing interface of the music to be edited aiming at the target object, wherein the music editing interface comprises an editing confirmation control and an initial sound track of the music to be edited, and the initial sound track is at least one sound track obtained by separating the sound track of the music to be edited;
And the second display module is used for responding to the confirmation operation of the editing confirmation control and displaying a target sound track, wherein the target sound track is a sound track obtained by editing the initial sound track according to target editing parameters matched with the target object.
Optionally, the second display module is specifically configured to perform:
responding to the confirmation operation of the editing confirmation control, and acquiring at least one editing object of the music to be edited;
matching the at least one editing object with the target object;
taking the editing parameters corresponding to the editing objects matched with the target objects as target editing parameters matched with the target objects;
editing the initial audio track according to the target editing parameters to obtain a target audio track;
the target track is displayed.
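The matching-and-editing flow above (acquire the editing objects, match them against the target object, take the matched object's parameters as target editing parameters, then edit the initial tracks) can be sketched as follows. All names (`EditingObject`, `auto_edit`, the dictionary shapes) are illustrative assumptions for this summary, not structures defined by the application:

```python
from dataclasses import dataclass

@dataclass
class EditingObject:
    name: str
    attributes: dict      # attribute data used when matching against the target object
    editing_params: dict  # per-track editing parameters, e.g. {"vocal": {"gain": 1.5}}

def edit_track(track: dict, params: dict) -> dict:
    # Placeholder "edit": merge the editing parameters into the track data.
    edited = dict(track)
    edited.update(params)
    return edited

def auto_edit(initial_tracks: dict, editing_objects: list,
              target_attrs: dict, match) -> dict:
    # Steps 1-2: acquire the editing objects and match them with the target object.
    matched = [o for o in editing_objects if match(o.attributes, target_attrs)]
    if not matched:
        return initial_tracks
    # Step 3: the matched object's parameters become the target editing parameters.
    target_params = matched[0].editing_params
    # Step 4: edit each initial track with its corresponding parameters.
    return {name: edit_track(track, target_params.get(name, {}))
            for name, track in initial_tracks.items()}
```

A caller would pass in the separated tracks, the candidate editing objects, the target object's attribute data, and whichever match predicate the embodiment uses.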
Optionally, the second display module is specifically configured to perform:
calculating a similarity between the attribute data of the at least one editing object and the attribute data of the target object;
and taking the editing object with the similarity higher than the preset threshold value as the editing object matched with the target object.
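As one concrete (assumed) instance of this similarity step, the attribute data could be held as numeric feature vectors and compared by cosine similarity; the vector representation and threshold value here are illustrative only, not prescribed by the application:

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    # Cosine similarity between two numeric attribute vectors keyed by attribute name.
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def matched_objects(editing_objects, target_attrs, threshold=0.8):
    # Keep only editing objects whose similarity to the target object
    # exceeds the preset threshold.
    return [obj for obj in editing_objects
            if cosine_similarity(obj["attrs"], target_attrs) > threshold]
```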
Optionally, the second display module is specifically configured to perform:
taking the editing object matched with the target object as a candidate editing object, and displaying an object interface, wherein the object interface comprises the candidate editing object and editing parameters corresponding to the candidate editing object;
And responding to the object selection operation of the target object on the candidate editing object, and taking the editing parameter of the candidate editing object corresponding to the object selection operation as a target editing parameter matched with the target object.
Optionally, the second display module is specifically configured to perform:
taking editing parameters corresponding to the editing objects matched with the target objects as candidate editing parameters, and displaying a parameter interface, wherein the parameter interface comprises the candidate editing parameters;
and responding to the parameter selection operation of the target object on the candidate editing parameters, and screening target editing parameters matched with the target object from the candidate editing parameters.
Optionally, the second display module is specifically configured to perform:
acquiring editing parameters corresponding to the editing object matched with the target object in at least one time period;
acquiring the frequency of use of the editing parameters corresponding to each time period;
and taking the editing parameter corresponding to the time period with the highest frequency of use as a target editing parameter matched with the target object.
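A minimal sketch of this frequency-based selection, assuming the usage statistics have already been aggregated per time period (the data shape is hypothetical):

```python
def pick_target_params(period_usage: dict) -> dict:
    # period_usage maps a time-period label to a tuple
    # (editing_params, use_frequency); return the editing parameters
    # of the period with the highest use frequency.
    best_period = max(period_usage, key=lambda period: period_usage[period][1])
    return period_usage[best_period][0]
```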
Optionally, the second display module is specifically configured to perform:
screening a first initial track and a second initial track from the initial tracks, wherein the first initial track is an initial track that has no corresponding editing parameter among the editing parameters of the editing object matched with the target object, and the second initial track is an initial track that does have a corresponding editing parameter among those editing parameters;
determining the editing parameters of the first initial track according to the editing parameters of the second initial track;
and taking the editing parameters of the second initial track and the editing parameters of the first initial track as target editing parameters matched with the target object.
Optionally, the second display module is specifically configured to perform:
screening a first editing object and a second editing object from editing objects matched with the target object, wherein the editing parameters of the first editing object comprise the editing parameters of a third initial audio track in the initial audio tracks, and the editing parameters of the second editing object comprise the editing parameters of a fourth initial audio track in the initial audio tracks;
and taking the editing parameters of the first editing object and the editing parameters of the second editing object as target editing parameters matched with the target object.
Optionally, the music editing apparatus further includes:
a first editing module for performing:
when detecting that the target track data of the target track is abnormal, displaying an editing interface of the abnormal track, wherein the editing interface comprises an editing control, and the abnormal track is the target track with the abnormal target track data;
And responding to the parameter editing operation of the editing control, displaying a first edited audio track, wherein the first edited audio track is an audio track obtained by editing the abnormal audio track according to the parameter editing operation.
Optionally, the second display module is specifically configured to perform:
responding to the confirmation operation of the editing confirmation control to acquire the attribute data of the target object;
determining the display sequence of the target audio track according to the attribute data of the target object;
and displaying the target audio track according to the display sequence.
Optionally, the second display module is further configured to perform:
responding to the selected operation of the target audio track, and determining the type of the target audio track corresponding to the selected operation;
determining display parameters of the target audio track corresponding to the selected operation according to the type;
and displaying the target audio track corresponding to the selected operation on the music editing interface according to the display parameters.
In addition, the embodiment of the application also provides electronic equipment, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for running the computer program in the memory to realize the music editing method provided by the embodiment of the application.
In addition, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program is suitable for being loaded by a processor to execute any one of the music editing methods provided by the embodiment of the application.
In addition, the embodiment of the application also provides a computer program product, which comprises a computer program, and the computer program realizes any one of the music editing methods provided by the embodiment of the application when being executed by a processor.
In the embodiments of this application, a music editing interface of music to be edited is displayed for a target object; the interface includes an edit confirmation control and an initial track of the music to be edited, the initial track being at least one track obtained by performing track separation on the music. Then, in response to a confirmation operation by the target object on the edit confirmation control, a target track is displayed, the target track being obtained by editing the initial track according to target editing parameters matched to the target object.
Because the displayed music editing interface includes the edit confirmation control, the target track can be displayed in response to the confirmation operation on that control; and because the target track is produced by editing the initial track according to editing parameters matched to the target object, the initial track of the music to be edited can be edited automatically, which makes editing more convenient.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of a music editing process according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a music editing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a first page of a client according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a music interface provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a music matching interface provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a music editing interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of another music editing interface provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a music extraction interface provided by an embodiment of the present application;
fig. 9 is a schematic diagram of an uploading process of first extracted music provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a process of performing track separation on music to be edited provided by an embodiment of the present application;
FIG. 11 is a schematic illustration of waveforms of different instruments provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of waveforms of sound provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of another process of track separation for music to be edited provided by an embodiment of the application;
FIG. 14 is a schematic diagram of another music interface provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of another music matching interface provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of another music matching interface provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an editing interface provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of another music editing interface provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of another music editing interface provided by an embodiment of the present application;
FIG. 20 is a schematic diagram of a parameter selection interface provided by an embodiment of the present application;
FIG. 21 is a schematic diagram of another music editing interface provided by an embodiment of the present application;
FIG. 22 is a schematic diagram of a method for playing music to be edited or music after editing according to an embodiment of the present application;
FIG. 23 is a flowchart of another music editing method according to an embodiment of the present application;
FIG. 24 is a schematic diagram of a process for displaying an initial audio track provided by an embodiment of the present application;
FIG. 25 is a schematic diagram of a process for obtaining target editing parameters provided by an embodiment of the present application;
FIG. 26 is a schematic diagram of another embodiment of the present application for obtaining target editing parameters;
FIG. 27 is a schematic diagram of another music editing method provided by an embodiment of the present application;
fig. 28 is a schematic structural view of a music editing apparatus provided by an embodiment of the present application;
fig. 29 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a music editing method, a device, electronic equipment and a computer readable storage medium. The music editing apparatus may be integrated in an electronic device, which may be a server or a terminal.
The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN) services, big data, and artificial intelligence platforms.
And, wherein a plurality of servers may be organized into a blockchain, and the servers are nodes on the blockchain.
The terminal may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
For example, as shown in fig. 1, the terminal may display a music editing interface of the music to be edited for the target object, where the music editing interface includes an edit confirmation control and an initial track of the music to be edited, where the initial track is at least one track obtained by performing track separation on the music to be edited; and responding to the confirmation operation of the target object on the editing confirmation control, displaying a target sound track, wherein the target sound track is a sound track obtained by editing the initial sound track according to target editing parameters matched with the target object.
In addition, "plurality" in the embodiments of the present application means two or more. "first" and "second" and the like in the embodiments of the present application are used for distinguishing descriptions and are not to be construed as implying relative importance.
The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
In the present embodiment, description will be made in terms of a music editing apparatus which can be integrated in a device such as a server or a terminal, and for convenience in explaining the music editing method of the present application, the music editing apparatus will be integrated in the terminal, that is, the terminal will be used as an execution subject.
Referring to fig. 2, fig. 2 is a flowchart illustrating a music editing method according to an embodiment of the application. The music editing method may include:
s201, displaying a music editing interface of the music to be edited aiming at the target object, wherein the music editing interface comprises an editing confirmation control and an initial track of the music to be edited, and the initial track is at least one track obtained after the track separation of the music to be edited.
When the terminal receives a start instruction for the client, it starts the client and displays a client interface. The displayed page may be the home page of the client, the music interface of the client, or another interface of the client.
To display the music interface of the client, the terminal responds to a trigger operation performed by the target object on a music-page control and then displays the music interface.
For example, when an authoring interface of the client is displayed (the authoring interface is a home page of the client), as shown in fig. 3, the terminal may display a music interface in response to a trigger operation of the start authoring control.
In response to a first music selection operation by the target object on a music identifier, the terminal displays a music matching interface for the first music identifier corresponding to that operation. The music matching interface includes a matching control, and the first music corresponding to the first music identifier is the music to be edited.
For example, the music interface may be as shown in fig. 4, and the terminal may display a music matching interface of the first music identifier in response to the selection operation of the target object on the first music identifier, and the music matching interface of the music to be edited may be as shown in fig. 5.
Optionally, the music interface may further include a search control, and the terminal may display a search result interface in response to an input operation of the target object on the search control, where the search result interface includes a music identifier of a music result corresponding to the input operation. And the terminal responds to the first music selection operation of the music identifier, and displays a music matching interface of the first music identifier corresponding to the first music selection operation.
The music matching interface comprises a matching control, and the terminal responds to triggering operation of the matching control to display a music editing interface of the to-be-edited music aiming at the target object.
The music editing interface includes the edit confirmation control and the initial track of the music to be edited. Each initial track corresponds to one of the instruments or voices that make up the music. For example, when the initial tracks include an initial vocal track, an initial bass track, an initial accompaniment track, and an initial drum-point track, the music editing interface may be as shown in fig. 6.
That is, the terminal may display the music editing interface, on which the initial vocal track, initial bass track, initial accompaniment track, and initial drum-point track are displayed in a preset display order.
That is, the initial vocal data of the music to be edited (the data corresponding to the initial vocal track) is subjected to pitch separation into approximately 120 pitch values; the pitch data is then normalized so that a 24-level pitch scale chart is displayed on the music editing interface. This scale chart is the initial vocal track, and it lets the display of the music to be edited follow changes in the pitch or timbre of the voice.
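The normalization step could, for instance, bin the separated pitch values onto 24 display levels; this sketch assumes the raw pitches arrive as a simple numeric sequence, which is an illustrative assumption rather than the application's stated representation:

```python
def to_scale_levels(pitches, levels=24):
    # Map raw pitch values onto `levels` display rows for the scale chart.
    lo, hi = min(pitches), max(pitches)
    span = (hi - lo) or 1.0
    return [min(levels - 1, int((p - lo) / span * levels)) for p in pitches]
```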
The drum-point data corresponding to the initial drum-point track includes initial drum-point data of a heavy-drum type and of a light-drum type. Drum points of the heavy-drum type may be drawn in one style (for example, large blue circles) and drum points of the light-drum type in another style (for example, small green circles), placed on the initial drum-point track in the chronological order in which the drum-point data occurs. When drawing the drum points, the terminal may use the CALayer technique, which offers better rendering performance than the UIView technique.
In addition, the terminal may dynamically enlarge the drum point reached by the playback progress bar, so that the target object can perceive the rhythm at that point more clearly.
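As an illustrative (non-normative) sketch of the drawing rules just described, heavy and light drum points can be mapped to drawing specifications in time order, with the point reached by the progress bar enlarged. All styling values (colors, radii, enlargement factor, time window) are assumptions:

```python
def drum_point_specs(drum_events, progress_time, enlarge=1.5, window=0.05):
    # drum_events: list of (time, kind) pairs, kind being "heavy" or "light".
    style = {"heavy": ("blue", 12.0), "light": ("green", 6.0)}
    specs = []
    for t, kind in sorted(drum_events):          # chronological order
        color, radius = style[kind]
        if abs(t - progress_time) < window:      # reached by the progress bar
            radius *= enlarge                    # dynamic enlargement
        specs.append({"time": t, "color": color, "radius": radius})
    return specs
```

On iOS, each spec would then back one CALayer rather than a full UIView, which is where the rendering-performance gain mentioned above comes from.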
The pitch of the initial accompaniment data of the music to be edited is plotted on the initial accompaniment track, so that the target object can visually perceive the melodic rise and fall of the accompaniment.

Because bass is low-frequency audio, it is difficult for the target object to perceive by ear alone. This embodiment therefore plots whether initial bass data of the music to be edited exists at each point in time, helping the target object better understand the composition of the music.

Displaying the initial tracks of the music to be edited on the music editing interface presents the information of the music more accurately: the target object can see the music while hearing it, which makes the music easier to understand, even for a non-professional target object.
When the terminal displays the music editing interface, the terminal may automatically pop up the edit confirmation control; that is, the edit confirmation control may be displayed on the music editing interface as a pop-up window, for example as shown at 701 in fig. 7, where the confirmation control is the edit confirmation control. Alternatively, the edit confirmation control may be embedded in the music editing interface, as shown at 702 in fig. 7.
When the music interface includes a music identifier, the music corresponding to that identifier already exists in the client; that is, the music to be edited already exists in the server corresponding to the client.
In some embodiments, the music to be edited may also be music extracted from content uploaded by the target object; that is, when the terminal displays the music interface, the music interface does not yet include an identifier of the extracted music. In this case, the music interface may further include an extraction control, and the terminal displays, in response to a trigger operation on the extraction control, a music selection interface containing the music or video to extract from.

In response to an initial selection operation of the target object on the music or video, the terminal uploads the first extracted music or first extracted video corresponding to that operation to the server corresponding to the client (if the operation selects a video, the terminal may first extract the music from the first extracted video to obtain the first extracted music, and then upload the first extracted music to the server). When the upload succeeds, a music extraction interface is displayed that includes the music matching interface of the first extracted music; that is, the music matching interface of the music to be edited is a sub-interface of the music extraction interface.
The music extraction interface may also include an extraction control so that the terminal may continue to display the music selection interface in response to a triggering operation of the extraction control by the target object. For example, the music extraction interface may be as shown in fig. 8.
Referring to fig. 9, the process by which the terminal uploads the first extracted music to the server corresponding to the client may be as follows. The terminal sends the first extracted music and a permission packet to the upload middle platform through the client. The upload middle platform unpacks the permission packet through its service-side upload module to obtain the permission information of the client, and then sends the permission information to the login middle platform. The login middle platform verifies the permissions of the client based on the permission information and returns a verification result to the upload middle platform. If verification succeeds, the upload middle platform generates a file identifier for the first extracted music through the service-side upload module, sends the first extracted music and the file identifier to the cloud database for storage, and then returns the file identifier and the storage address of the first extracted music in the cloud database to the terminal.
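Collapsed into a single function, the upload flow can be sketched as follows. All names are hypothetical simplifications; in the embodiment the permission check and the storage are performed by separate services (the upload middle platform, the login middle platform, and the cloud database):

```python
import uuid

def upload_music(music_bytes, permission_packet, verify_permission, cloud_db):
    """Sketch of the upload flow: unpack permissions, verify them,
    then store the file under a freshly generated file identifier."""
    info = permission_packet["permission_info"]   # unpack the permission packet
    if not verify_permission(info):               # login-platform verification
        return None                               # verification failed
    file_id = uuid.uuid4().hex                    # generate the file identifier
    cloud_db[file_id] = music_bytes               # store in the cloud database
    return {"file_id": file_id, "address": f"cloud://{file_id}"}

cloud_db = {}
receipt = upload_music(b"extracted-song", {"permission_info": "valid"},
                       lambda info: info == "valid", cloud_db)
```

The returned identifier and address are what the terminal later uses to request track separation.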
It should be appreciated that the client may exist in the form of an application program, a web page, or an applet. The form of the client can be chosen according to the actual situation; this embodiment is not limited herein.
In other embodiments, displaying, in response to a trigger operation on the matching control, a music editing interface of the music to be edited for the target object includes: in response to the trigger operation on the matching control, performing track separation on the music to be edited to obtain each initial track, and displaying a music editing interface of the music to be edited for the target object, the music editing interface including the initial tracks.
Referring to fig. 10, the process of performing track separation on the music to be edited to obtain its initial tracks may be as follows. In response to the trigger operation on the matching control, the terminal sends the file identifier of the music to be edited and the target account of the target object to the matching server. The matching server verifies the login state of the target account. If the target account is logged in and the file identifier exists in the cloud database, the matching server searches the cache for the track-separation flow record of the file identifier; if the file has not yet been separated, the matching server creates a track-separation flow record for the file identifier and stores it in the cache.
The matching server sends the file identifier to the audio server, so that the audio server creates a track separation task corresponding to the file identifier and runs it to separate the tracks of the music to be edited, meanwhile returning the identifier of the track separation task to the matching server, which stores it in the cache.

When the audio server finishes separating the tracks of the music to be edited, it sends the initial track data of each initial track to the matching server and to the cloud database, and the matching server forwards the initial track data to the terminal.

In addition, the audio server may create a corresponding step sub-record for each step of the track separation process, i.e., when a step starts running, a step sub-record for that step is created. Each step sub-record is sent to the matching server, which sends the step sub-records and the track-separation flow record to the wormhole.

The terminal also, in response to the trigger operation on the matching control, sends the file identifier of the music to be edited to the flow-record server, so that the flow-record server creates a flow-record task corresponding to the file identifier. The flow-record server then runs the task and sends a record acquisition request to the wormhole (the wormhole being a channel connecting the flow-record server and the matching server). Based on that request, the wormhole sends the track-separation flow record and the step sub-records to the flow-record server, which stores them in the flow-record database; when the matching of the music to be edited is completed, the flow-record task ends.
The music to be edited can be track-separated by a trained neural network model or by an independent component analysis algorithm. This is possible because a vibrating sound source produces not a single-frequency sound wave but a composite sound made up of a fundamental tone and overtones of different frequencies. For example, fig. 11 shows the waveforms of different musical instruments, and fig. 12 likewise shows that a sound waveform is composed of different component waveforms.

Therefore, track separation can be performed on the music to be edited to obtain the waveform of each initial track, and the initial track data of each track can then be determined from the amplitude and frequency of its waveform. During separation, the music to be edited may be Fourier-transformed to obtain its matrix in the frequency domain, and the matrix may then be divided to obtain a submatrix for each initial track, the submatrix of an initial track being the waveform of that track.
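As a toy illustration of this frequency-domain division (a real system would use a trained neural network or independent component analysis, and the band boundary below is an arbitrary assumption), a signal can be transformed, its spectrum divided into a low band and a high band, and each band inverted back into a separate "track":

```python
import cmath

def dft(x):
    """Discrete Fourier transform of a real-valued signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse transform, keeping only the real part."""
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

def split_bands(signal, cutoff):
    """Divide the frequency-domain representation into two parts
    and invert each one into its own 'track'."""
    X = dft(signal)
    n = len(X)
    # Keep bin k and its conjugate mirror n-k so the output stays real.
    low = [X[k] if (k <= cutoff or k >= n - cutoff) else 0 for k in range(n)]
    high = [X[k] - low[k] for k in range(n)]
    return idft(low), idft(high)

signal = [1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0]  # pure high-frequency tone
low_track, high_track = split_bands(signal, cutoff=1)
```

By linearity of the transform, the separated tracks always sum back to the original signal, which matches the idea that the submatrices together make up the whole frequency-domain matrix.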
If the matching server finds the track-separation flow record of the file identifier in the cache, it obtains the initial track data of the file's initial tracks from the cloud database and returns the data to the terminal (see fig. 13), and the terminal draws the initial tracks according to the initial track data.

Because each step of the track separation process creates a corresponding step sub-record, when a problem occurs during separation, the failing step can be identified quickly without separating the music to be edited all over again.
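The step sub-record bookkeeping can be sketched as follows; the record shapes and step callables are hypothetical, and in the embodiment the records live in the matching server's cache rather than a local dict:

```python
def run_separation(cache, file_id, steps):
    """Run named separation steps, creating a step sub-record for each,
    so a failure can be traced to the exact step without redoing the job."""
    flow = cache.setdefault(file_id, {"file_id": file_id, "steps": []})
    for name, step in steps:
        sub = {"step": name, "status": "running"}
        flow["steps"].append(sub)  # step sub-record created when the step runs
        try:
            step()
            sub["status"] = "done"
        except Exception as exc:
            sub["status"] = f"failed: {exc}"
            break  # later steps are not attempted
    return flow

def failing_step():
    raise RuntimeError("decode error")

cache = {}
flow = run_separation(cache, "file-1",
                      [("download", lambda: None),
                       ("separate", failing_step),
                       ("store", lambda: None)])
```

Inspecting `flow["steps"]` immediately shows which step failed and which steps never ran.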
In other embodiments, the terminal may display a music matching interface of a plurality of music to be edited within the music interface or within the music extraction interface.
That is, after the terminal displays the music matching interface of the music to be edited, since the music interface further includes music identifiers, the terminal may also, in response to a second music selection operation of the target object on a music identifier in the music interface, display the music matching interface of the second music identifier corresponding to that operation; the second music corresponding to the second music identifier is also music to be edited. The music identifier here is one for which no corresponding music matching interface yet exists in the music interface.
That is, the target object may select a plurality of music identifications in the music interface, so that a music matching interface corresponding to the plurality of music identifications is displayed in the music interface.
Alternatively, since the music extraction interface further includes an extraction control, the terminal may continue to respond to a trigger operation of the target object on the extraction control by displaying the music selection interface, and then respond to a target selection operation of the target object on the music or video to extract from by displaying the music extraction interface, which now includes the music matching interface of the first extracted music and the music matching interface of the second extracted music corresponding to the target selection operation.
In addition, the music matching interface of the first music may further include an audition area. When the terminal responds to the second music selection operation, the audition area may no longer be displayed on the music matching interface of the first music, although a matching control may still be present there.

Similarly, the music matching interface of the first extracted music may further include an audition area. When the terminal responds to the target selection operation, the audition area may no longer be displayed on the music matching interface of the first extracted music, although a matching control may still be present there.

As an example, take the music matching interface of the music to be edited displayed on the music interface. In response to a first music selection operation of the target object on a music identifier, the terminal displays on the music interface the music matching interface of the first music identifier corresponding to that operation; the music interface may then be as shown at 1401 in fig. 14. In response to a second music selection operation of the target object on a music identifier, the terminal displays the music matching interface of the second music identifier corresponding to that operation and no longer displays the audition area on the music matching interface of the first music; the music interface may then be as shown at 1402 in fig. 14.
It should be noted that the terminal may display music matching interfaces for more pieces of music; the implementation may refer to the foregoing embodiments and is not repeated here.
In other embodiments, the music matching interface may further include a shooting control. The terminal may display a shooting interface of the music to be edited in response to a first trigger operation of the target object on the shooting control, so that the terminal shoots a video with the music to be edited.
In other embodiments, a collection control may also be included in the music matching interface. The terminal may display, in response to a second trigger operation of the target object on the collection control, a collection page that includes the music to be edited.

The process by which the terminal displays the collection page in response to the second trigger operation may be as follows. The terminal verifies, in response to the second trigger operation on the collection control, the login state of the target account corresponding to the target object. If the target account is logged in, the collection page, which includes the music to be edited, is displayed. If the target account is not logged in, a login page is displayed; the login page includes a login control, and the terminal displays the collection page in response to a confirmation operation of the target object on the login control.
For example, as shown in fig. 15, the music matching interface may include a play control, a shooting control, a collection control, a matching control, and an audition area. It should be understood that when the terminal receives a play instruction, the music to be edited may be played and the audition area displayed on its music matching interface, and when the terminal detects that the music to be edited is paused, the audition area may not be displayed.
It should be noted that, when the terminal responds to the second music selection operation, the music matching interface of the first music may include a play control, a shooting control, a collection control, and a matching control. Or, when the terminal responds to the target selection operation, the music matching interface of the first extracted music can further comprise a play control, a shooting control, a collection control and a matching control.
For example, the music matching interface of the first music and the music matching interface of the second music may be as shown in fig. 16.
S202, in response to the confirmation operation of the editing confirmation control, displaying a target sound track, wherein the target sound track is obtained by editing the initial sound track according to target editing parameters matched with a target object.
After the terminal displays the edit confirmation control, if the target object wants the music to be edited automatically, the target object may click the edit confirmation control, and the terminal displays the target track in response to the confirmation operation of the target object on the edit confirmation control.
The target editing parameter refers to an editing parameter corresponding to an editing object that matches the target object. The editing object matched with the target object refers to the editing object with the similarity with the target object being larger than a preset threshold value.
In some embodiments, in response to a validation operation to the edit validation control, displaying the target track includes:
responding to the confirmation operation of the editing confirmation control, and acquiring at least one editing object of the music to be edited;
matching at least one editing object with a target object;
taking the editing parameters corresponding to the editing objects matched with the target objects as target editing parameters matched with the target objects;
and editing the initial audio track according to the target editing parameters to obtain the target audio track.
and displaying the target track.
An editing object of the music to be edited is an object that has already edited the music to be edited, and obtaining an editing object may mean obtaining the attribute data of that object. Before the editing objects of the music to be edited are obtained, the terminal may display, for each editing object, a music editing interface of the music to be edited that includes an edit confirmation control and the initial tracks. The terminal then displays, in response to a selection operation on an initial track, an editing interface that includes at least one editing control. Next, the terminal generates editing parameters of the initial track in response to an initial trigger operation on the editing control, stores the editing parameters in association with the editing object and the music to be edited, edits the initial track based on the editing parameters, and displays the edited track.
Alternatively, the music editing interface may further include an adding control. The terminal may display, in response to a third selection operation on the adding control, a parameter selection interface that includes music parameters. Then, in response to a fourth selection operation on the music parameters, the terminal adds the selected music parameters to the music to be edited to obtain initially edited music, and stores those music parameters, as editing parameters, in association with the music to be edited and the editing object.
The editing parameters may be, for example: parameters for splitting the data of the initial drum beat track, copying the data of the initial bass track, cutting the data of the initial vocal track, changing the playback speed of the background music, adding sound-effect data, or A-B loop playback.
In this embodiment, each editing object, its editing parameters for the music to be edited, and the music to be edited are stored in association. The terminal then, in response to the confirmation operation on the edit confirmation control, finds the editing object matched with the target object and takes that object's editing parameters as the target editing parameters matched with the target object. Finally, the initial tracks can be edited according to the target editing parameters to obtain the target tracks, so that the initial tracks are edited automatically, without the target object having to edit manually, which is convenient.
Editing the initial track according to the target editing parameters to obtain the target track includes modifying the data of the initial track according to the target editing parameters to obtain the target data of the target track, and rendering the target data to the music editing interface so that the target track is displayed there.
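Putting the matching and editing steps together, the automatic-editing decision of this embodiment can be sketched as follows. The record layout, the injected `similar` and `apply_edit` callables, and the 0.8 threshold are all illustrative assumptions:

```python
def auto_edit(initial_tracks, target_attrs, edit_records, similar, apply_edit,
              threshold=0.8):
    """Pick the editing parameters of the best-matching editing object
    and apply them; fall back to the unedited tracks if nobody matches."""
    scored = [(similar(attrs, target_attrs), params)
              for attrs, params in edit_records]       # one score per editor
    scored = [(s, p) for s, p in scored if s > threshold]
    if not scored:
        return initial_tracks                          # nothing to apply
    _, target_params = max(scored, key=lambda sp: sp[0])
    return apply_edit(initial_tracks, target_params)   # edit with best match

records = [({"age": 20, "likes": "pop"}, {"vocal": "cut"}),
           ({"age": 50, "likes": "jazz"}, {"drum": "split"})]
exact = lambda a, b: 1.0 if a == b else 0.0  # stand-in for a real similarity
edited = auto_edit({"vocal": "raw"}, {"age": 20, "likes": "pop"}, records, exact,
                   lambda tracks, params: {**tracks, "applied": params})
```

The `similar` slot is where a cosine similarity over digitized attributes would plug in.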
Matching the editing object with the target object comprises:
calculating the similarity between the attribute data of at least one editing object and the attribute data of the target object;
and taking the editing object with the similarity higher than the preset threshold value as the editing object matched with the target object.
The attribute data may include age, gender, and hobby music type. Optionally, calculating the similarity between the attribute data of the editing object and the attribute data of the target object may include:
and respectively digitizing the attribute data of the editing object and the attribute data of the target object to obtain first numerical data of the editing object and second numerical data of the target object, and then calculating the similarity between the first numerical data and the second numerical data by adopting a similarity algorithm.
For example, if the attribute data of the editing object is [gender A, school B, dance style C, music preference type D, age F] and the attribute data of the target object is [gender A, school H, dance style E, music preference type D, age G], then the first numerical data may be [1, 3, 10, 20, 5] and the second numerical data may be [1, 7, 30, 20, 2].
The similarity algorithm may be selected according to practical situations, for example, a cosine similarity algorithm or a euclidean distance method is selected as the similarity algorithm in this embodiment, which is not limited herein.
When the similarity between the first numerical data and the second numerical data is calculated using the cosine similarity algorithm, the first numerical data and the second numerical data may be substituted into formula (1):

cos(θ) = Σ_{i=1}^{n} (x_i · y_i) / ( √(Σ_{i=1}^{n} x_i²) · √(Σ_{i=1}^{n} y_i²) )    (1)

wherein x_i represents the i-th element of the first numerical data, y_i represents the i-th element of the second numerical data, n represents the number of elements of the first numerical data (which equals the number of elements of the second numerical data), and cos(θ) represents the similarity between the first numerical data and the second numerical data.
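A short check of formula (1) on the example vectors given earlier (the 0.8 threshold below is an illustrative choice, not fixed by this embodiment):

```python
import math

def cosine_similarity(x, y):
    """Formula (1): the dot product over the product of magnitudes."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm

sim = cosine_similarity([1, 3, 10, 20, 5], [1, 7, 30, 20, 2])
matched = sim > 0.8  # editing object counts as a match above the preset threshold
```

With these two vectors the similarity is about 0.86, so the editing object would be treated as matching the target object under this threshold.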
Since more than one editing object may have a similarity higher than the preset threshold, that is, a plurality of editing objects may match the target object and several sets of editing parameters could be used to edit the initial track, in other embodiments, taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object includes:
taking the editing object matched with the target object as a candidate editing object, and displaying an object interface, wherein the object interface comprises the candidate editing object and editing parameters corresponding to the candidate editing object;
And responding to the object selection operation of the target object on the candidate editing object, and taking the editing parameters of the candidate editing object corresponding to the object selection operation as target editing parameters matched with the target object.
In this embodiment, after acquiring the editing object matched with the target object, the terminal takes the editing object matched with the target object as a candidate editing object, and displays the candidate editing object on the object interface, so that the target object can further select the editing parameters of each candidate editing object, and the initial audio track can be edited more accurately.
Or, the editing parameter corresponding to the editing object matched with the target object is taken as the target editing parameter matched with the target object, and the editing parameter comprises:
determining, among the similarities higher than the preset threshold, the maximum similarity;

and taking the editing parameters of the editing object corresponding to that maximum similarity as the target editing parameters matched with the target object.
After the editing objects matched with the target object are obtained, some of their editing parameters may not be needed by the target object. Therefore, in other embodiments, taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object includes:
Taking editing parameters corresponding to the editing objects matched with the target objects as candidate editing parameters, and displaying a parameter interface, wherein the parameter interface comprises the candidate editing parameters;
and responding to the parameter selection operation of the target object on the candidate editing parameters, and screening target editing parameters matched with the target object from the candidate editing parameters.
For example, suppose the editing parameters of the editing object matched with the target object include editing parameters of the initial vocal track and editing parameters of the initial drum beat track, but the target object only needs to edit the initial drum beat track. The target object can select the editing parameters of the initial drum beat track, and the terminal, in response to that parameter selection operation, screens the editing parameters of the initial drum beat track from the candidate editing parameters; that is, the target editing parameters matched with the target object are the editing parameters of the initial drum beat track.
In this embodiment, candidate editing parameters are displayed on the parameter interface, so that the target object can select the candidate editing parameters, and the terminal can screen the target editing parameters matched with the target object from the candidate editing parameters in response to the parameter selection operation of the target object on the candidate editing parameters, thereby enabling the initial audio track to be edited more accurately.
Because an editing object matched with the target object may have edited the initial track in different time periods, that is, edited the initial track multiple times so that editing parameters exist for different time periods, in other embodiments, taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object includes:
acquiring editing parameters corresponding to an editing object matched with a target object in at least one time period;
acquiring the frequency of use of editing parameters corresponding to each time period;
and taking the editing parameter corresponding to the time period with the highest frequency of use as a target editing parameter matched with the target object.
The editing parameters with the highest frequency of use are the most likely to suit the needs of the target object. Therefore, in this embodiment, when an editing object matched with the target object has editing parameters for different time periods, the most frequently used editing parameters are taken as the target editing parameters, which improves the accuracy of editing the initial track.
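Selecting the parameters of the most frequently used time period can be sketched as follows (the record shapes and period names are hypothetical):

```python
from collections import Counter

def pick_by_usage(period_params, usage_log):
    """period_params maps a time period to its editing parameters;
    usage_log lists the period used for each past edit."""
    busiest = Counter(usage_log).most_common(1)[0][0]  # most frequent period
    return period_params[busiest]

params = pick_by_usage(
    {"week1": {"cut": "vocal"}, "week2": {"loop": "A-B"}},
    ["week1", "week2", "week2"],  # week2's parameters were used most often
)
```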
If the editing parameters of the editing object matched with the target object do not include editing parameters for all the initial tracks, some initial tracks would be left unedited. Therefore, in other embodiments, taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object includes:
Screening a first initial sound track and a second initial sound track from the initial sound tracks, wherein the first initial sound track is an initial sound track in which no corresponding editing parameters exist in editing parameters corresponding to an editing object matched with a target object, and the second initial sound track is an initial sound track in which corresponding editing parameters exist in editing parameters corresponding to the editing object matched with the target object;
determining editing parameters of the first initial audio track according to the editing parameters of the second initial audio track;
and taking the editing parameters of the second initial audio track and the editing parameters of the first initial audio track as target editing parameters matched with the target object.
For example, suppose the editing parameters of the editing object matched with the target object include editing parameters of the initial vocal track and of the initial bass track, and the initial tracks include the initial vocal track, the initial drum beat track, the initial bass track, and the initial accompaniment track. Since the editing parameters of the matched editing object include no editing parameters for the initial accompaniment track or the initial drum beat track, the first initial tracks are the initial accompaniment track and the initial drum beat track, and the second initial tracks are the initial vocal track and the initial bass track.
The first edited music, that is, the music containing the target tracks, should be auditorily coordinated. Therefore, when a first initial track has no editing parameters but a second initial track does, the editing parameters of the first initial track can be determined from the editing parameters of the second initial track.

For example, suppose the editing parameters of the second initial track adjust the current playback volume of its audio data. If the difference between the target playback volume obtained after that edit and the playback volume of the audio data of the first initial track exceeds a preset range, the first edited music is not auditorily coordinated. In this embodiment, therefore, the editing parameters of the first initial track are determined from those of the second initial track, and the first initial track is then edited accordingly, so that the initial tracks are edited more accurately, the first edited music is auditorily coordinated, and it better meets the needs of the target object.
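A volume-coordination rule of the kind just described can be sketched as follows. The dB values and the 6 dB preset range are illustrative assumptions:

```python
def coordinate_volume(first_volume, second_target_volume, max_gap=6.0):
    """Derive an editing parameter for the first track: if its volume
    differs from the second track's edited target volume by more than
    max_gap (a hypothetical preset range), pull it back within range."""
    gap = first_volume - second_target_volume
    if abs(gap) <= max_gap:
        return first_volume  # already coordinated, no edit needed
    # Clamp to the nearest boundary of the allowed range.
    return second_target_volume + (max_gap if gap > 0 else -max_gap)

adjusted = coordinate_volume(first_volume=-3.0, second_target_volume=-18.0)
```

Here the first track is 15 dB louder than the edited second track, so its derived editing parameter brings it down to the edge of the allowed range.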
In other embodiments, the editing parameter corresponding to the editing object matched with the target object is taken as the target editing parameter matched with the target object, and the editing parameter comprises:
screening a first editing object and a second editing object from editing objects matched with the target object, wherein the editing parameters of the first editing object comprise the editing parameters of a third initial audio track in the initial audio tracks, and the editing parameters of the second editing object comprise the editing parameters of a fourth initial audio track in the initial audio tracks;
and taking the editing parameters of the first editing object and the editing parameters of the second editing object as target editing parameters matched with the target object.
In this embodiment, the initial audio tracks include a third initial audio track and a fourth initial audio track; the editing parameters of the first editing object include the editing parameters of the third initial audio track, and the editing parameters of the second editing object include the editing parameters of the fourth initial audio track. The editing parameters of the first editing object and those of the second editing object are then used together as the target editing parameters matched with the target object, so that the initial audio tracks are edited more accurately, the first edited music has a better auditory effect, and the first edited music better meets the requirements of the target object.
For example, the third initial tracks are an initial vocal track and an initial bass track, and the fourth initial tracks are an initial drum point track and an initial accompaniment track; the editing parameters of the first editing object include the editing parameters of the initial vocal track and of the initial bass track, and the editing parameters of the second editing object include the editing parameters of the initial drum point track and of the initial accompaniment track.
The auditory effect of the first edited music required by the target object can be analyzed according to the attribute data of the target object, and the first editing object and the second editing object can then be screened from the editing objects matched with the target object.
In other embodiments, after displaying the target track in response to a confirmation operation to the edit confirmation control, further comprising:
when detecting that the target track data of the target track is abnormal, displaying an editing interface of the abnormal track, wherein the editing interface comprises an editing control, and the abnormal track is the target track with the abnormality of the target track data;
and in response to a parameter editing operation on the editing control, displaying a first edited track, where the first edited track is a track obtained by editing the abnormal track according to the parameter editing operation.
Because the editing objects matched with the target object may include non-professional users, the editing parameters produced after a non-professional user edits the initial audio track may be abnormal, and the target track data of the target track obtained by editing the initial track parameters according to those editing parameters will then also be abnormal. Therefore, when the terminal detects that the target track data of a target track is abnormal, the terminal can display an editing interface for the abnormal track, where the editing interface includes an editing control; the terminal can then display the first edited track in response to a parameter editing operation on the editing control.
For example, if one piece of the target track data does not lie on the waveform diagram corresponding to the target track, that target track is abnormal. For another example, if the difference between the amplitudes of the waveforms corresponding to a first target track and a second target track at the same moment exceeds a preset amplitude range, both the first target track and the second target track are abnormal tracks.
The method for detecting whether the target track data of the target track is abnormal may be selected according to the actual situation; for example, a trained neural network model, or curve analysis of the waveforms corresponding to the target track data, may be used as the abnormality detection method in this embodiment, which is not limited herein.
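As one illustration, the two rule-based checks mentioned above could be sketched as follows; the threshold values and data layout are assumptions, not part of the description:

```python
# Hypothetical sketch of the rule-based anomaly checks described above:
# (1) a sample falling outside the track's waveform amplitude envelope, and
# (2) two tracks whose amplitudes at the same instant differ by more than a
# preset range. Names and thresholds are illustrative only.

PRESET_AMPLITUDE_RANGE = 0.5  # assumed maximum allowed amplitude difference

def has_outlier(samples, envelope_max):
    """Check (1): any sample outside the waveform's amplitude envelope."""
    return any(abs(s) > envelope_max for s in samples)

def tracks_mismatch(track_a, track_b):
    """Check (2): amplitude gap at any shared instant exceeds the preset range."""
    return any(abs(a - b) > PRESET_AMPLITUDE_RANGE for a, b in zip(track_a, track_b))
```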
In addition, the abnormal track includes at least one abnormal track point (for example, when the abnormal track is a target drum point track, the target drum point track includes at least one abnormal drum point), and for the abnormal track, the terminal may further display the abnormal track point according to an abnormal display dynamic effect.
When the initial tracks include an initial vocal track, an initial bass track, an initial accompaniment track, and an initial drum point track, the target tracks include a target drum point track, a target accompaniment track, a target vocal track, and a target bass track.
At this time, in response to a confirmation operation to the edit confirmation control, a target track is displayed, including:
and in response to the confirmation operation of the edit confirmation control, displaying the target drum point track, the target accompaniment track, the target vocal track, and the target bass track on the music editing interface in a preset display order.
That is, after obtaining the target editing parameters, the terminal may modify the initial drum point data of the music to be edited according to the target editing parameters corresponding to the target drum point track, obtaining the target drum point data corresponding to the target drum point track. It modifies the initial accompaniment data according to the target editing parameters corresponding to the target accompaniment track to obtain the target accompaniment data, modifies the initial vocal data according to the target editing parameters corresponding to the target vocal track to obtain the target vocal data, and modifies the initial bass data according to the target editing parameters corresponding to the target bass track to obtain the target bass data.
The terminal then renders the target drum point data, the target vocal data, the target accompaniment data, and the target bass data to the music editing interface to obtain the target drum point track, the target vocal track, the target accompaniment track, and the target bass track.
It should be noted that the target tracks and the initial tracks may or may not have a one-to-one relationship. For example, when the initial tracks are an initial vocal track, an initial bass track, an initial accompaniment track, and an initial drum point track, the target tracks may be a target drum point track, a target accompaniment track, a target vocal track, and a target bass track, or they may be a target drum point track, a target accompaniment track, a target vocal track, a target bass track, and a sound effect track.
Optionally, in response to a confirmation operation of the edit confirmation control, displaying the target track includes:
responding to the confirmation operation of the editing confirmation control, and acquiring attribute data of the target object;
determining the display sequence of the target audio track according to the attribute data of the target object;
and displaying the target track according to the display order.
In this embodiment, the target object's degree of interest in each target track is determined according to the attribute data of the target object, and the target tracks are then sorted in descending order of interest.
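The interest-based ordering could be sketched as below; the interest scores and track names are hypothetical:

```python
# Hypothetical sketch: sort target tracks in descending order of the target
# object's degree of interest. How interest is derived from attribute data is
# not specified, so it is modeled here as a plain score lookup.

def order_tracks(tracks, interest):
    """Sort track names by the target object's degree of interest, descending."""
    return sorted(tracks, key=lambda t: interest.get(t, 0.0), reverse=True)
```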
After the terminal displays the target track, the target object may edit the target track again. Thus, in other embodiments, after displaying the target track in response to a confirmation operation to the edit confirmation control, the method further includes:
responsive to a selection operation of the target track, displaying an editing interface, the editing interface including at least one editing control;
and responding to parameter editing operation on the editing control, and displaying the first edited audio track.
The editing interface may be displayed on the music editing interface in the form of a popup window, for example, as shown in fig. 17. The editing controls may include at least one of a segmentation control, a volume control, a fade control, a speed change control, a delete control, and a copy control.
The segmentation control is used to split the target track corresponding to the selection operation, taking the playing progress bar as the reference. At this time, in response to a parameter editing operation on the editing control, displaying the edited track includes:
responding to parameter editing operation of the splitting control, and splitting a target track corresponding to the selected operation according to the playing progress bar to obtain a second edited track;
and displaying the second edited audio track.
That is, the playing progress bar serves as the dividing line, and the target track corresponding to the selection operation is split at the position of the playing progress bar.
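A minimal sketch of splitting at the progress-bar position, assuming the track is a flat list of samples and the playhead is given in seconds (both assumptions for illustration):

```python
# Hypothetical sketch: split a track's sample list at the playing-progress-bar
# position. The sample-list representation and the seconds-based playhead are
# assumptions; the description only states that the track is split at the
# progress bar.

def split_track(samples, progress_seconds, sample_rate):
    """Return the two segments obtained by cutting at the playhead time."""
    cut = int(progress_seconds * sample_rate)
    cut = max(0, min(cut, len(samples)))  # keep the cut inside the track
    return samples[:cut], samples[cut:]
```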
The volume control is used to adjust the volume of the target track corresponding to the selection operation. The fade control is used to perform fade processing on the target track corresponding to the selection operation. The speed change control is used to perform speed change processing on the sound of the target track corresponding to the selection operation. The delete control is used to delete the sound of the target track corresponding to the selection operation. The copy control is used to copy the target track corresponding to the selection operation.
It should be noted that when there are too many editing controls to display them all on the music editing interface, the terminal may display a preset number of editing controls on the music editing interface and then display the remaining editing controls in response to a sliding instruction on the music editing interface. Alternatively, the music editing interface may include an update control, for example, as shown in fig. 17, and the terminal displays the editing controls not yet shown in response to a triggering operation on the update control.
In other embodiments, after displaying the target track in response to a confirmation operation to the edit confirmation control, further comprising:
in response to a selection operation on the target track, determining the type of the target track corresponding to the selection operation;
determining the display parameters of the target track corresponding to the selection operation according to the type;
and displaying the target track corresponding to the selection operation on the music editing interface according to the display parameters.
In this embodiment, different types of target tracks may correspond to different display parameters. For example, when the type of the target track is the drum point track type, that is, when the target track is the target drum point track, the display parameter is a dynamic magnification parameter. For another example, when the type of the target track is the vocal track type, that is, when the target track is the target vocal track, the display parameters are frame-adding and zoom-in parameters. When the target track corresponding to the selection operation is the target vocal track, the target vocal track may be as shown in fig. 18.
In addition, the terminal can also respond to the dragging instruction of the initial audio track and edit the playing duration of the initial audio track by taking the playing progress bar as a reference.
In other embodiments, the music editing interface includes an add control. Accordingly, after displaying the target track in response to the confirmation operation to the edit confirmation control, further comprising:
responding to a first selection operation of the adding control, and generating a parameter selection interface corresponding to the first selection operation, wherein the parameter selection interface comprises music parameters;
and in response to a second selection operation on the music parameters, performing the operation corresponding to the music parameters selected by the second selection operation on the first edited music to obtain second edited music, where the first edited music is music including the target track.
The add control may be displayed on a bottom area of the music editing interface, for example, as shown in fig. 19. That is, when the terminal responds to a selection operation of the target track, the editing interface is overlaid on the addition control in the form of a popup window.
The add controls may include at least one of an add-sound-effect control, an add-music control, an overall speed change control, an A-B loop control, and a full-selection track control. The add-sound-effect control is used to add a sound effect track on the music editing interface, that is, to add a sound effect track to the edited music. The add-music control is used to add background music to the edited music. The overall speed change control is used to perform speed change processing on the edited music. The A-B loop control is used to play the selected section of the edited music. The full-selection track control is used to select all target tracks.
For example, when the adding control is an adding sound effect control, responding to a first selection operation of the adding control, generating a parameter selection interface corresponding to the first selection operation, wherein the parameter selection interface comprises music parameters and comprises:
in response to a first selection operation on the add-sound-effect control, generating a sound effect selection interface corresponding to the first selection operation, where the sound effect selection interface includes sound effects;
in response to a second selection operation of the music parameters, performing an operation corresponding to the music parameters corresponding to the second selection operation on the first edited music, including:
and in response to a second selection operation on the sound effects, displaying a sound effect track on the music editing interface, where the sound effect track is the track corresponding to the sound effect selected by the second selection operation.
In other words, the parameter selection interface corresponding to the first selection operation is a sound effect selection interface (for example, the selection interface for common street dance sound effects may be as shown in fig. 20); that is, the parameter selection interface is displayed on the music editing interface as a popup window, and the music parameters are the various sound effects. Performing the operation corresponding to the selected music parameters on the edited music means that the terminal displays the sound effect corresponding to the second selection operation and its corresponding track on the music editing interface, and adds that sound effect to the edited music to obtain the second edited music. At this time, the music editing interface may be as shown in fig. 21.
For another example, when the adding control is the add-music control, the parameter selection interface corresponding to the first selection operation is a music selection interface, and the music parameters are various pieces of music. Performing the operation corresponding to the selected music parameters on the first edited music means adding the music corresponding to the second selection operation to the first edited music.
For another example, when the adding control is the A-B loop control, the parameter selection interface corresponding to the first selection operation is a selection interface for the start position and end position of the music, and the music parameters are the start position and end position of the music. Performing the operation corresponding to the selected music parameters on the first edited music means playing the music between the start position and the end position in the first edited music.
For another example, when the adding control is the full-selection track control, the parameter selection interface corresponding to the first selection operation is the editing interface, and the music parameters are all the editing controls. Performing the operation corresponding to the selected music parameters on the first edited music means displaying the second edited track on the music editing interface.
For another example, when the adding control is the overall speed change control, the parameter selection interface corresponding to the first selection operation is a sound speed selection interface, and the music parameters are the sound speeds. Performing the operation corresponding to the selected music parameters on the first edited music means playing the first edited music at the sound speed corresponding to the second selection operation.
In other embodiments, a release control is also included within the music editing interface. The music editing method further includes:
and in response to the triggering operation of the release control, displaying a release interface, wherein the release interface comprises edited music, the edited music comprises one of first edited music, second edited music and third edited music, and the third edited music comprises a second edited audio track.
In other embodiments, a capture control is also included within the music editing interface. The music editing method further includes:
and in response to the triggering operation of the shooting control, displaying a shooting interface corresponding to the edited music, where the edited music includes one of the first edited music, the second edited music, and the third edited music, and the third edited music includes the second edited track.
In other embodiments, the music editing interface includes a play control, and the terminal may play the music to be edited or the music after editing and display a play progress bar in response to a trigger operation on the play control. So after displaying the target track in response to a confirmation operation to the edit confirmation control, it further includes:
when the edited music is played, dynamic-effect playing is performed on the target track according to the playing progress bar.
Or after displaying the music editing interface of the music to be edited for the target object, the method further comprises:
when the music to be edited is played, dynamic-effect playing is performed on the initial track according to the playing progress bar.
In other embodiments, performing dynamic-effect playing on the target track according to the playing progress bar includes:
screening the target drum point track out of the target tracks, and identifying the currently played target drum point in the target drum point track according to the playing progress bar;
and performing dynamic-effect playing on the target drum point track according to the drum point type of the target drum point.
The drum point types include a heavy drum type and a light drum type. The dynamic effect types corresponding to different drum types, that is, the dynamic-effect playing modes corresponding to different drum types, may be different or the same. The dynamic-effect playing mode may be a dynamic amplification mode or a static amplification mode; the specific mode may be selected according to the actual situation, which is not limited in this embodiment.
The method for identifying the target drum point currently played in the target drum point track according to the playing progress bar comprises the following steps:
acquiring the position information of a playing progress bar in a target drum point track and the position interval of each drum point in the target drum point track;
and matching the position information against the position intervals, taking the drum point corresponding to the position interval matching the position information as the currently played target drum point.
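The interval-matching step above can be sketched as follows, assuming each drum point carries a [start, end) position interval (the data layout is an assumption):

```python
# Hypothetical sketch: identify the currently played drum point by matching the
# progress-bar position against each drum point's position interval. The dict
# layout and field names are illustrative.

def current_drum_point(position, drum_points):
    """Return the id of the drum point whose [start, end) interval contains the
    progress-bar position, or None if the playhead is between drum points."""
    for point in drum_points:
        if point["start"] <= position < point["end"]:
            return point["id"]
    return None
```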
However, two drum points on the target drum point track may overlap; that is, the playing progress bar may reach two drum points at the same time. The terminal then plays both drum points reached by the progress bar, meaning two target drum points are screened out, so the earlier of the two drum points is played repeatedly, which results in an error.
To solve the technical problem, in other embodiments, referring to fig. 22, the music editing method further includes:
performing dynamic-effect playing on the target drum point track according to the drum point type of the target drum point, where the dynamic-effect playing includes the following steps:
determining the storage state of the target drum point in the played array;
if the storage state is the not-stored state, acquiring the drum point type of the target drum point, and determining the dynamic effect type of the target drum point according to the drum point type;
and based on the dynamic effect type, performing dynamic effect playing on the target drum point track, and storing the target drum point into a played array.
If the storage state is the not-stored state, indicating that the terminal has not yet played the target drum point, the terminal may perform dynamic-effect playing of the target drum point based on its dynamic effect type and store the target drum point in the played array.
If the storage state is the stored state, the target drum point has already been played and may need to be deleted from the played array. Therefore, in other embodiments, after determining the storage state of the target drum point in the played array, the method further includes:
if the storage state is the stored state, acquiring the current position information of the playing progress bar on the target drum point track;
and deleting the target drum point in the played array when the current position information is not matched with the position interval of the target drum point.
In this embodiment, a played array is set, and then the played target drum points are stored in the played array, so that the terminal can determine whether the target drum points have been played according to the played array, so that the played target drum points are not played repeatedly.
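The played-array mechanism described above (play the effect once, store the drum point, delete it once the playhead leaves its interval) could be sketched as follows; the class and field names are illustrative, not from the patent:

```python
# Hypothetical sketch of the played-array deduplication flow (cf. fig. 22):
# a drum point's effect fires only if it is not in the played set, and its
# entry is removed once the playhead leaves its interval, so looped playback
# can trigger it again.

class DrumPointPlayer:
    def __init__(self):
        self.played = set()  # the "played array" of already-played drum points

    def on_tick(self, position, drum_points):
        """Return the ids of drum points whose effect fires on this tick."""
        fired = []
        for p in drum_points:
            inside = p["start"] <= position < p["end"]
            if inside and p["id"] not in self.played:
                fired.append(p["id"])         # not stored yet: play the effect
                self.played.add(p["id"])      # store it in the played array
            elif not inside and p["id"] in self.played:
                self.played.discard(p["id"])  # playhead left the interval: delete
        return fired
```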
The preset dynamic effect may be in a static amplification form or a dynamic amplification form, and the user may select the preset dynamic effect according to the actual situation, which is not limited in this embodiment.
In other embodiments, the music editing interface further includes an adjustment control for the target track. After displaying the target track in response to a confirmation operation to the edit confirmation control, further comprising:
Responding to the triggering operation of the adjustment control, and acquiring the current playing volume of the audio file of the adjustment target audio track corresponding to the adjustment control;
when the current playing volume exceeds the mute volume, the current playing volume is adjusted to the mute volume, and a cover layer is added to the adjustment target audio track so as to hide the adjustment target audio track in the music editing interface, and adjusted music is obtained.
After obtaining each target track, the target track data of each target track is equivalent to an independent m4a format audio file, and the terminal can respond to the triggering operation of the adjusting control to play and stop playing the audio file of the target track.
Therefore, when the terminal responds to the triggering operation of the adjustment control, it acquires the current playing volume of the audio file of the adjustment target audio track corresponding to the adjustment control and stores it as the historical playing volume. When the current playing volume exceeds the mute volume, the current playing volume is adjusted to the mute volume to obtain the adjusted music, and a cover layer is added to the adjustment target audio track so as to hide it on the music editing interface.
When the current playing volume does not exceed the mute volume, the playing volume of the audio file corresponding to the adjustment target audio track is adjusted back to the historical playing volume, and the cover layer on the adjustment target audio track is removed so that the track is displayed on the music editing interface again.
After the adjustment target audio track has been hidden in this way, the terminal can respond to another triggering operation of the adjustment control by removing the cover layer to display the adjustment target audio track on the music editing interface and restoring the playing volume of its audio file, thereby realizing playing-volume recovery and visual recovery of the adjustment target audio track.
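A minimal sketch of this mute/restore toggle, with the historical playing volume and the cover layer modeled as plain fields (all names are illustrative):

```python
# Hypothetical sketch: toggle a single track between muted/hidden and restored,
# remembering the pre-mute volume (the "historical playing volume") so it can
# be restored. The mute threshold of 0.0 is an assumption; the description
# allows other thresholds.

class TrackMuteToggle:
    MUTE_VOLUME = 0.0

    def __init__(self, volume):
        self.volume = volume
        self.history_volume = volume
        self.hidden = False  # whether the cover layer is present

    def toggle(self):
        if self.volume > self.MUTE_VOLUME:
            self.history_volume = self.volume  # save as historical volume
            self.volume = self.MUTE_VOLUME     # mute the track
            self.hidden = True                 # add the cover layer
        else:
            self.volume = self.history_volume  # restore the playing volume
            self.hidden = False                # remove the cover layer
```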
The mute volume may be 0, or may be another volume threshold, which may be set by the user according to the actual situation, which is not limited in this embodiment.
The adjustment control may be the identification of the target track, or it may be a separately provided control. Each target track has a corresponding adjustment control, so the terminal can mute the single target track corresponding to a triggering operation in response to that operation on its adjustment control. When the terminal adjusts the current playing volume of one target track to the mute volume and hides that track, the other target tracks are still displayed normally and their audio files can still be played normally.
In addition, the terminal can adjust the current playing volumes of multiple target tracks to the mute volume so that only the audio file of a single target track is finally played, allowing the target object to better understand the sound of that single target track in the music and thus to understand the music layer by layer.
It should be noted that the above method for processing the target track is applicable to any track displayed on the music editing interface; that is, it also applies to the initial track or to other tracks displayed on the music editing interface, which is not described again here.
As can be seen from the above, in the embodiment of the present application, a music editing interface of the music to be edited for the target object is displayed, the music editing interface includes an edit confirmation control and an initial track of the music to be edited, and the initial track is at least one track obtained by performing track separation on the music to be edited. And then, responding to the confirmation operation of the target object on the editing confirmation control, and displaying a target sound track, wherein the target sound track is a sound track obtained by editing the initial sound track according to target editing parameters matched with the target object.
In the application, the displayed music editing interface comprises the editing confirmation control, so that the target track can be displayed in response to the confirmation operation of the target object on the editing confirmation control, and the target track is the track obtained after the initial track is edited according to the target editing parameters matched with the target object, so that the automatic editing of the initial track of the music to be edited can be realized, and the convenience of editing the initial track is improved.
The method described in the above embodiments is described in further detail below by way of example.
Referring to fig. 23, fig. 23 is a flowchart illustrating a music editing method according to an embodiment of the application. The music editing method flow may include:
S2301, the terminal displays a music editing interface of the music to be edited for the target object, where the music editing interface includes an edit confirmation control, an add control, a release control, a shooting control, and the initial tracks of the music to be edited, the initial tracks including an initial drum point track, an initial vocal track, an initial accompaniment track, and an initial bass track.
The process of displaying the initial audio track by the terminal may be as shown in fig. 24, where the terminal sends the identifier of the music to be edited to the server through the client, and the server performs authority verification on the client. If the authority verification is passed, the server returns the initial audio track data to the terminal, and the terminal renders the initial audio track data to the music editing interface. And if the authority verification is not passed, returning error information to the terminal.
S2302, the terminal responds to the confirmation operation of the editing confirmation control to acquire the editing object of the music to be edited, and calculates the similarity between the attribute data of the editing object and the attribute data of the target object.
S2303, using the editing parameters corresponding to the editing objects with similarity higher than the preset threshold as target editing parameters matched with the target objects.
For example, as shown in fig. 25, the terminal may acquire the editing objects of the music to be edited from the music editing database according to the identification of the music to be edited, and then calculate the similarity between the attribute data of each editing object and the attribute data of the target object. If the similarity is higher than the preset threshold, the editing parameters corresponding to that editing object are taken as the target editing parameters matched with the target object. If the similarity is less than or equal to the preset threshold, a prompt that no editing parameters were found is displayed.
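As an illustration of the similarity screening, the sketch below uses a simple attribute-overlap (Jaccard) measure as a stand-in; the description does not specify how similarity is computed, so the measure, threshold, and data layout are all assumptions:

```python
# Hypothetical sketch: keep the editing parameters of editing objects whose
# attribute-data similarity with the target object exceeds a preset threshold.
# Jaccard overlap of attribute lists stands in for the unspecified similarity.

def matching_edit_params(editors, target_attrs, threshold=0.8):
    def similarity(a, b):
        if not a or not b:
            return 0.0
        return len(set(a) & set(b)) / len(set(a) | set(b))  # Jaccard overlap

    return [e["params"] for e in editors
            if similarity(e["attrs"], target_attrs) > threshold]
```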
Alternatively, the terminal may acquire the target editing parameters from the server. For example, as shown in fig. 26, in response to a confirmation operation of the edit confirmation control, the terminal sends an acquisition request to the server, where the acquisition request includes an identifier of music to be edited and a target account number of the target object. After receiving the acquisition request, the server acquires the editing object of the music to be edited according to the identification of the music to be edited, and calculates the similarity between the attribute data of the editing object and the attribute data of the target object. Finally, the server takes the editing parameters corresponding to the editing objects with similarity higher than the preset threshold value as target editing parameters matched with the target objects, and returns the target editing parameters to the terminal, so that the terminal obtains the target editing parameters.
S2304, the terminal edits the initial audio track according to the target editing parameters to obtain target audio tracks, and displays the target audio tracks on the music editing interface, where the target audio tracks include a target drum track, a target vocal track, a target accompaniment track, and a target bass track.
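A minimal sketch of step S2304, assuming the target editing parameters take the form of per-track gains in decibels; the patent does not specify the parameter format, so this representation and the function name are illustrative.

```python
def apply_editing_parameters(tracks, params):
    """Apply per-track gain editing parameters to the separated tracks
    (e.g. drum, vocal, accompaniment, bass).

    tracks: {track_name: [samples]}; params: {track_name: gain_db}.
    Tracks without a parameter are passed through unchanged (0 dB)."""
    edited = {}
    for name, samples in tracks.items():
        gain = 10 ** (params.get(name, 0.0) / 20.0)  # dB -> linear factor
        edited[name] = [s * gain for s in samples]
    return edited
```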
S2305, the terminal responds to the selection operation of the target audio track, and displays an editing interface, wherein the editing interface comprises at least one editing control.
S2306, the terminal responds to the parameter editing operation on the editing control by displaying a second edited audio track, where the second edited audio track is an audio track obtained by editing the target audio track according to the parameter editing operation.
S2307, the terminal responds to a first selection operation on the add control by displaying a parameter selection interface corresponding to the first selection operation, where the parameter selection interface includes music parameters.
S2308, the terminal responds to the second selection operation of the music parameters, and executes the operation corresponding to the music parameters corresponding to the second selection operation on the first edited music to obtain second edited music, wherein the first edited music comprises the target track.
S2309, the terminal responds to the selection operation of the initial audio track and displays an editing interface.
S2310, the terminal responds to the editing trigger operation on the editing control by displaying a third edited audio track, where the third edited audio track is an audio track obtained by editing the initial audio track according to the editing trigger operation.
S2311, the terminal responds to the editing trigger operation on the editing control by storing, in association, the editing parameters corresponding to the editing trigger operation, the music to be edited, and the target object.
S2312, the terminal responds to the trigger operation on the release control by displaying a release interface, where the release interface includes edited music, the edited music being one of first edited music, second edited music, third edited music, and fourth edited music, and the fourth edited music includes the third edited audio track.
S2313, the terminal responds to the trigger operation on the shooting control by displaying a shooting interface corresponding to the edited music, where the edited music is one of the first edited music, the second edited music, the third edited music, and the fourth edited music.
For example, as shown in fig. 27, the terminal acquires the data of the initial audio track, draws the initial audio track on the music editing interface according to that data, and displays the music editing interface. In response to the confirmation operation on the editing confirmation control, the terminal obtains the target editing parameters matched with the target object, modifies the initial audio track data according to the target editing parameters to obtain target audio track data, and draws the target audio track based on the target audio track data. In response to the selection of the target audio track, the terminal displays an editing interface. In response to the parameter editing operation on the editing control, the terminal edits the target track accordingly and displays the second edited track.
In response to a trigger operation on the release control, the terminal determines whether the third edited music (the music including the second edited track) is edited music. If so, the terminal displays the release interface of the third edited music; if not, the terminal sets the release control to a locked state, in which the target object cannot trigger it. In response to a trigger operation on the shooting control, the terminal starts shooting with the third edited music.
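The release-control gating described above can be sketched as follows; the state names and the data layout are assumptions for illustration.

```python
def release_control_state(music):
    """Return 'unlocked' if the music contains at least one edited track
    (i.e. it qualifies as edited music), otherwise 'locked' so the
    target object cannot trigger the release control."""
    return "unlocked" if music.get("edited_tracks") else "locked"
```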
For the specific implementation and the corresponding beneficial effects of this embodiment, reference may be made to the music editing method described above; details are not repeated here.
In order to better implement the music editing method provided by the embodiments of the present application, an apparatus based on the music editing method is also provided. The terms used here have the same meanings as in the music editing method described above; for specific implementation details, reference may be made to the description of the method embodiments.
For example, as shown in fig. 28, the music editing apparatus may include:
the first display module 2801 is configured to display a music editing interface of the to-be-edited music for the target object, where the music editing interface includes an edit confirmation control and an initial audio track of the to-be-edited music, and the initial audio track is at least one audio track obtained by performing audio track separation on the to-be-edited music;
The second display module 2802 is configured to display a target track in response to a confirmation operation of the edit confirmation control, where the target track is a track obtained by editing the initial track according to a target editing parameter matched with the target object.
Optionally, the second display module 2802 is specifically configured to perform:
responding to the confirmation operation of the editing confirmation control, and acquiring at least one editing object of the music to be edited;
matching at least one editing object with a target object;
taking the editing parameters corresponding to the editing objects matched with the target objects as target editing parameters matched with the target objects;
editing the initial audio track according to the target editing parameters to obtain a target audio track;
the target track is displayed.
Optionally, the second display module 2802 is specifically configured to perform:
calculating the similarity between the attribute data of at least one editing object and the attribute data of the target object;
and taking the editing object with the similarity higher than the preset threshold value as the editing object matched with the target object.
Optionally, the second display module 2802 is specifically configured to perform:
taking the editing object matched with the target object as a candidate editing object, and displaying an object interface, wherein the object interface comprises the candidate editing object and editing parameters corresponding to the candidate editing object;
And responding to the object selection operation of the target object on the candidate editing object, and taking the editing parameters of the candidate editing object corresponding to the object selection operation as target editing parameters matched with the target object.
Optionally, the second display module 2802 is specifically configured to perform:
taking editing parameters corresponding to the editing objects matched with the target objects as candidate editing parameters, and displaying a parameter interface, wherein the parameter interface comprises the candidate editing parameters;
and responding to the parameter selection operation of the target object on the candidate editing parameters, and screening target editing parameters matched with the target object from the candidate editing parameters.
Optionally, the second display module 2802 is specifically configured to perform:
acquiring editing parameters corresponding to an editing object matched with a target object in at least one time period;
acquiring the frequency of use of editing parameters corresponding to each time period;
and taking the editing parameter corresponding to the time period with the highest frequency of use as a target editing parameter matched with the target object.
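A minimal sketch of this frequency-of-use selection, assuming the per-period usage counts and per-period editing parameters are available as mappings (an illustrative data layout the patent does not prescribe):

```python
def most_used_params(period_params, period_counts):
    """period_params: {period: edit_params}; period_counts: {period: use_count}.
    Return the editing parameters of the time period with the highest
    frequency of use, as the target editing parameters."""
    top_period = max(period_counts, key=period_counts.get)
    return period_params[top_period]
```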
Optionally, the second display module 2802 is specifically configured to perform:
screening a first initial sound track and a second initial sound track from the initial sound tracks, wherein the first initial sound track is an initial sound track in which no corresponding editing parameters exist in editing parameters corresponding to an editing object matched with a target object, and the second initial sound track is an initial sound track in which corresponding editing parameters exist in editing parameters corresponding to the editing object matched with the target object;
Determining editing parameters of the first initial audio track according to the editing parameters of the second initial audio track;
and taking the editing parameters of the second initial audio track and the editing parameters of the first initial audio track as target editing parameters matched with the target object.
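One way to determine editing parameters for the first initial tracks (those lacking parameters) from the second initial tracks (those that have them) is sketched below. Averaging the known gains is an assumed derivation rule, since the patent leaves the derivation unspecified.

```python
def complete_track_params(all_tracks, known_params):
    """all_tracks: list of track names; known_params: {track: gain} for the
    second initial tracks. Tracks missing from known_params (the first
    initial tracks) receive the mean of the known gains, so the combined
    mapping serves as the target editing parameters."""
    if not known_params:
        return {t: 0.0 for t in all_tracks}  # nothing known: neutral gain
    mean = sum(known_params.values()) / len(known_params)
    return {t: known_params.get(t, mean) for t in all_tracks}
```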
Optionally, the second display module 2802 is specifically configured to perform:
screening a first editing object and a second editing object from editing objects matched with the target object, wherein the editing parameters of the first editing object comprise the editing parameters of a third initial audio track in the initial audio tracks, and the editing parameters of the second editing object comprise the editing parameters of a fourth initial audio track in the initial audio tracks;
and taking the editing parameters of the first editing object and the editing parameters of the second editing object as target editing parameters matched with the target object.
Optionally, the music editing apparatus further includes:
a first editing module for performing:
when detecting that the target track data of the target track is abnormal, displaying an editing interface of the abnormal track, where the editing interface includes an editing control and the abnormal track is a target track whose target track data is abnormal;
and responding to the parameter editing operation of the editing control, displaying a first edited sound track, wherein the first edited sound track is a sound track obtained by editing the abnormal sound track according to the parameter editing operation.
Optionally, the second display module 2802 is specifically configured to perform:
responding to the confirmation operation of the editing confirmation control, and acquiring attribute data of the target object;
determining the display sequence of the target audio track according to the attribute data of the target object;
according to the display order, the target track is displayed.
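A sketch of determining a display order from the target object's attribute data, assuming the attribute data includes per-track preference scores; this score-based form is an illustrative assumption.

```python
def track_display_order(tracks, preferences):
    """Sort track names so the tracks the target object prefers most
    are displayed first; unscored tracks default to 0.0."""
    return sorted(tracks, key=lambda t: preferences.get(t, 0.0), reverse=True)
```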
Optionally, the second display module 2802 is specifically configured to perform:
determining the type of the target audio track corresponding to the selected operation in response to the selected operation of the target audio track;
determining display parameters of a target audio track corresponding to the selected operation according to the type;
and displaying the target audio track corresponding to the selected operation on the music editing interface according to the display parameters.
In a specific implementation, the above modules may each be implemented as an independent entity, or may be combined arbitrarily and implemented as one or more entities. For the specific implementation and the corresponding beneficial effects of each module, reference may be made to the foregoing method embodiments; details are not repeated here.
The embodiment of the present application further provides an electronic device, which may be a server or a terminal. Fig. 29 shows a schematic structural diagram of the electronic device according to the embodiment of the present application. Specifically:
The electronic device may include a processor 2901 having one or more processing cores, a memory 2902 of one or more computer-readable storage media, a power supply 2903, an input unit 2904, and other components. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 29 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
the processor 2901 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing computer programs and/or modules stored in the memory 2902, and invoking data stored in the memory 2902. Optionally, the processor 2901 may include one or more processing cores; preferably, the processor 2901 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 2901.
The memory 2902 may be used to store computer programs and modules, and the processor 2901 executes various functional applications and performs data processing by running the computer programs and modules stored in the memory 2902. The memory 2902 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and computer programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the electronic device, etc. In addition, the memory 2902 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 2902 may also include a memory controller to provide the processor 2901 with access to the memory 2902.
The electronic device further includes a power supply 2903 for powering the various components. Preferably, the power supply 2903 may be logically coupled to the processor 2901 through a power management system, so that charging, discharging, and power consumption management are performed through the power management system. The power supply 2903 may also include one or more of a direct-current or alternating-current power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
The electronic device may also include an input unit 2904, which input unit 2904 may be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 2901 in the electronic device loads executable files corresponding to the processes of one or more computer programs into the memory 2902 according to the following instructions, and the processor 2901 executes the computer programs stored in the memory 2902, so as to implement various functions, such as:
displaying a music editing interface of the music to be edited aiming at the target object, wherein the music editing interface comprises an editing confirmation control and an initial track of the music to be edited, and the initial track is at least one track obtained after the track separation of the music to be edited;
and responding to the confirmation operation of the editing confirmation control, displaying a target track, wherein the target track is obtained by editing the initial track according to target editing parameters matched with the target object.
The specific embodiments and the corresponding beneficial effects of the above operations are described in detail in the music editing method above and are not repeated here.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the various methods of the above embodiments may be performed by a computer program, or by related hardware controlled by a computer program, and the computer program may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, the computer program being capable of being loaded by a processor to perform the steps of any of the music editing methods provided by the embodiments of the present application. For example, the computer program may perform the following steps:
displaying a music editing interface of the music to be edited aiming at the target object, wherein the music editing interface comprises an editing confirmation control and an initial track of the music to be edited, and the initial track is at least one track obtained after the track separation of the music to be edited;
and responding to the confirmation operation of the editing confirmation control, displaying a target track, wherein the target track is obtained by editing the initial track according to target editing parameters matched with the target object.
The specific embodiments and the corresponding beneficial effects of each of the above operations can be found in the foregoing embodiments, and are not described herein again.
The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Since the computer program stored in the computer-readable storage medium can execute the steps of any music editing method provided by the embodiments of the present application, it can achieve the beneficial effects of any such method; for details, reference is made to the foregoing embodiments, which are not repeated here.
According to an aspect of the application, a computer program product or a computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the music editing method described above.
The music editing method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present application have been described in detail above. Specific examples have been used herein to illustrate the principles and embodiments of the present application, and the above description of the embodiments is intended only to aid understanding of the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application; therefore, the content of this description should not be construed as limiting the present application.

Claims (15)

1. A music editing method, comprising:
displaying a music editing interface of the music to be edited aiming at a target object, wherein the music editing interface comprises an editing confirmation control and an initial track of the music to be edited, and the initial track is at least one track obtained by carrying out track separation on the music to be edited;
and responding to the confirmation operation of the editing confirmation control, displaying a target sound track, wherein the target sound track is a sound track obtained after the initial sound track is edited according to target editing parameters matched with the target object.
2. The music editing method according to claim 1, wherein the displaying a target track in response to a confirmation operation of the edit confirmation control includes:
responding to the confirmation operation of the editing confirmation control, and acquiring at least one editing object of the music to be edited;
matching the at least one editing object with the target object;
taking the editing parameters corresponding to the editing objects matched with the target objects as target editing parameters matched with the target objects;
editing the initial audio track according to the target editing parameters to obtain a target audio track;
The target track is displayed.
3. The music editing method according to claim 2, wherein said matching the at least one editing object with a target object comprises:
calculating the similarity between the attribute data of the at least one editing object and the attribute data of the target object;
and taking the editing object with the similarity higher than the preset threshold value as the editing object matched with the target object.
4. The music editing method according to claim 2, wherein the taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object comprises:
taking the editing object matched with the target object as a candidate editing object, and displaying an object interface, wherein the object interface comprises the candidate editing object and editing parameters corresponding to the candidate editing object;
and responding to the object selection operation of the target object on the candidate editing object, and taking the editing parameter of the candidate editing object corresponding to the object selection operation as a target editing parameter matched with the target object.
5. The music editing method according to claim 2, wherein the taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object comprises:
Taking editing parameters corresponding to the editing objects matched with the target objects as candidate editing parameters, and displaying a parameter interface, wherein the parameter interface comprises the candidate editing parameters;
and responding to the parameter selection operation of the target object on the candidate editing parameters, and screening target editing parameters matched with the target object from the candidate editing parameters.
6. The music editing method according to claim 2, wherein the taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object comprises:
acquiring editing parameters corresponding to the editing object matched with the target object in at least one time period;
acquiring the frequency of use of the editing parameters corresponding to each time period;
and taking the editing parameter corresponding to the time period with the highest frequency of use as a target editing parameter matched with the target object.
7. The music editing method according to claim 2, wherein the taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object comprises:
Screening a first initial audio track and a second initial audio track from the initial audio tracks, wherein the first initial audio track is an initial audio track in which no corresponding editing parameters exist in editing parameters corresponding to an editing object matched with the target object, and the second initial audio track is an initial audio track in which corresponding editing parameters exist in editing parameters corresponding to the editing object matched with the target object;
determining editing parameters of the first initial audio track according to the editing parameters of the second initial audio track;
and taking the editing parameters of the second initial audio track and the editing parameters of the first initial audio track as target editing parameters matched with the target object.
8. The music editing method according to claim 2, wherein the taking the editing parameters corresponding to the editing object matched with the target object as the target editing parameters matched with the target object comprises:
screening a first editing object and a second editing object from editing objects matched with the target object, wherein the editing parameters of the first editing object comprise the editing parameters of a third initial sound track in the initial sound tracks, and the editing parameters of the second editing object comprise the editing parameters of a fourth initial sound track in the initial sound tracks;
And taking the editing parameters of the first editing object and the editing parameters of the second editing object as target editing parameters matched with the target object.
9. The music editing method according to claim 1, characterized by further comprising, after said displaying a target track in response to a confirmation operation of said edit confirmation control:
when detecting that the target track data of the target track is abnormal, displaying an editing interface of the abnormal track, wherein the editing interface comprises an editing control, and the abnormal track is a target track whose target track data is abnormal;
and responding to the parameter editing operation of the editing control, displaying a first edited audio track, wherein the first edited audio track is an audio track obtained by editing the abnormal audio track according to the parameter editing operation.
10. The music editing method according to claim 1, wherein the displaying a target track in response to a confirmation operation of the edit confirmation control includes:
responding to the confirmation operation of the editing confirmation control, and acquiring attribute data of the target object;
determining the display sequence of the target audio track according to the attribute data of the target object;
And displaying the target audio track according to the display sequence.
11. The music editing method according to claim 1, characterized by further comprising, after said displaying a target track in response to a confirmation operation of said edit confirmation control:
responding to the selected operation of the target audio track, and determining the type of the target audio track corresponding to the selected operation;
according to the type, determining display parameters of the target audio track corresponding to the selected operation;
and displaying the target audio track corresponding to the selected operation on the music editing interface according to the display parameters.
12. A music editing apparatus, comprising:
the first display module is used for displaying a music editing interface of the music to be edited aiming at the target object, wherein the music editing interface comprises an editing confirmation control and an initial sound track of the music to be edited, and the initial sound track is at least one sound track obtained after the sound track of the music to be edited is separated;
and the second display module is used for responding to the confirmation operation of the editing confirmation control and displaying a target sound track, wherein the target sound track is a sound track obtained after the initial sound track is edited according to the target editing parameters matched with the target object.
13. An electronic device comprising a processor and a memory, the memory storing a computer program, the processor being configured to execute the computer program in the memory to perform the music editing method of any of claims 1 to 11.
14. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program adapted to be loaded by a processor for performing the music editing method of any of claims 1 to 11.
15. A computer program product, characterized in that the computer program product comprises a computer program adapted to be loaded by a processor for performing the music editing method of any of claims 1 to 11.
CN202210357396.XA 2022-04-01 2022-04-01 Music editing method, apparatus, electronic device, and computer-readable storage medium Pending CN116935817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210357396.XA CN116935817A (en) 2022-04-01 2022-04-01 Music editing method, apparatus, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210357396.XA CN116935817A (en) 2022-04-01 2022-04-01 Music editing method, apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116935817A true CN116935817A (en) 2023-10-24

Family

ID=88391266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210357396.XA Pending CN116935817A (en) 2022-04-01 2022-04-01 Music editing method, apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116935817A (en)

Similar Documents

Publication Publication Date Title
US9532136B2 (en) Semantic audio track mixer
US9031243B2 (en) Automatic labeling and control of audio algorithms by audio recognition
CN116612731A (en) Network-based processing and distribution of multimedia content for live musical performances
MX2011012749A (en) System and method of receiving, analyzing, and editing audio to create musical compositions.
CN104040618A (en) System and method for producing a more harmonious musical accompaniment and for applying a chain of effects to a musical composition
CN101657816A (en) The portal website that is used for distributed audio file editing
WO2019114015A1 (en) Robot performance control method and robot
WO2015092492A1 (en) Audio information processing
WO2023207472A1 (en) Audio synthesis method, electronic device and readable storage medium
US9037278B2 (en) System and method of predicting user audio file preferences
US20230186782A1 (en) Electronic device, method and computer program
JP2023527473A (en) AUDIO PLAYING METHOD, APPARATUS, COMPUTER-READABLE STORAGE MEDIUM AND ELECTRONIC DEVICE
CN114491140A (en) Audio matching detection method and device, electronic equipment and storage medium
CN112422999B (en) Live content processing method and computer equipment
JP2008216486A (en) Music reproduction system
KR101813704B1 (en) Analyzing Device and Method for User's Voice Tone
CN116935817A (en) Music editing method, apparatus, electronic device, and computer-readable storage medium
CN106448710B (en) A kind of calibration method and music player devices of music play parameters
Wilmering et al. Audio effect classification based on auditory perceptual attributes
CN116939323A (en) Music matching method, device, electronic equipment and computer readable storage medium
CN116932810A (en) Music information display method, device and computer readable storage medium
US11943591B2 (en) System and method for automatic detection of music listening reactions, and mobile device performing the method
KR20180099375A (en) Method of searching highlight in multimedia data and apparatus therof
CN116932809A (en) Music information display method, device and computer readable storage medium
JPH11167388A (en) Music player device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination