CN111680185A - Music generation method, music generation device, electronic device and storage medium - Google Patents

Music generation method, music generation device, electronic device and storage medium

Info

Publication number
CN111680185A
CN111680185A (application number CN202010478115.7A)
Authority
CN
China
Prior art keywords
music
attribute information
confirmation instruction
candidate
pieces
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010478115.7A
Other languages
Chinese (zh)
Inventor
蒋慧军
黄尹星
姜凯英
韩宝强
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010478115.7A priority Critical patent/CN111680185A/en
Publication of CN111680185A publication Critical patent/CN111680185A/en
Priority to PCT/CN2020/134835 priority patent/WO2021115311A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/61 Indexing; Data structures therefor; Storage structures
    • G06F 16/63 Querying
    • G06F 16/635 Filtering based on additional data, e.g. user or group profiles
    • G06F 16/65 Clustering; Classification
    • G06F 16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 Retrieval using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to the field of data processing technologies, and in particular to a music generation method and apparatus, an electronic device, and a storage medium. The music generation method comprises the following steps: receiving a first confirmation instruction, and obtaining first music attribute information and an emotion label of the music piece according to the first confirmation instruction; determining second music attribute information of the music piece based on the emotion label and a preset association relation between emotion labels and music attributes; traversing a pre-constructed music database with the first music attribute information and the second music attribute information as screening conditions to obtain candidate music segments matching the screening conditions; and selecting music segments from the candidate segments according to a received second confirmation instruction, and splicing the selected segments in the order of their corresponding bars to obtain a new music piece. With the method provided by the application, a new music piece that meets the user's requirements can be generated in a controllable manner.

Description

Music generation method, music generation device, electronic device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a music generation method and apparatus, an electronic device, and a storage medium.
Background
For music creation, a creator needs corresponding music theory knowledge, so many non-professionals who love music cannot create music that matches their own preferences. Automatically composing music melodies, and in particular complete melodies with a specific style and emotion, has long been a problem waiting to be solved.
With the development of computer technology, a number of aids have appeared to help non-professionals create music, for example by generating music with a deep learning model. However, music created in this way is lacking in professional quality, the generated melody carries a degree of randomness, and the generation process cannot be reproduced; in other words, music that meets the user's requirements cannot be generated in a controllable way.
Disclosure of Invention
The application provides a music generation method, a music generation device, an electronic device and a storage medium, and aims to generate, in a controllable manner, a new music piece that meets the user's requirements.
The embodiment of the application firstly provides a music generating method, which comprises the following steps:
receiving a first confirmation instruction, and obtaining first music attribute information and emotion labels of the music according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction of the first music attribute information and the emotion label;
determining second music attribute information of the music piece based on the emotion label and a preset association relation between emotion labels and music attributes;
traversing a pre-constructed music database by taking the first music attribute information and the second music attribute information as screening conditions to obtain candidate music pieces matched with the screening conditions; the music database stores a plurality of music pieces which are divided in advance according to music attributes;
selecting music pieces from the candidate music pieces according to the received second confirmation instruction, and splicing the music pieces according to the sequence of the corresponding music measures to obtain new music pieces; wherein the second confirmation instruction is a selection confirmation instruction for the music piece.
Optionally, after the step of obtaining a new music piece, the method further includes:
and acquiring a melody curve of the new music, and adjusting the melody curve according to the curve characteristics of the melody curve to obtain the optimized music.
Optionally, the step of adjusting the melody curve according to the curve feature includes:
and detecting that the melody curve has small amplitude convolution, and adjusting the middle sound in the melody curve in the direction opposite to the direction of the intonation sound.
Optionally, the step of determining second music attribute information of the music composition based on the preset association relationship between the emotion tag and the music attribute and the emotion tag includes:
quantifying the emotion labels through a pre-constructed emotion model;
determining candidate second music attribute information of the music according to the quantized emotion labels;
and receiving and analyzing a third confirmation instruction, and determining second music attribute information of the music from the candidate second music attribute information according to analysis information of the third confirmation instruction, wherein the third confirmation instruction is a selection confirmation instruction of the second music attribute.
Optionally, the step of traversing a pre-constructed music database with the first music attribute information and the second music attribute information as filtering conditions includes:
acquiring orchestration information of the music piece;
traversing a pre-constructed music database according to the first music attribute information, the second music attribute information and the orchestration information.
Optionally, after the step of obtaining the candidate music pieces matching the filtering condition, the method further includes:
determining the matching priority of each music attribute in the first music attribute information and the second music attribute information according to a preset rule;
sequencing the candidate music pieces according to the matching priority of each music attribute;
obtaining the ordered candidate music pieces;
the selecting music segments from the candidate music segments according to the received second confirmation instruction, and splicing the music segments according to the sequence of the corresponding music bars to obtain a new music piece, comprising:
and selecting music segments from the sorted candidate music segments according to the received second confirmation instruction, and splicing the music segments according to the sequence of the corresponding music bars to obtain new music.
Optionally, after the step of obtaining a new music piece, the method further includes:
acquiring feedback information of the user on the new music, and adjusting at least one of first music attribute information, second music attribute information and candidate music segments based on the feedback information;
and determining the adjusted music segment based on at least one of the adjusted first music attribute information, the adjusted second music attribute information and the adjusted candidate music segment, and obtaining the adjusted optimized music based on the adjusted music segment.
Accordingly, an embodiment of the present application further provides a music generating apparatus, including:
the first music attribute information obtaining module is used for receiving a first confirmation instruction and obtaining first music attribute information and an emotion label of the music piece according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion label;
the second music attribute information determining module is used for determining second music attribute information of the music piece based on the emotion label and the preset association relation between emotion labels and music attributes;
the module for obtaining candidate music segments is used for traversing a pre-constructed music database by taking the first music attribute information and the second music attribute information as screening conditions to obtain candidate music segments matched with the screening conditions; the music database stores a plurality of music pieces which are divided in advance according to music attributes;
the new music acquiring module is used for selecting music fragments from the candidate music fragments according to the received second confirmation instruction, and splicing the music fragments according to the sequence of the corresponding music measures to acquire new music; wherein the second confirmation instruction is a selection confirmation instruction for the music piece.
Further, an embodiment of the present application also provides an electronic device, where the electronic device includes: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the music generation method as described above when executing the program.
Further, to achieve the above object, the present application also provides a computer-readable storage medium including a music generation program, which when executed by a processor, implements the steps in the music generation method as described above.
Compared with the prior art, the scheme provided by the application at least has the following advantages:
according to the music generation method provided by the embodiment of the application, the first music attribute information and the second music attribute information of the music are sequentially determined based on the first confirmation instruction, the music database is traversed by utilizing the first music attribute information and the second music attribute information to obtain a plurality of candidate music fragments meeting the screening condition, the music fragments are determined according to the second confirmation instruction, and the music fragments are spliced to form a new music. The first confirmation instruction provided by the application is based on the selection of a user on the music attribute, the second confirmation instruction is based on the selection of a candidate music segment by the user, the music segment is determined by utilizing the first confirmation instruction and the second confirmation instruction, namely, the music attribute and the music segment are determined in the new music generation process based on the selection of the user, so that the generated music is highly related to the preference of the user, the man-machine interaction in the music generation process is strong, and compared with deep learning and other modes, the music generated by the application has controllability and reproducibility of the music, namely, the process of generating the music can be reproduced. The music generation method provided by the application stores music measures in advance according to music attribute information aiming at massive music data samples to obtain a music database, and in the process of generating music, music segments matched with the music data samples are screened by using the music attributes selected by a user, wherein the music segments comprise a plurality of music measures.
Further, according to the music generation scheme provided by the application, after the new music is generated, the melody curve is adjusted according to the curve characteristics, so that the melody of the optimized music after adjustment is more unique.
Drawings
Fig. 1 is a flowchart of a music generation method according to an embodiment of the present application;
fig. 2 is a flowchart illustrating determining second music attribute information of a music piece based on a preset association relationship between emotion labels and music attributes and the emotion labels according to an embodiment of the present application;
FIG. 3 is a diagram of a Valence-Arousal dimensional emotion model according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the adjustment of a melody curve that has a small-amplitude turn, in which the middle tone of the turn is moved in the direction opposite to its original pitch motion, according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a music generating apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present application. It should be understood that the drawings and embodiments of the present application are for illustration purposes only and are not intended to limit the scope of the present application.
It should be understood that the various steps recited in the method embodiments of the present application may be performed in a different order and/or in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present application is not limited in this respect.
The term "include" and its variants, as used herein, are inclusive, i.e., "including but not limited to"; the term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It is noted that the modifiers "a", "an" and "the" in this application are illustrative rather than limiting, and those skilled in the art will understand them as meaning "one or more" unless the context clearly dictates otherwise.
The following describes the technical solutions of the present application and how to solve the above technical problems in detail with specific embodiments. The following embodiments may be combined, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
An embodiment of the present application first provides a music generation method; fig. 1 is a flowchart of the music generation method provided in an embodiment of the present application. The method may be executed by a device implemented in software and/or hardware, and may be executed at a user side that includes a human-computer interaction interface.
S110, receiving a first confirmation instruction, and obtaining first music attribute information and an emotion label of the music piece according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion label;
S120, determining second music attribute information of the music piece based on the emotion label and the preset association relation between emotion labels and music attributes;
S130, traversing a pre-constructed music database with the first music attribute information and the second music attribute information as screening conditions to obtain candidate music segments matching the screening conditions; the music database stores a plurality of music segments divided in advance according to music attributes;
S140, selecting music segments from the candidate music segments according to a received second confirmation instruction, and splicing the segments in the order of the corresponding bars to obtain a new music piece; the second confirmation instruction is a selection confirmation instruction for the music segments.
The first music attribute information provided by the application at least comprises: music genre, musical structure, length, key and time signature; the second music attribute information comprises: mode, harmonic progression, tempo and rhythm pattern information. The overall structure of the music piece can be determined from the first music attribute information, and the emotion expressed by the music piece is determined from the second music attribute information.
The server side or the user side receives a first confirmation instruction, which is the user's selection confirmation of the first music attribute information and the emotion label. The first confirmation instruction may be sent by a client after the user enters, through a human-computer interaction interface, the first music attribute information and the emotion label of the music piece to be generated. The first confirmation instruction can therefore be regarded as the user's selection of the first music attribute information and the emotion label, that is, the user may choose both according to personal preference.
And calling the pre-established association relationship between the emotion label and the music attribute, and determining second music attribute information of the music according to the determined emotion label and the association relationship.
The association relation between emotion labels and music attributes is established in advance and can be embodied by the correspondence between emotions and rhythm pattern or/and tempo information. The note granularity of each bar influences the emotion expressed by a melody curve, and the same note granularity can express different emotions at different tempos. For this reason, the scheme associates the second music attribute information with emotion labels when the music database is built; for example, music with a faster tempo tends to express a happy mood, while music with a slower tempo tends to express emotional states such as depression and sadness.
Traversing a pre-constructed music database by taking the first music attribute information and the second music attribute information as screening conditions to obtain candidate music segments matched with the screening conditions in the music database, wherein the music database stores a plurality of music segments which are divided according to the music attributes in advance.
Before this, a music database is first constructed as follows: a large number of music data samples are obtained, which may be selected according to the user's preference so that the finally generated music better matches that preference; each sample is divided into a plurality of music segments by bar, each segment comprising several consecutive bars; and the track each segment belongs to and its position within that track are stored, thereby establishing the music database, which is also called a dataset dictionary. That is, the music database is a dictionary storing music attribute information and music attribute parameters, and the dictionary may be organized hierarchically, for example by rhythm pattern and then chord.
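A minimal sketch of such a dataset dictionary is given below, assuming Python and illustrative field names (track_id, position, bars, rhythm, harmony and so on) that are not fixed by this scheme:

    from collections import defaultdict

    def build_music_database(samples):
        """Index pre-segmented music samples by their attributes.

        Each sample is assumed to be a dict such as:
        {"track_id": "song_001", "position": 3, "bars": [...],
         "structure": "verse", "key": "C", "rhythm": "4/4_pop",
         "harmony": "I-V-vi-IV", "mode": "major", "tempo": 120}
        """
        # Hierarchical "dataset dictionary": rhythm pattern -> chord progression -> segments
        database = defaultdict(lambda: defaultdict(list))
        for segment in samples:
            database[segment["rhythm"]][segment["harmony"]].append(segment)
        return database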
Because a large number of music segments are stored in the music database, and the identification information of each segment includes its first music attribute information and second music attribute information, the database is traversed with the first and second music attribute information selected by the user as screening conditions. The segments that satisfy the screening conditions, i.e. the segments matching the attributes selected by the user, are selected as candidate music segments.
Displaying the candidate music segments on a human-computer interaction interface, receiving a selection confirmation instruction of a user on the music segments, receiving and analyzing a second confirmation instruction, and selecting the music segments from the candidate music segments according to analysis information of the second confirmation instruction.
A plurality of music pieces are determined according to the second confirmation instruction, and the music pieces are spliced in the order of music bars included in the music pieces to obtain a new music piece.
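Continuing the assumptions of the previous sketch, the traversal and splicing steps might look roughly as follows; treating attribute matching as exact equality is an illustrative simplification:

    def filter_candidates(database, first_attrs, second_attrs):
        """Traverse the whole dictionary and keep segments matching every selected attribute."""
        criteria = {**first_attrs, **second_attrs}
        candidates = []
        for by_chord in database.values():
            for segments in by_chord.values():
                for seg in segments:
                    if all(seg.get(k) == v for k, v in criteria.items()):
                        candidates.append(seg)
        return candidates

    def splice_segments(selected_segments):
        """Concatenate the chosen segments in the order of their positions (bar order)."""
        ordered = sorted(selected_segments, key=lambda seg: seg["position"])
        new_piece = []
        for seg in ordered:
            new_piece.extend(seg["bars"])
        return new_piece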
In the music generation method, the first confirmation instruction and the second confirmation instruction serve as screening conditions for the music segments of the new piece. Since these instructions are the user's selection confirmations of the first music attribute information, the emotion label and the candidate music segments, they reflect the user's preference, so the resulting new music piece conforms to that preference. Moreover, the segments composing the new piece are extracted from the music database, and the segments stored there all come from existing music data, so the generated piece is coherent. Further, compared with black-box generation approaches such as deep learning, the path by which the new piece is generated can be traced back, making the result controllable and reproducible.
In order to make the music generation scheme and its technical effect clearer, specific embodiments are described in detail in the following examples.
In an embodiment, the step of determining the second music attribute information of the music piece based on the preset association relationship between the emotion tag and the music attribute and the emotion tag in step S120 may be implemented in a manner as shown in fig. 2, and includes the following sub-steps:
s210, quantifying the emotion labels through a pre-constructed emotion model;
s220, determining candidate second music attribute information of the music according to the quantized emotion labels;
and S230, receiving and analyzing a third confirmation instruction, and confirming second music attribute information of the music from the candidate second music attribute information according to analysis information of the third confirmation instruction, wherein the third confirmation instruction is a selection confirmation instruction of the second music attribute.
The first confirmation instruction is parsed to determine the emotion label selected by the user, and the emotion label is quantified through a pre-constructed emotion model. The emotion model may be a Valence-Arousal dimensional emotion model, a schematic diagram of which is shown in FIG. 3. The model has two dimensions: one dimension represents whether the emotion is positive or negative, for example happy versus angry, and the other dimension represents the intensity of the emotion, for example different levels of happiness such as satisfaction or surprise. The polarity and intensity of the emotion are collectively called the user's emotion label. The association relation between emotion labels and the second music attributes is preset; for example, a happy emotion may correspond to second music attribute information such as a rising melodic contour and a clear, fast rhythm.
Candidate second music attribute information is determined by combining the association relation between emotion labels and the second attributes, based on the emotion label the user determines in the Valence-Arousal dimensional emotion model.
Specifically, the user can select any point on the coordinate plane of the emotion model; the emotion label corresponding to the coordinates of that point can be determined through statistical analysis of data, and candidate second music attribute information, including mode, harmonic progression, tempo and rhythm pattern information, is then determined according to the preset association relation between emotion labels and second music attribute information.
Specifically, the dimensional information of the emotion model is first mapped to the second music attributes. Assuming the model has four boundary points, each second music attribute is set to have a linear relation with the distance from any point in the emotion model to each boundary point; in other words, the correspondence between those distances and the music attributes is predefined. After the distances from the point selected by the user to the boundary points are calculated, the second music attribute information corresponding to that point is obtained from those distances.
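The exact mapping formula is not fixed here; the sketch below assumes four illustrative corner points of the Valence-Arousal plane, attaches example attribute values to each corner, takes the categorical attributes from the nearest corner, and interpolates tempo with inverse-distance weights. All of these choices are assumptions for illustration only:

    import math

    # Illustrative corner points of the Valence-Arousal plane and example
    # second-attribute values attached to each corner (all assumptions).
    BOUNDARY_ATTRS = {
        (+1.0, +1.0): {"mode": "major", "tempo": 140, "rhythm": "brisk"},    # e.g. excited
        (+1.0, -1.0): {"mode": "major", "tempo": 80,  "rhythm": "gentle"},   # e.g. content
        (-1.0, +1.0): {"mode": "minor", "tempo": 130, "rhythm": "driving"},  # e.g. tense
        (-1.0, -1.0): {"mode": "minor", "tempo": 60,  "rhythm": "slow"},     # e.g. sad
    }

    def attributes_from_point(valence, arousal):
        """Derive candidate second music attributes from a point selected in the model."""
        weighted, tempo = [], 0.0
        for (v, a), attrs in BOUNDARY_ATTRS.items():
            d = math.dist((valence, arousal), (v, a))
            w = 1.0 / (d + 1e-6)                    # closer corner -> larger weight
            weighted.append((w, attrs))
            tempo += w * attrs["tempo"]
        total = sum(w for w, _ in weighted)
        nearest = max(weighted, key=lambda x: x[0])[1]
        return {"mode": nearest["mode"], "rhythm": nearest["rhythm"],
                "tempo": round(tempo / total)}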
If a plurality of candidate second music attribute information exist, receiving and analyzing a third confirmation instruction of the second music attribute information, and determining the second music attribute information of the music piece from the candidate second music attribute information according to the analysis information of the third confirmation instruction.
According to the scheme provided by this embodiment, the emotion label is quantified using the emotion model, and the second music attribute information of the music piece is determined from the quantified label, which achieves the purpose of deriving the second music attribute information from the emotion label. When several pieces of candidate second music attribute information match the emotion label, they must be filtered or screened: a selection confirmation instruction for the second music attribute information sent by the user side, namely the third confirmation instruction, is received and parsed, and the second music attribute information of the music piece is screened out of the candidates according to the parsed instruction. Because the third confirmation instruction is based on the user's selection, the screened second music attribute information conforms to the user's preference, which realizes human-computer interaction in the music generation process and improves the user experience.
In a possible implementation manner, the step of traversing the pre-constructed music database with the first music attribute information and the second music attribute information as the filtering condition in step S130 includes:
A1, acquiring orchestration information of the music piece;
A2, traversing a pre-constructed music database according to the first music attribute information, the second music attribute information and the orchestration information.
The orchestration information comprises the instruments assigned to each part. Together, the first music attribute information, the second music attribute information and the orchestration information cover more of the attributes of a piece of music, and a plurality of candidate music segments matching the screening conditions are screened out by traversing the music data dictionary based on all three.
By adding the orchestration information to the screening conditions and traversing the music database with the first music attribute information, the second music attribute information and the orchestration information combined, the number of candidate music segments retrieved from the database can be reduced, which helps simplify the process of determining the music segments and makes the final piece better match the user's requirements.
In a possible implementation manner, after the step of obtaining the candidate music pieces matching the filtering condition in step S130, the method further includes:
b1, determining the matching priority of each music attribute in the first music attribute information and the second music attribute information according to a preset rule;
b2, sorting the candidate music segments according to the matching priority of the music attributes;
B3, obtaining the sorted candidate music segments.
The matching priority of each music attribute is divided according to a preset rule; for example, from high to low the priorities may be: musical form (structure), chord, rhythm pattern, orchestration. If a plurality of candidate music segments are available for the user to select, the candidates can first be sorted by musical form; segments with the same form are then sorted by chord; segments with the same form and chord are sorted by rhythm pattern, and so on. The attributes are ordered by their influence on the overall structure of the piece, and sorting the candidate segments in this way helps the finally generated music meet the user's requirements.
After the candidate music pieces are sequenced, the music pieces are selected from the sequenced candidate music pieces according to the received second confirmation instruction, and the music pieces are spliced according to the sequence of the corresponding music bars to obtain new music pieces.
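A sketch of this priority-based sorting is given below; the attribute keys and the rule that exact matches on a higher-priority attribute rank first are illustrative assumptions:

    # Assumed priority order, from highest to lowest, as described above.
    PRIORITY = ["structure", "chord", "rhythm", "orchestration"]

    def rank_candidates(candidates, target):
        """Sort candidates so segments matching higher-priority attributes come first."""
        def sort_key(seg):
            # One component per attribute: 0 if it matches the user's selection, 1 otherwise.
            return tuple(0 if seg.get(attr) == target.get(attr) else 1 for attr in PRIORITY)
        return sorted(candidates, key=sort_key)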
In a feasible implementation, the step of building a ranked list of candidate music segments for each position of the piece according to the above method and splicing the segments in the order of the corresponding bars may be implemented as follows:
and splicing the candidate music pieces which are ranked most front and correspond to each position according to the position identification.
Specifically, each candidate music segment is provided with identification information, and the identification information includes a track (represented by a track number, for example) to which the music segment belongs and a location (represented by a location number of the music segment in the track).
If the candidate segments corresponding to the various positions include segments from the same song, then when building the ranked candidate list for each position and splicing the segments in bar order, candidate segments from the same song are preferentially spliced together to ensure the fluency of the new piece.
In a possible implementation, the step of obtaining a new music in step S140 is followed by:
s150, obtaining the melody curve of the new music, and adjusting the melody curve according to the curve characteristics of the melody curve to obtain the optimized music.
The melody curve here may include note information, pitch information and the like. Curve features include, for example: a section of the curve ascending (or descending) continuously for more than three units, a small-amplitude turn in the curve, the slope between adjacent units exceeding a preset threshold, or the pitches of notes remaining equal for more than three consecutive units.
Specifically, the step of adjusting the melody curve according to the curve characteristics includes:
and when the melody curve is detected to have small amplitude convolution, adjusting the middle sound in the melody curve in the direction opposite to the direction of the intonation sound.
If the melody curve has a small-amplitude turn, as shown in fig. 4, and the tones involved are adjacent tones in the selected key, the middle tone is moved in the direction opposite to its original motion, onto the adjacent scale tone on the other side. For example, if the selected pitch sequence is [do, re, do], it is modified to [do, si, do]. The adjustment process is marked by the solid-line box in fig. 4, where the value on the vertical axis is the MIDI pitch value and the note corresponding to MIDI value 60 is C4.
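A minimal sketch of this adjustment, assuming the melody is a list of MIDI pitch numbers, a C-major scale list as the selected key, and that "small amplitude" means the middle tone lies within a whole step of its neighbours, is:

    # White keys from G3 to C5 as MIDI numbers, standing in for the selected key (C major).
    SCALE_C_MAJOR = [55, 57, 59, 60, 62, 64, 65, 67, 69, 71, 72]

    def flip_small_turn(pitches, scale=SCALE_C_MAJOR):
        """Mirror the middle tone of a do-re-do style turn to the other side,
        e.g. [60, 62, 60] (do, re, do) becomes [60, 59, 60] (do, si, do)."""
        out = list(pitches)
        for i in range(1, len(out) - 1):
            prev, mid, nxt = out[i - 1], out[i], out[i + 1]
            # Small-amplitude turn: equal outer tones, middle tone within a whole step.
            if prev == nxt and mid != prev and abs(mid - prev) <= 2:
                if mid > prev:
                    # Middle tone was above: move it to the scale tone just below.
                    out[i] = max((p for p in scale if p < prev), default=prev - 1)
                else:
                    # Middle tone was below: move it to the scale tone just above.
                    out[i] = min((p for p in scale if p > prev), default=prev + 1)
        return out

    print(flip_small_turn([60, 62, 60]))  # [60, 59, 60], i.e. do-re-do -> do-si-do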
According to the scheme provided by the embodiment, the melody curve of the new music is adjusted to generate the optimized music, and the generated optimized music is more unique, coherent and beautiful in melody.
In a possible implementation, when the melody curve is detected to rise continuously for more than three units, any position within that section is randomly selected and its tone is shifted down; when the melody curve is detected to fall continuously for more than three units, any position within that section is randomly selected and its tone is shifted up (a rough sketch follows below).
In a possible embodiment, when the slope between adjacent units on the melody curve is detected to be greater than a preset threshold, an intermediate tone is inserted into the melody curve.
In a possible embodiment, when the pitches of the notes remain the same for more than three units, the penultimate unit of that section is moved to an adjacent scale tone, for example randomly shifted up or down by one scale step.
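For the run-length rules above, a rough sketch is given below; it implements only the "continuously ascending or descending for more than three units" case, with the run threshold and the choice of which inner note to nudge taken as assumptions, and the remaining rules handled analogously:

    import random

    SCALE = [55, 57, 59, 60, 62, 64, 65, 67, 69, 71, 72]  # C major tones, G3..C5

    def break_long_run(pitches, scale=SCALE, max_run=3):
        """If more than max_run consecutive ascending (or descending) steps are found,
        move one randomly chosen inner note of the run one scale tone the other way."""
        out = list(pitches)
        i = 0
        while i < len(out) - 1:
            # Direction of the step starting at i: +1 up, -1 down, 0 flat.
            direction = (out[i + 1] > out[i]) - (out[i + 1] < out[i])
            j = i
            while j < len(out) - 1 and direction != 0 and \
                    ((out[j + 1] > out[j]) - (out[j + 1] < out[j])) == direction:
                j += 1
            if direction != 0 and j - i > max_run:
                k = random.randint(i + 1, j - 1)     # pick an inner note of the run
                move_down = direction > 0            # ascending run -> shift it down
                options = [p for p in scale if p != out[k] and (p < out[k]) == move_down]
                if options:
                    out[k] = min(options, key=lambda p: abs(p - out[k]))
            i = max(j, i + 1)
        return out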
The melody curve of the new music is adjusted through the embodiment, and more unique optimized music is obtained.
On this basis, the durations of the notes can also be modified correspondingly, so that the adjusted, optimized music piece is more harmonious.
Further, after obtaining the new music, the method further comprises:
c1, obtaining feedback information of the user on the new music, and adjusting at least one of the first music attribute information, the second music attribute information and the candidate music fragment based on the feedback information;
and C2, determining the adjusted music segment based on at least one of the adjusted first music attribute information, the adjusted second music attribute information and the adjusted candidate music segment, and obtaining the adjusted optimized music piece based on the adjusted music segment.
Feedback information from the user on the new music piece is acquired, for example requests to adjust its duration or tempo, and corresponding adjustments are made according to that feedback. Once any factor of the first or second music attribute information of the new piece changes, the resulting candidate music segments change, the new piece generated from those candidates changes accordingly, and an optimized piece that better meets the user's requirements is formed.
The selection of the candidate music pieces can be adjusted according to the feedback information, namely the selection standard of the music pieces is adjusted, the finally determined music pieces are adjusted based on the modified selection standard, and the optimized music pieces which are more in line with the preference of the user are obtained based on the adjusted music pieces. The adjustment of the candidate music pieces may be performed on the basis of the adjustment of the first music attribute information or/and the second music attribute information, or the candidate music pieces may be individually adjusted without adjusting the first music attribute information and the second music attribute information.
The scheme provided by this embodiment uses a negative feedback mechanism: based on the feedback information provided by the user, the music attributes, music segments and so on of the new piece are adjusted, and an optimized piece is generated from the adjusted attributes or/and segments, so that the adjusted piece better matches the user's preference.
Accordingly, an embodiment of the present application further provides a music generating apparatus 500, a schematic structural diagram of which is shown in fig. 5. The music generating apparatus 500 includes: a first music attribute information obtaining module 510, a second music attribute information determining module 520, a candidate music segment obtaining module 530, and a new music piece obtaining module 540, the details of which are as follows:
a first music attribute information obtaining module 510, configured to receive a first confirmation instruction, and obtain first music attribute information and an emotion tag of a musical composition according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction of the first music attribute information and the emotion label;
a second music attribute information determining module 520, configured to determine second music attribute information of the music composition based on a preset association relationship between the emotion tag and the music attribute and the emotion tag;
a candidate music segment obtaining module 530, configured to traverse a pre-constructed music database with the first music attribute information and the second music attribute information as filtering conditions, to obtain candidate music segments matching the filtering conditions; the music database stores a plurality of music pieces which are divided in advance according to music attributes;
a new music piece obtaining module 540, configured to select music pieces from the candidate music pieces according to the received second confirmation instruction, and splice the music pieces according to the order of the corresponding music measures to obtain a new music piece; wherein the second confirmation instruction is a selection confirmation instruction for the music piece.
With regard to the music generating apparatus in the above-described embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment related to the method, and will not be elaborated here.
The music generation method provided by the embodiment can be applied to an electronic device. The structure diagram is shown in fig. 6.
In this embodiment, the electronic device 600 may be a terminal device with an operation function, such as a smart phone, a tablet computer, a portable computer, and a desktop computer.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as the processing device 601 hereinafter, and the memory may include at least one of a Read Only Memory (ROM)602, a Random Access Memory (RAM)603 and a storage device 608 hereinafter, which are specifically shown as follows:
as shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform operations comprising:
receiving a first confirmation instruction, and obtaining first music attribute information and emotion labels of the music according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction of the first music attribute information and the emotion label;
determining second music attribute information of the music piece based on the emotion label and a preset association relation between emotion labels and music attributes;
traversing a pre-constructed music database by taking the first music attribute information and the second music attribute information as screening conditions to obtain candidate music pieces matched with the screening conditions; the music database stores a plurality of music pieces which are divided in advance according to music attributes;
selecting music pieces from the candidate music pieces according to the received second confirmation instruction, and splicing the music pieces according to the sequence of the corresponding music measures to obtain new music pieces; wherein the second confirmation instruction is a selection confirmation instruction for the music piece.
Furthermore, embodiments of the present application also provide a computer-readable storage medium, which may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a machine readable signal medium or a machine readable storage medium. A computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. The computer-readable storage medium includes a music generation program, and the music generation program realizes the steps of the music generation method according to any one of the above-described embodiments when executed by a processor.
The embodiments of the computer-readable storage medium of the present application are substantially the same as the embodiments of the music generation method and the electronic device, and are not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A music piece generation method, comprising:
receiving a first confirmation instruction, and obtaining first music attribute information and emotion labels of the music according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction of the first music attribute information and the emotion label;
determining second music attribute information of the music piece based on the emotion label and a preset association relation between emotion labels and music attributes;
traversing a pre-constructed music database by taking the first music attribute information and the second music attribute information as screening conditions to obtain candidate music pieces matched with the screening conditions; the music database stores a plurality of music pieces which are divided in advance according to music attributes;
selecting music pieces from the candidate music pieces according to the received second confirmation instruction, and splicing the music pieces according to the sequence of the corresponding music measures to obtain new music pieces; wherein the second confirmation instruction is a selection confirmation instruction for the music piece.
2. The music generation method according to claim 1, wherein the step of obtaining a new music is followed by:
and acquiring a melody curve of the new music, and adjusting the melody curve according to the curve characteristics of the melody curve to obtain the optimized music.
3. The music piece generation method according to claim 2, wherein the step of adjusting the melody curve according to the curve feature includes:
and detecting that the melody curve has small amplitude convolution, and adjusting the middle sound in the melody curve in the direction opposite to the direction of the intonation sound.
4. A music piece generation method according to claim 1, wherein the step of determining second music attribute information of a music piece based on an association relationship between a preset emotion tag and a music attribute and the emotion tag includes:
quantifying the emotion labels through a pre-constructed emotion model;
determining candidate second music attribute information of the music according to the quantized emotion labels;
and receiving and analyzing a third confirmation instruction, and determining second music attribute information of the music from the candidate second music attribute information according to analysis information of the third confirmation instruction, wherein the third confirmation instruction is a selection confirmation instruction of the second music attribute.
5. A music piece generation method according to claim 1, wherein the step of traversing a pre-constructed music database with the first music attribute information and the second music attribute information as filtering conditions includes:
acquiring orchestration information of the music piece;
traversing a pre-constructed music database according to the first music attribute information, the second music attribute information and the orchestration information.
6. A music piece generation method according to claim 1, wherein after the step of obtaining candidate music pieces matching the filtering condition, the method further comprises:
determining the matching priority of each music attribute in the first music attribute information and the second music attribute information according to a preset rule;
sequencing the candidate music pieces according to the matching priority of each music attribute;
obtaining the ordered candidate music pieces;
the selecting music segments from the candidate music segments according to the received second confirmation instruction, and splicing the music segments according to the sequence of the corresponding music bars to obtain a new music piece, comprising:
and selecting music segments from the sorted candidate music segments according to the received second confirmation instruction, and splicing the music segments according to the sequence of the corresponding music bars to obtain new music.
7. The music generation method according to claim 1, wherein the step of obtaining a new music is followed by:
acquiring feedback information of the user on the new music, and adjusting at least one of first music attribute information, second music attribute information and candidate music segments based on the feedback information;
and determining the adjusted music segment based on at least one of the adjusted first music attribute information, the adjusted second music attribute information and the adjusted candidate music segment, and obtaining the adjusted optimized music based on the adjusted music segment.
8. A music generation device characterized by comprising:
the first music attribute information obtaining module is used for receiving a first confirmation instruction and obtaining first music attribute information and an emotion label of the music piece according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion label;
the second music attribute information determining module is used for determining second music attribute information of the music piece based on the emotion label and the preset association relation between emotion labels and music attributes;
the module for obtaining candidate music segments is used for traversing a pre-constructed music database by taking the first music attribute information and the second music attribute information as screening conditions to obtain candidate music segments matched with the screening conditions; the music database stores a plurality of music pieces which are divided in advance according to music attributes;
the new music acquiring module is used for selecting music fragments from the candidate music fragments according to the received second confirmation instruction, and splicing the music fragments according to the sequence of the corresponding music measures to acquire new music; wherein the second confirmation instruction is a selection confirmation instruction for the music piece.
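Purely as a structural illustration of how the four modules of this claim might be composed (not the patent's implementation), with toy mappings and field names assumed throughout:

class MusicGenerationDevice:
    def __init__(self, database):
        self.database = database   # pre-constructed, pre-divided music segments

    def acquire(self, first_confirmation):
        """Acquisition module: first attributes and emotion label from the instruction."""
        return first_confirmation["first_attrs"], first_confirmation["emotion_label"]

    def determine_second_attrs(self, emotion_label):
        """Second music attribute module: emotion label -> music attributes."""
        return {"happy": {"mode": "major", "tempo": 120},
                "sad": {"mode": "minor", "tempo": 70}}.get(emotion_label, {})

    def candidate_segments(self, first_attrs, second_attrs):
        """Candidate segment module: screen the database on both attribute sets."""
        conditions = {**first_attrs, **second_attrs}
        return [s for s in self.database
                if all(s.get(k) == v for k, v in conditions.items())]

    def new_music(self, segments, second_confirmation):
        """New music module: keep confirmed segments and splice them by bar order."""
        chosen = [s for s in segments if s["id"] in second_confirmation]
        return [n for s in sorted(chosen, key=lambda s: s["bar"]) for n in s["notes"]]

device = MusicGenerationDevice(database=[
    {"id": 1, "bar": 1, "mode": "major", "tempo": 120, "notes": ["C4", "E4"]},
])
first_attrs, label = device.acquire({"first_attrs": {"tempo": 120}, "emotion_label": "happy"})
second_attrs = device.determine_second_attrs(label)
segments = device.candidate_segments(first_attrs, second_attrs)
print(device.new_music(segments, second_confirmation={1}))   # ['C4', 'E4']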
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the music generation method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a music generation program, and when the music generation program is executed by a processor, the steps of the music generation method according to any one of claims 1 to 7 are implemented.
CN202010478115.7A 2020-05-29 2020-05-29 Music generation method, music generation device, electronic device and storage medium Pending CN111680185A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010478115.7A CN111680185A (en) 2020-05-29 2020-05-29 Music generation method, music generation device, electronic device and storage medium
PCT/CN2020/134835 WO2021115311A1 (en) 2020-05-29 2020-12-09 Song generation method, apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010478115.7A CN111680185A (en) 2020-05-29 2020-05-29 Music generation method, music generation device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN111680185A (en) 2020-09-18

Family

ID=72453277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478115.7A Pending CN111680185A (en) 2020-05-29 2020-05-29 Music generation method, music generation device, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN111680185A (en)
WO (1) WO2021115311A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615772B2 (en) * 2020-01-31 2023-03-28 Obeebo Labs Ltd. Systems, devices, and methods for musical catalog amplification services

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2073193A1 (en) * 2007-12-17 2009-06-24 Sony Corporation Method and device for generating a soundtrack
CN108806655A (en) * 2017-04-26 2018-11-13 微软技术许可有限责任公司 Song automatically generates
CN109102787A (en) * 2018-09-07 2018-12-28 温州市动宠商贸有限公司 A kind of simple background music automatically creates system
CN110148393A (en) * 2018-02-11 2019-08-20 阿里巴巴集团控股有限公司 Music generating method, device and system and data processing method
CN110808019A (en) * 2019-10-31 2020-02-18 维沃移动通信有限公司 Song generation method and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10321842B2 (en) * 2014-04-22 2019-06-18 Interaxon Inc. System and method for associating music with brain-state data
CN110599985B (en) * 2018-06-12 2023-08-18 阿里巴巴集团控股有限公司 Audio content generation method, server device and client device
CN109887095A (en) * 2019-01-22 2019-06-14 华南理工大学 A kind of emotional distress virtual reality scenario automatic creation system and method
CN111680185A (en) * 2020-05-29 2020-09-18 平安科技(深圳)有限公司 Music generation method, music generation device, electronic device and storage medium


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021115311A1 (en) * 2020-05-29 2021-06-17 平安科技(深圳)有限公司 Song generation method, apparatus, electronic device, and storage medium
CN113763910A (en) * 2020-11-25 2021-12-07 北京沃东天骏信息技术有限公司 Music generation method and device
CN112634843A (en) * 2020-12-28 2021-04-09 四川新网银行股份有限公司 Information generation method and device for expressing data in voice mode
CN112837664A (en) * 2020-12-30 2021-05-25 北京达佳互联信息技术有限公司 Song melody generation method and device and electronic equipment
CN112837664B (en) * 2020-12-30 2023-07-25 北京达佳互联信息技术有限公司 Song melody generation method and device and electronic equipment
CN112785993A (en) * 2021-01-15 2021-05-11 杭州网易云音乐科技有限公司 Music generation method, device, medium and computing equipment
CN112785993B (en) * 2021-01-15 2024-04-12 杭州网易云音乐科技有限公司 Music generation method, device, medium and computing equipment
CN113066455A (en) * 2021-03-09 2021-07-02 未知星球科技(东莞)有限公司 Music control method, equipment and readable storage medium
CN113035162A (en) * 2021-03-22 2021-06-25 平安科技(深圳)有限公司 National music generation method, device, equipment and storage medium
CN113096624A (en) * 2021-03-24 2021-07-09 平安科技(深圳)有限公司 Method, device, equipment and storage medium for automatically creating symphony music
CN113096624B (en) * 2021-03-24 2023-07-25 平安科技(深圳)有限公司 Automatic creation method, device, equipment and storage medium for symphony music

Also Published As

Publication number Publication date
WO2021115311A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
CN111680185A (en) Music generation method, music generation device, electronic device and storage medium
US11017010B2 (en) Intelligent playing method and apparatus based on preference feedback
CN106372059B (en) Data inputting method and device
US10109264B2 (en) Composing music using foresight and planning
CN107657017A (en) Method and apparatus for providing voice service
US20200160821A1 (en) Information processing method
US20210216725A1 (en) Method and apparatus for processing information
US20110219940A1 (en) System and method for generating custom songs
US20180315452A1 (en) Generating audio loops from an audio track
CN112489606B (en) Melody generation method, device, readable medium and electronic equipment
CN111782576B (en) Background music generation method and device, readable medium and electronic equipment
CN108986843B (en) Audio data processing method and device, medium and computing equipment
CN108924218A (en) Method and apparatus for pushed information
CN110600004A (en) Voice synthesis playing method and device and storage medium
CN109087627A (en) Method and apparatus for generating information
CN115640398A (en) Comment generation model training method, comment generation device and storage medium
Guo et al. An automatic music generation and evaluation method based on transfer learning
US10681402B2 (en) Providing relevant and authentic channel content to users based on user persona and interest
US20160364204A1 (en) Creating an event driven audio file
Harrison et al. A computational cognitive model for the analysis and generation of voice leadings
CN111062201B (en) Method and device for processing information
CN114461885A (en) Song quality evaluation method, device and storage medium
CN113763910A (en) Music generation method and device
CN114512113B (en) Audio synthesis method and related method and equipment
De Prisco et al. Creative DNA computing: splicing systems for music composition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200918