WO2021115311A1 - Song generation method, apparatus, electronic device and storage medium


Info

Publication number
WO2021115311A1
Authority
WO
WIPO (PCT)
Prior art keywords
music
attribute information
confirmation instruction
fragments
candidate
Prior art date
Application number
PCT/CN2020/134835
Other languages
English (en)
Chinese (zh)
Inventor
蒋慧军
黄尹星
姜凯英
韩宝强
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021115311A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/61 Indexing; Data structures therefor; Storage structures
    • G06F16/63 Querying
    • G06F16/635 Filtering based on additional data, e.g. user or group profiles
    • G06F16/65 Clustering; Classification
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 Retrieval characterised by using metadata automatically derived from the content

Definitions

  • This application relates to the field of data processing technology, and in particular to a method, apparatus, electronic device, and storage medium for generating music.
  • The present application provides a music generation method, an electronic device, and a storage medium, the main purpose of which is to generate, in a controllable manner, new music that meets the needs of users.
  • An embodiment of the present application first provides a method for generating music, including the following steps: receiving a first confirmation instruction, and obtaining first music attribute information and an emotion tag of the music according to the first confirmation instruction, where the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion tag; determining second music attribute information of the music based on a preset association relationship between emotion tags and music attributes and on the obtained emotion tag; traversing a pre-built music database using the first music attribute information and the second music attribute information as filter conditions to obtain candidate music fragments matching the filter conditions, where the music database stores a plurality of music fragments pre-divided according to music attributes; and selecting music fragments from the candidate music fragments according to a received second confirmation instruction and splicing them in the order of their corresponding music bars to obtain a new piece of music, where the second confirmation instruction is a selection confirmation instruction for the music fragments.
  • An embodiment of the present application also provides a music generating device, including: a first music attribute information obtaining module, configured to receive a first confirmation instruction and obtain first music attribute information and an emotion tag of the music according to the first confirmation instruction, where the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion tag; a second music attribute information determining module, configured to determine second music attribute information of the music based on a preset association relationship between emotion tags and music attributes and on the obtained emotion tag; a candidate music fragment obtaining module, configured to traverse a pre-built music database using the first music attribute information and the second music attribute information as filter conditions to obtain candidate music fragments matching the filter conditions, where the music database stores a plurality of music fragments pre-divided according to music attributes; and a new music obtaining module, configured to select music fragments from the candidate music fragments according to a received second confirmation instruction and splice them in the order of their corresponding music bars to obtain a new piece of music, where the second confirmation instruction is a selection confirmation instruction for the music fragments.
  • An embodiment of the present application also provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the following method is implemented: receiving a first confirmation instruction, and obtaining first music attribute information and an emotion tag of the music according to the first confirmation instruction, where the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion tag; determining second music attribute information of the music based on a preset association relationship between emotion tags and music attributes and on the obtained emotion tag; traversing a pre-built music database using the first music attribute information and the second music attribute information as filter conditions to obtain candidate music fragments matching the filter conditions, where the music database stores a plurality of music fragments pre-divided according to music attributes; and selecting music fragments from the candidate music fragments according to a received second confirmation instruction and splicing them in the order of their corresponding music bars to obtain a new piece of music, where the second confirmation instruction is a selection confirmation instruction for the music fragments.
  • The present application also provides a computer-readable storage medium. The computer-readable storage medium includes a music generation program, and when the music generation program is executed by a processor, the following method is implemented: receiving a first confirmation instruction, and obtaining first music attribute information and an emotion tag of the music according to the first confirmation instruction, where the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion tag; determining second music attribute information of the music based on a preset association relationship between emotion tags and music attributes and on the obtained emotion tag; traversing a pre-built music database using the first music attribute information and the second music attribute information as filter conditions to obtain candidate music fragments matching the filter conditions, where the music database stores a plurality of music fragments pre-divided according to music attributes; and selecting music fragments from the candidate music fragments according to a received second confirmation instruction and splicing them in the order of their corresponding music bars to obtain a new piece of music, where the second confirmation instruction is a selection confirmation instruction for the music fragments.
  • the first confirmation instruction provided in this application is based on the user's selection of music attributes
  • the second confirmation instruction is based on the user's selection of candidate music fragments.
  • The first confirmation instruction and the second confirmation instruction are used to determine the music fragments, that is, to drive the process of generating new music.
  • the determination of the music attributes and music fragments is based on user selection, so the generated music is highly related to the user’s preferences, and the human-computer interaction in the music generation process is strong.
  • The music generated by this application is controllable and reproducible, that is, the process by which it was generated can be reproduced.
  • The music generation method provided in this application first divides a large number of music data samples into music fragments according to music attribute information and stores them to obtain a music database.
  • The music attributes selected by the user are then used to filter matching music fragments, each of which includes multiple music bars. Since the new music is assembled from existing music fragments, its melody conforms to the harmonic trend, and the generated music is coherent.
  • FIG. 1 is a flowchart of a method for generating a music composition according to an embodiment of the application.
  • FIG. 2 is a flowchart of determining second music attribute information of a music piece based on the association relationship between preset emotion tags and music attributes and the emotion tags according to an embodiment of the application.
  • FIG. 3 is a schematic diagram of a Valence-Arousal dimensional emotion model provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram, provided by an embodiment of the application, of a melody curve with a small-amplitude gyration in which the middle tone of the melody curve is adjusted in the opposite direction to an in-key tone.
  • FIG. 5 is a schematic structural diagram of a music generating device provided by an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
  • the technical solution of this application can be applied to the fields of artificial intelligence, smart city, blockchain and/or big data technology.
  • The data involved in this application, such as attribute information, tags, and/or music, can be stored in a database, or can be stored in a blockchain, for example through distributed storage on a blockchain; this is not limited in this application.
  • Figure 1 is a flowchart of the method for generating music provided by an embodiment of the application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the method can be executed on the user side, and the user side includes a human-computer interaction interface.
  • S110 Receive a first confirmation instruction, and obtain first music attribute information and emotion tags of the music according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction for the first music attribute information and emotion tags.
  • S120 Determine the second music attribute information of the music based on the preset association relationship between the emotion tag and the music attribute and the emotion tag.
  • S130 Traverse the pre-built music database using the first music attribute information and the second music attribute information as filter conditions to obtain candidate music fragments matching the filter conditions; wherein the music database stores a plurality of music fragments pre-divided according to music attributes.
  • S140 Select music fragments from the candidate music fragments according to the received second confirmation instruction, and splice the music fragments in the order of the corresponding music bars to obtain a new piece of music; wherein the second confirmation instruction is a selection confirmation instruction for the music fragments.
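  • As an illustration only (not part of the application), the composition of steps S110 to S140 can be sketched in a few lines of Python. The flat-list database, the field names (bar_position, bars), and the helper name generate_music are hypothetical assumptions of this sketch.

```python
from typing import Dict, List

# Hypothetical fragment record: each stored fragment carries the music attributes it
# was classified under, the bars it contains, and the bar position it starts at.
Fragment = Dict[str, object]

def generate_music(first_attrs: Dict[str, str],    # from the first confirmation instruction (S110)
                   second_attrs: Dict[str, str],   # derived from the emotion tag (S120)
                   music_db: List[Fragment],
                   selected_ids: List[int]) -> List[str]:
    # S130: traverse the database, keeping fragments that satisfy every filter condition
    filters = {**first_attrs, **second_attrs}
    candidates = [f for f in music_db
                  if all(f.get(k) == v for k, v in filters.items())]

    # S140: keep the fragments confirmed by the second confirmation instruction and
    # splice them in the order of their corresponding music bars
    chosen = sorted((candidates[i] for i in selected_ids),
                    key=lambda f: f["bar_position"])
    return [bar for f in chosen for bar in f["bars"]]
```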
  • the music attributes provided in this application include at least the following information: rhythm type, speed, chord, key, mode, time signature, musical structure, texture, orchestration, etc.
  • The music attribute information is divided into first music attribute information and second music attribute information.
  • The first music attribute information includes: music type, structure, music length, key, and time signature.
  • The second music attribute information includes: mode, harmony direction, speed, and rhythm information.
  • the overall structure of the music can be determined according to the first music attribute information, and the emotion expressed by the music can be determined according to the second music attribute information.
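  • The split described above can be pictured with two small containers; the field names and example values below are assumptions made only to illustrate the two attribute groups.

```python
from dataclasses import dataclass

@dataclass
class FirstMusicAttributes:      # determines the overall structure of the piece
    music_type: str              # e.g. "pop"
    structure: str               # e.g. "AABA"
    length_bars: int             # music length
    key: str                     # e.g. "C"
    time_signature: str          # e.g. "4/4"

@dataclass
class SecondMusicAttributes:     # determines the emotion the piece expresses
    mode: str                    # e.g. "major"
    harmony_direction: str       # e.g. "ascending"
    tempo_bpm: int               # speed
    rhythm: str                  # rhythm information, e.g. "syncopated"
```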
  • The server or the client receives the first confirmation instruction, which is the user's selection confirmation instruction for the first music attribute information and emotion tag; it may be input through the human-computer interaction interface and sent through the client.
  • The first confirmation instruction can be regarded as the user's selection of the first music attribute information and emotion tag of the music to be generated, that is, both the first music attribute information and the emotion tag can be selected by the user according to the user's own preferences.
  • the pre-built association relationship between the emotion tag and the music attribute is called, and the second music attribute information of the music piece is determined according to the determined emotion tag and the association relationship.
  • The association relationship between emotion tags and music attributes can be reflected by matching emotions with rhythm information and/or speed.
  • The granularity of each music section affects the emotional expression of the melody curve.
  • The same note granularity may have different emotional expressions at different speeds. Therefore, this solution associates the second music attribute information with the emotion tags when constructing the music database; for example, relatively fast music often expresses cheerful emotions, while slow music often expresses melancholy and sad emotions.
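  • One simple way to encode such a preset association between emotion tags and second music attributes is a lookup table; the tags, tempo values, and rhythm descriptions below are illustrative assumptions, not values taken from the application.

```python
# Preset association relationship between emotion tags and second music attributes
# (illustrative values only).
EMOTION_TO_SECOND_ATTRS = {
    "cheerful":   {"tempo_bpm": 140, "rhythm": "dense eighth notes", "mode": "major"},
    "calm":       {"tempo_bpm": 90,  "rhythm": "sustained quarters", "mode": "major"},
    "melancholy": {"tempo_bpm": 60,  "rhythm": "sparse, legato",     "mode": "minor"},
}

def second_attrs_for(emotion_tag: str) -> dict:
    """Return the second music attribute information associated with an emotion tag."""
    return EMOTION_TO_SECOND_ATTRS[emotion_tag]
```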
  • the first music attribute information and the second music attribute information are used as filter conditions to traverse the pre-built music database to obtain candidate music pieces matching the filter conditions in the music database.
  • The music database stores a plurality of music fragments pre-divided according to music attributes.
  • Prior to this, a music database is first constructed as follows: obtain a large number of music data samples, which can be selected according to the user's preferences so that the finally generated music better matches those preferences; divide the music data samples into a plurality of music fragments, each music fragment including a plurality of continuous music bars, with each music bar carrying the position and number it occupies in the piece it comes from; and classify and store the music fragments according to the above-mentioned music attributes to construct the music database.
  • The music database is also called a data set dictionary; that is, the music database is a dictionary storing music attribute information and music attribute parameters, and the dictionary can be stored in a rhythm-chord hierarchical structure.
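  • A rough sketch of such a data set dictionary, grouping fragments first by rhythm and then by chord as the rhythm-chord hierarchy suggests, could look as follows; the fragment fields (song_id, bar_position, bars, chord) are assumptions made for illustration.

```python
from collections import defaultdict

def build_music_database(fragments):
    """Group music fragments into a rhythm -> chord -> [fragments] dictionary."""
    db = defaultdict(lambda: defaultdict(list))
    for fragment in fragments:
        db[fragment["rhythm"]][fragment["chord"]].append(fragment)
    return db

# Example: fragments cut from two songs, each covering consecutive bars.
fragments = [
    {"song_id": 1, "bar_position": 0, "bars": ["C", "Am"], "rhythm": "waltz", "chord": "C"},
    {"song_id": 2, "bar_position": 4, "bars": ["F", "G"],  "rhythm": "waltz", "chord": "F"},
]
database = build_music_database(fragments)
```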
  • the identification information of these music pieces includes first music attribute information and second music attribute information
  • The first music attribute information and the second music attribute information selected by the user are used as filter conditions to traverse the music database, and music fragments that meet the filter conditions are selected; that is, music fragments matching the first music attribute information and the second music attribute information selected by the user are filtered out, and the selected fragments are the candidate music fragments.
  • a plurality of music fragments are determined according to the second confirmation instruction, and the music fragments are spliced in the order of the music bars included in the music fragment to obtain a new music piece.
  • The first confirmation instruction and the second confirmation instruction are used as filter conditions to determine the music fragments of the new music. Because the first confirmation instruction and the second confirmation instruction are the user's selection confirmation instructions for the first music attribute information and emotion tag and for the candidate music fragments, respectively, they both reflect the user's preferences; therefore, the final new piece of music conforms to the user's preferences.
  • The music fragments that make up the new music are extracted from the music database, and the music fragments stored in the music database are all existing music data; therefore, the generated new music has continuity. Further, compared with black-box generation methods that use deep learning, the path by which new music is generated in this scheme can be traced back, and the generated new music is controllable and reproducible.
  • In step S120, the step of determining the second music attribute information of the music based on the preset association relationship between the emotion tag and the music attribute and on the emotion tag may be implemented as shown in FIG. 2, including the following sub-steps.
  • S210 Quantify the emotion tag using a preset emotion model.
  • S220 Determine candidate second music attribute information of the music piece according to the quantized emotion tag.
  • S230 Receive and parse a third confirmation instruction, and confirm the second music attribute information of the music piece from the candidate second music attribute information according to the parsed information of the third confirmation instruction, wherein the third confirmation instruction is a selection confirmation instruction for the second music attribute information.
  • The emotion model may be a Valence-Arousal dimensional emotion model.
  • A schematic diagram of the Valence-Arousal dimensional emotion model is shown in FIG. 3.
  • The emotion model has two dimensions. One dimension expresses whether an emotion is positive or negative, such as happiness or anger, and the other dimension represents the degree (intensity) of the emotion. For example, different levels of the emotion corresponding to happiness can include satisfaction, surprise, and so on, indicating the degree of happiness.
  • The positivity or negativity of the emotion and its degree are collectively referred to as the user's emotion label.
  • The relationship between the emotion label and the second music attribute information is preset.
  • For example, the expression of a happy emotion can correspond to the following second music attribute information: an upward-moving melody, a bright rhythm, and so on.
  • the candidate second music attribute information is determined by combining the relationship between the emotion label and the second attribute.
  • The user can select any point on the coordinate plane of the emotion model; the emotion label corresponding to the coordinates of that point is determined according to statistical analysis of the data, and candidate second music attribute information, including mode, harmony direction, speed, and rhythm information, is then determined according to the preset association relationship between emotion labels and second music attribute information.
  • To relate the second music attributes to the dimensional information of the emotion model, the model is assumed to have four boundary points, and the second music attributes are set to have a linear relationship with the distance from any point in the emotion model to each boundary point; that is, the correspondence between these distances and the music attributes is defined in advance. After the distance between the point selected by the user in the model space and each boundary point is calculated, the second music attribute information corresponding to that point is obtained from those distances.
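  • One possible reading of this distance-based mapping, shown only as an illustration, is to weight the attribute value attached to each boundary point by the closeness of the selected point; the corner coordinates and tempos below are assumptions, and only tempo is mapped for brevity.

```python
import math

# Assumed boundary points of the Valence-Arousal plane and the tempo attached to each.
BOUNDARY_POINTS = {
    "excited":   (( 1.0,  1.0), 160),   # positive emotion, high degree -> fast
    "content":   (( 1.0, -1.0), 100),
    "depressed": ((-1.0, -1.0), 60),    # negative emotion, low degree -> slow
    "angry":     ((-1.0,  1.0), 140),
}

def tempo_for_point(valence: float, arousal: float) -> float:
    """Weight each boundary tempo by inverse distance to the selected point."""
    total_weight, weighted_tempo = 0.0, 0.0
    for (vx, vy), tempo in BOUNDARY_POINTS.values():
        distance = math.hypot(valence - vx, arousal - vy)
        weight = 1.0 / (distance + 1e-6)       # closer boundary points contribute more
        total_weight += weight
        weighted_tempo += weight * tempo
    return weighted_tempo / total_weight

print(round(tempo_for_point(0.8, 0.9)))        # pulled toward the "excited" corner
```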
  • If there are multiple items of candidate second music attribute information, the third confirmation instruction for the second music attribute information is received and parsed, and the second music attribute information of the music is determined from the candidates according to the parsed information of the third confirmation instruction.
  • the solution provided by this embodiment uses an emotion model to quantify the emotion label, and determines the second music attribute information of the music according to the quantized emotion label, so as to achieve the purpose of determining the second music attribute information of the music according to the emotion label.
  • the second music attribute information is selected from the multiple candidate second music attribute information.
  • Because the third confirmation instruction is based on the user's selection, the filtered second music attribute information meets the user's preference; this realizes human-computer interaction in the music generation process and improves the user experience.
  • In step S130, the step of traversing the pre-built music database using the first music attribute information and the second music attribute information as filter conditions includes: A1, obtaining orchestration information of the music; and A2, traversing the pre-built music database according to the first music attribute information, the second music attribute information, and the orchestration information.
  • The orchestration information includes the allocation of musical instruments to the parts. Together, the first music attribute information, the second music attribute information, and the orchestration information cover more of the music attribute information of a piece; the music data dictionary is traversed based on all three to filter out a number of candidate music fragments that match the filter conditions.
  • Adding the orchestration information to the filter conditions and traversing the music database using the first music attribute information, the second music attribute information, and the orchestration information together reduces the number of candidate music fragments retrieved from the music database, helps simplify the process of determining the music fragments, and makes the finally obtained fragments better match user needs.
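  • The filtering with orchestration added can be sketched as below; the flat fragment representation and field names are assumptions carried over from the earlier sketches.

```python
def filter_candidates(fragments, first_attrs, second_attrs, orchestration):
    """Steps A1/A2: orchestration simply joins the other filter conditions."""
    conditions = {**first_attrs, **second_attrs, "orchestration": orchestration}
    return [fragment for fragment in fragments
            if all(fragment.get(key) == value for key, value in conditions.items())]
```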
  • The method further includes: B1, determining a matching priority for each music attribute in the first music attribute information and the second music attribute information according to a preset rule; B2, sorting the candidate music fragments according to the matching priority of each music attribute; and B3, selecting music fragments from the sorted candidate music fragments.
  • Matching priorities are assigned to the music attributes; from high to low, the matching priority can be: musical structure, chord, rhythm pattern, orchestration. If there are several candidate music fragments for the user to choose from, the candidates are first sorted by musical structure; candidates with the same musical structure are sorted by chord; candidates with the same musical structure and chord are sorted by rhythm pattern, and so on. The attributes are prioritized according to their influence on the overall structure of the music. Sorting the candidate music fragments in this way helps the finally generated music better meet user needs.
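  • One way to realise this priority ordering, assuming each fragment and the user's request expose the four attributes by name, is a tuple sort key in which higher-priority mismatches dominate:

```python
# Matching priority, from high to low, as described above.
PRIORITY = ("structure", "chord", "rhythm_pattern", "orchestration")

def rank_candidates(candidates, target):
    """Sort candidate fragments so that those matching higher-priority attributes come first."""
    def sort_key(fragment):
        # 0 = attribute matches the requested value, 1 = mismatch; lower tuples sort first
        return tuple(0 if fragment.get(attr) == target.get(attr) else 1
                     for attr in PRIORITY)
    return sorted(candidates, key=sort_key)
```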
  • a music fragment is selected from the sorted candidate music fragments according to the received second confirmation instruction, and the music fragments are spliced in the order of the corresponding music bars to obtain a new music piece.
  • A sorted list of candidate music fragments is established for each position of the music, and the step of splicing the music fragments in the order of their corresponding music bars can be implemented in the following manner:
  • The highest-ranked candidate music fragment for each position is selected, and the selected fragments are spliced according to their position identifiers.
  • Each candidate music fragment is provided with identification information, which includes the track to which the fragment belongs (e.g., represented by a track number) and its location (e.g., represented by the position number of the music bar within that track).
  • Where sorted candidate music fragments have been created for each position of the music according to the above method, the step of splicing the music fragments in the sequence of the corresponding music bars includes: splicing candidate music fragments that come from the same song wherever possible, to ensure the fluency of the new music.
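  • A sketch of this position-by-position splicing, preferring fragments from the song already in use, might look as follows; the dictionary layout and field names are assumptions.

```python
def splice(ranked_by_position):
    """ranked_by_position: {bar position: [candidate fragments, best first]}."""
    piece, current_song = [], None
    for position in sorted(ranked_by_position):
        candidates = ranked_by_position[position]
        # prefer a fragment from the song already chosen, to keep the result fluent
        same_song = [f for f in candidates if f["song_id"] == current_song]
        chosen = same_song[0] if same_song else candidates[0]
        current_song = chosen["song_id"]
        piece.extend(chosen["bars"])
    return piece
```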
  • The method further includes: S150, obtaining the melody curve of the new music, and adjusting the melody curve according to the curve characteristics of the melody curve to obtain an optimized piece of music.
  • The melody curve is adjusted according to its curve characteristics so that the melody of the adjusted, optimized piece is more distinctive.
  • the melody curve here may include note information, pitch information, etc.
  • The curve characteristics include, for example: a position where the melody curve moves continuously upward (or downward) for more than three units; a small-amplitude gyration in the melody curve; a slope between adjacent units that is greater than a preset threshold; more than three consecutive units whose notes have the same pitch; and so on.
  • The step of adjusting the melody curve according to the curve characteristics includes: when it is detected that the melody curve has a small-amplitude gyration, adjusting the middle tone of the gyration in the opposite direction, to an in-key tone.
  • When the melody curve has a small-amplitude gyration, as shown in FIG. 4, and the two consecutive tones are adjacent in the selected key, the middle tone is modified in the opposite direction: an originally upward gyration is modified to a downward one, and vice versa. For example, if the selected pitch sequence is [do, re, do], it is modified to [do, si, do]. The adjustment process is shown in the solid-line box in FIG. 4, where the value on the vertical axis is the MIDI pitch value and the note corresponding to a MIDI value of 60 is C4.
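  • The adjustment can be illustrated on a list of MIDI pitches; the C-major scale and the helper names are assumptions of this sketch, which only handles the [do, re, do]-style case described above.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}           # pitch classes of the assumed selected key

def scale_step(pitch: int, direction: int) -> int:
    """Nearest in-key MIDI pitch above (direction=+1) or below (direction=-1) the given pitch."""
    candidate = pitch + direction
    while candidate % 12 not in C_MAJOR:
        candidate += direction
    return candidate

def flatten_gyrations(melody: list[int]) -> list[int]:
    """Where the melody goes A -> B -> A with B an adjacent tone, mirror B to the other side of A."""
    adjusted = list(melody)
    for i in range(1, len(adjusted) - 1):
        prev_p, mid, next_p = adjusted[i - 1], adjusted[i], adjusted[i + 1]
        if prev_p == next_p and mid != prev_p and abs(mid - prev_p) <= 2:
            adjusted[i] = scale_step(prev_p, -1 if mid > prev_p else +1)
    return adjusted

print(flatten_gyrations([60, 62, 60]))      # do-re-do becomes do-si-do: [60, 59, 60]
```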
  • the melody curve of the new music is adjusted to generate an optimized music, and the melody of the generated optimized music is more unique, coherent and beautiful.
  • When it is detected that the melody curve moves continuously upward for more than three units, a position on the melody curve is randomly selected and its pitch is adjusted downward; when it is detected that the melody curve moves continuously downward for more than three units, a position on the melody curve is randomly selected and its pitch is adjusted upward.
  • In-key tones are inserted into the melody curve.
  • The penultimate unit of the melody curve is moved to an adjacent tone, for example, moved randomly up or down.
  • the duration of the notes can also be modified accordingly to make the adjusted optimized music more coordinated.
  • After the new piece of music is obtained, the method further includes: C1, obtaining user feedback information on the new piece of music, and adjusting at least one of the first music attribute information, the second music attribute information, and the candidate music fragments based on the feedback information; and C2, determining adjusted music fragments based on at least one of the adjusted first music attribute information, the adjusted second music attribute information, and the adjusted candidate music fragments, and obtaining an optimized piece of music based on the adjusted music fragments.
  • For example, the feedback information may indicate that the new music should have a shorter duration, a faster speed, and so on.
  • When the duration and speed of the new music are adjusted correspondingly according to the feedback information, the first music attribute information and the second music attribute information of the new music, and hence the candidate music fragments obtained, will all change; the new music generated from the different candidate fragments changes accordingly, forming an optimized piece of music that better meets user needs.
  • the selection of candidate music fragments can also be adjusted according to the feedback information, that is, the selection criteria of the music fragments are adjusted, the final music fragments are adjusted based on the modified selection criteria, and the optimized music pieces more in line with the user's preferences are obtained based on the adjusted music fragments.
  • the adjustment of the candidate music piece may be performed on the basis of adjusting the first music attribute information or/and the second music attribute information, or the candidate music piece may be adjusted separately without adjusting the first music attribute information and the second music attribute information.
  • The solution provided by this embodiment uses a negative feedback mechanism: based on the feedback information provided by the user, the music attributes, music fragments, and so on of the new music are adjusted, and optimized music is generated from the adjusted music attributes and/or music fragments, making the adjusted music better match user preferences.
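  • A minimal sketch of such a feedback step, assuming two illustrative feedback flags and the attribute fields used in the earlier sketches, could be:

```python
def apply_feedback(first_attrs: dict, second_attrs: dict, feedback: dict):
    """Adjust the attribute information according to user feedback, then regenerate."""
    first_attrs, second_attrs = dict(first_attrs), dict(second_attrs)
    if feedback.get("shorter"):
        first_attrs["length_bars"] = max(4, first_attrs["length_bars"] - 8)
    if feedback.get("faster"):
        second_attrs["tempo_bpm"] = second_attrs["tempo_bpm"] + 20
    return first_attrs, second_attrs

# The adjusted attributes are then used to filter and splice again, e.g.:
# first_attrs, second_attrs = apply_feedback(first_attrs, second_attrs, {"faster": True})
```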
  • an embodiment of the present application also provides a musical composition generating device 500.
  • the schematic structural diagram of the musical composition generating device 500 is shown in FIG. 5.
  • The musical composition generating device 500 includes a first music attribute information obtaining module 510, a second music attribute information determining module 520, a candidate music fragment obtaining module 530, and a new music obtaining module 540, which are specifically as follows.
  • The first music attribute information obtaining module 510 is configured to receive a first confirmation instruction and obtain the first music attribute information and emotion tag of the music according to the first confirmation instruction; the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion tag.
  • The second music attribute information determining module 520 is configured to determine the second music attribute information of the music based on the preset association relationship between the emotion tag and the music attribute and on the emotion tag.
  • The candidate music fragment obtaining module 530 is configured to use the first music attribute information and the second music attribute information as filter conditions to traverse the pre-built music database and obtain candidate music fragments that match the filter conditions; wherein the music database stores a plurality of music fragments pre-divided according to music attributes.
  • The new music obtaining module 540 is configured to select music fragments from the candidate music fragments according to the received second confirmation instruction, and splice the music fragments in the order of the corresponding music bars to obtain a new piece of music; wherein the second confirmation instruction is a selection confirmation instruction for the music fragments.
  • the music generation method provided in the foregoing embodiment can be applied to an electronic device.
  • The electronic device 600 may be a terminal device with computing capability, such as a smart phone, a tablet computer, a portable computer, a desktop computer, and the like.
  • the electronic device includes a memory and a processor.
  • The processor here may be referred to as the processing device 601 below, and the memory may include at least one of the read-only memory (ROM) 602, the random access memory (RAM) 603, and the storage device 608 described below.
  • The electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which can execute various appropriate actions and processing according to a program stored in the read-only memory (ROM) 602 or a program loaded from the storage device 608 into the random access memory (RAM) 603.
  • The RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • The following devices can be connected to the I/O interface 605: input devices 606 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output devices 607 such as a liquid crystal display (LCD), speakers, a vibrator, and the like; storage devices 608 such as a magnetic tape, a hard disk, and the like; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or include all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
  • When the computer program is executed by the processing device 601, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
  • The client and server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with any form or medium of digital data communication (for example, a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • When the one or more programs are executed by the electronic device, the electronic device is caused to perform the following operations: receiving a first confirmation instruction, and obtaining first music attribute information and an emotion tag of the music according to the first confirmation instruction, where the first confirmation instruction is a selection confirmation instruction for the first music attribute information and the emotion tag; determining second music attribute information of the music based on a preset association relationship between emotion tags and music attributes and on the obtained emotion tag; traversing the pre-built music database using the first music attribute information and the second music attribute information as filter conditions to obtain candidate music fragments matching the filter conditions, where the music database stores a plurality of music fragments pre-divided according to music attributes; and selecting music fragments from the candidate music fragments according to a received second confirmation instruction and splicing them in the order of the corresponding music bars to obtain a new piece of music, where the second confirmation instruction is a selection confirmation instruction for the music fragments.
  • the embodiment of the present application also proposes a computer-readable storage medium.
  • The computer-readable medium may be a tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device. The computer-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
  • More specific examples of computer-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • the computer-readable storage medium includes a music generation program, and when the music generation program is executed by a processor, the steps of the music generation method according to any one of the above technical solutions are implemented.
  • the storage medium involved in this application such as a computer-readable storage medium, may be non-volatile or volatile.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to the technical field of data processing, and in particular to a song generation method, apparatus, electronic device, and storage medium. The song generation method comprises: receiving a first confirmation instruction, and acquiring, on the basis of the first confirmation instruction, first music attribute information and a mood tag for a song; determining second music attribute information for the song on the basis of a preset association relationship between mood tags and music attributes and on the basis of the mood tag; using the first music attribute information and the second music attribute information as filter conditions to traverse a pre-built music database, so as to acquire candidate music segments matching the filter conditions; and, on the basis of a received second confirmation instruction, selecting music segments from among the candidate music segments and assembling the music segments according to the corresponding sequence of music bars to acquire a new song. By means of the method proposed in the present invention, new songs meeting a user's requirements can be generated in a controllable manner.
PCT/CN2020/134835 2020-05-29 2020-12-09 Song generation method, apparatus, electronic device and storage medium WO2021115311A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010478115.7 2020-05-29
CN202010478115.7A CN111680185A (zh) 2020-05-29 2020-05-29 乐曲生成方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021115311A1 (fr) 2021-06-17

Family

ID=72453277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134835 WO2021115311A1 (fr) 2020-05-29 2020-12-09 Song generation method, apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN111680185A (fr)
WO (1) WO2021115311A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210241731A1 (en) * 2020-01-31 2021-08-05 Obeebo Labs Ltd. Systems, devices, and methods for assigning mood labels to musical compositions

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111680185A (zh) * 2020-05-29 2020-09-18 平安科技(深圳)有限公司 乐曲生成方法、装置、电子设备及存储介质
CN113763910A (zh) * 2020-11-25 2021-12-07 北京沃东天骏信息技术有限公司 一种音乐生成方法和装置
CN112634843A (zh) * 2020-12-28 2021-04-09 四川新网银行股份有限公司 一种以声音方式表达数据的信息生成方法与装置
CN112837664B (zh) * 2020-12-30 2023-07-25 北京达佳互联信息技术有限公司 歌曲旋律的生成方法、装置、电子设备
CN112785993B (zh) * 2021-01-15 2024-04-12 杭州网易云音乐科技有限公司 一种乐曲生成方法、装置、介质和计算设备
CN113066455A (zh) * 2021-03-09 2021-07-02 未知星球科技(东莞)有限公司 音乐控制方法、设备及可读存储介质
CN113096624B (zh) * 2021-03-24 2023-07-25 平安科技(深圳)有限公司 交响乐曲自动创作方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108806655A (zh) * 2017-04-26 2018-11-13 微软技术许可有限责任公司 歌曲的自动生成
CN109887095A (zh) * 2019-01-22 2019-06-14 华南理工大学 一种情绪刺激虚拟现实场景自动生成系统及方法
US10321842B2 (en) * 2014-04-22 2019-06-18 Interaxon Inc. System and method for associating music with brain-state data
CN110599985A (zh) * 2018-06-12 2019-12-20 阿里巴巴集团控股有限公司 一种音频内容生成方法、服务端设备和客户端设备
CN111680185A (zh) * 2020-05-29 2020-09-18 平安科技(深圳)有限公司 乐曲生成方法、装置、电子设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2073193A1 (fr) * 2007-12-17 2009-06-24 Sony Corporation Procédé et système pour générer une bande son
CN110148393B (zh) * 2018-02-11 2023-12-15 阿里巴巴集团控股有限公司 音乐生成方法、装置和系统以及数据处理方法
CN109102787B (zh) * 2018-09-07 2022-09-27 王国欣 一种简易背景音乐自动创建系统
CN110808019A (zh) * 2019-10-31 2020-02-18 维沃移动通信有限公司 一种歌曲生成方法及电子设备



Also Published As

Publication number Publication date
CN111680185A (zh) 2020-09-18

Similar Documents

Publication Publication Date Title
WO2021115311A1 (fr) Song generation method, apparatus, electronic device and storage medium
US11562722B2 (en) Cognitive music engine using unsupervised learning
US11017010B2 (en) Intelligent playing method and apparatus based on preference feedback
US11635936B2 (en) Audio techniques for music content generation
US9799312B1 (en) Composing music using foresight and planning
US20200160821A1 (en) Information processing method
US20180315452A1 (en) Generating audio loops from an audio track
Loughran et al. Evolutionary music: applying evolutionary computation to the art of creating music
US20220043876A1 (en) Systems and methods for recommending collaborative content
US20220262328A1 (en) Musical composition file generation and management system
US20210034661A1 (en) Systems and methods for recommending collaborative content
Chen et al. Automatic composition of Guzheng (Chinese Zither) music using long short-term memory network (LSTM) and reinforcement learning (RL)
Yu Research on multimodal music emotion recognition method based on image sequence
US20210350778A1 (en) Method and system for processing audio stems
Huang et al. Research on music emotion intelligent recognition and classification algorithm in music performance system
Brunner et al. Neural symbolic music genre transfer insights
Wang Music choreography algorithm based on feature matching and fragment segmentation
CN113763910A (zh) 一种音乐生成方法和装置
De Prisco et al. Creative DNA computing: splicing systems for music composition
US20240153475A1 (en) Music management services
Singh Sifting sound interactive extraction, exploration, and expressive recombination of large and heterogeneous audio collections
US11830463B1 (en) Automated original track generation engine
KR102625795B1 (ko) 인공지능 기반의 유사 음원 검색 시스템 및 방법
US20230197042A1 (en) Automated generation of audio tracks
Riaz An Investigation of Supervised Learning in Music Mood Classification for Audio and MIDI

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20898207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20898207

Country of ref document: EP

Kind code of ref document: A1