CN107123415B - Automatic song editing method and system - Google Patents
- Publication number
- CN107123415B (application number CN201710317274.7A)
- Authority
- CN
- China
- Prior art keywords
- note
- model
- training
- vector
- music
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/111—Automatic composing, i.e. using predefined musical rules
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Auxiliary Devices For Music (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
The application discloses an automatic song composition method and system. The method comprises the following steps. Step S11: determine the input note corresponding to the initial moment to obtain the current input note. Step S12: perform feature extraction on the current input note to obtain its features. Step S13: input the features of the current input note into a pre-established training model to obtain the note correspondingly output by the model, namely the current output note. Step S14: take the current output note as the input note corresponding to the next moment, take the next moment as the current moment, and return to step S12 until the number of loop iterations reaches a preset threshold; the output notes corresponding to each moment are then combined to obtain the resulting music. The method and system greatly improve the efficiency of music creation while reducing both its cost and the threshold for composing music.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an automatic song composition method and system.
Background
Song composition is a form of artistic creation that has always had a high entry threshold and has traditionally been carried out only by professional composers. At present, most composers need a relatively long time to complete a musical work; in other words, existing music creation is inefficient and costly, and can hardly satisfy the public's growing demand for music of more kinds and in greater quantity.
How to improve the efficiency of music creation while reducing its cost is therefore a problem that needs to be solved.
Disclosure of Invention
In view of the above, the present invention provides an automatic song composition method and system that can greatly improve the efficiency of music composition and reduce its cost. The specific scheme is as follows:
an automatic composition method, comprising:
step S11: determining the input note corresponding to the initial moment to obtain the current input note;
step S12: performing feature extraction on the current input note to obtain the features of the current input note;
step S13: inputting the features of the current input note into a pre-established training model to obtain the note correspondingly output by the model, namely the current output note;
step S14: taking the current output note as the input note corresponding to the next moment, taking the next moment as the current moment, and then re-entering step S12 until the number of loop iterations reaches a preset threshold; the output notes corresponding to each moment are then combined to obtain the corresponding music;
wherein the creation process of the training model comprises: acquiring music training samples, extracting time dimension feature information of the samples and the corresponding note dimension feature information at different moments, and performing model training using the time dimension feature information and the note dimension feature information to obtain the training model.
Optionally, the process of extracting the note dimension feature information corresponding to any moment comprises:
extracting a first note feature vector, a second note feature vector, a third note feature vector, a fourth note feature vector and a fifth note feature vector of the note corresponding to the current moment;
wherein the first note feature vector records the digital value corresponding to the note's pitch in a MIDI file, the second note feature vector records the position of the note within one octave, the third note feature vector records the relationship between the note at the current moment and the note at the previous moment, the fourth note feature vector records the association relationship at the previous moment, and the fifth note feature vector records the beat.
Optionally, the process of performing model training using the time dimension feature information and the note dimension feature information to obtain the training model comprises:
inputting the time dimension feature information and the note dimension feature information into a pre-designed neural network model for training, to obtain the training model.
Optionally, the neural network model is a model designed in advance based on an LSTM neural network.
Optionally, after the model training using the time dimension feature information and the note dimension feature information has produced the training model, the method further comprises:
updating the training model by a gradient descent method.
The invention also correspondingly discloses an automatic song composition system, which comprises a model creation module, a note determination module, a feature extraction module, a note acquisition module and a music generation module; wherein
the model creation module is used for creating a training model in advance;
the note determination module is used for determining the input note corresponding to the initial moment to obtain the current input note;
the feature extraction module is used for performing feature extraction on the current input note to obtain the features of the current input note;
the note acquisition module is used for inputting the features of the current input note into the training model to obtain the note correspondingly output by the model, namely the current output note;
the music generation module is used for taking the current output note as the input note corresponding to the next moment, taking the next moment as the current moment, and re-triggering the feature extraction module until the number of triggers reaches a preset threshold, and then combining the output notes corresponding to each moment to obtain the corresponding music;
wherein the model creation module comprises:
a sample acquisition unit, used for acquiring music training samples;
a feature extraction unit, used for extracting time dimension feature information of the music training samples and the corresponding note dimension feature information at different moments;
and a model training unit, used for performing model training using the time dimension feature information and the note dimension feature information to obtain the training model.
Optionally, the feature extraction unit is specifically configured to extract a first note feature vector, a second note feature vector, a third note feature vector, a fourth note feature vector and a fifth note feature vector of the note corresponding to the current moment;
wherein the first note feature vector records the digital value corresponding to the note's pitch in a MIDI file, the second note feature vector records the position of the note within one octave, the third note feature vector records the relationship between the note at the current moment and the note at the previous moment, the fourth note feature vector records the association relationship at the previous moment, and the fifth note feature vector records the beat.
Optionally, the model training unit is specifically configured to input the time dimension feature information and the note dimension feature information into a pre-designed neural network model for model training, so as to obtain the training model.
Optionally, the model training unit is specifically configured to input the time dimension feature information and the note dimension feature information into a model designed in advance based on an LSTM neural network to perform model training, so as to obtain the training model.
Optionally, the automatic composition system further includes:
and the model updating module is used for updating the training model obtained by the model training unit by using a gradient descent method.
The invention provides an automatic song composition method comprising the steps set out above: determining the input note for the initial moment to obtain the current input note (step S11); performing feature extraction on it (step S12); feeding those features into a pre-established training model to obtain the current output note (step S13); and taking each output note as the next moment's input note, treating the next moment as the current moment, and re-entering step S12 until the number of loop iterations reaches a preset threshold, after which the output notes corresponding to each moment are combined into the resulting music (step S14). The training model itself is created by acquiring music training samples, extracting their time dimension feature information and the corresponding note dimension feature information at different moments, and training on both kinds of feature information.
It can thus be seen that the invention first trains a model on the time dimension and note dimension feature information of music training samples, then uses that model to determine the output notes for a series of moments, and finally combines those output notes into the corresponding music. Compared with fully manual composition, this greatly improves the efficiency of music creation while reducing both its cost and the threshold for composing music.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below are merely embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the automatic song composition method disclosed in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the automatic song composition system disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
An embodiment of the invention discloses an automatic song composition method which, as shown in FIG. 1, comprises the following steps:
step S11: and determining the input musical notes corresponding to the initial time to obtain the current input musical notes.
It should be noted that, the process of determining the input note corresponding to the initial time may specifically include: the input note corresponding to the initial time is determined by means of random generation. For example, by means of random extraction, one note is randomly extracted from a note library created in advance as an input note corresponding to the initial time, so as to obtain the current input note in step S11.
In addition, it is understood that the note library may be specifically a database for storing notes corresponding to various types of music styles in a classified manner. When it is necessary to automatically compose music using the method of the present embodiment, one note may be randomly extracted from the storage area of the note library in which the melody is stored, as an input note corresponding to the initial time, according to the melody of the music piece desired to be created.
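The random extraction from a style-classified note library can be sketched as follows. This is a minimal illustration, not the patent's implementation: the library name, its contents and the function name are all hypothetical.

```python
import random

# Hypothetical note library: MIDI pitches grouped by music style, standing in
# for the database of notes classified by style described above.
NOTE_LIBRARY = {
    "pop":   [60, 62, 64, 65, 67, 69, 71],   # C major scale
    "blues": [60, 63, 65, 66, 67, 70],       # C blues scale
}

def pick_initial_note(style: str) -> int:
    """Step S11: randomly extract one note from the chosen style's storage area."""
    return random.choice(NOTE_LIBRARY[style])
```

The returned pitch then serves as the current input note for the first pass of step S12.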
Step S12: perform feature extraction on the current input note to obtain the features of the current input note.
Step S13: input the features of the current input note into a pre-established training model to obtain the note correspondingly output by the model, namely the current output note.
Step S14: take the current output note as the input note corresponding to the next moment, take the next moment as the current moment, and then re-enter step S12 until the number of loop iterations reaches a preset threshold; the output notes corresponding to each moment are then combined to obtain the corresponding music.
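The loop of steps S11 to S14 can be sketched as an autoregressive generation routine. The two callables are placeholders (assumptions, not the patent's own interfaces) for the feature extraction of step S12 and the trained model of step S13.

```python
def compose(initial_note, extract_features, predict_note, num_steps):
    """Steps S11-S14: each output note becomes the next moment's input note
    until the preset number threshold (num_steps) is reached."""
    current = initial_note
    outputs = []
    for _ in range(num_steps):                 # loop until the number threshold
        features = extract_features(current)   # step S12: feature extraction
        current = predict_note(features)       # step S13: current output note
        outputs.append(current)                # step S14: collect, then feed back
    return outputs                             # combined, these form the music
```

With a dummy model that simply raises each note by a semitone, `compose(60, lambda n: n, lambda f: f + 1, 4)` yields the four-note sequence 61, 62, 63, 64.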
The creation process of the training model comprises: acquiring music training samples, extracting their time dimension feature information and the corresponding note dimension feature information at different moments, and performing model training using both kinds of feature information to obtain the training model.
The preset number threshold is a threshold determined according to the desired duration of the music to be composed.
In this embodiment, the music training samples comprise a plurality of samples, and the music styles of different samples may be the same or different; for example, model training may use samples of different styles such as pop, blues, classical and jazz. The time dimension feature information of a training sample is its time information on the time axis, and its note dimension feature information is the feature information of its notes at different moments.
It can be seen that this embodiment first performs model training using the time dimension and note dimension feature information of music training samples to obtain the corresponding training model, then determines the output notes for a plurality of moments through that model, and combines those output notes to obtain the corresponding music.
A further embodiment of the invention discloses a specific automatic song composition method; compared with the previous embodiment, it further explains and optimizes the technical solution. Specifically:
In the creation process of the training model in the previous embodiment, extracting the note dimension feature information corresponding to any moment may specifically comprise:
extracting a first note feature vector, a second note feature vector, a third note feature vector, a fourth note feature vector and a fifth note feature vector of the note corresponding to the current moment;
the first note feature vector is a vector for recording a Digital value corresponding to the pitch of a corresponding note in a MIDI file (Musical Instrument Digital Interface), specifically, the MIDI format represents pitches from C-2 to G8 with 0-127, each liter is a half tone higher, the number is increased by one, for example, the MIDI value of a4 is 69, and the MIDI value of B4 is 71; the second note feature vector is a vector for recording the position of a corresponding note within one octave, specifically, in MIDI, one octave has twelve tones, and is represented by a 12-dimensional vector, each position represents whether to be played by 1 or 0, the position value of the currently played note is 1, and the other position values are 0; the third note feature vector is a vector for recording a relationship between a note at the current time and a note at the previous time, specifically, each position can be represented by one 2-dimension between two octaves (the first 12 notes and the last 12 notes) before and after the note at the current time, the first dimension represents whether the note at the current time is played, if so, the value is 1, otherwise, the value is 0; the second dimension represents whether the note is repeatedly played at the last moment (the repeated playing refers to playing again, but not always keeping playing), if the note is repeatedly played, the value is 1, otherwise, the value is 0; the fourth note feature vector is a vector for recording the association relationship at the previous time, and specifically, may be represented by a 12-dimensional vector, where the value at the position i is equal to the number of times that the note at the previous time was played, for example, the note at the current time is C, and the last time is played 2 times E, then the value at the fourth position is 2; the fifth tone feature vector is a vector for recording a beat, and specifically, the beat may be represented by 4-dimensional data, each 
dimension is 0 or 1, and the beat is cyclically recorded by 0000, 0001, 0010 … 1110, 1111 in the case of 4/4 beats.
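Two of the feature vectors above are simple enough to sketch directly; the function names are illustrative, and the 16-step cycle assumes one counter step per sixteenth note of a 4/4 bar.

```python
def octave_position(midi_pitch):
    """Second note feature vector: 12-dimensional indicator of the note's
    position within one octave; the played note's position is 1, others 0."""
    vec = [0] * 12
    vec[midi_pitch % 12] = 1
    return vec

def beat_vector(step, steps_per_cycle=16):
    """Fifth note feature vector: a 4-bit binary counter cycling through
    0000, 0001, ..., 1111, most significant bit first (4/4 time)."""
    n = step % steps_per_cycle
    return [(n >> b) & 1 for b in (3, 2, 1, 0)]
```

For example, `octave_position(69)` (A4) sets position 9 to 1, and `beat_vector(5)` is `[0, 1, 0, 1]`.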
In addition, in the creation process of the training model in the previous embodiment, performing model training using the time dimension feature information and the note dimension feature information to obtain the training model may specifically comprise:
inputting the time dimension feature information and the note dimension feature information into a pre-designed neural network model for training, to obtain the training model.
In this embodiment, the neural network model is preferably a model designed in advance based on an LSTM (Long Short-Term Memory) neural network.
Specifically, model training with any one of the music training samples may comprise: performing feature extraction on the sample to obtain its time dimension feature information and note dimension feature information, and inputting that feature information into the model designed on the basis of the LSTM neural network for training.
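Since the model predicts the next note from the current moment's features, each training sample yields (feature, next-note) pairs. A minimal sketch of this pairing, with hypothetical names and `extract_features` standing in for the five-vector extraction described above:

```python
def make_training_pairs(note_sequence, extract_features):
    """Pair each moment's note features (network input) with the note at the
    next moment (prediction target) for supervised training."""
    xs, ys = [], []
    for t in range(len(note_sequence) - 1):
        xs.append(extract_features(note_sequence[t]))  # features at moment t
        ys.append(note_sequence[t + 1])                # note at moment t + 1
    return xs, ys
```

A sequence of n notes thus produces n - 1 training pairs, which can then be fed to the LSTM-based model in batches.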
In addition, after the model training described above has produced a training model, the method may further comprise: updating the training model by a gradient descent method, so as to optimize it.
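The gradient descent update mentioned here is the standard rule w ← w − η·∂L/∂w; the patent does not specify the optimizer details, so the following is a generic single-step sketch over a flat parameter list.

```python
def sgd_step(weights, gradients, learning_rate=0.01):
    """One gradient descent update, w <- w - learning_rate * dL/dw,
    applied element-wise to a flat list of model parameters."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]
```

Repeating this step over batches of training pairs is what gradually optimizes the training model.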
Correspondingly, an embodiment of the invention also discloses an automatic song composition system which, as shown in FIG. 2, comprises a model creation module 11, a note determination module 12, a feature extraction module 13, a note acquisition module 14 and a music generation module 15; wherein
a model creation module 11, configured to create a training model in advance;
a note determining module 12, configured to determine an input note corresponding to the initial time to obtain a current input note;
the feature extraction module 13 is configured to perform feature extraction on the current input musical note to obtain features of the current input musical note;
the note acquiring module 14 is configured to input the features of the currently input notes into the training model to obtain notes output by the training model correspondingly, and obtain currently output notes;
the music generation module 15 is configured to take the current output note as the input note corresponding to the next moment, take the next moment as the current moment, and re-trigger the feature extraction module until the number of triggers reaches a preset threshold, and then combine the output notes corresponding to each moment to obtain the corresponding music;
the model creating module 11 includes:
a sample acquisition unit 111 for acquiring a music training sample;
a feature extraction unit 112, configured to extract time dimension feature information of the music training sample and corresponding note dimension feature information at different times;
and the model training unit 113 is configured to perform model training using the time dimension feature information and the note dimension feature information to obtain a training model.
The feature extraction unit 112 may be specifically configured to extract a first note feature vector, a second note feature vector, a third note feature vector, a fourth note feature vector, and a fifth note feature vector of a corresponding note at the current time;
the first note characteristic vector is a vector used for recording a corresponding numerical value of a pitch of a corresponding note in a MIDI file, the second note characteristic vector is a vector used for recording the position of the corresponding note within one octave, the third note characteristic vector is a vector used for recording the relation between the note at the current moment and the note at the previous moment, the fourth note characteristic vector is a vector used for recording the association relation at the previous moment, and the fifth note characteristic vector is a vector used for recording a beat.
In addition, the model training unit 113 may be specifically configured to input the time dimension feature information and the note dimension feature information into a pre-designed neural network model for model training, so as to obtain a training model.
More specifically, the model training unit 113 may be configured to input the time dimension feature information and the note dimension feature information into a model designed in advance based on an LSTM neural network to perform model training, so as to obtain a training model.
Further, the automatic song composition system in this embodiment may further include:
and the model updating module is used for updating the training model obtained by the model training unit by using a gradient descent method.
It can be seen that this embodiment likewise first performs model training using the time dimension and note dimension feature information of music training samples to obtain the corresponding training model, then determines the output notes for a plurality of moments through that model, and combines those output notes to obtain the corresponding music.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising" and any variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of further identical elements in the process, method, article or apparatus that comprises that element.
The automatic song composition method and system provided by the present invention have been described in detail above. Specific examples have been used to explain the principle and implementation of the invention, and the description of these embodiments is intended only to help in understanding the method and its core idea. At the same time, a person skilled in the art may, following the idea of the invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the invention.
Claims (8)
1. An automatic composing method, characterized by comprising:
step S11: determining an input note corresponding to the initial moment to obtain a current input note;
step S12: performing feature extraction on the current input note to obtain the feature of the current input note;
step S13: inputting the characteristics of the current input musical notes into a pre-established training model to obtain the musical notes correspondingly output by the training model and obtain the current output musical notes;
step S14: determining the current output note as the input note corresponding to the next moment, determining the next moment as the current moment, then re-entering the step S12 until the cycle number reaches a preset number threshold, and combining the output notes corresponding to each moment to obtain corresponding music;
wherein the creating process of the training model comprises the following steps: acquiring a music training sample, extracting time dimension characteristic information of the music training sample and corresponding note dimension characteristic information at different moments, and performing model training by using the time dimension characteristic information and the note dimension characteristic information to obtain a training model;
the time dimension characteristic information corresponding to any one music training sample is time information of the music training sample on a time axis;
and, the extraction process of the corresponding note dimension characteristic information at any moment comprises the following steps:
extracting a first note feature vector, a second note feature vector, a third note feature vector, a fourth note feature vector and a fifth note feature vector of the note corresponding to the current moment; wherein the first note feature vector records the digital value corresponding to the note's pitch in a MIDI file, the second note feature vector records the position of the note within one octave, the third note feature vector records the relationship between the note at the current moment and the note at the previous moment, the fourth note feature vector records the association relationship at the previous moment, and the fifth note feature vector records the beat.
2. The automatic composition method according to claim 1, wherein the process of performing model training using the time dimension feature information and the note dimension feature information to obtain the training model includes:
and inputting the time dimension characteristic information and the note dimension characteristic information into a pre-designed neural network model for model training to obtain the training model.
3. The automatic composition method according to claim 2,
the neural network model is a model designed in advance based on an LSTM neural network.
4. The automatic composition method according to claim 3, wherein the process of performing model training using the time dimension feature information and the note dimension feature information to obtain the training model further comprises:
and updating the training model by using a gradient descent method.
5. An automatic song composition system, characterized by comprising a model creation module, a note determination module, a feature extraction module, a note acquisition module and a music generation module; wherein
the model creating module is used for creating a training model in advance;
the note determining module is used for determining the input note corresponding to the initial moment to obtain the current input note;
the feature extraction module is used for performing feature extraction on the current input note to obtain features of the current input note;
the note acquiring module is used for inputting the features of the current input note into the training model to obtain the note correspondingly output by the training model as the current output note;
the music generation module is used for determining the current output note as the input note corresponding to the next moment, determining the next moment as the current moment, and restarting the feature extraction module until the number of starts reaches a preset count threshold, then combining the output notes corresponding to each moment to obtain the corresponding music;
wherein the model establishing module comprises:
the sample acquisition unit is used for acquiring music training samples;
the characteristic extraction unit is used for extracting time dimension characteristic information of the music training samples and corresponding note dimension characteristic information at different moments;
the model training unit is used for carrying out model training by utilizing the time dimension characteristic information and the note dimension characteristic information to obtain the training model;
the feature extraction unit is specifically configured to extract a first note feature vector, a second note feature vector, a third note feature vector, a fourth note feature vector, and a fifth note feature vector of a note corresponding to the current time;
the first note feature vector is a vector used for recording the digital value corresponding to the pitch of the note in a MIDI file, the second note feature vector is a vector used for recording the position of the note within one octave, the third note feature vector is a vector used for recording the relation between the note at the current moment and the note at the previous moment, the fourth note feature vector is a vector used for recording the association relation at the previous moment, and the fifth note feature vector is a vector used for recording the beat.
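The interaction of the note acquiring module and the music generation module in claim 5 forms a feedback loop: each output note becomes the input note for the next moment until a preset count is reached, and the outputs are combined into the piece. A sketch, with a hypothetical `toy_model` standing in for the trained model and the per-step feature extraction elided:

```python
def compose(first_note, model, steps):
    """Generation loop of claim 5: the note output at each moment is
    fed back as the input note for the next moment; after `steps`
    iterations (the preset count threshold) the outputs are combined."""
    melody = []
    current = first_note
    for _ in range(steps):
        # Feature extraction on `current` is elided in this sketch;
        # `model` maps the current input note to the next output note.
        current = model(current)
        melody.append(current)
    return melody

# Hypothetical stand-in model: rise by a whole tone, wrapping inside one octave.
toy_model = lambda pitch: 60 + (pitch - 60 + 2) % 12
piece = compose(first_note=60, model=toy_model, steps=8)
# piece -> [62, 64, 66, 68, 70, 60, 62, 64]
```

The loop terminates on the count threshold alone, mirroring the claim: the model is consulted once per moment and the combined list of emitted notes is the resulting music.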
6. The automatic composition system according to claim 5,
the model training unit is specifically configured to input the time dimension feature information and the note dimension feature information into a pre-designed neural network model for model training, so as to obtain the training model.
7. The automatic composition system according to claim 6,
the model training unit is specifically configured to input the time dimension feature information and the note dimension feature information into a model designed in advance based on an LSTM neural network to perform model training, so as to obtain the training model.
8. The automatic composition system of claim 7, further comprising:
and the model updating module is used for updating the training model obtained by the model training unit by using a gradient descent method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710317274.7A CN107123415B (en) | 2017-05-04 | 2017-05-04 | Automatic song editing method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710317274.7A CN107123415B (en) | 2017-05-04 | 2017-05-04 | Automatic song editing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107123415A CN107123415A (en) | 2017-09-01 |
CN107123415B true CN107123415B (en) | 2020-12-18 |
Family
ID=59727441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710317274.7A Expired - Fee Related CN107123415B (en) | 2017-05-04 | 2017-05-04 | Automatic song editing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107123415B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108009218B (en) * | 2017-11-21 | 2021-09-21 | 华南理工大学 | Clustering analysis-based personalized music collaborative creation matching method and system |
CN109192187A (en) * | 2018-06-04 | 2019-01-11 | 平安科技(深圳)有限公司 | Composing method, system, computer equipment and storage medium based on artificial intelligence |
CN108806657A (en) * | 2018-06-05 | 2018-11-13 | 平安科技(深圳)有限公司 | Music model training, musical composition method, apparatus, terminal and storage medium |
CN110660375B (en) * | 2018-06-28 | 2024-06-04 | 北京搜狗科技发展有限公司 | Method, device and equipment for generating music |
CN109360543B (en) * | 2018-09-12 | 2023-01-31 | 范子文 | Method and device for customizing pronunciation assembly |
CN109326270A (en) * | 2018-09-18 | 2019-02-12 | 平安科技(深圳)有限公司 | Generation method, terminal device and the medium of audio file |
CN109285560B (en) * | 2018-09-28 | 2021-09-03 | 北京奇艺世纪科技有限公司 | Music feature extraction method and device and electronic equipment |
CN109346045B (en) * | 2018-10-26 | 2023-09-19 | 平安科技(深圳)有限公司 | Multi-vocal part music generation method and device based on long-short time neural network |
CN109448684B (en) * | 2018-11-12 | 2023-11-17 | 合肥科拉斯特网络科技有限公司 | Intelligent music composing method and system |
CN109785818A (en) * | 2018-12-18 | 2019-05-21 | 武汉西山艺创文化有限公司 | A kind of music music method and system based on deep learning |
CN109727590B (en) * | 2018-12-24 | 2020-09-22 | 成都嗨翻屋科技有限公司 | Music generation method and device based on recurrent neural network |
CN110120212B (en) * | 2019-04-08 | 2023-05-23 | 华南理工大学 | Piano auxiliary composition system and method based on user demonstration audio frequency style |
CN110136678B (en) * | 2019-04-26 | 2022-06-03 | 北京奇艺世纪科技有限公司 | Music editing method and device and electronic equipment |
CN110288965B (en) * | 2019-05-21 | 2021-06-18 | 北京达佳互联信息技术有限公司 | Music synthesis method and device, electronic equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5846393A (en) * | 1981-09-14 | 1983-03-17 | カシオ計算機株式会社 | Automatic accompanying apparatus |
JP3407375B2 (en) * | 1993-12-28 | 2003-05-19 | ヤマハ株式会社 | Automatic arrangement device |
FR2785438A1 (en) * | 1998-09-24 | 2000-05-05 | Baron Rene Louis | MUSIC GENERATION METHOD AND DEVICE |
AUPR150700A0 (en) * | 2000-11-17 | 2000-12-07 | Mack, Allan John | Automated music arranger |
DE102004049457B3 (en) * | 2004-10-11 | 2006-07-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method and device for extracting a melody underlying an audio signal |
CN100373382C (en) * | 2005-09-08 | 2008-03-05 | 上海交通大学 | Rhythm character indexed digital music data-base based on contents and generation system thereof |
US9620092B2 (en) * | 2012-12-21 | 2017-04-11 | The Hong Kong University Of Science And Technology | Composition using correlation between melody and lyrics |
CN105893460B (en) * | 2016-03-22 | 2019-11-29 | 无锡五楼信息技术有限公司 | A kind of automatic creative method of music based on artificial intelligence technology and device |
CN106205572B (en) * | 2016-06-28 | 2019-09-20 | 海信集团有限公司 | Sequence of notes generation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN107123415A (en) | 2017-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107123415B (en) | Automatic song editing method and system | |
US7696426B2 (en) | Recombinant music composition algorithm and method of using the same | |
CN112382257B (en) | Audio processing method, device, equipment and medium | |
WO2020015153A1 (en) | Method and device for generating music for lyrics text, and computer-readable storage medium | |
US11948542B2 (en) | Systems, devices, and methods for computer-generated musical note sequences | |
US10957293B2 (en) | Systems, devices, and methods for varying musical compositions | |
JP2024038111A (en) | Information processing device, information processing method, and information processing program | |
CN109346045A (en) | Multi-part music generation method and device based on a long short-term memory neural network | |
US12014708B2 (en) | Systems, devices, and methods for harmonic structure in digital representations of music | |
CN110010159B (en) | Sound similarity determination method and device | |
US20220406283A1 (en) | Information processing apparatus, information processing method, and information processing program | |
CN105718486A (en) | Online query by humming method and system | |
JP7439755B2 (en) | Information processing device, information processing method, and information processing program | |
CN112669811A (en) | Song processing method and device, electronic equipment and readable storage medium | |
CN110517655B (en) | Melody generation method and system | |
CN109448684B (en) | Intelligent music composing method and system | |
US10431191B2 (en) | Method and apparatus for analyzing characteristics of music information | |
JP2013164609A (en) | Singing synthesizing database generation device, and pitch curve generation device | |
WO2021166745A1 (en) | Arrangement generation method, arrangement generation device, and generation program | |
CN112825244B (en) | Music audio generation method and device | |
CN110111813B (en) | Rhythm detection method and device | |
CN101710367A (en) | Computer composing method based on Schoenberg twelve-tone system | |
KR102227415B1 (en) | System, device, and method to generate polyphonic music | |
CN112489607A (en) | Method and device for recording songs, electronic equipment and readable storage medium | |
CN113851098B (en) | Melody style conversion method and device, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201218 Termination date: 20210504 |