CN105931625A - Rap music automatic generation method based on character input - Google Patents
- Publication number
- CN105931625A (application CN201610253695.3A)
- Authority
- CN
- China
- Prior art keywords
- riff
- music
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/141—Riff, i.e. improvisation, e.g. repeated motif or phrase, automatically added to a piece, e.g. in real time
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
The invention discloses a method for automatically generating rap music from character input. The method comprises the following steps: S1. a user inputs text information; S2. word segmentation is performed on the text to obtain multiple phrases; S3. candidate Riffs are screened from a material library according to the phrases obtained in step S2; S4. the segmented text is converted into segmented voice; S5. effectors are applied to the segmented voice and the candidate Riffs; and S6. the rap music is output. With the help of machine learning and related technologies, the general public can take part in music production and interaction, activities normally reserved for professionals, and create their own music: the user only needs to input text information, and the corresponding rap music is generated automatically.
Description
Technical field
The present invention relates to the technical field of music production, and in particular to a method for automatically generating rap music from character input.
Background technology
Looking back over the history of music, the ways music is created and interacted with have changed remarkably little. Even in today's highly developed civilization, music is typically created first by professionals and then reaches the public in the form of tapes, CDs, radio, or internet audio streams. Apart from occasional improvisation in live performance, or anecdotes in the vein of "the story behind the song", the whole process from creation to distribution involves the public hardly at all. Likewise, the interaction between music and its audience stops at "you write, I listen". Because there is no medium through which external factors such as the listener's type, emotion, or preferences can feed back into the music, the music cannot change in response to outside input.
In recent years, driven by frontier technologies such as machine learning and audio algorithms, digital audio workstations and all kinds of plug-ins for the PC (such as Cubase, Pro Tools, and Ableton Live) have appeared. The latest version of Ableton Live supports pitch-preserving time-stretching and slicing of audio files. However, because audio workstations focus on recording, mixing, and post-production, their use is confined to professionals such as recording engineers, musicians, and composers, and they remain far from the general public. Moreover, an audio workstation can only serve as a "tool for use"; it cannot fill the role of a "tool for creation". As a medium that conveys human ideas, a workstation operates under a person's direction, turning those ideas into music and polishing an existing demo into a high-quality song, on the premise that the musician supplies the complete musical thinking, which the workstation itself cannot provide. An endless stream of high-quality plug-ins (providing audio processing such as reverb and equalization) now approaches hardware in effect, further strengthening workstations' capabilities, yet to date no audio workstation can achieve "automatic music generation" or "interactive music generation".
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a method for automatically generating rap music from character input, capable of generating rap music automatically from the text information entered by a user.
This object is achieved through the following technical solution: a rap music automatic generation method based on character input, comprising the following steps:
S1. a user inputs text information;
S2. word segmentation is performed on the text information to obtain multiple phrases;
S3. candidate Riffs are screened from a material library according to the phrases obtained in step S2;
S4. the segmented text is converted into segmented voice;
S5. effectors are applied to the segmented voice and the candidate Riffs;
S6. the rap music is output.
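The six steps above can be read as a pipeline. The sketch below shows one possible end-to-end shape of it; every function name, the trivial length heuristic used for riff screening, and the library entries are hypothetical stand-ins, not the patent's actual implementation.

```python
def segment_text(text):
    """S2: placeholder word segmentation (a real system would use a
    Chinese segmenter such as jieba)."""
    return text.split()

def select_riffs(phrases, library):
    """S3: screen one candidate riff per phrase from the material
    library, here by a trivial beats-vs-phrase-length heuristic."""
    return [min(library, key=lambda r: abs(r["beats"] - len(p)))
            for p in phrases]

def synthesize(phrases):
    """S4: stand-in for text-to-speech; returns labelled voice segments."""
    return [f"voice({p})" for p in phrases]

def add_effects(segments, riffs):
    """S5: pair each voice segment and its riff with an effector."""
    return [(seg, riff["name"], "reverb") for seg, riff in zip(segments, riffs)]

def generate_rap(text, library):
    """S1-S6: text in, description of the assembled rap track out."""
    phrases = segment_text(text)            # S2
    riffs = select_riffs(phrases, library)  # S3
    voice = synthesize(phrases)             # S4
    return add_effects(voice, riffs)        # S5/S6

library = [{"name": "drum_loop", "beats": 4}, {"name": "bass_loop", "beats": 8}]
track = generate_rap("hello rap world", library)
```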
In step S3, the material library is screened to obtain candidate Riffs, with the objective of maximizing the overall harmony between the phrases obtained in step S2 and the accompaniment Riffs, and among the Riffs themselves.
The harmony includes consonance of rhythm, matching of tempo, and coincidence of stress.
Between step S3 and step S4, the method further includes a step of locally fine-tuning the relative positions of the phrases obtained in step S2.
Between steps S4 and S5, the method further includes performing a pitch-preserving time-stretch on the segmented voice according to the relative positions of the phrases obtained in step S2.
Before step S3, the method further includes a step of building the material library and labelling the attributes of the Riffs in it.
In step S5, the effectors include a reverb, a flanger, a delay, and an echo.
After step S6, the method further includes a step of sharing the rap music to social media.
The beneficial effects of the invention are: with the help of machine learning and related technologies, the general public can participate in music production and interaction, activities normally reserved for professionals, and create their own music; the user only needs to input text information, and the corresponding rap music is generated automatically.
Brief description of the drawings
Fig. 1 is a flow chart of the rap music automatic generation method based on character input according to the present invention.
Detailed description of the invention
The technical solution of the present invention is described in further detail below with reference to the accompanying drawing, but the protection scope of the present invention is not limited to what is stated below.
As shown in Fig. 1, the rap music automatic generation method based on character input comprises the following steps:
S1. a user inputs text information.
S2. word segmentation is performed on the text information to obtain multiple phrases.
S3. candidate Riffs are screened (i.e. coarsely selected) from the material library according to the phrases obtained in step S2.
In step S3, the material library is screened to obtain candidate Riffs, with the objective of maximizing the overall harmony between the phrases obtained in step S2 and the accompaniment Riffs, and among the Riffs themselves. The present invention uses an optimal matching algorithm to achieve this overall harmony between the segmented text and the accompaniment Riffs and among the Riffs; in this embodiment, the optimal matching algorithm is a gene pairing (sequence alignment) algorithm, such as the BLAST algorithm.
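The embodiment names gene-pairing alignment (e.g. BLAST) as one choice of optimal matching algorithm. Purely as an illustration of alignment-based scoring, and not the patent's actual method, the sketch below scores a phrase's stress pattern against a riff's accent pattern using Needleman-Wunsch global alignment ('S' for stressed, 'u' for unstressed); all scoring constants are made up.

```python
def align_score(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best global alignment score of two rhythm strings."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # aligning a prefix of `a` to nothing
        dp[i][0] = i * gap
    for j in range(1, m + 1):          # aligning a prefix of `b` to nothing
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

# A riff whose accent pattern coincides with the phrase scores higher.
assert align_score("SuSu", "SuSu") > align_score("SuSu", "uuuu")
```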
The harmony includes consonance of rhythm, matching of tempo, and coincidence of stress.
Before step S3, the method further includes building the material library and labelling the attributes of the Riffs in it. The labelling can be done in a semi-supervised learning mode or manually; in this embodiment, semi-supervised learning combined with manual annotation is used to add labels to all Riffs stored in the material library, i.e. to annotate attributes such as a Riff's tempo, length, root note, rhythm part (drum, guitar, bass, etc.), and emotion type.
A Riff comprises audio fragments such as Loops (e.g. drum, guitar, bass, strings, special sound effects) and VSTs (including MIDI files and virtual-instrument samples). Multiple different Riffs, arranged in the temporal order in which they are played, make up the Riff set of one track; the Riff sets of several tracks (commonly a drum-track Riff set, a guitar-track Riff set, a bass-track Riff set, a strings-track Riff set, a special-effects-track Riff set, etc.) constitute the musical part of a complete song.
The attributes of a Riff include the instrument it belongs to, its metre, tempo, duration, maximum time-stretch/compression ratio, style (rock, folk), emotion (soothing, agitated), and the section of a song it best suits (intro, climax, chorus).
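The attribute set just listed could be stored as a simple record per Riff; the field names and the sample values below are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RiffAttributes:
    """Hypothetical label record for one Riff in the material library."""
    instrument: str           # e.g. "drum", "guitar", "bass"
    metre: str                # e.g. "4/4"
    tempo_bpm: float          # playback tempo
    duration_beats: int       # length of the riff
    max_stretch_ratio: float  # largest allowed time-stretch/compress factor
    style: str                # e.g. "rock", "folk"
    emotion: str              # e.g. "soothing", "agitated"
    section: str              # e.g. "intro", "chorus", "climax"

riff = RiffAttributes("drum", "4/4", 90.0, 8, 1.25, "rock", "agitated", "chorus")
```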
Between step S3 and step S4, the method further includes a step of locally fine-tuning the relative positions of the phrases obtained in step S2, so as to maximize the harmony between the text and the Riffs (several regularization terms can be added to the optimization objective here to achieve specific goals).
S4. according to the relative position information of the phrases obtained after segmentation, the segmented text is converted into segmented voice with rhythm.
Between steps S4 and S5, the method further includes performing a corresponding pitch-preserving time-stretch on the segmented voice according to the relative positions of the phrases obtained in step S2. This embodiment uses the SOLA (synchronized overlap-add) algorithm for this operation. SOLA can speed up or slow down speech without changing its pitch, and is widely used in fields such as language repeaters and voice browsing; an improved SOLA is also the core component of commercial pitch-scale modification software, raising or lowering the pitch of speech while keeping its speed unchanged.
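As a toy illustration of the overlap-add family that SOLA belongs to, the sketch below time-stretches a plain list of samples with fixed-hop overlap-add and a linear cross-fade. Real SOLA additionally searches a cross-correlation window for the best overlap lag before each splice, which this sketch omits; the frame and overlap sizes are arbitrary.

```python
def ola_stretch(samples, rate, frame=256, overlap=64):
    """Return `samples` time-stretched by `rate` (>1 = faster/shorter),
    keeping pitch because frames are copied, not resampled."""
    hop_in = int((frame - overlap) * rate)  # analysis hop in the input
    out = []
    pos = 0
    while pos + frame <= len(samples):
        chunk = samples[pos:pos + frame]
        if not out:
            out.extend(chunk)
        else:
            # linear cross-fade over the overlap region
            for i in range(overlap):
                w = i / overlap
                out[-overlap + i] = out[-overlap + i] * (1 - w) + chunk[i] * w
            out.extend(chunk[overlap:])
        pos += hop_in
    return out

signal = [float(i % 32) for i in range(4096)]
slow = ola_stretch(signal, rate=0.5)  # half speed -> longer output
fast = ola_stretch(signal, rate=2.0)  # double speed -> shorter output
assert len(slow) > len(signal) > len(fast)
```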
S5. under certain constraints, a moderate random selection of effectors (each effector exists as a plug-in and is fabricated separately) is applied to the segmented voice and the candidate Riffs, so as to achieve overall melodiousness and diversity of the rap music.
A step of creating the effectors is further included before step S5.
In step S5, the effectors include a reverb, a flanger, a delay, and an echo.
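The "moderate random effectors under certain constraints" idea of step S5 could be sketched as follows, drawing from the four effectors listed above; the particular constraint (at most one time-based effect per chain) and all names are invented for illustration.

```python
import random

EFFECTORS = ["reverb", "flanger", "delay", "echo"]
TIME_BASED = {"delay", "echo"}  # hypothetical constraint category

def pick_effects(rng, max_effects=2):
    """Randomly choose up to `max_effects` effectors for one segment,
    allowing at most one time-based effect in the chain."""
    chosen = []
    for name in rng.sample(EFFECTORS, len(EFFECTORS)):  # random order
        if len(chosen) == max_effects:
            break
        if name in TIME_BASED and any(c in TIME_BASED for c in chosen):
            continue  # constraint: skip a second time-based effect
        chosen.append(name)
    return chosen

rng = random.Random(0)  # seeded for reproducibility
chains = [pick_effects(rng) for _ in range(10)]
assert all(sum(e in TIME_BASED for e in c) <= 1 for c in chains)
```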
S6. the rap music is output: the segmented voice and the candidate Riffs are ordered and combined to generate the rap music, which is then output.
After step S6, the method further includes a step of sharing the rap music to social media.
The above is only a preferred embodiment of the present invention. It should be understood that the present invention is not limited to the form disclosed herein, which is not to be taken as excluding other embodiments; it can be used in various other combinations, modifications, and environments, and can be altered, within the scope contemplated herein, by the above teachings or by the skill or knowledge of the relevant art. All changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present invention shall fall within the protection scope of the appended claims.
Claims (8)
1. A rap music automatic generation method based on character input, characterized by comprising the following steps:
S1. a user inputs text information;
S2. word segmentation is performed on the text information to obtain multiple phrases;
S3. candidate Riffs are screened from a material library according to the phrases obtained in step S2;
S4. the segmented text is converted into segmented voice;
S5. effectors are applied to the segmented voice and the candidate Riffs;
S6. the rap music is output.
2. The rap music automatic generation method based on character input according to claim 1, characterized in that: in step S3, the material library is screened to obtain candidate Riffs, with the objective of maximizing the overall harmony between the phrases obtained in step S2 and the accompaniment Riffs, and among the Riffs themselves.
3. The rap music automatic generation method based on character input according to claim 2, characterized in that: the harmony includes consonance of rhythm, matching of tempo, and coincidence of stress.
4. The rap music automatic generation method based on character input according to claim 1, characterized in that: between step S3 and step S4, the method further includes a step of locally fine-tuning the relative positions of the phrases obtained in step S2.
5. The rap music automatic generation method based on character input according to claim 4, characterized in that: between steps S4 and S5, the method further includes performing a pitch-preserving time-stretch on the segmented voice according to the relative positions of the phrases obtained in step S2.
6. The rap music automatic generation method based on character input according to claim 1, characterized in that: before step S3, the method further includes a step of building the material library and labelling the attributes of the Riffs in it.
7. The rap music automatic generation method based on character input according to claim 1, characterized in that: in step S5, the effectors include a reverb, a flanger, a delay, and an echo.
8. The rap music automatic generation method based on character input according to claim 1, characterized in that: after step S6, the method further includes a step of sharing the rap music to social media.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610253695.3A CN105931625A (en) | 2016-04-22 | 2016-04-22 | Rap music automatic generation method based on character input |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105931625A (en) | 2016-09-07 |
Family
ID=56839751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610253695.3A Pending CN105931625A (en) | 2016-04-22 | 2016-04-22 | Rap music automatic generation method based on character input |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105931625A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101399036A (en) * | 2007-09-30 | 2009-04-01 | 三星电子株式会社 | Device and method for conversing voice to be rap music |
CN101694772A (en) * | 2009-10-21 | 2010-04-14 | 北京中星微电子有限公司 | Method for converting text into rap music and device thereof |
CN103440862A (en) * | 2013-08-16 | 2013-12-11 | 北京奇艺世纪科技有限公司 | Method, device and equipment for synthesizing voice and music |
- 2016-04-22: application CN201610253695.3A filed in China; published as CN105931625A; status Pending
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018121368A1 (en) * | 2016-12-30 | 2018-07-05 | 阿里巴巴集团控股有限公司 | Method for generating music to accompany lyrics and related apparatus |
CN108268530A (en) * | 2016-12-30 | 2018-07-10 | 阿里巴巴集团控股有限公司 | Dub in background music generation method and the relevant apparatus of a kind of lyrics |
CN108268530B (en) * | 2016-12-30 | 2022-04-29 | 阿里巴巴集团控股有限公司 | Lyric score generation method and related device |
CN108648767A (en) * | 2018-04-08 | 2018-10-12 | 中国传媒大学 | A kind of popular song emotion is comprehensive and sorting technique |
CN108648767B (en) * | 2018-04-08 | 2021-11-05 | 中国传媒大学 | Popular song emotion synthesis and classification method |
CN111402843A (en) * | 2020-03-23 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Rap music generation method and device, readable medium and electronic equipment |
CN111402843B (en) * | 2020-03-23 | 2021-06-11 | 北京字节跳动网络技术有限公司 | Rap music generation method and device, readable medium and electronic equipment |
WO2022012164A1 (en) * | 2020-07-16 | 2022-01-20 | 百果园技术(新加坡)有限公司 | Method and apparatus for converting voice into rap music, device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication ||
Application publication date: 20160907 |