TWI784434B - System and method for automatically composing music using approaches of generative adversarial network and adversarial inverse reinforcement learning algorithm - Google Patents
- Publication number
- TWI784434B (Application number TW110108455A)
- Authority
- TW
- Taiwan
- Prior art keywords
- music
- note
- learning algorithm
- deep
- algorithm
- Prior art date
Landscapes
- Machine Translation (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
Description
The present invention relates to the technical field of automatic composition systems, and in particular to an automatic composition system and method using a generative adversarial network and an adversarial inverse reinforcement learning method.
Music is an extremely important part of human life. It serves not only as a way to relax in daily life; according to ancient Roman records, it also has curative effects on the human body and can improve mood. A special report lists seven major benefits that music offers: improved learning efficiency, stress relief, pain reduction, enhanced memory, relief of insomnia, improved exercise performance, and a happier mood.
Under traditional composing methods, however, a composer must study instrument technique and music theory for many years and still spend many days completing a single piece. To let people compose unique pieces efficiently, without being limited by their background in music theory or instruments, many automatic composition systems have been proposed. One such system uses a supervised deep learning algorithm as its model, but that approach reuses the same melodies excessively, so the music it produces has the disadvantage of being unpleasant to the ear.
From the above, it is clearly necessary to improve and redesign existing automatic composition systems so that they can produce music that is more melodious and more appealing to listeners. In view of this, the inventors devoted great effort to research and development and finally completed the present invention: an automatic composition system and method using a generative adversarial network and an inverse reinforcement learning method.
The main purpose of the present invention is to provide an automatic composition system and method using a generative adversarial network and an inverse reinforcement learning method. The automatic composition system is applied in an electronic device, enabling the device to generate a piece of music from a plurality of reference music data, and comprises: a music database, a music feature extraction unit, a first computing module, a second computing module, and a third computing module. The music database stores the plurality of reference music data. In particular, the music feature extraction unit performs a feature extraction process on each of the reference music data, thereby extracting a plurality of music features. Next, the first computing module uses a deep learning algorithm to perform a first operation on the music features, thereby obtaining at least one music probability feature and at least one pre-trained weight parameter. In addition, the second computing module uses a reinforcement learning algorithm to perform a second operation on the plurality of music features to obtain at least one note reward function.
Furthermore, the third computing module uses a deep reinforcement learning algorithm, initialized with the pre-trained weight parameters, to perform a third operation on the plurality of sets of music-theory data stored in a music theory database, the at least one note reward function, and the at least one pre-trained weight parameter, thereby obtaining a plurality of polyphonic reference music data; the polyphonic reference music data is music that is melodious and appealing to listeners.
In order to achieve the above main purpose of the present invention, the inventors provide an embodiment of the automatic composition system using a generative adversarial network and an inverse reinforcement learning method, applied in an electronic device so that the electronic device generates a piece of music from a plurality of reference music data. The automatic composition system comprises: a music database for storing the plurality of reference music data; a music feature extraction unit for performing a feature extraction process on each of the reference music data, thereby extracting a plurality of music features; a first computing module for performing a first operation on the plurality of music features using a deep reinforcement learning algorithm, thereby obtaining at least one music probability feature and at least one pre-trained weight parameter; a second computing module for performing a second operation on the plurality of music features using a reinforcement learning algorithm, thereby obtaining at least one note reward function; and a third computing module for performing an initialization setting with the at least one pre-trained weight parameter using a deep reinforcement learning algorithm, and performing a third operation on the plurality of sets of music-theory data stored in a music theory database, the at least one note reward function, and the at least one pre-trained weight parameter, thereby obtaining a plurality of polyphonic music data.
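The five components enumerated above can be sketched as a minimal Python skeleton. This is a hypothetical structure for illustration only; the patent describes the modules functionally and publishes no implementation, so all class, method, and field names here are assumptions.

```python
class AutoComposer:
    """Sketch of the claimed system: a music database (11), a feature
    extraction unit (12), and three computing modules (13, 14, 15)
    backed by a music theory database (16). Module bodies are stubs."""

    def __init__(self, reference_pieces, theory_rules):
        self.music_db = list(reference_pieces)   # music database (11)
        self.theory_db = list(theory_rules)      # music theory database (16)

    def extract_features(self):
        # Music feature extraction unit (12): one feature set per piece.
        return [{"piece": p} for p in self.music_db]

    def first_operation(self, features):
        # First computing module (13): deep learning producing music
        # probability features and pre-trained weights (stubbed).
        return {"probs": features, "weights": [0.0]}

    def second_operation(self, features):
        # Second computing module (14): AIRL yielding a note reward
        # function (stubbed as a constant-reward callable).
        return lambda note: 0.0

    def third_operation(self, weights, reward_fn):
        # Third computing module (15): DQN initialized with the
        # pre-trained weights, combined with the theory data.
        return [{"polyphonic": True} for _ in self.music_db]
```

A caller would chain these in order: extract features, run the first and second operations, then feed the resulting weights and reward function into the third operation.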
In addition, in order to achieve the above main purpose, the inventors also provide an embodiment of the automatic composition method using a generative adversarial network and an inverse reinforcement learning method, applied in an electronic device and implemented by a processor of the electronic device. The method comprises the following steps: (1) providing a music feature extraction unit for performing a feature extraction process on each of a plurality of music data stored in a music database of the electronic device, thereby extracting a plurality of music features; (2) providing a first computing module for performing a first operation on the plurality of music features using a deep learning algorithm, thereby obtaining at least one music probability feature and at least one pre-trained weight parameter; (3) providing a second computing module for performing a second operation on the plurality of music features using a reinforcement learning algorithm, thereby obtaining at least one note reward function; and (4) providing a third computing module for performing an initialization setting on the at least one pre-trained weight parameter using a deep reinforcement learning algorithm, and performing a third operation on the plurality of sets of music-theory data stored in the music theory database, the at least one note reward function, and the at least one pre-trained weight parameter, thereby obtaining a plurality of polyphonic music data.
To describe the proposed automatic composition system and method using a generative adversarial network and an inverse reinforcement learning method more clearly, preferred embodiments of the present invention are described in detail below with reference to the drawings.
FIG. 1 shows a perspective view of an electronic device in which an automatic composition system using a generative adversarial network and an inverse reinforcement learning method of the present invention is applied, and FIG. 2 shows a functional block diagram of that system. As shown in FIG. 1, the automatic composition system 1 of the present invention is applied in an electronic device 2. In one embodiment, the automatic composition system 1 is installed in an operating system (OS) of the electronic device 2, so that a processor of the electronic device 2, by executing the automatic composition system 1, can generate a piece of music from a plurality of reference music data. As shown in FIG. 2, the automatic composition system 1 mainly comprises: a music database 11, a music feature extraction unit 12, a first computing module 13, a second computing module 14, a third computing module 15, and a music theory database 16. The music database 11 stores a plurality of music data, and the music theory database 16 stores a plurality of music-theory data. The music feature extraction unit 12 performs a feature extraction process on each of the reference music data, thereby extracting a plurality of music features. More specifically, a music feature records the pitches of a piece of music as MIDI (Musical Instrument Digital Interface) values. FIG. 3 shows a schematic diagram of recording the pitches of a piece of music. In the matrix on the right of FIG. 3, each horizontal row represents the chord played at a beat, and each vertical column corresponds to a pitch recorded as a MIDI value; FIG. 3 covers chords over eight beats, and the matrix shows that the first note is played for two and a half beats. In other words, this recording scheme captures the polyphonic (harmony) melody of a piece. The schematic matrix of the music feature extraction unit 12 is shown in Equation (1):
X = [ x_{N,T} ], x_{N,T} = (p, a) …………….(1)
In Equation (1), N is the MIDI value of each pitch, T is the beat index, p records whether the note is played, and a is the corresponding articulation. It is further noted that the first computing module 13 uses a deep reinforcement learning algorithm to perform a first operation on the plurality of music features, thereby obtaining at least one music probability feature and at least one pre-trained weight parameter. FIG. 4 shows a functional block diagram of the first computing module. As shown in FIG. 4, the first computing module 13 comprises: a first computing unit 131, a second computing unit 132, a third computing unit 133, and a fourth computing unit 134. More specifically, the first computing unit 131 converts each music feature into a note vector feature; the note vector feature has a length of 79 and is composed of the current tone vector feature, the current pitch vector feature, the pitch vector feature of the previous note, the pitch vector feature of the next note, and the beat vector feature. Next, the second computing unit 132 uses a deep learning algorithm to perform a fourth operation on the note vector features along the time axis, thereby obtaining at least one time parameter, and the third computing unit 133 uses a deep learning algorithm to perform a fifth operation on the note vector features along the note axis, thereby obtaining at least one note parameter. Notably, the deep learning algorithm is a long short-term memory (LSTM) neural network algorithm. Because the second computing unit 132 and the third computing unit 133 run the LSTM along the time axis and the note axis respectively, the first computing module 13 adopts an LSTM neural network of bi-axial architecture.
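The length-79 note vector described above can be sketched as a concatenation of its five named components. The patent specifies only the total length and the component names; the per-component sizes below are assumptions chosen purely so the parts sum to 79.

```python
# Assumed component sizes (the patent gives only the total, 79).
TONE_LEN = 12      # current tone (pitch-class) one-hot
PITCH_LEN = 25     # current pitch, windowed one-hot
PREV_LEN = 19      # previous note's pitch vector
NEXT_LEN = 19      # next note's pitch vector
BEAT_LEN = 4       # beat-position vector

def one_hot(i, n):
    """Return a length-n list with a single 1.0 at index i mod n."""
    v = [0.0] * n
    v[i % n] = 1.0
    return v

def note_vector(tone, pitch, prev_pitch, next_pitch, beat):
    """Concatenate the five component vectors into one length-79
    feature vector, mirroring the composition described in the text."""
    vec = (one_hot(tone, TONE_LEN) + one_hot(pitch, PITCH_LEN)
           + one_hot(prev_pitch, PREV_LEN) + one_hot(next_pitch, NEXT_LEN)
           + one_hot(beat, BEAT_LEN))
    assert len(vec) == 79
    return vec
```

Each of the five one-hot blocks contributes exactly one active entry, so a well-formed note vector sums to 5.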
Following the above, the fourth computing unit 134 uses a deep learning algorithm (a non-recurrent linear algorithm) to perform a fifth operation on the time parameters and the note parameters, thereby obtaining at least one training weight parameter and the music probability feature, which is composed of at least one note probability feature and at least one articulation probability feature. It is further noted that, to prevent the LSTM neural network from overfitting, the first computing module 13 applies a dropout algorithm in the fourth and fifth operations.
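Dropout, as used here to regularize the LSTM, can be sketched in a few lines. This is the common inverted-dropout formulation; the patent does not specify the variant or the dropout rate, so both are assumptions for illustration.

```python
import random

def dropout(vec, rate, training=True, rng=random.Random(0)):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1 - rate) so the
    expected activation is unchanged; at inference, pass through."""
    if not training or rate == 0.0:
        return list(vec)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in vec]
```

Applying this after each LSTM layer's output is the usual placement; a fixed seed is used above only to make the sketch reproducible.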
Following the above, the second computing module 14 uses a reinforcement learning algorithm to perform a second operation on the plurality of music features, thereby obtaining at least one note reward function. More specifically, the reinforcement learning algorithm is an adversarial inverse reinforcement learning (AIRL) algorithm, whose architecture resembles the known generative adversarial guided cost learning (GAN-GCL) framework. In other words, the inverse reinforcement learning algorithm used by the second computing module 14 maximizes the reward and thereby obtains the note reward function, simultaneously training a generator and a discriminator. The discriminator can be derived from Equations (2), (3), and (4) below.
r_t(s, a) = c·r_mt(s, a) + (1 − c)·r_airl(s, a) ……………..(2)

L(θ) = E[(r(s, a) + γ·max_a' Q(s', a'; θ⁻) − Q(s, a; θ))²] …………….(3)

D_θ(τ) = (1/Z)·exp(r_θ(τ)) / ((1/Z)·exp(r_θ(τ)) + q(τ)) ………….…..(4)
In Equations (2) and (3), c is a constant, s is the current state, a is the current action, s' is the next state, a' is the next action, r_mt is the music-theory reward function, r_airl is the adversarial inverse reinforcement learning reward function, θ⁻ is the weights of the target Q network, and γ is the discount factor for future rewards. In Equation (4), θ is the weights of the Q network and q(τ) is the generator density; the actual distribution p(τ) is represented by a Boltzmann distribution whose reward function serves as the energy function. Continuing with the technique of the present invention, the third computing module 15 uses a deep reinforcement learning algorithm, initialized with the at least one pre-trained weight parameter, to perform a third operation on the plurality of sets of music-theory data stored in the music theory database 16, the at least one note reward function, and the at least one pre-trained weight parameter, thereby obtaining a plurality of polyphonic music data. In this embodiment, the deep reinforcement learning algorithm used by the third computing module 15 is a deep Q-learning network (DQN) algorithm combined with the bi-axial LSTM neural network algorithm described above. In more detail, the inventors ran simulations and experiments with different values of the reward discount factor c, summarized in Table (1) of the original specification.
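Equations (2) through (4) above can be illustrated numerically with scalar stand-ins. This is a minimal sketch only: the patent's actual reward networks, generator density, and partition constant are not published, so the values and the choice Z = 1 below are assumptions.

```python
import math

def combined_reward(r_mt, r_airl, c=0.5):
    # Equation (2): blend the music-theory reward with the AIRL reward.
    return c * r_mt + (1.0 - c) * r_airl

def dqn_loss(r, gamma, q_next_by_action, q_current):
    # Equation (3): squared TD error against a target-Q estimate,
    # where q_next_by_action lists Q(s', a'; theta_minus) per action.
    target = r + gamma * max(q_next_by_action)
    return (target - q_current) ** 2

def airl_discriminator(reward, log_generator_density):
    # Equation (4): GAN-GCL-style discriminator comparing the
    # Boltzmann-form reward term with the generator density
    # (partition constant Z taken as 1 for illustration).
    num = math.exp(reward)
    return num / (num + math.exp(log_generator_density))

r = combined_reward(1.0, 0.5, c=0.5)        # 0.5*1.0 + 0.5*0.5 = 0.75
loss = dqn_loss(r, 0.9, [1.0, 2.0], 1.5)    # (0.75 + 1.8 - 1.5)^2 = 1.1025
d = airl_discriminator(0.0, 0.0)            # exp(0)/(exp(0)+exp(0)) = 0.5
```

When the reward term and the generator density agree, the discriminator outputs 0.5, which is the equilibrium the adversarial training drives toward.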
Next, the inventors compared the polyphonic music data produced by the present invention with music produced by other algorithms. The comparison measures the difference between the algorithm-generated music data and existing human music (that is, the reference music data or master music data); the results are listed in Table (2) of the original specification.
As Table (2) shows, the configuration adopted in this embodiment, which combines music theory with the AIRL algorithm, obtains the lowest value on three of the difference metrics. Notably, a lower value means greater similarity to human-composed music; in other words, the smaller the difference value, the closer the music produced by the automatic composition system 1 of the present invention is to music composed by humans. The inventors further surveyed user preferences for music generated by different algorithm architectures and for human music, as shown in Table (3) of the original specification.
As Table (3) shows, the preference score obtained by the music-theory-plus-AIRL architecture adopted in this embodiment is second only to human-composed pieces, and the gap is small. Both the objective analysis in Table (2) and the subjective analysis in Table (3) lead to the conclusion that the music produced by the automatic composition system of the present invention, using a generative adversarial network and an inverse reinforcement learning method, is the closest to human-composed music. In other words, the pieces produced by the present invention are melodious and well liked.
The above completes the description of the automatic composition system using a generative adversarial network and an inverse reinforcement learning method of the present invention; the corresponding automatic composition method is described next. Continuing to refer to FIG. 1 and FIG. 2, and referring also to FIG. 5 and FIG. 6, which show the first and second flowcharts of the automatic composition method, the method is applied in an electronic device 2 so that the processor of the electronic device 2, by executing the method, can generate a piece of music from a plurality of reference music data. As shown in FIG. 5 and FIG. 6, the method comprises several steps. First, in step S1, the music feature extraction unit 12 performs a feature extraction process on each of the plurality of reference music data stored in the music database 11, thereby extracting a plurality of music features.
As shown in FIG. 5 and FIG. 6, the method then executes step S2: the first computing module uses a deep learning algorithm to perform a first operation on the plurality of music features, thereby obtaining at least one music probability feature and at least one pre-trained weight parameter. The method next executes step S3: the second computing module (AIRL) uses a reinforcement learning algorithm to perform a second operation on the plurality of music features, thereby obtaining at least one note reward function. Further, in step S4, the third computing module uses a deep reinforcement learning algorithm to perform an initialization setting on the at least one pre-trained weight parameter, and performs a third operation on the plurality of sets of music-theory data stored in the music theory database, the at least one note reward function, and the at least one pre-trained weight parameter, thereby obtaining a plurality of polyphonic reference music data.
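Steps S1 through S4 above form a linear pipeline, which can be sketched as plain function composition. Only the data flow follows the method as described; the step implementations are placeholder callables supplied by the caller.

```python
def run_pipeline(reference_pieces, theory_data,
                 extract, first_op, second_op, third_op):
    """Chain steps S1-S4: feature extraction -> deep learning ->
    AIRL reward learning -> DQN generation."""
    features = extract(reference_pieces)                    # S1
    probs, pretrained_weights = first_op(features)          # S2
    note_reward_fn = second_op(features)                    # S3
    return third_op(theory_data, note_reward_fn,            # S4
                    pretrained_weights)
```

Because S2 and S3 both consume the extracted features while S4 consumes the outputs of both, the composition is sequential but S4 is the only step that touches the music theory database.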
In more detail, step S2 comprises the following sub-steps. In step S21, a first computing unit 131 of the first computing module 13 converts each music feature into a note vector feature. Next, in step S22, a second computing unit 132 of the first computing module 13 uses a deep learning algorithm to perform a fourth operation on the note vector features along the time axis, thereby obtaining at least one time parameter. As shown in FIG. 5, the method then executes step S23, in which a third computing unit 133 of the first computing module 13 uses a deep learning algorithm to perform a fifth operation on the note vector features along the note axis, thereby obtaining at least one note parameter. Finally, in step S24, a fourth computing unit 134 of the first computing module 13 uses a deep learning algorithm (a non-recurrent linear algorithm) to perform a fifth operation on the time parameters and the note parameters, thereby obtaining at least one training weight parameter and the music probability feature composed of at least one note probability feature and at least one articulation probability feature.
The above has completely and clearly described the automatic composition system and method using a generative adversarial network and an inverse reinforcement learning method of the present invention. From the above, the present invention has the following advantages:
(1) The present invention mainly comprises a music database 11, a music feature extraction unit 12, a first computing module 13, a second computing module 14, and a third computing module 15, which together form the automatic composition system 1 using a generative adversarial network and an inverse reinforcement learning method. In particular, the music feature extraction unit 12 performs a feature extraction process on each of the reference music data, thereby extracting a plurality of music features, in which the harmony of the music is recorded in matrix form. In addition, the second computing module 14 uses an inverse reinforcement learning algorithm to perform a second operation on the plurality of music features to obtain at least one note reward function. The third computing module 15 uses a deep reinforcement learning algorithm, initialized with at least one pre-trained weight parameter, to perform a third operation on the plurality of sets of music-theory data, the note reward function, and the pre-trained weight parameter, thereby obtaining a plurality of polyphonic reference music data. The objective and subjective analyses above show that the music produced by the automatic composition system 1 of the present invention is the closest to human-composed pieces; moreover, the music it produces is both melodious and well liked.
It must be emphasized that the foregoing detailed description is a specific description of feasible embodiments of the present invention. These embodiments are not intended to limit the patent scope of the present invention; any equivalent implementation or modification that does not depart from the technical spirit of the present invention shall be included in the patent scope of this application.
<The present invention> 1: automatic composition system 2: electronic device 11: music database 12: music feature extraction unit 13: first computing module 131: first computing unit 132: second computing unit 133: third computing unit 134: fourth computing unit 14: second computing module 15: third computing module 16: music theory database S1~S4: steps S21~S24: steps
<Prior art> None
FIG. 1 shows a perspective view of an electronic device in which an automatic composition system using a generative adversarial network and an inverse reinforcement learning method according to the present invention is applied; FIG. 2 shows a functional block diagram of the automatic composition system using a generative adversarial network and an inverse reinforcement learning method of the present invention; FIG. 3 shows a schematic diagram of recording the pitches of a piece of music; FIG. 4 shows a functional block diagram of the first computing module; FIG. 5 shows the first flowchart of the automatic composition method using a generative adversarial network and an inverse reinforcement learning method of the present invention; and FIG. 6 shows the second flowchart of the automatic composition method using a generative adversarial network and an inverse reinforcement learning method of the present invention.
1: automatic composition system using a generative adversarial network and an inverse reinforcement learning method 11: music database 12: music feature extraction unit 13: first computing module 14: second computing module 15: third computing module 16: music theory database
Claims (17)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW110108455A TWI784434B (en) | 2021-03-10 | 2021-03-10 | System and method for automatically composing music using approaches of generative adversarial network and adversarial inverse reinforcement learning algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW110108455A TWI784434B (en) | 2021-03-10 | 2021-03-10 | System and method for automatically composing music using approaches of generative adversarial network and adversarial inverse reinforcement learning algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
TW202236173A TW202236173A (en) | 2022-09-16 |
TWI784434B true TWI784434B (en) | 2022-11-21 |
Family
ID=84957187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW110108455A TWI784434B (en) | 2021-03-10 | 2021-03-10 | System and method for automatically composing music using approaches of generative adversarial network and adversarial inverse reinforcement learning algorithm |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI784434B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI285880B (en) * | 2005-08-16 | 2007-08-21 | Univ Nat Chiao Tung | Expert system for automatic composition |
US20170358285A1 (en) * | 2016-06-10 | 2017-12-14 | International Business Machines Corporation | Composing Music Using Foresight and Planning |
TW201824249A (en) * | 2016-12-30 | 2018-07-01 | 香港商阿里巴巴集團服務有限公司 | Method for generating music to accompany lyrics and related apparatus |
US20200168196A1 (en) * | 2015-09-29 | 2020-05-28 | Amper Music, Inc. | Method of scoring digital media objects using musical experience descriptors to indicate what, where and when musical events should appear in pieces of digital music automatically composed and generated by an automated music composition and generation system |
- 2021-03-10: TW application TW110108455A filed; patent TWI784434B active
Also Published As
Publication number | Publication date |
---|---|
TW202236173A (en) | 2022-09-16 |