JP2019109357A - Method and apparatus for analyzing characteristics of music information - Google Patents
- Publication number
- JP2019109357A (application JP2017242127A)
- Authority
- JP
- Japan
- Prior art keywords
- probability
- transition
- performance
- string pattern
- improvisation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/683—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/08—Computing arrangements based on specific mathematical models using chaos models or non-linear system models
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/061—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/066—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for pitch analysis as part of wider processing for musical purposes, e.g. transcription, musical performance evaluation; Pitch recognition, e.g. in polyphonic sounds; Estimation or use of missing fundamental
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/081—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/086—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for transcription of raw audio or music data to a displayed or printed staff representation or to displayable MIDI-like note-oriented data, e.g. in pianoroll format
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/111—Automatic composing, i.e. using predefined musical rules
- G10H2210/115—Automatic composing, i.e. using predefined musical rules using a random process to generate a musical note, phrase, sequence or structure
- G10H2210/121—Automatic composing, i.e. using predefined musical rules using a random process to generate a musical note, phrase, sequence or structure using a knowledge base
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/011—Files or data streams containing coded musical information, e.g. for transmission
- G10H2240/046—File format, i.e. specific or non-standard musical file format used in or adapted for electrophonic musical instruments, e.g. in wavetables
- G10H2240/056—MIDI or other note-oriented file format
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/005—Algorithms for electrophonic musical instruments or musical processing, e.g. for automatic composition or resource allocation
- G10H2250/015—Markov chains, e.g. hidden Markov models [HMM], for musical processing, e.g. musical analysis or musical composition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Computational Mathematics (AREA)
- Algebra (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Acoustics & Sound (AREA)
- Library & Information Science (AREA)
- Probability & Statistics with Applications (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Nonlinear Science (AREA)
- Databases & Information Systems (AREA)
- Auxiliary Devices For Music (AREA)
Abstract
Description
The present invention provides a system and method for generating improvised-performance phrases that are not bound by music theory.
FIG. 1 is a schematic configuration diagram of the system according to this embodiment.
In addition to the basic programs, the programs relevant to this embodiment are:
(1) a music information coding unit 10, which analyzes and symbolizes an improvising performer's music data recorded on a music recording medium;
(2) a tone-sequence pattern extraction unit 11, which extracts all possible 1st- to n-th-order tone-sequence patterns as n-th-order Markov chains, so that probability analysis of the symbolized music data can be performed with a Markov model;
(3) a pitch-transition-sequence extraction unit 12, which obtains the pitch transition sequence of each extracted tone-sequence pattern;
(4) a transition-probability/occurrence-probability computation unit 13, which uses the Markov model to compute the transition probability of each pitch transition sequence and the occurrence probability of each sequence at each hierarchical level from 1st to n-th order; and
(5) an improvisation-phrase construction unit 14, which, based on the transition and occurrence probabilities, sorts the pitch transition sequences at each hierarchical level, identifies the statistically most likely pitch transition sequences, generates them in all keys based on 12-tone equal temperament, and renders them as musical scores, thereby generating improvised-performance phrases.
These programs are stored.
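The extraction performed by the tone-sequence pattern extraction unit can be sketched as a simple n-gram enumeration over the symbolized note sequence. The following Python is illustrative only and is not part of the patent; the function name and data representation are assumptions.

```python
def extract_patterns(notes, max_order):
    """Enumerate all contiguous subsequences of length 2 to max_order + 1.

    A pattern spanning k+1 notes corresponds to a k-th-order Markov
    transition (k preceding notes -> next note).
    """
    patterns = []
    for order in range(1, max_order + 1):
        win = order + 1  # a k-th-order pattern spans k+1 notes
        for i in range(len(notes) - win + 1):
            patterns.append(tuple(notes[i:i + win]))
    return patterns

# MIDI note numbers for a short illustrative phrase
notes = [60, 62, 64, 62, 60]
print(extract_patterns(notes, 2))
```

For a 5-note phrase and max order 2, this yields four 1st-order patterns and three 2nd-order patterns, mirroring the "all 1st- to n-th-order patterns" wording of the description.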
Here, a Markov model is a probabilistic model of time-series data in which the probability of the current state is determined by the n immediately preceding states.
In music, every piece has a key, and the key determines the overall pitch level of the piece and the role played by each pitch. To strip away such music-specific rules and theory, the present invention encodes, for each transition pattern, the first pitch as 0, a rise of one semitone as +1, and a fall of one semitone as -1, extracting only how the pitch moves relative to the first note of the pattern.
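The key-agnostic encoding described above can be sketched as follows; this is an illustrative interpretation, not code from the patent. Each pattern is re-expressed as semitone offsets from its first note, so transposed copies of the same shape collapse to one pattern.

```python
def relative_pitch_sequence(pattern):
    """Encode a pattern of MIDI note numbers as semitone offsets from
    the first note: first note -> 0, +1 per semitone up, -1 per
    semitone down."""
    first = pattern[0]
    return tuple(note - first for note in pattern)

# C4-E4-D4 and D4-F#4-E4 share the same relative shape, so they
# map to the same transition pattern regardless of key.
print(relative_pitch_sequence((60, 64, 62)))  # (0, 4, 2)
print(relative_pitch_sequence((62, 66, 64)))  # (0, 4, 2)
```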
First, in this step, in order to derive by computation the transition sequences with high transition probability from among the extracted sequences, the transition probability of each transition pattern is computed using a multiple (higher-order) Markov model.
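A minimal sketch of this computation is given below, assuming the transition probabilities are estimated as relative frequencies over the extracted relative-pitch sequences; the patent does not specify the estimator, so the counting approach here is an assumption.

```python
from collections import Counter, defaultdict

def transition_probabilities(sequences, order):
    """Estimate P(next offset | previous `order` offsets) by relative
    frequency over all relative-pitch sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - order):
            context = tuple(seq[i:i + order])
            counts[context][seq[i + order]] += 1
    probs = {}
    for context, nxt in counts.items():
        total = sum(nxt.values())
        probs[context] = {n: c / total for n, c in nxt.items()}
    return probs

seqs = [(0, 2, 4, 2), (0, 2, 2, 0)]
p = transition_probabilities(seqs, 1)
# After an offset of 2, the offsets 4, 2 and 0 were each observed once,
# so each continuation gets probability 1/3.
print(p[(2,)])
```

Running the same counts at each order from 1 to n gives the per-level transition probabilities referred to in the description.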
(5-2) A step of generating each transition sequence in all keys based on 12-tone equal temperament and rendering it as a musical score (step S7).
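Realizing a relative transition sequence in all keys of 12-tone equal temperament, as in step (5-2), amounts to transposing it over the 12 chromatic starting pitches. The sketch below is illustrative; the base note and representation are assumptions, and score rendering is omitted.

```python
def realize_in_all_keys(offsets, base_note=60):
    """Turn a relative-pitch sequence into 12 concrete note sequences
    (MIDI note numbers), one per chromatic starting pitch."""
    return [
        [base_note + root + off for off in offsets]
        for root in range(12)
    ]

phrases = realize_in_all_keys((0, 2, 4))
print(phrases[0])  # [60, 62, 64]  (starting on C)
print(phrases[1])  # [61, 63, 65]  (starting on C#)
```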
2…RAM
3…output/display unit
4…input unit
5…bus
6…data storage unit
7…program storage unit
8…improvising performer's music data
9…improvised-phrase information
10…music information coding unit
11…tone-sequence pattern extraction unit
12…pitch-transition-sequence extraction unit
13…transition-probability/occurrence-probability computation unit
14…improvisation-phrase construction unit
20…music storage medium
22…music data
Claims (11)
- 1. An improvised-performance analysis system comprising:
a tone-sequence pattern extraction unit that analyzes music data recording a performer's actual performance and extracts all 1st- to n-th-order tone-sequence patterns;
a transition-probability/occurrence-probability computation unit that computes the transition probability of each extracted tone-sequence pattern and its occurrence probability among all tone-sequence patterns; and
a performance-phrase construction unit that, based on the transition probabilities and occurrence probabilities, identifies statistically likely tone-sequence patterns and renders them as musical scores, thereby generating and outputting performance phrases of the performer.
- 2. The performance analysis system according to claim 1, wherein the performer's performance is an improvised performance.
- 3. The performance analysis system according to claim 1, wherein the tone-sequence pattern extraction unit extracts all possible 1st- to n-th-order tone-sequence patterns as n-th-order Markov chains, and the transition-probability/occurrence-probability computation unit uses a Markov model to obtain the transition probability of each pitch transition sequence at each hierarchical level from 1st to n-th order.
- 4. The performance analysis system, wherein the music data is an XML file.
- 5. The performance analysis system, wherein the music data is a MIDI source.
- 6. An improvised-performance analysis method comprising:
a tone-sequence pattern extraction step in which a computer analyzes music data recording a performer's actual performance and extracts all 1st- to n-th-order tone-sequence patterns;
a transition-probability/occurrence-probability computation step in which the computer computes the transition probability of each extracted tone-sequence pattern and its occurrence probability among all tone-sequence patterns; and
a performance-phrase construction step in which the computer, based on the transition probabilities and occurrence probabilities, identifies statistically likely tone-sequence patterns and renders them as musical scores, thereby generating and outputting performance phrases of the performer.
- 7. The performance analysis method according to claim 6, wherein the performer's performance is an improvised performance.
- 8. The performance analysis method according to claim 6, wherein the tone-sequence pattern extraction step extracts all possible 1st- to n-th-order tone-sequence patterns as n-th-order Markov chains, and the transition-probability/occurrence-probability computation step uses a Markov model to obtain the transition probability of each pitch transition sequence at each hierarchical level from 1st to n-th order.
- 9. A computer software program, stored on a storage medium, for executing improvised-performance analysis, the program causing a computer to execute:
a tone-sequence pattern extraction step of analyzing music data recording a performer's actual performance and extracting all 1st- to n-th-order tone-sequence patterns;
a transition-probability/occurrence-probability computation step of computing the transition probability of each extracted tone-sequence pattern and its occurrence probability among all tone-sequence patterns; and
a performance-phrase construction step of, based on the transition probabilities and occurrence probabilities, identifying statistically likely tone-sequence patterns and rendering them as musical scores, thereby generating and outputting performance phrases of the performer.
- 10. The computer software program according to claim 9, wherein the performer's performance is an improvised performance.
- 11. The computer software program according to claim 9, wherein the tone-sequence pattern extraction step extracts all possible 1st- to n-th-order tone-sequence patterns as n-th-order Markov chains, and the transition-probability/occurrence-probability computation step uses a Markov model to obtain the transition probability of each pitch transition sequence at each hierarchical level from 1st to n-th order.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017242127A (JP6722165B2) | 2017-12-18 | 2017-12-18 | Method and apparatus for analyzing characteristics of music information |
| US16/104,284 (US10431191B2) | 2017-12-18 | 2018-08-17 | Method and apparatus for analyzing characteristics of music information |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017242127A (JP6722165B2) | 2017-12-18 | 2017-12-18 | Method and apparatus for analyzing characteristics of music information |

Publications (3)

| Publication Number | Publication Date |
|---|---|
| JP2019109357A | 2019-07-04 |
| JP2019109357A5 | 2020-03-26 |
| JP6722165B2 | 2020-07-15 |

Family

ID=66815272

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| JP2017242127A (JP6722165B2, Active) | Method and apparatus for analyzing characteristics of music information | 2017-12-18 | 2017-12-18 |

Country Status (2)

| Country | Link |
|---|---|
| US | US10431191B2 |
| JP | JP6722165B2 |
Families Citing this family (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN113470601B | 2021-07-07 | 2023-04-07 | 南昌航空大学 | Automatic composition method and system |
Family Cites Families (15)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| DE3403550C2 | 1984-02-02 | 1986-04-30 | Adam Opel AG | Transport and storage rack |
| US20010044719A1 | 1999-07-02 | 2001-11-22 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for recognizing, indexing, and searching acoustic signals |
| JP3776673B2 | 2000-04-06 | 2006-05-17 | 独立行政法人科学技術振興機構 | Music information analysis apparatus, music information analysis method, and recording medium storing a music information analysis program |
| AU2002368387A1 | 2002-11-28 | 2004-06-18 | Agency For Science, Technology And Research | Summarizing digital audio data |
| US7323629B2 | 2003-07-16 | 2008-01-29 | Univ Iowa State Res Found Inc | Real time music recognition and display system |
| JP2007225661A | 2006-02-21 | 2007-09-06 | Univ Of Tokyo | Music information analysis method and apparatus |
| US7737354B2 | 2006-06-15 | 2010-06-15 | Microsoft Corporation | Creating music via concatenative synthesis |
| US20090071315A1 | 2007-05-04 | 2009-03-19 | Fortuna Joseph A | Music analysis and generation method |
| US8058544B2 | 2007-09-21 | 2011-11-15 | The University of Western Ontario | Flexible music composition engine |
| JP5463655B2 | 2008-11-21 | 2014-04-09 | ソニー株式会社 | Information processing apparatus, audio analysis method, and program |
| JP5625235B2 | 2008-11-21 | 2014-11-19 | ソニー株式会社 | Information processing apparatus, audio analysis method, and program |
| JP5593608B2 | 2008-12-05 | 2014-09-24 | ソニー株式会社 | Information processing apparatus, melody line extraction method, bass line extraction method, and program |
| JP5293460B2 | 2009-07-02 | 2013-09-18 | ヤマハ株式会社 | Singing-synthesis database generation apparatus and pitch curve generation apparatus |
| CN101950377A | 2009-07-10 | 2011-01-19 | 索尼公司 | Novel Markov sequence generator and new method for generating Markov sequences |
| JP6019858B2 | 2011-07-27 | 2016-11-02 | ヤマハ株式会社 | Music analysis apparatus and music analysis method |

Prosecution history:
- 2017-12-18: JP application JP2017242127A filed; granted as JP6722165B2 (Active)
- 2018-08-17: US application US16/104,284 filed; granted as US10431191B2 (Active)
Cited By (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| KR20210127480A | 2020-04-14 | 2021-10-22 | 에프알씨앤씨 주식회사 | Apparatus and method for providing sound-source-related services |
| KR102410513B1 | 2020-04-14 | 2022-06-20 | 에프알씨앤씨 주식회사 | Apparatus and method for providing sound-source-related services |
Also Published As

| Publication Number | Publication Date |
|---|---|
| US20190189100A1 | 2019-06-20 |
| JP6722165B2 | 2020-07-15 |
| US10431191B2 | 2019-10-01 |
Legal Events

| Date | Code | Title |
|---|---|---|
| 2019-12-13 | A521 (A523) | Request for written amendment filed |
| 2019-12-17 | A521 (A523) | Request for written amendment filed |
| 2019-12-17 | A621 | Written request for application examination |
| 2019-12-17 | A711 | Notification of change in applicant |
| 2019-12-17 | A871 | Explanation of circumstances concerning accelerated examination |
| 2019-12-17 | A521 (A821) | Request for written amendment filed |
| 2020-03-04 | A975 (A971005) | Report on accelerated examination |
| 2020-04-14 | A131 | Notification of reasons for refusal |
| 2020-04-23 | A521 (A523) | Request for written amendment filed |
| | TRDD | Decision of grant or rejection written |
| 2020-06-09 | A01 | Written decision to grant a patent or to grant a registration (utility model) |
| 2020-06-19 | A61 | First payment of annual fees (during grant procedure) |
| | R150 | Certificate of patent or registration of utility model (ref. document number: 6722165, JP) |
| | R250 | Receipt of annual fees |