WO2017195292A1 - Music structure analysis device, method, and program - Google Patents

Music structure analysis device, method, and program

Info

Publication number
WO2017195292A1
Authority
WO
WIPO (PCT)
Prior art keywords
section
music
development
structure analysis
feature
Prior art date
2016-05-11
Application number
PCT/JP2016/063981
Other languages
English (en)
Japanese (ja)
Inventor
四郎 鈴木
Original Assignee
Pioneer DJ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2016-05-11
Filing date
2016-05-11
Publication date
2017-11-16
Application filed by Pioneer DJ株式会社 filed Critical Pioneer DJ株式会社
Priority to JP2018516262A priority Critical patent/JPWO2017195292A1/ja
Priority to PCT/JP2016/063981 priority patent/WO2017195292A1/fr
Priority to EP16901640.9A priority patent/EP3457395A4/fr
Publication of WO2017195292A1 publication Critical patent/WO2017195292A1/fr

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0008 Associated control or indicating means
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/061 Musical analysis for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
    • G10H2210/076 Musical analysis for extraction of timing, tempo; Beat detection
    • G10H2210/081 Musical analysis for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
    • G10H2210/571 Chords; Chord sequences
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/121 Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods

Definitions

  • The present invention relates to a music structure analysis apparatus, a music structure analysis method, and a music structure analysis program.
  • Techniques are known for automatically analyzing the structure of music by assigning characteristic sections that characterize it, such as the so-called intro, A melody (Verse 1), B melody (Verse 2), hook (chorus, "sabi"), and outro.
  • Patent Literature 1 discloses a technique that assigns feature sections such as verses and refrains to music data by determining the similarity between segments (feature sections) of the music data.
  • An object of the present invention is to provide a music structure analysis apparatus, a music structure analysis method, and a music structure analysis program capable of easily assigning the characteristic sections that characterize music data.
  • The music structure analysis apparatus of the present invention is a music structure analysis apparatus that assigns feature sections to music data in which development points of the feature sections characterizing the structure of the music data have been set, the apparatus comprising: a position information acquisition unit that acquires position information of the development points; a sound number analysis unit that analyzes, based on the position information acquired by the position information acquisition unit, the number of sounds of different frequencies in each section between the development points; and a feature section allocation unit that assigns feature sections to the sections between the other development points based on the section between development points in which the number of sounds analyzed by the sound number analysis unit takes its maximum value.
  • The music structure analysis method of the present invention is a music structure analysis method for assigning feature sections to music data in which development points of the feature sections characterizing the structure of the music data have been set, the method comprising: a procedure for acquiring position information of the development points; a procedure for analyzing, based on the acquired position information, the number of sounds of different frequencies in each section between the development points; and a procedure for assigning feature sections to the sections between the other development points based on the section between development points in which the analyzed number of sounds takes its maximum value.
  • The music structure analysis program of the present invention causes a computer to function as the music structure analysis apparatus described above.
  • FIG. 1 is a schematic diagram showing the configuration of an acoustic system according to an embodiment of the present invention.
  • FIG. 1 shows an acoustic control system 1 according to an embodiment of the present invention.
  • The acoustic control system 1 includes two digital players 2, a digital mixer 3, a computer 4, and a speaker 5.
  • The digital player 2 includes a jog dial 2A, a plurality of operation buttons (not shown), and a display 2B; by operating the jog dial 2A or the operation buttons, the operator of the digital player 2 can output acoustic control information corresponding to the operation.
  • The acoustic control information is output to the computer 4 via a USB (Universal Serial Bus) cable 6 capable of bidirectional communication.
  • The digital mixer 3 includes an operation switch 3A, a volume adjustment lever 3B, and a left/right switching lever 3C; operating the switch 3A or the levers 3B and 3C outputs corresponding acoustic control information.
  • The acoustic control information is output to the computer 4 via the USB cable 7.
  • Music information processed by the computer 4 is input to the digital mixer 3, where the digital signal is converted into an analog signal and output to the speaker 5 through the analog cable 8.
  • The digital player 2 and the digital mixer 3 are connected to each other via a LAN (Local Area Network) cable 9 compliant with the IEEE 1394 standard, so that acoustic control information generated by operating the digital player 2 can also be output directly to the digital mixer 3 for DJ performance, without going through the computer 4.
  • FIG. 2 shows a functional block diagram of the computer 4 as a music structure analysis apparatus.
  • As a music structure analysis program executed on the arithmetic processing device 10, the computer 4 includes a position information acquisition unit 11, a sound number analysis unit 12, a bass level analysis unit 13, a ratio calculation unit 14, a feature section allocation unit 15, and a display information generation unit 16.
  • The position information acquisition unit 11 acquires, as the position information of each development point, the bar (measure) number at which the development point is set in the music data M1. Specifically, as shown in FIG. 3, the position information acquisition unit 11 acquires the position information (in this embodiment, the bar number) of the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 set at bar boundaries in the music data M1.
  • Each development point is set as follows: the number of sounds of different frequencies in each bar is analyzed by FFT or the like and the peaks of the sound pressure level are counted; the ratio of each bar's sound count to that of the bar with the maximum sound count in the music data M1 is then calculated; and a development point is set at each bar boundary where this ratio changes significantly (a code sketch of this follows below).
  • The development points can be set in the computer 4 using the sound number analysis unit 12, but music data M1 in which the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 have been set by analyzing the number of sounds in advance, as in this embodiment, may also be used.
  • The setting of the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 is not limited to the method described above; it may also be performed based on, for example, the similarity of phrases in the music data M1.
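  • As an illustration only (not part of the patent), the ratio-based development-point setting above might be sketched as follows in Python, assuming the per-bar sound counts have already been obtained; the function name and the 25% change threshold are invented for the example:

        import numpy as np

        def detect_development_points(counts_per_bar, change_threshold=0.25):
            """Set development points at bar boundaries where the ratio of the
            per-bar sound count to the maximum count changes significantly."""
            counts = np.asarray(counts_per_bar, dtype=float)
            ratios = counts / counts.max()  # assumes at least one non-silent bar
            points = []
            for i in range(1, len(ratios)):
                # A development point is the boundary before bar i when the
                # ratio jumps or drops by more than the chosen threshold.
                if abs(ratios[i] - ratios[i - 1]) > change_threshold:
                    points.append(i)
            return points

    For example, detect_development_points([3, 3, 12, 12, 5]) returns [2, 4], i.e., development points at the boundaries before bars 2 and 4.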
  • The sound number analysis unit 12 detects the signal level in each frequency band for each section between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 and analyzes the number of sounds of different frequencies.
  • The "number of sounds" may count sounds of different frequencies as distinct sounds, or may count a fundamental tone and its overtones together as a single sound of the same pitch.
  • The input music data M1 may be music data stored on a hard disk in the computer 4, music data recorded on a CD, Blu-ray disc, or the like inserted into a slot of the digital player 2, or music data downloaded from a network via a communication line.
  • For example, as shown in FIG. 4, the sound number analysis unit 12 counts the number of sounds in each section between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 by counting the number of peaks in the frequency spectrum.
  • In this embodiment, the analysis of sounds of different frequencies is performed using the FFT, but the present invention is not limited to this; the frequency transform may instead be performed using, for example, a discrete cosine transform or a discrete Fourier transform.
  • The sound number analysis unit 12 outputs the analysis result to the ratio calculation unit 14.
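  • As a minimal sketch of this peak counting (not from the patent): one FFT is taken over the whole section and local maxima of the normalized magnitude spectrum above a relative threshold are counted; the threshold value and the choice to count every spectral peak separately, rather than merging overtones, are assumptions:

        import numpy as np

        def count_sounds(section_samples, level_threshold=0.1):
            """Count sounds of different frequencies in one section between
            development points by counting peaks in the magnitude spectrum."""
            windowed = section_samples * np.hanning(len(section_samples))
            mag = np.abs(np.fft.rfft(windowed))
            peak = mag.max()
            if peak == 0.0:   # silent section: nothing to count
                return 0
            mag /= peak       # normalize so the threshold is relative
            inner = mag[1:-1]
            # A "sound" is a local maximum of the spectrum above the threshold.
            is_peak = (inner > mag[:-2]) & (inner > mag[2:]) & (inner > level_threshold)
            return int(np.sum(is_peak))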
  • The bass level analysis unit 13 is provided to allocate the hook (chorus) section among the sections between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4, using the per-bar average of the low-frequency sound pressure peak level, i.e., the low-frequency signal level.
  • The bass level analysis unit 13 analyzes the sound pressure peak level of frequencies lower than a predetermined frequency in each section between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4. Specifically, the bass level analysis unit 13 acquires the levels of bass sounds such as the bass drum and bass guitar, and calculates, for each section between development points, the per-bar average of the bass sound pressure peak levels. The bass level analysis unit 13 outputs the analysis result to the feature section allocation unit 15.
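  • The bass-level analysis might look like the following sketch; the 150 Hz cutoff stands in for the "predetermined frequency" of the patent and, like the function names, is chosen only for illustration:

        import numpy as np

        def bar_bass_peak_level(bar_samples, sample_rate=44100, cutoff_hz=150.0):
            """Peak level of the spectrum below cutoff_hz in one bar
            (bass drum, bass guitar, and similar low-frequency sounds)."""
            spectrum = np.abs(np.fft.rfft(bar_samples))
            freqs = np.fft.rfftfreq(len(bar_samples), d=1.0 / sample_rate)
            low_band = spectrum[freqs < cutoff_hz]
            return float(low_band.max()) if low_band.size else 0.0

        def section_bass_average(bars, sample_rate=44100):
            """Per-bar average of the low-frequency peak levels over one
            section between development points (bars: per-bar sample arrays)."""
            return float(np.mean([bar_bass_peak_level(b, sample_rate) for b in bars]))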
  • The feature section allocation unit 15 assigns to each section between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 a characteristic section such as an intro section, an A melody (Verse 1) section, a B melody (Verse 2) section, a hook (chorus) section, a C melody (Verse 3) section, or an outro section.
  • Based on the analysis result of the sound number analysis unit 12, the feature section allocation unit 15 searches for local maximum 1 and local maximum 2, i.e., sections whose sound count is larger than those of the preceding and following sections. The sections at local maximum 1 and local maximum 2 are set as candidates for the hook (chorus) section.
  • Based on the analysis result of the bass level analysis unit 13, the feature section allocation unit 15 obtains the average low-frequency sound pressure peak level of each feature section, determines whether the average level in the sections at local maximum 1 and local maximum 2 exceeds a predetermined threshold, and allocates the hook section accordingly.
  • The feature section allocation unit 15 assigns the section preceding each of local maximum 1 and local maximum 2 as the A melody (Verse 1) section and the following section as the B melody (Verse 2) section; subsequent sections are assigned as the C melody (Verse 3) section or the like.
  • Which feature section is assigned is determined by whether or not the number of sounds exceeds a predetermined threshold.
  • The predetermined threshold may be a fixed value smaller than the local maximum, or a value set as a predetermined ratio of the local maximum, varying with it.
  • The names of the assigned feature sections are arbitrary; as in FIG. 5, names such as A-Verse and B-Verse may be used.
  • The feature section allocation unit 15 assigns the intro section and the outro section in advance.
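  • Combining the two analyses, the hook-candidate selection could be sketched as below, assuming a "local maximum" is a section whose sound count exceeds both neighbors; the names find_hook_sections and bass_threshold are illustrative, not from the patent:

        def find_hook_sections(section_counts, section_bass, bass_threshold):
            """Return indices of sections allocated as hook (chorus) sections:
            local maxima of the sound count (local maximum 1, local maximum 2, ...)
            whose average low-frequency peak level also clears the threshold."""
            hooks = []
            for i in range(1, len(section_counts) - 1):
                is_local_max = (section_counts[i] > section_counts[i - 1]
                                and section_counts[i] > section_counts[i + 1])
                if is_local_max and section_bass[i] > bass_threshold:
                    hooks.append(i)
            return hooks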
  • The display information generation unit 16 generates, together with the music data M1, display information for the feature sections assigned by the feature section allocation unit 15. Specifically, as shown in FIG. 6, it generates display information that displays the characteristic sections and changes their color as playback of the music data M1 progresses.
  • The display information generated by the display information generation unit 16 is output to the display 2B serving as the display device of the digital player 2, so that the DJ performer can confirm which characteristic section is being played as the performance of the music data M1 progresses.
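  • The generated display information can be as simple as a colored timeline keyed to the assigned sections; in the following minimal sketch the color mapping and the data shape are chosen for illustration (the patent does not specify them):

        SECTION_COLORS = {"Intro": "grey", "Verse1": "blue", "Verse2": "green",
                          "Hook": "red", "Verse3": "purple", "Outro": "grey"}

        def build_display_info(sections):
            """sections: (name, start_bar, end_bar) tuples in playback order.
            Returns drawable timeline entries for the player's display 2B."""
            return [{"name": name, "start": start, "end": end,
                     "color": SECTION_COLORS.get(name, "white")}
                    for name, start, end in sections]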
  • The position information acquisition unit 11 acquires the position information of the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 in the music data M1 (step S1).
  • The sound number analysis unit 12 analyzes the number of sounds in each section between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 (step S2).
  • The ratio calculation unit 14 calculates, taking as a reference the section between the development points Pn and Pn+3 that has the maximum number of sounds, the ratio of the number of sounds in each of the other sections between the development points P1, P2, ..., Pe-4 (step S3).
  • The feature section allocation unit 15 assigns the intro section to the section from the start point of the music data M1 to the first development point P1 (step S4). Subsequently, the feature section allocation unit 15 assigns the outro section to the section from the last development point Pe-4 to the end point of the music data M1 (step S5).
  • The feature section allocation unit 15 then searches for a local maximum of the sound count among the sections between development points other than the intro and outro sections (step S6). The search may start from the section following the intro section or from the section preceding the outro section. When a section with a local maximum is found, the feature section allocation unit 15 obtains the average of the low-frequency sound pressure peak levels in that section between the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 (step S7).
  • The feature section allocation unit 15 determines whether the average low-frequency sound pressure peak level in the section with the local maximum exceeds a predetermined threshold (step S8). If it is equal to or below the threshold, the feature section allocation unit 15 searches for the next local maximum; if it exceeds the threshold, the feature section allocation unit 15 allocates a hook section to that section (step S9). The feature section allocation unit 15 repeats steps S6 to S9 for all sections with local maxima in the music data M1 (step S10). Steps S8 and S9 are performed to improve the detection accuracy of the hook section; alternatively, a hook section may be allocated based only on the search for local maxima of the number of sounds in the sections between development points.
  • Next, the feature section allocation unit 15 acquires the number of sounds in the sections between other development points before and after each section set as the hook section (step S11).
  • The feature section allocation unit 15 determines whether the number of sounds in each such section exceeds a predetermined threshold (step S12).
  • When the number of sounds exceeds the threshold, the feature section allocation unit 15 assigns an A melody (Verse 1) section to that section (step S13); when it is equal to or below the threshold, the feature section allocation unit 15 assigns a B melody (Verse 2) section to that section (step S14).
  • The feature section allocation unit 15 repeats this until feature sections have been assigned to the sections between all the development points P1, P2, ..., Pn, Pn+3, ..., Pe-4 (step S15).
  • Finally, the feature section allocation unit 15 outputs the assignment result to the display information generation unit 16; the display information generation unit 16 generates display information based on the assignment result and outputs it to the display 2B of the digital player 2 (step S16).
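  • Read as code, steps S1 to S16 amount to roughly the following driver; it reuses the illustrative find_hook_sections helper sketched above, and verse_threshold is likewise an assumed parameter, not a value given by the patent:

        def analyse_structure(section_counts, section_bass,
                              bass_threshold, verse_threshold):
            """Label each section between development points, in order."""
            labels = [None] * len(section_counts)
            labels[0] = "Intro"    # step S4: up to the first development point
            labels[-1] = "Outro"   # step S5: after the last development point
            # Steps S6-S10: hook sections at confirmed local maxima.
            for i in find_hook_sections(section_counts, section_bass,
                                        bass_threshold):
                labels[i] = "Hook"
            # Steps S11-S15: label the remaining sections around the hooks.
            for i, label in enumerate(labels):
                if label is None:
                    labels[i] = ("Verse1" if section_counts[i] > verse_threshold
                                 else "Verse2")
            return labels          # step S16: handed to display generation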
  • As described above, according to the present embodiment, feature sections can be assigned to the music data M1 easily and quickly.
  • Furthermore, a user giving a DJ performance can visually recognize which feature section is currently being played, and can therefore give a more sophisticated DJ performance.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

A music structure analysis device (4) assigns feature sections that characterize the structure of music data (M1) to music data (M1) in which development points of the feature sections have been set. The music structure analysis device (4) comprises: a position information acquisition unit (11) for acquiring the position information of the development points; a sound number analysis unit (12) for analyzing the number of sounds of different frequencies in each section between the development points, based on the position information of the development points acquired by the position information acquisition unit (11); and a feature section allocation unit (15) for assigning the feature sections to a section between other development points, based on the section between development points determined to contain a local maximum of the number of sounds analyzed by the sound number analysis unit (12).
PCT/JP2016/063981 2016-05-11 2016-05-11 Music structure analysis device, method, and program WO2017195292A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018516262A JPWO2017195292A1 (ja) 2016-05-11 2016-05-11 Music structure analysis device, music structure analysis method, and music structure analysis program
PCT/JP2016/063981 WO2017195292A1 (fr) 2016-05-11 2016-05-11 Music structure analysis device, method, and program
EP16901640.9A EP3457395A4 (fr) 2016-05-11 2016-05-11 Music structure analysis device, method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/063981 WO2017195292A1 (fr) 2016-05-11 2016-05-11 Music structure analysis device, method, and program

Publications (1)

Publication Number Publication Date
WO2017195292A1 true WO2017195292A1 (fr) 2017-11-16

Family

ID=60266426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/063981 WO2017195292A1 (fr) 2016-05-11 2016-05-11 Music structure analysis device, method, and program

Country Status (3)

Country Link
EP (1) EP3457395A4 (fr)
JP (1) JPWO2017195292A1 (fr)
WO (1) WO2017195292A1 (fr)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4926737A (en) * 1987-04-08 1990-05-22 Casio Computer Co., Ltd. Automatic composer using input motif information
EP2088518A1 (fr) * 2007-12-17 2009-08-12 Sony Corporation Procédé pour l'analyse de la structure musicale

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4775380B2 (ja) Device and method for grouping temporal segments of music
JP2012194387A (ja) Intonation determination device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3457395A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022070396A1 (fr) Music analysis device, music analysis method, and program
WO2023054237A1 (fr) Sound effect output device

Also Published As

Publication number Publication date
EP3457395A1 (fr) 2019-03-20
EP3457395A4 (fr) 2019-10-30
JPWO2017195292A1 (ja) 2019-03-07

Similar Documents

Publication Publication Date Title
US11915725B2 (en) Post-processing of audio recordings
US9117429B2 (en) Input interface for generating control signals by acoustic gestures
JP4650662B2 (ja) Signal processing device, signal processing method, program, and recording medium
JP4645241B2 (ja) Audio processing device and program
KR20090130833A (ko) System and method for automatically generating haptic events from a digital audio file
WO2017057530A1 (fr) Audio processing device and audio processing method
JPH04195196A (ja) MIDI code creation device
WO2017195292A1 (fr) Music structure analysis device, method, and program
CN108369800B (zh) Sound processing device
US9087503B2 (en) Sampling device and sampling method
JP6176480B2 (ja) Musical tone generation device, musical tone generation method, and program
JP6281211B2 (ja) Acoustic signal alignment device, alignment method, and computer program
WO2017135350A1 (fr) Recording medium, acoustic processing device, and acoustic processing method
JP6625202B2 (ja) Music structure analysis device, music structure analysis method, and music structure analysis program
Stöter et al. Unison Source Separation.
CN114724583A (zh) Method, apparatus, device, and storage medium for locating a music segment
US10424279B2 (en) Performance apparatus, performance method, recording medium, and electronic musical instrument
JP2015087436A (ja) Audio processing device, control method for audio processing device, and program
JP6357772B2 (ja) Electronic musical instrument, program, and sounding pitch selection method
WO2024034115A1 (fr) Audio signal processing device, audio signal processing method, and program
WO2024034118A1 (fr) Audio signal processing device and method, and associated program
JP5151603B2 (ja) Electronic musical instrument
WO2024034116A1 (fr) Audio data processing device, audio data processing method, and program
JP4238807B2 (ja) Device for determining waveform data for a sound source
JP4094441B2 (ja) Electronic musical instrument

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018516262

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16901640

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016901640

Country of ref document: EP

Effective date: 20181211