WO2012074070A1 - Retrieval of musical tone data based on rhythm pattern similarity - Google Patents
Retrieval of musical tone data based on rhythm pattern similarity
- Publication number
- WO2012074070A1 (PCT/JP2011/077839)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rhythm
- input
- rhythm pattern
- pattern
- data
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/40—Rhythm
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/005—Musical accompaniment, i.e. complete instrumental rhythm synthesis added to a performed melody, e.g. as output by drum machines
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/071—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for rhythm pattern analysis or rhythm style recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/341—Rhythm pattern selection, synthesis or composition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/341—Rhythm pattern selection, synthesis or composition
- G10H2210/361—Selection among a set of pre-established rhythm patterns
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
- G10H2240/141—Library retrieval matching, i.e. any of the steps of matching an inputted segment or phrase with musical database contents, e.g. query by humming, singing or playing; the steps may include, e.g. musical analysis of the input, musical feature extraction, query formulation, or details of the retrieval process
Definitions
- The present invention relates to the retrieval of musical tone data based on the similarity of rhythm patterns, and more particularly to a musical tone data processing apparatus, a musical tone data creation system, a method, and a program using this retrieval technique.
- DAW: Digital Audio Workstation
- PC: Personal Computer
- Patent Document 1 discloses a technique in which, when a user inputs a rhythm pattern, music data corresponding to a rhythm pattern similar to the input rhythm pattern is retrieved from music data stored in a memory and presented to the user. Patent Document 2 discloses a technique in which, when a time-series signal that alternately repeats an on state and an off state is input, a search unit searches for rhythm data having a variation pattern that is the same as or similar to the input time-series signal, and information associated with the music (such as a song name) stored in correspondence with the retrieved rhythm data is added and output as a search result.
- In the techniques described in Patent Documents 1 and 2, the rhythm pattern is input directly via an input device such as a pad or a keyboard, so the input depends on the user's sense of timing, and timing errors may occur in the input rhythm due to that sense of elapsed time.
- As a result, a rhythm pattern different from the one intended by the user (for example, a sixteenth-note phrase instead of an eighth-note phrase) may be output as a search result.
- The present invention has been made in view of the above background, and an object of the present invention is to retrieve musical tone data of a phrase composed of a rhythm pattern whose degree of similarity with the rhythm pattern intended by the user satisfies a predetermined condition.
- To solve the above problem, the musical sound data processing device of the present invention comprises: a storage unit that stores musical sound data indicating a plurality of sounds in a predetermined period in association with a musical sound rhythm pattern representing the arrangement of the sounding times of the plurality of sounds; a notification unit that advances a specified time within the period as time progresses and notifies the user of the specified time; an acquisition unit that, based on operations input by the user while the specified time is being notified by the notification unit, acquires an input rhythm pattern representing the sequence of specified times corresponding to the pattern of those operations; and a search unit that searches the musical sound data stored in the storage unit and specifies musical sound data associated with a musical sound rhythm pattern whose degree of similarity with the input rhythm pattern meets a predetermined condition.
- In a preferred aspect, the storage unit stores a rhythm category, determined based on the intervals of the sounding times represented by the musical sound rhythm pattern, in association with the musical sound rhythm pattern. The apparatus further comprises a determination unit that determines the rhythm category to which the input rhythm pattern belongs based on the intervals of the specified times represented by the input rhythm pattern, and a calculation unit that calculates a distance between the input rhythm pattern and the musical sound rhythm pattern. The search unit calculates the degree of similarity between the input rhythm pattern and the musical sound rhythm pattern based on the relationship between the rhythm category to which the input rhythm pattern belongs and the rhythm category to which the musical sound rhythm pattern belongs, and on the calculated distance, and the musical sound data specified by the search unit is the musical sound data associated with a musical sound rhythm pattern whose calculated degree of similarity with the input rhythm pattern meets the predetermined condition.
- In another preferred aspect, the search unit compares an input time interval histogram, representing the frequency distribution of the sounding time intervals represented by the input rhythm pattern, with rhythm category histograms, each representing the frequency distribution of the sounding time intervals in the musical sound rhythm patterns of one rhythm category, and specifies the rhythm category whose histogram shows a high degree of similarity to the input time interval histogram. The musical sound data specified by the search unit is then, among the musical sound rhythm patterns associated with the specified rhythm category, the musical sound data associated with a musical sound rhythm pattern whose degree of similarity meets the predetermined condition.
- In another preferred aspect, the predetermined period is made up of a plurality of sections, and the storage unit stores, for each section, a musical sound rhythm pattern representing the arrangement of the sounding times of the plurality of sounds in association with the musical sound data. The calculation unit calculates, for each section, the distance between the input rhythm pattern and the musical sound rhythm pattern stored in the storage unit, and the search unit calculates the degree of similarity between the input rhythm pattern and the musical sound rhythm pattern based on the distance calculated by the calculation unit for each section and on the relationship between the rhythm category to which the input rhythm pattern belongs and the rhythm category to which the musical sound rhythm pattern belongs. The musical sound data specified by the search unit is the musical sound data associated with a musical sound rhythm pattern whose calculated degree of similarity with the input rhythm pattern meets the predetermined condition.
- In another preferred aspect, the apparatus comprises a supply unit that supplies the musical sound data specified by the search unit to an audio output unit that outputs a sound according to the musical sound data in synchronization with the notification of the specified time by the notification unit.
- In another preferred aspect, the storage unit stores a musical tone pitch pattern representing the arrangement of the pitches of the sounds indicated by the musical sound data in association with the musical sound data, and the apparatus comprises a pitch pattern acquisition unit that acquires an input pitch pattern representing a sequence of pitches based on operations input by the user while the specified time is being notified by the notification unit. The search unit calculates the degree of similarity between the input rhythm pattern and the musical sound rhythm pattern based on the variance of the pitch differences of the respective sounds in the input pitch pattern and the musical tone pitch pattern, and the musical sound data specified by the search unit is the musical sound data associated with a musical sound rhythm pattern whose calculated degree of similarity with the input rhythm pattern meets a predetermined condition.
- In another preferred aspect, the storage unit stores a musical tone velocity pattern representing the sequence of the intensities of the sounds indicated by the musical sound data in association with the musical sound data, and the apparatus comprises a velocity pattern acquisition unit that acquires an input velocity pattern representing a sequence of sound intensities based on operations input by the user while the specified time is being notified by the notification unit. The search unit calculates the degree of similarity between the input rhythm pattern and the musical sound rhythm pattern based on the absolute values of the differences in sound intensity between the input velocity pattern and the musical tone velocity pattern, and the musical sound data specified by the search unit is the musical sound data associated with a musical sound rhythm pattern whose calculated degree of similarity with the input rhythm pattern meets a predetermined condition.
- In another preferred aspect, the storage unit stores a musical tone duration pattern representing the sequence of the lengths of the sounds indicated by the musical sound data in association with the musical sound data, and the apparatus comprises a duration pattern acquisition unit that acquires an input duration pattern representing a sequence of sound lengths based on operations input by the user while the specified time is being notified by the notification unit. The search unit calculates the degree of similarity between the input rhythm pattern and the musical sound rhythm pattern based on the absolute values of the differences in sound length between the input duration pattern and the musical tone duration pattern, and the musical sound data specified by the search unit is the musical sound data associated with a musical sound rhythm pattern whose degree of similarity with the input rhythm pattern meets a predetermined condition.
- The present invention also provides a music data creation system comprising an input device to which a performance operation by a user is input and the musical sound data processing device according to any one of claims 1 to 8, wherein the musical sound data processing device acquires, as the input rhythm pattern, the sequence of times at which performance operations are input to the input device while the specified time in the predetermined period is being notified by the notification unit of the musical sound data processing device.
- The present invention also provides a computer-readable storage medium storing a program that causes a computer to execute: a procedure of storing musical sound data indicating a plurality of sounds in a predetermined period in association with a musical sound rhythm pattern representing the arrangement of the sounding times of the plurality of sounds; a procedure of notifying the user of a specified time that advances within the period; a procedure of obtaining an input rhythm pattern representing the sequence of specified times corresponding to the pattern of operations input by the user; and a procedure of searching the musical sound data stored in a storage device and identifying musical sound data associated with a musical sound rhythm pattern whose degree of similarity with the input rhythm pattern meets a predetermined condition.
- System configuration diagram according to the first embodiment.
- Diagram showing the contents of the rhythm DB.
- Block diagram showing the functional configuration of the information processing apparatus according to the first embodiment.
- Flowchart of the search processing.
- Diagram showing the onset time interval distribution tables.
- Schematic diagram for explaining the deviation calculation between rhythm patterns.
- Schematic diagram for explaining the processing in the loop playback mode.
- Schematic diagram for explaining the processing in the performance playback mode.
- Schematic diagram showing the rhythm input device according to the second embodiment.
- FIG. 1 is a configuration diagram of a system according to the first embodiment of the present invention.
- This music data creation system 100 includes a rhythm input device 10 and an information processing device 20, which are connected by a communication line so that they can communicate with each other. This communication may be realized wirelessly.
- the rhythm input device 10 includes, for example, an electronic pad as input means.
- When the user strikes the hitting surface of the electronic pad provided on the rhythm input device 10, the rhythm input device 10 inputs, to the information processing apparatus 20 in units of one measure, trigger data indicating that a performance operation has been performed and velocity data indicating the strength of that operation (that is, the strength of the hit).
- Trigger data is generated every time the user strikes the hitting surface of the electronic pad, and velocity data is associated with each piece of trigger data.
- a set of trigger data and velocity data generated in one measure represents a rhythm pattern (referred to as an input rhythm pattern) input by the user using the rhythm input device 10 in the one measure.
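- For illustration only (this structure is not part of the patent disclosure, and the type and field names below are hypothetical), the trigger and velocity data collected over one measure can be pictured as follows.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NoteEvent:
    onset_time: float  # time of the hit within the measure, normalized to 0..1
    velocity: int      # strength of the hit reported with the trigger data

@dataclass
class InputRhythmPattern:
    """Trigger and velocity data collected from the pad during one measure."""
    events: List[NoteEvent]

    def onset_times(self) -> List[float]:
        # Only the sequence of onset times is used for the rhythm search.
        return sorted(e.onset_time for e in self.events)
```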
- the rhythm input device 10 is an example of an input device to which a performance operation by a user is input.
- the information processing apparatus 20 is, for example, a PC.
- The modes of operation available when the information processing apparatus 20 executes the application program are a loop playback mode, a performance playback mode, and a performance loop playback mode.
- the user can change the operation mode through an operation unit 25 (described later) provided in the information processing apparatus 20.
- When the operation mode is the loop playback mode, the information processing device 20 searches a database storing a plurality of pieces of musical sound data, each having a different rhythm pattern, for musical sound data having a rhythm pattern that is the same as or most similar to the input rhythm pattern received from the rhythm input device 10, converts the retrieved musical sound data into sound, and outputs it.
- the information processing apparatus 20 repeatedly reproduces sound based on the musical sound data of the search result.
- When the operation mode is the performance playback mode, the information processing apparatus 20 not only outputs the musical sound data of the search result as sound but can also output sounds based on performance operations that use the constituent sounds of the retrieved musical sound data.
- When the operation mode is the performance loop playback mode, the information processing apparatus 20 can both repeatedly output sound based on the musical sound data of the search result and repeatedly output sound based on a performance performed by the user using the constituent sounds of the retrieved phrase.
- the search function can be switched ON / OFF by the user through the operation unit 25.
- FIG. 2 is a block diagram showing the hardware configuration of the information processing apparatus 20.
- the information processing apparatus 20 includes a control unit 21, a storage unit 22, an input / output interface unit 23, a display unit 24, an operation unit 25, and an audio output unit 26, and each unit is connected via a bus.
- the control unit 21 includes a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like.
- the CPU reads out the application program stored in the ROM or the storage unit 22, loads it into the RAM, and executes it to control each unit of the information processing apparatus 20 via the bus.
- the RAM functions as a work area when the CPU processes data.
- the storage unit 22 includes a rhythm DB (Database) 221.
- the rhythm DB 221 includes tone data having different rhythm patterns and information related to the tone data.
- the input / output interface unit 23 inputs data output from the rhythm input device 10 to the information processing device 20 and outputs various signals for controlling the rhythm input device 10 to the rhythm input device 10 in accordance with instructions from the control unit 21.
- the display unit 24 is a display, for example, and displays an interactive screen for the user.
- the operation unit 25 is, for example, a mouse or a keyboard, and passes a signal corresponding to a user operation to the control unit 21.
- the control unit 21 controls each unit according to the received signal.
- the audio output unit 26 includes a DAC (Digital Analog Converter), an amplifier, and a speaker.
- The audio output unit 26 converts the digital musical tone data retrieved from the rhythm DB 221 by the control unit 21 into an analog signal with the DAC, amplifies it with the amplifier, and outputs the corresponding sound from the speaker using the amplified analog audio signal. That is, the audio output unit 26 is an example of an audio output unit that outputs a sound corresponding to the musical tone data.
- FIG. 3 is a diagram showing the contents of the rhythm DB 221.
- the rhythm DB 221 includes a musical instrument type table, a rhythm category table, and a phrase table.
- FIG. 3A shows an example of the instrument type table.
- “Musical instrument type ID” is an identifier for uniquely identifying the type of musical instrument, and is represented by, for example, a three-digit number.
- “Musical instrument type” is a name indicating the type of musical instrument.
- a musical instrument type ID is associated with each different musical instrument type such as “drum kit”, “conga”, and “djembe” and is described in the musical instrument type table.
- the musical instrument type “drum kit” is described in the musical instrument type table in association with the musical instrument type ID “001”.
- the other instrument types are similarly associated with the instrument type ID and described in the instrument type table.
- The “musical instrument type” is not limited to the examples shown in FIG. 3A.
- FIG. 3B shows an example of a rhythm category table.
- the “rhythm category ID” is an identifier for uniquely identifying a rhythm pattern category (referred to as a rhythm category), and is represented by, for example, a two-digit number.
- A rhythm pattern represents a sequence of times at which each sound is generated during a period of predetermined length; in this embodiment, the period is one measure.
- “Rhythm category” is a name that represents a rhythm category.
- A rhythm category ID is associated with each rhythm category, such as “eighth notes”, “sixteenth notes”, and “eighth-note triplets”, and is described in the rhythm category table.
- For example, the rhythm category “eighth notes” is described in the rhythm category table in association with the rhythm category ID “01”.
- However, the rhythm category is not limited to the contents shown in FIG. 3B.
- For example, categorization by time signature or genre, or assignment of a different category ID to each rhythm pattern, may also be used.
- FIG. 3 (c) shows an example of a phrase table.
- In the phrase table, a plurality of phrase records are described, each of which links together pieces of information associated with the musical tone data of a phrase constituting one measure.
- a phrase is one of the units representing a group of several notes.
- These phrase records are grouped by instrument type ID, and the user can select the instrument type using the operation unit 25 before inputting a rhythm with the rhythm input device 10.
- the type of musical instrument selected by the user is stored in the RAM.
- In FIG. 3C, as an example of the phrase table, a plurality of phrase records whose instrument type is “drum kit” (instrument type ID “001”) are shown.
- One phrase record includes a plurality of items such as instrument type ID, phrase ID, rhythm category ID, phrase musical sound data, rhythm pattern data, and attack strength pattern data.
- the musical instrument type ID is an identifier for uniquely identifying the type of musical instrument.
- the phrase ID is an identifier for uniquely identifying each phrase record, and is composed of, for example, a 4-digit number.
- The rhythm category ID is an identifier for identifying which of the rhythm categories described above each phrase record belongs to. For example, the phrase record whose rhythm category ID is “01” in FIG. 3C belongs to the rhythm category “eighth notes,” as shown in the rhythm category table of FIG. 3B.
- Phrase musical sound data is a data file of the sounds themselves (called constituent sounds) included in a phrase constituting one measure, and is recorded in an audio file format such as WAVE (RIFF Waveform Audio Format) or mp3 (MPEG Audio Layer-3).
- Rhythm pattern data is a data file in which the pronunciation start time of each constituent sound in a phrase constituting one measure is recorded.
- a text file describes the pronunciation start time of each constituent sound.
- the sound generation start time of each constituent sound is normalized in advance with the length of one measure being 1. That is, the sound generation start time of each constituent sound described in the rhythm pattern data takes a value between 0 and 1.
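- As a minimal sketch (not taken from the patent; the function name and arguments are assumptions), normalizing sounding start times so that one measure spans 0 to 1 could look like the following.

```python
def normalize_onsets(onset_times_sec, measure_start_sec, measure_length_sec):
    """Map absolute sounding start times into the 0..1 range of one measure."""
    return [(t - measure_start_sec) / measure_length_sec
            for t in onset_times_sec
            if measure_start_sec <= t < measure_start_sec + measure_length_sec]
```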
- As described above, the rhythm DB 221 holds a plurality of rhythm patterns, each representing the sequence of sounding times at which the constituent sounds are generated in a period of a predetermined length (here, one measure), and is an example of a storage unit that stores the musical tone data of the phrases composed of those rhythm patterns (FIG. 3). Further, when the plurality of rhythm patterns are classified into rhythm pattern groups categorized by the sequence of sounding times, such as eighth notes or sixteenth notes, the rhythm DB 221 is also an example of a storage unit that stores the rhythm classification identifier (the rhythm category ID in the present embodiment) assigned to each rhythm pattern group in association with each rhythm pattern included in that group.
- Rhythm pattern data may be created in advance as follows.
- A person who creates rhythm pattern data (hereinafter referred to as an operator) extracts the constituent-sound start times from a commercially available audio loop material in which the constituent-sound start times are embedded in the waveform.
- The operator then excludes, from the extracted constituent-sound start times, unnecessary ones that can be ignored, such as ghost notes. The data remaining after the unnecessary start times are excluded may be used as rhythm pattern data.
- the attack intensity pattern data is a data file in which the attack intensity of each constituent sound in a phrase constituting one measure is recorded.
- the attack intensity of each constituent sound is described as a numerical value in a text file.
- This attack strength corresponds to velocity data indicating the strength of the performance operation in the input rhythm pattern. That is, the attack strength means the strength of each constituent sound in the phrase musical sound data.
- Methods of calculating the attack intensity include using the maximum value of the waveform of the constituent sound, and integrating the energy of the waveform over a section in which the waveform volume is large.
- In FIG. 3C, phrase records whose instrument type is “drum kit” are shown as an example, but in reality the phrase table describes phrase records corresponding to multiple types of instruments (conga, maracas, djembe, TR-808, and so on).
- FIG. 4 is a block diagram showing a functional configuration of the information processing apparatus 20.
- The control unit 21 reads out a program stored in the ROM or the storage unit 22 into the RAM and executes it, thereby realizing the functions of a bar line clock output unit 211, an input rhythm pattern storage unit 212, a rhythm pattern search unit 213, and a performance processing unit 214. In the following, processing is described with these units as the subject, but the actual subject of the processing is the control unit 21. In the following description, “onset” means that the input state of the rhythm input device 10 switches from off to on.
- For example, if the input means of the rhythm input device 10 is an electronic pad, “onset” means that the pad is hit; if the input means is a keyboard, that a key is pressed; and if the input means is a button, that the button is pressed.
- The “onset time” is each time at which the input state of the rhythm input device 10 switches from off to on; in other words, it is the time at which trigger data is generated in the rhythm input device 10.
- Every several tens of milliseconds, the bar line clock output unit 211 outputs a clock signal (hereinafter referred to as the bar line clock) indicating where the current time is located within one bar on the advancing time axis, with the length of one bar normalized to 1. That is, the bar line clock takes a value between 0 and 1.
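- The following sketch illustrates the behaviour described above; it is not the patent's implementation, and the tick interval and beats-per-measure values are assumed for the example.

```python
import time

def bar_line_clock(bpm, beats_per_measure=4, tick_ms=25):
    """Yield the current position within the bar, normalized to 0..1,
    every tick_ms milliseconds; the value wraps back to 0 at each bar line."""
    measure_sec = beats_per_measure * 60.0 / bpm
    start = time.time()
    while True:
        elapsed = time.time() - start
        yield (elapsed % measure_sec) / measure_sec
        time.sleep(tick_ms / 1000.0)
```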
- the input rhythm pattern storage unit 212 stores, in the RAM, the time when the trigger data is generated, that is, the onset time, which is input from the rhythm input device 10 with reference to the bar line clock, for each bar. In this way, the sequence of onset times stored in the RAM in units of one measure becomes the input rhythm pattern.
- the bar line clock output unit 211 is an example of a notification unit that advances a period of a predetermined length (here, one bar) and notifies the user of the passage of time in the period.
- The input rhythm pattern storage unit 212 is an example of an acquisition means that acquires a rhythm pattern representing the sequence of times at which sounds are to be generated, the pattern being input by the user while a period of a predetermined length (here, one measure) is being advanced by the bar line clock output unit 211 serving as the notification means.
- In other words, the information processing device 20 is an example of a musical sound data processing apparatus that, while a period of a predetermined length (here, one measure) is being advanced by the bar line clock output unit 211 serving as the notification means, acquires from the rhythm input device 10 serving as the input device the sequence of times at which the user inputs each performance operation, as a rhythm pattern (the input rhythm pattern) representing the sequence of times at which each sound is to be generated.
- the above-described period of a predetermined length that is advanced by the bar line clock output unit 211 may or may not be repeated. Further, a bar line clock that is input to the information processing apparatus 20 from the outside may be used.
- the performance processing unit 214 may reproduce the accompaniment sound source in which the bar line position is added in advance in accordance with the bar line clock. In this case, the user inputs a rhythm pattern in accordance with the bar line felt by the user from the accompaniment sound source.
- The rhythm pattern search unit 213 searches the phrase table in the rhythm DB 221 using the input rhythm pattern stored in the RAM, and stores in the RAM, as the search result, the phrase record whose rhythm pattern data is identical or most similar to the input rhythm pattern. That is, the rhythm pattern search unit 213 is an example of a search means that searches the musical sound data stored in the storage means for musical sound data associated with a rhythm pattern that satisfies a condition indicating a high degree of similarity with the rhythm pattern acquired by the input rhythm pattern storage unit 212 serving as the acquisition means.
- The performance processing unit 214 sets the phrase musical sound data in the phrase record of the search result stored in the RAM as the reproduction target, outputs the sound based on that data from the audio output unit 26 in synchronization with the bar line clock, and, in the performance playback mode and the performance loop playback mode, controls performance operations by the user that use the constituent sounds in the phrase record of the search result.
- FIG. 5 is a flowchart of search processing performed by the rhythm pattern search unit 213.
- the rhythm pattern search unit 213 searches the phrase table using the instrument type ID stored in the RAM (step Sb1).
- the instrument type ID is stored in the RAM when the user designates it in advance using the operation unit 25.
- Next, the rhythm pattern search unit 213 sets the phrase records retrieved in step Sb1 as the processing targets.
- the input rhythm pattern includes the onset time normalized with the length of one measure being 1.
- the rhythm pattern search unit 213 calculates the distribution of onset time intervals in the input rhythm pattern stored in the RAM (step Sb2).
- the onset time interval is an interval between adjacent onset times on the time axis, and is represented by a numerical value between 0 and 1.
- The distribution of onset time intervals is represented by the number of onset time intervals corresponding to each tick obtained by equally dividing one measure into 48.
- One measure is divided into 48 because, assuming a four-beat rhythm and dividing each beat into 12 equal parts, this resolution is suitable for distinguishing a plurality of different rhythm categories such as eighth notes, eighth-note triplets, and sixteenth notes.
- Here, the resolution is the shortest note length that can be expressed by a sequencer or sequence software, such as the application program in the present embodiment. Since the resolution in this embodiment is 48 per measure, one quarter note can be divided into twelve.
- the phrases “onset time” and “onset time interval” are used in the description corresponding to the input rhythm pattern in the description of the phrase record. That is, in the phrase record, the sound generation start time of each constituent sound described in the rhythm pattern data corresponds to the onset time. In the phrase record, the interval between adjacent onset times on the time axis corresponds to the onset time interval.
- Here, consider a case in which an input rhythm pattern with the onset times shown in (a) below has been recorded as an eighth-note phrase.
- (a) 0, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875
- The rhythm pattern search unit 213 calculates, from the input rhythm pattern (a), the onset time intervals shown in (b) below.
- (b) 0.25, 0.125, 0.125, 0.125, 0.125, 0.125
- Next, the rhythm pattern search unit 213 multiplies each onset time interval in (b) by 48, adds 0.5, and truncates the result at the decimal point (a process referred to as quantization), obtaining the numerical group shown in (c) below.
- (c) 12, 6, 6, 6, 6, 6
- Here, “quantize” means that the rhythm pattern search unit 213 corrects each onset time interval according to the resolution. The reason for quantizing is that the sounding start times described in the rhythm pattern data in the phrase table are expressed according to the resolution (here, 48), so the onset time intervals compared against them must use the same resolution.
- The rhythm pattern search unit 213 therefore performs the above quantization for each onset time interval in (b).
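- A minimal sketch of the interval calculation and the quantization step described above (illustrative only; the function names are not from the patent):

```python
RESOLUTION = 48  # ticks per measure used in this embodiment

def onset_intervals(onset_times):
    """Intervals between adjacent onset times (values normalized to 0..1)."""
    return [b - a for a, b in zip(onset_times, onset_times[1:])]

def quantize(intervals, resolution=RESOLUTION):
    """Multiply each interval by the resolution, add 0.5 and truncate."""
    return [int(x * resolution + 0.5) for x in intervals]

# Worked example (a) -> (b) -> (c) from the text:
pattern_a = [0, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]
print(quantize(onset_intervals(pattern_a)))  # [12, 6, 6, 6, 6, 6]
```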
- FIG. 6A is a distribution table of the onset time intervals in the input rhythm pattern.
- The horizontal axis represents the time intervals (ticks) obtained by dividing one measure into 48, and the vertical axis represents the ratio of the number of onset time intervals falling on each tick.
- The numerical group (c) derived from the input rhythm pattern is assigned to this distribution table, and the number ratios are normalized by the rhythm pattern search unit 213 so that their sum is 1.
- In FIG. 6A, it can be seen that the distribution peaks at the tick “6”, which is the most frequent value in the quantized numerical group (c).
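- A sketch of how such a distribution table could be built from the quantized intervals (the list representation is an assumption; only the normalization so that the sum is 1 is stated in the text):

```python
from collections import Counter

def interval_distribution(quantized_intervals, resolution=48):
    """Ratio of onset time intervals falling on each tick, normalized to sum 1."""
    counts = Counter(quantized_intervals)
    total = sum(counts.values())
    return [counts.get(tick, 0) / total for tick in range(resolution + 1)]

dist_input = interval_distribution([12, 6, 6, 6, 6, 6])
# dist_input[6] == 5/6 and dist_input[12] == 1/6: the peak at tick 6 of FIG. 6A
```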
- Next, the rhythm pattern search unit 213 calculates the distribution of onset time intervals for each rhythm category, using all the rhythm patterns described in the phrase table (step Sb3).
- FIG. 6B shows a distribution table to which the distribution of onset time intervals calculated for each rhythm category has been assigned. While the search function is ON, if the instrument type is not changed in the second and subsequent executions of step Sb1, the phrase records and rhythm categories to be processed remain the same, so the processing of step Sb3 is omitted; on the other hand, if the instrument type is changed in step Sb1 when the series of search processes is repeated, the processing of step Sb3 is performed.
- Next, the rhythm pattern search unit 213 calculates a distance representing the degree of similarity (hereinafter simply the similarity distance) between the onset time interval distribution table based on the input rhythm pattern (here, FIG. 6A) and the onset time interval distribution table based on the rhythm patterns of each rhythm category described in the phrase table (here, FIG. 6B) (step Sb4).
- FIG. 6C is a distribution table showing the difference between the onset time interval distribution based on the input rhythm pattern (FIG. 6A) and the onset time interval distribution based on the rhythm patterns of each rhythm category described in the phrase table (FIG. 6B).
- The similarity distance in step Sb4 is calculated as follows.
- For each tick shared by the two tables, the rhythm pattern search unit 213 obtains the absolute value of the difference between the number ratio in the distribution table of onset time intervals based on the input rhythm pattern and the number ratio in the distribution table of onset time intervals based on the rhythm patterns of each rhythm category.
- The rhythm pattern search unit 213 then calculates, for each rhythm category, the square root of the sum of all these absolute values.
- The value of this square root is the similarity distance: the smaller the value, the higher the similarity, and the larger the value, the lower the similarity.
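- A sketch of this similarity distance calculation, assuming both distributions are given as lists over the same 48-tick grid:

```python
import math

def similarity_distance(dist_a, dist_b):
    """Square root of the summed absolute differences between two onset time
    interval distributions; a smaller value means a higher similarity."""
    return math.sqrt(sum(abs(a - b) for a, b in zip(dist_a, dist_b)))
```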
- FIG. 6C shows that the difference in number ratios between FIGS. 6A and 6B is smallest for the eighth-note rhythm category; that is, according to the above calculation method, the similarity distance between the input rhythm pattern and the eighth-note rhythm category is the smallest among the eighth-note, sixteenth-note, and eighth-note-triplet categories shown in the distribution tables.
- The rhythm pattern search unit 213 therefore determines that the rhythm category having the smallest similarity distance among the rhythm categories described in the phrase table is the rhythm category to which the input rhythm pattern corresponds (step Sb5); here, it identifies the rhythm category of the input rhythm pattern as eighth notes. In this way, through steps Sb2 to Sb5, the rhythm pattern search unit 213 identifies the rhythm category to which the input rhythm pattern is most likely to correspond.
- In other words, the rhythm pattern search unit 213 is an example of a search means that obtains, for each rhythm classification identifier (the rhythm category in this embodiment), the absolute value of the difference between an input time interval histogram representing the frequency distribution of the sounding time intervals in the rhythm pattern acquired by the input rhythm pattern storage unit 212 serving as the acquisition means (FIG. 6A in this embodiment) and a rhythm classification histogram representing the frequency distribution of the sounding time intervals in the rhythm patterns stored in the storage means for that rhythm classification identifier (FIG. 6B in this embodiment), and that searches, among the rhythm patterns associated with the rhythm classification identifier for which this absolute value is smallest, for musical tone data associated with a rhythm pattern satisfying the condition indicating a high degree of similarity.
- Next, in order to identify, among the rhythm patterns described in the phrase table, the one that is identical or most similar to the input rhythm pattern, the rhythm pattern search unit 213 calculates the amount of deviation between the input rhythm pattern and every rhythm pattern described in the phrase table (step Sb6).
- The magnitude of this deviation represents how far apart each onset time in the input rhythm pattern is from each onset time in a rhythm pattern described in the phrase table; the smaller the deviation, the higher the degree of similarity between the input rhythm pattern and that rhythm pattern.
- In step Sb5, a rhythm category to which the input rhythm pattern most likely corresponds is identified, whereas in step Sb6 the phrase records belonging to all rhythm categories are included in the calculation.
- The reason is as follows.
- Among the rhythm pattern data included in the phrase records, there are patterns for which it is difficult to determine the rhythm category, such as patterns containing roughly the same number of eighth-note onset time intervals and sixteenth-note onset time intervals within one measure.
- In such a case, even if the rhythm category of the input rhythm pattern is not determined correctly in step Sb5, calculating over the phrase records of all rhythm categories in step Sb6 increases the possibility that the rhythm pattern intended by the user is detected correctly.
- FIG. 7 is a schematic diagram for explaining the calculation of the deviation between rhythm patterns.
- Let the input rhythm pattern be J and a rhythm pattern described in the phrase table be K.
- The deviation between J and K is calculated by the following procedure.
- (1) The rhythm pattern search unit 213 calculates, for each onset time in the input rhythm pattern J, the absolute value of the time difference from the nearest onset time in the rhythm pattern K ((1) in FIG. 7).
- (2) The rhythm pattern search unit 213 calculates the sum of the absolute values calculated in procedure (1).
- (3) The rhythm pattern search unit 213 calculates, for each onset time in the rhythm pattern K, the absolute value of the time difference from the nearest onset time in the input rhythm pattern J ((3) in FIG. 7).
- (4) The rhythm pattern search unit 213 calculates the sum of the absolute values calculated in procedure (3).
- (5) The rhythm pattern search unit 213 calculates the average of the sum obtained in procedure (2) and the sum obtained in procedure (4) as the deviation between the input rhythm pattern J and the rhythm pattern K.
- In this embodiment, because a sufficient number of rhythm patterns is not prepared, the rhythm pattern search unit 213 performs processing so that, as shown in (3) of FIG. 7, absolute time differences larger than the reference interval (here 0.125, since the category is eighth notes) are not used when calculating the sum; if a sufficient number of rhythm patterns can be prepared, this processing is unnecessary.
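- Procedures (1) to (5), including the reference-interval cut-off used in this embodiment, could be sketched as follows (illustrative only; the function names are not from the patent):

```python
def one_sided_sum(reference, other, max_diff=None):
    """Sum, over each onset in `reference`, of the absolute time difference to
    the nearest onset in `other`; differences above max_diff are ignored."""
    total = 0.0
    for t in reference:
        d = min(abs(t - u) for u in other)
        if max_diff is None or d <= max_diff:
            total += d
    return total

def rhythm_deviation(j, k, reference_interval=0.125):
    """Average of the two one-sided sums, i.e. procedures (1) to (5)."""
    sum_j_to_k = one_sided_sum(j, k)                               # (1), (2)
    sum_k_to_j = one_sided_sum(k, j, max_diff=reference_interval)  # (3), (4)
    return (sum_j_to_k + sum_k_to_j) / 2.0                         # (5)
```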
- The rhythm pattern search unit 213 performs the above procedures (1) to (5) for the rhythm patterns in all the phrase records included in the phrase table.
- In this way, the rhythm pattern search unit 213 is an example of a search means that calculates, for the sounding times represented by the rhythm pattern acquired by the input rhythm pattern storage unit 212 serving as the acquisition means and the sounding times represented by each rhythm pattern stored in the storage means, the sum of the differences between each sounding time and the nearest sounding time on the time axis, and that searches the storage means for the musical sound data associated with the rhythm pattern for which this total value is smallest, regarding that pattern as satisfying the condition indicating a high degree of similarity.
- Next, the rhythm pattern search unit 213 multiplies the similarity distance calculated for each rhythm category in step Sb4 by the deviation of each rhythm pattern calculated in step Sb6, thereby calculating, for all phrase records included in the phrase table, the distance from the input rhythm pattern (step Sb7).
- Step Sb7 can be expressed by the following equation, where the smaller the distance between J and K, the higher the degree of similarity between the rhythm pattern K and the input rhythm pattern J:
- (distance between rhythm patterns J and K) = (similarity distance between J and the rhythm category to which K belongs) × (deviation between J and K)
- In this distance calculation, the rhythm pattern search unit 213 determines whether the rhythm category determined in step Sb5 matches the rhythm category of the rhythm pattern K; if they do not match, a predetermined constant (for example, 0.5) is added to the distance given by the above equation.
- With this addition, the distance is calculated to be larger for phrase records whose rhythm category does not match, so search results are more easily output from the rhythm category determined in step Sb5 as the one to which the input rhythm pattern corresponds.
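- Step Sb7 and the category-mismatch penalty can be summarized in the following sketch (the value 0.5 is the example constant mentioned above):

```python
MISMATCH_PENALTY = 0.5  # predetermined constant added when the categories differ

def pattern_distance(similarity_dist, deviation, same_category):
    """Distance between the input pattern J and a stored pattern K: the
    category-level similarity distance multiplied by the pattern-level
    deviation, plus a penalty when K's category differs from the category
    determined in step Sb5."""
    distance = similarity_dist * deviation
    if not same_category:
        distance += MISMATCH_PENALTY
    return distance
```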
- The rhythm pattern search unit 213 then regards the rhythm pattern with the smallest distance from the input rhythm pattern as the rhythm pattern satisfying the condition indicating a high degree of similarity with the input rhythm pattern, and outputs the phrase record having that rhythm pattern data as the search result (step Sb8).
- As described above, by inputting an input rhythm pattern, the user can cause the performance processing unit 214 to output a sound based on the phrase record of the search result (hereinafter, the search result phrase) (loop playback mode and performance loop playback mode). Also as described above, the user can perform a performance operation on the rhythm input device 10 using the constituent sounds in the phrase record of the search result and have the performance processing unit 214 output the sound of the phrase resulting from that operation (performance playback mode and performance loop playback mode). The differences between the loop playback mode, the performance playback mode, and the performance loop playback mode are described below.
- FIG. 8 is a schematic diagram for explaining processing performed by the performance processing unit 214 in the loop playback mode.
- The loop playback mode is a mode in which the performance processing unit 214 repeatedly outputs, as the reproduction target, the sound based on the one-measure search result phrase at the BPM (Beats Per Minute) kept by the bar line clock output unit 211 and in accordance with the accompaniment.
- When the bar line clock exceeds the sounding start time of a constituent sound of the search result phrase within one measure, the performance processing unit 214 sets that constituent sound as a reproduction target.
- the value of the bar-line clock reaches 1, that is, when 1 bar has elapsed, the value again becomes 0, and thereafter, the value between 0 and 1 is repeated.
- the sound based on the search result phrase is repeatedly output as a reproduction target in the cycle of the bar line clock.
- In this manner, the performance processing unit 214 selects each constituent sound for reproduction as indicated by the arrows in FIG. 8.
- The loop playback mode is mainly designated when the user wants to confirm what volume, tone color, and rhythm pattern the search result phrase is composed of.
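- A sketch of how constituent sounds could be scheduled against the bar line clock in this mode (the callback-based interface is an assumption made for illustration):

```python
def loop_playback_step(prev_clock, clock, constituent_sounds, play):
    """Trigger each constituent sound whose normalized start time was crossed
    since the previous clock value; the clock wraps from 1 back to 0.
    constituent_sounds is a list of (start_time, waveform) pairs."""
    for start_time, waveform in constituent_sounds:
        if prev_clock <= clock:
            crossed = prev_clock <= start_time < clock
        else:  # wrap-around at the bar line
            crossed = start_time >= prev_clock or start_time < clock
        if crossed:
            play(waveform)
```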
- FIG. 9 is a schematic diagram for explaining processing performed by the performance processing unit 214 in the performance reproduction mode.
- The performance playback mode is a mode in which, when the user performs a performance operation using the rhythm input device 10, the performance processing unit 214 plays back the constituent sound of the search result phrase corresponding to the time at which the user performed that operation.
- Therefore, sound is played back only at the times within one measure at which the user performs performance operations; unlike the loop playback mode, the performance playback mode outputs no sound at times when the user does not operate the rhythm input device 10.
- the performance playback mode is a mode specified when the user wants to continuously perform himself / herself using the constituent sounds in the search result phrase.
- FIG. 9 shows that the user has performed a performance operation using the rhythm input device 10 at the time indicated by the arrows in the section indicated by the bidirectional arrows (“01” to “06”).
- four parameters are input to the performance processing unit 214: velocity data, trigger data, pronunciation start time of each constituent sound of the search result phrase, and waveform of each constituent sound.
- velocity data and trigger data are based on an input rhythm pattern using the rhythm input device 10 by the user.
- the pronunciation start time of each constituent sound of the search result phrase and the waveform of each constituent sound are included in the phrase record of the search result phrase.
- In the performance playback mode, each time trigger data is generated, the performance processing unit 214 outputs to the audio output unit 26 the waveform of the constituent sound whose sounding start time has the smallest time difference from the onset time of that trigger data, specifying a volume corresponding to the magnitude of the velocity data.
- Alternatively, the performance processing unit 214 may specify a volume corresponding to the magnitude of the velocity data relative to the magnitude of the attack intensity of each constituent sound in the search result phrase.
- The waveforms of constituent sounds corresponding to sections in which no trigger data is input (here, “02” and “03”) are not output to the audio output unit 26.
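- A sketch of the nearest-constituent-sound selection in this mode (scaling the velocity against a maximum of 127 is an assumption; the text only states that the volume corresponds to the velocity data):

```python
def performance_playback(onset_time, velocity, constituent_sounds, play):
    """Play the constituent sound of the search result phrase whose sounding
    start time is closest to the trigger's onset time, at a volume scaled by
    the input velocity. constituent_sounds: (start_time, waveform) pairs."""
    _, waveform = min(constituent_sounds,
                      key=lambda s: abs(s[0] - onset_time))
    play(waveform, volume=velocity / 127.0)
```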
- the performance loop playback mode is a mode that combines the loop playback mode and the performance playback mode.
- the performance processing unit 214 determines whether a performance operation using the rhythm input device 10 has been performed by the user for each measure. In the performance loop playback mode, until the user performs a performance operation using the rhythm input device 10, the performance processing unit 214 sets the sound based on the search result phrase as a playback target. That is, at this time, the operation is the same as the loop playback mode.
- When the user performs a performance operation using the rhythm input device 10 in a certain measure, the performance processing unit 214 operates in the same way as in the performance playback mode during that measure; that is, it sets as the playback target the constituent sounds of the search result phrase corresponding to the times at which the user performed the performance operations.
- In the performance loop playback mode, once the user has performed a performance operation at least once, if no performance operation is performed in a subsequent bar, the performance processing unit 214 sets as the playback target the constituent sounds of the search result phrase corresponding to the times the user input in the preceding bar.
- The performance loop playback mode is designated when the user wants to perform using the constituent sounds in the search result phrase and also wants the constituent sounds of the search result phrase to be loop-reproduced according to the input rhythm pattern.
- As described above, with the information processing apparatus 20 of the first embodiment, it is possible to search for musical tone data of a phrase composed of a rhythm pattern whose degree of similarity with the rhythm pattern intended by the user satisfies a predetermined condition.
- the user can perform using the constituent sounds of the phrase of the search result.
- The second embodiment is a music data creation system, as an example of a music data processing system, that creates automatic accompaniment data as an example of music data.
- the automatic accompaniment data in this embodiment is read by an electronic musical instrument, a sequencer, etc., and plays the same role as the so-called MIDI automatic accompaniment data.
- the music data creation system 100a in the second embodiment has the same configuration as that shown in FIG. However, since the configurations of the rhythm input device and the information processing device are different from those in the first embodiment, “a” is added to the reference numerals to indicate the respective components.
- the music data creation system 100a includes a rhythm input device 10a and an information processing device 20a, and each is connected by a communication line so that they can communicate with each other. This communication may be realized by radio.
- the rhythm input device 10a includes, for example, a keyboard and a pad as input means.
- When the user presses a key of the keyboard provided on the rhythm input device 10a, the rhythm input device 10a inputs, to the information processing apparatus 20a in units of one measure, trigger data indicating that a performance operation has been performed and velocity data indicating the strength of that operation (that is, the strength of the key press).
- the trigger data is generated every time the user presses the keyboard, and is represented by key-on information indicating that the key is pressed.
- Each trigger data is associated with velocity data.
- the set of trigger data and velocity data generated in one measure represents a rhythm pattern (referred to as an input rhythm pattern) input by the user using the rhythm input device 10a in the one measure.
- the user inputs this input rhythm pattern for the part corresponding to the keyboard range.
- the rhythm input device 10a is an example of an input unit through which a performance operation by a user is input.
- the information processing apparatus 20a is, for example, a PC.
- the information processing device 20a includes a database including a plurality of musical tone data used for automatic accompaniment data and each part constituting the automatic accompaniment data, and an application using the database.
- This application has a selection function for selecting a part based on the input rhythm pattern input when searching for musical sound data, and a playback function for playing back automatic accompaniment data being created and automatic accompaniment data after creation.
- The automatic accompaniment data consists of a combination of multiple parts, each having its own rhythm pattern; the parts are composed of, for example, a bass, a chord, a single-note phrase, a bass drum, a snare drum, a hi-hat, and a cymbal.
- these data are composed of an automatic accompaniment data table and various files such as txt and WAVE defined in the table.
- The musical tone data, which is the data of each part, is a performance sound of a predetermined length (for example, 2, 4, or 8 bars) recorded in a file format such as WAVE or mp3.
- The database also records musical tone data that is not currently used in any automatic accompaniment data, which is used when replacing parts of automatic accompaniment data.
- When an input rhythm pattern is input from the rhythm input device 10a, the information processing device 20a searches the database for musical tone data having a rhythm that is the same as or similar to the input rhythm pattern for the part selected by the selection function, and displays a list of the names of the automatic accompaniment data containing the retrieved musical tone data.
- the information processing device 20a outputs a sound based on the automatic accompaniment data selected by the user from the list.
- the information processing apparatus 20a repeatedly reproduces the sound based on the automatic accompaniment data of the search result.
- The information processing apparatus 20a then plays sound based on the selected automatic accompaniment data.
- At this time, playback is performed with the tempo adjusted faster or slower as necessary so that a predetermined timing (for example, the beat timing) is synchronized between the parts.
- FIG. 10 is a schematic diagram showing the rhythm input device 10a.
- the rhythm input device 10a includes a keyboard 11 and an input pad 12 as input means.
- the information processing apparatus 20a searches for musical sound data based on the input rhythm pattern.
- One of the different parts is associated with a predetermined range of the keyboard 11 or the type of the input pad 12 in the rhythm input device 10a.
- the entire range of the keyboard 11 is divided into a low-range keyboard, a mid-range keyboard and a high-range keyboard at two split points.
- the low range keyboard is used as the bass input range keyboard 11a, and is associated with the bass part.
- the mid-range keyboard is used as the chord input range keyboard 11b and is associated with chord parts consisting of chords.
- the high range keyboard is used as the phrase input range keyboard 11c, and is associated with a phrase part composed of a single note.
- a bass drum part is associated with the bass drum input pad 12a.
- a snare drum part is associated with the snare drum input pad 12b.
- a hi-hat part is associated with the hi-hat input pad 12c.
- a cymbal part is associated with the cymbal input pad 12d.
- in the rhythm input device 10a, the user designates the key range of the keyboard 11 to be depressed or the type of the input pad 12 to be struck and performs a performance operation, whereby musical tone data can be searched for the part associated with the designated input means. In this way, each key range of the keyboard 11 and each of the input pads 12 corresponds to a performance operator.
- for example, when the user inputs a rhythm pattern by pressing keys in the bass input range keyboard 11a, the information processing apparatus 20a specifies bass musical sound data having a rhythm pattern that is the same as the input rhythm pattern or falls within a predetermined similarity range of it, and displays the specified bass musical sound data as the search result.
- the hi-hat input pad 12c and the cymbal input pad 12d may be called performance operators.
- when the user operates a certain performance operator in the rhythm input device 10a, the rhythm input device 10a inputs, for example, an operation signal corresponding to the operation to the information processing device 20a.
- the operation signal is, for example, information in MIDI (Musical Instrument Digital Interface) format (hereinafter referred to as MIDI information).
- MIDI information includes a note number if the performance operator is a keyboard, and channel information if the performance operator is a pad.
- the information processing apparatus 20a identifies the target part from the received MIDI information.
- the rhythm input device 10a includes a BPM input operator 13.
- BPM is the number of beats in one minute, and represents the tempo of the musical sound notified to the user in the rhythm input device 10a.
- the BPM input operator 13 includes a display surface such as a liquid crystal display and a wheel, for example. When the user rotates the wheel, the BPM having a value corresponding to the rotation stop position is displayed on the display surface.
- the BPM input using the BPM input operator 13 is referred to as input BPM.
- the rhythm input device 10a inputs MIDI information including information for identifying the input BPM together with the input rhythm pattern to the information processing device 20a.
- based on the input BPM included in the MIDI information, the information processing apparatus 20a outputs a sound from the audio output unit 26 or blinks light on the display unit 24 to notify the user of the tempo and the performance progress timing (a so-called metronome function). The user operates the performance operators based on the tempo and performance progress timing perceived from these sounds or lights.
- FIG. 11 is a block diagram showing the hardware configuration of the information processing apparatus 20a.
- the information processing apparatus 20a includes a control unit 21, a storage unit 22a, an input / output interface unit 23, a display unit 24, an operation unit 25, and an audio output unit 26, and these units are connected via a bus.
- the control unit 21, the input / output interface unit 23, the display unit 24, the operation unit 25, and the audio output unit 26 are the same as those in the first embodiment.
- the storage unit 22a stores an automatic accompaniment DB 222.
- the automatic accompaniment DB 222 contains automatic accompaniment data, musical sound data, and various information related to them.
- the automatic accompaniment DB 222 includes a part table, instrument type table, rhythm category table, rhythm pattern table, and automatic accompaniment data table.
- FIG. 12A shows an example of a part table.
- the “part ID” is an identifier for uniquely identifying the parts constituting the automatic accompaniment data, and is represented by, for example, a two-digit number.
- "Part name" is a name indicating the type of part; the different parts described above, such as "bass", "chord", "phrase", "bass drum", "snare drum", "hi-hat", and "cymbal", are each described in the part table in association with a part ID.
- the “part name” is not limited to the content shown in FIG.
- the “note number” is MIDI information indicating which key range each part is assigned to on the keyboard.
- note number “60” is assigned to “center C” on the keyboard.
- the "bass" part is assigned note numbers equal to or less than the first threshold "45", the "phrase" part is assigned note numbers equal to or greater than the second threshold, and the "chord" part is assigned the note numbers in between.
- the first threshold value and the second threshold value described above are examples, and are not limited to these values.
- the first threshold value and the second threshold value may be changeable by the user.
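- As a rough illustration of this threshold-based assignment, the following minimal sketch maps a MIDI note number or pad channel to a part name. The first threshold "45" is taken from the part table above; the second threshold value (75 here), the constant names, and the function name are hypothetical, since the actual second threshold is not stated and both thresholds may be changed by the user.

```python
# Hedged sketch of the note-number / channel to part mapping described above.
FIRST_THRESHOLD = 45    # from the example in the part table
SECOND_THRESHOLD = 75   # assumption: the actual second split point is not stated

def part_for_note_number(note_number: int) -> str:
    """Return the part name associated with a keyboard note number."""
    if note_number <= FIRST_THRESHOLD:
        return "bass"       # low-range keyboard 11a
    elif note_number >= SECOND_THRESHOLD:
        return "phrase"     # high-range keyboard 11c
    else:
        return "chord"      # mid-range keyboard 11b

# Channel information from the input pads maps directly to the drum parts.
PAD_CHANNEL_TO_PART = {"12a": "bass drum", "12b": "snare drum",
                       "12c": "hi-hat", "12d": "cymbal"}

print(part_for_note_number(40))   # bass
print(part_for_note_number(60))   # chord
print(PAD_CHANNEL_TO_PART["12c"]) # hi-hat
```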
- "Channel information" is MIDI information indicating to which input pad each part is assigned. For example, as shown in FIG. 12A, channel information "12a" is assigned to the "bass drum" part, channel information "12b" is assigned to the "snare drum" part, channel information "12c" is assigned to the "hi-hat" part, and channel information "12d" is assigned to the "cymbal" part.
- FIG. 12 (b) shows an example of the instrument type table.
- “Musical instrument type ID” is an identifier for uniquely identifying the type of musical instrument, and is represented by, for example, a three-digit number.
- “Musical instrument type” is a name representing the type of musical instrument.
- in the musical instrument type table, each type of musical instrument, such as "wood bass", "electric bass", and "slap bass", is described in association with a musical instrument type ID.
- the musical instrument type “wood bass” is described in the musical instrument type table in association with the musical instrument type ID “001”.
- the other instrument types are similarly described in the instrument type table in association with each instrument type ID.
- the “musical instrument type” is not limited to the content shown in FIG.
- FIG. 12 (c) shows an example of a rhythm category table.
- the “rhythm category ID” is an identifier for uniquely identifying a rhythm pattern category (referred to as a rhythm category), and is represented by, for example, a two-digit number.
- the rhythm pattern represents a sequence of times at which each sound is generated in a predetermined length of time.
- the rhythm pattern represents a sequence of times at which each sound is generated in one measure.
- “Rhythm category” is a name that represents a rhythm category.
- each rhythm category, such as "eighth note", "sixteenth note", and "eighth-note triplet", is described in the rhythm category table in association with a rhythm category ID.
- for example, the rhythm category "eighth note" is described in the rhythm category table in association with the rhythm category ID "01".
- the "rhythm category" is not limited to the content shown in FIG. For example, broad categories such as time signature or genre may be used, more detailed categories may be used in which a different category ID is assigned to each rhythm pattern, or these may be combined to give multiple levels of categories.
- FIG. 13A shows an example of a rhythm pattern table.
- in the rhythm pattern table, a plurality of rhythm pattern records are described, grouped by the part ID that uniquely identifies a part.
- FIG. 13A shows, as an example of the rhythm pattern table, a plurality of rhythm pattern records whose part is "bass" (part ID "01").
- one rhythm pattern record consists of a plurality of items such as "automatic accompaniment ID", "part ID", "instrument type ID", "rhythm category ID", "rhythm pattern ID", "rhythm pattern data", "attack intensity pattern data", "musical sound data", "key", "genre", "BPM", and "chord". Such a rhythm pattern table is described for each part.
- “Automatic accompaniment ID” is an identifier for uniquely identifying automatic accompaniment data, and the same ID is assigned to each combination of rhythm pattern records for each part.
- automatic accompaniment data having the same automatic accompaniment ID are combined in advance so as to have the same contents for items such as genre, key, and BPM, so that when the automatic accompaniment data is reproduced as an ensemble of a plurality of parts it sounds less unnatural.
- the "musical instrument type ID" is an identifier for uniquely identifying the type of musical instrument. Rhythm pattern records having the same part ID are grouped by instrument type ID, and the user can select the instrument type using the operation unit 25 before inputting a rhythm using the rhythm input device 10a. The instrument type selected by the user is stored in the RAM.
- the "rhythm category ID" is an identifier for identifying to which of the rhythm categories described above each rhythm pattern record belongs. For example, the rhythm pattern record whose "rhythm category ID" is "01" in FIG. 13A belongs to the "eighth note" rhythm category, as shown in the rhythm category table of FIG. 12(c).
- the “rhythm pattern ID” is an identifier for uniquely identifying each rhythm pattern record, and is composed of, for example, a 9-digit number. This 9-digit number is a combination of “Part ID” of 2 digits, “Musical Instrument Type ID” of 3 digits, “Rhythm Category ID” of 2 digits, and branch number of 2 digits.
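- Since the 9-digit rhythm pattern ID is simply a concatenation of four fixed-width fields, it can be assembled and decomposed as in the following minimal sketch; the function names are illustrative only and not part of the described system.

```python
def build_rhythm_pattern_id(part_id: str, instrument_type_id: str,
                            rhythm_category_id: str, branch_no: str) -> str:
    """Concatenate the four fixed-width fields into a 9-digit rhythm pattern ID."""
    assert len(part_id) == 2 and len(instrument_type_id) == 3
    assert len(rhythm_category_id) == 2 and len(branch_no) == 2
    return part_id + instrument_type_id + rhythm_category_id + branch_no

def split_rhythm_pattern_id(pattern_id: str):
    """Recover (part ID, instrument type ID, rhythm category ID, branch number)."""
    return pattern_id[0:2], pattern_id[2:5], pattern_id[5:7], pattern_id[7:9]

# Part "01" (bass), instrument type "001", rhythm category "01", branch "01"
# gives "010010101", matching the bass rhythm pattern record cited later.
print(build_rhythm_pattern_id("01", "001", "01", "01"))  # 010010101
```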
- "Rhythm pattern data" is a data file in which the sound generation start time of each constituent sound in a phrase constituting one measure is recorded; for example, the sound generation start times of the constituent sounds are described in a text file. These sound generation start times correspond to the trigger data indicating that a performance operation has been performed in the input rhythm pattern.
- the sound generation start time of each constituent sound is normalized in advance with the length of one measure being 1. That is, the sound generation start time of each constituent sound described in the “rhythm pattern data” takes a value between 0 and 1.
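- A minimal sketch of this normalization, assuming the onset times are first measured in seconds from the start of the measure (the function name is illustrative only):

```python
def normalize_onsets(onset_times_sec, measure_length_sec):
    """Normalize sound generation start times so that one measure spans 0 to 1."""
    return [t / measure_length_sec for t in onset_times_sec]

# Eighth-note onsets in a 2-second measure (4/4 at BPM 120):
print(normalize_onsets([0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75], 2.0))
# -> [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]
```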
- the creation of rhythm pattern data is not limited to the above-described method in which an operator removes ghost notes from commercially available audio loop material; the sound generation start times may also be extracted automatically by a computer.
- for example, the rhythm pattern data may be created by a computer in the following manner. The CPU of the computer extracts the constituent-sound start times for each channel from MIDI-format data for one measure and excludes ghost notes that are difficult to regard as rhythm input (for example, notes with extremely small velocity data). In this way, rhythm pattern data is created automatically.
- the CPU of the computer extracts rhythm pattern data as follows.
- the instrument sound corresponding to each note number is often fixed. For example, assume that a snare drum tone is assigned to note number 40 in the drum part. Based on this, the CPU of the computer extracts the constituent-sound start times of the note number to which the snare drum sound is assigned, in the channel where the drum part of the accompaniment sound source is recorded, and thereby extracts the rhythm pattern data of the snare drum.
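- A hedged sketch of this automatic extraction. MIDI events are assumed here to be simple (start_time, channel, note_number, velocity) tuples for one measure, already normalized to the range 0 to 1; the ghost-note velocity cutoff and the channel number used in the usage comment are hypothetical values.

```python
GHOST_NOTE_VELOCITY = 10   # assumption: cutoff below which a note counts as a ghost note
SNARE_NOTE_NUMBER = 40     # example from the text: snare drum assigned to note number 40

def extract_rhythm_pattern(events, channel, note_number=None):
    """Collect onset times on one channel, skipping ghost notes with tiny velocity."""
    onsets = []
    for start_time, ch, note, velocity in events:
        if ch != channel:
            continue
        if note_number is not None and note != note_number:
            continue
        if velocity <= GHOST_NOTE_VELOCITY:   # exclude ghost notes
            continue
        onsets.append(start_time)
    return sorted(onsets)

# Snare-drum rhythm pattern from the drum channel (channel 9 is only an assumption):
# snare_pattern = extract_rhythm_pattern(midi_events, channel=9,
#                                        note_number=SNARE_NOTE_NUMBER)
```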
- “Attack intensity pattern data” is a data file in which the attack intensity of each component sound in a phrase constituting one measure is recorded.
- the attack intensity of each component sound is described as a numerical value in a text file.
- This attack strength corresponds to velocity data indicating the strength of the performance operation in the input rhythm pattern. That is, the attack strength means the strength of each constituent sound.
- the attack strength may be described in a text file as velocity data itself of MIDI information.
- "Musical sound data" is the name of the data file of the sound itself based on the rhythm pattern record, and indicates a musical sound data file in an audio file format such as WAVE or mp3.
- the “key” represents a reference pitch when the tone data is converted. Since the value of this “key” indicates the pitch name within a specific octave, it substantially indicates the pitch of the musical tone data.
- “Genre” represents the genre of music to which the rhythm pattern record belongs.
- “BPM” is the number of beats per minute and represents the tempo of the sound based on the musical sound data included in the rhythm pattern record.
- "Chord" indicates the type of chord of the musical tone indicated by the musical tone data. "Chord" is set in a rhythm pattern record whose part is a chord part.
- "Maj7" is shown as an example of "chord" in the rhythm pattern record whose "part ID" is "02".
- a rhythm pattern record whose part is a chord has a plurality of types of “chords” for one rhythm pattern ID, and has musical tone data corresponding to each “chord”.
- for example, the rhythm pattern record whose rhythm pattern ID is "020040101" has musical tone data, not shown, for each of a plurality of types of chords such as "Maj", "7", "min", "dim", and "Sus4".
- rhythm pattern records having the same rhythm pattern ID have the same contents except for “musical sound data” and “chord”.
- each rhythm pattern record may instead have musical sound data composed of only the root note of each chord (the same pitch as the "key") and musical sound data composed of the constituent notes of each chord excluding the root note.
- in that case, the control unit 21 simultaneously reproduces the musical sound indicated by the musical sound data composed of the root note and the musical sound indicated by the musical sound data of the chord designated by the user, composed of the constituent notes excluding the root note.
- in FIG. 13A, rhythm pattern records whose part is "bass" are shown as an example, but in practice the rhythm pattern table also describes rhythm pattern records corresponding to a plurality of other types of parts (here, chord, phrase, bass drum, snare drum, hi-hat, and cymbal).
- FIG. 13B is an automatic accompaniment data table.
- This table defines which musical tone data is used under what conditions for each part during automatic accompaniment.
- the configuration of the automatic accompaniment data table is the same as that of the rhythm pattern table.
- the automatic accompaniment data described in the first row of the table is a combination of related specific parts, and defines information relating to automatic accompaniment at the time of ensemble.
- in the first row, the "part ID" is "99", the "instrument type ID" is "999", and the "rhythm pattern ID" is "999990101"; each of these values indicates that the data is automatic accompaniment data of an ensemble.
- the information related to the automatic accompaniment at the time of the ensemble includes one piece of musical tone data synthesized by combining the musical tone data of each part.
- when the musical sound data "BeBop01.wav" is reproduced, the musical sound of all the parts combined is reproduced. Note that a file capable of playing back a plurality of parts as one piece of musical sound data is not necessarily required for the automatic accompaniment data.
- the “rhythm pattern data” and “attack intensity pattern data” in the information relating to the automatic accompaniment at the time of the ensemble describe the rhythm pattern and the attack intensity based on the musical sound of the automatic accompaniment at the time of the ensemble (ie, BeBop01.wav).
- the automatic accompaniment data in the second and subsequent lines represented by the part ID “01” represents the content selected by the user for each part.
- a specific musical instrument is designated for each part ID “01” to “07” by the user, and “BeBop” style automatic accompaniment data is selected.
- although the "key" of a part corresponding to a rhythm instrument is not specified, a reference pitch may be specified when converting the pitch, and the pitch of the musical sound data may be converted in accordance with the difference between the specified pitch and the reference pitch.
- FIG. 14 is a block diagram illustrating the functional configuration of the information processing apparatus 20a and its surroundings.
- the control unit 21 reads each program constituting the application stored in the ROM or the storage unit 22a into the RAM and executes it, thereby realizing the functions of the tempo acquisition unit 211a, the progression unit 212a, the notification unit 213a, the part selection unit 214a, the pattern acquisition unit 215a, the search unit 216a, the specifying unit 217a, the output unit 218a, the chord receiving unit 219a, and the pitch receiving unit 220a. In the following, processing is described with these units as the subject, but the actual subject of the processing is the control unit 21.
- onset means that the input state in the rhythm input device 10a is switched from off to on.
- onset means that the keyboard is depressed if the input means of the rhythm input device 10a is a keyboard, and the pad is hit if the input means of the rhythm input device 10a is a pad. If the input means of the rhythm input device 10a is a button, the button is pressed.
- offset means that the pressed key is released if the input means of the rhythm input device 10a is a keyboard, and that the struck pad or pressed button is released if the input means of the rhythm input device 10a is a pad or a button.
- the “onset time” represents each time when the input state in the rhythm input device 10a is switched from off to on. In other words, the “onset time” represents the time when the trigger data is generated in the rhythm input device 10a.
- the “offset time” represents each time when the input state in the rhythm input device 10a is switched from on to off. In other words, the “offset time” represents the time when the trigger data generated in the rhythm input device 10a disappears.
- “onset information” is information input from the rhythm input device 10a to the information processing device 20a at the onset time.
- “Onset information” includes the note number of the keyboard, channel information, and the like in addition to the trigger data described above.
- the tempo acquisition unit 211a acquires the BPM designated by the user, that is, the designated tempo.
- the BPM designated by the user is a BPM designated using at least one of the BPM input operator 13 and a BPM designation slider 201 described later.
- the BPM input operator 13 and the BPM designation slider 201 are interlocked; when a BPM is designated using one of them, the designated BPM is displayed in the display column of the other.
- the progression unit 212a receives a tempo notification start instruction from the user via a switch (not shown), and advances the current position (performance progression timing) in the measure from the time of reception, based on the designated tempo.
- the notification unit 213a notifies the current position in the measure.
- when the length of one measure is normalized as 1, the notification unit 213a generates a clock signal (hereinafter referred to as a bar line clock) every several tens of milliseconds on the ongoing time axis and outputs it to the pattern acquisition unit 215a. That is, the bar line clock indicates where the current time is located within one bar, and takes a value between 0 and 1.
- the notification unit 213a generates a bar line clock based on a designated tempo designated by the user.
- the part selection unit 214a selects a specific part from a plurality of performance parts based on the user's designation. Specifically, the part selection unit 214a specifies whether the information for identifying the part included in the MIDI information input from the rhythm input device 10a is a note number or channel information. Based on the specified contents and the part table included in the automatic accompaniment DB 222, the part selection unit 214a determines which performance operator is operated by the user, that is, a plurality of parts constituting the musical tone data. Which part is designated and input is specified, and the musical sound data and rhythm pattern table of the part to be searched are selected.
- specifically, when the received MIDI information is a note number, the part selection unit 214a compares the note number with the contents of the part table, specifies whether the user's operation was on the bass input range keyboard 11a, the chord input range keyboard 11b, or the phrase input range keyboard 11c, and selects the musical sound data and rhythm pattern table of the corresponding part. Further, when the received MIDI information is channel information, the part selection unit 214a compares the channel information with the contents of the part table, specifies whether the user's operation was on the bass drum input pad 12a, the snare drum input pad 12b, the hi-hat input pad 12c, or the cymbal input pad 12d, and selects the corresponding musical sound data and rhythm pattern table. The part selection unit 214a outputs the part ID corresponding to the selected part to the search unit 216a.
- the pattern acquisition unit 215a acquires an input pattern for a specific part selected from a plurality of performance parts. Specifically, the pattern acquisition unit 215a stores the time at which trigger data is generated, that is, the onset time, which is input from the rhythm input device 10a with reference to the bar line clock, in each bar. In this way, the sequence of onset times stored in the RAM in units of one measure becomes the input rhythm pattern. Since the onset time stored here is based on the bar line clock, it takes a value between 0 and 1, similar to the bar line clock. Further, the bar line clock may be input from the outside to the information processing apparatus 20a.
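- A hedged sketch of how such pattern acquisition might be organized: each trigger is stamped with the current bar line clock value (0 to 1) and the collected list is flushed at every bar boundary. The class and method names are illustrative assumptions, not the described implementation.

```python
class InputRhythmPatternRecorder:
    def __init__(self):
        self.current_onsets = []   # onset times for the measure in progress

    def bar_line_clock(self, elapsed_sec, bpm, beats_per_bar=4):
        """Position within the current bar, normalized to the range [0, 1)."""
        bar_length_sec = beats_per_bar * 60.0 / bpm
        return (elapsed_sec % bar_length_sec) / bar_length_sec

    def on_trigger(self, clock_value):
        """Store the onset time (bar line clock value) of one trigger."""
        self.current_onsets.append(clock_value)

    def on_bar_end(self):
        """Return the finished one-measure input rhythm pattern and reset."""
        pattern, self.current_onsets = sorted(self.current_onsets), []
        return pattern

recorder = InputRhythmPatternRecorder()
recorder.on_trigger(recorder.bar_line_clock(elapsed_sec=0.5, bpm=120))
recorder.on_trigger(recorder.bar_line_clock(elapsed_sec=1.0, bpm=120))
print(recorder.on_bar_end())  # [0.25, 0.5]
```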
- like a metronome, the information processing apparatus 20a need only communicate the position of the bar line to the user visually or audibly, for example by emitting sound or light or by changing the display content of the screen at the timing of measures or beats.
- the audio output unit 26 emits sound or the display unit 24 emits light based on the bar line clock output from the notification unit 213a.
- the output unit 218a may reproduce an accompaniment sound to which, for example, a click sound indicating the position of the bar line is added in advance in accordance with the bar line clock. In this case, the user inputs a rhythm pattern in accordance with the bar line felt by the user from the accompaniment sound source.
- the search unit 216a searches the automatic accompaniment DB 222, a database storing a plurality of pieces of musical tone data, and acquires musical tone data as a search result based on the result of comparing the input rhythm pattern with the rhythm pattern included in the musical tone data of the specific part.
- the search unit 216a displays the search result on the display unit 24.
- when the user selects a piece of musical sound data from the displayed search results, the search unit 216a registers it as the data of one part of the automatic accompaniment data. The user creates automatic accompaniment data by repeating this operation.
- the automatic accompaniment DB 222 includes individual musical tone data and automatic accompaniment data for a plurality of performance parts, and a plurality of tables for managing information on the respective data.
- during reproduction of the musical sound data and the automatic accompaniment data, the output unit 218a reads out the specified musical sound data from the current position in the bar, that is, from the data position based on the bar line clock, and outputs a reproduction signal of the musical sound indicated by the musical sound data to the audio output unit.
- the audio output unit 26 outputs audio based on this reproduction signal.
- in the performance playback mode and the performance loop playback mode, the output unit 218a controls output in response to the user's performance operations using the constituent sounds in the musical sound data of the search result.
- the chord receiving unit 219a receives an input of a chord designated by the user.
- the pitch receiving unit 220a receives input of pitch information representing the pitch of a sound designated by the user.
- FIG. 15 is a flowchart of processing performed by the information processing apparatus 20a.
- the program of this flow is executed.
- the information processing apparatus 20a performs initial setting after execution of the program starts (step Sa0).
- the user designates the instrument type corresponding to each key range using the operation unit 25 and the instrument type corresponding to the input pad, and inputs the BPM using the BPM input operator 13.
- the control unit 21 reads the various tables shown in FIGS.
- the user uses the rhythm input device 10a to input a rhythm pattern by designating a predetermined key range on the keyboard 11 or any of the input pads 12a to 12d, that is, a part.
- the rhythm input device 10a transmits MIDI information including information for identifying the designated part, information for identifying the designated instrument type, information for identifying the input BPM, and the input rhythm pattern to the information processing device 20a.
- the control unit 21 executes processing according to the flow shown in FIG.
- in step Sa1, when the control unit 21 acquires the information identifying the BPM input by the user, included in the received MIDI information, it stores this as the BPM of the automatic accompaniment data to be recorded in the automatic accompaniment table read out to the RAM.
- next, based on the information identifying the part selected by the user, such as the note number or channel information included in the received MIDI information, and the part table included in the automatic accompaniment DB 222, the control unit 21 acquires the part ID of the part designated by the user and stores it in the RAM as the part ID of the part to be recorded in the automatic accompaniment table (step Sa2).
- the user inputs a rhythm pattern using the bass input range keyboard 11a, and the control unit 21 acquires “01” as the part ID as shown in FIG. 12A and stores it in the RAM.
- the control unit 21 then acquires the instrument type ID of the instrument type specified by the user, based on the information identifying the instrument type included in the received MIDI information and the instrument type table included in the automatic accompaniment DB 222, and stores it as the instrument type ID of the part to be recorded in the automatic accompaniment table read out to the RAM (step Sa3).
- for example, suppose that the control unit 21 acquires "002" as the instrument type ID, as shown in FIG. 12(b), and stores it as the instrument type ID of the part to be recorded in the automatic accompaniment table read into the RAM.
- then, when the control unit 21 acquires the input rhythm pattern included in the received MIDI information, it stores it in the RAM (step Sa4).
- step Sa5 the control unit 21 searches the automatic accompaniment DB 222 for musical sound data having a rhythm pattern that is the same as or similar to the input rhythm pattern for the part and instrument type specified by the user (step Sa5).
- step Sa5 the same processing as described with reference to FIG. 5 in the first embodiment is performed.
- in step Sb8 of FIG. 5, based on the rhythm pattern table of the selected part and the input rhythm pattern, the control unit 21 stores in the RAM, as the search result, a predetermined number of pieces of musical sound data in ascending order of the distance between their rhythm pattern data and the input rhythm pattern.
- the predetermined number is stored as a parameter in the storage unit 22 a and may be changed by the user using the operation unit 25.
- the control unit 21 includes a filtering function for outputting, as the search result, only musical sound data having a BPM close to the BPM input by the user, and the user can arbitrarily turn the filtering function ON and OFF via the operation unit 25.
- when the filtering function is ON, in step Sb8 the control unit 21 excludes from the search result any musical sound data whose BPM differs from the input BPM by more than a predetermined range. Specifically, for example, the control unit 21 acquires as the search result only musical sound data whose BPM is between 1/√2 times and √2 times the input BPM, and excludes the other musical sound data from the search result. Note that the coefficients 1/√2 and √2 are merely examples, and other values may be used.
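- A minimal sketch of this BPM filtering, assuming each record is a dict carrying a "BPM" key; the function name is illustrative only.

```python
import math

def filter_by_bpm(records, input_bpm, low=1 / math.sqrt(2), high=math.sqrt(2)):
    """Keep only records whose BPM lies within [input_bpm/sqrt(2), input_bpm*sqrt(2)]."""
    return [r for r in records if input_bpm * low <= r["BPM"] <= input_bpm * high]

records = [{"BPM": 60}, {"BPM": 95}, {"BPM": 130}, {"BPM": 200}]
# With an input BPM of 100, only records between roughly 71 and 141 BPM survive.
print(filter_by_bpm(records, input_bpm=100))  # [{'BPM': 95}, {'BPM': 130}]
```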
- the control unit 21 can reproduce the musical sound indicated by the musical sound data of the search result at the BPM input or designated by the user. At this time, if the user inputs a BPM significantly different from the original BPM of the musical sound data, the musical sound emitted by the audio output unit 26 may feel uncomfortable to the user. For example, consider the case where the user inputs a rhythm pattern at a tempo of BPM 240, the control unit 21 searches for a musical sound data group having that rhythm pattern, and the original BPM of the musical sound data included in the search result is 60.
- in this case, the musical sound based on the musical sound data included in the search result is emitted by the audio output unit 26 at a BPM four times the original BPM, that is, it is reproduced fast-forwarded at four times the original speed, which makes the user feel uncomfortable. Moreover, when the musical sound data is an audio file such as WAVE or mp3, the sound quality during reproduction deteriorates as the difference between the original BPM and the BPM specified by the user increases. For this reason, the control unit 21 has the filtering function.
- following step Sa5, the control unit 21 displays the musical sound data group stored in the RAM in step Sb8 on the display unit 24 as the search result (step Sa6).
- FIG. 16 is a schematic diagram showing an example of search results for musical sound data.
- FIG. 16 shows a case where a musical sound data group acquired as a search result by the control unit 21 is displayed on the display unit 24 based on the rhythm pattern input by the user using the bass input range keyboard 11a.
- a BPM designation slider 201, a key designation keyboard 202, and a chord designation box 203 are displayed at the top of the display unit 24.
- the BPM designation slider 201 includes, for example, a groove having a predetermined length, a knob provided in the groove, and a BPM display field.
- the control unit 21 displays the BPM corresponding to the changed position of the knob in the BPM display column.
- the BPM displayed in the BPM display column becomes faster as the knob is moved from the left end of the groove toward the right end, and slower as the knob is moved from the right end of the groove toward the left end.
- the control unit 21 reproduces the musical tone indicated by the musical tone data included in the musical tone data group selected by the user from the search result by the BPM designated using the BPM designation slider 201 (referred to as designated BPM). That is, the control unit 21 synchronizes the BPM of the musical sound data included in the musical sound data group selected by the user from the search result with the designated BPM.
- when the information processing device 20a is connected in synchronization with an external device, the information processing device 20a may receive the BPM designated by the external device and use the received BPM as the designated BPM.
- the BPM designated using the BPM designation slider 201 may be transmitted to an external device.
- the key designation keyboard 202 is an image simulating a keyboard to which a predetermined range (here, one octave) is assigned. A corresponding pitch is assigned to each key in the key designation keyboard 202.
- the control unit 21 acquires the pitch assigned to the designated key and stores it in the RAM as the designated key.
- the control unit 21 reproduces the musical tone indicated by the musical tone data included in the musical tone data group selected by the user from the search result with a key designated using the key designation keyboard 202. That is, the control unit 21 synchronizes the key of the musical tone data included in the musical tone data group selected by the user from the search result with the designated key.
- when the information processing device 20a is connected in synchronization with an external device, the information processing device 20a may receive the key designated by the external device and use the received key as the designated key. In this case, a key designated using the key designation keyboard 202 may be transmitted to the external device.
- the chord designation box 203 is an input box that accepts input of a chord designated by the user.
- the control unit 21 stores the input chord type in the RAM as the designated chord.
- the control unit 21 acquires, as the search result, the musical tone data having the chord type designated using the chord designation box 203 from among the musical tone data included in the retrieved musical tone data group.
- the chord designation box 203 may display a list of chord names in a pull-down format so that the search result can be filtered by the selected chord.
- when the information processing device 20a is connected in synchronization with an external device, the information processing device 20a may receive the chord designated by the external device and use the received chord as the designated chord. In this case, a chord designated using the chord designation box 203 may be transmitted to the external device. Further, as an input form, a button may be displayed for each chord type on the display unit, and the chord may be designated by clicking the button.
- a musical sound data group as a search result is displayed as a list.
- the user designates one of tabs (referred to as part tabs) representing different parts in the list of search results, thereby displaying a list of musical sound data groups of the search results for each part.
- when the drum part tab is designated and the user presses a key to which an arrow such as up, right, or left is assigned using the operation unit 25 (in this case, a keyboard), the control unit 21 displays the search result of the part corresponding to the pressed key among the plurality of parts constituting the drums, such as the bass drum, snare drum, hi-hat, and cymbal.
- a tab that is displayed as a reproduction history displays a musical sound data group having musical sound data that has been selected and reproduced by the user so far.
- a tab named "Automatic Accompaniment Data" may be added so that a list of automatic accompaniment data, in which combinations of waveform data of the parts the user likes are registered, is displayed, and the registered automatic accompaniment data may be retrieved from this tab.
- the item "rank" represents the rank, in descending order of similarity, within the musical sound data group of the search result.
- the item “file name” represents a file name of musical tone data included in the musical tone data group.
- the item “similarity” represents the distance between the input rhythm pattern and the rhythm pattern included in each musical tone data of the musical tone data group that is the search result. In other words, the smaller the numerical value represented by “similarity”, the shorter the distance and the higher the degree of similarity to the input rhythm pattern.
- the control unit 21 displays the names of the musical sound data and the related information side by side in ascending order of similarity.
- the “key” represents a reference pitch when the musical tone data is subjected to pitch conversion.
- the “key” is displayed as unspecified in the part data corresponding to the rhythm instrument.
- “Genre” represents the genre to which each piece of musical sound data belongs.
- BPM is a BPM included in each musical sound data, and represents the original BPM of the musical sound indicated by the musical sound data.
- Part name is the name of the part identified by the part ID included in the musical sound data group.
- the control unit 21 specifies the musical sound data selected by the user as one part of the automatic accompaniment data, and records its information in the corresponding part column of the automatic accompaniment data table in the RAM (step Sa7).
- at this time, the control unit 21 displays the background of the musical sound data that has been selected and double-clicked on the search result display screen in a color different from the background of the musical sound data that has not been selected.
- the control unit 21 reads out the musical sound data of each part specified in step Sa7 and registered in the automatic accompaniment table from the data position based on the bar line clock, applies a time stretch to the musical sound indicated by the musical sound data so that its speed follows the relationship between the BPM associated with each musical sound data and the BPM designated by the user, that is, so that the BPM of the specified musical sound data is synchronized with the designated BPM, and reproduces it with pitch conversion if necessary (step Sa8).
- as the BPM designated by the user, the input BPM is used at the time of the initial search, and when the user subsequently designates a BPM using the BPM designation slider 201 on the search result, that designated BPM is used.
- the control unit 21 may read out musical tone data from the head of the bar line, not limited to the data position based on the bar line clock.
- FIG. 17 is a schematic diagram illustrating BPM synchronization processing.
- for the BPM synchronization processing, a known method may be used; for example, the following method may be used.
- when the musical sound data is an audio file such as WAVE or mp3, the sound quality during reproduction deteriorates as the difference between the BPM of the musical sound data and the BPM specified by the user increases. Therefore, the control unit 21 performs the following process. When (BPM of the musical sound data × 1/√2) < (BPM specified by the user) < (BPM of the musical sound data × √2), the control unit 21 time-stretches the musical sound data so that it has the BPM specified by the user (FIG. 17A). When (BPM specified by the user) ≤ (BPM of the musical sound data × 1/√2), the control unit 21 time-stretches the musical sound data so that its BPM becomes double the BPM specified by the user (FIG. 17B). When (BPM specified by the user) ≥ (BPM of the musical sound data × √2), the control unit 21 time-stretches the musical sound data so that its BPM becomes half the BPM specified by the user (FIG. 17C).
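- A minimal sketch of choosing the stretch target BPM under the above rule; the exact handling of the boundary cases (strict versus non-strict inequalities) and the function name are assumptions.

```python
import math

SQRT2 = math.sqrt(2)

def playback_bpm(data_bpm, user_bpm):
    """Choose the BPM to time-stretch the musical sound data to.

    Stretch directly to the user's BPM while it stays within a factor of sqrt(2)
    of the data's BPM; otherwise stretch to double or half the user's BPM to
    limit sound-quality degradation.
    """
    if data_bpm / SQRT2 < user_bpm < data_bpm * SQRT2:
        return user_bpm        # FIG. 17A: stretch to the designated BPM
    elif user_bpm <= data_bpm / SQRT2:
        return user_bpm * 2    # FIG. 17B: stretch to twice the designated BPM
    else:
        return user_bpm / 2    # FIG. 17C: stretch to half the designated BPM

print(playback_bpm(60, 70))    # 70  (within the sqrt(2) window)
print(playback_bpm(60, 30))    # 60  (twice the designated BPM)
print(playback_bpm(60, 240))   # 120 (half the designated BPM)
```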
- when a key is designated, the control unit 21 pitch-converts the musical tone indicated by the musical tone data according to the difference between the key associated with the musical tone data and the designated key. That is, the specified musical sound data is reproduced with its key synchronized with the designated key. For example, when the key associated with the specified musical sound data is "C" and the designated key is "A", raising or lowering the pitch of the specified musical sound data can be considered; here, the method of raising the pitch of the specified musical sound data is adopted because the pitch shift amount is smaller and less deterioration in sound quality can be expected.
- FIG. 18 shows a key table.
- the key table is stored in the storage unit 22a.
- the controller 21 refers to the key table when performing pitch conversion.
- the control unit 21 calculates a value obtained by subtracting the key No corresponding to the designated key from the key No corresponding to the key associated with the specified musical sound data. This value is called a key difference.
- based on the key difference, the control unit 21 pitch-converts the specified musical sound data so that the frequency of the musical sound becomes 2^(key difference/12) times. When the key difference exceeds a predetermined positive value, the control unit 21 instead pitch-converts the specified musical sound data so that the frequency becomes 2^((key difference − 12)/12) times, and when the key difference is below a predetermined negative value, so that the frequency becomes 2^((key difference + 12)/12) times, which keeps the pitch shift amount small.
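- A hedged sketch of this frequency-ratio calculation. The ±12 wrapping threshold of 6 semitones is an assumption, since the exact boundary is not stated in the text; the function name is illustrative only.

```python
def pitch_ratio(data_key_no, designated_key_no, wrap_threshold=6):
    """Frequency ratio used to pitch-convert the specified musical sound data.

    key_difference follows the text (data key No. minus designated key No.);
    the +/-12 wrapping keeps the pitch shift small.
    """
    key_difference = data_key_no - designated_key_no
    if key_difference > wrap_threshold:
        key_difference -= 12
    elif key_difference < -wrap_threshold:
        key_difference += 12
    return 2.0 ** (key_difference / 12.0)

# A key difference of +9 is wrapped to -3, i.e. the frequency ratio corresponds
# to a shift of three semitones down instead of nine semitones up.
print(round(pitch_ratio(9, 0), 4))   # ~0.8409
```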
- the control unit 21 causes the sound output unit 26 to output the musical sound indicated by the musical sound data subjected to pitch conversion.
- the above formulas are examples, and other formulas may be determined in advance so as to ensure the sound quality during reproduction.
- when a chord is designated, the control unit 21 plays back the musical sound data selected from the search result after converting its pitch in accordance with the designated chord. That is, the control unit 21 pitch-converts the specified musical sound data according to the designated chord and reproduces it.
- after step Sa8, when another piece of musical sound data is selected and clicked from the search results (step Sa9; Yes), the control unit 21 returns the process to step Sa7.
- the control unit 21 specifies the part of the automatic accompaniment data being created (step Sa7), and performs the process of step Sa8.
- musical sound data can be registered as parts of the automatic accompaniment data up to a specified number of parts; for example, each part type has an upper limit on the number of registrations, such as drum parts (maximum 4 channels), bass parts (maximum 1 channel), and chord parts (maximum 3 channels). If, for example, a fifth drum part is designated, the newly designated musical sound data is registered in place of the drum musical sound data that has been reproduced so far.
- after step Sa8, if no other musical sound data is selected by the user from the search result (step Sa9; No) and the end of the search process is instructed by the user (step Sa10; Yes), the control unit 21 converts the automatic accompaniment data table and the files specified by the table into one data file, stores it in the storage unit 22 as a file of automatic accompaniment data (step Sa11), and ends the processing flow.
- the user can arbitrarily read the automatic accompaniment data stored in the storage unit 22 using the operation unit 25.
- when the end of the search process is not instructed by the user (step Sa10; No), the control unit 21 returns the process to step Sa1.
- in this way, musical sound data of different parts of the automatic accompaniment data are registered one after another, and the automatic accompaniment data is completed when the user has repeated these operations until the number of parts required for combining the automatic accompaniment data has been registered.
- the musical tone indicated by the musical tone data of the newly selected part is output in superposition with the musical tone indicated by the musical tone data of the already played part.
- since the control unit 21 reads the musical sound data from the data position based on the bar line clock, the musical sounds of the musical sound data of the plurality of parts are output in synchronization.
- the following three types of variations are considered for the progress of each part.
- any one of measure units, 2-beat units, 1-beat units, eighth-note units, or no specification can be selected, and playback can be started at the timing quantized using the selected unit as a reference, as illustrated in the sketch after the three forms below.
- the first form of progression is "synchronize at the beginning of a bar".
- in this case, after the user designates the accompaniment of each part, the musical sound data is reproduced from the position of the corresponding bar head when the bar line clock reaches the head of a bar.
- the second form of progression is “synchronize at the beginning of the beat”.
- in this case, the musical sound data is reproduced from the corresponding beat position when the bar line clock reaches the beginning of a beat.
- the third form of progress is “not synchronized”. In this case, immediately after the user designates the accompaniment of each part, the musical sound data is reproduced from the corresponding progress position.
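- A minimal sketch of the quantized start mentioned above: given the current bar line clock (0 to 1) and a quantization unit expressed as a fraction of a bar, it returns the clock value at which playback of the newly selected part begins. The unit fractions and the function name are illustrative assumptions.

```python
import math

def quantized_start(clock, unit_fraction):
    """unit_fraction: 1.0 = bar, 0.5 = 2 beats (4/4), 0.25 = 1 beat, 0.125 = eighth note."""
    if unit_fraction is None:            # "not synchronized": start immediately
        return clock
    return (math.floor(clock / unit_fraction) + 1) * unit_fraction % 1.0

print(quantized_start(0.30, 0.25))  # 0.5 -> wait for the next beat
print(quantized_start(0.30, 1.0))   # 0.0 -> wait for the next bar head
print(quantized_start(0.30, None))  # 0.3 -> start immediately
```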
- the setting content is stored in the storage unit 22, and the user can read out the arbitrary setting content using the operation unit 25.
- as described above, according to this embodiment, the musical tone data closest to the pattern intended by the user can be identified.
- further, since the user selects an arbitrary part from among the different parts associated with the plurality of performance operators and inputs a rhythm pattern, when the user comes up with a rhythm pattern for a specific part, the user can search by inputting the rhythm pattern for that part. In addition, since the user only has to select a part, input a rhythm pattern, and register the result as the performance of that part, automatic accompaniment data can be created intuitively and efficiently.
- since the automatic accompaniment data selected by the user from among the automatic accompaniment data searched in this way are reproduced in synchronization, the user can intuitively and efficiently obtain an automatic accompaniment sound of an ensemble of a plurality of parts.
- the third embodiment is a system that searches for style data as an example of a music data processing system.
- the automatic accompaniment DB 222 stores style data and has a style table for searching for style data.
- in other respects, the third embodiment is the same as the second embodiment.
- the style data in this embodiment is read into an electronic musical instrument, a sequencer, or the like as in the second embodiment, and plays the same role as so-called automatic accompaniment data.
- the style data are accompaniment sound data pieces collected for each style, such as "Bebop01", "HardRock01", and "Salsa01", and are composed of section data for each "section" (one to several bars), which is the minimum unit of an accompaniment pattern.
- the style data for each section includes performance data identifiers (rhythm pattern IDs) described according to the MIDI format for each of the bass drum, snare drum, hi-hat, cymbal, phrase, chord, and bass parts.
- the control unit 21 analyzes the rhythm pattern for each part from the performance data, and the contents according to the analysis result are registered in the style table.
- the control unit 21 analyzes the arrangement of the performance data along the time series using the reference pitch, and registers the content according to the analysis result in the style table.
- the control unit 21 analyzes the chords used in the performance data with a reference chord as a basis, and registers chord information such as "Cmaj7", as the content according to the analysis result, in a chord progression information table described later.
- section progression information and chord progression information are further provided corresponding to each style data.
- the section progress information is information for sequentially specifying sections from the style data in time series in accordance with the progress of music performance.
- the chord progression information is information for sequentially specifying the chords to be played in time series in accordance with the progress of the music performance.
- each section may be selected by user designation without using the section progress information.
- chord information may be specified from input sound input using the keyboard 11 without using chord progression information, and accompaniment may be reproduced based on the chord information.
- the chord information includes chord root sound and chord information indicating the chord type.
- FIG. 19A is a diagram illustrating an example of a style table.
- in the style table, a plurality of style data whose "genre" is "Swing & JAZZ" are shown.
- one style data record includes a plurality of items such as "style ID", "style name", "section", "key", "genre", "BPM", "time signature", "bass rhythm pattern ID", and "chord rhythm pattern ID".
- “Style ID” is an identifier for uniquely identifying style data.
- the “style name” is a name that uniquely identifies each style data.
- style data having a certain style name is composed of a plurality of sections.
- sections are, for example, intro (I (normal), II (variation 1), III (variation 2)), main (A (normal), B (variation 1), C (variation 2), D (variation 3)), and ending.
- section represents a section to which each style having a certain style name belongs.
- for example, for the style data having the style name "Bebop01", the control unit 21 reproduces the musical sound based on the style data whose section is the intro normal pattern "I", then repeatedly reproduces the musical sound based on the style data whose section is the main normal pattern "A" a predetermined number of times, and then reproduces the musical sound based on the style data whose section is the ending normal pattern "1".
- the control unit 21 reproduces the musical sound based on the style data of the selected style according to the order of the sections.
- the “key” represents a pitch that serves as a reference when the pitch data of each style data is converted.
- here, a pitch name is described; however, since the pitch name indicates a pitch within a specific octave, it substantially expresses a pitch.
- “Genre” represents the name of the genre of music to which each style data belongs.
- “BPM” represents a tempo when a sound based on each style data is reproduced.
- "Time signature" represents the type of time signature of each style data, such as triple or quadruple time. If there is a variation switching instruction during the performance, the performance switches to the variation pattern of the corresponding section and returns to the normal pattern after that pattern ends.
- in each style data, a rhythm pattern ID is described for each part in one-to-one correspondence with that part.
- for example, the style data whose "style ID" is "0001" has the "bass rhythm pattern ID" "010010101". This represents that the rhythm pattern record in the rhythm pattern table shown in FIG. 13A whose part ID is "01" (bass), whose rhythm pattern ID is "010010101", whose "rhythm pattern data" is "BebopBass01Rhythm.txt", and whose "musical sound data" is "BebopBass01Rhythm.Wav" is associated with the style data whose "style ID" is "0001".
- FIG. 19B (a) shows a section progress information table.
- the section progress information table is a table in which information for sequentially specifying sections from the style data in time series is collected according to the progress of music performance.
- each section information Sni designates a storage area for data related to the corresponding section, and the timing data Tssi and Tsei before and after the section information Sni indicate the start and end of accompaniment by the designated section. Therefore, by using this section progress information, sections can be sequentially specified from the accompaniment style data designated by the accompaniment style designation data St, by repeating combinations of the timing data Tssi, Tsei and the section information Sni.
- FIG. 19B (b) shows a chord progression information table.
- the chord progression information table is a table in which information for sequentially designating chords to be played in time series in accordance with the progress of the music performance is collected.
- the chord information Cnj, defined by the two pieces of information Crj and Ctj, indicates the type of chord to be played with respect to the chord performance data in the section specified by the section information Sni, and the timing data Tcsj and Tcej before and after the chord information Cnj indicate the start and end of performance with this chord. Therefore, by using this chord progression information, after the key is specified by the key information Key, the chords to be played can be sequentially specified by repeating combinations of the timing data Tcsj, Tcej and the chord information Cnj.
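- A hedged sketch of how such timing-and-information pairs might be walked through during playback; the triple layout and beat-based timing are assumptions made only for illustration.

```python
# Each entry is assumed to be a (start_timing, info, end_timing) triple, with
# timings expressed in beats from the beginning of the piece.
def info_at(progress_entries, position_in_beats):
    """Return the section information Sni (or chord Cnj) active at the given position."""
    for start, info, end in progress_entries:
        if start <= position_in_beats < end:
            return info
    return None

section_progress = [(0, "Intro-I", 8), (8, "Main-A", 40), (40, "Ending-1", 48)]
chord_progress = [(0, "Cmaj7", 4), (4, "Dmaj", 8)]

print(info_at(section_progress, 10))  # Main-A
print(info_at(chord_progress, 5))     # Dmaj
```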
- the timing of the section progression information and the chord progression information is normally set in units of measures or beats, but any other timing can be adopted as necessary, for example, setting in units of clock timing.
- the number of clock timings from the beginning of the measure of the music can be used for various timing data.
- when the next section Sni+1 or chord Cnj+1 starts immediately after the end of a certain section Sni or chord Cnj, either the end timing data Tsei, Tcej or the start timing data Tssi+1, Tcsj+1 can be omitted.
- section progress information and chord progression information are mixedly stored in the master track.
- the control unit 21 reads the accompaniment style designation data St in the "section progress information" and the accompaniment sound data piece of the section (for example, "Main-A" of "BeBop01") designated by the sequentially read section information Sni, and stores them in the RAM.
- data relating to each section is stored in accordance with a chord that is a reference (for example, “Cmaj”).
- the storage unit 22 stores a conversion table that describes conversion rules for converting an accompaniment sound data piece according to the reference chord to a sound according to any specified chord.
- based on this conversion table, the accompaniment sound data piece according to the reference chord is converted into a sound according to the arbitrary chord information Cnj (for example, "Dmaj") sequentially read out from the chord progression information table.
- the audio output unit 26 outputs the converted sound.
- FIG. 20 is a flowchart of processing performed by the information processing apparatus 20 according to the third embodiment.
- steps Sd0 to Sd5 are the same as steps Sa0 to Sa5 of FIG. 15 according to the second embodiment.
- then, the control unit 21 displays, as the search result, style data in which the same rhythm pattern ID as that of the rhythm pattern records found in step Sd5 is set as the rhythm pattern ID of any part.
- FIG. 21 is a diagram illustrating an example of a search result of style data.
- FIG. 21A shows a case where the style data output as a search result by the control unit 21 is displayed on the display unit 24 based on the rhythm pattern input by the user using the chord input range keyboard 11b. .
- the item “similarity” represents the distance between the input rhythm pattern and the rhythm pattern of the style data that is the search result. That is, the smaller the numerical value represented by “similarity”, the higher the degree of similarity to the input rhythm pattern.
- the style data are displayed in ascending order of "similarity" (the distance between rhythm patterns calculated in step Sb7), that is, in descending order of similarity to the input rhythm pattern.
- the user can filter and display the search result using at least one item of “key”, “genre”, or “BPM”.
- the BPM at which the user input the rhythm pattern, that is, the input BPM, is displayed in the input BPM display column 301 at the top of the search result.
- a tempo filter 302 for filtering the search result style data with the input BPM and a time signature filter 303 for filtering the search result style data with the specified time signature are displayed at the top of the search result.
- as other filtering items, "Chord", "Scale", and "Tone" may be displayed; in the case of "Chord", filtering may be performed by the chord used in the chord part, in the case of "Scale", by the scale in which the style data was created, and in the case of "Tone", by the tone color of each part.
- the control unit 21 includes a filtering function for outputting, as the search result, only style data having a BPM close to the input BPM, and the user can arbitrarily turn the filtering function ON and OFF via the operation unit 25. Specifically, since each style data includes "BPM", when the filtering function is ON the control unit 21 displays as the search result, for example, only information on style data having a BPM between 1/√2 times and √2 times the input BPM. The coefficient values 1/√2 and √2 applied to the input BPM are examples, and other values may be used.
- FIG. 21B shows a state in which the user has turned the filtering function on from the state of FIG. 21A. In FIG. 21B, the control unit 21 performs filtering using the coefficients 1/2^(1/2) and 2^(1/2) described above; since the input BPM is 100, style data having BPMs of 71 to 141 are displayed as the filtering result. In this way, the user can obtain style data having a BPM close to the input BPM as a search result, and can be more satisfied with the search result.
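- The following is a minimal sketch of the BPM filtering described above, assuming the style data are available as a simple list of records with a "bpm" field (the field name and data layout are illustrative assumptions, not the actual style table structure).
```python
import math

SQRT2 = math.sqrt(2.0)

def filter_by_bpm(style_records, input_bpm, lo_coeff=1 / SQRT2, hi_coeff=SQRT2):
    """Keep only style records whose BPM lies between input_bpm / sqrt(2) and
    input_bpm * sqrt(2); for an input BPM of 100 this is roughly 71 to 141.
    The two coefficients are the example values given above and may be changed."""
    lo, hi = input_bpm * lo_coeff, input_bpm * hi_coeff
    return [r for r in style_records if lo <= r["bpm"] <= hi]

# Example with assumed records: an input BPM of 100 keeps only "Bebop01".
styles = [{"name": "Bebop01", "bpm": 140}, {"name": "Slow01", "bpm": 60}]
print(filter_by_bpm(styles, 100))
```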
- By inputting information representing an arbitrary time signature such as 4/4 into the time signature filter 303 using the operation unit 25, the user can filter the results so that only style data associated with the input time signature information are displayed as search results. In addition to narrowing down to the specified time signature, style data of related time signatures grouped in advance may also be extracted; for example, when 4 beats are specified, not only 4-beat style data but also 2-beat and 6/8-beat style data, which are easy to input against a 4-beat metronome, may be extracted.
- The user may first designate a certain part and search for style data having a rhythm pattern close to the input rhythm pattern (first search), and then designate a part different from the one designated first, input a rhythm pattern for it, and perform the search again (second search); in this way, a search result is obtained in which the style data output by the first search are narrowed down by the second search.
- The "similarity" in the search result in this case is the value obtained by adding, for each style data, the similarity for the part specified in the first search and the similarity for the part specified in the second search.
- FIG. 21C shows the display contents of the result of the user specifying a hi-hat as a part and inputting a rhythm pattern in the state where the search result of FIG. 21A is displayed.
- In FIG. 21C, style data having the time signature information "4/4" entered in the time signature filter 303 are displayed as the search result.
- "Similarity" in FIG. 21C is the value obtained by adding, for each style data, the similarity when the target part is "chord" and the similarity when the target part is "hi-hat".
- Although FIG. 21 shows that a search is possible using two parts, as represented by the items "first search part" and "second search part", the number of parts that can be used is not limited to two. Further, when the user designates a certain part, performs a search, and then inputs a rhythm pattern while designating a part (second search part) different from the part designated first (first search part), the control unit 21 may output a search result using only the second search part, regardless of the search result using the first search part (referred to as an overwriting search). Such narrowing search and overwriting search may be switchable by the user via the operation unit 25 of the information processing apparatus 20.
- the search method when a plurality of different parts are specified is not limited to the above-described content.
- In this case, the control unit 21 calculates the similarity between the input rhythm pattern of each designated part and the corresponding rhythm pattern record.
- The control unit 21 then adds, for each style data associated with those rhythm pattern records, the similarities calculated for the rhythm pattern records of the designated parts.
- The display unit 24 displays the style data in ascending order of the added similarity, that is, starting from the most similar style data.
- For example, when the bass drum and the snare drum are designated, the control unit 21 calculates the bass drum similarity and the snare drum similarity for each style data and adds the two together. In this way, the user can simultaneously specify a plurality of parts and search for style data having phrases whose rhythm patterns are sufficiently close to the intended rhythm patterns.
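- As a rough illustration of how the per-part similarities might be combined and sorted when several parts are designated, consider the following sketch; the dictionary layout, the part names, and the numeric distances are assumptions made for the example.
```python
def combine_part_distances(per_part_distances):
    """per_part_distances maps a part name (e.g. "bass drum", "snare drum") to
    a dict {style_id: distance}.  The distances of all designated parts are
    summed per style, and styles are returned in ascending order of the sum,
    i.e. the most similar style first."""
    totals = {}
    for distances in per_part_distances.values():
        for style_id, d in distances.items():
            totals[style_id] = totals.get(style_id, 0.0) + d
    return sorted(totals.items(), key=lambda kv: kv[1])

# Example with assumed values: Bebop01 (0.5) ranks ahead of Rock01 (0.625).
print(combine_part_distances({
    "bass drum":  {"Bebop01": 0.25, "Rock01": 0.5},
    "snare drum": {"Bebop01": 0.25, "Rock01": 0.125},
}))
```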
- control unit 21 specifies the style data selected by the user (Step Sd7).
- the configuration display screen of the specified style data is displayed on the display unit 24.
- FIG. 22 is a diagram illustrating an example of a style data configuration display screen.
- Here, the user has selected the style data whose style name is "Bebop01" from the search result.
- the style name, key, BPM, and time signature of the selected style are displayed at the top of the playback screen.
- a tab referred to as a section tab 401 indicating a section is displayed, and information on each part in the section indicated by each tab is expanded and displayed on a track.
- The BPM, rhythm category, and key of each rhythm pattern record are displayed, and the rhythm pattern of each part in the section indicated by the tab is displayed in each track along a horizontal time axis that advances to the right.
- a predetermined image 402 is displayed at a position corresponding to the sound generation time with the left end of the display area of the image 402 as the performance start timing.
- the image 402 is displayed in a bar shape having a predetermined width in the vertical direction on the configuration display screen.
- The control unit 21 reproduces a rhythm pattern based on the style data of the section of the selected tab (step Sd8).
- the information processing apparatus 20a can reproduce the style data in response to a reproduction start instruction by the user operating an operator (not shown) on the style data configuration display screen.
- the user can switch between these modes by using the operation unit 25.
- In the automatic accompaniment mode, performance data based on the selected style data are reproduced, and sounds corresponding to performance operations that the user performs using the rhythm input device 10a or the operation unit 25 are output together with the musical tones based on the selected style data.
- The control unit 21 also has a mute function; by using the operation unit 25 to activate the mute function for any part, the user can prevent the performance data of that part from being reproduced.
- the user can perform a performance operation for the muted part while listening to the unmuted part like an accompaniment sound source.
- the control unit 21 replaces the designated part with the performance data selected from the search result based on the input rhythm pattern, among the performance data combined in advance in the style data being reproduced.
- The control unit 21 performs the above-described search process for the designated part and causes the display unit 24 to display the search result screen shown in FIG.
- the control unit 21 replaces the performance data of the part specified by the user with the selected one in the style being played.
- the user can replace the desired part of the style selected from the search results with performance data based on his / her input rhythm pattern.
- In this way, the user can obtain style data that reflect his or her intended rhythm patterns for each section and part, rather than only combinations prepared in advance, so the user can not merely search but also compose music using the information processing apparatus 20a.
- Alternatively, when the user performs a performance operation for a certain part, the control unit 21 may search, for each part for which no performance operation has been performed, for performance data that are compatible with the input rhythm pattern of the part for which the performance operation was performed.
- the good compatibility between rhythm patterns may be determined in advance based on the fact that the key, genre, or time signature is the same, or that the BPM is within a predetermined range with respect to the input BPM.
- When the control unit 21 identifies, among the performance data having good compatibility with the input rhythm pattern of the part on which the performance operation was performed, the performance data with the lowest similarity value (that is, the most similar data), it reproduces them in synchronization. In this way, even if the user is not satisfied with the initial search results, the user can have style data that are compatible with the rhythm pattern he or she input played back, simply by specifying a part and inputting a rhythm pattern.
- step Sd8 when the user selects another style data using the operation unit 25 (step Sd9; Yes), the control unit 21 returns the process to step Sd7.
- the control unit 21 specifies the newly selected style data (step Sd7), and causes the display unit 24 to display a reproduction screen of the specified style data.
- After step Sd8, if the user does not select other style data using the operation unit 25 (step Sd9; No) and the end of the search process is instructed (step Sd10; Yes), the control unit 21 ends the process.
- the user inputs a rhythm pattern by performing a performance operation on the selected part, so that the rhythm pattern similar to the input rhythm pattern is not limited to the musical sound data of a specific part. It is possible to obtain style data obtained by combining musical sound data and musical sound data having good compatibility with the musical sound data. Further, the user can replace the musical tone data of an arbitrary part of the search result style data with musical tone data similar to an input rhythm pattern different from the first input. Thereby, the user can compose music as well as search using the information processing apparatus 20a.
- In the above embodiments, one phrase record is output as a search result in the loop playback mode or the performance loop playback mode. However, the rhythm pattern search unit 213 may instead rearrange a plurality of phrase records whose similarity to the input rhythm pattern is equal to or higher than a certain level in descending order of similarity and output them as search results.
- the number of phrase records output as a search result may be stored as a constant in the ROM, or may be stored as a variable in the storage unit 22 and changeable by the user. For example, when the number of phrase records output as a search result is 5, the names of the phrase musical sound data of each phrase record are displayed in a list form on the display unit 24 for five cases. A sound based on the phrase record selected by the user is output from the audio output unit 26.
- Modification 2 When the type of musical instrument is a musical instrument having a wide pitch, the key of each constituent sound in the phrase musical sound data may not match the key of the accompaniment including the external sound source.
- the control unit 21 may be able to change the key of each constituent sound in the phrase musical sound data by the user performing an operation through the operation unit 25. Such a key change may be performed via the operation unit 25 or may be performed via an operator such as a fader, a knob, a wheel, or a button provided in the rhythm input device 10. .
- By storing data representing the keys of the above-described sounds in the rhythm DB 221 and the automatic accompaniment DB 222, the control unit 21 may announce to the user what the changed key is when the user changes the key.
- (Modification 3) Depending on the musical sound data, the amplitude (power) of the waveform may not converge to nearly 0 at the end of a constituent sound, in which case clip noise may occur after the sound based on that constituent sound is output. To prevent this, the control unit 21 may have a function of automatically fading in or fading out a certain range near the start and end of the constituent sounds. The user can select whether to apply the fade-in or the fade-out via the operation unit 25 or some operator provided on the rhythm input device 10.
- FIG. 23 is a schematic diagram showing a case where fade-out is applied to each constituent sound in the musical sound data.
- The fade-out is applied at the locations indicated by the arrows "Fade", so that the amplitude of the waveform at those locations gradually decreases and is almost zero at the end time of each constituent sound.
- the period during which the fade-out is applied can be adjusted by the user within a range of about several milliseconds to several tens of milliseconds.
- the process of applying the fade-out may be performed as a pre-process for a performance operation by the user, or may be performed as a real-time process or a post-process.
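- A minimal sketch of the fade-out described above is shown below, assuming the constituent sound is available as a floating-point sample array; the linear ramp and the default length of 20 ms are illustrative choices, not values fixed by the embodiment.
```python
import numpy as np

def apply_fade_out(samples, sample_rate, fade_ms=20.0):
    """Linearly attenuate the last fade_ms milliseconds of a constituent sound
    so that its amplitude is almost zero at the end and clip noise is avoided.
    fade_ms would be user-adjustable in roughly the range of a few
    milliseconds to a few tens of milliseconds."""
    out = np.asarray(samples, dtype=np.float32).copy()
    n = min(len(out), int(sample_rate * fade_ms / 1000.0))
    if n > 0:
        out[-n:] *= np.linspace(1.0, 0.0, n, dtype=np.float32)
    return out
```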
- the control unit 21 may record a phrase that is a result of a performance operation by the user, and the recorded content may be output in a file format that is generally used for sound source loop material. For example, when a rhythm pattern desired by the user is not stored in the rhythm DB 221 in music production, if there is a function for recording his / her performance in this way, the user has a phrase musical sound whose image is close to what he / she wanted. Data can be obtained.
- control unit 21 is not limited to one piece of musical sound data to be reproduced, and may be configured to output a plurality of phrases as voices with a plurality of phrases being superimposed on each other.
- a plurality of tracks are displayed on the display unit 24, and the user can assign different musical sound data and reproduction modes to the respective tracks.
- the user can perform the performance by assigning the musical data of the conga to the track A in the loop playback mode as an accompaniment and assigning the musical data of the djembe in the performance reproduction mode in the track B. It becomes.
- When the attack intensity of the constituent sound whose sounding time coincides with that of the trigger data (referred to as constituent sound A) differs greatly from the velocity data associated with that trigger data (here, when the difference exceeds a predetermined threshold), the control unit 21 replaces constituent sound A with a constituent sound selected at random from a plurality of constituent sounds in the musical sound data whose attack intensity substantially corresponds to the input velocity data.
- the user can select whether or not to apply the above-described processing via an operation unit 25 or some operation element provided in the rhythm input device 10. In this way, the user can obtain an output result that is closer to the performance operation performed by the user.
- the phrase musical tone data has a file format such as WAVE or mp3.
- the present invention is not limited to this, and may be sequence data such as a MIDI format.
- the storage unit 22 stores files in the form of MIDI data, and the configuration corresponding to the audio output unit 26 is a MIDI sound source.
- processing such as time stretching is not required for key shift or pitch conversion. Therefore, in this case, when the user designates a key using the key designation keyboard 202, the control unit 21 changes information indicating the key in the MIDI information represented by the musical tone data to the designated key.
- each rhythm pattern record does not need to have musical tone data corresponding to a plurality of chords.
- the control unit 21 changes information indicating the chord in the MIDI information represented by the musical tone data to the designated chord.
- style data using audio data may be used.
- the basic configuration is the same as the style data of the third embodiment, except that the performance data of each part is stored as audio data. Further, style data in which MIDI and audio are combined may be used.
- the control unit 21 detects a specific phrase record or rhythm pattern record by comparing the trigger data generated by the user's performance operation with the phrase musical sound data included in the rhythm DB 221 or the rhythm pattern data included in the automatic accompaniment DB 222. Not limited to this, the control unit 21 may search the rhythm DB 221 and the automatic accompaniment DB 222 using both trigger data and velocity data generated by a user's performance operation. In this case, when there are two pieces of musical tone data having the same rhythm pattern, musical tone data closer to the velocity data obtained by the user's performance operation with respect to the attack intensity of each constituent tone is detected as a search result. In this way, it is possible to output musical tone data close to what the user has imagined with respect to the attack intensity as a search result.
- (Modification 9) For example, so that a search result with a consistent rhythm category is output, the rhythm category corresponding to the input rhythm pattern may first be determined, and the rhythm pattern shift calculation (step Sb6) and the rhythm pattern distance calculation (step Sb7) may then be performed only on the phrase records or rhythm pattern records belonging to the rhythm category of the determination result. In this case, since the calculation amount is small, the load on the information processing apparatus 20 is reduced and the response time for the user is also shortened.
- (Modification 10) For an onset time in the input rhythm pattern whose absolute time difference from an onset time in the rhythm pattern to be compared is smaller than a threshold, the deviation is corrected to 0 or to a value smaller than the original one, on the assumption that the difference was not intended by the user.
- This threshold is a value of “1”, for example, and is stored in advance in the storage unit 22a.
- Specifically, the control unit 21 performs the correction by multiplying the absolute value of the difference between the onset times by a coefficient λ.
- The coefficient λ takes a value between 0 and 1, and is 0 here.
- In that case, the control unit 21 calculates the difference between the rhythm patterns for that onset time as 0.
- The coefficient λ may be determined in advance and stored in the storage unit 22a, or a correction curve associating the value of the coefficient λ with the amount of deviation between the two rhythm patterns may be stored in the storage unit 22a for each rhythm category, and the control unit 21 may determine the coefficient λ according to that correction curve.
- Conversely, for an onset time in the input rhythm pattern whose absolute time difference from an onset time in the rhythm pattern to be compared is larger than the threshold, the control unit 21 either excludes it from the calculation or corrects the deviation so that it becomes smaller than the original value. Thereby, even when the user inputs a rhythm pattern only in the first half or the second half of one measure, for example, the search is performed for the section in which the rhythm pattern was input, and an appropriate rhythm pattern record can be obtained as a search result.
- (14) The control unit 21 calculates, with each onset time in rhythm pattern B as a reference, the absolute value of the time difference from the nearest onset time in rhythm pattern A. (15) The control unit 21 calculates the sum of the absolute values calculated in procedure (14). (16) The control unit 21 calculates the sum of the absolute values of the differences between the velocity data at each onset time in rhythm pattern B and the attack intensity at each onset time in rhythm pattern A. (17) The control unit 21 calculates the deviation between rhythm pattern A and rhythm pattern B according to the following Equation (1).
- Deviation between rhythm pattern A and rhythm pattern B = α × (sum of absolute values of time differences calculated in procedure (12) + sum of absolute values of time differences calculated in procedure (15)) / 2 + (1 - α) × (sum of absolute values of velocity differences calculated in procedure (13) + sum of absolute values of velocity differences calculated in procedure (16)) / 2 ... Equation (1)
- Here, α is a predetermined coefficient that satisfies 0 < α < 1 and is stored in the storage unit 22a.
- The user can change the value of the coefficient α using the operation unit 25. For example, when searching for a rhythm pattern, the user may set the value of the coefficient α depending on whether coincidence of onset times or coincidence of velocities should have priority. In this way, the user can obtain search results that take velocity into account.
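- A sketch of the deviation of Equation (1) is given below, assuming each rhythm pattern is represented as a list of (onset time, velocity) pairs normalized within one measure; the helper names and the data layout are assumptions, while the symmetric nearest-onset matching follows procedures (12) to (16) as described above.
```python
def nearest(value, values):
    """Return the element of values closest to value."""
    return min(values, key=lambda v: abs(v - value))

def deviation(pattern_a, pattern_b, alpha=0.5):
    """Deviation between rhythm patterns A and B in the sense of Equation (1);
    alpha (0 < alpha < 1) balances the onset-time term against the velocity term."""
    times_a = [t for t, _ in pattern_a]
    times_b = [t for t, _ in pattern_b]
    vel_a, vel_b = dict(pattern_a), dict(pattern_b)
    # Sums of absolute onset-time differences, A against B and B against A
    # (procedures (12) and (15)).
    t_ab = sum(abs(t - nearest(t, times_b)) for t in times_a)
    t_ba = sum(abs(t - nearest(t, times_a)) for t in times_b)
    # Sums of absolute velocity differences against the nearest onset
    # (procedures (13) and (16)).
    v_ab = sum(abs(v - vel_b[nearest(t, times_b)]) for t, v in pattern_a)
    v_ba = sum(abs(v - vel_a[nearest(t, times_a)]) for t, v in pattern_b)
    return alpha * (t_ab + t_ba) / 2 + (1 - alpha) * (v_ab + v_ba) / 2
```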
- (24) The control unit 21 calculates, with each onset time in rhythm pattern B as a reference, the absolute value of the time difference from the nearest onset time in rhythm pattern A. (25) The control unit 21 calculates the sum of the absolute values calculated in procedure (24). (26) The control unit 21 calculates the sum of the absolute values of the differences between the duration pattern at each onset time in rhythm pattern B and the duration pattern at each onset time in rhythm pattern A. (27) The control unit 21 calculates the deviation between rhythm pattern A and rhythm pattern B according to the following Equation (2).
- Deviation between rhythm pattern A and rhythm pattern B = β × (sum of absolute values of time differences calculated in procedure (22) + sum of absolute values of time differences calculated in procedure (25)) / 2 + (1 - β) × (sum of absolute values of duration differences calculated in procedure (23) + sum of absolute values of duration differences calculated in procedure (26)) / 2 ... Equation (2)
- Here, β is a predetermined coefficient that satisfies 0 < β < 1 and is stored in the storage unit 22.
- The user can change the value of the coefficient β using the operation unit 25. For example, when searching for a rhythm pattern, the user may set the value of the coefficient β depending on whether coincidence of onset times or coincidence of duration patterns should have priority. In this way, the user can obtain search results that take duration into account.
- the above is a description of variations of the method for calculating the deviation between rhythm patterns.
- In step Sb7 of the first to third embodiments, the control unit 21 multiplies the similarity distance calculated for the rhythm category in step Sb4 by the deviation of the rhythm pattern calculated in step Sb6.
- With this calculation, however, when either the similarity distance or the deviation is "0", the distance between the rhythm patterns becomes "0" and the other value is not reflected at all.
- Therefore, the control unit 21 may calculate the distance between the rhythm patterns according to the following Equation (3):
- Distance between rhythm patterns = (similarity distance calculated for the rhythm category in step Sb4 + γ) × (deviation of the rhythm pattern calculated in step Sb6 + δ) ... Equation (3)
- Here, γ and δ are predetermined constants stored in the storage unit 22a, and may be appropriately small values. In this way, even if one of the similarity distance calculated for the rhythm category in step Sb4 and the deviation of the rhythm pattern calculated in step Sb6 is "0", the other value is still reflected in the calculated distance between rhythm patterns.
- Alternatively, in step Sb7 the control unit 21 may calculate the distance between rhythm patterns according to the following Equation (4).
- Distance between rhythm patterns = ε × (similarity distance calculated for the rhythm category in step Sb4) + (1 - ε) × (deviation of the rhythm pattern calculated in step Sb6) ... Equation (4)
- Here, ε is a predetermined coefficient that satisfies 0 < ε < 1.
- The coefficient ε is stored in the storage unit 22, and the user can change its value using the operation unit 25. For example, when searching for a rhythm pattern, the user may set the value of the coefficient ε depending on whether priority is given to the similarity distance calculated for the rhythm category or to the deviation of the rhythm pattern. In this way, the user can obtain search results closer to what is desired.
- Alternatively, in step Sb7 the control unit 21 may calculate the distance between rhythm patterns according to the following Equation (5-1).
- Distance between rhythm patterns = (similarity distance calculated for the rhythm category in step Sb4 + deviation of the rhythm pattern calculated in step Sb6) × κ × (difference between the input BPM and the BPM of the rhythm pattern record) ... Equation (5-1)
- Here, κ is a predetermined constant that satisfies 0 < κ < 1.
- κ is stored in the storage unit 22, and the user can change its value using the operation unit 25.
- For example, the user may set the value of the coefficient κ depending on how much priority is given to the difference in BPM when searching for a rhythm pattern.
- the rhythm pattern record in which the difference between the input BPM and the BPM exceeds a predetermined threshold may be excluded from the search result by the control unit 21. In this way, the user can obtain a search result considering BPM.
- the following may be used.
- Distance between rhythm patterns = (similarity distance calculated for the rhythm category in step Sb4 + deviation of the rhythm pattern calculated in step Sb6) + κ × (difference between the input BPM and the BPM of the rhythm pattern record) ... Equation (5-2)
- Here, κ is a predetermined constant satisfying 0 < κ < 1, as in Equation (5-1).
- κ is stored in the storage unit 22, and the user can change its value using the operation unit 25.
- In Equation (5-2), by setting κ to a very small value, for example, results are basically output in order of how closely the rhythm patterns match, and rhythm patterns that match equally well are displayed in order of the closest tempo.
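- The combinations described in Equations (4), (5-1), and (5-2) can be sketched as follows; the readings of Equations (5-1) and (5-2) given here, in particular the absolute BPM difference term, follow the interpretation above and are assumptions to the extent the original expressions are garbled.
```python
def distance_weighted(sim_distance, deviation, epsilon=0.5):
    """Equation (4): weighted sum of the rhythm-category similarity distance
    (step Sb4) and the rhythm pattern deviation (step Sb6), 0 < epsilon < 1."""
    return epsilon * sim_distance + (1 - epsilon) * deviation

def distance_bpm_scaled(sim_distance, deviation, input_bpm, record_bpm, kappa=0.1):
    """One reading of Equation (5-1): the BPM difference scales the distance."""
    return (sim_distance + deviation) * kappa * abs(input_bpm - record_bpm)

def distance_bpm_offset(sim_distance, deviation, input_bpm, record_bpm, kappa=1e-3):
    """One reading of Equation (5-2): with a very small kappa the rhythm match
    stays dominant and ties are broken in favour of the closer tempo."""
    return (sim_distance + deviation) + kappa * abs(input_bpm - record_bpm)
```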
- The calculation of the distance between rhythm patterns in step Sb7 is not limited to the above, and the following method may also be used.
- In this case, the control unit 21 multiplies the right side by the degree of matching between the timbre of the musical tone specified when the rhythm pattern was input and the timbre of the rhythm pattern record to be compared. A well-known method may be used to calculate the degree of timbre matching.
- Here, the smaller the matching value, the closer the two timbres are, and the larger the matching value, the farther apart they are. In this way, the user can easily obtain, as a search result, a rhythm pattern record whose timbre is close to the timbre heard when inputting the rhythm pattern, and can be more satisfied with the search result.
- Timbre data used for each part (specifically, a timbre program number, an MSB (Most Significant Bit), and an LSB (Least Significant Bit)) are described in advance in the style table in association with a timbre ID.
- the user designates timbre data using the operation unit 25 and inputs a rhythm pattern.
- the control unit 21 performs control so that style data corresponding to timbre data that matches the specified timbre data is easily output as a search candidate.
- a data table in which the degree of similarity for each timbre data is described in advance with the timbre ID as a unit is stored in the storage unit 22 in advance, and the control unit 21 stores the style data having the timbre ID of the timbre data with a high degree of similarity. You may control to search preferentially.
- Modification 18 The calculation of the distance between the rhythm patterns in step Sb7 is not limited to the above, and the following method may be used.
- the user can specify a genre using the operation unit 25 when inputting a rhythm pattern.
- the control unit 21 multiplies the right side by the degree of coincidence between the genre specified when inputting the rhythm pattern and the genre of the rhythm pattern record to be compared.
- the genres may be classified in stages, such as a large genre, a medium genre, and a small genre.
- The degree of coincidence between genres may be calculated, and the equation in step Sb7 corrected accordingly, so that the distance between the input rhythm pattern and a rhythm pattern record that matches or includes the specified genre becomes smaller, and the distance to a rhythm pattern record that does not match or include the specified genre becomes larger. In this way, the user can easily obtain, as a search result, rhythm pattern records that match or include the genre specified at the time of inputting the rhythm pattern, and can be more satisfied with the search result.
- the above is a description of variations of the method for calculating the distance between rhythm patterns.
- FIG. 24 is a diagram illustrating an example of an onset time interval table.
- the onset time interval table is stored in advance in the storage unit 22.
- the onset time interval table includes a combination of a name indicating the classification of a rhythm category and an onset time interval that is a target in each rhythm category.
- The contents of the onset time interval table are determined in advance on the assumption that one measure is equally divided into 48 parts.
- The control unit 21 calculates the onset time intervals from the onset times in the input rhythm pattern by the method described above and quantizes the results. Suppose, for example, that the following numerical value group (d) is obtained. (d) 12, 6, 6, 6, 6, 6
- According to the calculated numerical value group and the onset time interval table shown in FIG. 24, the control unit 21 identifies that there is one 4-minute onset time interval and five 8-minute onset time intervals.
- The control unit 21 then calculates the distance between the input rhythm pattern and each rhythm category according to the following formula.
- The formula is an example; any calculation may be used as long as a rhythm category containing more of the target onset time intervals yields a smaller distance from the input rhythm pattern.
- the control unit 21 calculates, for example, the distance between the input rhythm pattern and the 8-minute rhythm category as 0.166.
- the control unit 21 calculates, for example, the distance between the input rhythm pattern and the 4-minute rhythm category as 0.833. The control unit 21 calculates the distance between the input rhythm pattern and each rhythm category in this way, and determines that the input rhythm pattern belongs to the rhythm category calculated with the smallest distance.
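- The example values above (0.833 for the 4-minute category and 0.166 for the 8-minute category) are consistent with the distance being one minus the proportion of onset time intervals that are targets of the category; the following sketch assumes that reading, so the formula itself is an assumption.
```python
def category_distance(onset_intervals, target_intervals):
    """Distance between an input rhythm pattern and one rhythm category,
    assumed to be 1 - (matching intervals / all intervals).  For the intervals
    (d) = [12, 6, 6, 6, 6, 6] this gives about 0.833 for the 4-minute category
    (target interval 12) and about 0.167 for the 8-minute category (target 6)."""
    hits = sum(1 for i in onset_intervals if i in target_intervals)
    return 1.0 - hits / len(onset_intervals)

intervals_d = [12, 6, 6, 6, 6, 6]
print(round(category_distance(intervals_d, {12}), 3))  # 4-minute: 0.833
print(round(category_distance(intervals_d, {6}), 3))   # 8-minute: 0.167
```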
- the method for calculating the distance between the input rhythm pattern and the rhythm category is not limited to the above, and may be as follows.
- the storage unit 22 stores a distance reference table.
- FIG. 25 is a diagram illustrating an example of a distance reference table.
- the distance between the rhythm category to which the input rhythm pattern can belong and the rhythm category to which each rhythm pattern record stored in the automatic accompaniment DB 222 can belong is shown in a matrix.
- the control unit 21 determines that the rhythm category to which the input rhythm pattern belongs is 8 minutes.
- the control unit 21 specifies the distance between the input rhythm pattern and each rhythm category based on the rhythm category to which the input rhythm pattern that is the determination result belongs and the distance reference table. For example, in this case, the control unit 21 specifies the distance between the input rhythm pattern and the 4-minute rhythm category as “0.8”, and specifies the distance between the input rhythm pattern and the 8-minute rhythm category as “0”. To do. Thereby, the control unit 21 determines that the rhythm category of 8 minutes is the shortest distance from the input rhythm pattern.
- FIG. 26 is a diagram illustrating an example of an onset time table.
- the onset time table is stored in advance in the storage unit 22a.
- the onset time table includes a combination of a name indicating the classification of the rhythm category, an onset time that is a target in each rhythm category, and a score that is added when the target onset time is included in the input rhythm pattern.
- The contents of the onset time table are determined in advance on the assumption that one measure is divided into 48 equal parts and normalized.
- Suppose that the control unit 21 obtains the following onset times (e) from the input rhythm pattern. (e) 0, 12, 18, 24, 30, 36, 42
- the control unit 21 calculates the score of the input rhythm pattern for each rhythm category based on the onset time and the onset time table.
- In this case, the control unit 21 calculates "8" as the score for the 4-minute rhythm category, "10" as the score for the 8-minute rhythm category, "4" as the score for the 8-minute-triplet rhythm category, and "7" as the score for the 16-minute rhythm category.
- the control unit 21 determines that the rhythm category having the highest calculated score is the rhythm category having the smallest distance from the input rhythm pattern.
- the control unit 21 determines that the input rhythm pattern has the shortest distance from the rhythm category of 8 minutes. This completes the description of variations of the method for calculating the distance between the input rhythm pattern and the rhythm category.
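- A sketch of the score-based variation is shown below, assuming the onset time table maps each rhythm category to target onset times with an individual score per target; the table contents used in practice are stored in the storage unit and are not reproduced here, so any concrete values would be placeholders.
```python
def category_scores(input_onsets, onset_time_table):
    """onset_time_table maps a rhythm category name to {target_onset: score}.
    The score of the input rhythm pattern for a category is the sum of the
    scores of the targets that appear among the input onset times."""
    onset_set = set(input_onsets)
    return {category: sum(score for onset, score in targets.items() if onset in onset_set)
            for category, targets in onset_time_table.items()}

def closest_category(input_onsets, onset_time_table):
    """The category with the highest score is treated as the one with the
    smallest distance from the input rhythm pattern."""
    scores = category_scores(input_onsets, onset_time_table)
    return max(scores, key=scores.get)
```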
- the search may be performed by a pitch pattern input by the user specifying a part.
- In this modification, the item name "rhythm pattern ID" is changed to "pattern ID".
- an item “pitch pattern data” is added to each rhythm pattern record in the rhythm pattern table shown in FIG. 13A.
- the pitch pattern data is a data file in which changes along the time series of the pitches of the constituent sounds in a phrase constituting one measure are recorded.
- the pitch pattern data is, for example, a text file in which changes along the time series of pitches of constituent sounds are described.
- the onset information includes the note number of the keyboard in addition to the trigger data.
- the sequence of onset times in the trigger data corresponds to the input rhythm pattern
- the sequence of note numbers on the keyboard corresponds to the input pitch pattern.
- When the information processing apparatus 20 searches for a pitch pattern, any known method may be used. For example, when the user designates chord as the part and inputs the root pitch sequence "C-D-E", the control unit 21 of the information processing apparatus 20 represents the pitch progression of this sequence by the relative numerical values "0-2-4" and outputs, as search results, rhythm pattern records having pitch pattern data represented by "0-2-4".
- When the user designates phrase as the part and inputs the single-note pitch pattern "D-D-E-G", the control unit 21 generates MIDI information representing the input pitch pattern and outputs, as search results, the rhythm pattern records in the rhythm pattern table having pitch pattern data identical or similar to the generated MIDI information.
- Such a search using a pitch pattern and a search using a rhythm pattern may be switched by the user via the operation unit 25 of the information processing apparatus 20.
- Each rhythm pattern record in the rhythm pattern table of the modified example 23 includes “pitch pattern data” for each part together with “pattern ID” for each part.
- FIG. 27 is a schematic diagram showing the processing content of the search using the pitch pattern.
- the horizontal axis represents the elapsed time
- the vertical axis represents the pitch.
- the following processing is added to the search processing flowchart of FIG.
- Suppose that the user operates the keyboard in the bass input range keyboard 11a and inputs the pitch pattern "C-E-G-E" in a 4-minute rhythm.
- The input pitch pattern at this time is represented, for example, by the sequence of note numbers "60, 64, 67, 64".
- FIG. 27A shows the pitch pattern.
- In step Sb6, the rhythm pattern search unit 214 takes the rhythm pattern records whose part ID is "01 (bass)" as comparison targets and calculates the difference between the pitch pattern data included in each of those records and the input pitch pattern.
- Specifically, the control unit 21 obtains the variance of the pitch differences between the input pitch pattern and the pitch pattern represented by the pitch pattern data included in each rhythm pattern record whose part ID is "01 (bass)" (hereinafter referred to as the pitch pattern of the sound source). This is based on the idea that the smaller the variation in the pitch differences, the more similar the melody patterns can be considered to be. For example, assume that the input pitch pattern is represented by "60, 64, 67, 64" as described above and that the pitch pattern of a certain sound source is represented by "57, 60, 64, 60". FIG. 27(b) shows the input pitch pattern and the pitch pattern of the sound source side by side.
- The variance of the pitch differences can be obtained by Equation (8).
- step Sb7 the control unit 21 obtains the degree of similarity between the input rhythm pattern and the search result rhythm pattern when the pitch pattern is considered.
- Let S be the similarity between the input rhythm pattern and the rhythm pattern of the search result, and let V be the variance of the pitch differences.
- Then the degree of similarity Sp between the input rhythm pattern and the rhythm pattern of the search result, taking the pitch pattern into account, can be expressed by the following Equation (9) using a variable x and a constant y, where 0 < x < 1 and y > 0 in Equation (9).
- the control unit 21 determines which note of the pitch pattern of the sound source corresponds to each onset of the input pitch pattern according to the following procedure. (31) The control unit 21 calculates a difference in pitch from the note at the nearest onset time in the tone pattern of the sound source with reference to the onset time of each note in the input pitch pattern. (32) The control unit 21 calculates the difference in pitch from the note at the nearest onset time in the input pitch pattern with reference to the onset time of each note in the pitch pattern of the sound source.
- the control unit 21 calculates an average value of the difference calculated in the procedure (31) and the difference calculated in the procedure (32) as a pitch difference between the input pitch pattern and the tone pattern of the sound source. .
- the pitch difference may be calculated using only one of procedure (31) and procedure (32). Note that the calculation method of the degree of similarity between the input rhythm pattern and the search result rhythm pattern when the pitch pattern is considered is not limited to the above-described method, and other methods may be used.
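- The pitch comparison can be sketched as below, assuming each pattern is a list of (onset time, note number) pairs; the variance follows procedures (31) and (32) as described, while the combined similarity function is only one possible reading of Equation (9), inferred by analogy with Equation (13).
```python
import statistics

def pitch_difference_variance(input_pattern, source_pattern):
    """Variance of the pitch differences between nearest-onset notes, taken in
    both directions (procedures (31) and (32)).  For example, the input
    (60, 64, 67, 64) against the source (57, 60, 64, 60) at the same onsets
    yields differences of 3 and 4 only, hence a small variance."""
    def diffs(a, b):
        return [p - min(b, key=lambda q: abs(q[0] - t))[1] for t, p in a]
    all_diffs = diffs(input_pattern, source_pattern) + \
                [-d for d in diffs(source_pattern, input_pattern)]
    return statistics.pvariance(all_diffs)

def similarity_with_pitch(rhythm_similarity, variance, x=0.5, y=1.0):
    """One possible reading of Equation (9): Sp = (1 - x) * S + x * y * V,
    with 0 < x < 1 and y > 0; the exact form is an assumption here."""
    return (1 - x) * rhythm_similarity + x * y * variance
```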
- the control unit 21 calculates the difference of the pitch patterns in the 12 scales according to the following equations (10) and (11).
- the search result may be output based on the rhythm pattern similarity considering both the input pitch pattern itself and the pitch variation pattern in the 12th scale.
- the calculation formula in this case can be expressed as, for example, the following formula (13).
- Similarity taking into account both the input pitch pattern itself and the pitch variation pattern in the 12-tone scale = (1 - X) × (rhythm pattern similarity) + X·Y × {(1 - Z) × (pitch pattern similarity) + Z × (similarity of the 12-tone-scale pattern)} ... Equation (13)
- Here, X, Y, and Z are predetermined constants that satisfy 0 < X < 1, Y > 0, and 0 < Z < 1, respectively.
- The above formula is merely an example, and the present invention is not limited to it.
- In this way, not only rhythm patterns close to the one intended by the user but also pitch patterns close to the one intended by the user are output as search results, so the user can obtain, as search results, rhythm pattern records that have the same rhythm pattern as the input but different pitch patterns.
- the control unit 21 may search the rhythm DB 221 and the automatic accompaniment DB 222 using both trigger data and velocity data generated by a user's performance operation. In this case, when there are two rhythm pattern data having very similar rhythm patterns, the control unit 21 determines that the attack intensity of each component sound described in the attack intensity pattern data is based on velocity data obtained by the user's performance operation. Output near rhythm pattern data as search results. In this way, it is possible to output automatic accompaniment data that is close to what the user imagined for the attack intensity as a search result.
- The onset information may also include duration data representing the length for which the same sound continues to be sounded.
- the duration data for each component sound is represented by the length of time obtained by subtracting the previous onset time from the offset time.
- the duration data can be used particularly effectively because the information processing device 20 can clearly acquire the offset time when the input means in the rhythm input device 10 is a keyboard.
- an item “duration pattern data” is added to the phrase table and the rhythm pattern table.
- the duration pattern data is a data file in which the length of each constituent sound in a phrase constituting one measure is recorded. For example, a text file describes the length of each constituent sound.
- Using the duration pattern of one measure input by the user, the information processing apparatus 20 may output, as a search result, the phrase record or rhythm pattern record having the duration pattern data most similar to the input duration pattern from the phrase table or the rhythm pattern table.
- In this way, even among similar rhythm patterns, the information processing apparatus 20 can distinguish, for example, a slurred (sustained) rhythm pattern from a staccato (bouncing) rhythm pattern and output it as a search result.
- Each timbre to be used is associated with identification information of the timbre, and the user specifies a timbre in advance when inputting a rhythm pattern. The candidates are then narrowed down to rhythm patterns sounded with the corresponding timbre, after which rhythm patterns with a high degree of similarity are searched for.
- the second embodiment and the third embodiment will be described as examples. In this case, the item “tone color ID” is added to the rhythm pattern table.
- When the user inputs a rhythm pattern using a performance operator, the user designates a timbre using, for example, the operation unit 25.
- the designation of the timbre may be performed by an operator provided in the rhythm input device 10.
- the timbre ID specified when the performance operation is performed is input to the information processing apparatus 20 as part of the MIDI information.
- The information processing apparatus 20 compares the timbre of the sound based on the input timbre ID with the timbre of the sound based on the timbre ID in each rhythm pattern record of the specified part in the rhythm pattern table and, when the comparison result indicates that the two are in a predetermined correspondence relationship, identifies that rhythm pattern record as a search result.
- This correspondence is determined in advance so that it can be identified that the instrument types are the same based on the comparison result, and stored in the storage unit 22a.
- a known method such as comparing the spectrums of the waveforms of each sound may be used. In this way, in addition to the rhythm patterns being similar, the user can obtain automatic accompaniment data with similar timbres for the designated part.
- This can also be realized by a method similar to that described in Modification 17.
- (Modification 27) In the above embodiments, when the absolute value of the difference between the input time interval histogram and the sounding time interval histogram was the smallest, the similarity of the sounding time interval histogram to the input time interval histogram was judged to be high.
- However, the condition indicating similarity between the two histograms is not limited to the absolute value of their difference as described above. Any condition may be used as long as it indicates that the values of the corresponding time interval components of the two histograms are similar, for example that a degree of correlation such as the product of the time interval components of both histograms is the largest or exceeds a threshold, or that the square of the difference between the two histograms is the smallest or falls below a threshold.
- the information processing device 20 searches for musical sound data having a rhythm pattern similar to the rhythm pattern, converts the musical sound data of the search result into sound, and outputs it.
- In this case, a server device that provides a web service has the functions of the information processing apparatus 20 in the above embodiments.
- A client device, for example the user's own terminal such as a PC, transmits the input rhythm pattern to the server device via the Internet, a dedicated line, or the like.
- Based on the received input rhythm pattern, the server device searches its storage means for musical tone data having a rhythm pattern similar to the input rhythm pattern and transmits the musical tone data of the search result to the terminal; the terminal then outputs sound based on the received musical tone data.
- a bar line clock may be presented to the user on a website or application provided by the server device.
- The performance operator is not limited to a keyboard or drum pad shape and may have the shape of a stringed instrument, a wind instrument, or a button.
- the performance operator may be a tablet PC equipped with a touch panel, a smartphone, a mobile phone, or the like.
- the performance operator is a touch panel.
- a plurality of icons may be displayed on the screen.
- Since an image of a musical instrument or an instrument operator (such as a keyboard) is displayed on each icon, the user can tell which instrument or instrument operator the musical sound played by touching each icon will be based on.
- each area where each icon is displayed on the touch panel corresponds to each performance operator in the embodiment.
- In accordance with an operation performed by the user using the operation unit 25, the control unit 21 may reproduce the musical sound indicated by the musical sound data included in the rhythm pattern record at its original BPM.
- Alternatively, immediately after the rhythm pattern record has been identified, the control unit 21 may reproduce the musical sound indicated by the musical sound data included in the rhythm pattern record at a speed based on the input BPM or a specified BPM, and then perform control so that the BPM gradually approaches the original BPM of the rhythm pattern record over time.
- the method for making the user feel more convinced with the search result is not limited to the filtering function described above.
- <Weighting similarity by the BPM difference> For convenience of explanation, the second embodiment and the third embodiment are taken as examples here.
- Weighting based on the difference between the input BPM and the original BPM of a rhythm pattern record may be introduced into the calculation formula for obtaining the distance between the input rhythm pattern and each rhythm pattern record included in the rhythm pattern table.
- A calculation formula for the similarity with this weighting introduced is, for example, Equation (10) below, where L is the distance between the input rhythm pattern and the rhythm pattern record before weighting.
- Similarity = L + (difference between the input BPM and the BPM of the rhythm pattern record) ... Equation (10)
- the user may specify a specific target by pull-down to narrow down the display result.
- Alternatively, a method may be used in which the performance information at the time of rhythm pattern input is automatically analyzed and the displayed results are automatically narrowed down.
- For example, the chord type and scale may be determined from the pitch information of the rhythm input from the keyboard or the like, and accompaniments registered with that chord type and scale may be automatically displayed as search results: if a rhythm is input with a rock-like chord, rock styles become easier to find, and if a rhythm is input on a Middle Eastern scale, Middle Eastern phrases become easier to find.
- Similarly, based on the timbre information designated at the time of keyboard input, accompaniments having the same timbre information and the same rhythm pattern may be displayed preferentially; for example, when a rhythm is input with a snare drum rim shot, accompaniments with a rim-shot timbre are displayed preferentially among the candidates having the same rhythm pattern.
- the rhythm input device 10 when the rhythm input device 10 is not provided with the input pad 12, the rhythm input device 10 may take the following structures.
- the keyboard 11 is assigned a bass input range keyboard 11a, a chord input range keyboard 11b, and a phrase input range keyboard 11c, respectively.
- the control unit 21 assigns the drum part to a predetermined key range of the keyboard 11. For example, the control unit 21 assigns a bass drum part to C3, a snare drum part to D3, a hi-hat part to E3, and a cymbal part to F3.
- control unit 21 can assign different instrument sounds to the operators (that is, the keys) in the entire key range of the keyboard 11.
- the control unit 21 may display an image (for example, a snare drum image) relating to the assigned instrument sound on the upper or lower portion of each operation element (each key) on the keyboard 11.
- Alternatively, the control unit 21 may display, above or below each operator (each key) on the keyboard 11, an image relating to the assigned part (for example, an image of a guitar chord being pressed, an image of a piano being played with a single finger, or a snare drum image).
- control unit 21 may display the above-described image on the display unit 24 in addition to displaying the above image on the upper and lower portions of each operation element.
- For example, a keyboard image simulating the keyboard 11 is displayed on the display unit 24, and each key range of the keyboard image is shown with the same part assignment as the corresponding key range of the actual keyboard 11, together with the image of the part assigned to that range.
- When the user operates an operator to which the bass part is assigned, for example, the control unit 21 causes the audio output unit 26 to output a bass sound. In this way, the user can determine visually and audibly which part is searched by operating which operator, so operation input becomes easier and, as a result, the accompaniment sound source the user desires is easier to obtain.
- <The order of the search calculation processing can be changed> (Modification 34)
- In the above embodiments, the distribution of onset time intervals in the input rhythm pattern was calculated in step Sb3 after step Sb1, but the position of step Sb3 in the processing order may be changed.
- the control unit 21 may store the calculation result in the RAM or the storage unit 22 after calculating the distribution of the onset time interval for each rhythm category regardless of the exchange of the processing steps. In this way, the control unit 21 does not need to recalculate the result of the calculation once, and can improve the processing speed.
- Consider the case where the user operates a plurality of operators within a predetermined time, for example when the user presses several keys of the bass input range keyboard 11a so as to form a chord.
- In storing such a rhythm pattern there is the following problem. Suppose, for example, that the user intends to input a rhythm at the timing "0.25" within one measure.
- If one operator is actually operated at the onset time "0.25" and another operator is operated at the onset time "0.26", the control unit 21 stores the input rhythm pattern exactly according to these onset times.
- the control unit 21 applies to a plurality of operators in the same part at the same timing based on the onset information input from the rhythm input device 10 and the part table included in the automatic accompaniment DB 211. It is determined whether or not an operation has been performed. For example, when the difference between the onset time of a certain operator and the onset time of another operator among the operators included in the bass input range keyboard 11a falls within a predetermined time, the control unit 21 It is determined that the operator is operated at the same timing.
- the predetermined time is, for example, 50 msec (milliseconds).
- In this case, information indicating that the operations on these operators are regarded as operations at the same timing is associated with the trigger data having the corresponding onset times and output to the control unit 21.
- the control unit 21 uses the input rhythm pattern that excludes the trigger data having the onset time of the later start time from the trigger data associated with information indicating that the operations are performed at the same timing. Perform pattern search. That is, in this case, the onset time with the earlier start time among the onset times based on the user's operation within a predetermined time is used for the rhythm pattern search.
- the present invention is not limited to this, and the onset time with the later start time among the onset times based on the user's operation within a predetermined time may be used for the rhythm pattern search. That is, the control unit 21 may perform a rhythm pattern search using any of a plurality of onset times based on a user operation within a predetermined time. In addition, the control unit 21 obtains an average of a plurality of onset times based on the user's operation within a predetermined time, and performs a rhythm pattern search using this average value as the onset time in the operation within the user's predetermined time. You may do it. In this way, even when the user performs rhythm input using a plurality of operators within a predetermined time, a search result close to what the user intended can be output. It becomes possible.
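- The grouping of near-simultaneous operations can be sketched as follows, treating onset times whose mutual difference is within 50 ms as one operation and keeping, by default, the earliest of them as described above; representing the trigger data as plain millisecond timestamps is an assumption.
```python
def merge_simultaneous_onsets(onset_times_ms, window_ms=50, keep="earliest"):
    """Group onset times that fall within window_ms of the first member of the
    group (e.g. the keys of a chord pressed almost together) and keep one
    representative per group: the earliest, the latest, or the average."""
    groups, current = [], []
    for t in sorted(onset_times_ms):
        if current and t - current[0] > window_ms:
            groups.append(current)
            current = []
        current.append(t)
    if current:
        groups.append(current)
    pick = {"earliest": min, "latest": max,
            "average": lambda g: sum(g) / len(g)}[keep]
    return [pick(g) for g in groups]

# Example: onsets at 250 ms and 260 ms are treated as a single onset at 250 ms.
print(merge_simultaneous_onsets([250, 260, 500]))  # [250, 500]
```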
- the control unit 21 when the control unit 21 stores the input rhythm pattern in the RAM, from the time point that is several tens of msec earlier than the head of the measure (that is, the last several tens of msec in the immediately preceding measure), The range up to the end of the bar other than several tens of msec may be set as the processing target range. That is, the control unit 21 shifts the target range of the input rhythm pattern stored in the RAM forward by several tens of msec. In this way, it is possible to reduce the output of search results different from those intended by the user.
- the search method according to the present invention can be applied to a musical sound data processing apparatus having a playback function in which musical sound data as a search result is reproduced in synchronization with a bar line clock at a measure immediately after rhythm input. is there.
- In order for the musical sound data of the search result to be reproduced from the beginning of the measure immediately after the rhythm input, the search result must be output before that measure starts, that is, within the measure in which the rhythm input was performed.
- In other words, within the measure in which the rhythm is input, the musical sound data of the search result must be read out and stored in the RAM.
- the timing at which the control unit 21 performs the rhythm pattern search may be, for example, several tens of msec earlier than the measure switching timing. In this way, the search is performed before the bars are switched, and the music data of the search result is stored in the RAM, so that the music data of the search result is reproduced from the head of the bar immediately after the rhythm input. It is possible to
- The input rhythm pattern is not limited to one measure; the configuration may be as follows so that a rhythm pattern extending over a plurality of measures (N measures) can be searched.
- the second embodiment and the third embodiment will be described as examples here.
- the control unit 21 searches the rhythm pattern table using an input rhythm pattern having a group of N bars.
- In that case, however, the search result is output only after N measures have been input, so it takes a long time until the search result is output.
- the following may be performed.
- FIG. 28 is a schematic diagram showing the processing contents when searching for a rhythm pattern of a plurality of measures.
- the rhythm pattern table in the automatic accompaniment DB 222 includes a rhythm pattern record having rhythm pattern data extending over N measures.
- the user uses the operation unit 25 to specify the number of measures of the rhythm pattern to be searched. This designated content is displayed on the display unit 24, for example.
- the control unit 21 first stores the input rhythm pattern of the first measure.
- the control part 21 searches a rhythm pattern based on the input rhythm pattern of the 1st measure.
- the search procedure is as follows.
- The control unit 21 takes rhythm pattern records having rhythm pattern data extending over two measures as targets, and calculates the distance between the input rhythm pattern of the first measure and the rhythm pattern of the first measure of each rhythm pattern data, and the distance between the input rhythm pattern of the first measure and the rhythm pattern of the second measure.
- For each rhythm pattern data, the control unit 21 stores in the RAM the smaller of these two distances.
- the control unit 21 performs the same process for the input rhythm pattern of the second measure.
- control part 21 will add the said distance memorize
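The multi-measure matching just described can be summarized in code. The sketch below is only an illustration of the mechanism (per-measure minimum distance, accumulated across measures), not the implementation of the embodiment; the function and variable names, and the externally supplied `distance()` helper standing in for the one-measure rhythm pattern distance described elsewhere in this document, are assumptions.

```python
# Illustrative sketch of the N-measure search described above.
# `distance(a, b)` is assumed to return the one-measure rhythm pattern distance.

def search_multi_measure(input_measures, records, distance):
    """input_measures: list of N one-measure input rhythm patterns.
    records: list of dicts such as {"id": ..., "measures": [pattern, ...]}.
    Returns record ids ordered from most to least similar."""
    totals = {rec["id"]: 0.0 for rec in records}
    for input_pattern in input_measures:
        for rec in records:
            # Compare this input measure with every measure of the record and
            # keep only the smallest distance, as described in the text.
            best = min(distance(input_pattern, m) for m in rec["measures"])
            totals[rec["id"]] += best
    return sorted(totals, key=totals.get)
```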
- Expression (11) is a calculation expression for obtaining the nth input onset time in the input rhythm pattern.
- L represents the end of a measure when the start of a measure is 0, and is a real number of 0 or more.
- N represents a resolution that is the number of clocks in one measure.
- In Expression (11), the value "0.5" has the effect of rounding the fraction to the nearest integer when the onset time is calculated, and it may be replaced with another value between 0 and 1. For example, if this value is "0.2", fractions below 0.8 are rounded down and fractions of 0.8 or more are rounded up. This value is stored as a parameter in the storage unit 22 and can be changed by the user using the operation unit 25.
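Expression (11), referred to above, normalizes each raw onset time onto the measure's clock grid, with the added constant controlling how fractions are rounded. The following is a minimal sketch of that reading, assuming N = 48 clocks per measure and L as the normalized measure length; the function name and the example values are illustrative.

```python
import math

def normalized_onset(onset, bar_start, bar_end, N=48, L=1.0, rounding=0.5):
    """Expression (11): map a raw onset time onto the bar's clock grid.
    rounding=0.5 rounds to the nearest clock; rounding=0.2 rounds a fraction
    up only when it is 0.8 or more."""
    position = (onset - bar_start) / (bar_end - bar_start)   # 0..1 within the bar
    clock = math.floor(position * N + rounding)               # snap to the grid
    return clock * L / N

# Example: an onset 52% of the way through the bar snaps to clock 25 of 48.
print(normalized_onset(0.52, 0.0, 1.0))   # -> 0.5208...
```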
- The phrase data and rhythm pattern data may be created in advance by an operator extracting the sound start times of the constituent sounds from commercially available audio loop material.
- In such material, the sound of a backing guitar, for example, may be intentionally shifted slightly from the exact (just) timing in order to add auditory depth.
- In that case, by rounding the onset times up or down as described above, phrase data and rhythm pattern data in which such deviations have been removed are obtained.
- Because the above-described deviations are thus eliminated, the user can perform a search by inputting a rhythm pattern at the desired just timing without having to be conscious of the deviations.
- The present invention may also be realized by a device in which the rhythm input device 10 and the information processing device 20 are integrated.
- As such a device, for example, a mobile phone or a mobile communication terminal equipped with a touch screen can be considered.
- In this modification, the case where this device is a mobile communication terminal provided with a touch screen is described as an example.
- FIG. 29 is a diagram illustrating the mobile communication terminal 600 in the present modification.
- the mobile communication terminal 600 has a touch screen 610 on the surface thereof.
- the user can perform an operation on the mobile communication terminal 600 by touching an arbitrary position on the touch screen 610, and display content corresponding to the user operation is displayed on the touch screen 610.
- The hardware configuration of the mobile communication terminal 600 is the same as that shown in FIG., except that the combined functions of the display unit 24 and the operation unit 25 are realized by the touch screen 610 and that the rhythm input device 10 and the information processing device 20 are integrated in the mobile communication terminal 600. In the following, the control unit, the storage unit, and the other components are referred to in the same manner as in the embodiments described above.
- A BPM designation slider 201, a key designation keyboard 202, and a chord designation box 203 are displayed on the upper part of the touch screen 610.
- The BPM designation slider 201, the key designation keyboard 202, and the chord designation box 203 are the same as those described with reference to FIG.
- The control unit 21 causes the touch screen 610 to display, as a list, the rhythm pattern records obtained as search results for the designated part.
- the item “rank”, the item “file name”, the item “similarity”, the item “BPM”, and the item “key” in the search result are the same as those described with reference to FIG.
- related information such as “genre” and “instrument type” may be displayed.
- When the user designates one of the reproduction instruction images 630, the musical tone data of the corresponding rhythm pattern record is reproduced.
- Such a mobile communication terminal 600 also provides the same effects as described above in the second and third embodiments.
- (Modification 41) In addition to the musical sound data processing apparatus, the present invention can also be understood as a method for realizing the functions described above and as a program for causing a computer to realize the functions shown in FIGS. 4 and 14.
- Such a program may be provided in the form of a recording medium, such as an optical disk, storing the program, or may be provided by being downloaded to a computer via the Internet or the like and installed for use.
- (Modification 42) Regarding the search mode, in addition to the three modes of the above-described embodiments, namely the automatic accompaniment mode, the replacement search mode, and the follow-up search mode, the following mode switching can be considered.
- The first is a mode in which the search process always runs automatically for each measure, and the single most similar search result, or a predetermined number of similar search results, is automatically reproduced. This mode is mainly used for automatic accompaniment.
- The second is a mode in which only a metronome is reproduced when the user instructs the start of a search; when the user then inputs a rhythm, the search results are displayed after the rhythm input ends, either automatically or in response to an operation instruction.
- FIG. 30A and FIG. 30B are schematic views showing a list of search results for accompaniment sound sources as a modification of the first embodiment.
- the search result list for the accompaniment sound source includes “file name”, “similarity”, “key”, “genre”, and “BPM (Beat Per Minute) ”.
- “File name” is a name for uniquely identifying an accompaniment sound source.
- "Similarity" is a numerical value indicating how similar the rhythm pattern of the accompaniment sound source is to the input rhythm pattern; the lower the value, the higher the degree of similarity (that is, the shorter the distance between the rhythm patterns described above).
- "Key" represents the key (pitch) of the accompaniment sound source.
- “Genre” represents the genre (for example, rock, Latin, etc.) of music to which the accompaniment sound source belongs.
- “BPM” is the number of beats per minute and represents the tempo of the accompaniment sound source.
- FIG. 30A shows an example in which accompaniment sound sources whose rhythm patterns have at least a certain degree of similarity to the rhythm pattern input by the user are listed as search results, in order from the highest degree of similarity.
- the user can filter and display the search result using items such as “key”, “genre”, or “BPM”.
- FIG. 30B shows the result of the user filtering the search results of FIG. 30A by "Latin" for the item "genre". Note that the user can perform filtering using one item or a plurality of items.
- a search may be made for a specific track.
- The determination of the rhythm category (steps Sb2 to Sb5) may be omitted, and the calculation of the distance between rhythm patterns in step Sb7 may be performed using only the result of the rhythm pattern shift calculation (step Sb6).
- In the shift calculation, the calculated shift value may be multiplied by the attack intensity of the corresponding constituent sound, so that rhythm pattern records containing constituent sounds with stronger attack intensities at deviating positions are more easily removed from the search result candidates.
- In the above description, one-measure automatic accompaniment data is used, but the length of the sound is not limited to one measure.
- The user may also be able to specify a part using the operation unit 25 rather than via the performance operators. In this case, when the user operates a performance operator after designating a part, the input is treated as input for the designated part. For example, when the user designates the "bass" part using the operation unit 25, the control unit 21 regards subsequent operations as input for the "bass" part even if the chord input range keyboard 11b is operated.
- In the above description, rhythm parts with different timbres are input using dedicated pads, such as the bass drum input pad 12a, the snare drum input pad 12b, the hi-hat input pad 12c, and the cymbal input pad 12d.
- However, the present invention is not limited to this, and a plurality of rhythm parts having different timbres may be input using a single pad. In this case, the user can specify the timbre of the rhythm part using the operation unit 25.
- the rhythm pattern data is represented by a decimal value from 0 to 1, but may be represented by a plurality of integer values (for example, 0 to 96).
- The detection may also be performed under other conditions.
- For example, tone data whose similarity falls within a predetermined range may be detected, or the range may be set by the user and tone data whose similarity falls within that range may be detected.
- A function for editing musical tone data, automatic accompaniment data, style data, and the like may also be provided. In that case, the desired musical tone data, automatic accompaniment data, or style data is selected on the screen displaying the detection results, the selected data is expanded and displayed for each part, and various data such as the desired musical tone data, automatic accompaniment data, and performance data may then be edited for each part.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Electrophonic Musical Instruments (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
前記検索部が特定する楽音データは、前記入力リズムパターンと前記算出した類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データであることを特徴とする。
<第1実施形態>
(楽音データの検索システム)
<構成>
図1は、本発明の第1実施形態に係るシステムの構成図である。この音楽データ作成システム100は、リズム入力装置10、及び情報処理装置20を備えており、各々が通信線により接続されて相互に通信ができるようになっている。この通信は、無線によって実現されてもよい。リズム入力装置10は、入力手段として例えば電子パッドを備えている。ユーザが、リズム入力装置10に設けられた電子パッドの打面を叩くことにより、リズム入力装置10は、打撃されたこと、すなわち演奏操作されたことを示すトリガーデータと、打撃の強度、すなわち当該演奏操作の強度を示すベロシティデータとを、1小節を単位として、情報処理装置20に入力する。ここで、トリガーデータは、ユーザが電子パッドの打面を叩く毎に生成されるとともに、トリガーデータの各々にはベロシティデータが対応付けられている。1小節内に生成されたトリガーデータとベロシティデータの集合は、この1小節において、ユーザがリズム入力装置10を用いて入力したリズムのパターン(入力リズムパターンという)を表している。このように、リズム入力装置10は、ユーザによる演奏操作が入力される入力装置の一例である。
次に、図5~図7を用いて、検索機能がONの状態において、リズムパターン検索部213が、入力リズムパターンに基づいて、フレーズテーブルから特定のフレーズレコードを検出する一連の処理について説明を行う。
図5はリズムパターン検索部213が行う検索処理のフロー図である。初めに、リズムパターン検索部213は、RAMに記憶されている楽器種類IDを用いて、フレーズテーブルを検索する(ステップSb1)。この楽器種類IDはユーザが操作部25を用いて予め指定することでRAMに記憶されたものである。以降の処理において、リズムパターン検索部213は、このステップS1における検索結果のフレーズレコードを処理の対象とする。
(a)0,0.25,0.375,0.5,0.625,0.75,0.875
リズムパターン検索部213は、(a)の入力リズムパターンから、以下の(b)で表されるオンセット時刻間隔を算出する。
(b)0.25,0.125,0.125,0.125,0.125,0.125
次にリズムパターン検索部213は、(b)で算出された各々のオンセット時刻間隔に48を乗算し、さらに0.5を加算した数値の小数点以下を切り捨てる処理(クオンタイズという)を行うことで、以下の(c)で表す数値群を算出する。
(c)12,6,6,6,6,6
ここでクオンタイズとは、リズムパターン検索部213が、各オンセット時刻間隔を分解能にあわせて補正することを意味している。クオンタイズを行う理由は、以下のとおりである。フレーズテーブルにおけるリズムパターンデータに記述された発音開始時刻は、分解能(ここでは48)に従ったものとなっている。よって、オンセット時刻間隔を用いてフレーズテーブルを検索する際には、検索に用いるオンセット時刻間隔も分解能に従ったものでないと、検索の精度が低くなってしまう。このような理由により、リズムパターン検索部213は、(b)で表される各々のオンセット時刻間隔に対して、上述したクオンタイズの処理を施す。
図6(a)は入力リズムパターンにおけるオンセット時刻間隔の分布表である。図6(a)において、横軸は、1小節を48の時刻で区切ったときの時刻の間隔(時刻間隔という)を表し、縦軸は、横軸で表された各々の時刻間隔に相当する、クオンタイズされたオンセット時刻間隔の個数の比率を表す。図6(a)では、入力リズムパターンに基づく、上述した(c)の数値群が分布表に割り当てられている。この個数比率は、合計が1となるようにリズムパターン検索部213によって正規化されている。図6(a)では、クオンタイズされたオンセット時刻間隔である(c)の数値群において、最も個数が多い「6」の時刻間隔に分布のピークがあることが分かる。
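The quantization steps (a) through (c) and the normalized interval histogram of FIG. 6(a) described above can be sketched as follows. This is a minimal illustration rather than the code of the embodiment; the resolution of 48 clocks per measure is taken from the text, and the function names are assumptions.

```python
from collections import Counter

RESOLUTION = 48  # clocks per measure, as in the text

def quantized_intervals(onsets):
    """(a) normalized onset times -> (b) intervals -> (c) quantized intervals."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]   # (b)
    return [int(i * RESOLUTION + 0.5) for i in intervals]     # (c): add 0.5, drop fraction

def interval_histogram(onsets):
    """Normalized frequency of each quantized interval (cf. FIG. 6(a))."""
    counts = Counter(quantized_intervals(onsets))
    total = sum(counts.values())
    return {interval: n / total for interval, n in counts.items()}

onsets = [0, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875]   # example (a)
print(quantized_intervals(onsets))                    # -> [12, 6, 6, 6, 6, 6]
print(interval_histogram(onsets))                     # distribution peaks at interval 6
```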
・ 8分カテゴリ
(A)0,0.25,0.375,0.5,0.625,0.75,0.875
(B)0,0.121,0.252,0.37,0.51,0.625,0.749,0.876
・ 16分カテゴリ
(C)0,0.125,0.1875,0.251,0.374,0.4325,0.5,0.625,0.6875,0.75,0.876,0.9325
(D)0,0.0625,0.125,0.1875,0.251,0.3125,0.375,0.4325,0.5,0.5625,0.625,0.6875,0.75,0.8125,0.875,0.9325
・ 8分3連カテゴリ
(E)0,0.0833,0.1666,0.25,0.3333,0.4166,0.5,0.5833,0.6666,0.75,0.8333,0.91666
(F)0,0.1666,0.25,0.3333,0.4166,0.5,0.6666,0.75,0.8333,0.91666
リズムパターン検索部213は、上記(A)~(F)に対してステップSb2と同様の計算方法を用いて、リズムカテゴリ毎にオンセット時刻間隔の分布を求める。図6(b)は、リズムカテゴリ毎に計算したオンセット時刻間隔の分布を分布表に割り当てたものである。なお、検索機能がONの状態において一連の検索処理が繰り返されるにあたり、2回目以降のステップS1において楽器種類が変更されない場合は、処理の対象となるフレーズレコード及びリズムカテゴリは変更されることがなく同一であるため、ステップSb3の処理は省略される。逆に、一連の検索処理が繰り返されるにあたって、ステップS1で楽器種類が変更された場合、ステップSb3の処理が行われる。
(1)リズムパターン検索部213は、入力リズムパターンJにおける各オンセット時刻を基準として、リズムパターンKにおける最も近いオンセット時刻との時刻差の絶対値を算出する(図7における(1))。
(2)リズムパターン検索部213は、手順(1)で算出した各絶対値の総和を算出する。
(3)リズムパターン検索部213は、リズムパターンKにおける各オンセット時刻を基準として、入力リズムパターンJにおける最も近いオンセット時刻との時刻差の絶対値を算出する(図7における(3))。
(4)リズムパターン検索部213は、手順(3)で算出した各絶対値の総和を算出する。
(5)リズムパターン検索部213は、手順(2)で算出した総和と手順(4)で算出した総和との平均値を、入力リズムパターンJとリズムパターンKとのズレとして算出する。
JとKのリズムパターンの距離=(Jと、Kが所属するリズムカテゴリとの類似度距離)×(JとKのリズムパターンのズレ)
ただしこの計算に際して、基本的には、ステップSb5で入力リズムパターンが該当すると判定されたリズムカテゴリ内から、検索結果が出力されるように、以下のような処理が行われる。リズムパターン検索部213は、ステップSb5で判定されたリズムカテゴリとBにおけるリズムカテゴリとが一致するかを判定し、一致しない場合には、上記計算式において予め定められた定数(例えば0.5)を加算する。このようにすれば、ステップSb5で判定されたリズムカテゴリと一致しないリズムカテゴリに属するフレーズレコードについては、リズムパターン同士の距離が大きく算出されるため、ステップSb5で入力リズムパターンが該当すると判定されたリズムカテゴリ内から、検索結果が出力されやすくなる。そして、リズムパターン検索部213は、入力リズムパターンとの距離が最も小さいリズムパターンを、入力リズムパターンと類似の度合いが高いことを示す条件を満たすリズムパターンであるとみなし、このリズムパターンデータを持つフレーズレコードを、検索結果として出力する(ステップSb8)。以上が、検索機能がONの状態において、リズムパターン検索部213が、入力リズムパターンに基づいて、フレーズテーブルから特定のフレーズレコードを検索結果として出力する一連の処理についての説明である。
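Steps (1) through (5) above and the distance formula that follows them can be sketched in code. This is a minimal illustration; both patterns are assumed to be non-empty lists of normalized onset times, the penalty constant of 0.5 for mismatched rhythm categories is the example value given in the text, and the function names are assumptions.

```python
def pattern_shift(J, K):
    """Symmetric nearest-onset deviation between two onset-time lists (steps (1)-(5))."""
    def one_way(a, b):
        return sum(min(abs(t - u) for u in b) for t in a)   # steps (1) / (3)
    return (one_way(J, K) + one_way(K, J)) / 2.0             # steps (2), (4), (5)

def rhythm_pattern_distance(J, K, category_distance,
                            input_category, record_category, penalty=0.5):
    """Distance = (category similarity distance) x (pattern shift); a constant is
    added to the category distance when the rhythm categories differ."""
    d = category_distance
    if input_category != record_category:
        d += penalty          # push records of other categories down the ranking
    return d * pattern_shift(J, K)
```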
<第2実施形態>
(音楽データ作成システム)
<構成>
第2実施形態は、音楽データ処理システムの一例としての音楽データ作成システムであり、音楽データの一例として自動伴奏データを作成するシステムである。本実施形態における自動伴奏データは、電子楽器やシーケンサなどに読み込まれ、いわゆるMIDIの自動伴奏データと同様の役割を果たす。第2実施形態における音楽データ作成システム100aは、図1に示すものと同様の構成を有する。ただし、リズム入力装置及び情報処理装置のそれぞれの構成が第1実施形態と異なるため、符号に「a」を付してそれぞれを表す。この音楽データ作成システム100aは、リズム入力装置10a、及び情報処理装置20aを備えており、各々が通信線により接続されて相互に通信ができるようになっている。この通信は、無線によって実現されてもよい。第2実施形態において、リズム入力装置10aは、入力手段として例えば鍵盤とパッドを備えている。ユーザが、リズム入力装置10aに設けられた鍵盤を押鍵することにより、リズム入力装置10aは、押鍵されたこと、すなわち演奏操作がなされたことを示すトリガーデータと、押鍵の強度、すなわち当該演奏操作の強度を示すベロシティデータとを、1小節を単位として、情報処理装置20aに入力する。ここで、トリガーデータは、ユーザが鍵盤を押鍵する毎に生成されるものであって、押鍵されたことを示すキーオン情報で表される。トリガーデータの各々にはベロシティデータが対応付けられている。1小節内に生成されたトリガーデータとベロシティデータの集合は、この1小節において、ユーザがリズム入力装置10aを用いて入力したリズムのパターン(入力リズムパターンという)を表している。ユーザは、この入力リズムパターンを鍵盤の鍵域に対応したパートについて入力することになる。また、打楽器を示すパートについては、ユーザは、パッドを使って入力リズムパターンを入力する。このように、リズム入力装置10aは、ユーザによる演奏操作が入力される入力手段の一例である。
制御部21は、ROMや記憶部22aに記憶されたアプリケーションを構成する各プログラムをRAMに読み出して実行することにより、テンポ取得部211a、進行部212a、通知部213a、パート選択部214a、パターン取得部215a、検索部216a、特定部217a、出力部218a、コード受付部219a及び音高受付部220aの各機能を実現する。以降において、これら各部を主体として処理の説明を行うことがあるが、処理の主体の実体は、制御部21である。なお、以降の説明において「オンセット」とは、リズム入力装置10aにおける入力状態がオフからオンに切り替わることを指す。例えば、「オンセット」とは、リズム入力装置10aの入力手段が鍵盤であれば鍵盤が押鍵された状態のことであり、リズム入力装置10aの入力手段がパッドであればパッドを叩かれた状態のことであり、リズム入力装置10aの入力手段がボタンであればボタンが押下された状態のことである。これに対して「オフセット」とは、リズム入力装置10aの入力手段が鍵盤であれば鍵盤が押鍵されてから離された状態のことであり、リズム入力装置10aの入力手段がパッドであればパッドに対する打撃が完了した状態のことであり、リズム入力装置10aの入力手段がボタンであれば押下されたボタンから指が離された状態のことである。また、以降の説明において「オンセット時刻」とは、リズム入力装置10aにおける入力状態がオフからオンに切り替わった時の各々の時刻を表す。換言すれば、「オンセット時刻」とは、リズム入力装置10aにおいてトリガーデータが発生した時刻を表す。これに対して、「オフセット時刻」とは、リズム入力装置10aにおける入力状態がオンからオフに切り替わった時の各々の時刻を表す。換言すれば、「オフセット時刻」とは、リズム入力装置10aにおいて発生したトリガーデータが消滅した時刻を表す。また、以降の説明において「オンセット情報」とは、オンセット時刻においてリズム入力装置10aから情報処理装置20aへ入力される情報である。「オンセット情報」には、上述したトリガーデータ以外に、鍵盤のノートナンバーやチャンネル情報等が含まれている。
図15は、情報処理装置20aが行う処理のフロー図である。まず、ユーザが、リズム入力装置10aの図示しない操作子で自動伴奏データ作成を指示すると、本フローのプログラムが実行される。情報処理装置20aは、ユーザの指示に基づいて、プログラムの実行開始後、初期設定を行う。(ステップSa0)。初期設定では、ユーザは、操作部25を用いて各鍵域それぞれに対応する楽器種類および入力パッドに対応する楽器種類を指定し、BPM入力操作子13を用いてBPMを入力する。また、制御部21は、図12、図13A及び図13Bに示した各種テーブルをRAMに読み出す。初期設定後、ユーザは、リズム入力装置10aを用いて、鍵盤11における所定の鍵域又は入力パッド12a~12dにおけるいずれか、すなわちパートを指定してリズムパターンを入力する。リズム入力装置10aは、指定されたパートを識別する情報と、指定された楽器種類を識別する情報と、入力BPMを識別する情報と、入力リズムパターンとを含むMIDI情報を情報処理装置20aに送信する。制御部21は、入出力インターフェース部23を用いてリズム入力装置10aからMIDI情報を受信すると、図15に示すフローに沿って処理を実行する。
<第3実施形態>
(スタイルデータの検索システム)
<構成>
第3実施形態では、音楽データ処理システムの例としてのスタイルデータを検索するシステムである。第2実施形態の音楽データ作成システム100aにおいて、自動伴奏DB222が、スタイルデータを記憶するとともに、スタイルデータを検索するためのスタイルテーブルを有している。その他の構成については、第3実施形態と第2実施形態は同様である。
スタイルデータは、「Bebop01」、「HardRock01」又は「Salsa01」といったそれぞれ異なるスタイルについて集めた伴奏音データ片を、伴奏パターンの最小単位である「セクション」(1~数小節)毎に、それぞれセクションデータとして纏めた伴奏音データ片の集合であり、記憶部22に記憶される。セクションには複数の種類があり、例えば、「イントロ」、「メイン」、「フィルイン」又は「エンディング」といった構成上の種類と、さらに各セクションにおける「ノーマル」、「バリエーション1」又は「バリエーション2」といったパターンの種類とが存在する。また、セクション毎のスタイルデータには、バスドラム、スネアドラム、ハイハット、シンバル、フレーズ、コード及びベースのそれぞれのパートについて、MIDIフォーマットに従って記述された演奏データの識別子(リズムパターンID)が含まれている。それぞれのスタイルデータの各セクションについて、制御部21により演奏データからパート毎にリズムパターンが解析され、解析結果に従った内容が、スタイルテーブルに登録される。例えば、ベースパートのデータについては、制御部21は、基準となる音高を用いて、演奏データにおける音高の時系列に沿った並びを解析し、解析結果に従った内容をスタイルテーブルに登録する。また、コードパートのデータについては、制御部21は、基準となる和音を用いて、演奏データにおいて用いられているコードを解析し、解析結果に従った内容として「Cmaj7」などのコード情報を、後述する和音進行情報テーブルに登録する。
図19Aは、スタイルテーブルの一例を表した図である。図19Aにおいては、スタイルテーブルの一例として、「ジャンル」が「Swing&JAZZ」である複数のスタイルデータが表されている。1件のスタイルデータは、「スタイルID」、「スタイル名」、「セクション」、「キー」、「ジャンル」、「BPM」、「拍子」、「ベースリズムパターンID」、「コードリズムパターンID」、「フレーズリズムパターンID」、「バスドラムリズムパターンID」、「スネアドラムリズムパターンID」、「ハイハットリズムパターンID」、及び「シンバルリズムパターンID」といった複数の項目からなる。「スタイルID」は、スタイルデータを一意に識別するための識別子である。「スタイル名」は、各スタイルデータを一意に識別する名称である。
セクション進行情報テーブルは、楽曲演奏の進行に従って、スタイルデータ中からセクションを時系列的に順次指定するための情報を取りまとめたテーブルである。図19B(a)の構成例に示されるように、セクション進行情報は、スタイルID、スタイルを指定するスタイル指定データSt、セクションを指定するセクション情報Sni、各セクションの開始時間と終了時間との位置(通常、小節単位)を表すセクション開始タイミングデータTssi,Tsei(i=1,2,3,…)、及びセクション進行情報の最終位置を表すセクション進行エンドデータSeから構成することができ、例えば記憶部22に記憶される。つまり、各セクション情報Sniは、対応するセクションに関するデータの記憶領域を指定し、その前後にあるタイミングデータTssi,Tseiは、指定されたセクションによる伴奏の開始及び終了を指示する。従って、このセクション進行情報を用いて、伴奏スタイル指定データStで指定された伴奏スタイルデータの中から、タイミングデータTssi,Tsei及びセクション情報Sniの組合わせの繰返しで、セクションを順次指定することができる。
和音進行情報テーブルは、楽曲演奏の進行に従って演奏されるべき和音を時系列的に順次指定するための情報を取りまとめたテーブルである。和音進行情報は、図19B(b)の構成例に示されるように、スタイルID、調情報Key、和音名Cnj、和音名Cnjを規定するための和音根音情報Crj、和音タイプ情報Ctj、各和音の開始及び終了時間位置(通常、拍単位)を表す和音開始及び終了タイミングデータTcsj,Tcej(j=1,2,3,…)、及び和音進行情報の最終位置を表す和音進行エンドデータCeから構成することができ、例えば、記憶部22に記憶される。ここで、各2つの情報Crj,Ctjで規定される和音情報Cnjは、セクション情報Sniで指定されたセクション中の和音演奏データに対して演奏すべき和音の種類を指示し、その前後にあるタイミングデータTcsi,Tceiは、この和音による演奏の開始及び終了を指示する。従って、この和音進行情報を用いると、調情報Keyにより調を指定した上、タイミングデータTcsj,Tcej及び和音情報Cnjの組合わせの繰返しによって、演奏すべき和音を順次指定することができる。
図20は、第3実施形態に係る情報処理装置20が行う処理のフロー図である。図20において、ステップSd0~Sd5については、第2実施形態に係る図15のステップSa0~Sa5と同様である。第3実施形態のステップSd6において、制御部21は、ステップSd5における検索結果のリズムパターンレコードと同一のパターンIDが、いずれかのパートのリズムパターンIDに設定されたスタイルデータを検索結果として表示させる。
なお、この画面では、図示しない操作子の操作によって、ユーザが作成したオリジナルのスタイルデータや既存あるいはオリジナルのスタイルデータに含まれる演奏データを、登録したり編集・確認することも可能である。
以下の説明で、特定の実施形態を指定して説明しているものを除き、以上の実施形態は次のように変形可能である。尚、以下の変形例は適宜組み合わせて実施しても良い。
第1実施形態では、ループ再生モード或いは演奏ループ再生モードにおいて1件のフレーズレコードが検索結果として出力されていたが、これに限らず、リズムパターン検索部213が、入力リズムパターンを基準として、一定以上の類似度を持つ複数のフレーズレコードを、類似度の高い順番に並び替えて検索結果として出力してもよい。このとき、検索結果として出力されるフレーズレコードの件数は、ROMに定数として記憶されていてもよいし、記憶部22に変数として記憶され、ユーザにより変更可能としてもよい。例えば検索結果として出力されるフレーズレコードの件数が5件である場合、各フレーズレコードのフレーズ楽音データの名称が5件分、表示部24にリスト形式で表示される。そしてユーザが選択したフレーズレコードに基づく音が、音声出力部26から出力される。
楽器の種類が音高に幅のある楽器の場合、フレーズ楽音データにおける各構成音のキーと、外部音源を含む伴奏のキーが一致しない場合がある。このような場合に備えて、ユーザが操作部25を通じて操作を行うことで、制御部21が、フレーズ楽音データにおける各構成音のキーを変更可能としてもよい。また、このようなキーの変更は、操作部25を介して行われてもよいし、リズム入力装置10に設けられたフェーダー、つまみ、ホイール、ボタン等の操作子を介して行われてもよい。また、予め上記構成音のキーを表すデータをリズムDB221及び自動伴奏DB222に記憶させておくことで、ユーザがキーを変更した場合に、制御部21が、変更後のキーが何かをユーザに告知するようにしてもよい。
楽音データによっては、構成音の終了付近で波形の振幅(パワー)が0近くに収束されていないものがあり、この場合、構成音に基づく音声が出力された後に、クリップノイズが発生する場合がある。これを防ぐために、制御部21が、構成音の開始及び終了付近の一定範囲を自動でフェードイン又はフェードアウトする機能を有していてもよい。ここでユーザは、上記フェードイン又はフェードアウトを適用させるかどうかを、操作部25あるいはリズム入力装置10に設けられた何らかの操作子を介して選択可能である。
ユーザが演奏操作を行った結果であるフレーズを、制御部21が録音し、この録音した内容を、一般的に音源ループ素材で用いられているようなファイルフォーマットで出力可能としてもよい。例えば、楽曲制作において、ユーザ自らが欲するリズムパターンがリズムDB221に記憶されていない場合に、このように自身の演奏を記録する機能があると、ユーザは、欲していたものとイメージが近いフレーズ楽音データを手に入れることが可能となる。
また、制御部21は、再生対象とする楽音データを1つに限らず、複数の楽音データを再生対象として、複数のフレーズを音声として重ねて出力可能としてもよい。この場合、例えば表示部24に複数のトラックが表示され、ユーザは各々のトラックに対して、それぞれ異なる楽音データ及び再生モードを割り当てることが可能である。これにより、例えばユーザは、トラックAにコンガの楽音データをループ再生モードで割り当てることでこれを伴奏とし、トラックBにおいてジャンベの楽音データを演奏再生モードで割り当てることで演奏を行う、といったことが可能となる。
ユーザの演奏操作によって入力されたベロシティデータに対して、検索結果の楽音データにおける、上記ベロシティデータに紐付けられたトリガーデータと同じ時刻を持つ構成音(構成音Aとする)のアタック強度の大きさが極端に異なる(ここでは予め定められた閾値を超える場合とする)場合、以下のような処理が行われてもよい。上述のような場合、制御部21は、上記楽音データにおいて、入力されたベロシティデータにほぼ対応した大きさのアタック強度を持つ複数の構成音から、ランダムに選択した構成音と構成音Aを差し替える。ここでユーザは、上述した処理を適用させるかどうかを、操作部25あるいはリズム入力装置10に設けられた何らかの操作子を介して選択可能である。このようにすれば、ユーザは自らが行った演奏操作により近い出力結果を得ることが可能となる。
上述した第3実施形態を除く実施形態においては、フレーズ楽音データはWAVEやmp3といったファイルフォーマットからなるとしたが、これに限らず、例えばMIDI形式のようなシーケンスデータとしてもよい。この場合、記憶部22には、MIDIデータの形式でファイルが記憶され、音声出力部26に相当する構成はMIDI音源となる。特に第2実施形態において楽音データがMIDI形式である場合、キーシフトやピッチ変換に際してタイムストレッチなどの処理が不要となる。従って、この場合、制御部21は、ユーザによりキー指定鍵盤202を用いてキーが指定されたときは、楽音データが表すMIDI情報のうちキーを示す情報を指定キーに変更する。また、この場合、リズムパターンテーブルにおいて、各リズムパターンレコードが複数のコードに応じた楽音データを有する必要がない。制御部21は、ユーザによりコード指定ボックス203を用いてコードが指定されたときは、楽音データが表すMIDI情報のうちコードを示す情報を、指定されたコードに変更する。このように、楽音データがMIDI形式のファイルであっても、上述した実施形態と同様の効果を奏することができる。また、第3実施形態においては、オーディオデータを用いたスタイルデータを利用してもよい。この場合、基本的な構成は、第3実施形態のスタイルデータと同様だが、各パートの演奏データがオーディオデータとして記憶されている点が異なるものである。また、MIDIとオーディオが組み合わされたスタイルデータを利用してもよい。
ユーザの演奏操作によるトリガーデータと、リズムDB221に含まれるフレーズ楽音データ又は自動伴奏DB222に含まれるリズムパターンデータとの比較によって、制御部21が特定のフレーズレコード又はリズムパターンレコードを検出していたが、これに限らず、制御部21は、ユーザの演奏操作によるトリガーデータ及びベロシティデータの双方を用いてリズムDB221及び自動伴奏DB222を検索するようにしてもよい。この場合、同じリズムパターンを持つ楽音データが2つ存在した場合に、各構成音のアタック強度が、ユーザの演奏操作によるベロシティデータと、より近い楽音データが検索結果として検出される。このようにすれば、アタック強度についても、ユーザがイメージしていたものに近い楽音データが、検索結果として出力されることが可能となる。
リズムパターン同士のズレの計算方法は一例に過ぎず、異なる計算方法を用いてもよい。以下には、その計算方法のバリエーションを記載する。
(変形例9)
例えば、確実にリズムカテゴリが一致した検索結果が出力されるように、入力リズムパターンに該当するリズムカテゴリを判定した後、判定結果のリズムカテゴリに属するフレーズレコード又はリズムパターンレコードのみを対象として、リズムパターンのズレの計算(ステップSb6)、及びリズムパターン同士の距離の計算(ステップSb7)が行われるようにしてもよい。このようにした場合、計算量が少なくてすむため、情報処理装置20における負荷が下がるとともに、ユーザにとっての応答時間も短くなる。
(変形例10)
ステップSb6におけるリズムパターン同士のズレの計算において、以下のような処理を施してもよい。変形例10において、制御部21は、入力リズムパターンにおいて比較対象のリズムパターンにおけるオンセット時刻との時刻差の絶対値が閾値より小さいオンセット時刻については、その時刻差の絶対値はユーザの手入力による意図しないものであるとみなして、ズレの値を0あるいは本来の値より小さくなるように補正する。この閾値は、例えば「1」という値であり、予め記憶部22aに記憶されている。例えば、入力リズムパターンのオンセット時刻が「1,13,23,37」であり、比較対象のリズムパターンのオンセット時刻が「0,12,24,36」であったとする。この場合、各オンセット時刻の差の絶対値は、「1,1,1,1」となる。ここで閾値が「1」であったとすると、制御部21は、各オンセット時刻の差の絶対値に係数αを乗算して補正を行う。係数αは0から1の間を取り、ここでは0であるものとする。この場合、補正後の各オンセット時刻の差の絶対値は、「0,0,0,0」となるから、制御部21は、両者のリズムパターンのズレを0と算出する。係数αは、予め決められて記憶部22aに記憶されていてもよいが、2つのリズムパターンのズレの大きさに係数αの値を対応付けた補正カーブをリズムカテゴリ毎に記憶部22aに記憶させることで、制御部21がこの補正カーブに従って係数αを決定してもよい。
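A minimal sketch of the correction described in modification 10, assuming a fixed threshold of 1 clock and a constant coefficient α as in the example in the text; the per-category correction curve mentioned at the end is not modeled, and the function name is an assumption.

```python
def corrected_shift(input_onsets, record_onsets, threshold=1, alpha=0.0):
    """Sum of nearest-onset time differences, damping small differences by alpha
    (small differences are treated as unintended jitter in the user's input)."""
    total = 0.0
    for t in input_onsets:
        diff = min(abs(t - u) for u in record_onsets)
        if diff <= threshold:   # the example in the text damps differences of exactly 1 clock
            diff *= alpha
        total += diff
    return total

# Example from the text: input (1, 13, 23, 37) vs record (0, 12, 24, 36)
print(corrected_shift([1, 13, 23, 37], [0, 12, 24, 36]))   # -> 0.0
```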
(変形例11)
ステップSb6におけるリズムパターン同士のズレの計算において、以下のような処理を施してもよい。変形例11において、制御部21は、入力リズムパターンにおいて比較対象のリズムパターンにおけるオンセット時刻との時刻差の絶対値が閾値より大きいオンセット時刻については、計算に用いない、あるいは本来の値より小さくなるようにズレを補正する。これにより、例えばユーザが1小節の前半部分や後半部分だけリズムパターンを入力した場合であっても、リズムパターンが入力された区間を対象として検索が行われる。従って、例えば第2実施形態及び第3実施形態において、1小節通して同一のリズムパターンを有するリズムパターンレコードが自動伴奏DB222に含まれなくても、ユーザは、入力リズムパターンに或る程度類似したリズムパターンレコードを検索結果として得ることができる。
(変形例12)
ステップSb6におけるリズムパターン同士のズレの計算において、ベロシティパターンの差を考慮した算出方法を用いてもよい。入力リズムパターンをAとして、リズムパターンレコードに記述されたリズムパターンをBとすると、変形例12におけるAとBとのズレの大きさの計算は、以下の手順で行われる。
(11)制御部21は、リズムパターンAにおける各オンセット時刻を基準として、リズムパターンBにおける最も近いオンセット時刻との時刻差の絶対値を算出する。
(12)制御部21は、手順(11)で算出した各絶対値の総和を算出する。
(13)制御部21は、リズムパターンAにおける各オンセット時刻のベロシティデータと、リズムパターンBにおける各オンセット時刻のアタック強度との差の絶対値の総和を算出する。
(14)制御部21は、リズムパターンBにおける各オンセット時刻を基準として、リズムパターンAにおける最も近いオンセット時刻との時刻差の絶対値を算出する。
(15)制御部21は、手順(14)で算出した各絶対値の総和を算出する。
(16)制御部21は、リズムパターンBにおける各オンセット時刻のベロシティデータと、リズムパターンAにおける各オンセット時刻のアタック強度との差の絶対値の総和を算出する。
(17)制御部21は、以下の式(1)に従って、リズムパターンAとリズムパターンBとのズレを算出する。
リズムパターンAとリズムパターンBとのズレ=α×(手順(12)で算出した時刻差の絶対値の総和+手順(15)で算出した時刻差の絶対値の総和)/2+(1-α)×(手順(13)で算出したベロシティの差の絶対値の総和+手順(16)で算出したベロシティの差の絶対値の総和)/2・・・式(1)
式(1)においてαは0<α<1を満たす予め決められた係数であり、記憶部22aに記憶されている。ユーザは、操作部25を用いて係数αの値を変更可能である。例えば、ユーザは、リズムパターンを検索するにあたり、オンセット時刻の一致度とベロシティの一致度とのどちらを優先するかによって、係数αの値を設定すればよい。このようにすれば、ユーザは、ベロシティを考慮した検索結果を得ることができる。
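A minimal sketch of Expression (1) in modification 12. The text does not state explicitly which note of the other pattern each velocity is compared against; here the note at the nearest onset time is assumed, each pattern is represented as a list of (onset time, velocity or attack intensity) pairs, and the function names are illustrative.

```python
def velocity_weighted_shift(A, B, alpha=0.5):
    """Expression (1): blend of onset-time deviation and velocity deviation.
    A and B are lists of (onset_time, velocity_or_attack_intensity) pairs."""
    def time_dev(src, dst):                     # steps (11) / (14)
        return sum(min(abs(t - u) for u, _ in dst) for t, _ in src)

    def velocity_dev(src, dst):                 # steps (13) / (16): nearest-onset pairing assumed
        total = 0.0
        for t, v in src:
            _, w = min(dst, key=lambda n: abs(t - n[0]))
            total += abs(v - w)
        return total

    time_term = (time_dev(A, B) + time_dev(B, A)) / 2.0       # steps (12), (15)
    vel_term = (velocity_dev(A, B) + velocity_dev(B, A)) / 2.0
    return alpha * time_term + (1 - alpha) * vel_term
```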
(変形例13)
ステップSb6におけるリズムパターン同士のズレの計算において、デュレーションパターンの差を考慮した算出方法を用いてもよい。入力リズムパターンをAとして、リズムパターンレコードに記述されたリズムパターンをBとすると、変形例13におけるAとBとのズレの大きさの計算は、以下の手順で行われる。
(21)制御部21は、リズムパターンAにおける各オンセット時刻を基準として、リズムパターンBにおける最も近いオンセット時刻との時刻差の絶対値を算出する。
(22)制御部21は、手順(21)で算出した各絶対値の総和を算出する。
(23)制御部21は、リズムパターンAにおける各オンセット時刻のデュレーションパターンと、リズムパターンBにおける各オンセット時刻のデュレーションパターンとの差の絶対値の総和を算出する。
(24)制御部21は、リズムパターンBにおける各オンセット時刻を基準として、リズムパターンAにおける最も近いオンセット時刻との時刻差の絶対値を算出する。
(25)制御部21は、手順(24)で算出した各絶対値の総和を算出する。
(26)制御部21は、リズムパターンBにおける各オンセット時刻のデュレーションパターンと、リズムパターンAにおける各オンセット時刻のデュレーションパターンとの差の絶対値の総和を算出する。
(27)制御部21は、以下の式(2)に従って、リズムパターンAとリズムパターンBとのズレを算出する。
リズムパターンAとリズムパターンBとのズレ=β×(手順(22)で算出した時刻差の絶対値の総和+手順(25)で算出した時刻差の絶対値の総和)/2+(1-β)×(手順(23)で算出したデュレーションの差の絶対値の総和+手順(26)で算出したデュレーションの差の絶対値の総和)/2・・・式(2)
式(2)においてβは0<β<1を満たす予め決められた係数であり、記憶部22に記憶されている。ユーザは、操作部25を用いて係数βの値を変更可能である。例えば、ユーザは、リズムパターンを検索するにあたり、オンセット時刻の一致度とデュレーションパターンの一致度とのどちらを優先するかによって、係数βの値を設定すればよい。このようにすれば、ユーザは、デュレーションを考慮した検索結果を得ることができる。
以上が、リズムパターン同士のズレの計算方法のバリエーションについての説明である。
リズムパターン同士の距離の計算方法は一例に過ぎず、異なる計算方法を用いてもよい。以下には、その計算方法のバリエーションを記載する。
<両者の値の総和に係数を用いる>
(変形例14)
第1実施形態~第3実施形態におけるステップSb7において、制御部21は、ステップSb4でリズムカテゴリについて算出した類似度距離と、ステップSb6で算出したリズムパターンのズレとを乗算することで、リズムパターン同士の距離を計算していたが、上記類似度距離又は上記ズレのどちらか一方が「0」の値であると、リズムパターン同士の距離が「0」と算出され、他方の値を反映しないものとなってしまう。これに対して、ステップSb7において制御部21が以下の式(3)に従ってリズムパターン同士の距離を算出してもよい。
リズムパターン同士の距離=(ステップSb4でリズムカテゴリについて算出した類似度距離+γ)×(ステップSb6で算出したリズムパターンのズレ+δ)・・・式(3)
式(3)において、γ及びδは予め決められた定数であり、記憶部22aに記憶されている。ここで、γ及びδは適当に小さな値であればよい。このようにすれば、ステップSb4でリズムカテゴリについて算出した類似度距離と、ステップSb6で算出したリズムパターンのズレのどちらか一方が「0」の値であっても、他方の値が反映されたリズムパターン同士の距離が算出される。
(変形例15)
ステップSb7におけるリズムパターン同士の距離の計算は、上述したものに限らず、次のような方法を用いてもよい。変形例15では、ステップSb7において制御部21が以下の式(4)に従ってリズムパターン同士の距離を算出する。
リズムパターン同士の距離=ε×ステップSb4でリズムカテゴリについて算出した類似度距離+(1-ε)×ステップSb6で算出したリズムパターンのズレ・・・式(4)
式(4)においてεは0<ε<1を満たす予め決められた係数である。係数εは記憶部22に記憶されており、ユーザは操作部25を用いてその値を変更可能である。例えば、ユーザは、リズムパターンを検索するにあたり、リズムカテゴリについて算出した類似度距離と、リズムパターンのズレとのどちらを優先するかによって、係数εの値を設定すればよい。このようにすれば、ユーザは、より自らの所望する検索結果を得ることができる。
(変形例16)
ステップSb7におけるリズムパターン同士の距離の計算は、上述したものに限らず、次のような方法を用いてもよい。変形例16では、ステップSb7において制御部21が以下の式(5-1)に従ってリズムパターン同士の距離を算出する。
リズムパターン同士の距離=(ステップSb4でリズムカテゴリについて算出した類似度距離+ステップSb6で算出したリズムパターンのズレ)×з×|入力BPM-リズムパターンレコードのBPM|・・・式(5-1)
式(5-1)においてзは0<з<1を満たす予め決められた定数である。зは記憶部22に記憶されており、ユーザは操作部25を用いてその値を変更可能である。例えば、ユーザは、リズムパターンを検索するにあたり、BPMの差をどの程度優先するかによって、係数зの値を設定すればよい。このとき、入力BPMとそのBPMとの差が予め決められた閾値を越えるようなリズムパターンレコードは、制御部21が検索結果から除外するようにしてもよい。このようにすれば、ユーザは、BPMを考慮した検索結果を得ることができる。
また、上記式(5-1)の別の例として、以下のものを用いてもよい。
リズムパターン同士の距離=(ステップSb4でリズムカテゴリについて算出した類似度距離+ステップSb6で算出したリズムパターンのズレ)+з×|入力BPM-リズムパターンレコードのBPM|・・・式(5-2)
式(5-2)においても、式(5-1)と同様に、зは0<з<1を満たす予め決められた定数である。зは記憶部22に記憶されており、ユーザは操作部25を用いてその値を変更可能である。式(5-2)を用いれば、例えばзをかなり小さい値にすることで、基本的にはリズムパターンが近いものほど高順位で結果が出力され、更に、リズムパターンが一致しているものの中からテンポが近い順に表示されるようにすることができる。
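A minimal sketch of Expressions (5-1) and (5-2) in modification 16, together with the optional cutoff mentioned in the text that excludes records whose BPM differs too much from the input BPM. The function names, the default value of the constant, and the record representation are assumptions.

```python
def distance_5_1(category_distance, pattern_shift, input_bpm, record_bpm, z=0.5):
    """Expression (5-1): multiplicative weighting by the BPM difference."""
    return (category_distance + pattern_shift) * z * abs(input_bpm - record_bpm)

def distance_5_2(category_distance, pattern_shift, input_bpm, record_bpm, z=0.5):
    """Expression (5-2): additive weighting by the BPM difference."""
    return (category_distance + pattern_shift) + z * abs(input_bpm - record_bpm)

def filter_by_bpm(records, input_bpm, max_diff):
    """Optional cutoff: drop records whose BPM is farther than max_diff from the input."""
    return [r for r in records if abs(input_bpm - r["bpm"]) <= max_diff]
```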
(変形例17)
ステップSb7におけるリズムパターン同士の距離の計算は、上述したものに限らず、次のような方法を用いてもよい。変形例17では、ステップSb7における上述した式のいずれかにおいて、制御部21が、右辺に対して、リズムパターンの入力時に指定された楽音の音色と比較対象のリズムパターンレコードの音色との一致度を乗算する。音色の一致度の算出方法については、周知の方法が用いられればよい。ここでは、一致度の値が小さい程、両者の音色が近いものであり、一致度の値が大きい程、両者の音色が離れているものとする。このようにすれば、ユーザはリズムパターンの入力時に体感する音色に近い音色のリズムパターンレコードが検索結果として得られやすいため、ユーザは検索結果により納得感を持つことができる。
(変形例18)
ステップSb7におけるリズムパターン同士の距離の計算は、上述したものに限らず、次のような方法を用いてもよい。変形例18では、ユーザは、リズムパターンの入力時に、操作部25を用いてジャンルを指定することが可能である。変形例18では、ステップSb7における上述した式のいずれかにおいて、制御部21が、右辺に対して、リズムパターンの入力時に指定されたジャンルと比較対象のリズムパターンレコードのジャンルとの一致度を乗算する。ここで、ジャンルは、大ジャンル、中ジャンル、小ジャンルといったように、段階的に区分けされていてもよい。制御部21は、指定されたジャンルと一致する又は含むリズムパターンレコードと入力リズムパターンとの距離がより小さくなるように、また、指定されたジャンルと一致しない又は含まないリズムパターンレコードと入力リズムパターンとの距離がより大きくなるように、ジャンルの一致度を算出し、ステップSb7における式に補正を行えばよい。このようにすれば、ユーザはリズムパターンの入力時に指定したジャンルと一致する又は含むリズムパターンレコードが検索結果として得られやすいため、ユーザは検索結果により納得感を持つことができる。
以上が、リズムパターン同士の距離の計算方法のバリエーションについての説明である。
入力リズムパターンとリズムカテゴリとの距離の算出方法は一例に過ぎず、異なる計算方法を用いてもよい。以下には、その算出方法のバリエーションを記載する。
<カテゴリに特有の入力時刻間隔の個数>
(変形例19)
変形例19において、制御部21は、入力リズムパターンにおいて比較対象のリズムカテゴリを象徴するオンセット時刻間隔が含まれる個数に基づいて、入力リズムパターンと各リズムカテゴリとの距離を算出する。図24は、オンセット時刻間隔テーブルの一例を表す図である。オンセット時刻間隔テーブルは記憶部22に予め記憶されている。オンセット時刻間隔テーブルは、リズムカテゴリの分類を示す名称と、各リズムカテゴリにおいて対象となるオンセット時刻間隔との組み合わせからなる。なお、ここでは1小節を48に等分して正規化したものとしてオンセット時刻間隔テーブルの内容が予め決められている。
(d)12,6,6,6,6,6
制御部21は、算出した数値群と、図24に示すオンセット時刻間隔テーブルとに従って、4分のオンセット時刻間隔が1個、8分のオンセット時刻間隔が5個であると特定する。そして制御部21は、以下の式(6)に従って、入力リズムパターンと各リズムカテゴリとの距離を算出する。
入力リズムパターンとリズムカテゴリNとの距離=1-(入力リズムパターンにおけるリズムカテゴリNの対象となるオンセット時刻間隔の個数/入力リズムパターンにおけるオンセット時刻間隔の総数)・・・(6)
上述した式は一例であり、対象となるオンセット時刻間隔がより多く含まれているリズムカテゴリほど、入力リズムパターンとの距離が小さく算出されるものであればよい。式(6)に従った結果、制御部21は、例えば入力リズムパターンと8分のリズムカテゴリとの距離を、0.166と算出する。また、式(6)に従った結果、制御部21は、例えば入力リズムパターンと4分のリズムカテゴリとの距離を、0.833と算出する。制御部21は、このようにして入力リズムパターンと各リズムカテゴリとの距離を算出し、最も距離が小さく算出されたリズムカテゴリに入力リズムパターンが属する、と判定する。
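A minimal sketch of Expression (6) in modification 19. The onset-interval table of FIG. 24 is not reproduced in this text, so the table below is an illustrative assumption; only the 1 − (matching intervals / total intervals) computation follows the description, and it reproduces the 0.166 and 0.833 example values quoted above.

```python
# Illustrative stand-in for the onset-interval table of FIG. 24 (48 clocks per measure).
CATEGORY_INTERVALS = {
    "quarter": {12, 24, 36, 48},
    "eighth": {6, 18, 30, 42},
    "eighth_triplet": {8, 16, 32, 40},
    "sixteenth": {3, 9, 15, 21, 27, 33, 39, 45},
}

def category_distance(quantized_intervals, category):
    """Expression (6): 1 - (matching intervals / total intervals)."""
    relevant = CATEGORY_INTERVALS[category]
    hits = sum(1 for i in quantized_intervals if i in relevant)
    return 1.0 - hits / len(quantized_intervals)

def closest_category(quantized_intervals):
    """Pick the category with the smallest distance to the input pattern."""
    return min(CATEGORY_INTERVALS, key=lambda c: category_distance(quantized_intervals, c))

print(category_distance([12, 6, 6, 6, 6, 6], "eighth"))   # -> 0.166..., as in the text
print(category_distance([12, 6, 6, 6, 6, 6], "quarter"))  # -> 0.833..., as in the text
```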
(変形例20)
入力リズムパターンとリズムカテゴリとの距離の算出方法は、上述したものに限らず、次のようにしてもよい。変形例20では、距離参照表なるものを記憶部22が記憶している。図25は、距離参照表の一例を表す図である。距離参照表では、入力リズムパターンの属し得るリズムカテゴリと、自動伴奏DB222に記憶された各リズムパターンレコードが属し得るリズムカテゴリとの距離が、マトリクス状に表されている。例えば、制御部21が、入力リズムパターンが属するリズムカテゴリを8分と判定したとする。制御部21は、判定結果である入力リズムパターンが属するリズムカテゴリと、距離参照表とに基づいて、入力リズムパターンと各リズムカテゴリとの距離を特定する。例えば、この場合、制御部21は、入力リズムパターンと4分のリズムカテゴリとの距離を「0.8」と特定し、入力リズムパターンと8分のリズムカテゴリとの距離を「0」と特定する。これにより、制御部21は、8分のリズムカテゴリが入力リズムパターンと最も距離が小さいと判定する。
(変形例21)
入力リズムパターンとリズムカテゴリとの距離の算出方法は、上述したものに限らず、次のようにしてもよい。変形例21において、制御部21は、入力リズムパターンにおいて比較対象のリズムカテゴリを象徴するオンセット時刻が含まれる個数に基づいて、入力リズムパターンと各リズムカテゴリとの距離を算出する。図26は、オンセット時刻テーブルの一例を表す図である。オンセット時刻テーブルは記憶部22aに予め記憶されている。オンセット時刻テーブルは、リズムカテゴリの分類を示す名称と、各リズムカテゴリにおいて対象となるオンセット時刻と、入力リズムパターンに対象となるオンセット時刻が含まれる場合に加算するスコアとの組み合わせからなる。なお、ここでは1小節を48に等分して正規化したものとしてオンセット時刻テーブルの内容が予め決められている。
(e)0,12,18,24,30,36,42
制御部21は、このオンセット時刻とオンセット時刻テーブルとに基づいて、各リズムカテゴリに対する入力リズムパターンのスコアを算出する。ここでは、制御部21は、4分のリズムカテゴリに対するスコアとして「8」を算出し、8分のリズムカテゴリに対するスコアとして「10」を算出し、8分3連のリズムカテゴリに対するスコアとして「4」を算出し、16分のリズムカテゴリに対するスコアとして「7」を算出する。そして制御部21は、算出したスコアが最も高いリズムカテゴリを、入力リズムパターンとの距離が最も小さいリズムカテゴリであると判定する。ここでは、制御部21は、入力リズムパターンが8分のリズムカテゴリと最も距離が小さいと判定する。
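A minimal sketch of the score-based category determination in modification 21. The onset-time table of FIG. 26 and its per-entry scores are not reproduced in this text, so the table below is an illustrative assumption (which is why the absolute scores differ from the 8/10/4/7 values quoted above); only the mechanism of adding a score for each matching onset and choosing the highest-scoring category follows the description.

```python
# Illustrative stand-in for the onset-time table of FIG. 26 (1 measure = 48 clocks).
CATEGORY_ONSETS = {
    "quarter":        {0, 12, 24, 36},
    "eighth":         {0, 6, 12, 18, 24, 30, 36, 42},
    "eighth_triplet": {0, 8, 16, 24, 32, 40},
    "sixteenth":      set(range(3, 48, 6)),   # "pure" sixteenth positions only
}

def best_category(input_onsets):
    """Add one point per input onset found in a category's time set; pick the top score."""
    scores = {name: sum(1 for t in input_onsets if t in targets)
              for name, targets in CATEGORY_ONSETS.items()}
    return max(scores, key=scores.get)

print(best_category([0, 12, 18, 24, 30, 36, 42]))   # example (e) from the text -> "eighth"
```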
以上が、入力リズムパターンとリズムカテゴリとの距離の算出方法のバリエーションについての説明である。
(変形例22)
ユーザがパートを指定して入力した音高パターンによって検索が行われるようにしてもよい。説明の便宜上、ここでは第2実施形態及び第3実施形態を例に挙げて説明する。変形例22では、図13Aに示すリズムパターンテーブルにおいて、「リズムパターンID」という項目名は「パターンID」となる。また、変形例22において、図13Aに示すリズムパターンテーブルにおける各リズムパターンレコードには、「音高パターンデータ」という項目が追加される。音高パターンデータは、1小節を構成するフレーズにおける各構成音の音高の時系列に沿った変化が記録されたデータファイルである。音高パターンデータは、例えばテキストファイルに、構成音の音高の時系列に沿った変化が記述されたものである。また、前述したように、オンセット情報には、トリガーデータ以外に鍵盤のノートナンバーが含まれている。このうち、トリガーデータにおけるオンセット時刻の並びが入力リズムパターンに相当し、鍵盤のノートナンバーの並びが入力音高パターンに相当する。ここで、情報処理装置20が音高パターンを検索するときには周知の方法のいずれかを用いればよい。例えば、ユーザがパートにコードを指定して、「C-D-E」というルートの音高シーケンスを入力した場合、情報処理装置20の制御部21は、このシーケンスにおける音高の進行を「0-2-4」という相対的な数値で表した音高パターンデータを持つリズムパターンレコードを検索結果として出力する。
(変形例23)
ユーザがパートを指定して入力したリズムパターンによって検索が行われた結果のうち、上記音高パターンがより類似したリズムパターンレコードが検索結果として出力されるようにしてもよい。説明の便宜上、ここでは第2実施形態及び第3実施形態を例に挙げて説明する。変形例23のリズムパターンテーブルにおける各リズムパターンレコードは、各パートについての「パターンID」とともに、各パートについての「音高パターンデータ」を含んでいる。
((|60-57|)+(|64-60|)+(|67-64|)+(|64-60|))/4=3.5・・・式(7)
((|3.5-3|)^2+(|3.5-4|)^2+(|3.5-3|)^2+(|3.5-4|)^2)/4=0.25・・・式(8)
上記式で表されるように、「60,64,67,64」で表される入力音高パターンと、「57,60,64,60」で表される音源の音高パターンとの音高の差分の分散は、「0.25」となる。制御部21は、このような音程の分散を、音源の音高パターンの全てについて算出する。
Sp=(1-x)S+xyV・・・式(9)
変数xが0の場合、上記式は「Sp=S」となるため、求められる類似の度合いは、音高パターンを考慮しない場合のものとなる。一方、xが1に近づくにつれて、上記式で求められる類似の度合いは、より音高パターンを考慮した場合のものとなる。変数xの大きさは、ユーザが操作部25を用いて変更可能としてもよい。なお、式(9)においては、上記音高の差分の分散に代えて、音高の差分の平均誤差を用いてもよい。そして制御部21は、検索結果であるリズムパターンレコードを、音高パターンを考慮した場合の、検索結果のリズムパターンと入力リズムパターンとの類似の度合いが高い(距離が小さい)順番で並び替えると、RAMに記憶させる。
(31)制御部21は、入力音高パターンの各ノートのオンセット時刻を基準として、音源の音高パターンにおける最も近いオンセット時刻のノートとの音高の差分を算出する。
(32)制御部21は、音源の音高パターンの各ノートのオンセット時刻を基準として、入力音高パターンにおける最も近いオンセット時刻のノートとの音高の差分を算出する。
(33)制御部21は、手順(31)で算出した差分と手順(32)で算出した差分との平均値を、入力音高パターンと音源の音高パターンとの音高の差分として算出する。
なお、計算量を抑えるために、手順(31)若しくは手順(32)のいずれかのみを用いて音高の差分が算出されるようにしてもよい。なお、音高パターンを考慮した場合の、入力リズムパターンと検索結果のリズムパターンとの類似の度合いの算出方法は、上述の方法に限らず、他の方法が用いられてもよい。
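A minimal sketch of Expressions (7) through (9) in modification 23: the variance of the absolute pitch differences and the blended similarity Sp. The note lists are assumed to be already paired note-for-note (for example by the nearest-onset pairing of procedures (31)-(33)); the function names are assumptions.

```python
def pitch_diff_variance(input_pitches, source_pitches):
    """Expressions (7) and (8): variance of the absolute pitch differences
    between two note-aligned pitch patterns (MIDI note numbers)."""
    diffs = [abs(a - b) for a, b in zip(input_pitches, source_pitches)]
    mean = sum(diffs) / len(diffs)                            # Expression (7)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)   # Expression (8)

def blended_similarity(S, V, x, y=1.0):
    """Expression (9): Sp = (1 - x) * S + x * y * V."""
    return (1 - x) * S + x * y * V

print(pitch_diff_variance([60, 64, 67, 64], [57, 60, 64, 60]))  # -> 0.25, as in the text
```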
(|36-36| mod 12)+(|43-31| mod 12)+(|36-36| mod 12)=0・・・式(10)
(|0-0|^2)+(|0-0|^2)+(|0-0|^2)=0・・・式(11)
12音階における音高の変動のパターンが一致しているため、音高パターンAと音高パターンBの12音階における音高パターンの類似度は0と算出される。すなわち、この場合、音高パターンBが、音高パターンAに最も類似する音高パターンとして出力される。このように、入力音高パターンそのものとの類似度合いだけでなく、12音階における音高の変動のパターンも考慮すると、ユーザは、より納得感のある結果が得られる。
入力音高パターンそのものと12音階における音高の変動のパターンを考慮したリズムパターンの類似度=(1-X)×(リズムパターンの類似度)+XY{(1-κ)(音高のパターンの類似度)+κ(12音階のパターンの類似度)}・・・式(13)
ここで、X、Y及びκは、0<X<1、Y>0、0<κ<1をそれぞれ満たす予め決められた定数である。なお、上記式は一例であってこの限りではない。
(変形例24)
制御部21は、ユーザの演奏操作によるトリガーデータ及びベロシティデータの双方を用いてリズムDB221及び自動伴奏DB222を検索するようにしてもよい。この場合、極めて類似したリズムパターンを持つリズムパターンデータが2つ存在したときは、制御部21は、アタック強度パターンデータに記述された各構成音のアタック強度がユーザの演奏操作によるベロシティデータとより近いリズムパターンデータを検索結果として出力する。このようにすれば、アタック強度についても、ユーザがイメージしていたものに近い自動伴奏データが、検索結果として出力されることが可能となる。
制御部21がリズムDB221及び自動伴奏DB222を検索するにあたって、トリガーデータ及びベロシティデータに加えて、同一の音が鳴り続ける長さを表すデュレーションデータを用いてもよい。ここで、各構成音におけるデュレーションデータは、オフセット時刻から直前のオンセット時刻を差し引いた時間の長さで表される。デュレーションデータは、リズム入力装置10における入力手段が鍵盤である場合に、情報処理装置20がオフセット時刻を明確に取得することが可能であるため、特に有効に活用することができる。この場合、フレーズテーブル及びリズムパターンテーブルにおいて、「デュレーションパターンデータ」という項目が追加されることとなる。デュレーションパターンデータは、1小節を構成するフレーズにおける各構成音の鳴り続ける長さが記録されたデータファイルであり、例えばテキストファイルに、各構成音の鳴り続ける長さが記述されたものである。この場合、情報処理装置20は、ユーザにより入力された、1小節におけるデュレーションのパターンを用いて、フレーズテーブル又はリズムパターンテーブルから、上記入力されたデュレーションのパターンと最も類似するデュレーションパターンデータを持つフレーズレコード又はリズムパターンレコードを検索結果として出力するようにすればよい。このようにすれば、類似したリズムパターンを持つフレーズレコード又はリズムパターンレコードが複数存在しても、情報処理装置20が、スラー(伸ばす)のあるリズムパターンや、スタッカート(はねる)のあるリズムパターン等を識別して検索結果として出力することが可能となる。
(変形例26)
情報処理装置20が検索を行うにあたって、入力リズムパターンにおける音色と同一又は類似の度合いが高い音色を持つフレーズを持つ自動伴奏データを検索するようにしてもよい。例えば、それぞれのリズムパターンデータに対し、使用される音色毎にその音色の識別情報を対応付けて持たせておき、ユーザがリズムパターンを入力する際に、事前に音色を指定しておくことで、対応する音色で発音するリズムパターンに絞ったうえでそのリズムパターンの類似度が高いものを検索されるようにすればよい。ここでは、第2実施形態及び第3実施形態を例に挙げて説明する。この場合、リズムパターンテーブルにおいて、項目「音色ID」が追加される。ユーザは、演奏操作子を用いてリズムパターンを入力する際に、例えば操作部25を用いて音色を指定する。音色の指定は、リズム入力装置10が備える操作子によって行われてもよい。ユーザが演奏操作を行うと、演奏操作が行われた際に指定された音色のIDが、MIDI情報の一部として情報処理装置20に入力される。情報処理装置20は、入力された音色IDに基づく音の音色と、リズムパターンテーブルにおける指定されたパートの各リズムパターンレコードにおける音色IDに基づく音の音色を比較し、比較結果に基づいて、両者が予め決められた対応関係にある場合、そのリズムパターンレコードを特定する。この対応関係は、例えば、比較結果に基づいて両者の楽器種類が同じであると識別可能なように予め決められており、記憶部22aに記憶されている。ここで、音色の比較については、各々の音の波形におけるスペクトラムを比較するなど、周知の方法を用いればよい。このようにすれば、リズムパターンが類似していることに加え、指定したパートについて音色が類似した自動伴奏データをユーザは得ることが可能となる。なお、具体的な手法としては、変形例17に記載した内容と同様の手法で実現できる。
入力時刻間隔ヒストグラムと発音時刻間隔ヒストグラムとの差分の絶対値が最も小さいことを、入力時刻間隔ヒストグラムに対する発音時刻間隔ヒストグラムの類似度が高いと判断していたが、両者の類似度が高いことを示す条件は、上記のような両ヒストグラムの差分の絶対値に限らない。例えば、両ヒストグラムの各時刻間隔成分の積などの相関度が最も大きいとか閾値を超えるといったことや、両ヒストグラムの差分の2乗が最も小さいとか閾値未満であるといったこと、両ヒストグラムの各時刻間隔成分における値が類似していることを示すような条件であれば、どのような条件を用いてもよい。
リズム入力装置10から入力されたリズムパターンに従って、情報処理装置20が、当該リズムパターンに類似するリズムパターンを持つ楽音データを検索し、検索結果の楽音データを音に変換して出力していたが、次のようにしてもよい。例えば、上記実施形態に相当する内容をウェブサービスによって実施する場合、上記実施形態において情報処理装置20が備える機能を、当該ウェブサービスを提供するサーバ装置が備えることとなる。そして、ユーザの操作によって、クライアント装置である自端末(例えばPC)が、インターネットや専用線等を介して入力リズムパターンを上記サーバ装置に送信する。サーバ装置は、受信した入力リズムパターンに基づいて、記憶手段から当該入力リズムパターンに類似するリズムパターンを持つ楽音データを検索し、検索結果の楽音データを自端末に送信する。そして自端末は、受信した楽音データに基づく音を出力する。なお、この場合、サーバ装置の提供するウェブサイトやアプリケーションにおいて、小節線クロックがユーザに提示されるようにすればよい。
リズム入力装置10における演奏操作子は、ユーザが演奏操作したときに、少なくともトリガーデータを出力するものであれば、鍵盤やドラムパッドのような形状に限らず、弦楽器、吹奏楽器、あるいはボタンなどの形状であってもよい。また、その他に演奏操作子は、タッチパネルを備えたタブレットPC、スマートフォン、携帯電話等であってもよい。
(変形例30)
第2実施形態及び第3実施形態において、リズムパターンレコードには本来のBPMが情報として含まれているから、ユーザが操作部25を用いて行った操作に従って、制御部21が、リズムパターンレコードに含まれる楽音データが示す楽音を本来のBPMで再生するようにしてもよい。また、検索結果から特定のリズムパターンレコードがユーザにより選択されて制御部21がこれを特定すると、制御部21が、特定された直後は上記入力されたBPMまたは上記指定されたBPMに基づく速度でリズムパターンレコードに含まれる楽音データが示す楽音を再生し、時間の経過に伴って、BPMがリズムパターンレコードの持つ本来のものに徐々に近づくように制御してもよい。
ユーザが検索結果に対してより納得感を持てるようにするための方法は、上述したフィルタリング機能に限ったものではない。
<類似度にBPMの差による重み付けを行う>
説明の便宜上、ここでは第2実施形態及び第3実施形態を例に挙げて説明する。例えば、入力リズムパターンとリズムパターンテーブルに含まれるリズムパターンレコードとの距離を求める算出式に、入力BPMとリズムパターンレコードが有する本来のBPMとの差に基づく重み付けを導入してもよい。ここで、aを予め定められた定数とし、入力リズムパターンとリズムパターンテーブルに含まれるリズムパターンレコードとの距離をLとすると、上記重み付けを導入した場合の類似度を求める計算式は、例えば以下の式(14)のように表せる。
類似度=L+|入力BPM-リズムパターンレコードの有するBPM|/a・・・式(14)
なお、上記類似度を求める計算式は式(14)のようなものに限ったものではなく、入力BPMとリズムパターンレコードの持つBPMとが近いほど類似度が小さくなる(すなわち類似の度合いが高くなる)ような計算式であればよい。
上述の実施形態のように、プルダウンで特定の対象をユーザが指定して表示結果を絞るように使用してもよいが、リズムパターン入力時の演奏情報を自動で解析して自動で表示結果を絞るような方法でもよい。また、鍵盤などから入力したリズム入力のピッチの演奏情報から、コードタイプやスケールを判定してそのコードタイプやスケールで登録された伴奏が自動で検索結果として表示されるようにしてもよい。例えばロックっぽいコードでリズム入力したら、ロックのスタイルが検索されやする。他には、中東っぽいスケールでリズムを入力したら、中東っぽいフレーズが検索されやすくなる。また、鍵盤入力時に指定された音色の情報によって、同じ音色情報を持ち、且つリズムパターンが一致するものを検索するようにしてもよい。例えば、スネアドラムのリムショットでリズム入力した場合、同じリズムパターンを持つ候補の中でもリムショットの音色の伴奏が優先して表示されるようにする。
(変形例32)
第2実施形態及び第3実施形態において、リズム入力装置10が入力パッド12を備えない場合、リズム入力装置10が次のような構成を取ってもよい。この場合、デフォルトの状態では鍵盤11には、ベース入力音域鍵盤11a、コード入力音域鍵盤11b、及びフレーズ入力音域鍵盤11cが所定の鍵域に各々割り当てられている。ここでユーザが、操作部25を介してドラムスのパートの入力を行う旨を指示すると、制御部21は、鍵盤11の所定の鍵域にドラムのパートを割り当てる。例えば、制御部21は、C3にバスドラムのパートを割り当て、D3にスネアドラムのパートを割り当て、E3にハイハットのパートを割り当て、F3にシンバルのパートを割り当てる、といった具合である。なお、この場合、制御部21は、鍵盤11の全鍵域における操作子(すなわち各鍵)に、各々異なる楽器音を割り当てることが可能である。ここで制御部21が、鍵盤11における各操作子(各鍵)の上部や下部に、割り当てられる楽器音に関する画像(例えばスネアドラムの画像)を表示するようにしてもよい。
(変形例33)
第2実施形態及び第3実施形態において、ユーザが、どの操作子を操作すれば、どのパートが制御部21によって検索されるのかを視覚的に分かりやすくするために、次のようにしてもよい。例えば、制御部21が、鍵盤11における各操作子(各鍵)の上部や下部に、割り当てられるパートに関する画像(例えば、ギターのコードが押さえられた画像、ピアノが単音で演奏されている画像(例えば単一の鍵が指で押さえられている画像)、又はスネアドラムの画像等)を表示する。また、制御部21が、上述の画像を各操作子の上部や下部に表示するに限らず、表示部24に表示させてもよい。この場合、表示部24には、例えば鍵盤11を模した鍵盤の画像が表示されるとともに、実際の鍵盤11の各鍵域に割り当てられているのと同じ割り当て状態で、鍵盤の画像の各鍵域に対して割り当てられたパートの画像が表示される。または、ユーザが、どの操作子を操作すれば、どのパートが制御部21によって検索されるのかを聴覚的に分かりやすくするために、次のようにしてもよい。例えば、ユーザがベース入力音域鍵盤11aに対して入力を行うと、制御部21は、音声出力部26からベースの音声を出力させる。このようにすれば、ユーザは、どの操作子を操作すれば、どのパートが検索されるのかを視覚的又は聴覚的に判別可能となるため、操作入力が容易なものとなり、結果として自らが望む伴奏音源を得やすくなる。
(変形例34)
図5の処理フローにおいて、リズムカテゴリ毎のオンセット時刻間隔の分布を計算(ステップSb1)した後に、入力リズムパターンにおけるオンセット時刻間隔の分布を計算していた(ステップSb3)が、ステップSb1とステップSb3の順番を入れ替えてもよい。また、処理ステップの入れ替えに関わらず、制御部21が、リズムカテゴリ毎のオンセット時刻間隔の分布を計算した後、計算結果をRAMや記憶部22に記憶させるようにしてもよい。このようにすれば、制御部21は一度計算した結果を再度計算する必要がなく、処理速度の向上を見込むことが可能となる。
(変形例35)
第1実施形態~第3実施形態では、例えば、ユーザがベース入力音域鍵盤11aにおいて、和音となるように鍵盤を押下したとき等のように、ユーザが所定時間内において複数の操作子を操作してリズムパターンを入力する場合には、次のような問題がある。例えば、1小節において、ユーザが「0.25」のタイミングでリズムを入力したかったとする。ここで、ユーザが、自身では同一のタイミングのつもりで複数の操作子を操作しても、実際には、或る操作子が「0.25」のオンセット時刻で操作され、他の操作子が「0.26」のオンセット時刻で操作された場合、制御部21は、これらのオンセット時刻のとおりに入力リズムパターンを記憶してしまう。このような場合、ユーザが意図していたものとは異なる検索結果が出力される可能性があり、ユーザにとっては操作性がよいものとはいえない。これに対して、次のようにしてもよい。なお、説明の便宜上、ここでは第2実施形態及び第3実施形態を例に挙げて説明する。
(変形例36)
制御部21が小節単位で入力リズムパターンを記憶するタイミングを、小節線クロックに基づく小節の切り替えタイミングと同じものすると、以下のような問題が生じることがある。例えば、ユーザの操作によってリズムパターンが入力される場合、ユーザが自身で感じている時刻間隔と小節線クロックとのズレによって、ユーザが意図していたリズムパターンと実際のオンセット時刻との間に数msec~数十msecの誤差が生じる。従って、例えば、ユーザが小節頭の拍を入力しているつもりでも、上記誤差によって1つ前の小節のリズム入力として扱われてしまうことがあり、この場合、このリズム入力は入力リズムパターンとして記憶されないことがある。このような場合、ユーザが意図していたものと異なる検索結果が出力されてしまい、ユーザにとって操作性がよいとはいえない。このような問題に対しては、制御部21がRAMに入力リズムパターンを記憶させる際に、小節の頭よりも数十msec早い時点(すなわち直前の小節における最後の数十msec)から、最後の数十msecを除く小節の終わりまでを処理の対象範囲とすればよい。つまり、制御部21は、RAMに記憶させる入力リズムパターンの対象範囲を数十msec分だけ前にずらすこととなる。このようにすれば、ユーザが意図していたものと異なる検索結果が出力されることを少なくすることができる。
(変形例37)
制御部21がリズムパターン検索を行うタイミングを、小節線クロックに基づく小節の切り替えタイミングと同じものすると、以下のような問題が生じることがある。例えば、本発明における検索方法は、検索結果の楽音データが、リズム入力の直後の小節で小節線クロックと同期して再生されるようなプレイバック機能を備えた楽音データ処理装置にも応用可能である。この場合、リズム入力の直後の小節における頭から検索結果の楽音データが再生されるには、上記小節の頭の時点より以前、つまりリズム入力が行われた小節内で検索結果が出力される必要がある。また、RAMの記憶容量の問題等により、事前に再生対象である楽音データを予め読み込んでRAMに記憶させておくことが不可能な場合には、リズム入力が行われた小節内で検索結果の楽音データを読み込んでRAMに記憶させる必要がある。このような問題に対しては、制御部21がリズムパターン検索を行うタイミングを、小節の切り替えタイミングよりも、例えば数十msec早いものとすればよい。このようにすれば、小節の切り替えが行われる前に検索が行われ、検索結果の楽音データがRAMに記憶されることで、リズム入力の直後の小節における頭から検索結果の楽音データが再生されることが可能となる。
(変形例38)
入力リズムパターンを1小節単位に限らず、複数小節(N小節とする)に跨るリズムパターンを検索できるように、以下のようにしてもよい。説明の便宜上、ここでは第2実施形態及び第3実施形態を例に挙げて説明する。この場合、例えば、制御部21が、N小節のまとまりを持った入力リズムパターンを用いてリズムパターンテーブルを検索する方法がある。しかしこの方法では、ユーザが、小節線クロックに合わせてリズムパターンを入力する際に、1小節目がどこかを指定する必要がある。また、検索結果がN小節後に出力されるため、検索結果が出力されるまでに時間がかかってしまう。これに対して、次のようにしてもよい。
(変形例39)
制御部21が、入力リズムパターンをRAMに記憶させるにあたり、上述した方法に限らず、以下のようにしてもよい。式(11)は、入力リズムパターンにおいてn番目に入力されたオンセット時刻を求める計算式である。式(11)において、Lは或る小節の先頭を0としたときの、この小節の末尾を表し、0以上の実数である。また、式(11)においてNは、1小節内のクロック回数である分解能を表す。
[(n番目のオンセット時刻-小節の開始時刻)/(小節の終了時刻-小節の開始時刻)×N+0.5]×L/N・・・式(11)
式(11)において、「0.5」の値は、オンセット時刻が算出されるにあたり、端数に対して四捨五入の効果をもたらすものであり、これを、0以上1未満の別の数値に置き換えてもよい。例えば、この値を「0.2」とすると、端数に対して七捨八入の効果がもたらされる。この値は、パラメータとして記憶部22に記憶されており、ユーザが操作部25を用いて変更可能である。
本発明は、リズム入力装置10及び情報処理装置20が一体となった装置により実現されてもよい。例えば、第2実施形態及び第3実施形態における、このような例を考える。この場合、この装置として、例えば、携帯電話や、タッチスクリーンを備えた移動通信端末などが考えられる。本変形例では、この装置がタッチスクリーンを備えた移動通信端末である場合を例に挙げて説明する。
本発明は、楽音データ処理装置以外にも、これらを実現するための方法や、コンピュータに
図4及び図14に示した機能を実現させるためのプログラムとしても把握される。かかるプログラムは、これを記憶させた光ディスク等の記録媒体の形態で提供されたり、インターネット等を介して、コンピュータにダウンロードさせ、これをインストールして利用させるなどの形態でも提供されたりする。
検索モードについては、上述の実施形態における自動伴奏モード、差し替え検索モード、追従検索モードの3種類とは別に、以下のようなモードの切り替えが考えられる。1つ目は、常に検索処理が小節ごとに自動で動作しており、類似した最上位の1つあるいは類似した所定数の検索結果が自動で再生されるモードである。このモードは、主に自動伴奏などを用途として用いられる。2つ目は、ユーザが検索の開始を指示したときにメトロノームのみが再生され、ユーザがリズムを入力すると、リズム入力が終了後、自動で又は操作の指示を与えたことを契機として、検索結果が表示されるモードである。
第1実施形態の変形例として、検索機能がONの状態において、リズムパターン検索部213(図4)が、入力リズムパターンを基準として、一定以上の類似度のリズムパターンを持つ伴奏音源を、類似度が高い順番で一覧表示するようにしてもよい。図30(a)及び図30(b)は、前記第1実施形態の変形例として、伴奏音源についての検索結果の一覧を表した模式図である。図30(a)及び図30(b)に示されるように、伴奏音源についての検索結果一覧は、「ファイル名」、「類似度」、「キー」、「ジャンル」、及び「BPM(Beat Per Minute)」といった複数の項目からなる。「ファイル名」は伴奏音源を一意に識別する名称である。「類似度」は、入力リズムパターンを基準として、伴奏音源のリズムパターンがどの位類似しているかを表す数値であり、数値が低いほど類似の度合いが高い(上述したリズムパターン同士の距離が短い)ことを表す。「キー」は、伴奏音源のキー(音高)の高さを表す。「ジャンル」は、伴奏音源が所属する音楽のジャンル(例えばロック、ラテン等)を表す。「BPM」は、1分間における拍の数であって、伴奏音源のテンポを表す。
ステップSb6におけるリズムパターンのズレの計算において、Aを基準としたBの時刻差と、Bを基準としたAの時刻差との2つを用いていたが(symmetric Distance方式という)、これに限らず、両者のうちどちらか一方のみを用いて計算を行ってもよい。
また、第2実施形態及び第3実施形態において、ユーザが演奏操作子に拠らずに操作部25を用いてパートを指定可能としてもよい。この場合、ユーザが、パートを指定したあとに演奏操作子を操作すると、指定されたパートの入力が行われることとなる。例えば、ユーザが、操作部25を用いて「ベース」のパートを指定すると、その後にコード入力音域鍵盤11bを操作しても、制御部21は、これを「ベース」のパートの入力とみなす、といった具合である。
Claims (11)
- 予め決められた期間における複数の音を示す楽音データと、当該複数の音の発音時刻の並びを表す楽音リズムパターンとを対応付けて記憶する記憶部と、
前記期間における指定時刻を時間経過に伴って進行させるとともに、当該指定時刻をユーザに通知する通知部と、
前記通知部により前記指定時刻が通知されているときにユーザによって入力された操作に基づいて、当該操作のパターンに対応する前記指定時刻の並びを表す入力リズムパターンを取得する取得部と、
前記記憶部に記憶されている楽音データを検索して、前記入力リズムパターンと類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データを特定する検索部と
を備えることを特徴とする楽音データ処理装置。 - 前記記憶部は、
前記楽音リズムパターンが表す前記発音時刻の間隔に基づいて決められたリズムのカテゴリを、当該楽音リズムパターンに対応付けて記憶し、
前記入力リズムパターンが表す各指定時刻の間隔に基づいて当該入力リズムパターンが属するリズムのカテゴリを判定する判定部と、
前記入力リズムパターンと前記楽音リズムパターンとの距離を算出する算出部とを備え、
前記検索部は、
前記入力リズムパターンが属するリズムのカテゴリと前記楽音リズムパターンが属するリズムのカテゴリとの関係、および前記算出された距離に基づいて、前記入力リズムパターンと前記楽音リズムパターンとの前記類似の度合いを算出し、
前記検索部が特定する楽音データは、前記入力リズムパターンと前記算出した類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データである
ことを特徴とする請求項1に記載の楽音データ処理装置。 - 前記検索部は、
前記入力リズムパターンが表す発音時刻間隔の度数分布を表す入力時刻間隔ヒストグラムと、前記楽音リズムパターンにおける前記発音時刻間隔の度数分布を前記リズムのカテゴリごとに表すリズムカテゴリヒストグラムとを比較して、前記入力時刻間隔ヒストグラムに対する類似度が高いことを示す前記リズムカテゴリヒストグラムの前記リズムのカテゴリを特定し、
前記検索部が特定する楽音データは、
前記特定したリズムのカテゴリと対応付けられた楽音リズムパターンのうち、前記類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データである
ことを特徴とする請求項2に記載の楽音データ処理装置。 - 前記予め定められた期間が複数の区間により構成され、
前記記憶部は、
前記区間毎に、前記複数の音の発音時刻の並びを表す楽音リズムパターンと前記楽音データとを対応付けて記憶し、
前記算出部は、
前記入力リズムパターンと前記記憶部に記憶された前記区間毎の前記楽音リズムパターンとの距離を算出し、
前記検索部は、
前記区間毎に前記算出部により算出された、前記入力リズムパターンと前記楽音リズムパターンとの距離と、前記入力リズムパターンが属するリズムのカテゴリと前記楽音リズムパターンが属するリズムのカテゴリとの関係とに基づいて、前記入力リズムパターンと前記楽音リズムパターンとの前記類似の度合いを算出し、
前記検索部が特定する楽音データは、前記入力リズムパターンと前記算出した類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データである
ことを特徴とする請求項2又は3に記載の楽音データ処理装置。 - 楽音データに応じた音を出力する音声出力部に対し、前記通知部による前記指定時刻の通知に同期して、前記検索部によって特定された前記楽音データを供給する供給部
を備えることを特徴とする請求項1~4のいずれかに記載の楽音データ処理装置。 - 前記記憶部は、
前記楽音データが示す音の音高の並びを表す楽音音高パターンを、当該楽音データに対応付けて記憶し、
前記通知部により前記指定時刻が通知されているときにユーザによって入力された操作に基づいて、音高の並びを表す入力音高パターンを取得する音高パターン取得部を備え、
前記検索部は、
前記入力音高パターンと前記楽音音高パターンとにおける各々の音の音高の差の分散に基づいて、前記入力リズムパターンと前記楽音リズムパターンとの前記類似の度合いを算出し、
前記検索部が特定する楽音データは、前記入力リズムパターンと前記算出した類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データである
ことを特徴とする請求項1~5のいずれかに記載の楽音データ処理装置。 - 前記記憶部は、
前記楽音データが示す音の強度の並びを表す楽音ベロシティパターンを、当該楽音データに対応付けて記憶し、
前記通知部により前記指定時刻が通知されているときにユーザによって入力された操作に基づいて、音の強度の並びを表す入力ベロシティパターンを取得するベロシティパターン取得部を備え、
前記検索部は、
前記入力ベロシティパターンと前記楽音ベロシティパターンとにおける各々の音の強度の差の絶対値に基づいて、前記入力リズムパターンと前記楽音リズムパターンとの類似の度合いを算出し、
前記検索部が特定する楽音データは、前記入力リズムパターンと前記算出した類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データである
ことを特徴とする請求項1~6のいずれかに記載の楽音データ処理装置。 - 前記記憶部は、
前記楽音データが示す音の音長の並びを表す楽音デュレーションパターンを、当該楽音データに対応付けて記憶し、
前記通知部により前記指定時刻が通知されているときにユーザによって入力された操作に基づいて、音の音長の並びを表す入力デュレーションパターンを取得するデュレーションパターン取得部を備え、
前記検索部は、
前記入力デュレーションパターンと前記楽音デュレーションパターンとにおける各々の音の音長の差の絶対値に基づいて、前記入力リズムパターンと前記楽音リズムパターンとの類似の度合いを算出し、
前記検索部が特定する楽音データは、前記入力リズムパターンと前記算出した類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データである
ことを特徴とする請求項1~7のいずれかに記載の楽音データ処理装置。 - ユーザによる演奏操作が入力される入力装置と、
請求項1~8のいずれかに記載の楽音データ処理装置であって、当該楽音データ処理装置の前記通知部により前記予め決められた期間における指定時刻が進行されているときに前記入力装置に対して前記ユーザにより各々の演奏操作が入力された時刻間隔の並びを、各々の音が発音される発音時刻間隔の並びを表すリズムパターンとして取得する楽音データ処理装置と
を備えることを特徴する音楽データ作成システム。 - 楽音データを検索するための、コンピュータによって実行される方法であって、
予め決められた期間における複数の音を示す楽音データと、当該複数の音の発音時刻の並びを表す楽音リズムパターンとを対応付けて記憶する手順と、
前記期間における指定時刻を時間経過に伴って進行させるとともに、当該指定時刻をユーザに通知する手順と、
前記通知する手順により前記指定時刻が通知されているときにユーザによって入力された操作に基づいて、当該操作のパターンに対応する前記指定時刻の並びを表す入力リズムパターンを取得する手順と、
記憶装置に記憶されている楽音データを検索して、前記入力リズムパターンと類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データを特定する手順、
を具備する方法。 - コンピュータ読み取り可能な記憶媒体であって、コンピュータに、
予め決められた期間における複数の音を示す楽音データと、当該複数の音の発音時刻の並びを表す楽音リズムパターンとを対応付けて記憶する手順と、
前記期間における指定時刻を時間経過に伴って進行させるとともに、当該指定時刻をユーザに通知する手順と、
前記通知する手順により前記指定時刻が通知されているときにユーザによって入力された操作に基づいて、当該操作のパターンに対応する前記指定時刻の並びを表す入力リズムパターンを取得する手順と、
記憶装置に記憶されている楽音データを検索して、前記入力リズムパターンと類似の度合いが所定の条件に合致する楽音リズムパターンに対応付けられた楽音データを特定する手順、
を実行させるためのプログラムを記憶している記憶媒体。
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/395,433 US9053696B2 (en) | 2010-12-01 | 2011-12-01 | Searching for a tone data set based on a degree of similarity to a rhythm pattern |
JP2012513378A JP5949544B2 (ja) | 2010-12-01 | 2011-12-01 | リズムパターンの類似度に基づく楽音データの検索 |
CN2011800038408A CN102640211B (zh) | 2010-12-01 | 2011-12-01 | 根据与节奏模式的相似度搜索乐音数据组 |
EP11822840.2A EP2648181B1 (en) | 2010-12-01 | 2011-12-01 | Musical data retrieval on the basis of rhythm pattern similarity |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010268661 | 2010-12-01 | ||
JP2010-268661 | 2010-12-01 | ||
JP2011263088 | 2011-11-30 | ||
JP2011-263088 | 2011-11-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012074070A1 true WO2012074070A1 (ja) | 2012-06-07 |
Family
ID=46171995
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/077839 WO2012074070A1 (ja) | 2010-12-01 | 2011-12-01 | リズムパターンの類似度に基づく楽音データの検索 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9053696B2 (ja) |
EP (1) | EP2648181B1 (ja) |
JP (1) | JP5949544B2 (ja) |
CN (1) | CN102640211B (ja) |
WO (1) | WO2012074070A1 (ja) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8507781B2 (en) * | 2009-06-11 | 2013-08-13 | Harman International Industries Canada Limited | Rhythm recognition from an audio signal |
JP2011164171A (ja) * | 2010-02-05 | 2011-08-25 | Yamaha Corp | データ検索装置 |
CA2746274C (en) * | 2010-07-14 | 2016-01-12 | Andy Shoniker | Device and method for rhythm training |
JP5728888B2 (ja) * | 2010-10-29 | 2015-06-03 | ソニー株式会社 | 信号処理装置および方法、並びにプログラム |
WO2012132856A1 (ja) * | 2011-03-25 | 2012-10-04 | ヤマハ株式会社 | 伴奏データ生成装置 |
JP5891656B2 (ja) * | 2011-08-31 | 2016-03-23 | ヤマハ株式会社 | 伴奏データ生成装置及びプログラム |
US8614388B2 (en) * | 2011-10-31 | 2013-12-24 | Apple Inc. | System and method for generating customized chords |
US9219992B2 (en) * | 2012-09-12 | 2015-12-22 | Google Inc. | Mobile device profiling based on speed |
US9012754B2 (en) | 2013-07-13 | 2015-04-21 | Apple Inc. | System and method for generating a rhythmic accompaniment for a musical performance |
JP6048586B2 (ja) * | 2014-01-16 | 2016-12-21 | ヤマハ株式会社 | リンクにより音設定情報を設定し編集すること |
JP6759545B2 (ja) * | 2015-09-15 | 2020-09-23 | ヤマハ株式会社 | 評価装置およびプログラム |
US9651921B1 (en) * | 2016-03-04 | 2017-05-16 | Google Inc. | Metronome embedded in search results page and unaffected by lock screen transition |
US11024272B2 (en) * | 2017-01-19 | 2021-06-01 | Inmusic Brands, Inc. | Graphical interface for selecting a musical drum kit on an electronic drum module |
US10510327B2 (en) * | 2017-04-27 | 2019-12-17 | Harman International Industries, Incorporated | Musical instrument for input to electrical devices |
EP3428911B1 (en) * | 2017-07-10 | 2021-03-31 | Harman International Industries, Incorporated | Device configurations and methods for generating drum patterns |
JP2019200390A (ja) | 2018-05-18 | 2019-11-21 | ローランド株式会社 | 自動演奏装置および自動演奏プログラム |
WO2019226861A1 (en) * | 2018-05-24 | 2019-11-28 | Aimi Inc. | Music generator |
US10838980B2 (en) * | 2018-07-23 | 2020-11-17 | Sap Se | Asynchronous collector objects |
JP7140096B2 (ja) * | 2019-12-23 | 2022-09-21 | カシオ計算機株式会社 | プログラム、方法、電子機器、および演奏データ表示システム |
WO2021163377A1 (en) | 2020-02-11 | 2021-08-19 | Aimi Inc. | Music content generation |
EP4350684A1 (en) * | 2022-09-28 | 2024-04-10 | Yousician Oy | Automatic musician assistance |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69908226T2 (de) | 1998-03-19 | 2004-03-25 | Tomonari Sonoda | Vorrichtung und Verfahren zum Wiederauffinden von Melodien |
KR100893797B1 (ko) * | 2000-12-07 | 2009-04-20 | 소니 가부시끼 가이샤 | 콘텐츠 검색 장치 및 방법과 통신 시스템 및 방법 |
JP4520490B2 (ja) * | 2007-07-06 | 2010-08-04 | 株式会社ソニー・コンピュータエンタテインメント | ゲーム装置、ゲーム制御方法、及びゲーム制御プログラム |
JP5560861B2 (ja) * | 2010-04-07 | 2014-07-30 | ヤマハ株式会社 | 楽曲解析装置 |
-
2011
- 2011-12-01 CN CN2011800038408A patent/CN102640211B/zh not_active Expired - Fee Related
- 2011-12-01 JP JP2012513378A patent/JP5949544B2/ja not_active Expired - Fee Related
- 2011-12-01 WO PCT/JP2011/077839 patent/WO2012074070A1/ja active Application Filing
- 2011-12-01 US US13/395,433 patent/US9053696B2/en active Active
- 2011-12-01 EP EP11822840.2A patent/EP2648181B1/en not_active Not-in-force
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0887297A (ja) * | 1994-09-20 | 1996-04-02 | Fujitsu Ltd | 音声合成システム |
JP2000029487A (ja) * | 1998-07-08 | 2000-01-28 | Nec Corp | 発音記号を用いた音声データ変換・復元装置 |
JP2000187671A (ja) * | 1998-12-21 | 2000-07-04 | Tomoya Sonoda | ネットワ―クを利用した歌声による曲検索システム及び検索時に用いる歌声の入力端末装置 |
JP2002047066A (ja) | 2000-08-02 | 2002-02-12 | Tokai Carbon Co Ltd | SiC成形体およびその製造方法 |
JP2002215632A (ja) * | 2001-01-18 | 2002-08-02 | Nec Corp | 携帯端末を用いた音楽検索システム、音楽検索方法、および購入方法 |
JP2005227850A (ja) * | 2004-02-10 | 2005-08-25 | Toshiba Corp | 情報処理装置、情報処理方法及びプログラム |
JP2005338353A (ja) * | 2004-05-26 | 2005-12-08 | Matsushita Electric Ind Co Ltd | 音楽検索装置 |
JP2006106818A (ja) | 2004-09-30 | 2006-04-20 | Toshiba Corp | 音楽検索装置、音楽検索方法及び音楽検索プログラム |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103514158A (zh) * | 2012-06-15 | 2014-01-15 | 国基电子(上海)有限公司 | 音乐文件搜索方法及多媒体播放装置 |
JP2014029425A (ja) * | 2012-07-31 | 2014-02-13 | Yamaha Corp | 伴奏進行生成装置及びプログラム |
JP2016191855A (ja) * | 2015-03-31 | 2016-11-10 | カシオ計算機株式会社 | ジャンル選択装置、ジャンル選択方法、プログラムおよび電子楽器 |
WO2021044563A1 (ja) * | 2019-09-04 | 2021-03-11 | ローランド株式会社 | 自動演奏装置および自動演奏プログラム |
JPWO2021044563A1 (ja) * | 2019-09-04 | 2021-03-11 | ||
JP7190056B2 (ja) | 2019-09-04 | 2022-12-14 | ローランド株式会社 | 自動演奏装置および自動演奏プログラム |
Also Published As
Publication number | Publication date |
---|---|
EP2648181A1 (en) | 2013-10-09 |
CN102640211A (zh) | 2012-08-15 |
JPWO2012074070A1 (ja) | 2014-05-19 |
EP2648181B1 (en) | 2017-07-26 |
CN102640211B (zh) | 2013-11-20 |
JP5949544B2 (ja) | 2016-07-06 |
US20120192701A1 (en) | 2012-08-02 |
US9053696B2 (en) | 2015-06-09 |
EP2648181A4 (en) | 2014-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5949544B2 (ja) | リズムパターンの類似度に基づく楽音データの検索 | |
JP6056437B2 (ja) | 音データ処理装置及びプログラム | |
JP5982980B2 (ja) | 楽音発生パターンを示すクエリーを用いて演奏データの検索を行う装置、方法および記憶媒体 | |
JP5970934B2 (ja) | 楽音発生パターンを示すクエリーを用いて演奏データの検索を行う装置、方法および記録媒体 | |
US20210326102A1 (en) | Method and device for determining mixing parameters based on decomposed audio data | |
US7626112B2 (en) | Music editing apparatus and method and program | |
JP3598598B2 (ja) | カラオケ装置 | |
JP5799977B2 (ja) | 音符列解析装置 | |
US8791350B2 (en) | Accompaniment data generating apparatus | |
JP5879996B2 (ja) | 音信号生成装置及びプログラム | |
US20030167907A1 (en) | Electronic musical instrument and method of performing the same | |
JP3879524B2 (ja) | 波形生成方法、演奏データ処理方法および波形選択装置 | |
JP5909967B2 (ja) | 調判定装置、調判定方法及び調判定プログラム | |
JP7425558B2 (ja) | コード検出装置及びコード検出プログラム | |
JP6036800B2 (ja) | 音信号生成装置及びプログラム | |
JP2012168323A (ja) | 音信号生成装置及びプログラム |
Legal Events
Code | Title | Description
---|---|---
WWE | Wipo information: entry into national phase | Ref document number: 201180003840.8; Country of ref document: CN
WWE | Wipo information: entry into national phase | Ref document number: 2012513378; Country of ref document: JP
REEP | Request for entry into the european phase | Ref document number: 2011822840; Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 2011822840; Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 13395433; Country of ref document: US
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11822840; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE