CN112634847A - Electronic musical instrument, control method, and storage medium - Google Patents

Electronic musical instrument, control method, and storage medium

Info

Publication number
CN112634847A
Authority
CN
China
Prior art keywords
setting
data
user operation
automatic accompaniment
timing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010828464.7A
Other languages
Chinese (zh)
Inventor
吉野顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casio Computer Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Computer Co Ltd filed Critical Casio Computer Co Ltd
Publication of CN112634847A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/0008 Associated control or indicating means
    • G10H1/0016 Means for indicating which keys, frets or strings are to be actuated, e.g. using lights or leds
    • G10H1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0058 Transmission between separate instruments or between individual components of a musical system
    • G10H1/0066 Transmission between separate instruments or between individual components of a musical system using a MIDI interface

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

An electronic musical instrument, a control method, and a storage medium. An electronic musical instrument includes a sound source and a processor. The processor instructs the sound source to emit accompaniment sounds corresponding to an automatic accompaniment pattern; instructs the sound source to emit, based on a 1st setting, a musical tone corresponding to a pitch specified by a 1st user operation, the 1st setting corresponding to a 1st timing of setting data and the 1st user operation corresponding to a 1st timing of the automatic accompaniment pattern; and instructs the sound source to emit, based on a 2nd setting, a musical tone corresponding to a pitch specified by a 2nd user operation, the 2nd setting corresponding to a 2nd timing of the setting data and differing from the 1st setting, the 2nd user operation corresponding to a 2nd timing of the automatic accompaniment pattern. The invention enables richer performance expression.

Description

Electronic musical instrument, control method, and storage medium
Technical Field
The invention relates to an electronic musical instrument, a control method, and a storage medium.
Background
An electronic musical instrument such as an electronic keyboard includes a processor and a memory, and can be regarded as a computer with a keyboard attached. Models equipped with a sound source and speakers can play a variety of timbres on their own. Most models conform to the MIDI (Musical Instrument Digital Interface) standard, and some have an automatic accompaniment function.
The automatic accompaniment function has several modes. In the basic mode, pressing the start button starts loop playback of the rhythm part and the instrument waits for a chord to be specified. If Do, Mi, and So are pressed in this state, the instrument interprets them as a C major chord and automatically plays the accompaniment corresponding to that chord. In another mode, the instrument automatically selects a rhythm pattern suited to the melody line played by the user and plays it in the background.
Patent document 1: Japanese Patent Laid-Open No. 2001-175263
Disclosure of Invention
An electronic musical instrument according to one embodiment of the present invention includes a sound source and a processor. The processor:
instructs the sound source to emit accompaniment sounds corresponding to an automatic accompaniment pattern;
instructs the sound source to emit, based on a 1st setting, a musical tone corresponding to a pitch specified according to a 1st user operation, the 1st user operation corresponding to a 1st timing of the automatic accompaniment pattern and the 1st setting corresponding to the 1st timing of setting data; and
instructs the sound source to emit, based on a 2nd setting, a musical tone corresponding to a pitch specified according to a 2nd user operation, the 2nd user operation corresponding to a 2nd timing of the automatic accompaniment pattern and the 2nd setting corresponding to the 2nd timing of the setting data and differing from the 1st setting.
Thus, the 1st setting corresponding to the 1st timing of the automatic accompaniment pattern can be reflected in the musical tone whose pitch is specified by the user's 1st performance operation, and the 2nd setting corresponding to the 2nd timing of the automatic accompaniment pattern can be reflected in the musical tone whose pitch is specified by the user's 2nd performance operation. The invention thus enables richer performance expression.
Drawings
Fig. 1 is an external view showing an example of an electronic musical instrument according to the embodiment.
Fig. 2 is a block diagram showing an example of a control system of the electronic keyboard instrument according to the embodiment.
Fig. 3 is a functional block diagram showing an example of processing functions of the CPU201 and contents stored in the ROM202 according to the embodiment.
Fig. 4A is a diagram showing an example of an automatic accompaniment pattern.
Fig. 4B is a diagram showing an example of automatic accompaniment data corresponding to the automatic accompaniment pattern of fig. 4A.
Fig. 5A is a diagram showing an example of an automatic accompaniment pattern.
Fig. 5B is a diagram showing an example of automatic accompaniment data corresponding to the automatic accompaniment pattern of fig. 5A.
Fig. 6 is a flowchart showing an example of the processing procedure of the accompaniment control unit 201 b.
Fig. 7 is a diagram for explaining how the note information shown in fig. 5B is changed to follow the background chord.
Fig. 8 is a diagram showing an example of the accompaniment generated by the accompaniment control unit 201 b.
Fig. 9 is a diagram showing an example of setting data stored in the ROM 202.
Fig. 10 is a diagram for explaining effect additional data for controlling brightness.
Fig. 11 is a flowchart showing an example of the procedure of accompaniment sound generation processing according to the embodiment.
Fig. 12 is a diagram for explaining a glide sound.
Fig. 13 is a diagram showing another example of the setting data stored in the ROM 202.
Fig. 14 is a diagram showing another example of the setting data stored in the ROM 202.
Fig. 15 is a diagram showing another example of the setting data stored in the ROM 202.
Fig. 16 is a diagram showing another example of the setting data stored in the ROM 202.
Fig. 17 is a diagram showing another example of the setting data stored in the ROM 202.
Fig. 18 is a diagram showing another example of the setting data stored in the ROM 202.
Fig. 19 is a diagram showing another example of the setting data stored in the ROM 202.
Detailed Description
Hereinafter, an embodiment according to an aspect of the present invention will be described with reference to the drawings. The embodiments described below are merely illustrative in all aspects, and various improvements and modifications can be made without departing from the scope of the present invention. That is, when the present invention is implemented, the specific configuration according to the embodiment can be appropriately adopted.
< Appearance and keyboard >
Fig. 1 is an external view showing an example of an electronic musical instrument according to the embodiment. In the embodiment, an electronic keyboard 100 is assumed as the electronic musical instrument. The electronic keyboard 100 includes a keyboard 101, a 1st switch panel 102, a 2nd switch panel 103, and an LCD (Liquid Crystal Display) 104.
The keyboard 101 is a collection of keys. Each key is an operation member for specifying a pitch. A light emitting diode or the like is embedded in each key so that it can light up, allowing the keyboard to double as a performance guide.
The keyboard 101 includes a chord input keyboard 101a and a melody input keyboard 101b, the latter arranged on the higher-pitch side of the former. The chord input keyboard 101a is played with the left hand to specify the root note and chord for the automatic accompaniment. The melody input keyboard 101b is played with the right hand to play the melody. The split point forming the boundary between the chord input keyboard 101a and the melody input keyboard 101b is, for example, preset to the key of pitch F3. Some models allow the split point to be changed.
The 1st switch panel 102 is a user interface for various settings such as specifying the volume, setting the tempo of automatic performance, and starting automatic performance. The 2nd switch panel 103 is used for selecting modes, automatic-performance songs, timbres, and the like. The LCD 104 functions as a display unit for showing the automatic accompaniment, lyrics during automatic performance, and various setting information. The electronic keyboard 100 may also be provided with speakers (sound emitting portions) for emitting the musical sounds generated by the performance, for example on its underside, side, or rear surfaces.
< Structure >
Fig. 2 is a block diagram showing an example of a control system 200 of the electronic keyboard 100 according to the embodiment. The control system 200 includes a RAM (Random Access Memory) 203, a ROM (Read Only Memory) 202, the LCD 104, an LCD controller 208, an LED (Light Emitting Diode) controller 207, the keyboard 101, the 1st switch panel 102, the 2nd switch panel 103, a key scanner 206, a MIDI interface (I/F) 215, a system bus 209, a CPU (Central Processing Unit) 201, a timer 210, a sound source system 300, and an audio system 400.
The sound source system 300 includes a sound source 204 including a DSP (Digital Signal Processor), for example, and an effector 212. The audio system 400 includes a digital-analog converter 211 and an amplifier 214.
The CPU201, ROM202, RAM203, sound source 204, digital-analog converter 211, key scanner 206, LED controller 207, LCD controller 208, MIDI interface 215 are connected to a system bus 209, respectively.
The CPU201 is a processor that controls the electronic keyboard 100. That is, the CPU201 loads a program stored in the ROM202 into the RAM203, which serves as working memory, and executes it to realize the various functions of the electronic keyboard 100. The CPU201 operates in accordance with a clock supplied from the timer 210. The clock is used, for example, to control the sequencing of automatic performance and automatic accompaniment.
The ROM202 stores the programs for implementing the processing of the embodiment, various setting data, automatic accompaniment data, and the like. The automatic accompaniment data may include melody data for preset rhythm patterns, chord progressions, bass patterns, obbligato parts, and the like. The melody data may include pitch information for each note, sounding timing information for each note, and so on.
The sounding timing of each note may be given as the interval between notes or as the elapsed time from the start of the automatic performance. The unit of time is the tick. A tick is a unit based on the tempo of the song, as used in typical sequencers. For example, if the sequencer resolution is 480, then 1/480 of the duration of a quarter note is 1 tick.
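As an illustration of this timing unit (not something specified in the patent), the length of a tick in seconds follows directly from the tempo and the sequencer resolution; the function names below are hypothetical.

```python
def tick_duration_seconds(bpm: float, resolution: int = 480) -> float:
    """Length of one tick in seconds: a quarter note lasts 60/bpm seconds
    and is divided into `resolution` ticks."""
    return (60.0 / bpm) / resolution

def ticks_to_seconds(ticks: int, bpm: float, resolution: int = 480) -> float:
    """Convert a tick count (e.g. a note duration) into seconds."""
    return ticks * tick_duration_seconds(bpm, resolution)

# At 120 BPM with a resolution of 96 ticks per quarter note (the value used
# later in this description), an eighth note of 48 ticks lasts 0.25 s.
print(ticks_to_seconds(48, bpm=120, resolution=96))  # 0.25
```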
The automatic accompaniment data is not limited to being stored in the ROM202, and may be stored in an information storage device or an information storage medium, not shown. The format of the automatic accompaniment data may also be in accordance with a file format for MIDI.
The sound source 204 is, for example, a so-called GM sound source conforming to the GM (General MIDI) standard. Such a sound source changes its timbre when given a Program Change as a MIDI message, and controls predetermined effects when given a Control Change.
The sound source 204 can, for example, emit up to 256 sounds simultaneously. The sound source 204 reads musical tone waveform data from a waveform ROM (not shown), for example, and outputs it to the effector 212 as digital musical tone waveform data. The effector 212 adds various effects by processing the digital tone waveform data. Typical effectors include an equalizer that emphasizes a specific frequency band and a delay that produces an echo effect by overlapping slightly time-shifted sounds. The wet sound with an effect applied, or the dry sound with no effect applied, is output to the digital-to-analog converter 211 as digital musical tone waveform data.
The digital-to-analog converter 211 converts the digital tone waveform data into an analog tone waveform signal. The analog musical tone waveform signal is amplified by the amplifier 214 and output from a speaker or an output terminal, not shown.
Even without using the effector 212, effects can be obtained by controlling the sound source 204 with MIDI messages. For example, glide (portamento) can be switched on and off and its degree specified on a scale of 0 to 127 (the glide time) by Control Change messages. Various effects such as reverb, tremolo, chorus, and celeste are also defined in MIDI. Effects such as brightness and modulation can likewise be controlled by Control Change. Furthermore, the effects obtained by operating a pitch bender or a modulation wheel can also be produced by Control Change.
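The following is a minimal sketch, not taken from the patent, of how such Control Change and Program Change messages could be sent from software using the third-party mido library; the controller numbers follow the common General MIDI assignments (74 for brightness, 65 and 5 for portamento), and an available MIDI output port on the host system is assumed.

```python
import mido

port = mido.open_output()  # opens the default MIDI output port, if one exists

# Timbre change: a Program Change selects an instrument (e.g. a GM alto sax).
port.send(mido.Message('program_change', program=65, channel=0))

# Brightness: Control Change 74 in common GM assignments (0 = dull, 127 = bright).
port.send(mido.Message('control_change', control=74, value=100, channel=0))

# Glide (portamento): CC 65 switches it on, CC 5 sets the glide time.
port.send(mido.Message('control_change', control=65, value=127, channel=0))
port.send(mido.Message('control_change', control=5, value=95, channel=0))

# A note with a velocity value, as produced by a key press and release.
port.send(mido.Message('note_on', note=60, velocity=100, channel=0))
port.send(mido.Message('note_off', note=60, channel=0))
```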
The key scanner 206 continuously monitors the key press/release state of the keyboard 101 and the switch operation states of the 1st switch panel 102 and the 2nd switch panel 103, and reports the states of the keyboard 101, the 1st switch panel 102, and the 2nd switch panel 103 to the CPU201.
The LED controller 207 is, for example, an IC (Integrated Circuit). The LED controller 207 lights keys of the keyboard 101 in accordance with instructions from the CPU201 to guide the player's performance. The LCD controller 208 is an IC that controls the display state of the LCD 104.
The MIDI interface 215 receives MIDI messages (performance data and the like) from external devices such as the MIDI device 4 and outputs MIDI messages to external devices. The electronic keyboard 100 can exchange MIDI messages and MIDI data files with external devices over an interface such as USB (Universal Serial Bus). Received MIDI messages are passed to the sound source 204 via the CPU201. The sound source 204 produces sound in accordance with the timbre, volume, timing, and so on specified by the MIDI messages.
A storage device 3 serving as a removable medium may be connected to the system bus 209 via, for example, USB. Examples of the storage device 3 include a USB memory, a floppy disk drive (FDD), a hard disk drive (HDD), a CD-ROM drive, and a magneto-optical disk (MO) drive. If the program is not stored in the ROM202, it can instead be stored in the storage device 3 and read into the RAM203, which allows the CPU201 to operate exactly as if the program were stored in the ROM202.
Fig. 3 is a functional block diagram showing an example of processing functions of the CPU201 and contents stored in the ROM202 according to the embodiment.
The ROM202 stores automatic accompaniment data a1 to an and setting data b1 to bm, in addition to the program 500 that realizes the processing functions of the CPU201. The automatic accompaniment data a1 to an represent various automatic accompaniment patterns prepared in advance for each part, for example as sets of MIDI data. The automatic accompaniment data corresponding to the accompaniment pattern shown in fig. 4A has, for example, the content shown in fig. 4B.
< Automatic accompaniment data >
Fig. 4A shows an automatic accompaniment pattern in which a C major chord is sounded twice per measure as half notes. As shown in fig. 4B, the automatic accompaniment pattern can be expressed by numerical values representing the timing within the score, symbols representing note information (note name and pitch), and numerical values representing the velocity and duration. The unit of duration is, for example, the tick. The number of ticks per beat is often set to 96; in that case the duration of an eighth note counts as 48 ticks and that of a half note as 192 ticks. In fig. 4B the duration is 160 ticks, so the notes are sounded slightly shorter than the full length of a half note.
Fig. 5A is an example of an obbligato (accompaniment) part for the C major chord: an automatic accompaniment pattern in which the chord tones are repeated as eighth notes. As shown in fig. 5B, the timing, note information, velocity, and duration of each note are set. The duration of 40 ticks is set slightly shorter than the full length of an eighth note.
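As an illustration only (the patent specifies no data format beyond what figs. 4B and 5B show), such a pattern could be held in memory as a list of note events; the field and variable names below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccompanimentNote:
    timing: int    # position within the pattern, in ticks
    note: int      # MIDI note number (60 = C4)
    velocity: int  # key-on velocity, 0-127
    duration: int  # gate time in ticks

# Eighth-note repetition of the C major chord tones at 96 ticks per beat,
# each note held 40 ticks (slightly shorter than a full 48-tick eighth note),
# in the spirit of figs. 5A and 5B.
C_MAJOR_OBBLIGATO = [
    AccompanimentNote(timing=pos * 48, note=pitch, velocity=100, duration=40)
    for pos in range(8)          # eight eighth-note positions in one 4/4 measure
    for pitch in (60, 64, 67)    # C4, E4, G4
]
```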
< Functions of the CPU201 >
The CPU201 includes a chord detection unit 201a, an accompaniment control unit 201b, and a setting reflection unit 201c as processing functions of the embodiment. These functions are implemented by the program 500.
The chord detection unit 201a determines, based on a user operation, which automatic accompaniment pattern is to be sounded from among a plurality of automatic accompaniment patterns. The user operation here is, for example, an operation (performance) on the chord input keyboard 101a. The output data of the chord input keyboard 101a is delivered to the chord detection unit 201a, while the output data corresponding to operations (performance) on the melody input keyboard 101b is delivered to the setting reflection unit 201c.
The automatic accompaniment patterns referred to in the present application include variation patterns in which the note information contained in given accompaniment data has been changed. That is, the chord detection unit 201a may select any one of a plurality of variation patterns (automatic accompaniment sounding patterns) in response to a user operation.
In the embodiment of the present application, a root note value and a chord type value are determined based on the user operation on the chord input keyboard 101a, and the way in which the automatic accompaniment pattern is sounded is determined based on the determined root note value and chord type value. The automatic accompaniment pattern itself may also be selected based on the user operation on the chord input keyboard 101a.
The accompaniment control unit 201b instructs the sound source 204 or the sound source system 300 to generate accompaniment sounds corresponding to the automatic accompaniment pattern determined by the chord detection unit 201a. That is, when any key of the chord input keyboard 101a is pressed, the accompaniment control unit 201b determines the root note value and the chord type value based on the note name (note number) associated with that key. Here, the root note value is one of the 12 note-name values from C to B, and the chord type value is numerical data associated with the type of chord. Incidentally, chord types include, for example, M (major), m (minor), dim, aug, sus4, sus2, 7th, M7, m7, m7♭5, 7sus4, add9, madd9, mM7, dim7, 69, 6th, m6, and the like.
Then, the accompaniment control unit 201b determines a chord name based on the decided root note value and chord type value. Further, the accompaniment control unit 201b determines the automatic accompaniment pattern to be sounded from among the plurality of automatic accompaniment patterns based on the determined chord name, and reads the corresponding automatic accompaniment data from the ROM202.
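The patent does not spell out the detection algorithm itself. As a rough sketch under that caveat, the root note value and chord type value could be found by comparing the pitch classes of the pressed keys against interval templates; the template table and function below are assumptions made for illustration.

```python
# Hypothetical interval templates: semitone offsets from the root.
CHORD_TEMPLATES = {
    "M":    (0, 4, 7),
    "m":    (0, 3, 7),
    "dim":  (0, 3, 6),
    "aug":  (0, 4, 8),
    "sus4": (0, 5, 7),
    "7th":  (0, 4, 7, 10),
    "M7":   (0, 4, 7, 11),
    "m7":   (0, 3, 7, 10),
}

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def detect_chord(pressed_notes):
    """Return (root_name, chord_type) for the pressed MIDI note numbers,
    or None if no template matches."""
    pitch_classes = {n % 12 for n in pressed_notes}
    for root in range(12):
        for chord_type, template in CHORD_TEMPLATES.items():
            if pitch_classes == {(root + i) % 12 for i in template}:
                return NOTE_NAMES[root], chord_type
    return None

# Do, Mi, So pressed on the chord input keyboard are read as a C major chord.
print(detect_chord([60, 64, 67]))      # ('C', 'M')
print(detect_chord([62, 65, 69, 72]))  # ('D', 'm7')
```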
The accompaniment control unit 201b generates the accompaniment based on the read automatic accompaniment data and the chord type value delivered from the chord detection unit 201a, and delivers it to the sound source system 300. The sound source system 300 converts the accompaniment generated by the accompaniment control unit 201b into audio data for each part and outputs it to the audio system 400.
The setting reflection unit 201c reflects the settings based on the setting data associated with the automatic accompaniment pattern determined by the chord detection unit 201a in the musical tones of the pitches specified by key presses on the melody input keyboard 101b. That is, the setting reflection unit 201c creates, for example, a MIDI message in which the selected setting data is reflected for the phrase played on the melody input keyboard 101b, and gives that MIDI message to the sound source system 300, thereby applying an effect to the right-hand performance. Alternatively, various settings corresponding to the selected setting data may be reflected in, for example, MIDI messages created from key presses on the melody input keyboard 101b. Alternatively, a MIDI message may be generated according to the selected setting data at a predetermined timing regardless of whether any key of the melody input keyboard 101b is pressed.
The setting data includes a 1st setting corresponding to a 1st timing of the automatic accompaniment pattern and a 2nd setting corresponding to a 2nd timing of the automatic accompaniment pattern. That is, the setting data contains a plurality of values (a 1st setting value and a 2nd setting value) for different timings within the data of the automatic accompaniment pattern.
That is, the processor instructs the sound source to emit, based on a 1st setting corresponding to the 1st timing of the setting data, a musical tone corresponding to a pitch specified by a 1st user operation that corresponds to a 1st timing of the automatic accompaniment pattern, and instructs the sound source to emit, based on a 2nd setting that corresponds to the 2nd timing of the setting data and differs from the 1st setting, a musical tone corresponding to a pitch specified by a 2nd user operation that corresponds to a 2nd timing of the automatic accompaniment pattern.
In addition, the data of the automatic accompaniment patterns and the setting data are stored in advance in the memory (ROM202) provided in the electronic musical instrument, before the user starts performing.
The setting data includes at least any one of the following data:
effect addition data for adding a sound effect to the musical sound;
tone color changing data for changing the set 1st tone color to a 2nd tone color;
volume change data for changing the 1st volume designated by the user operation to a 2nd volume; and
pitch change data for changing the 1st pitch designated by the user operation to a 2nd pitch.
Based on a user operation on the chord input keyboard, the processor instructs the sound source to emit accompaniment sounds corresponding to the automatic accompaniment pattern. The settings based on the setting data are reflected in musical tones corresponding to pitches specified by user operations on the melody input keyboard, which is arranged on the higher-pitch side of the chord input keyboard.
< Generation of accompaniment >
Fig. 6 is a flowchart showing an example of the processing procedure of the accompaniment control unit 201b. For all parts other than the drum part, the accompaniment control unit 201b changes the note information (pitch) of the automatic accompaniment data in accordance with the background chord; the drum part does not need its note information changed, since the character of its sounds does not depend on the chord.
When the accompaniment is started in fig. 6, if the automatic accompaniment data is not for the drum part (no in step S1), the accompaniment control unit 201b changes the note information of the automatic accompaniment data in accordance with the chord data (chord name) detected by the chord detection unit 201a (step S2). As shown in fig. 7, if the chord detected by the chord detection unit 201a is F major, the accompaniment control unit 201b shifts each note of the C-major automatic accompaniment pattern (figs. 5A and 5B) up a fourth. The melody of the automatic accompaniment is thereby changed as shown in fig. 8.
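A minimal sketch of this kind of chord-following shift, assuming the roots are handled as pitch classes (the actual mapping used by the accompaniment control unit may be more elaborate):

```python
def transpose_notes(note_numbers, source_root: int, target_root: int):
    """Shift MIDI note numbers from the pattern's source chord root to the
    detected chord root (roots as pitch classes: 0 = C, ..., 11 = B)."""
    shift = (target_root - source_root) % 12
    return [n + shift for n in note_numbers]

# The C major pattern of fig. 5B uses the chord tones C4, E4, G4 (60, 64, 67).
# When F major (root 5) is detected, every note moves up a perfect fourth
# (5 semitones), giving F4, A4, C5 as in fig. 8.
print(transpose_notes([60, 64, 67], source_root=0, target_root=5))  # [65, 69, 72]
```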
< Setting data >
Fig. 9 is a diagram showing an example of the setting data stored in the ROM202 (fig. 3). One example of such setting data is what those skilled in the art call "emotion data". As shown in fig. 9, the setting data associates "effect addition data" with "timing"; it is created before the user's performance, at the latest, and stored in the ROM202 in advance. "Timing" is expressed in the form "measure : beat : tick" and corresponds to at least one timing between the playback start and playback end of the automatic accompaniment pattern. Fig. 9 shows the data for measures 1 and 2.
The "effect addition data" for adding an acoustic effect to a sound to be played is set in association with, for example, timing. The brightness setting data b1 will be described as an example.
As shown in fig. 10, brightness is controlled through the gain of the harmonic components, giving the sound a "sparkling", "bright" character. For example, raising the gain of the harmonic components above a boundary around 1 kHz to 2 kHz makes the sound brighter; lowering the gain makes the sound more rounded.
In practice it is sufficient to control the gain over a range of about -12 dB to +12 dB; for example, an 8-bit value with 256 levels in the range -127 to +127 is used as the effect addition data for brightness. If the effect addition data is positive, the gain of the harmonics is raised according to that value and the high-frequency region is emphasized; if it is negative, the gain of the harmonics is lowered according to that value and the high-frequency region is suppressed.
According to fig. 9, for example, a value of 0 is retrieved at beat 1 of measure 1. If a key is pressed at beat 4 of measure 1, the effect addition data there is -24, so the setting reflection unit 201c writes that value into a MIDI message and instructs the sound source 204 accordingly; the high-frequency components of the tone sounded at that timing are reduced, producing a mellow sound. At beat 4 of measure 2 the effect addition data is +100, so the setting reflection unit 201c writes that value into a MIDI message (Control Change) and instructs the sound source 204; the high-frequency components of the tone sounded at that timing are boosted, producing a sparkling sound.
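A minimal sketch of this lookup-and-reflect step, again using the mido library and Control Change 74 for brightness as assumptions; the table values loosely mirror the example above and are not the actual contents of fig. 9.

```python
import mido

# Hypothetical brightness "emotion data": (measure, beat) -> value in -127..+127.
BRIGHTNESS_DATA = {
    (1, 1): 0,
    (1, 4): -24,   # mellow
    (2, 4): +100,  # sparkling
}

def play_melody_note(port, note, velocity, measure, beat, channel=0):
    """Sound a right-hand note, first reflecting any brightness setting
    defined for the current accompaniment timing."""
    value = BRIGHTNESS_DATA.get((measure, beat))
    if value is not None:
        # Map -127..+127 onto the 0..127 Control Change range around centre 64.
        cc_value = max(0, min(127, 64 + value // 2))
        port.send(mido.Message('control_change', control=74,
                               value=cc_value, channel=channel))
    port.send(mido.Message('note_on', note=note, velocity=velocity, channel=channel))

port = mido.open_output()
play_melody_note(port, note=72, velocity=100, measure=2, beat=4)  # bright-sounding C5
```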
Fig. 11 is a flowchart showing an example of the accompaniment sound generation processing according to the embodiment. During the loop from the start to the end of playback of the automatic accompaniment, a program containing instructions for executing this flowchart is invoked, for example, each time a predetermined interrupt timing arrives, and is executed by the CPU201.
In fig. 11, the chord detection unit 201a waits for a key press on the chord input keyboard 101a (step S11), and the setting reflection unit 201c waits for a key press on the melody input keyboard 101b (step S15).
When a key of the chord input keyboard 101a is pressed (yes in step S11), the chord detection unit 201a acquires the pitch information (note information) and velocity information of the pressed key (step S12). When the chord corresponding to that pitch and velocity information is detected (step S13), the chord detection unit 201a delivers the root note value and chord type value of the chord to the accompaniment control unit 201b (step S14) and returns to the caller (return).
On the other hand, when a key press on the melody input keyboard 101b is detected (yes in step S15), the setting reflection unit 201c acquires the pitch information and velocity information (step S16). If effect addition data is defined for the timing at that point and can be acquired from the accompaniment control unit 201b (yes in step S17), the setting reflection unit 201c instructs the sound source 204 to generate the sound based on the pitch information, velocity information, and effect addition data (step S18).
In this way, the sound played with the right hand is automatically given an effect that reflects the state of the performance at that point in time. For example, if the final stage of the automatic accompaniment loop is meant to build tension, a sound with emphasized highs can be produced there by raising the brightness value, which can be expected to heighten the listener's emotion.
If no effect addition data is defined for the timing at that point (no in step S17), the setting reflection unit 201c instructs the sound source 204 to generate the sound based on the pitch information and velocity information alone (step S19), so the unmodified sound is output.
As described above, in the embodiment, setting data for applying effects to the sound is stored in the ROM202 in advance. Then, during the playback loop of the automatic accompaniment operated from the chord input keyboard 101a, setting data corresponding to the playback state of the automatic accompaniment is acquired, and the acquired setting data is reflected in the sound produced by the user's performance on the melody input keyboard 101b. The timbre and character of the right-hand performance thus change automatically with the state of the automatic accompaniment.
With conventional electronic keyboard instruments, the sound of the right-hand performance does not change regardless of the playback state of the automatic accompaniment. It cannot be denied that keyboard instruments offer a narrower range of expression than acoustic instruments such as the saxophone or guitar, and that a saxophone timbre played from a sampling keyboard, for example, tends to sound monotonous.
Keyboard instruments such as the piano and organ are, by their construction, difficult to bend or waver in pitch, so players have to devise workarounds. On string instruments such as the guitar, by contrast, it is known that as players get carried along by the music, the force pressing the strings against the fingerboard increases and the pitch drifts sharp (goes slightly out of tune). Brass instruments likewise have their own distinctive sound-production quirks. To simulate such effects on an electronic keyboard instrument, the player would have to work a volume pedal finely or operate a modulation wheel frequently in real time, which is extremely cumbersome.
In contrast, according to the embodiment, effects are added to the performance sound automatically in accordance with the automatic accompaniment. That is, an effect matching the accompaniment can easily be added to the melody played by the user. Because the timbre and character of the right-hand performance change with the state of the automatic accompaniment (for example, when the accompaniment becomes lively, the right-hand tone changes as well), rich performance expression can be realized. The embodiment can thus provide an electronic musical instrument, a control method, and a program capable of richer performance expression.
[Modification 1]
In the embodiment above, brightness was used as the example of how the setting data (emotion data) is applied. In modification 1, a case where the glide sound is controlled by the setting data is described.
Fig. 12 is a diagram for explaining the glide sound. The glide here is an index of how fast the pitch of a sound changes, and is discussed separately from the playing technique of the same name. Its behavior can be adjusted by specifying the glide time as a parameter. The glide time indicates how quickly the sounding pitch changes continuously from the 1st pitch to the 2nd pitch; in other words, it is the time from the moment a key is pressed until the pitch finishes changing from that of the previous tone to that corresponding to the pressed key.
In the setting of fig. 12(a), the glide time is short (a glide time of 5): when the C key is pressed (the timing of the key press is indicated by the dash-dot line), the pitch moves quickly to C from the immediately preceding G. Such a setting is aimed at expressing the sound of plucked instruments such as the harp or the koto.
In the setting of fig. 12(b), the glide time is relatively long (a glide time of 95), and the change from the G sound to the C sound is gradual. Such a setting is aimed at expressing, for example, guitar playing techniques or the way a singing voice changes pitch. In acoustic instruments in particular, as the performance heats up, the pitch sometimes drifts sharp. According to modification 1, this kind of performance state can be simulated automatically by varying the glide time.
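As a worked illustration of what the glide time means (the 0 to 127 values above are device-dependent and do not map to any particular number of seconds), a linear glide from the previous G up to the newly pressed C can be computed as follows; the function is an assumption, not the patent's implementation.

```python
def glide_pitch(t, start_note, end_note, glide_time):
    """Pitch (as a fractional MIDI note number) t seconds after the key press,
    sliding linearly from the previous note to the new one over glide_time seconds."""
    if glide_time <= 0 or t >= glide_time:
        return float(end_note)
    return start_note + (end_note - start_note) * (t / glide_time)

# Short glide, as in fig. 12(a): the pitch reaches C almost immediately.
print(glide_pitch(0.05, start_note=67, end_note=72, glide_time=0.05))  # 72.0
# Long glide, as in fig. 12(b): halfway through, the pitch is still between G and C.
print(glide_pitch(0.5, start_note=67, end_note=72, glide_time=1.0))    # 69.5
```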
Fig. 13 is a diagram showing another example of the setting data stored in the ROM202. As with the brightness data of fig. 9, a glide time that can be set in the range 0 to 127 is stored as effect addition data for each timing. The composer's intent here is to lengthen the glide time in the second half of measure 1 to lift the atmosphere, and to lengthen it further in the second half of measure 2 to lift the atmosphere markedly.
In this way, in modification 1 the glide changes automatically with the state of the automatic accompaniment, so the player's growing excitement, for example, can be expressed easily.
[Modification 2]
In the description above, effect addition data that adds an acoustic effect to the musical sound was used as the example of setting data. In modification 2, tone color change data that changes the currently set 1st tone color to a 2nd tone color is described as an example of setting data. The tone color change data is, for example, a program number (tone color number) carried in a MIDI message; the tone color change can be instructed by sending the sound source 204 a Program Change for the new tone color. Here, a saxophone tone color is assumed.
Fig. 14 is a diagram showing another example of the setting data stored in the ROM202. Three tone color variations, legato, normal, and blow, are set as the effect addition data at each timing. The composer's intent is to play smoothly and connectedly in the first half of each measure, gradually raise the tension in the second half, and greatly lift the atmosphere with the blowing style in the second half of measure 2.
[Modification 3]
In modification 3, volume change data that changes the 1st volume specified by the performance on the melody input keyboard 101b to a 2nd volume is described as an example of setting data.
Fig. 15 is a diagram showing another example of the setting data stored in the ROM202. Here, a value that further fine-tunes the detected velocity is set as the effect addition data for each timing. The composer's intent is to sound notes louder than the player's key presses in the second half of measure 1, then hold back, and then lift the atmosphere with louder sound in the second half of measure 2.
[Modification 4]
In modification 4, pitch change data that changes the 1st pitch specified by the performance on the melody input keyboard 101b to a 2nd pitch is described as an example of setting data. The pitch change can be instructed by sending a corresponding message to the sound source 204.
Fig. 16 is a diagram showing another example of the setting data stored in the ROM202. For example, a value that adds a note a perfect fifth above as a harmony interval is set as the effect addition data for each timing. This aims at the kind of effect obtained with a pipe organ stop, where mixing the played note with a note a fifth higher creates a distinctive sound. Of course, notes a third higher, a fourth higher, a sixth lower, and so on can also be emitted.
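As a simple illustration (not the patent's implementation), adding such a harmony note amounts to sounding a second note offset from the played one by a fixed number of semitones; the interval table below is an assumption.

```python
INTERVALS = {
    "perfect 5th up": +7,
    "major 3rd up":   +4,
    "perfect 4th up": +5,
    "major 6th down": -9,
}

def harmonized_notes(note, interval_name="perfect 5th up"):
    """Return the played note together with its harmony note."""
    return [note, note + INTERVALS[interval_name]]

# Playing C4 (60) with the perfect-fifth setting also sounds G4 (67),
# reminiscent of a pipe organ fifth stop.
print(harmonized_notes(60))  # [60, 67]
```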
[Modification 5]
In modification 5, Control Change data that changes effect parameters is described as an example of setting data.
Fig. 17 is a diagram showing another example of the setting data stored in the ROM202. The delay time is one of the parameters of the digital delay included in the effector 212 (fig. 2). Here, a value specifying the delay time is set as the effect addition data for each timing. The composer's intent is to apply a short delay in the second half of each measure, and in measure 2 to lift the atmosphere with a longer delay than in measure 1. The delay also has feedback as another parameter, which can likewise be specified numerically as effect addition data.
[Modification 6]
In modification 6, a case where the setting data is switched according to the detected chord is described. That is, the effect addition data is defined not per timing, as above, but per chord type. Here, an example in which the brightness is manipulated according to the chord type is described.
Fig. 18 is a diagram showing another example of the setting data stored in the ROM202. Here, for the eight chord types M, m, 7th, m7, m7♭5, 7sus4, dim7, and aug, a value specifying brightness is set as the effect addition data. The designer's intent is, with the major chord as the reference, to keep the brightness mellow for minor chords and to raise the brightness for tension chords so as to catch the listener's ear.
In modification 6, the chord type value determined by the accompaniment control unit 201b (fig. 3) is passed to the setting reflection unit 201c, and the setting reflection unit 201c acquires the effect addition data corresponding to that chord type value from the setting data of fig. 18. The setting reflection unit 201c then creates a MIDI message reflecting that setting data and sends it to the sound source 204.
Further, the effect addition data may be selected not by the chord name itself but by the chord's function within the key (I, IV, V or their substitute chords, etc.). In most cases, the effect addition data would be set so that the emotion rises as the chord progression moves I → IV → V.
[Modification 7]
In the description above, the setting data is assumed to be fixed in advance as part of the automatic accompaniment data. In modification 7, the setting data is switched dynamically as the automatic accompaniment progresses.
In modification 7, a case where the setting data is switched according to the accompaniment variation is described. That is, the effect addition data is defined per accompaniment variation, so that the emotion level changes with the content of the accompaniment. For example, the effect addition data can be set so that the emotion level is high in the opening and low in the latter half. Here, an example in which the brightness is manipulated according to the accompaniment content is described.
Fig. 19 is a diagram showing another example of the setting data stored in the ROM202. A value specifying brightness is set as the effect addition data for each accompaniment content: intro, normal, variations 1 to 5, and outro. The composer's intent is to lift the atmosphere slightly in the intro, then raise it gradually in the normal section as the variation number increases, reach a climax at variation 5, and close the outro with a slight lift.
In addition, the effect addition data may be held low (the emotion data reduced) during normal accompaniment playback and set high (the emotion data increased) when a fill-in starts. When the emotion data is high the tone becomes bright, and when it is low the tone becomes dark.
Alternatively, the setting data may be incremented so that the emotion level rises as the number of loop playbacks increases. Beyond that, the effect addition data may of course also be varied according to the genre of the music, the category of the accompaniment, and the type of rhythm (8-beat/16-beat, shuffle, samba, etc.).
As described above, the embodiment and the modifications can provide an electronic musical instrument, a control method, and a program capable of richer performance expression. They also make possible a kind of performance the player has never experienced before, in which the left-hand keys influence the right-hand performance.
The present invention is not limited to the above-described embodiments and modifications.
For example, the function of the sound source 204 may be implemented in software using the computational resources of the CPU201. The sound source 204 may also be controlled by control messages based on a proprietary specification rather than by MIDI messages; the sound source 204 does not necessarily have to conform to the MIDI standard.
In the embodiment, a mode in which the sound source 204 is instructed to change the musical sound was described, but the musical sound may instead be changed by controlling the effector 212.
Besides the indices mentioned above, there are many other indices that affect the expression of a sound, such as sustain, detune, attack, and the speed, depth, and fineness of vibrato; these too may of course be controlled by the setting data.
In short, it is only necessary that some variation be applied, in accordance with setting data corresponding at least to the automatic accompaniment data being reproduced, to the performance sound produced in response to the user's operation of the performance operators. That is, the present invention is not limited to the specific embodiments described above, and it is obvious to those skilled in the art from the description of the claims that various modifications and improvements within the technical scope of the invention and within the scope of achieving its object are possible.

Claims (9)

1. An electronic musical instrument, comprising:
a sound source; and
a processor,
wherein the processor
instructs the sound source to emit accompaniment sounds corresponding to an automatic accompaniment pattern,
instructs the sound source to emit, based on a 1st setting, a musical tone corresponding to a pitch specified according to a 1st user operation, the 1st user operation corresponding to a 1st timing of the automatic accompaniment pattern, the 1st setting corresponding to the 1st timing of setting data, and
instructs the sound source to emit, based on a 2nd setting, a musical tone corresponding to a pitch specified according to a 2nd user operation, the 2nd user operation corresponding to a 2nd timing of the automatic accompaniment pattern, the 2nd setting corresponding to the 2nd timing of the setting data and being different from the 1st setting.
2. The electronic musical instrument according to claim 1,
wherein the electronic musical instrument comprises a memory in which the data of the automatic accompaniment pattern and the setting data are stored in advance, before the user's performance starts.
3. The electronic musical instrument according to claim 1 or 2,
the setting data includes at least any one of the following data:
effect addition data for adding a sound effect to the musical sound;
tone color changing data for changing the set 1st tone color to a 2nd tone color;
volume change data for changing the 1st volume designated by the user operation to a 2nd volume; and
pitch change data for changing the 1st pitch designated by the user operation to a 2nd pitch.
4. The electronic musical instrument according to any one of claims 1 to 3,
wherein the processor
decides a root note value and a chord type value based on the user operation, and
determines a manner of sounding the automatic accompaniment pattern to be sounded based on the decided root note value and chord type value.
5. The electronic musical instrument according to any one of claims 1 to 4,
comprising a chord input keyboard and a melody input keyboard, the melody input keyboard being arranged on a higher-pitch side than the chord input keyboard,
wherein the processor
instructs the sound source to emit accompaniment sounds corresponding to the automatic accompaniment pattern based on a user operation on the chord input keyboard, and
instructs the sound source to emit, according to a setting based on the setting data, a musical tone corresponding to a pitch specified based on a user operation on the melody input keyboard.
6. A control method in an electronic musical instrument, wherein,
the electronic musical instrument is caused to emit accompaniment sounds corresponding to an automatic accompaniment pattern,
the electronic musical instrument is caused to emit, based on a 1st setting, a musical tone corresponding to a pitch specified according to a 1st user operation, the 1st user operation corresponding to a 1st timing of the automatic accompaniment pattern, the 1st setting corresponding to the 1st timing of setting data, and
the electronic musical instrument is caused to emit, based on a 2nd setting, a musical tone corresponding to a pitch specified according to a 2nd user operation, the 2nd user operation corresponding to a 2nd timing of the automatic accompaniment pattern, the 2nd setting corresponding to the 2nd timing of the setting data and being different from the 1st setting.
7. The method of claim 6, wherein,
the electronic musical instrument is caused to decide a root note value and a chord type value based on the 1st user operation, and
the electronic musical instrument is caused to determine a manner of sounding the automatic accompaniment pattern to be sounded, based on the decided root note value and chord type value.
8. The method of claim 6 or 7,
the electronic musical instrument is caused to emit accompaniment sounds corresponding to the automatic accompaniment pattern based on a user operation on a chord input keyboard, and
the electronic musical instrument is caused to emit, based on the setting data, musical tones corresponding to a pitch specified by a user operation on a melody input keyboard provided on a higher-pitch side than the chord input keyboard.
9. A storage medium storing a program, wherein the program:
causes the electronic musical instrument to emit accompaniment sounds corresponding to an automatic accompaniment pattern,
causes the electronic musical instrument to emit, based on a 1st setting, a musical tone corresponding to a pitch specified according to a 1st user operation, the 1st user operation corresponding to a 1st timing of the automatic accompaniment pattern, the 1st setting corresponding to the 1st timing of setting data, and
causes the electronic musical instrument to emit, based on a 2nd setting, a musical tone corresponding to a pitch specified according to a 2nd user operation, the 2nd user operation corresponding to a 2nd timing of the automatic accompaniment pattern, the 2nd setting corresponding to the 2nd timing of the setting data and being different from the 1st setting.
CN202010828464.7A 2019-09-24 2020-08-18 Electronic musical instrument, control method, and storage medium Pending CN112634847A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-173111 2019-09-24
JP2019173111A JP7263998B2 (en) 2019-09-24 2019-09-24 Electronic musical instrument, control method and program

Publications (1)

Publication Number Publication Date
CN112634847A true CN112634847A (en) 2021-04-09

Family

ID=75157743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010828464.7A Pending CN112634847A (en) 2019-09-24 2020-08-18 Electronic musical instrument, control method, and storage medium

Country Status (2)

Country Link
JP (2) JP7263998B2 (en)
CN (1) CN112634847A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02161497A (en) * 1988-12-15 1990-06-21 Casio Comput Co Ltd Automatic accompaniment device
JPH05119773A (en) * 1991-10-28 1993-05-18 Matsushita Electric Ind Co Ltd Automatic accompaniment device
JP2000259155A (en) * 1999-03-05 2000-09-22 Kawai Musical Instr Mfg Co Ltd Automatic accompaniment device
JP2001175263A (en) * 1996-08-15 2001-06-29 Yamaha Corp Device and method for generating automatic accompaniment pattern
CN104050954A (en) * 2013-03-14 2014-09-17 卡西欧计算机株式会社 Automatic accompaniment apparatus and a method of automatically playing accompaniment
CN108630186A (en) * 2017-03-23 2018-10-09 卡西欧计算机株式会社 Electronic musical instrument, its control method and recording medium
CN109559722A (en) * 2017-09-26 2019-04-02 卡西欧计算机株式会社 Electronic musical instrument, the control method of electronic musical instrument and its storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2529227Y2 (en) * 1991-10-04 1997-03-19 カシオ計算機株式会社 Electronic musical instrument

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02161497A (en) * 1988-12-15 1990-06-21 Casio Comput Co Ltd Automatic accompaniment device
JPH05119773A (en) * 1991-10-28 1993-05-18 Matsushita Electric Ind Co Ltd Automatic accompaniment device
JP2001175263A (en) * 1996-08-15 2001-06-29 Yamaha Corp Device and method for generating automatic accompaniment pattern
JP2000259155A (en) * 1999-03-05 2000-09-22 Kawai Musical Instr Mfg Co Ltd Automatic accompaniment device
CN104050954A (en) * 2013-03-14 2014-09-17 卡西欧计算机株式会社 Automatic accompaniment apparatus and a method of automatically playing accompaniment
CN108630186A (en) * 2017-03-23 2018-10-09 卡西欧计算机株式会社 Electronic musical instrument, its control method and recording medium
CN109559722A (en) * 2017-09-26 2019-04-02 卡西欧计算机株式会社 Electronic musical instrument, the control method of electronic musical instrument and its storage medium

Also Published As

Publication number Publication date
JP2023076772A (en) 2023-06-01
JP7263998B2 (en) 2023-04-25
JP2021051152A (en) 2021-04-01

Similar Documents

Publication Publication Date Title
US6703549B1 (en) Performance data generating apparatus and method and storage medium
JP3598598B2 (en) Karaoke equipment
JP2004264501A (en) Keyboard musical instrument
JP3266149B2 (en) Performance guide device
JP3324477B2 (en) Computer-readable recording medium storing program for realizing additional sound signal generation device and additional sound signal generation function
JP3915807B2 (en) Automatic performance determination device and program
JPH10214083A (en) Musical sound generating method and storage medium
JP5897805B2 (en) Music control device
JP3704842B2 (en) Performance data converter
JP3722005B2 (en) Electronic music apparatus, control method therefor, and program
CN112634847A (en) Electronic musical instrument, control method, and storage medium
JP4007418B2 (en) Performance data expression processing apparatus and recording medium therefor
JP3800778B2 (en) Performance device and recording medium
JP3613062B2 (en) Musical sound data creation method and storage medium
JP3618203B2 (en) Karaoke device that allows users to play accompaniment music
JP3192597B2 (en) Automatic musical instrument for electronic musical instruments
JP3047879B2 (en) Performance guide device, performance data creation device for performance guide, and storage medium
JP2002297139A (en) Playing data modification processor
JP3873914B2 (en) Performance practice device and program
WO1996004642A1 (en) Timbral apparatus and method for musical sounds
JP7404737B2 (en) Automatic performance device, electronic musical instrument, method and program
JP4120662B2 (en) Performance data converter
JP3226268B2 (en) Concert magic automatic performance device
JP4214845B2 (en) Automatic arpeggio device and computer program applied to the device
JP2006258938A (en) Automatic performance apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination