WO2019026233A1 - Effect control device - Google Patents

Effect control device

Info

Publication number
WO2019026233A1
Authority
WO
WIPO (PCT)
Prior art keywords
effect
control device
cpu
effect control
type
Prior art date
Application number
PCT/JP2017/028238
Other languages
French (fr)
Japanese (ja)
Inventor
賢一 芝
一輝 柏瀬
桂三 濱野
Original Assignee
ヤマハ株式会社
Priority date
Filing date
Publication date
Application filed by ヤマハ株式会社
Priority to PCT/JP2017/028238
Publication of WO2019026233A1

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/02 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H1/04 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos, by additional modulation
    • G10H1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos, by additional modulation during execution only
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G10L13/033 - Voice editing, e.g. manipulating the voice of the synthesiser

Definitions

  • the present invention relates to an effect control device that controls an effect to be applied to a sound to be produced.
  • Non-Patent Document 1 discloses a rotary knob as an operator for controlling an effect, and a user can assign a desired effect to the knob.
  • the value of the assigned effect is determined by the operating position of the knob in the direction of rotation.
  • in the effect control device of Non-Patent Document 1, only one desired type of effect can be assigned to the knob from among the plurality of types, and only the strength (degree) of that effect can be controlled by operating the knob.
  • An object of the present invention is to provide an effect control device capable of setting the type of an effect to be applied according to an operation position of a control.
  • according to the present invention, there is provided an effect control device including: an operating element having an operating stroke; a detecting unit that detects the position of the operating element within the operating stroke; and a determination unit that determines, based on the position of the operating element detected by the detecting unit, the type of effect to be applied to a sound to be produced.
  • FIG. 1 is a schematic view of an electronic musical instrument to which an effect control device according to an embodiment of the present invention is applied.
  • This effect control device is included in the electronic musical instrument 100 which is a keyboard instrument as an example.
  • the electronic musical instrument 100 has a main body 30 and a neck 31.
  • the main body portion 30 has a first surface 30a, a second surface (not shown), a third surface 30c, and a fourth surface 30d.
  • the first surface 30a is a keyboard mounting surface (surface) on which the keyboard section KB including a plurality of keys is disposed.
  • the second surface is the back surface (bottom surface), and the hooks 36 and 37 are provided on the second surface.
  • a strap (not shown) can be attached between the hooks 36 and 37, and the player normally slings the strap over the shoulder and plays by operating the keyboard section KB and so on. Accordingly, when the instrument is worn on the shoulder, in particular with the scale direction (key arrangement direction) of the keyboard section KB oriented left-right, the first surface 30a and the keyboard section KB face the listener, while the third surface 30c and the fourth surface 30d face generally downward and upward, respectively.
  • the viewpoint E is an assumed viewpoint position of the player who shoulders the electronic musical instrument 100.
  • the neck portion 31 is extended from the side of the main body 30.
  • the neck portion 31 is provided with various operators including the advance operator 34 and the return operator 35.
  • a display unit 33, composed of a liquid crystal display or the like, is disposed on the fourth surface 30d of the main body 30.
  • a rotation knob 38 is disposed on the first surface 30a as a rotary operator, and a slide operator 39 is further disposed.
  • the electronic musical instrument 100 is a musical instrument that simulates singing in response to an operation on a performance operator.
  • here, singing simulation means outputting a voice that imitates a human singing voice by means of singing synthesis.
  • the white and black keys of the keyboard section KB are arranged in pitch order, and each key is associated with a different pitch.
  • the user presses a desired key on the keyboard KB.
  • the electronic musical instrument 100 detects a key operated by the user, and produces a singing sound of a pitch corresponding to the operated key.
  • the order of the syllables of the singing voice to be pronounced is predetermined.
  • FIG. 2 is a block diagram of the electronic musical instrument 100.
  • the electronic musical instrument 100 includes a CPU (Central Processing Unit) 10, a timer 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a data storage unit 14, a performance operator 15, another operator 16, a parameter value setting operator 17, a display unit 33, a sound source 19, an effect circuit 20, a sound system 21, a communication I/F (interface) 22, a bus 23, and a detection circuit 18 (detection unit).
  • the CPU 10 is a central processing unit that controls the entire electronic musical instrument 100.
  • the timer 11 is a module that measures time.
  • the ROM 12 is a non-volatile memory that stores control programs and various data.
  • the RAM 13 is a volatile memory used as a work area of the CPU 10 and various buffers.
  • the display unit 33 is a display module such as a liquid crystal display panel or an organic EL (Electro-Luminescence) panel.
  • the display unit 33 displays the operating state of the electronic musical instrument 100, various setting screens, messages for the user, and the like.
  • the performance operator 15 is a module that mainly accepts a performance operation that designates a pitch.
  • the keyboard portion KB, the advance operator 34, and the return operator 35 are included in the performance operator 15.
  • as an example, when the performance operator 15 is a keyboard, it outputs performance information such as note-on/note-off based on the on/off state of the sensor corresponding to each key and the key depression strength (speed, velocity). This performance information may be in the form of MIDI (Musical Instrument Digital Interface) messages.
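  • As a hedged illustration only (the specification merely says the performance information may take the form of MIDI messages), such a note-on or note-off event could be packed into standard MIDI channel-voice messages as sketched below; the key-to-note mapping and the channel number are assumptions.

```python
def midi_note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    """Build a standard MIDI note-on channel-voice message (status 0x90 | channel)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])


def midi_note_off(note: int, channel: int = 0) -> bytes:
    """Build a standard MIDI note-off channel-voice message (status 0x80 | channel)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])


# Example: the key sensor for middle C closes with velocity 100.
message = midi_note_on(60, 100)
```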
  • the other operator 16 is, for example, an operation module such as an operation button or an operation knob for performing settings other than performance, such as settings relating to the electronic musical instrument 100.
  • the parameter value setting operation unit 17 is an operation module such as an operation button or an operation knob that is mainly used to set parameters for the attribute of the singing voice. Examples of this parameter include harmonics, brightness, resonance, and gender factor.
  • harmonics is a parameter that sets the balance of the overtone components contained in the voice. Brightness is a parameter that sets the lightness or darkness of the voice, giving a tonal change.
  • resonance is a parameter that sets the timbre and strength of the singing voice or instrument sound.
  • the gender factor is a parameter that sets formants, changing the thickness and texture of the voice in a more feminine or more masculine direction.
  • the parameter value setting operator 17 includes a rotation knob 38. The parameter value setting operator 17 is connected to the bus 23 via the detection circuit 18.
  • the detection circuit 18 detects the operation of the parameter value setting operator 17 and sends the detection signal to the CPU 10.
  • the external storage device 3 is an external device connected to the electronic musical instrument 100, for example a device that stores audio data.
  • the communication I / F 22 is a communication module that communicates with an external device.
  • the bus 23 transfers data between the units in the electronic musical instrument 100.
  • the data storage unit 14 stores singing data 14a.
  • the singing data 14a includes lyric text data, a phonological information database, and the like.
  • the lyric text data is data describing the lyric, and is data to be sung by the singing part (the sound source 19, the effect circuit 20 and the sound system 21).
  • in the lyric text data, the lyrics of each song are divided and described in syllable units. That is, the lyric text data contains character information obtained by dividing the lyrics into syllables, and this character information also serves as display information corresponding to each syllable.
  • the syllable is a group of sounds output in response to one performance operation.
  • the phonological information database is a database storing speech segment data (syllable information).
  • the voice segment data is data indicating a waveform of voice, and includes, for example, spectrum data of a sample string of the voice segment as waveform data.
  • the speech segment data includes segment pitch data indicating the pitch of the waveform of the speech segment.
  • the lyrics text data and the speech segment data may each be managed by a database.
  • the sound source 19 is a module having a plurality of tone generation channels. Under the control of the CPU 10, one tone generation channel in the sound source 19 is assigned in accordance with the user's performance. When producing a singing sound, the sound source 19 reads, in the assigned tone generation channel, the voice segment data corresponding to the performance from the data storage unit 14 and generates singing sound data.
  • the effect circuit 20 applies the acoustic effect designated by the parameter value setting operator 17 to the singing voice data generated by the sound source 19.
  • the sound system 21 converts the singing sound data processed by the effect circuit 20 into an analog signal by a digital / analog converter. Then, the sound system 21 amplifies the singing sound converted into the analog signal and outputs it from a speaker or the like.
  • FIG. 3 is a plan view of the rotation knob 38.
  • the rotation knob 38 is configured to be rotatable about the center point G, and has an operation stroke with a position indicated by Min in FIG. 3 as an initial position and a position indicated by Max as a rotation end position.
  • the operation stroke is composed of a plurality of movable ranges R (R1, R2, R3, R4).
  • the player can position the designated position 38a of the rotary knob 38 at an arbitrary position in the operation stroke, and the position of the designated position 38a becomes the operated position.
  • the operation of the rotation knob 38 is detected by the detection circuit 18, and the CPU 10 acquires the operation position of the rotation knob 38 from the detection signal.
  • that is, the CPU 10 acquires the rotational position indicated by the indicated position 38a as the operation position of the rotation knob 38.
  • the immovable range r0 is a region that cannot become the operation position of the rotation knob 38.
  • the immovable range r0 is located on the side of the center point G opposite the viewpoint E. As a result, a player with the electronic musical instrument 100 slung over the shoulder can rotate the rotation knob 38 within a range in which the indicated position 38a is easy to see.
  • different types of effects are assigned to the respective movable ranges R.
  • “REVERB”, “CHORUS”, “DISTORTION”, and “DELAY” correspond to the movable ranges R1, R2, R3, and R4, respectively.
  • in each movable range R, the value of the corresponding effect is associated with the rotational position.
  • the CPU 10 determines the type of effect assigned to the movable range R to which the acquired operation position belongs as the effect to be applied to the sound to be produced, and further determines a value corresponding to the operation position as the value (parameter value) of that effect. The CPU 10 therefore serves as the determination unit.
  • in generating and producing a singing sound, the CPU 10 executes effect control processing based on the determined effect type and effect value.
  • in every movable range R, the value of the effect is set to be minimum at the end nearer the initial position and to increase toward the rotation end position.
  • for example, in the movable range R1, as the knob is rotated from the initial position toward the rotation end position, the value of the applied "REVERB" effect gradually increases. Then, at the moment the switching position between the movable ranges R1 and R2 is passed, the "REVERB" effect is reset and the applied effect transitions to "CHORUS".
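  • As an illustrative sketch of this determination step (not code from the specification), the mapping from an operation position to an effect type and value could look like the following; the range boundaries, the 0 to 127 value scale, and the function name determine_effect are assumptions.

```python
# Hypothetical sketch: an operation stroke of 0..300 degrees split into four movable ranges.
MOVABLE_RANGES = [
    ("REVERB",       0.0,  75.0),   # R1
    ("CHORUS",      75.0, 150.0),   # R2
    ("DISTORTION", 150.0, 225.0),   # R3
    ("DELAY",      225.0, 300.0),   # R4
]


def determine_effect(position_deg: float, value_max: int = 127):
    """Return (effect_type, value): the type assigned to the range containing the
    operation position, and a value that is minimal at the range start and grows
    toward the range end (so it resets at every switching position)."""
    for name, start, end in MOVABLE_RANGES:
        if start <= position_deg < end or (name == "DELAY" and position_deg == end):
            fraction = (position_deg - start) / (end - start)
            return name, int(round(fraction * value_max))
    raise ValueError("position outside the operation stroke")


# e.g. determine_effect(80.0) -> ("CHORUS", 8): just past the R1/R2 boundary,
# REVERB has been reset and CHORUS starts again near its minimum value.
```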
  • around the rotation knob 38, display areas 41, 42, 43, and 44 are provided corresponding to the movable ranges R1, R2, R3, and R4, respectively.
  • in each display area, a type name is written as information indicating the type of the corresponding effect, and a bar is written as information indicating the value of the effect.
  • the magnitude of the value of the effect is indicated by the length of the bar.
  • the length of the bar differs for each of the rotational positions, which are divided into a plurality of stages. As a result, the player can visually grasp the relationship between the operation position of the rotation knob 38 and the effect type and value.
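  • A small sketch of how such staged bar lengths could be derived, assuming the effect value is simply quantized into a handful of display stages (the stage count is an assumption):

```python
def bar_stage(value: int, value_max: int = 127, stages: int = 5) -> int:
    """Quantize an effect value into one of several bar-length stages for display."""
    return min(stages - 1, value * stages // (value_max + 1))
```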
  • FIG. 4 is a flowchart showing an example of the flow of processing when a performance is performed by the electronic musical instrument 100.
  • here, the processing in the case where the user selects a musical piece and then performs the selected piece will be described. To simplify the description, the case where only a single note is output is described, even if a plurality of keys are operated simultaneously. In this case, only the highest pitch among the pitches of the simultaneously operated keys may be processed, or only the lowest pitch.
  • the processing described below is realized, for example, by the CPU 10 executing a program stored in the ROM 12 or the RAM 13 and functioning as a control unit that controls various components provided in the electronic musical instrument 100.
  • when the power is turned on, the CPU 10 waits until an operation of selecting a song to be played is received from the user (step S101). If there is no song selection operation even after a certain time has elapsed, the CPU 10 may determine that the default song has been selected.
  • when the CPU 10 receives the song selection, it reads the lyric text data of the singing data 14a of the selected song. Then, the CPU 10 sets the cursor position to the first syllable described in the lyric text data (step S102).
  • the cursor is a virtual index indicating the position of the syllable to be pronounced next.
  • the CPU 10 determines whether note-on has been detected based on the operation of the keyboard section KB (step S103).
  • when note-on is not detected, the CPU 10 determines whether note-off has been detected (step S107). On the other hand, when note-on is detected, that is, when a new key depression is detected, the CPU 10 stops the output of any sound currently being output (step S104). Next, the CPU 10 executes output sound generation processing for producing a singing sound corresponding to the note-on (step S105).
  • the CPU 10 corresponds to an instruction acquisition unit that acquires an instruction of singing based on the performance operation of the performance operator 15.
  • in the output sound generation processing, the CPU 10 reads the voice segment data of the syllable corresponding to the cursor position and outputs a sound having the waveform indicated by the read voice segment data at the pitch corresponding to the note-on. Specifically, the CPU 10 obtains the difference between the pitch indicated by the segment pitch data included in the voice segment data and the pitch corresponding to the operated key, and shifts the spectral distribution indicated by the waveform data along the frequency axis by the frequency corresponding to this difference. In this way, the electronic musical instrument 100 can output a singing sound at the pitch corresponding to the operated key.
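  • A minimal sketch of this pitch-difference computation, assuming both the segment pitch and the operated key are expressed as MIDI note numbers; the specification itself only states that the difference in pitch is converted into a frequency offset along the frequency axis.

```python
def note_to_hz(midi_note: float) -> float:
    """Equal-tempered pitch of a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)


def spectral_shift_hz(segment_pitch_note: float, pressed_key_note: float) -> float:
    """Frequency difference by which the segment's spectral distribution would be
    moved along the frequency axis so that the singing sound comes out at the
    pitch of the operated key."""
    return note_to_hz(pressed_key_note) - note_to_hz(segment_pitch_note)


# Example: a segment recorded at C4 (note 60) played on the E4 key (note 64)
# gives spectral_shift_hz(60, 64), an upward shift of about 68 Hz.
```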
  • the CPU 10 updates the cursor position (read position) (step S106), and advances the process to step S107.
  • FIG. 5 is a view showing an example of lyrics text data.
  • the lyrics of the five syllables c1 to c5 are described in the lyrics text data.
  • Each character "ha”, “ru”, “yo”, “ko”, "i” indicates one Japanese hiragana character and each character corresponds to one syllable.
  • the CPU 10 updates the cursor position in syllable units.
  • for example, when the cursor is located at syllable c3, the voice segment data corresponding to "yo" is read from the data storage unit 14 and the singing sound "yo" is produced; when that sound generation ends, the CPU 10 moves the cursor position to the next syllable c4.
  • in this way, the CPU 10 sequentially moves the cursor position to the next syllable in response to each note-on.
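  • A small sketch of this cursor handling over the lyric text data of FIG. 5; the list-based data structure and the function name are assumptions made for illustration.

```python
lyrics = ["ha", "ru", "yo", "ko", "i"]   # syllables c1..c5 from FIG. 5
cursor = 0                                # index of the syllable to pronounce next


def on_note_on():
    """Pronounce the syllable at the cursor, then advance the cursor (step S106)."""
    global cursor
    syllable = lyrics[cursor]
    # ... read the voice segment data for `syllable` and start sound generation ...
    if cursor < len(lyrics) - 1:
        cursor += 1
    return syllable
```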
  • FIG. 6 is a diagram showing an example of the type of speech segment data.
  • the CPU 10 extracts speech segment data corresponding to syllables from the phonological information database in order to pronounce syllables corresponding to the cursor position.
  • there are two types of voice segment data: phoneme chain data and steady part data. Phoneme chain data is data indicating a speech segment at a point where the pronunciation changes, such as "silence (#) to consonant", "consonant to vowel", or "vowel to consonant or vowel (of the next syllable)".
  • steady part data is data indicating a speech segment while the pronunciation of a vowel continues.
  • for example, when the cursor position is set to "ha" of syllable c1, the sound source 19 selects the phoneme chain data "#-h" corresponding to "silence → consonant h", the phoneme chain data "h-a" corresponding to "consonant h → vowel a", and the steady part data "a" corresponding to "vowel a".
  • then, when the performance starts and a key depression is detected, the CPU 10 outputs a singing sound based on the phoneme chain data "#-h", the phoneme chain data "h-a", and the steady part data "a", at the pitch corresponding to the operated key and with a velocity corresponding to the operation.
  • in this way, the determination of the cursor position and the production of the singing sound are performed.
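  • An illustrative sketch of the segment selection for the syllable "ha" described above; the lookup tables are assumptions standing in for the phonological information database.

```python
# Hypothetical stand-in for the phonological information database.
PHONEME_CHAIN = {("#", "h"): "#-h", ("h", "a"): "h-a"}   # segments at pronunciation changes
STEADY_PART = {"a": "a"}                                  # segments while a vowel continues


def segments_for_syllable(consonant: str, vowel: str, preceded_by_silence: bool = True):
    """Collect the speech segments used to sing one consonant+vowel syllable."""
    chain = []
    if preceded_by_silence:
        chain.append(PHONEME_CHAIN[("#", consonant)])     # silence -> consonant
    chain.append(PHONEME_CHAIN[(consonant, vowel)])       # consonant -> vowel
    chain.append(STEADY_PART[vowel])                      # sustained vowel
    return chain


# segments_for_syllable("h", "a") -> ["#-h", "h-a", "a"]
```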
  • when note-off is detected in step S107 of FIG. 4, the CPU 10 stops the output of any sound currently being output (step S108) and advances the process to step S109; when note-off is not detected, the process likewise proceeds to step S109.
  • in step S109, the CPU 10 acquires the current rotational position indicated by the indicated position 38a of the rotation knob 38 as the operation position.
  • the CPU 10 acquires the operation position when the power is turned on, and stores the information in the RAM 13.
  • in step S109, when the newly acquired operation position differs from the operation position stored in the RAM 13, the CPU 10 updates the stored operation position information.
  • in step S110, the CPU 10 determines whether the operation position has changed, based on whether the operation position information was updated in step S109. When the operation position has not changed, the CPU 10 determines whether the performance has ended (step S111). When the performance has not ended, the CPU 10 returns the process to step S103. On the other hand, when the performance has ended, the CPU 10 stops the output of any sound currently being output (step S112) and ends the processing shown in FIG. 4. The CPU 10 can determine whether the performance has ended based on, for example, whether the last syllable of the selected song has been pronounced, or whether an operation to end the performance has been made with the other operator 16 or the like.
  • when there is a change in the operation position in step S110, the CPU 10 determines whether the effect type should be changed, based on the changed (updated) operation position, that is, the operation position last detected in step S109 (step S113). Specifically, the CPU 10 determines the type assigned to the movable range R to which the changed operation position belongs as the type of effect to be applied; if this type does not match the current effect type, the CPU 10 determines that the effect type should be changed, and otherwise determines that it should not be changed.
  • when the effect type should be changed, the CPU 10 updates the type of effect to be applied from the current type to the newly determined type (step S114) and advances the process to step S115.
  • for example, when the operation position moves from the movable range R2 into the movable range R3, the CPU 10 updates the type of effect to be applied from "CHORUS" to "DISTORTION".
  • when the effect type should not be changed, the CPU 10 keeps the type of effect to be applied as it is and advances the process to step S115.
  • in step S115, the CPU 10 determines the value of the effect based on the changed operation position (the operation position last detected in step S109). Accordingly, the value is determined for the updated effect type when the process has passed through step S114, and for the unchanged effect type otherwise. Thereafter, the CPU 10 executes the effect control process according to the type of effect to be applied and its value (step S116). Thereby, the effect corresponding to the operation position, with the corresponding value, is applied to the singing sound to be produced. The process then proceeds to step S111.
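  • The position-polling and effect-update path of steps S109 to S116 could be sketched as follows. This is not the actual firmware code; the callables standing in for the detection circuit, the position-to-effect mapping, and the effect circuit are assumptions.

```python
class EffectController:
    """Illustrative sketch of steps S109-S116: poll the knob, switch the effect type
    when the operation position enters a different movable range, and refresh the
    effect value whenever the position changes."""

    def __init__(self, read_knob_position, position_to_effect):
        self.read_knob_position = read_knob_position      # detection circuit 18 (assumed callable)
        self.position_to_effect = position_to_effect      # e.g. a mapping like determine_effect() above
        self.last_position = read_knob_position()         # operation position stored at power-on (RAM 13)
        self.effect_type, self.effect_value = position_to_effect(self.last_position)

    def poll(self, apply_effect):
        """One pass through steps S109-S116; apply_effect(type, value) stands in for the effect circuit 20."""
        position = self.read_knob_position()              # step S109
        if position == self.last_position:                # step S110: no change in operation position
            return
        self.last_position = position
        new_type, new_value = self.position_to_effect(position)   # steps S113 and S115
        if new_type != self.effect_type:                  # step S114: switch the effect type
            self.effect_type = new_type
        self.effect_value = new_value
        apply_effect(self.effect_type, self.effect_value) # step S116: effect control process
```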
  • the type of the effect to be applied and the value of the effect are determined based on the operation position of the rotation knob 38.
  • thus, the type of effect to be applied and the degree of that effect can be set according to the operation position. Since both can be switched and controlled with a single operating element (the rotation knob 38), the number of operating elements can be reduced.
  • in addition, the user can easily understand and easily perform the effect control operation.
  • furthermore, since information indicating the types and values of the effects assigned to the movable ranges R1 to R4 is written in the corresponding display areas 41 to 44, the user can visually recognize the effect type and value from the rotational position.
  • the information indicating the effect type is not limited to the name, and may be a specific mark.
  • the information indicating the value of an effect is not limited to a bar-shaped mark; for example, an indication combining an arrow with characters of varying size may be used.
  • in step S110 of FIG. 4, whether the operation position has changed is determined continuously, even while the rotation knob 38 is being displaced.
  • however, the present invention is not limited to this.
  • for example, the CPU 10 may acquire, as the current operation position, a position at which the rotation knob 38 has remained stationary for longer than a predetermined time (for example, 0.5 seconds), and may determine that the operation position has changed when this current operation position differs from the previous one. In this case, when it is determined that the operation position has changed, the CPU 10 reads out and updates the corresponding type and value, but does not update them while the rotation knob 38 is being displaced.
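  • A sketch of this alternative behaviour, in which a new operation position is accepted only after the knob has been stationary for a while; the 0.5 second threshold comes from the text above, while the class name and the use of a monotonic clock are assumptions.

```python
import time


class DebouncedKnob:
    """Report a new operation position only after the knob has been stationary
    for longer than `hold_time` seconds."""

    def __init__(self, hold_time: float = 0.5):
        self.hold_time = hold_time
        self.candidate = None
        self.candidate_since = 0.0
        self.stable_position = None

    def update(self, raw_position: float):
        now = time.monotonic()
        if raw_position != self.candidate:
            self.candidate = raw_position          # knob still moving: restart the timer
            self.candidate_since = now
            return None
        if now - self.candidate_since >= self.hold_time and raw_position != self.stable_position:
            self.stable_position = raw_position    # knob has settled: accept the new position
            return self.stable_position
        return None
```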
  • FIG. 7 is a schematic view of a part of the rotation knob 38.
  • a tactile feeling of switching may be conveyed to the player operating the knob.
  • the portion integrally rotating with the rotation knob 38 rotates inside the substantially circular hole 45 formed in the main body 30.
  • a projection 38 b is provided on the outer periphery of a portion that rotates integrally with the rotation knob 38.
  • two locking portions 46 are formed adjacent to each other on the inner circumference of the hole 45. In the process of rotating the rotation knob 38, when the protrusion 38b passes over the locking portion 46, a click feeling is generated.
  • the rotational position at which the projection 38b sits between the two locking portions 46 is designed to correspond to a switching position of the movable ranges R.
  • pairs of locking portions 46 may be provided at a plurality of locations corresponding to the switching positions of the movable ranges R.
  • FIG. 8 is a schematic plan view of the rotation knob 38 and the display area.
  • information indicating the type of effect and the value of the effect is written in advance in the display areas 41 to 44 by printing or the like. However, these pieces of information may be electrically displayed in the display area.
  • the display area is configured by an LED, an LCD or the like.
  • the display area consists of areas 41a, 42a, 43a, and 44a for displaying the type of effect corresponding to the movable ranges R1, R2, R3, and R4, and areas 41b, 42b, 43b, and 44b for displaying the value of the effect.
  • the CPU 10 controls the display of the display area. In addition, as illustrated in FIG. 8, it is not essential to provide the immovable range r0.
  • in the above, a type of effect is assigned in advance to each movable range R.
  • however, the desired type of effect may instead be made assignable to each movable range R by an instruction from the user.
  • a value according to the rotational position may be set according to an instruction from the user.
  • a setting process may be provided separately from the process shown in FIG. 4 and allocation may be performed in the setting process.
  • the setting state may be stored even when the power is turned off.
  • when allocation is performed according to an instruction from the user, the display content of the display area may be changed dynamically. For example, after the setting process and at the start of the process of FIG. 4, the CPU 10 causes the type of the effect to be applied and the value of the effect to be displayed in the corresponding areas of the display area.
  • the length of the displayed bar corresponding to the operation position may correspond to the value of the effect.
  • the plurality of movable ranges R may not be equally divided, and may have different lengths. Also, the plurality of movable ranges R may be set arbitrarily according to an instruction from the user.
  • in this case, the CPU 10 serves as a setting unit that sets the plurality of movable ranges R in the setting process. Thereby, each movable range R can be set to a desired length, improving usability.
  • the display area may be configured such that the boundary of the display section changes in accordance with the set movable range R.
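  • One way such a user-defined assignment of effect types and movable ranges could be represented is sketched below; the data layout and the example boundaries are assumptions made for illustration.

```python
# Hypothetical user configuration created in a separate setting process:
# each entry is (effect type, range start, range end); ranges need not be equal in length.
user_ranges = [
    ("DELAY",    0.0, 120.0),
    ("REVERB", 120.0, 200.0),
    ("CHORUS", 200.0, 300.0),
]


def set_movable_ranges(ranges):
    """Validate and install user-defined movable ranges (the setting-unit role of the CPU)."""
    for (_, start, end) in ranges:
        if not start < end:
            raise ValueError("each movable range must have a positive length")
    return list(ranges)   # in the device this would also be persisted so it survives power-off
```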
  • FIG. 9 is a plan view of the slide operator 39.
  • the slide operator 39 moves linearly.
  • the above-described configuration in which the type and value of the effect correspond to the operation position is not limited to the rotary type operation element, and can be applied to various operation elements having operation strokes, and there is no limitation on the movable direction. Therefore, the present invention is also applicable to the slide operation element 39 which slides linearly as shown in FIG.
  • it suffices that there are a plurality of movable ranges R; the specific number is not limited.
  • different types of effects are assigned to each of the movable ranges R1, R2, and R3.
  • Information indicating the type of the corresponding effect and information indicating the value of the effect are displayed in the display area corresponding to each of the movable ranges R.
  • a configuration may be adopted in which information is electrically displayed in the display area.
  • the slide direction of the slide operator 39 is not parallel to the longitudinal direction of the keys; it is inclined so that its end nearer the fourth surface 30d lies closer to the neck portion 31.
  • in the above, within each movable range R, the value of the effect is smallest near the initial position and becomes larger toward the rotation end position, but the present invention is not limited to this.
  • the value may instead be largest near the initial position and become smaller toward the rotation end position.
  • alternatively, the value of the effect may reach its maximum or minimum partway through the movable range R and be at its minimum or maximum at the end near the initial position and the end near the rotation end position.
  • FIG. 10 is a diagram showing the relationship between the operation stroke and the effect strength (value of effect) of the manipulator of the modified example.
  • the movable direction of the manipulator is not limited to the rotation direction, and may be a straight line or a curved line.
  • three different types of effects are assigned to the movable ranges R1, R2, and R3, and these are referred to as effects 47, 48, and 49, respectively.
  • the movable ranges R1 and R2 have an overlapping region, and the movable ranges R2 and R3 also have an overlapping region.
  • in each movable range R, the value of the corresponding effect increases as the operation stroke advances from that range's initial position, reaches its maximum partway through, and returns to the minimum at that range's end position.
  • in the region where the movable ranges R1 and R2 overlap, the two effects 47 and 48 are both reflected in the effect control, each with a value corresponding to the operation position.
  • likewise, in the region where the movable ranges R2 and R3 overlap, the two effects 48 and 49 are both reflected in the effect control, each with a value corresponding to the operation position.
  • the plurality of movable ranges R may be ranges that do not coincide with each other in the operation stroke, and the movable ranges R may have overlapping regions. In that case, overlapping effects may be applied in the overlapping area.
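  • A sketch of the overlapping-range behaviour of FIG. 10, in which every movable range containing the operation position contributes its own value (rising from the range start, peaking midway, and returning to the minimum at the range end), so that two effects are active at once inside an overlap. The concrete spans and function names are assumptions.

```python
def peaked(fraction: float) -> float:
    """Value that is minimal at both ends of a range and maximal midway through it."""
    return 1.0 - abs(2.0 * fraction - 1.0)


# Hypothetical overlapping ranges on a 0..300 stroke (R1/R2 and R2/R3 overlap).
OVERLAPPING_RANGES = [
    ("effect47",   0.0, 130.0),   # R1
    ("effect48",  90.0, 220.0),   # R2
    ("effect49", 180.0, 300.0),   # R3
]


def active_effects(position: float):
    """Return every effect whose movable range contains the operation position,
    together with its peaked value for that position."""
    result = []
    for name, start, end in OVERLAPPING_RANGES:
        if start <= position <= end:
            fraction = (position - start) / (end - start)
            result.append((name, peaked(fraction)))
    return result


# e.g. active_effects(110.0) returns both effect47 and effect48, each with its own value.
```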
  • the rotation knob 38 may be configured to rotate through 360°, or to rotate endlessly. In an endless-rotation configuration, the same effect may be applied in the same movable range R regardless of the number of rotations. Alternatively, the state at power-on may be stored, and different types of effects may be applied in the same movable range R depending on the rotation count (for example, on the first rotation versus the second rotation).
  • Japanese lyrics are exemplified as the lyrics to be sung, but the present invention is not limited to this, and other languages may be used.
  • One letter and one syllable do not necessarily correspond.
  • for example, the two letters "t" and "a" ("ta") may correspond to one syllable. When the English lyrics are "september", the word consists of the three syllables "sep", "tem", and "ber".
  • "sep" is one syllable, and the three characters "s", "e", and "p" correspond to that one syllable.
  • the CPU 10 sequentially pronounces each syllable at the pitch of the operated key.
  • in the above, the effect applied to the singing sound has been described, but the invention is not limited to this.
  • the present invention can also be applied to effect control of sound generated by a playing operation on a musical instrument, or of sound generated from sequence data in a sound producing device.
  • the instruments to which the present invention is applied include not only keyboard instruments but also instruments in which strings are arranged side by side like a guitar.
  • the effect control device of the present invention may also be configured as an independent device that can be connected so as to output control information regarding the effect to an electronic musical instrument or a sound producing device.

Abstract

Provided is an effect control device that makes it possible to use the operational position of an operation element to set the type of effect to be applied. According to the present invention, a rotary knob 38 has an operation stroke that comprises a plurality of movable ranges R. A different type of effect is allocated to each of the movable ranges R. The rotational position of the rotary knob 38 is detected by a detection circuit 18, and from the detection signal a CPU 10 acquires the operational position of the rotary knob 38. The type of effect allocated to the movable range R in which the acquired operational position falls is determined by the CPU 10 to be the effect to be applied to the generated sound, and the value that corresponds to the operational position is determined to be the value (a parameter value) of that effect.

Description

Effect control device
The present invention relates to an effect control device that controls an effect to be applied to a sound to be produced.
Conventionally, in the field of electronic musical instruments and the like, effect control devices that control an effect applied to a produced sound are known. For example, Non-Patent Document 1 discloses a rotary knob as an operator for controlling an effect, and a user can assign a desired effect to the knob. In this effect control device, the value of the assigned effect is determined by the operating position of the knob in the rotational direction.
However, in the effect control device of Non-Patent Document 1, only one desired type of effect can be assigned to the knob from among the plurality of types, and only the strength (degree) of that effect can be controlled by operating the knob. Accordingly, in order to control more than one type of effect during a performance, it is usual to provide two or more knobs and assign different types of effects to them.
An object of the present invention is to provide an effect control device capable of setting the type of effect to be applied according to the operation position of an operating element.
In order to achieve the above object, according to the present invention there is provided an effect control device including: an operating element having an operating stroke; a detecting unit that detects the position of the operating element within the operating stroke; and a determination unit that determines, based on the position of the operating element detected by the detecting unit, the type of effect to be applied to a sound to be produced.
According to the present invention, the type of effect to be applied can be set according to the operation position of the operating element.
FIG. 1 is a schematic view of an electronic musical instrument to which the effect control device is applied.
FIG. 2 is a block diagram of the electronic musical instrument.
FIG. 3 is a plan view of the rotation knob.
FIG. 4 is a flowchart showing an example of the flow of processing when a performance is performed.
FIG. 5 is a view showing an example of lyric text data.
FIG. 6 is a view showing an example of the types of speech segment data.
FIG. 7 is a schematic view of a part of the rotation knob.
FIG. 8 is a schematic plan view of the rotation knob and the display area.
FIG. 9 is a plan view of the slide operator.
FIG. 10 is a view showing the relationship between the operation stroke and the effect strength (value of the effect) for an operator of a modified example.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a schematic view of an electronic musical instrument to which an effect control device according to an embodiment of the present invention is applied. This effect control device is included in the electronic musical instrument 100, which is a keyboard instrument as an example. The electronic musical instrument 100 has a main body 30 and a neck 31. The main body 30 has a first surface 30a, a second surface (not shown), a third surface 30c, and a fourth surface 30d. The first surface 30a is a keyboard mounting surface on which the keyboard section KB, consisting of a plurality of keys, is disposed. The second surface is the back surface (bottom surface), and hooks 36 and 37 are provided on it. A strap (not shown) can be attached between the hooks 36 and 37, and the player normally slings the strap over the shoulder and plays by operating the keyboard section KB and so on. Accordingly, when the instrument is worn on the shoulder, in particular with the scale direction (key arrangement direction) of the keyboard section KB oriented left-right, the first surface 30a and the keyboard section KB face the listener, while the third surface 30c and the fourth surface 30d face generally downward and upward, respectively. The viewpoint E is the assumed viewpoint position of a player who has the electronic musical instrument 100 slung over the shoulder. The neck 31 extends from the side of the main body 30. The neck 31 is provided with various operators including the advance operator 34 and the return operator 35. A display unit 33, composed of a liquid crystal display or the like, is disposed on the fourth surface 30d of the main body 30. A rotation knob 38 is disposed on the first surface 30a as a rotary operator, and a slide operator 39 is also disposed there.
The electronic musical instrument 100 is a musical instrument that simulates singing in response to operations on its performance operators. Here, singing simulation means outputting a voice that imitates a human singing voice by means of singing synthesis. The white and black keys of the keyboard section KB are arranged in pitch order, and each key is associated with a different pitch. When playing the electronic musical instrument 100, the user presses a desired key of the keyboard section KB. The electronic musical instrument 100 detects the key operated by the user and produces a singing sound at the pitch corresponding to the operated key. The order of the syllables of the singing sounds to be produced is predetermined.
FIG. 2 is a block diagram of the electronic musical instrument 100. The electronic musical instrument 100 includes a CPU (Central Processing Unit) 10, a timer 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a data storage unit 14, a performance operator 15, another operator 16, a parameter value setting operator 17, a display unit 33, a sound source 19, an effect circuit 20, a sound system 21, a communication I/F (interface) 22, a bus 23, and a detection circuit 18 (detection unit). The CPU 10 is a central processing unit that controls the entire electronic musical instrument 100. The timer 11 is a module that measures time. The ROM 12 is a non-volatile memory that stores control programs and various data. The RAM 13 is a volatile memory used as a work area for the CPU 10 and as various buffers. The display unit 33 is a display module such as a liquid crystal display panel or an organic EL (Electro-Luminescence) panel. The display unit 33 displays the operating state of the electronic musical instrument 100, various setting screens, messages for the user, and the like.
The performance operator 15 is a module that mainly accepts performance operations designating pitches. In the present embodiment, the keyboard section KB, the advance operator 34, and the return operator 35 are included in the performance operator 15. As an example, when the performance operator 15 is a keyboard, it outputs performance information such as note-on/note-off based on the on/off state of the sensor corresponding to each key and the key depression strength (speed, velocity). This performance information may be in the form of MIDI (Musical Instrument Digital Interface) messages. The other operator 16 is an operation module, such as operation buttons or knobs, for making settings other than performance settings, for example settings relating to the electronic musical instrument 100. The parameter value setting operator 17 is an operation module, such as operation buttons or knobs, used mainly to set parameters relating to the attributes of the singing sound. Examples of such parameters include harmonics, brightness, resonance, and gender factor. Harmonics is a parameter that sets the balance of the overtone components contained in the voice. Brightness is a parameter that sets the lightness or darkness of the voice, giving a tonal change. Resonance is a parameter that sets the timbre and strength of the singing voice or instrument sound. The gender factor is a parameter that sets formants, changing the thickness and texture of the voice in a more feminine or more masculine direction. The parameter value setting operator 17 includes the rotation knob 38. The parameter value setting operator 17 is connected to the bus 23 via the detection circuit 18. The detection circuit 18 detects operation of the parameter value setting operator 17 and sends a detection signal to the CPU 10. The external storage device 3 is an external device connected to the electronic musical instrument 100, for example a device that stores audio data. The communication I/F 22 is a communication module that communicates with external devices. The bus 23 transfers data between the units of the electronic musical instrument 100.
The data storage unit 14 stores singing data 14a. The singing data 14a includes lyric text data, a phonological information database, and the like. The lyric text data is data describing the lyrics to be sung by the singing section (the sound source 19, the effect circuit 20, and the sound system 21). In the lyric text data, the lyrics of each song are divided and described in syllable units. That is, the lyric text data contains character information obtained by dividing the lyrics into syllables, and this character information also serves as display information corresponding to each syllable. Here, a syllable is a unit of sound output in response to one performance operation. The phonological information database is a database that stores voice segment data (syllable information). The voice segment data is data indicating the waveform of a voice and includes, for example, spectrum data of a sample sequence of the voice segment as waveform data. The voice segment data also includes segment pitch data indicating the pitch of the waveform of the voice segment. The lyric text data and the voice segment data may each be managed by a database.
The sound source 19 is a module having a plurality of tone generation channels. Under the control of the CPU 10, one tone generation channel in the sound source 19 is assigned in accordance with the user's performance. When producing a singing sound, the sound source 19 reads, in the assigned tone generation channel, the voice segment data corresponding to the performance from the data storage unit 14 and generates singing sound data. The effect circuit 20 applies the acoustic effect designated by the parameter value setting operator 17 to the singing sound data generated by the sound source 19. The sound system 21 converts the singing sound data processed by the effect circuit 20 into an analog signal with a digital-to-analog converter. The sound system 21 then amplifies the singing sound converted into the analog signal and outputs it from a speaker or the like.
FIG. 3 is a plan view of the rotation knob 38. The rotation knob 38 is configured to be rotatable about a center point G and has an operation stroke whose initial position is indicated by Min in FIG. 3 and whose rotation end position is indicated by Max. The operation stroke consists of a plurality of movable ranges R (R1, R2, R3, R4). The player can position the indicated position 38a of the rotation knob 38 at an arbitrary position within the operation stroke, and the position of the indicated position 38a becomes the operation position. Operation of the rotation knob 38 is detected by the detection circuit 18, and the CPU 10 acquires the operation position of the rotation knob 38 from the detection signal. That is, the CPU 10 acquires the rotational position indicated by the indicated position 38a as the operation position of the rotation knob 38. The immovable range r0 is a region that cannot become the operation position of the rotation knob 38. The immovable range r0 is located on the side of the center point G opposite the viewpoint E. As a result, a player with the electronic musical instrument 100 slung over the shoulder can rotate the rotation knob 38 within a range in which the indicated position 38a is easy to see.
Different types of effects are assigned to the respective movable ranges R. As an example, "REVERB", "CHORUS", "DISTORTION", and "DELAY" correspond to the movable ranges R1, R2, R3, and R4, respectively. In addition, in each movable range R, the value of the corresponding effect is associated with the rotational position. The CPU 10 determines the type of effect assigned to the movable range R to which the acquired operation position belongs as the effect to be applied to the sound to be produced, and further determines a value corresponding to the operation position as the value (parameter value) of that effect. The CPU 10 therefore serves as the determination unit. In generating and producing a singing sound, the CPU 10 executes effect control processing based on the determined effect type and effect value. In every movable range R, the value of the effect is set to be minimum at the end nearer the initial position and to increase toward the rotation end position. For example, in the movable range R1, as the knob is rotated from the initial position toward the rotation end position, the value of the applied "REVERB" effect gradually increases. Then, at the moment the switching position between the movable ranges R1 and R2 is passed, the "REVERB" effect is reset and the applied effect transitions to "CHORUS".
Around the rotation knob 38, display areas 41, 42, 43, and 44 are provided corresponding to the movable ranges R1, R2, R3, and R4, respectively. In each display area, a type name is written as information indicating the type of the corresponding effect, and a bar is written as information indicating the value of the effect. The magnitude of the effect value is indicated by the length of the bar. In the example of FIG. 3, the bar length differs for each of the rotational positions, which are divided into a plurality of stages. As a result, the player can visually grasp the relationship between the operation position of the rotation knob 38 and the effect type and value.
FIG. 4 is a flowchart showing an example of the flow of processing when a performance is performed on the electronic musical instrument 100. Here, the processing in the case where the user selects a musical piece and then performs the selected piece is described. To simplify the description, the case where only a single note is output is described, even if a plurality of keys are operated simultaneously; in this case, only the highest pitch among the simultaneously operated keys may be processed, or only the lowest pitch. The processing described below is realized, for example, by the CPU 10 executing a program stored in the ROM 12 or the RAM 13 and functioning as a control unit that controls the various components of the electronic musical instrument 100.
When the power is turned on, the CPU 10 waits until an operation of selecting a song to be played is received from the user (step S101). If there is no song selection operation even after a certain time has elapsed, the CPU 10 may determine that the default song has been selected. When the CPU 10 receives the song selection, it reads the lyric text data of the singing data 14a of the selected song. Then, the CPU 10 sets the cursor position to the first syllable described in the lyric text data (step S102). Here, the cursor is a virtual index indicating the position of the syllable to be pronounced next. Next, the CPU 10 determines whether note-on based on an operation of the keyboard section KB has been detected (step S103). When note-on is not detected, the CPU 10 determines whether note-off has been detected (step S107). On the other hand, when note-on is detected, that is, when a new key depression is detected, the CPU 10 stops the output of any sound currently being output (step S104). Next, the CPU 10 executes output sound generation processing for producing a singing sound corresponding to the note-on (step S105). Here, the CPU 10 corresponds to an instruction acquisition unit that acquires a singing instruction based on the performance operation of the performance operator 15.
This output sound generation processing is outlined as follows. The CPU 10 first reads the voice segment data of the syllable corresponding to the cursor position and outputs a sound having the waveform indicated by the read voice segment data at the pitch corresponding to the note-on. Specifically, the CPU 10 obtains the difference between the pitch indicated by the segment pitch data included in the voice segment data and the pitch corresponding to the operated key, and shifts the spectral distribution indicated by the waveform data along the frequency axis by the frequency corresponding to this difference. In this way, the electronic musical instrument 100 can output the singing sound at the pitch corresponding to the operated key. Next, the CPU 10 updates the cursor position (read position) (step S106) and advances the process to step S107.
Here, the determination of the cursor position and the production of the singing sound in steps S105 and S106 are described using a concrete example. First, the updating of the cursor position is described. FIG. 5 is a view showing an example of lyric text data. In the example of FIG. 5, the lyrics of five syllables c1 to c5 are described in the lyric text data. Each character "ha", "ru", "yo", "ko", "i" represents one Japanese hiragana character, and each character corresponds to one syllable. The CPU 10 updates the cursor position in syllable units. For example, when the cursor is located at syllable c3, the voice segment data corresponding to "yo" is read from the data storage unit 14 and the singing sound "yo" is produced. When the sound generation of "yo" ends, the CPU 10 moves the cursor position to the next syllable c4. In this way, the CPU 10 sequentially moves the cursor position to the next syllable in response to each note-on.
 Next, the sounding of the singing voice is described. FIG. 6 is a diagram showing an example of the types of speech segment data. In order to sound the syllable corresponding to the cursor position, the CPU 10 extracts the speech segment data corresponding to that syllable from the phonological information database. There are two types of speech segment data: phoneme chain data and stationary part data. Phoneme chain data is data indicating a speech segment at a point where the pronunciation changes, such as "silence (#) to consonant", "consonant to vowel", or "vowel to consonant or vowel (of the next syllable)". Stationary part data is data indicating a speech segment while the pronunciation of a vowel continues. For example, when the cursor position is set to "ha" of syllable c1, the sound source 19 selects the phoneme chain data "#-h" corresponding to "silence → consonant h", the phoneme chain data "h-a" corresponding to "consonant h → vowel a", and the stationary part data "a" corresponding to "vowel a". Then, when the performance starts and a key depression is detected, the CPU 10 outputs a singing sound based on the phoneme chain data "#-h", the phoneme chain data "h-a", and the stationary part data "a", at the pitch corresponding to the operated key and at the velocity corresponding to the operation. In this way, the determination of the cursor position and the sounding of the singing voice are carried out.
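 A sketch of how the segments of FIG. 6 might be looked up for the syllable "ha". The dictionary and its keys are purely illustrative; the specification refers only to a phonological information database, not to this data structure.

segment_db = {
    "#-h": "waveform: silence -> consonant h",      # phoneme chain data
    "h-a": "waveform: consonant h -> vowel a",      # phoneme chain data
    "a":   "waveform: sustained vowel a",           # stationary part data
}

def segments_for_syllable(consonant: str, vowel: str) -> list[str]:
    keys = [f"#-{consonant}", f"{consonant}-{vowel}", vowel]
    return [segment_db[k] for k in keys]

print(segments_for_syllable("h", "a"))   # the three segments used to sing "ha"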
 When a note-off is detected in step S107 of FIG. 4, the CPU 10 stops the output of any sound being output (step S108) and advances the process to step S109. On the other hand, if no note-off is detected, the CPU 10 advances the process to step S109 directly. In step S109, the CPU 10 acquires, as the operation position, the current rotational position indicated by the indication position 38a of the rotation knob 38. Note that the CPU 10 acquires the operation position when the power is turned on and stores that information in the RAM 13. In step S109, when the newly acquired operation position differs from the operation position stored in the RAM 13, the CPU 10 updates the operation position information.
 Next, in step S110, the CPU 10 determines whether the operation position has changed, based on whether the operation position information was updated in step S109. If the operation position has not changed, the CPU 10 determines whether the performance has ended (step S111). If the performance has not ended, the CPU 10 returns the process to step S103. On the other hand, when the performance has ended, the CPU 10 stops the output of any sound being output (step S112) and ends the processing shown in FIG. 4. The CPU 10 can determine whether the performance has ended based on, for example, whether the last syllable of the selected song has been sounded, or whether an operation to end the performance has been made with the other operators 16.
 If the determination in step S110 shows that the operation position has changed, the CPU 10 determines, based on the operation position after the change (the operation position last detected in step S109), whether the effect type should be changed (step S113). That is, the CPU 10 determines the type assigned to the movable range R to which the changed operation position belongs as the type of effect to be applied, and determines that the effect type should be changed when the determined type does not match the current effect type. Otherwise, the CPU 10 determines that the effect type should not be changed. When the effect type is to be changed, the CPU 10 updates the type of effect to be applied from the current type to the determined type (step S114) and advances the process to step S115. For example, when the operation position transitions from the movable range R2 to the movable range R3, the CPU 10 updates the type of effect to be applied from "CHORUS" to "DISTORTION". On the other hand, when the effect type is not to be changed, the CPU 10 keeps the type of effect to be applied as it is and advances the process to step S115.
 In step S115, the CPU 10 determines the value of the effect based on the operation position after the change (the operation position last detected in step S109). Accordingly, when the process has passed through step S114, the value for the updated effect is determined; when it has not, the value for the effect maintained as it is is determined. Thereafter, the CPU 10 executes the effect control process according to the type of effect to be applied and the value of the effect (step S116). As a result, the effect corresponding to the operation position, at the corresponding value, is applied to the singing sound to be produced. The process then proceeds to step S111.
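 The knob handling of steps S109 to S116 can be sketched as follows. The range boundaries, the assumed 0 to 270 degree stroke, the 0 to 1 value scale, and the effect names other than CHORUS and DISTORTION are assumptions made only for illustration.

RANGES = [                      # (start, end, effect type) over an assumed 0-270 degree stroke
    (0.0,    67.5, "HARMONY"),      # R1 (name assumed)
    (67.5,  135.0, "CHORUS"),       # R2
    (135.0, 202.5, "DISTORTION"),   # R3
    (202.5, 270.0, "DELAY"),        # R4 (name assumed)
]

def decide_effect(position_deg: float) -> tuple[str, float]:
    """Return (effect type, effect value) for the detected operation position."""
    for start, end, effect in RANGES:
        if start <= position_deg <= end:
            value = (position_deg - start) / (end - start)   # smallest at the range start (one possible mapping)
            return effect, value
    raise ValueError("position outside the operation stroke")

print(decide_effect(150.0))   # position in R3 -> ("DISTORTION", value within R3)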
 According to the present embodiment, the type of effect to be applied and the value of the effect are determined based on the operation position of the rotation knob 38, so the type and degree of the effect applied to the sound to be produced, in particular the singing sound, can be set by the operation position of the rotation knob 38. Accordingly, the type of effect to be applied and the degree of the effect can be switched and controlled with a single operator (the rotation knob 38), which makes it possible to reduce the number of operators.
 Further, since a different type of effect is assigned to each of the plurality of mutually non-coincident movable ranges R in the operation stroke of the rotation knob 38, the arrangement is easy for the user to understand and the effect control is easy to operate. In addition, since information indicating the types and values of the effects assigned to the movable ranges R1 to R4 is written in the corresponding display areas 41 to 44, the user can visually recognize the effect type and value from the rotational position of the indication position 38a of the rotation knob 38. The information indicating the effect type is not limited to a name and may be a specific mark. The information indicating the effect value is not limited to a bar-shaped mark and may be, for example, a notation combining an arrow with characters meaning "large" and "small".
 In step S110 of FIG. 4, whether the operation position has changed is determined at any time, even while the rotation knob 38 is being displaced. However, the invention is not limited to this. For example, the CPU 10 may acquire, as the current operation position, a position at which the rotation knob 38 has remained stationary for more than a predetermined time (such as 0.5 seconds), and determine that the operation position has changed when the current operation position differs from the previous position. In this case, when the CPU 10 determines that the operation position has changed, it reads out the corresponding type and value and updates them, but it does not update them while the rotation knob 38 is being displaced.
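 A sketch of this variation, assuming a monotonic clock and a 0.5 second hold time; the class and its method names are hypothetical.

import time

class KnobDebouncer:
    """Commit a knob position only after it has been stationary for hold_time_s."""
    def __init__(self, hold_time_s: float = 0.5):
        self.hold_time_s = hold_time_s
        self.last_raw = None
        self.last_change_time = 0.0
        self.committed = None

    def update(self, raw_position: float, now: float | None = None):
        """Return the newly committed position, or None while the knob is still moving."""
        now = time.monotonic() if now is None else now
        if raw_position != self.last_raw:
            self.last_raw = raw_position          # knob still being displaced
            self.last_change_time = now
            return None
        if now - self.last_change_time > self.hold_time_s and raw_position != self.committed:
            self.committed = raw_position         # stationary long enough: commit as current position
            return self.committed
        return None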
 Modifications of the present invention are described below with reference to FIGS. 7 to 10. FIG. 7 is a schematic view of a part of the rotation knob 38. The knob may be configured so that the feel of switching is transmitted to the operator at each switching position of the movable ranges R. For example, as shown in FIG. 7, the portion that rotates integrally with the rotation knob 38 rotates inside a substantially circular hole 45 formed in the main body 30. A projection 38b protrudes from the outer periphery of the portion that rotates integrally with the rotation knob 38. On the inner periphery of the hole 45, two locking portions 46 are formed adjacent to each other. In the course of rotating the rotation knob 38, a click feel is generated when the projection 38b rides over a locking portion 46. The rotational position at which the projection 38b lies between the two locking portions 46 is designed to correspond to a switching position of the movable ranges R. Pairs of locking portions 46 may be provided at a plurality of locations corresponding to the switching positions of the movable ranges R.
 FIG. 8 is a schematic plan view of the rotation knob 38 and the display areas. In the example shown in FIG. 3, information indicating the effect types and effect values was written in the display areas 41 to 44 in advance, by printing or the like. However, this information may instead be displayed electrically in the display areas. In that case the display areas are formed by LEDs, an LCD, or the like. Corresponding to the movable ranges R1, R2, R3, and R4, the display areas have areas 41a, 42a, 43a, and 44a for displaying the effect type and areas 41b, 42b, 43b, and 44b for displaying the effect value. The CPU 10 controls the display in these display areas. As illustrated in FIG. 8, providing the immovable range r0 is not essential.
 Although the type of effect is assigned to each movable range R in advance in the above description, the device may be configured so that a desired effect type can be assigned in accordance with an instruction from the user. Furthermore, the magnitude of the value corresponding to the rotational position may be made settable for each movable range R in accordance with an instruction from the user. To achieve this, a setting process may be provided separately from the process shown in FIG. 4, and the assignment may be performed within that setting process. The setting state may be stored even while the power is off. When an assignment has been made by a user instruction, the display content of the display areas may be changed dynamically. For example, after the setting process and at the start of the process of FIG. 4, the CPU 10 displays the type of effect to be applied and the value of the effect in the corresponding areas of the display areas. The length of the displayed bar corresponding to the operation position may correspond to the value of the effect. In this way, a plurality of desired effect types can be assigned to one operator, and the number of operators can be reduced.
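 A sketch of such a setting process, assuming a simple key-value layout and JSON persistence; neither is prescribed by the specification, which only states that assignments are made by user instruction and may be retained across power-off.

import json

assignments = {                 # effect names and value limits are illustrative assumptions
    "R1": {"effect": "REVERB",     "max_value": 0.8},
    "R2": {"effect": "CHORUS",     "max_value": 1.0},
    "R3": {"effect": "DISTORTION", "max_value": 0.6},
    "R4": {"effect": "DELAY",      "max_value": 1.0},
}

def save_assignments(path: str = "knob_assignments.json") -> None:
    """Persist the user's assignments so they can survive power-off (assumption)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(assignments, f, indent=2)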
 The plurality of movable ranges R need not be equal divisions and may differ in length from one another. The plurality of movable ranges R may also be set arbitrarily in accordance with an instruction from the user. In this case, the CPU 10 serves as a setting unit that sets the plurality of movable ranges R within the setting process described above. This allows the movable ranges R to be set to desired lengths, improving usability. When electrical display areas are employed, the display areas may be configured so that the boundaries of the display sections change in accordance with the set movable ranges R.
 FIG. 9 is a plan view of the slide operator 39. The slide operator 39 moves linearly. The configuration described above, in which the effect type and value correspond to the operation position, is not limited to a rotary operator; it can be applied to various operators having an operation stroke, and there is no limitation on the direction of movement. Accordingly, the present invention is also applicable to the slide operator 39 shown in FIG. 9, which slides linearly. The number of movable ranges R only needs to be two or more and is not otherwise limited. For example, in the slide operator 39, a different type of effect is assigned to each of the movable ranges R1, R2, and R3. In the display area corresponding to each movable range R, information indicating the corresponding effect type and information indicating the effect value are displayed. In this configuration as well, the information may be displayed electrically in the display areas. Incidentally, as shown in FIG. 1, the slide direction of the slide operator 39 is not parallel to the longitudinal direction of the keys, but is inclined so that the side near the fourth surface 30d is closer to the neck portion 31. This makes the instrument easier to operate for a performer carrying the electronic musical instrument 100 on a shoulder strap.
 Regarding the setting of the effect value corresponding to the operation position within a movable range R, it is not essential that the value be smallest on the side near the initial position and become larger toward the rotation end position. For example, conversely, the value may be largest on the side near the initial position and become smaller toward the rotation end position. Alternatively, the effect value may reach its maximum or minimum partway through the movable range R, and be at its minimum or maximum at the end position near the initial position and at the end position near the rotation end position.
 The boundary between adjacent movable ranges R is not limited to a point and may itself have a range. FIG. 10 is a diagram showing the relationship between the operation stroke and the effect strength (effect value) for an operator of a modified example. The direction of movement of the operator is not limited to a rotational direction and may be a straight line or a curve. Three types of effects are assigned to the movable ranges R1, R2, and R3, respectively; these are referred to as effects 47, 48, and 49. The movable ranges R1 and R2 have an overlapping region, and the movable ranges R2 and R3 also have an overlapping region. In each movable range R, the value of the corresponding effect increases as the operation stroke advances from the respective initial position, reaches its maximum partway through, and becomes minimal at the respective end position. In the overlapping region of the movable ranges R1 and R2, the two effects 47 and 48 are both reflected in the effect control, each at its value corresponding to the operation position. In the overlapping region of the movable ranges R2 and R3, the two effects 48 and 49 are both reflected in the effect control, each at its value corresponding to the operation position. In this way, the plurality of movable ranges R only need to be ranges in the operation stroke that do not coincide with one another, and the movable ranges R may have overlapping regions. In that case, the overlapping effects may both be applied in the overlapping region.
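 The overlapping-range behaviour of FIG. 10 can be sketched as follows, assuming a triangular value curve and illustrative range endpoints over a normalized 0 to 1 stroke; both are assumptions, since the figure only shows the general shape.

RANGES = [                      # (start, end, effect) over an assumed 0..1 stroke
    (0.00, 0.45, "effect 47"),
    (0.30, 0.75, "effect 48"),
    (0.60, 1.00, "effect 49"),
]

def active_effects(position: float) -> list[tuple[str, float]]:
    """Every range covering the position contributes its own effect and value."""
    out = []
    for start, end, effect in RANGES:
        if start <= position <= end:
            x = (position - start) / (end - start)     # 0 at the range start, 1 at its end
            value = 1.0 - abs(2.0 * x - 1.0)           # rises, peaks mid-range, falls (triangular assumption)
            out.append((effect, value))
    return out

print(active_effects(0.35))   # overlap of R1 and R2: effects 47 and 48 are both active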
 Conversely to the example shown in FIG. 10, there may be a range between adjacent movable ranges R that belongs to neither of them. In that case, in the range that belongs to neither, the effect to be applied may remain undetermined (a state in which no effect is applied).
 The rotation knob 38 may be configured to rotate through 360°, or may be configured to rotate endlessly. In an endlessly rotating configuration, the same effect may be applied to the same movable range R regardless of the number of rotations. Alternatively, the initial state at power-on may be stored, and a different type of effect may be applied to the same movable range R depending on the rotation count (for example, on the first rotation and on the second rotation).
 There may be two or more operators to which the present invention is applied. There is no limitation on the types of effects assigned to the movable ranges R. From the viewpoint of simplifying the configuration, the effect type is determined by the movable range R, but the effect value may be a fixed value.
 In the present embodiment, Japanese lyrics have been taken as an example of the lyrics to be sung, but the lyrics are not limited to Japanese and may be in another language. One character does not necessarily correspond to one syllable. For example, for "da", which carries a voiced sound mark, the two characters "ta" and the voicing mark together correspond to one syllable. Also, when the English lyric is "september", it consists of the three syllables "sep", "tem", and "ber"; "sep" is one syllable, but the three characters "s", "e", and "p" correspond to that one syllable. Each time the user operates the performance operator 15, the CPU 10 sounds each syllable in turn at the pitch of the operated key.
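 A sketch of a character-to-syllable mapping of this kind, in which one syllable may span several characters; the tables are illustrative only.

syllables_by_song = {
    "japanese_example": ["は", "る", "よ", "こ", "い"],   # one character per syllable (FIG. 5)
    "english_example":  ["sep", "tem", "ber"],            # "september" split into 3 syllables
}

def syllable_at(song: str, cursor: int) -> str:
    """Return the syllable the cursor points to; each note-on consumes one entry."""
    return syllables_by_song[song][cursor]

print(syllable_at("english_example", 0))   # "sep"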
 Although the present invention has been described with the effect applied to a singing sound in mind, it is not limited to this; the invention can also be applied to effect control of sounds generated by performance operations on a musical instrument, or of sounds generated from sequence data in a sound producing device. The instruments to which the present invention is applied include not only keyboard instruments but also instruments in which strings are arranged side by side, such as a guitar. Furthermore, the effect control device of the present invention may be configured as an independent device that can be connected to an electronic musical instrument or a sound producing device so as to output control information concerning effects.
 Although the present invention has been described in detail on the basis of its preferred embodiments, the present invention is not limited to these specific embodiments, and various forms within a scope not departing from the gist of the invention are also included in the present invention.
10 CPU (determination unit, setting unit, instruction acquisition unit, assignment unit)
14a singing data
18 detection circuit (detection unit)
38 rotation knob (operator)
41, 42, 43, 44 display areas
R movable range

Claims (7)

  1.  An effect control device comprising:
     an operator having an operation stroke;
     a detection unit that detects the position of the operator in the operation stroke; and
     a determination unit that determines, based on the position of the operator detected by the detection unit, the type of effect to be applied to a sound to be produced.
  2.  The effect control device according to claim 1, wherein the determination unit also determines the value of the effect based on the detected position of the operator.
  3.  The effect control device according to claim 1 or 2, wherein a different type of effect is assigned to each of a plurality of mutually non-coincident movable ranges in the operation stroke of the operator, and
     the determination unit determines the type of the effect according to the movable range to which the detected position of the operator belongs.
  4.  The effect control device according to claim 3, further comprising an assignment unit that assigns a different type of effect to each of the plurality of movable ranges based on an instruction from a user.
  5.  The effect control device according to claim 3 or 4, further comprising a display area that displays, in correspondence with each of the plurality of movable ranges, information indicating the type of effect assigned to that movable range.
  6.  The effect control device according to any one of claims 3 to 5, further comprising a setting unit that sets the plurality of movable ranges based on an instruction from a user.
  7.  The effect control device according to any one of claims 1 to 6, further comprising:
     an instruction acquisition unit that acquires a singing instruction; and
     a singing unit that, in response to the singing instruction being acquired by the instruction acquisition unit, sings, as the sound to be produced, syllable information defined in a predetermined order among a plurality of pieces of syllable information in singing data.
PCT/JP2017/028238 2017-08-03 2017-08-03 Effect control device WO2019026233A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/028238 WO2019026233A1 (en) 2017-08-03 2017-08-03 Effect control device


Publications (1)

Publication Number Publication Date
WO2019026233A1 (en)

Family

ID=65232517


Country Status (1)

Country Link
WO (1) WO2019026233A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005115241A (en) * 2003-10-10 2005-04-28 Korg Inc Effect adding device
WO2015194423A1 (en) * 2014-06-17 2015-12-23 ヤマハ株式会社 Controller and system for voice generation based on characters
JP2016180948A (en) * 2015-03-25 2016-10-13 ヤマハ株式会社 Electronic music device
US20170004811A1 (en) * 2015-06-30 2017-01-05 Yamaha Corporation Parameter controller and method for controlling parameter



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17919751; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17919751; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)