EP0463411B1 - Apparatus for generating musical tone waveforms (Vorrichtung zur Erzeugung von Musikwellenformen) - Google Patents

Apparatus for generating musical tone waveforms (Vorrichtung zur Erzeugung von Musikwellenformen)

Info

Publication number
EP0463411B1
Authority
EP
European Patent Office
Prior art keywords
processing
data
sound source
tone
musical tone
Prior art date
Legal status
Expired - Lifetime
Application number
EP91109140A
Other languages
English (en)
French (fr)
Other versions
EP0463411A2 (de)
EP0463411A3 (en)
Inventor
Ryuji Usami, c/o Patent Department
Kosuke Shiba, c/o Patent Department
Koichiro Daigo, c/o Patent Department
Kazuo Ogura, c/o Patent Department
Jun Hosoda, c/o Patent Department
Current Assignee
Casio Computer Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date
Filing date
Publication date
Priority claimed from JP2171215A (JP2869573B2)
Priority claimed from JP2172200A (JP2869574B2)
Application filed by Casio Computer Co Ltd filed Critical Casio Computer Co Ltd
Publication of EP0463411A2
Publication of EP0463411A3
Application granted
Publication of EP0463411B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/02 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H1/04 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation
    • G10H1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
    • G10H1/057 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by envelope-forming circuits
    • G10H1/0575 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only by envelope-forming circuits using a data store from which the envelope is synthesized
    • G10H5/00 - Instruments in which the tones are generated by means of electronic generators
    • G10H1/18 - Selecting circuits
    • G10H1/183 - Channel-assigning means for polyphonic instruments
    • G10H1/185 - Channel-assigning means for polyphonic instruments associated with key multiplexing
    • G10H1/186 - Microprocessor-controlled keyboard and assigning means
    • G10H7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H7/002 - Instruments in which the tones are synthesised from a data store, e.g. computer organs using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof
    • G10H7/006 - Instruments in which the tones are synthesised from a data store, e.g. computer organs using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof using two or more algorithms of different types to generate tones, e.g. according to tone color or to processor workload
    • G10H2250/00 - Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/541 - Details of musical waveform synthesis, i.e. audio waveshape processing from individual wavetable samples, independently of their origin or of the sound they represent
    • G10H2250/621 - Waveform interpolation

Definitions

  • the present invention relates to a musical tone waveform generation apparatus.
  • a conventional apparatus is constituted by a special-purpose sound source circuit which realizes an architecture equivalent to a musical tone generation algorithm based on a required sound source method by hardware components.
  • Such a sound source circuit generates a musical tone waveform on the basis of a PCM or modulation method.
  • the above-mentioned sound source circuit has a large circuit scale regardless of the sound source method adopted.
  • When the sound source circuit is formed as an LSI, it has a scale about twice that of a versatile data processing microprocessor, since the sound source circuit requires complicated address control for accessing waveform data on the basis of various performance data.
  • Registers or the like for temporarily storing intermediate data obtained in the process of sound source generation processing must be arranged everywhere in the architecture corresponding to the sound source method.
  • shift registers or the like for time-divisionally executing sound source processing in a hardware manner must be arranged everywhere.
  • the conventional musical tone waveform generation apparatus is constituted by the special-purpose sound source circuit corresponding to the sound source method, its hardware scale is undesirably increased. This results in an increase in manufacturing cost in terms of, e.g., a yield in the manufacture of LSI chips, when the sound source circuit is realized by an LSI. This also results in an increase in size of the musical tone waveform generation apparatus.
  • In addition to the sound source circuit, the conventional apparatus requires a control circuit comprising, e.g., a microprocessor, for generating, on the basis of performance data corresponding to a performance operation, data which can be processed by the sound source circuit, and for exchanging performance data with another musical instrument.
  • the control circuit requires a sound source control program, corresponding to the sound source circuit, for supplying data corresponding to performance data to the sound source circuit in addition to a performance data processing program for processing performance data.
  • these two programs must be synchronously operated. The development of such complicated programs causes a considerable increase in cost.
  • Moreover, the processing programs become very complicated, and a sound source method requiring high-speed processing, such as a modulation method, cannot be executed within the available processing speed and program capacity.
  • Furthermore, high-grade sound source processing cannot be performed, i.e., switching sound source methods in units of tone generation channels and generating tones by different sound source methods in accordance with performance data, so as to produce a realistic musical tone waveform whose complicated frequency structure resembles that of musical tones generated by an acoustic instrument.
  • a player sometimes wants to make a performance with a plurality of instrument tone colors by himself or herself to meet his or her requirements on a performance.
  • To attain this, the following processing is required: a split point is determined for the tone ranges or velocities of ON keys of an electronic musical instrument, so that musical tones of a plurality of instrument tone colors can be generated according to which of the ranges bounded by the split point the tone range or velocity belongs to, thus attaining complicated, colorful musical expression.
  • simple software processing cannot attain such high-grade sound source method processing. It is also difficult to execute processing for generating tones in different instrument tone colors in units of music parts.
  • US-A-4,184,400 discloses an electronic musical instrument having a data processing system comprising memories for storing a program, a central processing unit for executing the program, and sound source circuitry including tone generator units.
  • This known electronic musical instrument provides sound source processing in accordance with the program stored in the memories, which program can be changed to any one of a variety of programs if desired.
  • However, the generation of musical tone waveforms is carried out by hardware in the form of said tone generator units, which are coupled to the central processing unit executing the algorithm determined by the program stored in the memories.
  • EP-A-0 376 342 having an earlier priority but being published later than the priority date of the present invention, discloses a musical tone generator apparatus making use of a single microprocessor executing the tone waveform generating processing for a plurality of tone generation channels without making use of a hardware circuitry exclusively used for sound sources.
  • The apparatus according to this concept of earlier priority uses only one single sound source generating method and, therefore, only one single sound source processing program is provided to be executed by the microprocessor.
  • a musical tone waveform generation apparatus comprising: program storage means for storing a performance data processing program for processing performance data, and a plurality of sound source processing programs corresponding to a plurality of sound source methods for obtaining a musical tone signal; address control means for controlling an address of the program storage means; data storage means for storing musical tone generation data necessary for generating a musical tone signal by an arbitrary one of the plurality of sound source methods in units of tone generation channels; arithmetic processing means for performing a predetermined arithmetic operation; program execution means for executing the performance data processing program and the sound source processing program stored in the program storage means while controlling the address control means, the data storage means, and the arithmetic processing means, for normally executing the performance data processing program to control musical tone generation data on the data storage means, for executing the sound source processing program at predetermined time intervals, for executing the performance data processing program again upon completion of the sound source processing program, and for executing time-divisional processing on the basis of
  • the program storage means, the address control means, the data storage means, the arithmetic processing means, and the program execution means have the same arrangement as a versatile microprocessor, and no special-purpose sound source circuit is required at all.
  • the musical tone signal output means is versatile in the category of a musical tone waveform generation apparatus although it has an arrangement different from that of a versatile microprocessor.
  • the circuit scale of the overall musical tone waveform generation apparatus can be greatly reduced, and when the apparatus is realized by an LSI, the same manufacturing technique as that of a normal processor can be adopted. Since the yield of chips can be increased, manufacturing cost can be greatly reduced. Since the musical tone signal output means can be constituted by simple latch circuits, addition of this circuit portion causes almost no increase in manufacturing cost.
  • a sound source processing program stored in the program storage means need only be changed to meet the above requirements. Therefore, the development cost of a new musical tone waveform generation apparatus can be greatly reduced, and a new modulation method can be presented to a user by means of, e.g., a ROM card.
  • the musical tone waveform generation apparatus realizes a data architecture in which musical tone generation data necessary for generating musical tones are stored on the data storage means.
  • a performance data processing program is executed, corresponding musical tone generation data on the data storage means are controlled, and when a sound source processing program is executed, musical tone signals are generated on the basis of the corresponding musical tone generation data on the data storage means.
  • a data communication between the performance data processing program and the sound source processing program is performed via musical tone generation data on the data storage means, and access of one program to the data storage means can be performed regardless of an execution state of the other program. Therefore, the two programs can have substantially independent module arrangements, and hence, a simple and efficient program architecture can be attained.
  • the musical tone waveform generation apparatus realizes the following program architecture. That is, the performance data processing program is normally executed to execute, e.g., scanning of keyboard keys and various setting switches, demonstration performance control, and the like. During execution of this program, the sound source processing program is executed at predetermined time intervals, and upon completion of the processing, the control returns to the performance data processing program. Thus, the sound source processing program forcibly interrupts the performance data processing program on the basis of an interrupt signal generated from the interrupt control means at predetermined time intervals. For this reason, the performance data processing program and the sound source processing program need not be synchronized.
  • When the program execution means executes the sound source processing program, the processing time changes depending on the sound source method. This change in processing time can be absorbed by the musical tone signal output means. Therefore, no complicated timing control program for outputting musical tone signals to, e.g., a D/A converter is required.
  • the data architecture for attaining a data link between the performance data processing program and the sound source processing program via musical tone generation data on the data storage means, and the program architecture for executing the sound source processing program at predetermined time intervals while interrupting the performance data processing program are realized, and the musical tone signal output means is arranged. Therefore, sound source processing under the efficient program control can be realized by substantially the same arrangement as a versatile processor.
  • the data storage means stores musical tone generation data necessary for generating musical tone signals in an arbitrary one of a plurality of sound source methods in units of tone generation channels
  • the program execution means executes the performance data processing program and the sound source processing program by time-divisional processing in correspondence with the tone generation channels. Therefore, the program execution means accesses the corresponding musical tone generation data on the data storage means at each time-divisional timing, and executes a sound source processing program of the assigned sound source method while simply switching the two programs. In this manner, musical tone signals can be generated by different sound source methods in units of tone generation channels.
  • musical tone signals can be generated by different sound source methods in units of tone generation channels under the simple control, i.e., by simply switching between time-divisional processing for musical tone generation data in units of tone generation channels on the data storage means, and a sound source processing program based on the musical tone generation data.
  • a musical tone waveform generation apparatus comprising: storage means for storing a sound source processing program; musical tone signal generation means for executing the sound source processing program stored in the storage means to generate a musical tone signal; pitch designation means for designating a pitch of the musical tone signal generated by the musical tone signal generation means; tone color determination means for determining a tone color of the musical tone signal generated by the musical tone signal generation means in accordance with the pitch designated by the pitch designation means; control means for controlling the musical tone signal generation means to generate the musical tone signal having the pitch designated by the pitch designation means and the tone color determined by the tone color determination means; and musical tone signal output means for outputting the musical tone signal generated by the musical tone signal generation means at predetermined time intervals.
  • a musical tone waveform generation apparatus comprising: storage means for storing a sound source processing program; musical tone signal generation means for executing the sound source processing program stored in the storage means to generate a musical tone signal; a performance operation member for instructing the musical tone signal generation means to generate the musical tone signal; tone color determination means for determining a tone color of the musical tone signal to be generated by the musical tone signal generation means in accordance with an operation velocity of the performance operation member; control means for controlling the musical tone signal generation means to generate the musical tone signal having the tone color determined by the tone color determination means; and musical tone signal output means for outputting the musical tone signal generated by the musical tone signal generation means at predetermined time intervals.
  • a musical tone waveform generation apparatus comprising: storage means for storing a sound source processing program; musical tone signal generation means for executing the sound source processing program stored in the storage means to generate a musical tone signal; output means for outputting performance data of a plurality of parts constituting a music piece; tone color determination means for determining a tone color of the musical tone signal to be generated by the musical tone signal generation means in accordance with one of the plurality of parts to which the performance data output from the output means belongs; control means for controlling the musical tone generation means to generate the musical tone signal having the tone color determined by the tone color determination means; and musical tone signal output means for outputting the musical tone signal generated by the musical tone signal generation means at predetermined time intervals.
  • musical tone signals can be generated in different tone colors in units of regions, or operation velocities, or musical parts having a split point as a boundary without using a special-purpose sound source circuit. Since a constant output rate of musical tone signals can be maintained upon operation of the musical tone signal output means, a musical tone waveform will not be distorted.
  • a musical tone waveform generation apparatus comprising: program storage means for storing a performance data processing program for processing performance data, and a sound source processing program for obtaining a musical tone signal; address control means for controlling an address of the program storage means; split point designation means for causing a player to designate a split point to divide a range of a performance data value into a plurality of ranges; tone color designation means for designating tone colors of the plurality of ranges having the split point designated by the split point designation means as a boundary; data storage means for storing musical tone generation data necessary for generating the musical tone signal in correspondence with a plurality of tone colors; arithmetic processing means for processing data; program execution means for executing the performance data processing program and the sound source processing program stored in the program storage means while controlling the address control means, the data storage means, and the arithmetic processing means, for normally executing the performance data processing program to control musical tone generation data stored in the data storage means, for executing the sound source processing
  • a musical tone waveform generation apparatus comprising: program storage means for storing a performance data processing program for processing performance data, and a plurality of sound source processing programs corresponding to a plurality of sound source methods for obtaining a musical tone signal; address control means for controlling an address of the program storage means; split point designation means for causing a player to designate a split point to divide a range of a performance data value into a plurality of ranges; sound source method designation means for causing the player to designate the sound source methods for the divided ranges having the split point designated by the split point designation means as a boundary; data storage means for storing musical tone generation data necessary for generating the musical tone signal in correspondence with the plurality of sound source methods; arithmetic processing means for processing data; program execution means for executing the performance data processing program or the sound source processing program stored in the program control means while controlling the address control means, the data storage means, and the arithmetic processing means, for normally executing the performance data processing program to control musical
  • a musical tone waveform generation apparatus comprising: program storage means for storing a performance data processing program for processing performance data, and a sound source processing program for obtaining a musical tone signal; address control means for controlling an address of the program storage means; tone color designation means for causing a player to designate tone colors in units of music parts of musical tone signals to be played; data storage means for storing musical tone generation data necessary for generating a musical tone signal in an arbitrary one of the plurality of tone colors; arithmetic processing means for processing data; program execution means for executing the performance data processing program and the sound source processing program stored in the program control means while controlling the address control means, the data storage means, and the arithmetic processing means, for normally executing the performance data processing program to control musical tone generation data on the data storage means, for executing the sound source processing program at predetermined time intervals, for executing the performance data processing program again upon completion of the sound source processing program, and for generating, upon execution of the sound source processing program, the
  • a musical tone waveform generation apparatus comprising: program storage means for storing a performance data processing program for processing performance data, and a plurality of sound source processing programs corresponding to a plurality of sound source methods for obtaining a musical tone signal; address control means for controlling an address of the program storage means; sound source method designation means for causing a player to designate sound source methods in units of music parts of musical tone signals to be played; data storage means for storing musical tone generation data necessary for generating a musical tone signal by an arbitrary one of the plurality of sound source methods; arithmetic processing means for processing data; program execution means for executing the performance data processing program and the sound source processing program stored in the program control means while controlling the address control means, the data storage means, and the arithmetic processing means, for normally executing the performance data processing program to control musical tone generation data on the data storage means, for executing the sound source processing program at predetermined time intervals, for executing the performance data processing program again upon completion of the sound
  • a player can designate a split point, and can also designate tone colors or sound source methods in units of ranges having the designated split point as a boundary, so that musical tone signals can be generated by switching the corresponding tone colors or sound source methods in accordance with the above-described ranges of predetermined performance data.
  • Alternatively, tone colors or sound source methods can be switched in accordance with music parts rather than a split point.
  • Fig. 1 is a block diagram showing the overall arrangement according to the first embodiment of the present invention.
  • In Fig. 1, the entire apparatus is controlled by a microcomputer 1011.
  • Not only control input processing for the instrument but also processing for generating musical tones is executed by the microcomputer 1011, and no sound source circuit for generating musical tones is required.
  • a switch unit 1041 comprising a keyboard 1021 and function keys 1031 serves as an operation/input section of a musical instrument, and performance data input from the switch unit 1041 are processed by the microcomputer 1011. Note that the function keys 1031 will be described in detail later.
  • a display unit 1091 includes red and green LEDs indicating which tone color on the function keys 1031 is designated when a player determines a split point and sets different tone colors to keys as will be described later.
  • the display unit 1091 will be described in detail later in a description of Fig. 21 or 26.
  • An analog musical tone signal generated by the microcomputer 1011 is smoothed by a low-pass filter 1051, and the smoothed signal is amplified by an amplifier 1061. Thereafter, the amplified signal is produced as a tone via a loudspeaker 1071.
  • a power supply circuit 1081 supplies a necessary power supply voltage to the low-pass filter 1051 and the amplifier 1061.
  • Fig. 2 is a block diagram showing the internal arrangement of the microcomputer 1011.
  • a control data/waveform data ROM 2121 stores musical tone control parameters such as target values of envelope values (to be described later), musical tone waveform data in respective sound source methods, musical tone difference data, modulated waveform data, and the like.
  • A command analyzer 2071 accesses the data on the control data/waveform data ROM 2121 while sequentially analyzing the content of a program stored in a control ROM 2011, thereby executing software sound source processing.
  • the control ROM 2011 stores a musical tone control program (to be described later), and sequentially outputs program words (commands) stored at addresses designated by a ROM address controller 2051 via a ROM address decoder 2021. More specifically, the word length of each program word is 28 bits, and a next address method is employed. In this method, a portion of each program word is input to the ROM address controller 2051 as lower bits (intra-page address) of an address to be read out next.
  • Note that the control ROM 2011 may instead be addressed by a program counter of the conventional CPU type.
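  • As a rough illustration of this next address method, the following C sketch shows how the lower bits of each fetched program word can supply the intra-page part of the next ROM address; the field widths and the dummy ROM contents are assumptions for illustration only, not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 8u                        /* assumed width of the intra-page next-address field */
#define PAGE_MASK ((1u << PAGE_BITS) - 1u)

/* toy control ROM: each (nominally 28-bit) word = opcode (upper bits) | next intra-page address */
static const uint32_t control_rom[4] = {
    (0x1u << PAGE_BITS) | 1u,               /* word 0: opcode 1, next address 1 */
    (0x2u << PAGE_BITS) | 2u,               /* word 1: opcode 2, next address 2 */
    (0x3u << PAGE_BITS) | 3u,               /* word 2: opcode 3, next address 3 */
    (0x0u << PAGE_BITS) | 0u,               /* word 3: opcode 0, wrap to 0      */
};

int main(void)
{
    uint32_t page = 0;                      /* upper (page) part of the ROM address */
    uint32_t addr = 0;                      /* intra-page part of the ROM address   */
    for (int step = 0; step < 4; ++step) {
        uint32_t word   = control_rom[(page << PAGE_BITS) | addr];
        uint32_t opcode = word >> PAGE_BITS;
        printf("addr=%u opcode=%u\n", (unsigned)addr, (unsigned)opcode);
        addr = word & PAGE_MASK;            /* lower bits of the word become the next address */
    }
    return 0;
}
```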
  • the command analyzer 2071 analyzes operation codes of commands output from the control ROM 2011, and supplies control signals to the respective units of the circuit so as to execute the designated operations.
  • a RAM address controller 2041 designates an address of a corresponding register in a RAM 2061.
  • the RAM 2061 stores various musical tone control data (to be described later with reference to Figs. 9 and 10) for eight tone generation channels, and various buffers (to be described later), and is used in sound source processing (to be described later).
  • An ALU unit 2081 executes addition/subtraction and logic operations, and a multiplier 2091 executes multiplications, on the basis of instructions from the command analyzer 2071.
  • An interrupt controller 2031 supplies an interrupt signal to the ROM address controller 2051 and a D/A converter unit 2131 at predetermined time intervals on the basis of an internal hardware timer (not shown).
  • An input port 2101 and an output port 2111 are connected to the switch unit 1041 and the display unit 1091 (Fig. 1).
  • Various data read out from the control ROM 2011 or the RAM 2061 are supplied to the ROM address controller 2051, the ALU unit 2081, the multiplier 2091, the control data/waveform data ROM 2121, the D/A converter unit 2131, the input port 2101, and the output port 2111 via a bus.
  • the outputs from the ALU unit 2081, the multiplier 2091, and the control data/waveform data ROM 2121 are supplied to the RAM 2061 via the bus.
  • Fig. 4 shows the internal arrangement of the D/A converter unit 2131 shown in Fig. 2.
  • Data of musical tones for one sampling period generated by sound source processing are input to a latch 3011 via a data bus.
  • When the clock input of the latch 3011 receives a sound source processing end signal from the command analyzer 2071 (Fig. 2), the musical tone data for one sampling period on the data bus are latched by the latch 3011, as shown in Fig. 5.
  • The musical tone signals output from the latch 3011 are latched by a latch 3021 in response to the interrupt signals output at the sampling clock interval by the interrupt controller 2031 (Fig. 2), and are thus output to the D/A converter 3031 at constant time intervals.
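  • The double-latch behaviour described above can be pictured with the following minimal C sketch; the function names and the commented-out dac_write() call are assumptions for illustration only.

```c
#include <stdint.h>

static volatile int32_t latch_3011;    /* loaded when sound source processing for one period ends */
static volatile int32_t latch_3021;    /* loaded by the fixed-rate sampling interrupt              */

/* called by the sound source routine whenever the 8-channel sum is ready (timing may vary) */
static void sound_processing_end(int32_t mixed_sample)
{
    latch_3011 = mixed_sample;
}

/* called at the exact sampling interval by the interrupt controller */
static void sampling_clock_interrupt(void)
{
    latch_3021 = latch_3011;           /* hand the previous result to the D/A converter at a constant rate */
    /* dac_write(latch_3021);             hypothetical register write to the D/A converter 3031 */
}

int main(void)
{
    sound_processing_end(123);
    sampling_clock_interrupt();
    return 0;
}
```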
  • the microcomputer 1011 repetitively executes a series of processing operations in steps S 502 to S 510 , as shown in the main flow chart of Fig. 6.
  • Sound source processing is executed as interrupt processing in practice. More specifically, the program executed as the main flow chart shown in Fig. 6 is interrupted at predetermined time intervals, and a sound source processing program for generating musical tone signals for eight channels is executed based on the interrupt. Upon completion of this processing, the musical tone signals for eight channels are added to each other, and the sum signal is output from the D/A converter unit 2131 shown in Fig. 2. Thereafter, the control returns from the interrupt state to the main flow. Note that the above-described interrupt operation is periodically performed on the basis of the internal hardware timer in the interrupt controller 2031 (Fig. 2). This period is equal to the sampling period when musical tones are output.
  • the main flow chart of Fig. 6 shows a flow of processing operations other than the sound source processing, which are executed by the microcomputer 1011 in a non-interrupt state from the interrupt controller 2031.
  • When the power switch is turned on, the contents of the RAM 2061 (Fig. 2) in the microcomputer 1011 are initialized (S 501 ).
  • Switches of the function keys 1031 (Fig. 1) externally connected to the microcomputer 1011 are scanned (S 502 ), and states of the respective switches are fetched from the input port 2101 to a key buffer area in the RAM 2061.
  • a function key whose state is changed is discriminated, and processing of a corresponding function is executed (S 503 ). For example, a musical tone number and an envelope number are set, and if a rhythm performance function is presented as an optional function, a rhythm number is set.
  • ON keyboard key data on the keyboard 1021 (Fig. 1) are fetched in the same manner as the function keys described above (S 504 ), and keys whose states are changed are discriminated, thereby executing key assignment processing (S 505 ).
  • the keyboard key processing is particularly associated with the present invention, and will be described later.
  • demonstration performance data (sequencer data) are sequentially read out from the control data/waveform data ROM 2121 to execute, e.g., key assignment processing (S 506 ).
  • rhythm data are sequentially read out from the control data/waveform data ROM 2121 to execute, e.g., key assignment processing (S 507 ).
  • the demonstration performance processing (S 506 ) and the rhythm processing (S 507 ) are also particularly associated with the present invention, and will be described in detail later.
  • timer processing to be described below is executed (S 508 ). More specifically, a value of time data which is incremented by interrupt timer processing (S 512 ) (to be described later) is discriminated. The time data value is compared with time control sequencer data sequentially read out for demonstration performance control or time control rhythm data read out for rhythm performance control, thereby executing time control when a demonstration performance in step S 506 or a rhythm performance in step S 507 is performed.
  • In the tone generation processing in step S 509 , pitch envelope processing and the like are executed.
  • an envelope is added to a pitch of a musical tone to be subjected to tone generation processing, and pitch data is set in a corresponding tone generation channel.
  • one flow cycle preparation processing is executed (S 510 ).
  • processing for changing a state of a tone generation channel of a note number corresponding to an ON event detected in the keyboard key processing in step S 505 to an ON event state, and processing for changing a state of a tone generation channel of a note number corresponding to an OFF event to a muting state, and the like are executed.
  • sound source processing is started (S 511 ).
  • the sound source processing is shown in Fig. 8.
  • As a result, musical tone waveform data obtained by accumulating the tones of the eight tone generation channels is stored in a buffer B (to be described later) of the RAM 2061 (Fig. 2).
  • In step S 512 , interrupt timer processing is executed.
  • the value of time data (not shown) on the RAM 2061 (Fig. 2) is incremented by utilizing the fact that the interrupt processing shown in Fig. 7 is executed for every predetermined sampling period. More specifically, a time elapsed from power-on can be detected based on the value of the time data.
  • the time data obtained in this manner is used in time control in the timer processing in step S 508 in the main flow chart shown in Fig. 6, as described above.
  • In step S 513 , the content of the buffer area is latched by the latch 3011 (Fig. 4) of the D/A converter unit 2131.
  • a waveform addition area on the RAM 2061 is cleared (S 513 ). Then, sound source processing is executed in units of tone generation channels (S 514 to S 521 ). After the sound source processing for the eighth channel is completed, waveform data obtained by adding those for eight channels is obtained in a predetermined buffer area B.
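  • The structure of the interrupt routine just described might look roughly as follows in C; the helper names, the struct layout, and the stub bodies are assumptions, and only the overall flow (output buffer B, advance the software timer, clear B, then process each of the eight channels according to its assigned sound source method) reflects the description above.

```c
#include <stdint.h>

#define NUM_CHANNELS 8

enum method { PCM, DPCM, FM, TM };                     /* sound source method No. S */

struct channel { enum method s; /* per-method data in the Fig. 12 formats would follow */ };

static struct channel ch[NUM_CHANNELS];                /* tone generation channel areas on the RAM 2061 */
static int32_t  buffer_b;                              /* waveform accumulation buffer B                */
static uint32_t time_data;                             /* incremented once per sampling period          */

/* stub helpers standing in for the latch 3011 and the per-method routines */
static void latch_output(int32_t sample)                 { (void)sample; }
static void pcm_channel (struct channel *c, int32_t *b)  { (void)c; (void)b; }   /* Fig. 14 */
static void dpcm_channel(struct channel *c, int32_t *b)  { (void)c; (void)b; }   /* Fig. 16 */
static void fm_channel  (struct channel *c, int32_t *b)  { (void)c; (void)b; }   /* Fig. 17 */
static void tm_channel  (struct channel *c, int32_t *b)  { (void)c; (void)b; }   /* Fig. 19 */

static void sampling_interrupt(void)                   /* entered once per sampling period */
{
    latch_output(buffer_b);                            /* previous period's 8-channel sum to the D/A stage */
    time_data++;                                       /* interrupt timer processing (S 512)               */
    buffer_b = 0;                                      /* clear the waveform addition area                 */
    for (int i = 0; i < NUM_CHANNELS; ++i) {           /* per-channel sound source processing (S 514 to S 521) */
        switch (ch[i].s) {                             /* dispatch on the method assigned to the channel   */
        case PCM:  pcm_channel(&ch[i], &buffer_b);  break;
        case DPCM: dpcm_channel(&ch[i], &buffer_b); break;
        case FM:   fm_channel(&ch[i], &buffer_b);   break;
        case TM:   tm_channel(&ch[i], &buffer_b);   break;
        }
    }
}

int main(void) { sampling_interrupt(); return 0; }
```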
  • Fig. 9 is a schematic flow chart showing the relationship among the processing operations of the flow charts shown in Figs. 6 and 7.
  • This "processing" corresponds to, e.g., "function key processing", or "keyboard key processing” in the main flow chart of Fig. 6.
  • the control enters the interrupt processing, and sound source processing is started (S 602 ).
  • a musical tone signal for one sampling period obtained by accumulating waveform data for eight tone generation channels can be obtained, and is output to the D/A converter unit 2131.
  • the control returns to some processing B in the main flow chart.
  • the above-mentioned operations are repeated while executing sound source processing for each of eight tone generation channels (S 604 to S 611 ).
  • the repetition processing continues as long as musical tones are being produced.
  • The sound source processing executed in step S 511 in Fig. 7 will be described in detail below.
  • the microcomputer 1011 executes sound source processing for eight tone generation channels.
  • the sound source processing data for eight channels are set in areas in units of tone generation channels of the RAM 2061 (Fig. 2), as shown in Fig. 10.
  • the waveform data accumulation buffer B and tone color No. registers X and Y are allocated on the RAM 2061, as shown in Fig. 23.
  • a sound source method is set in (assigned to) each tone generation channel area shown in Fig. 10 by operations to be described in detail later, and thereafter, control data from the control data/waveform data ROM 2121 are set in the area in data formats in units of sound source methods, as shown in Fig. 12.
  • the data formats in the control data/waveform data ROM 2121 will be described in detail later with reference to Fig. 22.
  • different sound source methods can be assigned to tone generation channels, as will be described later.
  • S indicates a sound source method No. as a number for identifying the sound source methods.
  • A represents an address designated when waveform data is read out in the sound source processing; A I , A 1 , and A 2 represent integral parts of current addresses, and directly correspond to addresses of the control data/waveform data ROM 2121 (Fig. 2) where waveform data are stored.
  • A F represents the decimal part of the current address, and is used for interpolating waveform data read out from the control data/waveform data ROM 2121.
  • A E and A L represent end and loop addresses, respectively.
  • P I , P 1 , and P 2 represent integral parts of pitch data
  • P F represents a decimal part of pitch data.
  • X P represents storage of previous sample data
  • X N represents storage of the next sample data
  • D represents a difference between magnitudes of two adjacent sample data
  • E represents an envelope value.
  • O represents an output value.
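  • As an illustration only, the per-channel data area of Figs. 10 and 12 could be rendered in C roughly as follows; the field widths and fixed-point splits are assumptions, since the patent does not fix them here.

```c
#include <stdint.h>

/* one tone generation channel area on the RAM 2061 (illustrative layout) */
struct tone_channel {
    uint8_t  s;      /* sound source method No.                                */
    uint32_t a_i;    /* integral part of the current waveform address (A I)    */
    uint16_t a_f;    /* decimal (fractional) part of the current address (A F) */
    uint32_t a_e;    /* end address (A E)                                      */
    uint32_t a_l;    /* loop address (A L)                                     */
    uint32_t p_i;    /* integral part of the pitch data (P I)                  */
    uint16_t p_f;    /* decimal part of the pitch data (P F)                   */
    int32_t  x_p;    /* previous sample data (X P)                             */
    int32_t  x_n;    /* next sample data (X N)                                 */
    int32_t  d;      /* difference between the two adjacent sample data (D)    */
    int32_t  e;      /* envelope value (E)                                     */
    int32_t  o;      /* output value (O)                                       */
};

int main(void) { struct tone_channel c = {0}; return (int)c.s; }
```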
  • Pitch data (P I , P F ) is added to the present address (S 1001 ).
  • the pitch data corresponds to the type of a key determined as an ON key of the keyboard 1021 shown in Fig. 1.
  • It is then checked if the integral part A I of the summed address is changed (S 1002 ). If NO in step S 1002 , an interpolation data value O corresponding to the decimal part A F of the address is calculated by the arithmetic processing D × A F , using the difference D between sample data X N and X P at addresses (A I +1) and A I shown in Fig. 15 (S 1007 ). Note that the difference D has already been obtained by the sound source processing at the previous interrupt timing (see step S 1006 to be described later).
  • the sample data X P corresponding to the integral part A I of the address is added to the interpolation data value O to obtain a new sample data value O (corresponding to X Q in Fig. 15) corresponding to the current address (A I , A F ) (S 1008 ).
  • The sample data value O is multiplied by the envelope value E (S 1009 ), and the result is added to the content of the waveform data buffer B (Fig. 23) in the RAM 2061 (Fig. 2) (S 1010 ).
  • In this case, the sample data X P and the difference D are left unchanged, and only the interpolation data value O is updated in accordance with the address A F .
  • That is, each time the address A F is updated, new sample data X Q is obtained.
  • If the integral part A I of the current address is changed (S 1002 ) as a result of the addition of the current address (A I , A F ) and the pitch data (P I , P F ) in step S 1001 , it is checked if the address A I has reached or exceeded the end address A E (S 1003 ).
  • If YES in step S 1003 , the following loop processing is executed. More specifically, the value (A I - A E ), i.e., the difference between the updated current address and the end address A E , is added to the loop address A L to obtain a new current address (A I , A F ). A loop reproduction is started from the integral part A I of the obtained new current address (S 1004 ).
  • the end address A E is an end address of an area of the control data/waveform data ROM 2121 (Fig. 2) where PCM waveform data are stored.
  • the loop address A L is an address of a position where a player wants to repeat an output of a waveform.
  • If NO in step S 1003 , the processing in step S 1004 is not executed.
  • Sample data is then updated.
  • sample data corresponding to the new updated current address A I and the immediately preceding address (A I -1) are read out as X N and X P from the control data/waveform data ROM 2121 (Fig. 2) (S 1005 ).
  • the difference so far is updated with a difference D between the updated data X N and X P (S 1006 ).
  • In this manner, waveform data for one tone generation channel is generated by the PCM method.
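  • A hedged C sketch of this PCM processing (steps S 1001 to S 1010 ) is given below; fractional address and pitch parts are shown as floats for readability, the ROM contents are placeholders, and the sample/difference bookkeeping follows one consistent reading of the description.

```c
#include <stdint.h>

#define ROM_LEN 64
static const int16_t wave_rom[ROM_LEN] = { 0 };    /* PCM waveform data (placeholder contents) */

struct pcm_ch {
    uint32_t a_i;  float a_f;       /* current address: integral and fractional parts */
    uint32_t a_e, a_l;              /* end and loop addresses                          */
    uint32_t p_i;  float p_f;       /* pitch data: integral and fractional parts       */
    int32_t  x_p, x_n, d;           /* adjacent samples and their difference           */
    float    e;                     /* envelope value                                  */
};

static void pcm_channel(struct pcm_ch *c, int32_t *buffer_b)
{
    uint32_t old_ai = c->a_i;
    c->a_f += c->p_f;                       /* S 1001: add pitch data to the current address */
    c->a_i += c->p_i + (uint32_t)c->a_f;
    c->a_f -= (uint32_t)c->a_f;             /* keep only the fractional part                 */

    if (c->a_i != old_ai) {                 /* S 1002: integral part changed?                */
        if (c->a_i >= c->a_e)               /* S 1003/S 1004: loop back past the end address */
            c->a_i = c->a_l + (c->a_i - c->a_e);
        c->x_p = wave_rom[c->a_i];          /* S 1005: re-read the two adjacent samples      */
        c->x_n = wave_rom[c->a_i + 1];      /* (one consistent reading; bounds checks omitted) */
        c->d   = c->x_n - c->x_p;           /* S 1006: update the difference D               */
    }
    float o = c->x_p + c->d * c->a_f;       /* S 1007/S 1008: interpolated sample X Q        */
    *buffer_b += (int32_t)(o * c->e);       /* S 1009/S 1010: apply envelope E, add to buffer B */
}

int main(void) { struct pcm_ch c = {0}; int32_t b = 0; c.e = 1.0f; pcm_channel(&c, &b); return (int)b; }
```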
  • sample data X P corresponding to an address A I of the control data/waveform data ROM 2121 (Fig. 2) is obtained by adding sample data corresponding to an address (A I -1) (not shown) to a difference between the sample data corresponding to the address (A I -1) and sample data corresponding to the address A I .
  • a difference D with sample data at the next address (A I +1) is written at the address A I of the control data/waveform data ROM 2121.
  • Sample data at the next address (A I +1) is obtained by X P + D.
  • sample data corresponding to the current address A I +A F is obtained by X P + D ⁇ A F .
  • a difference D between sample data corresponding to the current address and the next address is read out from the control data/waveform data ROM 2121, and is added to the current sample data to obtain the next sample data, thereby sequentially forming waveform data.
  • In the DPCM method, when a waveform such as a voice or a musical tone, which generally has only small differences between adjacent samples, is to be quantized, quantization can be performed with a smaller number of bits than in the normal PCM method.
  • Variables in the flow chart are the DPCM data in Table 1 shown in Fig. 12, which data are stored in the corresponding tone generation channel area (Fig. 10) on the RAM 2061 (Fig. 2).
  • Pitch data (P I , P F ) is added to the present address (A I , A F ) (S 1101 ).
  • If NO in step S 1102 , an interpolation data value O corresponding to the decimal part A F of the address is calculated by the arithmetic processing D × A F , using the difference D at the address A I in Fig. 16 (S 1114 ). Note that the difference D has already been obtained by the sound source processing at the previous interrupt timing (see steps S 1106 and S 1110 to be described later).
  • the interpolation data value O is added to sample data X P corresponding to the integral part A I of the address to obtain a new sample data value O (corresponding to X Q in Fig. 16) corresponding to the current address (A I , A F ) (S 1115 ).
  • the sample data value O is multiplied with an envelope value E (S 1116 ), and the obtained value is added to a value stored in the waveform data buffer B (Fig. 23) in the RAM 2061 (Fig. 2) (S 1117 ).
  • the sample data X P and the difference D are left unchanged, and only the interpolation data O is updated in accordance with the address A F .
  • new sample data X Q is obtained.
  • If the integral part A I of the present address is changed (S 1102 ) as a result of the addition of the current address (A I , A F ) and the pitch data (P I , P F ) in step S 1101 , it is checked if the address A I has reached or exceeded the end address A E (S 1103 ).
  • If NO in step S 1103 , sample data corresponding to the integral part A I of the updated present address is calculated by the following loop processing in steps S 1104 to S 1107 . More specifically, the value before the integral part A I of the present address was changed is stored in a variable "old A I " (see the column of DPCM in Table 1 shown in Fig. 12). This can be realized by repeating the processing in step S 1106 or S 1113 (to be described later).
  • The old A I value is sequentially incremented in step S 1106 , and the differential waveform data on the control data/waveform data ROM 2121 (Fig. 2) addressed by the incremented old A I values are read out as D in step S 1107 .
  • the readout data D are sequentially accumulated on sample data X P in step S 1105 .
  • As a result, the sample data X P has a value corresponding to the integral part A I of the changed current address.
  • When the sample data X P corresponding to the integral part A I of the current address is obtained in this manner, YES is determined in step S 1104 , and the control starts the arithmetic processing of the interpolation value (S 1114 ) described above.
  • If YES in step S 1103 , the control enters the next loop processing.
  • An address value (A I -A E ) exceeding the end address A E is added to the loop address A L , and the obtained address is defined as an integral part A I of a new current address (S 1108 ).
  • sample data X P is initially set as the value of sample data X PL (see the column of DPCM in Table 1 shown in Fig. 12) at the current loop address A L
  • the old A I is set as the value of the loop address A L (S 1109 ).
  • the following processing operations in steps S 1110 to S 1113 are repeated. More specifically, the old A I value is sequentially incremented in step S 1113 , and differential waveform data on the control data/waveform data ROM 2121 designated by the incremented old A I values are read out as data D.
  • the data D are sequentially accumulated on the sample data X P in step S 1112 .
  • the sample data X P has a value corresponding to the integral part A I of the new current address after loop processing.
  • When the sample data X P corresponding to the integral part A I of the new current address is obtained in this manner, YES is determined in step S 1111 , and the control enters the above-mentioned arithmetic processing of the interpolation value (S 1114 ).
  • In this manner, waveform data for one tone generation channel is generated by the DPCM method.
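  • The DPCM processing (steps S 1101 to S 1117 ) can be sketched in the same style; here dpcm_rom[a] is assumed to hold the difference between the samples at addresses a+1 and a, matching the description of the stored differential data, and the loop bookkeeping is one consistent reading of the flow.

```c
#include <stdint.h>

#define ROM_LEN 64
static const int16_t dpcm_rom[ROM_LEN] = { 0 };    /* differential waveform data: dpcm_rom[a] = sample(a+1) - sample(a) */

struct dpcm_ch {
    uint32_t a_i;  float a_f;       /* current address (integral / fractional)            */
    uint32_t a_e, a_l;              /* end and loop addresses                             */
    uint32_t p_i;  float p_f;       /* pitch data (integral / fractional)                 */
    uint32_t old_a_i;               /* integral address reached at the previous sample    */
    int32_t  x_p, x_pl, d;          /* current sample, sample at loop address, difference */
    float    e;                     /* envelope value                                     */
};

static void dpcm_channel(struct dpcm_ch *c, int32_t *buffer_b)
{
    c->a_f += c->p_f;                               /* S 1101: add pitch data                       */
    c->a_i += c->p_i + (uint32_t)c->a_f;
    c->a_f -= (uint32_t)c->a_f;

    if (c->a_i != c->old_a_i) {                     /* S 1102: integral part changed                */
        if (c->a_i >= c->a_e) {                     /* S 1103/S 1108: loop back past the end        */
            c->a_i = c->a_l + (c->a_i - c->a_e);
            c->x_p = c->x_pl;                       /* S 1109: restart from the loop-address sample */
            c->old_a_i = c->a_l;
        }
        while (c->old_a_i != c->a_i) {              /* S 1104-S 1107 / S 1110-S 1113: accumulate    */
            c->x_p += dpcm_rom[c->old_a_i];         /* differences up to the new address            */
            c->old_a_i++;                           /* (bounds checks omitted)                      */
        }
        c->d = dpcm_rom[c->a_i];                    /* difference used for interpolation            */
    }
    float o = c->x_p + c->d * c->a_f;               /* S 1114/S 1115: interpolated sample X Q       */
    *buffer_b += (int32_t)(o * c->e);               /* S 1116/S 1117: envelope E, add to buffer B   */
}

int main(void) { struct dpcm_ch c = {0}; int32_t b = 0; c.e = 1.0f; dpcm_channel(&c, &b); return (int)b; }
```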
  • the sound source processing based on the FM method will be described below.
  • In the FM method, hardware or software elements having the same contents, called "operators", are normally used, and are connected based on connection rules, called algorithms, thereby generating musical tones.
  • the FM method is realized by a software program.
  • The operation of one embodiment executed when the sound source processing is performed using two operators will be described below with reference to the operation flow chart shown in Fig. 17.
  • the algorithm of the processing is shown in Fig. 18.
  • Variables in the flow chart are FM data in Table 1 shown in Fig. 12, which data are stored in the corresponding tone generation channel area (Fig. 10) on the RAM 2061 (Fig. 2).
  • Processing of operator 2 (OP2), which serves as a modulator, is performed first.
  • This begins with pitch processing, i.e., processing for accumulating pitch data which determines the incremental width of the address used to read out waveform data stored in the ROM 2121.
  • In this case, the address consists of only an integral address A 2 .
  • modulation waveform data are stored in the control data/waveform data ROM 2121 (Fig. 2) at sufficiently fine incremental widths.
  • Pitch data P 2 is added to the current address A 2 (S 1301 ).
  • a feedback output F O2 is added to the address A 2 as a modulation input to obtain a new address A M2 (S 1302 ).
  • The feedback output F O2 has already been obtained upon execution of the processing in step S 1305 (to be described later) at the immediately preceding interrupt timing.
  • Sine wave data are stored in the control data/waveform data ROM 2121; the ROM 2121 is addressed by the address A M2 to read out the corresponding data (S 1303 ). The readout sine wave data is multiplied by an envelope value E 2 to obtain an output O 2 (S 1304 ), and the output O 2 is multiplied by the feedback level F L2 to obtain the feedback output F O2 (S 1305 ).
  • This output F O2 serves as an input to the operator 2 (OP2) at the next interrupt timing.
  • the output O 2 is multiplied with a modulation level M L2 to obtain a modulation output M O2 (S 1306 ).
  • the modulation output M O2 serves as a modulation input to an operator 1 (OP1).
  • the control then enters processing of the operator 1 (OP1).
  • This processing is substantially the same as that of the operator 2 (OP2) described above, except that there is no modulation input based on the feedback output.
  • the present address A 1 of the operator 1 (OP1) is added to pitch data P 1 (S 1307 ), and the sum is added to the above-mentioned modulation output M O2 to obtain a new address A M1 (S 1308 ).
  • the value of sine wave data corresponding to this address A M1 (phase) is read out from the control data/waveform data ROM 2121 (S 1309 ), and is multiplied with an envelope value E 1 to obtain a musical tone waveform output O 1 (S 1310 ).
  • This output O 1 is added to a value held in the buffer B (Fig. 23) in the RAM 2061 (S 1311 ), thus completing the FM processing for one tone generation channel.
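  • A compact C sketch of this two-operator FM processing is shown below; the sine-wave ROM lookup is replaced by sinf() over an unnormalized phase and all quantities are floats, which is a simplification of the table-based, fixed-point processing described above.

```c
#include <math.h>
#include <stdio.h>

struct fm_op {
    float a;      /* current address (phase)               */
    float p;      /* pitch increment                       */
    float e;      /* envelope value                        */
    float fl;     /* feedback level F L (modulator only)   */
    float ml;     /* modulation level M L (modulator only) */
    float fo;     /* feedback output from the previous run */
};

/* one sampling period for one tone generation channel; returns the value to be added to buffer B */
static float fm_channel(struct fm_op *op2, struct fm_op *op1)
{
    /* operator 2 (modulator) */
    op2->a += op2->p;                       /* S 1301: pitch accumulation                     */
    float am2 = op2->a + op2->fo;           /* S 1302: add the feedback output as modulation  */
    float o2  = sinf(am2) * op2->e;         /* S 1303/S 1304: sine lookup, envelope           */
    op2->fo   = o2 * op2->fl;               /* S 1305: feedback output for the next period    */
    float mo2 = o2 * op2->ml;               /* S 1306: modulation output to operator 1        */

    /* operator 1 (carrier) */
    op1->a += op1->p;                       /* S 1307: pitch accumulation                     */
    float am1 = op1->a + mo2;               /* S 1308: apply the modulation input             */
    return sinf(am1) * op1->e;              /* S 1309-S 1311: output O 1 (added to buffer B by the caller) */
}

int main(void)
{
    struct fm_op op2 = { .p = 0.02f, .e = 1.0f, .fl = 0.3f, .ml = 1.5f };
    struct fm_op op1 = { .p = 0.01f, .e = 1.0f };
    float b = 0.0f;
    for (int n = 0; n < 4; ++n) b += fm_channel(&op2, &op1);
    printf("%f\n", (double)b);
    return 0;
}
```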
  • the sound source processing based on the TM method will be described below.
  • the principle of the TM method will be described below.
  • Here, f c is called a modified sine wave; it is the carrier signal generation function obtained by accessing, with the carrier phase angle ω c t, the control data/waveform data ROM 2121 (Fig. 2), which stores different sine waveform data in units of phase angle regions.
  • the above-mentioned triangular wave function is modulated by a sum signal obtained by adding a carrier signal generated by the above-mentioned function f c (t) to the modulation signal sin ⁇ m (t) at a ratio indicated by the modulation index I(t).
  • When the modulation index I(t) is zero, a sine wave can be generated, and as the value I(t) is increased, a very deeply modulated waveform can always be generated.
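  • One possible formal reading of this principle is the following; the symbols and the exact form are an assumption based on the description above, not a quotation from the patent.

```latex
% e(t): operator output, T: triangular wave function, f_c: modified sine conversion,
% I(t): modulation index, omega_c / omega_m: carrier and modulation angular frequencies
e(t) = T\bigl( f_c(\omega_c t) + I(t)\,\sin(\omega_m t) \bigr),
\qquad
e(t)\big|_{I(t)=0} = \sin(\omega_c t)
\quad\Longleftrightarrow\quad
T\bigl(f_c(x)\bigr) = \sin(x).
```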
  • Various other signals may be used in place of the modulation signal sin ω m (t); as will be described later, the output of the same operator from the previous arithmetic processing may be fed back at a predetermined feedback level, or an output from another operator may be input.
  • The sound source processing based on the TM method according to the above-mentioned principle will be described below with reference to the operation flow chart shown in Fig. 19.
  • the sound source processing is also performed using two operators like in the FM method shown in Figs. 17 and 18, and the algorithm of the processing is shown in Fig. 20.
  • Variables in the flow chart are TM format data in Table 1 shown in Fig. 12, which data are stored in the corresponding tone generation channel area (Fig. 10) on the RAM 2061 (Fig. 2).
  • the present address A 2 is added to pitch data P 2 (S 1401 ).
  • Modified sine wave data corresponding to the address A 2 (phase) is read out from the control data/waveform data ROM 2121 (Fig. 2) by the modified sine conversion f c , and is output as a carrier signal O 2 (S 1402 ).
  • The carrier signal O 2 is added, as a modulation signal, to a feedback output F O2 , and the sum signal is output as a new address O 2 (S 1403 ).
  • the feedback output F O2 has already been obtained upon execution of processing in step S 1406 (to be described later) at the immediately preceding interrupt timing.
  • the value of a triangular wave corresponding to the carrier signal O 2 is calculated.
  • the above-mentioned triangular wave data are stored in the control data/waveform data ROM 2121 (Fig. 2), and are obtained by addressing the ROM 2121 by the address O 2 to read out the corresponding triangular wave data (S 1404 ).
  • the triangular wave data is multiplied with an envelope value E 2 to obtain an output O 2 (S 1405 ).
  • The output O 2 is multiplied by a feedback level F L2 to obtain a feedback output F O2 (S 1406 ).
  • the output F O2 serves as an input to the operator 2 (OP2) at the next interrupt timing.
  • the output O 2 is multiplied with a modulation level M L2 to obtain a modulation output M O2 (S 1407 ).
  • the modulation output M O2 serves as a modulation input to an operator 1 (OP1).
  • the control then enters processing of the operator 1 (OP1).
  • This processing is substantially the same as that of the operator 2 (OP2) described above, except that there is no modulation input based on the feedback output.
  • the present address A 1 of the operator 1 is added to pitch data P 1 (S1408), and the sum is subjected to the above-mentioned modified sine conversion to obtain a carrier signal O 1 (S1409).
  • The carrier signal O 1 is added to the above-mentioned modulation output M O2 to obtain a new value O 1 (S1410), and the value O 1 is subjected to triangular wave conversion (S1411). The converted value is multiplied by an envelope value E 1 to obtain a musical tone waveform output O 1 (S1412).
  • the output O 1 is added to a value held in the buffer B (Fig. 23) in the RAM 2061 (Fig. 2) (S1413), thus completing the TM processing for one tone generation channel.
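  • A hedged C sketch of this two-operator TM processing follows; the modified sine conversion f c and the triangular wave function are replaced by simple closed-form stand-ins, chosen only so that the triangular conversion of the unmodulated carrier reproduces a sine wave as stated above, and the actual ROM tables in the patent may differ.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static float fc(float phase)                 /* modified sine conversion (placeholder definition) */
{
    return (float)(M_PI / 2.0) * sinf(phase);
}

static float tri(float y)                    /* triangular wave, period 2*pi, peak 1 at pi/2 */
{
    y = fmodf(y, 2.0f * (float)M_PI);                  /* reduce to (-2*pi, 2*pi) */
    if (y >  (float)M_PI) y -= 2.0f * (float)M_PI;     /* fold into [-pi, pi]     */
    if (y < -(float)M_PI) y += 2.0f * (float)M_PI;
    float a = fabsf(y);
    return (a <= (float)M_PI / 2.0f)
               ? (2.0f / (float)M_PI) * y
               : (y > 0.0f ? 1.0f : -1.0f) * (2.0f - (2.0f / (float)M_PI) * a);
}

struct tm_op { float a, p, e, fl, ml, fo; }; /* address, pitch, envelope, feedback level, modulation level, feedback output */

static float tm_channel(struct tm_op *op2, struct tm_op *op1)
{
    /* operator 2 (modulator) */
    op2->a += op2->p;                        /* S 1401: pitch accumulation                        */
    float o2 = fc(op2->a);                   /* S 1402: carrier signal via modified sine          */
    o2 = o2 + op2->fo;                       /* S 1403: add feedback output as modulation signal  */
    o2 = tri(o2) * op2->e;                   /* S 1404/S 1405: triangular conversion, envelope    */
    op2->fo = o2 * op2->fl;                  /* S 1406: feedback output for the next period       */
    float mo2 = o2 * op2->ml;                /* S 1407: modulation output to operator 1           */

    /* operator 1 (carrier) */
    op1->a += op1->p;                        /* S 1408: pitch accumulation                        */
    float o1 = fc(op1->a) + mo2;             /* S 1409/S 1410: modified sine plus modulation      */
    return tri(o1) * op1->e;                 /* S 1411-S 1413: output O 1 (added to buffer B by the caller) */
}

int main(void)
{
    struct tm_op op2 = { .p = 0.02f, .e = 1.0f, .fl = 0.2f, .ml = 1.0f };
    struct tm_op op1 = { .p = 0.01f, .e = 1.0f };
    float b = 0.0f;
    for (int n = 0; n < 4; ++n) b += tm_channel(&op2, &op1);
    printf("%f\n", (double)b);
    return 0;
}
```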
  • The sound source processing operations based on the four methods, i.e., the PCM, DPCM, FM, and TM methods, have been described above.
  • the FM and TM methods are modulation methods, and, in the above examples, two-operator processing operations are executed based on the algorithms shown in Figs. 18 and 20.
  • more operators may be used, and the algorithms may be more complicated.
  • The operation of the keyboard key processing (S 505 ) in the main flow chart shown in Fig. 6 when an actual electronic musical instrument is played will be described in detail below.
  • data in units of sound source methods are set in the corresponding tone generation channel areas (Fig. 10) on the RAM 2061 (Fig. 2) by the function keys 1031 (Fig. 1).
  • the function keys 1031 are connected to, e.g., an operation panel of the electronic musical instrument via the input port 2101 (Fig. 2).
  • Split points based on key codes and velocities, and two tone colors, are designated in advance, thus allowing characteristic assignment of tone colors to the tone generation channels.
  • the split points and the tone colors are designated, as shown in Fig. 21 or 27.
  • Fig. 21 shows an arrangement of some function keys 1031 (Fig. 1).
  • A keyboard split point designation switch 15011 comprises a slide switch which has a click feel, and can designate a split point based on the key codes of ON keys in units of keyboard keys.
  • Two tone colors, e.g., "piano" and "guitar", are designated: the X tone color is designated for the bass tone range and the Y tone color for the high tone range, with the above-mentioned split point as the boundary.
  • a tone color designated first is set as the X tone color, and for example, a red LED is turned on.
  • a tone color designated next is set as the Y tone color, and a green LED is turned on.
  • the LEDs correspond to the display unit 1091 (Fig. 1).
  • a split point based on velocities is designated by a velocity split point designation switch 15031 shown in Fig. 27.
  • an X tone color is designated for ON events having a velocity of 60 or less
  • a Y tone color is designated for ON events having a velocity faster than 60.
  • the X and Y tone colors are designated by tone color switches 20021 (Fig. 27) in the same manner as in Fig. 21 (the case of a split point based on key codes).
  • the control data/waveform data ROM 2121 (Fig. 2) stores various tone color parameters in data formats shown in Fig. 22. More specifically, tone color parameters for the four sound source methods, i.e., the PCM, DPCM, FM, and TM methods are stored in units of instruments corresponding to the tone color switches 15021 of "piano" as the tone color No. 1, "guitar” as the tone color No. 2, and the like shown in Fig. 21.
  • the tone color parameters for the respective sound source methods are stored in the data formats in units of sound source methods shown in Fig. 12.
  • the buffer B for accumulating waveform data for eight tone generation channels, and the tone color No. registers for holding the tone color Nos. of the X and Y tone colors are allocated on the RAM 2061 (Fig. 2).
  • Tone color parameters in units of sound source methods which have the data formats shown in Fig. 22, are set in the tone generation channel areas (Fig. 10) for the eight channels of the RAM 2061, and sound source processing is executed based on these parameters. Processing operations for assigning tone color parameters to the tone generation channels in accordance with ON events on the basis of the split point and the two, i.e., X and Y tone colors designated by the function keys shown in Fig. 21 or 27 will be described below in turn.
  • the embodiment A is for an embodiment having the arrangement shown in Fig. 21 as some function keys 1031 shown in Fig. 1.
  • key codes of ON keys are split into two groups at the split point.
  • musical tone signals in two, i.e., X and Y tone colors designated upon operation of the tone color switches 15021 (Fig. 21) by the player are generated.
  • one of the four sound source methods is selected in accordance with the magnitude of a velocity (corresponding to an ON key speed) obtained upon an ON event of a key on the keyboard 1021 (Fig. 1). Tone color generation is performed on the basis of the tone colors and the sound source method determined in this manner.
  • musical tone signals in the X tone color are generated using the first to fourth tone generation channels (ch1 to ch4), and musical tone signals in the Y tone color are generated using the fifth to eighth tone generation channels (ch5 to ch8).
  • Fig. 25 is an operation flow chart of the embodiment A of the keyboard key processing in step S 505 in the main flow chart shown in Fig. 6.
  • tone color parameters of the X tone color designated beforehand by the player are set in one of the first to fourth tone generation channels (Fig. 32) by the following processing operations in steps S 1802 to S 1805 and S 1810 to S 1813 . It is checked if the first to fourth tone generation channels include an empty channel (S 1802 ).
  • If it is determined that there is no empty channel, and NO is determined in step S 1802 , no assignment is performed.
  • tone color parameters for the X tone color, and corresponding to one of the PCM, DPCM, TM, and FM methods are set in the empty channel in accordance with the velocity value as follows.
  • If YES in step S 1803 , i.e., if it is determined that the velocity value is equal to or smaller than 63, it is then checked if the value is equal to or smaller than 31 (almost corresponding to piano p) (S 1805 ).
  • If YES in step S 1805 , the tone color parameters for the X tone color are set in the FM format shown in Fig. 12 in one tone generation channel area (empty channel area) of the first to fourth channels (Fig. 2) to which the ON key is assigned on the RAM 2061. More specifically, sound source method No. data S representing the FM method is set in the first area of the corresponding tone generation channel area (see the column of FM in Fig. 12). Then, the tone color parameters corresponding to the tone color of the tone color No. presently stored in the X tone color No. register (Fig. 23) on the RAM 2061 are read out from a data architecture portion shown in Fig. 22 of the control data/waveform data ROM 2121, and are set in the second and subsequent areas of the tone generation channel area (S 1813 ).
  • If NO in step S 1805 , tone color parameters for the X tone color are set in the TM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 1812 ). In this case, the parameters are set in the same manner as in step S 1813 .
  • If NO in step S 1803 , it is then checked if the velocity value is equal to or smaller than 95 (S 1804 ).
  • If YES in step S 1804 , tone color parameters for the X tone color are set in the DPCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 1811 ). In this case, the parameters are set in the same manner as in step S 1813 .
  • If NO in step S 1804 , tone color parameters for the X tone color are set in the PCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 1810 ). In this case, the parameters are set in the same manner as in step S 1813 .
  • tone color parameters for the Y tone color designated in advance by the player are set in one of the fifth to eighth tone generation channels (Fig. 32) by the following processing in steps S 1806 to S 1809 and S 1814 to S 1817 .
  • If it is determined that there is no empty channel, and NO is determined in step S 1806 , no assignment is performed.
  • tone color parameters for the Y tone color, and corresponding to one of the PCM, DPCM, TM, and FM methods are set in the empty channel in accordance with the velocity value as follows.
  • If YES in step S 1807 , i.e., if it is determined that the velocity value is equal to or smaller than 63, it is then checked if the value is equal to or smaller than 31 (S 1808 ).
  • If YES in step S 1808 , tone color parameters for the Y tone color are set in the FM format shown in Fig. 12 in one of the fifth to eighth channels to which the ON key is assigned. More specifically, sound source method No. data S representing the FM method is set in the first area of the corresponding tone generation channel area (see the column of FM in Fig. 12). Then, the tone color parameters corresponding to the tone color of the tone color No. presently stored in the Y tone color No. register (Fig. 23) on the RAM 2061 are read out from a data architecture portion shown in Fig. 22 of the control data/waveform data ROM 2121, and are set in the second and subsequent areas of the tone generation channel area (S 1814 ).
  • If NO in step S 1808 , tone color parameters for the Y tone color are set in the TM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 1815 ). In this case, the parameters are set in the same manner as in step S 1814 .
  • If NO in step S 1807 , it is checked if the velocity value is equal to or smaller than 95 (S 1809 ).
  • If YES in step S 1809 , tone color parameters for the Y tone color are set in the DPCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 1816 ). In this case, the parameters are set in the same manner as in step S 1814 .
  • If NO in step S 1809 , tone color parameters for the Y tone color are set in the PCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 1817 ). In this case, the parameters are set in the same manner as in step S 1814 .
  • one of the X and Y tone colors is selected in accordance with whether the key code is lower or higher than the split point, and one of the four sound source methods is selected in accordance with the magnitude of an ON key velocity, thus generating musical tones.
  • the tone generation channels to which the X and Y tone colors are assigned are fixed as the first to fourth tone generation channels and the fifth to eighth tone generation channels, respectively.
  • channels to which each tone color is assigned are not fixed, and the X and Y tone colors are sequentially assigned to empty channels, as shown in Fig. 33.
  • Fig. 26 is an operation flow chart of the embodiment B of the keyboard key processing in step S 505 in the main flow chart shown in Fig. 6. As shown in Fig. 26, it is checked if the first to eighth channels include an empty channel (S 1901 ). If there is an empty channel, tone color assignment is performed. The processing operations in steps S 1902 to S 1916 are the same as those in steps S 1801 , S 1803 to S 1805 , and S 1806 to S 1817 in the embodiment A.
  • the embodiment C corresponds to a case wherein processing for a key code and processing for a velocity in the embodiment A are replaced.
  • the embodiment C is for an embodiment having an arrangement shown in Fig. 27 as some function keys 1031 shown in Fig. 1, and velocities of ON keys are split into two groups at the split point upon operation of the velocity split point designation switch 20011 (Fig. 27) by the player. Then, musical tone signals are generated in the two, i.e., X and Y tone colors designated upon operation of the tone color switches 20021 (Fig. 27) by the player. In this case one of the four sound source methods is selected in accordance with a key code value of an ON key on the keyboard 1021 (Fig. 1) by the player. Tone color generation is performed in accordance with the tone colors and the sound source method determined in this manner. The X and Y tone colors are assigned to the tone generation channels, as shown in Fig. 32, in the same manner as in the embodiment A.
  • Fig. 28 is an operation flow chart of the embodiment C of the keyboard key processing in step S 505 in the main flow chart of Fig. 6.
  • It is checked if the velocity of a key determined as an "ON key" in step S 504 in the main flow chart in Fig. 6 is equal to or smaller than the velocity at the split point designated in advance by the player (S 2101 ).
  • tone color parameters for the X tone color designated in advance by the player are set in one of the first to fourth tone generation channels (Fig. 32) by the following processing in steps S 2102 to S 2105 and S 2110 to S 2113 .
  • If it is determined that there is no empty channel, and NO is determined in step S 2102 , no assignment is performed.
  • tone color parameters for the X tone color, and corresponding to one of the PCM, DPCM, TM, and FM methods are set in the empty channel in accordance with the key code value as follows.
  • If YES in step S 2103 , i.e., if it is determined that the key code value is equal to or larger than 32, it is then checked if the value is equal to or larger than 48 (S 2105 ).
  • tone color parameters for the X tone color are set in the FM format shown in Fig. 12 in one of the first to fourth tone generation channel areas on the RAM 2061 (Fig. 2) to which the ON key is assigned. In this case, the parameters are set in the same manner as in step S 1813 in the embodiment A.
  • tone color parameters for the X tone color are set in the TM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 2112 ). In this case, the parameters are set in the same manner as in step S 1813 in the embodiment A.
  • If NO in step S 2103 , it is checked if the key code value is equal to or larger than 16 (S 2104 ).
  • tone color parameters for the X tone color are set in the DPCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 2111 ). In this case, the parameters are set in the same manner as in step S1813 in the embodiment A.
  • tone color parameters for the X tone color are set in the PCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is assigned (S 2110 ). In this case, the parameters are set in the same manner as in step S 1813 in the embodiment A.
  • tone color parameters for the Y tone color designated in advance by the player are set in one of the fifth to eighth tone generation channels (Fig. 32) by the following processing in steps S 2106 to S 2109 and S 2114 to S 2117 .
  • If it is determined that there is no empty channel, and NO is determined in step S 2106 , no assignment is performed.
  • If there is an empty channel, and YES is determined in step S 2106 , it is checked in the processing in steps S 2107 to S 2109 , which have the same judgment conditions as those in steps S 2103 to S 2105 , whether the key code value falls within the range 48 ≤ K ≤ 63, 32 ≤ K < 48, 16 ≤ K < 32, or 0 ≤ K < 16.
  • In steps S 2114 to S 2117 , tone color parameters for the Y tone color and corresponding to one of the FM, TM, DPCM, and PCM methods are set in an empty channel.
  • the tone generation channels to which the X and Y tone colors are assigned are fixed as the first to fourth tone generation channels and the fifth to eighth tone generation channels, respectively.
  • channels to which each tone color is assigned are not fixed, and the X and Y tone colors are sequentially assigned to empty channels, as shown in Fig. 33 like in the embodiment B.
  • Fig. 29 is an operation flow chart of the embodiment D of the keyboard key processing in step S 505 in the main flow chart shown in Fig. 6. As shown in Fig. 29, it is checked if the first to eighth channels include an empty channel (S 2201 ). If there is an empty channel, tone color assignment is performed.
  • the processing operations in steps S 2202 to S 2216 are the same as those in steps S 2101 , S 2103 to S 2105 , and S 2106 to S 2117 in the embodiment C shown in Fig. 28.
  • different tone colors and sound source methods can be assigned to the tone generation channels in accordance with whether the ON key plays a melody or accompaniment part.
  • Fig. 30 is an operation flow chart of an embodiment A of the demonstration performance processing in step S 506 in the main flow chart shown in Fig. 6.
  • X and Y tone colors are assigned to the tone generation channels, as shown in Fig. 32, in the same manner as the embodiment A or C of the keyboard key processing.
  • If YES in step S 2301 , i.e., if it is determined that the key plays the melody part, it is checked if the first to fourth tone generation channels include an empty channel (S 2302 ).
  • If there is no empty channel, and NO is determined in step S 2302 , no assignment is performed.
  • tone color parameters for the X tone color are set in the FM format shown in Fig. 12 in one tone generation channel area of the first to fourth channels on the RAM 2061 (Fig. 2) to which the ON key is assigned. More specifically, sound source method No. data S representing the FM method is set in the first area of the corresponding tone generation channel area (see the column of FM in Fig. 12). Then, the tone color parameters corresponding to the tone color of the tone color No. presently stored in the X tone color No. register (Fig. 23) on the RAM 2061 are read out from a data architecture portion shown in Fig. 22 of the control data/waveform data ROM 2121, and are set in the second and subsequent areas of the tone generation channel area (S 2303 ).
  • If NO in step S 2301 , it is checked if the fifth to eighth tone generation channels include an empty channel (S 2304 ).
  • If there is no empty channel, and NO is determined in step S 2304 , no assignment is performed.
  • tone color parameters for the Y tone color are set in the DPCM format shown in Fig. 12 in one tone generation channel area of the fifth to eighth channels on the RAM 2061 (Fig. 2) to which the ON key is assigned. More specifically, sound source method No. data S representing the DPCM method is set in the first area of the corresponding tone generation channel area (see the column of DPCM in Fig. 12). Then, the tone color parameters corresponding to the tone color of the tone color No. presently stored in the Y tone color No. register (Fig. 23) on the RAM 2061 are read out from a data architecture portion shown in Fig. 22 of the control data/waveform data ROM 2121, and are set in the second and subsequent areas of the tone generation channel area (S 2305 ).
  • Fig. 31 is an operation flow chart of an embodiment B of demonstration performance processing in step S 506 in the main flow chart of Fig. 6.
  • channels to which each tone color is assigned are not fixed, and the X and Y tone colors are sequentially assigned to empty channels, as shown in Fig. 33 like in the embodiment B or D of the keyboard key processing.
  • As shown in Fig. 31, it is checked if the first to eighth channels include an empty channel (S 2401 ). If there is an empty channel, tone color assignment is performed.
  • the processing operations in steps S 2402 to S 2404 are the same as those in steps S 2302 to S 2304 in the embodiment A of the demonstration performance processing shown in Fig. 30.
  • two tone colors are switched to have a split point for key code or velocity values as a boundary, and sound source methods are switched in units of tone colors in accordance with the velocity or key code values.
  • the sound source methods may be switched to have a split point as a boundary, and tone colors may be switched in units of sound source methods in accordance with, e.g., velocity values.
  • the number of split points is not limited to one, and a plurality of tone colors or sound source methods may be switched in regions having two or more split points as boundaries.
  • performance data associated with the split point is not limited to a key code or a velocity.
  • tone colors and sound source methods can be assigned to tone generation channels in accordance with a melody or accompaniment part in a demonstration performance (automatic performance) mode.
  • tone colors and sound source methods may be switched in accordance with whether a player plays a melody or accompaniment part.
  • an assignment state of tone generation is changed in a permanent combination of tone colors and sound source methods in accordance with a melody or accompaniment part.
  • tone colors or sound source methods may be changed, and the kind of parameters may be desirably selected.
  • Fig. 34 is a block diagram showing the overall arrangement of this embodiment.
  • components other than an external memory 1162 are constituted in one chip.
  • two, i.e., master and slave CPUs (central processing units) exchange data to share sound source processing for generating musical tones.
  • 8 channels are processed by a master CPU 1012, and the remaining 8 channels are processed by a slave CPU 1022.
  • the sound source processing is executed in a software manner, and sound source methods such as PCM (Pulse Code Modulation) and DPCM (Differential PCM) methods, and sound source methods based on modulation methods such as FM and phase modulation methods are assigned in units of tone generation channels.
  • a sound source method is automatically designated for tone colors of specific instruments, e.g., a trumpet, a tuba, and the like.
  • a sound source method can be selected by a selection switch, and/or can be automatically selected in accordance with a performance tone range, a performance strength such as a key touch, and the like.
  • different sound source methods can be assigned to two channels for one ON event of a key. That is, for example, the PCM method can be assigned to an attack portion, and the FM method can be assigned to a sustain portion.
  • the external memory 1162 stores musical tone control parameters such as target values of envelope values, a musical tone waveform in the PCM (pulse code modulation) method, a musical tone differential waveform in the DPCM (differential PCM) method, and the like.
  • the master CPU (to be abbreviated to as an MCPU hereinafter) 1012 and the slave CPU (to be abbreviated to as an SCPU hereinafter) 1022 access the data on the external memory 1162 to execute sound source processing while sharing processing operations. Since these CPUs 1012 and 1022 commonly use waveform data of the external memory 1162, a contention may occur when data is loaded from the external memory 1162. In order to prevent this contention, the MCPU 1012 and the SCPU 1022 output an address signal for accessing the external memory, and external memory control data from output terminals 1112 and 1122 of an access address contention prevention circuit 1052 via an external memory access address latch unit 1032 for the MCPU, and an external memory access address latch unit 1042 for the SCPU. Thus, a contention between addresses from the MCPU 1012 and the SCPU 1022 can be prevented.
  • Data read out from the external memory 1162 on the basis of the designated address is input from an external memory data input terminal 1152 to an external memory selector 1062.
  • the external memory selector 1062 separates the readout data into data to be input to the MCPU 1012 via a data bus MD and data to be input to the SCPU 1022 via a data bus SD on the basis of a control signal from the address contention prevention circuit 1052, and inputs the separated data to the MCPU 1012 and the SCPU 1022.
  • a contention between readout data can also be prevented.
  • After the MCPU 1012 and the SCPU 1022 perform corresponding sound source processing operations of the input data by software, musical tone data of all the tone generation channels are accumulated, and a left-channel analog output and a right-channel analog output are then output from a left output terminal 1132 of a left D/A converter unit 1072 and a right output terminal 1142 of a right D/A converter unit 1082, respectively.
  • Fig. 35 is a block diagram showing an internal arrangement of the MCPU 1012.
  • a control ROM 2012 stores a musical tone control program (to be described later), and sequentially outputs program words (commands) addressed by a ROM address controller 2052 via a ROM address decoder 2022.
  • This embodiment employs a next address method. More specifically, the word length of each program word is, e.g., 28 bits, and a portion of a program word is input to the ROM address controller 2052 as a lower bit portion (intra-page address) of an address to be read out next.
  • the MCPU 1012 may comprise a conventional program counter type CPU instead of the control ROM 2012.
  • a command analyzer 2072 analyzes operation codes of commands output from the control ROM 2012, and sends control signals to the respective units of the circuit so as to execute designated operations.
  • the RAM address controller 2042 designates an address of a corresponding internal register of a RAM 2062.
  • the RAM 2062 stores various musical tone control data (to be described later with reference to Figs. 49 and 50) for eight tone generation channels, and includes various buffers (to be described later) or the like.
  • the RAM 2062 is used in sound source processing (to be described later).
  • an ALU unit 2082 and a multiplier 2092 respectively execute an addition/subtraction, and a multiplication on the basis of an instruction from the command analyzer 2072.
  • an interrupt controller 2032 supplies a reset cancel signal A to the SCPU 1022 (Fig. 34) and an interrupt signal to the D/A converter units 1072 and 1082 (Fig. 34) at predetermined time intervals.
  • the MCPU 1012 shown in Fig. 35 comprises the following interfaces associated with various buses: an interface 2152 for an address bus MA for addressing the external memory 1162 to access it; an interface 2162 for the data bus MD for exchanging the accessed data with the MCPU 1012 via the external memory selector 1062; an interface 2122 for a bus Ma for addressing the internal RAM of the SCPU 1022 so as to execute data exchange with the SCPU 1022; an interface 2132 for a data bus D OUT used by the MCPU 1012 to write data in the SCPU 1022; an interface 2142 for a data bus D IN used by the MCPU 1012 to read data from the SCPU 1022; an interface 2172 for a D/A data transfer bus for transferring final output waveforms to the left and right D/A converter units 1072 and 1082; and input and output ports 2102 and 2112 for exchanging data with an external switch unit or a keyboard unit (Figs. 45, and 46).
  • Fig. 36 shows the internal arrangement of the SCPU 1022.
  • Since the SCPU 1022 executes sound source processing upon reception of a processing start signal from the MCPU 1012, it does not comprise an interrupt controller corresponding to the controller 2032 (Fig. 35), I/O ports corresponding to the ports 2102 and 2112 (Fig. 35) for exchanging data with an external circuit, or an interface corresponding to the interface 2172 (Fig. 35) for outputting musical tone signals to the left and right D/A converter units 1072 and 1082.
  • Other circuits 3012, 3022, and 3042 to 3092 have the same functions as those of the circuits 2012, 2022, and 2042 to 2092 shown in Fig. 35.
  • Interfaces 3032 and 3102 to 3132 are arranged in correspondence with the interfaces 2122 to 2162 shown in Fig. 35.
  • the internal RAM address of the SCPU 1022 designated by the MCPU 1012 is input to the RAM address controller 3042.
  • the RAM address controller 3042 designates an address of the RAM 3062.
  • accumulated waveform data for eight tone generation channels generated by the SCPU 1022 and held in the RAM 3062 are output to the MCPU 1012 via the data bus D IN . This will be described later.
  • function keys 8012, keyboard keys 8022, and the like shown in Figs. 45 and 46 are connected to the input port 2102 of the MCPU 1012. These portions substantially constitute an instrument operation unit.
  • the D/A converter unit as one characteristic feature of the present invention will be described below.
  • Fig. 43 shows the internal arrangement of the left or right D/A converter unit 1072 or 1082 (the two converter units have the same contents) shown in Fig. 34.
  • One sample data of a musical tone generated by sound source processing is input to a latch 6012 via a data bus.
  • When the clock input terminal of the latch 6012 receives a sound source processing end signal from the command analyzer 2072 (Fig. 35) of the MCPU 1012, musical tone data for one sample on the data bus is latched by the latch 6012, as shown in Fig. 44.
  • a time required for the sound source processing changes depending on the sound source processing software program. For this reason, the timing at which each sound source processing is ended and musical tone data is latched by the latch 6012 is not fixed. Therefore, as shown in Fig. 42, an output from the latch 6012 cannot be directly input to a D/A converter 6032.
  • the output from the latch 6012 is latched by a latch 6022 in response to an interrupt signal equal to a sampling clock interval output from the interrupt controller 2032, and is output to the D/A converter 6032 at predetermined time intervals.
  • the MCPU 1012 is mainly operated, and repetitively executes a series of processing operations in steps S402 to S410, as shown in the main flow chart of Fig. 37.
  • the sound source processing is performed by interrupt processing. More specifically, the MCPU 1012 and the SCPU 1022 are interrupted at predetermined time intervals, and each CPU executes sound source processing for generating musical tones for eight channels. Upon completion of this processing, musical tone waveforms for 16 channels are added, and are output from the left and right D/A converter units 1072 and 1082. Thereafter, the control returns from the interrupt state to the main flow.
  • the above-mentioned interrupt processing is periodically executed on the basis of the internal hardware timer in the interrupt controller 2032 (Fig. 35). This period is equal to a sampling period when a musical tone is output.
  • the main flow chart of Fig. 37 shows a processing flow executed by the MCPU 1012 in a state wherein no interrupt signal is supplied from the interrupt controller 2032.
  • the system, e.g., the contents of the RAM 2062 in the MCPU 1012, is initialized (S401).
  • the function keys externally connected to the MCPU 1012 are scanned (S402) to fetch respective switch states from the input port 2102 to a key buffer area in the RAM 2062.
  • Based on the scanning result in step S402, a function key whose state has changed is discriminated, and processing of the corresponding function is executed (S403). For example, a musical tone number or an envelope number is set, or if optional functions include a rhythm performance function, a rhythm number is set.
  • demonstration performance data (sequencer data) are sequentially read out from the external memory 1162 to execute, e.g., key assignment processing (S406).
  • rhythm data are sequentially read out from the external memory 1162 to execute, e.g., key assignment processing (S407).
  • timer processing is executed (S408). More specifically, time data which is incremented by interrupt timer processing (S412) (to be described later) is compared with time control sequencer data sequentially read out for demonstration performance control or time control rhythm data read out for rhythm performance control, thereby executing time control when a demonstration performance in step S406 or a rhythm performance in step S407 is performed.
  • In the tone generation processing in step S409, pitch envelope processing and the like are executed.
  • an envelope is added to a pitch of a musical tone to be generated, and pitch data is set in a corresponding tone generation channel.
  • one flow cycle preparation processing is executed (S410).
  • processing for changing a state of a tone generation channel assigned with a note number corresponding to an ON event detected in the keyboard key processing in step S405 to an "ON event" state, and processing for changing a state of a tone generation channel assigned with a note number corresponding to an OFF event to a "muting" state, and the like are executed.
  • the interrupt controller 2032 of the MCPU 1012 outputs the SCPU reset cancel signal A (Fig. 34) to the ROM address controller 3052 of the SCPU 1022, and the SCPU 1022 starts execution of the SCPU interrupt processing (Fig. 39).
  • Sound source processing (S415) is started in the SCPU interrupt processing almost simultaneously with the sound source processing (S411) in the MCPU interrupt processing.
  • the sound source processing for 16 tone generation channels can be executed in a processing time for eight tone generation channels, and a processing speed can be almost doubled (the interrupt processing will be described later with reference to Fig. 41).
  • the value of time data (not shown) on the RAM 2062 (Fig. 35) is incremented by utilizing the fact that the interrupt processing shown in Fig. 38 is executed for every predetermined sampling period. More specifically, a time elapsed from power-on can be detected based on the value of the time data.
  • the time data obtained in this manner is used in time control in the timer processing in step S408 in the main flow chart shown in Fig. 37.
  • the MCPU 1012 then waits for an SCPU interrupt processing end signal B from the SCPU 1022 after the interrupt timer processing in step S412 (S413).
  • the command analyzer 3072 of the SCPU 1022 supplies an SCPU processing end signal B (Fig. 34) to the ROM address controller 2052 of the MCPU 1012. In this manner, YES is determined in step S413 in the MCPU interrupt processing in Fig. 38.
  • waveform data generated by the SCPU 1022 are written in the RAM 2062 of the MCPU 1012 via the data bus D IN shown in Fig. 34 (S414).
  • the waveform data are stored in a predetermined buffer area (a buffer B to be described later) on the RAM 3062 of the SCPU 1022.
  • the command analyzer 2072 of the MCPU 1012 designates addresses of the buffer area to the RAM address controller 3042, thus reading the waveform data.
  • In step S414', the contents of the buffer area B are latched by the latches 6012 (Fig. 43) of the left and right D/A converter units 1072 and 1082.
  • The sound source processing in step S411 in the MCPU interrupt processing or in step S415 in the SCPU interrupt processing will be described below with reference to the flow chart of Fig. 40.
  • a waveform addition area on the RAM 2062 or 3062 is cleared (S416). Then, sound source processing is executed in units of tone generation channels (S417 to S424). After the sound source processing for the eighth channel is completed, waveform data obtained by accumulating the outputs of the eight channels is held in the buffer area B .
  • Fig. 41 is a schematic flow chart showing the relationship among the processing operations of the flow charts shown in Figs. 37, 38, and 39. As can be seen from Fig. 41, the MCPU 1012 and the SCPU 1022 share the sound source processing.
  • processing A (the same applies to B , C ,..., F ) is executed (S501).
  • This "processing" corresponds to, for example, “function key processing", or “keyboard key processing” in the main flow chart shown in Fig. 37.
  • the MCPU interrupt processing and the SCPU interrupt processing are executed, so that the MCPU 1012 and the SCPU 1022 simultaneously start sound source processing (S502 and S503).
  • the SCPU processing end signal B is input to the MCPU 1012.
  • if the sound source processing of the MCPU 1012 is ended earlier than the SCPU interrupt processing, the MCPU waits for the end of the SCPU interrupt processing. When the SCPU processing end signal B is discriminated in the MCPU interrupt processing, waveform data generated by the SCPU 1022 is supplied to the MCPU 1012, and is added to the waveform data generated by the MCPU 1012. The waveform data is then output to the left and right D/A converter units 1072 and 1082. Thereafter, the control returns to some processing B in the main flow chart.
  • the two CPUs i.e., the MCPU 1012 and the SCPU 1022 share the sound source processing in units of eight channels.
  • Data for the sound source processing for eight channels are set in areas corresponding to the respective tone generation channels in the RAMs 2062 and 3062 of the MCPU 1012 and the SCPU 1022, as shown in Fig. 47.
  • Buffers BF, BT, B, and M are allocated on the RAM, as shown in Fig. 50.
  • In each tone generation channel area shown in Fig. 47, an arbitrary sound source method can be set by an operation (to be described in detail later), as schematically shown in Fig. 48.
  • When the sound source method is set, data are set in each tone generation channel area in Fig. 47 in a data format of the corresponding sound source method, as shown in Fig. 49.
  • different sound source methods can be assigned to the tone generation channels.
  • G indicates a sound source method number for identifying the sound source methods.
  • A represents an address designated when waveform data is read out in the sound source processing, and
  • A I , A 1 , and A 2 represent integral parts of current addresses, and directly correspond to addresses of the external memory 1162 (Fig. 34) where waveform data are stored.
  • A F represents a decimal part of the current address, and is used for interpolating waveform data read out from the external memory 1162.
  • A E and A L respectively represent end and loop addresses.
  • P I , P 1 and P 2 represent integral parts of pitch data
  • P F represents a decimal part of pitch data.
  • X P represents previous sample data
  • X N represents the next sample data
  • D represents a difference between two adjacent sample data
  • E represents an envelope value
  • O represents an output value
  • C represents a flag which is used when a sound source method to be assigned to a tone generation channel is changed in accordance with performance data, as will be described later.
  • the sound source processing operations of the respective sound source methods executed using the above-mentioned data architecture will be described below in turn. These sound source processing operations are realized by analyzing and executing a sound source processing program stored in the control ROM 2012 or 3012 by the command analyzer 2072 or 3072 of the MCPU 1012 or the SCPU 1022. Assume that the processing is executed under this condition unless otherwise specified.
  • the sound source method No. data G of the data in the data format (Table 1) shown in Fig. 49 stored in the corresponding tone generation channel of the RAM 2062 or 3062 is discriminated to determine sound source processing of a sound source method to be described below.
  • Pitch data (P I , P F ) is added to the current address (S1001).
  • the pitch data corresponds to the type of an ON key of the keyboard keys 8012 shown in Figs. 45 and 46.
  • If the integral part A I of the address is not changed in step S1002, an interpolation data value O corresponding to the decimal part A F of the address (Fig. 15) is calculated by arithmetic processing D × A F using a difference D as a difference between sample data X N and X P at addresses (A I +1) and A I (S1007). Note that the difference D has already been obtained by the sound source processing at the previous interrupt timing (see step S1006 to be described later).
  • the sample data X P corresponding to the integral part A I of the address is added to the interpolation data value O to obtain a new sample data value O (corresponding to X Q in Fig. 15) corresponding to the current address (A I , A F ) (S1008).
  • the sample data is multiplied with the envelope value E (S1009), and the content of the obtained data O is added to a value held in the waveform data buffer B (Fig. 50) in the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022 (S1010).
  • the sample data X P and the difference D are left unchanged, and only the interpolation data O is updated in accordance with the address A F .
  • new sample data X Q is obtained.
  • If the integral part A I of the current address is changed (S1002) as a result of addition of the current address (A I , A F ) and the pitch data (P I , P F ) in step S1001, it is checked if the address A I has reached or exceeded the end address A E (S1003).
  • If YES in step S1003, the next loop processing is executed. More specifically, a value (A I - A E ) as a difference between the updated current address A I and the end address A E is added to the loop address A L to obtain a new current address (A I , A F ). A loop reproduction is started from the obtained new current address A I (S1004).
  • the end address A E is an end address of an area of the external memory 1162 (Fig. 34) where PCM waveform data are stored.
  • the loop address A L is an address of a position where a player wants to repeat an output of a waveform, and known loop processing is realized by the PCM method.
  • If NO in step S1003, the processing in step S1004 is not executed.
  • Sample data is then updated.
  • sample data corresponding to the new updated current address A I and the immediately preceding address (A I -1) are read out as X N and X P from the external memory 1162 (Fig. 34) (S1005).
  • the difference so far is updated with a difference D between the updated data X N and X P (S1006).
  • In the DPCM method, sample data X P corresponding to an address A I of the external memory 1162 is obtained by adding sample data corresponding to an address (A I -1) (not shown) to a difference between the sample data corresponding to the address (A I -1) and the sample data corresponding to the address A I .
  • a difference D with the next sample data is written at the address A I of the external memory 1162 (Fig. 34).
  • Sample data at the next address (A I +1) is obtained by X P + D.
  • sample data corresponding to the current address (A I , A F ) is obtained by X P + D × A F .
  • a difference D between sample data corresponding to the current address and the next address is read out from the external memory 1162 (Fig. 34), and is added to the current sample data to obtain the next sample data, thereby sequentially forming waveform data.
  • The DPCM processing described below uses the DPCM format data in Table 1 shown in Fig. 49, which data are stored in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
  • Pitch data (P I , P F ) is added to the current address (A I , A F ) (S1101).
  • If the integral part A I of the address is not changed in step S1102, an interpolation data value O corresponding to the decimal part A F of the address is calculated by arithmetic processing D × A F using a difference D at the address A I in Fig. 16 (S1114). Note that the difference D has already been obtained by the sound source processing at the previous interrupt timing (see steps S1106 and S1110 to be described later).
  • the interpolation data value O is added to sample data X P corresponding to the integral part A I of the address to obtain a new sample data value O (corresponding to X Q in Fig. 16) corresponding to the current address (A I , A F ) (S1115).
  • the sample data value O is multiplied with an envelope value E (S1116), and the obtained value is added to a value stored in the waveform data buffer B (Fig. 50) in the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022 (S1117).
  • the sample data X P and the difference D are left unchanged, and only the interpolation data O is updated in accordance with the address A F .
  • new sample data X Q is obtained.
  • If the integral part A I of the present address is changed (S1102) as a result of addition of the current address (A I , A F ) and the pitch data (P I , P F ) in step S1101, it is checked if the address A I has reached or exceeded the end address A E (S1103).
  • If NO in step S1103, sample data corresponding to the integral part A I of the updated current address is calculated by the loop processing in steps S1104 to S1107. More specifically, a value before the integral part A I of the current address is changed is stored in a variable "old A I " (see the column of DPCM in Table 1 shown in Fig. 49). This can be realized by repeating processing in step S1106 or S1113 (to be described later).
  • the old A I value is sequentially incremented in S1106, and differential waveform data in the external memory 1162 (Fig. 34) addressed by the old A I values are read out as D in step S1107.
  • the readout data D are sequentially accumulated on sample data X P in step S1105.
  • the sample data X P has a value corresponding to the integral part A I of the changed current address.
  • step S1104 When the sample data X P corresponding to the integral part A I of the current address is obtained in this manner, YES is determined in step S1104, and the control starts the arithmetic processing of the interpolation value (S1114) described above.
  • step S1103 the control enters the next loop processing.
  • An address value (A I -A E ) exceeding the end address A E is added to the loop address A L , and the obtained address is defined as an integral part A I of a new current address (S1108).
  • sample data X P is initially set as the value of sample data X PL (see the column of DPCM in Table 1 shown in Fig. 49) at the preset loop address A L and the old A I is set as the value of the loop address A L (S1110).
  • the following processing operations in steps S1110 to S1113 are repeated. More specifically, the old A I value is sequentially incremented in step S1113, and differential waveform data on the external memory 1162 (Fig. 34) designated by the incremented old A I values are read out as data D.
  • the data D are accumulated on the sample data X P in step S1112.
  • this is repeated until the old A I value becomes equal to the integral part A I of the new current address.
  • the sample data X P has a value corresponding to the integral part A I of the new current address after loop processing.
  • step S1111 When the sample data Xp corresponding to the integral part A I of the new current address is obtained in this manner, YES is determined in step S1111, and the control enters the above-mentioned arithmetic processing of the interpolation value (S1114).
  • waveform data by the DPCM method for one tone generation channel is generated.
  • the sound source processing based on the FM method will be described below.
  • In the FM method, hardware or software elements having the same contents, called "operators", as indicated by OP1 to OP4 in Figs. 51 to 54, are normally used, and are connected based on connection rules indicated by algorithms 1 to 4 in Figs. 51 to 54, thereby generating musical tones.
  • the FM method is realized by a software program.
  • processing of an operator 2 (OP2) as a modulator is performed.
  • pitch processing: processing for accumulating pitch data for determining an incremental width of an address for reading out waveform data stored in the waveform memory 1162
  • an address consists of an integral address A 2 , and has no decimal address.
  • modulation waveform data are stored in the external memory 1162 (Fig. 34) at sufficiently fine incremental widths.
  • Pitch data P 2 is added to the present address A 2 (S1301).
  • a feedback output F O2 is added to the address A 2 as a modulation input to obtain a new address A M2 which corresponds to phase of a sine wave (S1302).
  • the feedback output F O2 has already been obtained upon execution of processing in step S1305 (to be described later) at the immediately preceding interrupt timing.
  • sine wave data are stored in the external memory 1162 (Fig. 34), and are obtained by addressing the external memory 1162 by the address A M2 to read out the corresponding data (S1303).
  • the sine wave data is multiplied with an envelope value E 2 to obtain an output O 2 (S1304).
  • the output O 2 is multiplied with a modulation level M L2 to obtain a modulation output M O2 (S1306).
  • the modulation output M O2 serves as a modulation input to an operator 1 (OP1).
  • the control then enters processing of the operator 1 (OP1).
  • This processing is substantially the same as that of the operator 2 (OP2) described above, except that there is no modulation input based on the feedback output.
  • the current address A 1 of the operator 1 is added to pitch data P 1 (S1307), and the sum is added to the above-mentioned modulation output M O2 to obtain a new address A M1 (S1308).
  • the value of sine wave data corresponding to this address A M1 (phase) is read out from the external memory 1162 (Fig. 34) (S1309), and is multiplied with an envelope value E 1 to obtain a musical tone waveform output O 1 (S1310).
  • the output O 1 is added to a value held in the buffer B (Fig. 50) in the RAM 2062 (Fig. 35) or the RAM 3062 (Fig. 36) (S1311), thus completing the FM processing for one tone generation channel.
  • The TM processing described below uses the TM format data in Table 1 shown in Fig. 49, which data are stored in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
  • an address for addressing the external memory 1162 consists of only an integral address A 2 .
  • the current address A 2 is added to pitch data P 2 (S1401).
  • a modified sine wave corresponding to the address A 2 (phase) is read out from the external memory 1162 (Fig. 34) by the modified sine conversion f c , and is output as a carrier signal O 2 (S1402).
  • a feedback output F O2 (see S1406) as a modulation signal is added to the carrier signal O 2 , and the sum signal is output as a new address O 2 (S1403).
  • the feedback output F O2 has already been obtained upon execution of processing in step S1406 (to be described later) at the immediately preceding interrupt timing.
  • triangular wave data are stored in the external memory 1162 (Fig. 34), and are obtained by addressing the external memory 1162 by the address O 2 to read out the corresponding data (S1404).
  • the triangular wave data is multiplied with an envelope value E 2 to obtain an output O 2 (S1405).
  • the output O 2 is multiplied with a feedback level F L2 to obtain a feedback output F O2 (S 1406 ).
  • the output F O2 serves as an input to the operator 2 (OP2) at the next interrupt timing.
  • the output O 2 is multiplied with a modulation level M L2 to obtain a modulation output M O2 (S1407).
  • the modulation output M O2 serves as a modulation input to an operator 1 (OP1).
  • the control then enters processing of the operator 1 (OP1).
  • This processing is substantially the same as that of the operator 2 (OP2) described above, except that there is no modulation input based on the feedback output.
  • the current address A 1 of the operator 1 is added to pitch data P 1 (S1408), and the sum is subjected to the above-mentioned modified sine conversion to obtain a carrier signal O 1 (S1409).
  • the carrier signal O 1 is added to the modulation output M O2 to obtain a new value O 1 (S1410), and the value O 1 is subjected to triangular wave conversion (S1411). The converted value is multiplied with an envelope value E 1 to obtain a musical tone waveform output O 1 (S1412).
  • the output O 1 is added to a value held in the buffer B (Fig. 50) in the RAM 2062 (Fig. 35) or the RAM 3062 (Fig. 36), thus completing the TM processing for one tone generation channel.
  • the sound source processing operations based on four methods i.e., the PCM, DPCM, FM, and TM methods have been described.
  • the FM and TM methods are modulation methods, and, in the above examples, two-operator processing operations are executed based on the algorithms shown in Figs. 18 and 20.
  • Figs. 51 to 54 show examples. In an algorithm 1 shown in Fig. 51, four modulation operations including a feedback input are performed, and a complicated waveform can be obtained.
  • Fig. 55 is an operation flow chart of normal sound source processing based on the FM method corresponding to the algorithm 1 shown in Fig. 51. Variables in the flow chart are stored in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022. Although the variables used in Fig. 55 are not the same as data in the FM format of Table 1 in Fig. 49, they are obtained by expanding the concept of the data format shown in Fig. 49, and only have different suffixes.
  • the present address A 4 of an operator 4 is added to pitch data P 4 (S1901).
  • the address A 4 is added to a feedback output F O4 (S1905) as a modulation input to obtain a new address A M4 (S1902).
  • the value of a sine wave corresponding to the address A M4 (phase) is read out from the external memory 1162 (Fig. 34) (S1903), and is multiplied with an envelope value E 4 to obtain an output O 4 (S1904).
  • the output O 4 is multiplied with a feedback level F L4 to obtain a feedback output F O4 (S1905).
  • the output O 4 is multiplied with a modulation level M L4 to obtain a modulation output M O4 (S1906).
  • the modulation output M O4 serves as a modulation input to the next operator 3 (OP3).
  • the control then enters processing of the operator 3 (OP3).
  • This processing is substantially the same as that of the operator 4 (OP4) described above, except that there is no modulation input based on the feedback output.
  • the current address A 3 of the operator 3 (OP3) is added to pitch data P 3 to obtain a new current address A 3 (S1907).
  • the address A 3 is added to a modulation output M O4 as a modulation input, thus obtaining a new address A M3 (S1908).
  • the value of a sine wave corresponding to the address A M3 (phase) is read out from the external memory 1162 (Fig. 34) (S1909), and is multiplied with an envelope value E 3 to obtain an output O 3 (S1910).
  • the output O 3 is multiplied with a modulation level M L3 to obtain a modulation output M O3 (S1911).
  • the modulation output M O3 serves as a modulation input to the next operator 2 (OP2).
  • Processing of the operator 2 is then executed. However, this processing is substantially the same as that of the operator 3, except that a modulation input is different, and a detailed description thereof will be omitted.
  • control enters processing of an operator 1 (OP1).
  • A musical tone waveform output O 1 obtained in step S1920 is added to data stored in the buffer B as a carrier (S1921).
  • Fig. 56 is an operation flow chart of normal sound source processing based on the TM method corresponding to the algorithm 1 shown in Fig. 51. Variables in the flow chart are stored in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022. Although the variables used in Fig. 56 are not the same as data in the TM format of Table 1 in Fig. 49, they are obtained by expanding the concept of the data format shown in Fig. 49, and only have different suffixes.
  • the current address A 4 of the operator 4 is added to pitch data P 4 (S2001).
  • a modified sine wave corresponding to the above-mentioned address A 4 (phase) is read out from the external memory 1162 (Fig. 34) by the modified sine conversion f c , and is output as a carrier signal O 4 (S2002).
  • a feedback output F O4 (see S2007) as a modulation signal is added to the carrier signal O 4 , and the sum signal is output as a new address O 4 (S2003).
  • the value of a triangular wave corresponding to the address O 4 (phase) is read out from the external memory 1162 (Fig. 34), and is multiplied with an envelope value E 4 to obtain an output O 4 . The output O 4 is multiplied with a feedback level F L4 to obtain a feedback output F O4 (S2007), and is multiplied with a modulation level M L4 to obtain a modulation output M O4 .
  • the control then enters processing of the operator 3 (OP3).
  • This processing is substantially the same as that of the operator 4 (OP4) described above, except that there is no modulation input based on the feedback output.
  • the current address A 3 of the operator 3 (OP3) is added to pitch data P 3 (S2008), and the sum is subjected to the modified sine conversion to obtain a carrier signal O 3 (S2009).
  • the carrier signal O 3 is added to the above-mentioned modulation output M O4 to obtain a new value O 3 (S2010), and the value O 3 is subjected to triangular wave conversion (S2011).
  • the converted value is multiplied with an envelope value E 3 to obtain an output O 3 (S2012).
  • the output O 3 is multiplied with a modulation level M L3 to obtain a modulation output M O3 (S2013).
  • the modulation output M O3 serves as a modulation input to the next operator 2 (OP2).
  • Processing of the operator 2 is then executed. However, this processing is substantially the same as that of the operator 3, except that a modulation input is different, and a detailed description thereof will be omitted.
  • A musical tone waveform output O 1 obtained in step S2024 is accumulated in the buffer B (Fig. 50) as a carrier (S2025).
  • the MCPU 1012 and the SCPU 1022 each execute processing for eight channels (Fig. 40). If a modulation method is designated in a given tone generation channel, the above-mentioned sound source processing based on the modulation method is executed.
  • the first modification of the sound source processing based on the modulation method will be described below.
  • Each operator processing cannot be executed unless a modulation input is determined. This is because the modulation input to each operator processing varies depending on the algorithm, as shown in Figs. 51 to 54. It must be determined which operator processing output is used as a modulation input, or whether an output of its own operator processing is fed back and used as its own modulation input in place of the output of another operator processing. In the operation flow chart shown in Fig. 57, such determinations are performed together in the algorithm processing (S2105), and the connection relationship obtained by this processing determines the modulation inputs to the respective operator processing operations (S2102 to S2104). Note that a given initial value is set as an input to each operator processing at the beginning of tone generation.
  • the program of the operator processing can remain the same, and only the algorithm processing can be modified in correspondence with algorithms. Therefore, the program size of the overall sound source processing based on the modulation method can be greatly reduced.
  • the operator 1 processing in the operation flow chart showing operator processing based on the FM method in Fig. 57 is shown in Fig. 58, and an arithmetic algorithm per operator is shown in Fig. 59.
  • the remaining operator 2 to 4 processing operations are the same except for different suffix numbers of variables.
  • Variables in the flow chart are stored in the corresponding tone generation channel (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
  • An address A 1 corresponding to a phase angle is added to pitch data P 1 to obtain a new address A 1 (S2201).
  • the address A 1 is added to a modulation input M I1 , thus obtaining an address A M1 (S2202).
  • the modulation input M I1 is determined by the algorithm processing in step S2105 (Fig. 57) at the immediately preceding interrupt timing, and may be a feedback output F O1 of its own operator, or an output M O2 from another operator, e.g., the operator 2, depending on the algorithm.
  • the value of a sine wave corresponding to this address (phase) A M1 is read out from the external memory 1162 (Fig. 34), thus obtaining an output O 1 (S2203).
  • a value obtained by multiplying the output O 1 with envelope data E 1 serves as an output O 1 of the operator 1 (S2204).
  • the output O 1 is multiplied with a feedback level F L1 to obtain a feedback output F O1 (S2205).
  • the output O 1 is multiplied with a modulation level M L1 , thus obtaining a modulation output M O1 (S2206); a sketch of this operator step is given below.
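The single FM operator step (S2201 to S2206) can be summarized by the following hedged sketch; the sine table read from the external memory 1162 is replaced by the C library sin() function, and the structure member names are assumptions mirroring the variables A 1, P 1, E 1, F L1, M L1, F O1, and M O1. The remaining operators 2 to 4 would use the same routine with different parameter sets, exactly as stated above.

```c
/* A minimal sketch, under the same assumptions as above, of one FM operator
 * step following S2201-S2206.  The sine table read from the external memory
 * 1162 is replaced by sin(); names and scaling are illustrative. */
#include <math.h>
#include <stdio.h>

typedef struct {
    double addr, pitch;            /* phase address A_1, pitch data P_1            */
    double env;                    /* envelope data E_1                            */
    double fb_level, mod_level;    /* feedback level F_L1, modulation level M_L1   */
    double fb_out, mod_out;        /* feedback output F_O1, modulation output M_O1 */
} FmOperator;

/* mod_in is the modulation input M_I1 chosen by the algorithm processing:
 * either this operator's own feedback output F_O1 or another operator's
 * modulation output, depending on the selected algorithm. */
static double fm_operator(FmOperator *op, double mod_in)
{
    op->addr += op->pitch;                            /* S2201: A_1 = A_1 + P_1 */
    double a_m1 = op->addr + mod_in;                  /* S2202: address A_M1    */
    double out  = sin(6.283185307179586 * a_m1);      /* S2203: sine table read */
    out *= op->env;                                   /* S2204: apply envelope  */
    op->fb_out  = out * op->fb_level;                 /* S2205: F_O1            */
    op->mod_out = out * op->mod_level;                /* S2206: M_O1            */
    return out;                                       /* operator output O_1    */
}

int main(void)
{
    FmOperator op1 = { 0.0, 0.01, 1.0, 0.2, 0.5, 0.0, 0.0 };
    for (int n = 0; n < 4; n++)
        printf("O1 = %f\n", fm_operator(&op1, op1.fb_out));
    return 0;
}
```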
  • the operator 1 processing based on the TM method will be described next.
  • the remaining operator 2 to 4 processing operations are the same except for different suffix numbers of variables.
  • Variables in the flow chart are stored in the corresponding tone generation channel (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
  • the current address A 1 is added to pitch data P 1 (S2301).
  • a modified sine wave corresponding to the above-mentioned address A 1 (phase) is read out from the external memory 1162 (Fig. 34) by the modified sine conversion f c , and is generated as a carrier signal O 1 (S2302).
  • the output O 1 is added to a modulation input M I1 as a modulation signal, and the sum is defined as a new address O 1 (S2303).
  • the value of a triangular wave corresponding to the address O 1 (phase) is read out from the external memory 1162 (S2304), and is multiplied with an envelope value E 1 to obtain an output O 1 (S2305).
  • the output O 1 is multiplied with a feedback level F L1 to obtain a feedback output F O1 (S2306).
  • the output O 1 is multiplied with a modulation level M L1 to obtain a modulation output M O1 (S2307).
  • the algorithm processing in step S2105 in Fig. 57 for determining a modulation input in the operator processing in both the above-mentioned modulation methods, i.e., the FM and TM methods, will be described in detail below with reference to the operation flow chart of Fig. 62.
  • the flow chart shown in Fig. 62 is common to both the FM and TM methods, and the algorithms 1 to 4 shown in Figs. 51 to 54 are selectively processed. In this case, choices of the algorithms 1 to 4 are made based on an instruction (not shown) from a player (S2400).
  • the algorithm 1 is of a series four-operator (to be abbreviated as an "OP" hereinafter) type, and only the OP4 has a feedback input. More specifically, in the algorithm 1, the operators are connected as shown in Fig. 51.
  • in the algorithm 2, the OP2 and the OP4 have feedback inputs; the operators are connected as shown in Fig. 52.
  • in the algorithm 3, the OP2 and the OP4 have feedback inputs, and two modules in which two operators are connected in series with each other are connected in parallel with each other; the operators are connected as shown in Fig. 53.
  • the algorithm 4 is of a parallel four-OP type, and all the OPs have feedback inputs. More specifically, in the algorithm 4, the operators are connected as shown in Fig. 54 (a sketch of this routing idea follows this list).
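One way to picture the role of the algorithm processing is a small per-algorithm routing table consulted once per interrupt; the operator routine never changes, and only the table entry decides whose output becomes each modulation input. The sketch below is only an approximation of the topologies of Figs. 51 to 54, and the table contents for the algorithms 2 and 3 in particular are assumptions.

```c
/* A minimal, illustrative sketch of the algorithm processing idea (cf. S2105
 * and S2400): a per-algorithm table decides where each operator takes its
 * modulation input from, so the operator routine itself never has to change.
 * The tables below only approximate the topologies of Figs. 51 to 54. */
#include <stdio.h>

typedef struct {
    /* mod_source[i]: modulation input of OP(i+1); -1 means the operator's own
     * feedback output, otherwise the modulation output of the named operator. */
    int mod_source[4];
} Algorithm;

static const Algorithm algorithms[4] = {
    { {  2,  3,  4, -1 } },  /* algorithm 1: series OP1<-OP2<-OP3<-OP4, feedback on OP4 */
    { {  2, -1,  4, -1 } },  /* algorithm 2: feedback on OP2 and OP4 (illustrative)     */
    { {  2, -1,  4, -1 } },  /* algorithm 3: two series pairs in parallel (illustrative)*/
    { { -1, -1, -1, -1 } }   /* algorithm 4: parallel 4-OP, feedback on every operator  */
};

int main(void)
{
    int selected = 0;                        /* chosen by the player, cf. S2400 */
    for (int op = 0; op < 4; op++) {
        int src = algorithms[selected].mod_source[op];
        if (src < 0)
            printf("OP%d <- its own feedback output\n", op + 1);
        else
            printf("OP%d <- modulation output of OP%d\n", op + 1, src);
    }
    return 0;
}
```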
  • the sound source processing for one channel is completed by the above-mentioned operator processing and algorithm processing, and tone generation (sound source processing) continues in this state unless the algorithm is changed.
  • the processing time increases as more complicated algorithms are programmed and as the number of tone generation channels (the number of polyphonic channels) is increased.
  • the first modification shown in Fig. 57 is further developed, so that only operator processing is performed at a given interrupt timing, and only algorithm processing is performed at the next interrupt timing.
  • the operator processing and the algorithm processing are alternately executed. In this manner, the processing load per interrupt timing can be greatly reduced. As a result, one sample of data is output per two interrupts.
  • It is checked if the variable S is zero (S2501).
  • the variable is provided for each tone generation channel, and is stored in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
  • the process exits from the operator processing route, and executes output processing for outputting the value of the buffer BF (for the FM method) or the buffer BT (for the TM method) (S2510).
  • the buffer BF or BT is provided for each tone generation channel, and is stored in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
  • the buffer BF or BT stores a waveform output value after the algorithm processing. At the current interrupt timing, however, no algorithm processing has been executed, and the content of the buffer BF or BT is not updated. For this reason, the same waveform output value as that at the immediately preceding interrupt timing is output.
  • The process then enters an algorithm processing route, and sets the variable S to a value "0" (S2507). Subsequently, the algorithm processing is executed (S2508).
  • the content of the output O 1 of the operator 1 processing is directly stored in the buffer BF or BT (S2601 and S2602).
  • a value as a sum of the outputs O 1 and O 3 is stored in the buffer BF or BT (S2603).
  • a value as a sum of the outputs O 1 , O 2 , O 3 , and O 4 is stored in the buffer BF or BT (S2604); the alternating operator/algorithm scheme is sketched below.
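A compact way to express the alternating scheme is a per-channel state variable that selects, at each interrupt, either the operator processing or the algorithm processing, while the output processing always emits the value last written to the buffer BF or BT. The sketch below follows this idea with purely illustrative data; the operator and algorithm bodies are placeholders, not the embodiment's routines.

```c
/* A minimal sketch of the alternating scheme: one timer interrupt runs only
 * the operator processing, the next runs only the algorithm processing,
 * selected by the per-channel variable S; the buffer BF (or BT) always holds
 * the last completed waveform value, so one new sample appears per two
 * interrupts.  All data structures and values here are assumptions. */
#include <stdio.h>

typedef struct {
    int    s;          /* 0: run operator processing, 1: run algorithm processing */
    double bf;         /* buffer BF (or BT for the TM method)                      */
    double op_out[4];  /* latest operator outputs O_1..O_4                         */
} Channel;

static void operator_processing(Channel *c)
{
    for (int i = 0; i < 4; i++)        /* placeholder for the four operator routines */
        c->op_out[i] += 0.25;
}

static void algorithm_processing(Channel *c)
{
    c->bf = c->op_out[0];              /* e.g. only the carrier O_1 reaches the buffer */
}

static double interrupt_tick(Channel *c)
{
    if (c->s == 0) {                   /* operator processing route (cf. S2501)  */
        operator_processing(c);
        c->s = 1;
    } else {                           /* algorithm processing route (cf. S2508) */
        algorithm_processing(c);
        c->s = 0;
    }
    return c->bf;                      /* output processing (cf. S2510)          */
}

int main(void)
{
    Channel ch = { 0, 0.0, { 0.0, 0.0, 0.0, 0.0 } };
    for (int t = 0; t < 6; t++)        /* the same sample value appears twice    */
        printf("interrupt %d: sample %f\n", t, interrupt_tick(&ch));
    return 0;
}
```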
  • a processing load per interrupt timing of the sound source processing program can be remarkably decreased.
  • the processing load can be reduced without increasing an interrupt time of the main operation flow chart shown in Fig. 37, i.e., without influencing the program operation. Therefore, a keyboard key sampling interval executed in Fig. 37 will not be prolonged, and the response performance of an electronic musical instrument will not be impaired.
  • parameters corresponding to sound source methods are set in the formats shown in Fig. 49 in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 (Figs. 35 and 36) by one of the function keys 8012 (Fig. 45) connected to the operation panel of the electronic musical instrument via the input port 2102 (Fig. 35) of the MCPU 1012.
  • Fig. 65 shows an arrangement of some function keys 8012 shown in Fig. 45.
  • some function keys 8012 are realized as tone color switches. When one of the switches "piano", "guitar", ..., "koto" in a group A is depressed, a tone color of the corresponding instrument tone is selected, and a guide lamp is turned on. Whether the tone color of the selected instrument tone is generated in the DPCM method or the TM method is selected by a DPCM/TM switch 27012.
  • a tone color based on the FM method is designated; when a switch "bass" is depressed, a tone color based on both the PCM and TM methods is designated; and when a switch "trumpet" is depressed, a tone color based on the PCM method is designated. Then, a musical tone based on the designated sound source method is generated.
  • Figs. 66 and 67 show the assignment of sound source methods to the respective tone generation channel areas (Fig. 47) on the RAM 2062 or 3062 when the switches "piano" and "bass" are depressed.
  • when the switch "piano" is depressed, the DPCM method is assigned to all the 8-tone polyphonic tone generation channels of the MCPU 1012 and the SCPU 1022, as shown in Fig. 66.
  • when the switch "bass" is depressed, the PCM method is assigned to the odd-numbered tone generation channels, and the TM method is assigned to the even-numbered tone generation channels, as shown in Fig. 67.
  • a musical tone waveform for one musical tone can be obtained by mixing tone waveforms generated in the two tone generation channels based on the PCM and TM methods.
  • in this case, a 4-tone polyphonic system is attained per CPU, and an 8-tone polyphonic system is attained by the two CPUs in total (a sketch of the channel assignment follows).
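The channel assignments of Figs. 66 and 67 can be pictured as filling an eight-entry method table per CPU, as in the following sketch; the enum values and helper function are assumptions, and only the odd/even split for the "bass" tone color follows the text.

```c
/* A minimal sketch of how sound source methods could be assigned to the eight
 * tone generation channels of one CPU when a tone color switch is pressed
 * (cf. Figs. 66 and 67).  The enum values and the helper are assumptions. */
#include <stdio.h>

enum Method { PCM, DPCM, FM, TM };

static const char *method_name(enum Method m)
{
    static const char *names[] = { "PCM", "DPCM", "FM", "TM" };
    return names[m];
}

int main(void)
{
    enum Method channel[8];

    /* "piano": DPCM in every tone generation channel (Fig. 66) */
    for (int ch = 0; ch < 8; ch++)
        channel[ch] = DPCM;

    /* "bass": PCM in the odd-numbered channels (1, 3, 5, 7) and TM in the
     * even-numbered channels (2, 4, 6, 8), as in Fig. 67; one note then uses
     * a PCM channel and a TM channel whose waveforms are mixed. */
    for (int ch = 0; ch < 8; ch++)
        channel[ch] = ((ch + 1) % 2 == 1) ? PCM : TM;

    for (int ch = 0; ch < 8; ch++)
        printf("channel %d: %s\n", ch + 1, method_name(channel[ch]));
    return 0;
}
```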
  • Fig. 68 is a partial operation flow chart of the function key processing in step S403 in the main operation flow chart shown in Fig. 37, and shows processing corresponding to the tone color designation switch group shown in Fig. 65.
  • It is checked if a player operates the DPCM/TM switch 27012 (S2901). If YES in step S2901, it is checked if a variable M is zero (S2902). The variable M is stored on the RAM 2062 (Fig. 35) of the MCPU 1012, and has a value "0" for the DPCM method and a value "1" for the TM method. If YES in step S2902, i.e., if it is determined that the value of the variable M is "0", the variable M is set to a value "1" (S2903). This means that the DPCM/TM switch 27012 is depressed in the DPCM method selection state, and the selection state is changed to the TM method selection state.
  • If NO in step S2902, i.e., if it is determined that the value of the variable M is "1", the variable M is set to a value "0" (S2904). This means that the DPCM/TM switch 27012 is depressed in the TM method selection state, and the selection state is changed to the DPCM method selection state.
  • It is then checked if a tone color in the group A shown in Fig. 65 is currently designated (S2905). Since the DPCM/TM switch 27012 is valid only for tone colors of the group A, the operations corresponding to the DPCM/TM switch 27012 in steps S2906 to S2908 are executed only when a tone color in the group A is designated and YES is determined in step S2905.
  • If it is determined in step S2906 that the DPCM method is selected by the DPCM/TM switch 27012, DPCM data are set in the DPCM format shown in Fig. 49 in the corresponding tone generation channel areas on the RAMs 2062 and 3062 (Figs. 35 and 36). More specifically, sound source method No. data G indicating the DPCM method is set in the start area of the corresponding tone generation channel area (see the column of DPCM in Fig. 49). Subsequently, various parameters corresponding to the currently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S2907).
  • If it is determined in step S2906 that the TM method is selected by the DPCM/TM switch 27012, TM data are set in the TM format shown in Fig. 49 in the corresponding tone generation channel areas. More specifically, sound source method No. data G indicating the TM method is set in the start area of the corresponding tone generation channel area. Subsequently, various parameters corresponding to the currently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S2908).
  • A case has been exemplified wherein the DPCM/TM switch 27012 shown in Fig. 65 is operated. If the switch 27012 is not operated and NO is determined in step S2901, or if a tone color of the group A is not designated and NO is determined in step S2905, the processing from step S2909 is executed.
  • It is checked in step S2909 if a change in the tone color switches shown in Fig. 65 is detected.
  • If NO in step S2909, since processing for the tone color switches need not be executed, the function key processing (S403 in Fig. 37) is ended.
  • If it is determined that a change in a tone color switch is detected, and YES is determined in step S2909, it is checked if a tone color in the group B is designated (S2910).
  • If YES in step S2910, data for the sound source method corresponding to the designated tone color are set in the predetermined format in the corresponding tone generation channel areas on the RAMs 2062 and 3062 (Figs. 35 and 36). More specifically, sound source method No. data G indicating the sound source method is set in the start area of the corresponding tone generation channel area (Fig. 49). Subsequently, various parameters corresponding to the currently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S2911). For example, when the switch "bass" in Fig. 65 is selected, data corresponding to the PCM method are set in the odd-numbered tone generation channel areas, and data corresponding to the TM method are set in the even-numbered tone generation channel areas.
  • If it is determined that a tone color switch in the group A is designated, and NO is determined in step S2910, it is checked if the variable M is "1" (S2912). If the TM method is currently selected, and YES is determined in step S2912, data are set in the TM format (Fig. 49) in the corresponding tone generation channel area (S2913), as in step S2908 described above.
  • If NO in step S2912, data are set in the DPCM format (Fig. 49) in the corresponding tone generation channel area (S2914), as in step S2907 described above (the switch handling is sketched below).
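The DPCM/TM switch handling of Fig. 68 amounts to toggling the variable M and, when a group A tone color is designated, rewriting the method number in every tone generation channel area. The sketch below captures that flow with illustrative data structures standing in for the RAM layout of Fig. 49; everything not named in the text is an assumption.

```c
/* A minimal sketch of the DPCM/TM switch handling in the function key
 * processing of Fig. 68: the variable M is toggled between 0 (DPCM) and 1
 * (TM), and, if a group-A tone color is designated, the chosen format is
 * written into the tone generation channel areas.  The structures are
 * illustrative stand-ins for the RAM layout of Fig. 49. */
#include <stdio.h>

enum Method { DPCM = 0, TM = 1 };

struct ChannelArea { enum Method method_no; /* parameters would follow */ };

static int M = 0;                          /* 0 = DPCM selected, 1 = TM selected */
static struct ChannelArea channels[8];

static void set_format(enum Method m)      /* cf. S2907 / S2908 */
{
    for (int ch = 0; ch < 8; ch++)
        channels[ch].method_no = m;
}

static void on_dpcm_tm_switch(int group_a_tone_designated)
{
    M = !M;                                /* cf. S2902-S2904: toggle the state */
    if (!group_a_tone_designated)          /* the switch is only valid for the  */
        return;                            /* group A tone colors (cf. S2905)   */
    set_format(M ? TM : DPCM);             /* cf. S2906-S2908                   */
}

int main(void)
{
    on_dpcm_tm_switch(1);
    printf("M = %d, channel 1 method = %d\n", M, channels[0].method_no);
    return 0;
}
```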
  • the sound source method to be set in the corresponding tone generation channel area of the RAM 2062 or 3062 (Figs. 35 and 36) is automatically switched in accordance with an ON key position, i.e., a tone range of a musical tone.
  • This embodiment has a boundary between key code numbers 31 and 32 on the keyboard shown in Fig. 46. That is, when a key code of an ON key falls within a bass tone range equal to or lower than the 31st key code, the DPCM method is assigned to the corresponding tone generation channel.
  • Fig. 69 is a partial operation flow chart of the keyboard key processing in step S405 in the main operation flow chart of Fig. 37.
  • If NO in step S3001, and a tone color in the group B is currently designated, the special processing in Fig. 69 is not performed.
  • If YES in step S3001, and a tone color in the group A is currently designated, it is checked if a key code of a key which is detected as an "ON key" in the keyboard key scanning processing in step S404 in the main operation flow chart shown in Fig. 37 is equal to or lower than the 31st key code (S3002).
  • If a key in the bass tone range equal to or lower than the 31st key code is depressed, and YES is determined in step S3002, it is checked if the variable M is "1" (S3003).
  • the variable M is set in the operation flow chart shown in Fig. 68 as a part of the function key processing in step S403 in the main operation flow chart shown in Fig. 37, and is "0" for the DPCM method; "1" for the TM method, as described above.
  • If YES in step S3003, i.e., if it is determined that the TM method is currently designated as the sound source method, DPCM data in Fig. 49 are set in a tone generation channel area of the RAM 2062 or 3062 (Figs. 35 and 36) where the ON key is assigned, so as to change the TM method to the DPCM method as a sound source method for the bass tone range (see the column of DPCM in Fig. 49). More specifically, sound source method No. data G indicating the DPCM method is set in the start area of the corresponding tone generation channel area. Subsequently, various parameters corresponding to the currently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S3004).
  • Thereafter, a value "1" is set in a flag C (S3005). The flag C is a variable (Fig. 49) stored in each tone generation channel area on the RAM 2062 (Fig. 35) of the MCPU 1012, and is used in OFF event processing to be described later with reference to Fig. 71.
  • If it is determined that a key in the high tone range higher than the 31st key code is depressed, and NO is determined in step S3002, it is checked if the variable M is "1" (S3006).
  • If NO in step S3006, i.e., if it is determined that the DPCM method is currently designated as the sound source method, TM data in Fig. 49 are set in a tone generation channel area of the RAM 2062 or 3062 (Figs. 35 and 36) where the ON key is assigned, so as to change the DPCM method to the TM method as a sound source method for the high tone range (see the column of TM in Fig. 49). More specifically, sound source method No. data G indicating the TM method is set in the start area of the corresponding tone generation channel area. Subsequently, various parameters corresponding to the currently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S3007). Thereafter, a value "2" is set in the flag C (S3008).
  • If NO in step S3003 or if YES in step S3006, since the desired sound source method is already selected, no special processing is executed (the ON-event switching is sketched below).
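The key-range dependent switching of Fig. 69 can be condensed into the following sketch: the bass range forces the DPCM method, the high range forces the TM method, and the flag C records whether a change was made so that it can be undone at the OFF event. The structures and the exact flag handling are assumptions.

```c
/* A minimal sketch of the key-range dependent switching in the ON-event
 * keyboard key processing of Fig. 69: keys at or below key code 31 force the
 * DPCM method, higher keys force the TM method, and the flag C records that
 * a change was made so the OFF-event processing can undo it. */
#include <stdio.h>

enum Method { DPCM = 0, TM = 1 };

struct ChannelArea { enum Method method_no; int flag_c; };

/* m: currently selected method for the group-A tone color (0 = DPCM, 1 = TM) */
static void on_event(struct ChannelArea *ch, int key_code, int m)
{
    if (key_code <= 31) {             /* bass tone range (cf. S3002)            */
        if (m == TM) {                /* TM currently designated (cf. S3003)    */
            ch->method_no = DPCM;     /* switch to DPCM (cf. S3004)             */
            ch->flag_c = 1;           /* remember TM -> DPCM change (cf. S3005) */
        }
    } else {                          /* high tone range                        */
        if (m == DPCM) {              /* DPCM currently designated (cf. S3006)  */
            ch->method_no = TM;       /* switch to TM (cf. S3007)               */
            ch->flag_c = 2;           /* remember DPCM -> TM change (cf. S3008) */
        }
    }
}

int main(void)
{
    struct ChannelArea ch = { TM, 0 };
    on_event(&ch, 20, TM);            /* a bass key while the TM method is selected */
    printf("method = %d, flag C = %d\n", ch.method_no, ch.flag_c);
    return 0;
}
```

The velocity-dependent switching of Fig. 70 follows the same pattern, with the test on the key code replaced by a test on whether the velocity value is equal to or larger than 64.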
  • when a tone color in the group A in Fig. 65 is designated, a sound source method to be set in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 (Figs. 35 and 36) of the MCPU 1012 or the SCPU 1022 is automatically switched in accordance with an ON key speed, i.e., a velocity.
  • a switching boundary is set at a velocity value "64", about half the maximum value "127" of the MIDI (Musical Instrument Digital Interface) standards. That is, when the velocity value of an ON key is equal to or larger than 64, the DPCM method is assigned; when the velocity of an ON key is smaller than 64, the TM method is assigned.
  • when a tone color in the group B is designated, no special keyboard key processing is executed.
  • Fig. 70 is a partial operation flow chart of the keyboard key processing in step S405 in the main operation flow chart shown in Fig. 37.
  • If NO in step S3101, and a tone color in the group B is presently selected, the special processing in Fig. 70 is not executed.
  • If YES in step S3101, and a tone color in the group A is presently selected, it is checked if the velocity of a key which is detected as an "ON key" in the keyboard key scanning processing in step S404 in the main operation flow chart shown in Fig. 37 is equal to or larger than 64 (S3102). Note that the velocity value "64" corresponds to "mp (mezzo piano)" of the MIDI standards.
  • If it is determined that the velocity value is equal to or larger than 64, and YES is determined in step S3102, it is checked if the variable M is "1" (S3103).
  • the variable M is set in the operation flow chart shown in Fig. 68 as a part of the function key processing in step S403 in the main operation flow chart shown in Fig. 37, and is "0" for the DPCM method; "1" for the TM method, as described above.
  • If it is determined that the velocity value is smaller than 64, and NO is determined in step S3102, it is further checked if the variable M is "1" (S3106).
  • If NO in step S3103 or if YES in step S3106, since the desired sound source method is already selected, no special processing is executed.
  • the sound source method is automatically set in accordance with a key range (tone range) or a velocity, as described above. Upon an OFF event, the originally selected sound source method must be restored.
  • the embodiment of the OFF event keyboard key processing to be described below can realize this processing.
  • Fig. 71 is a partial operation flow chart of the keyboard key processing in step S405 in the main operation flow chart shown in Fig. 37.
  • the value of the flag C set in the tone generation channel area on the RAM 2062 or 3062 (Figs. 35 and 36), where the key determined as an "OFF key" in the keyboard key scanning processing in step S404 in the main operation flow chart of Fig. 37 is assigned, is checked (S3201).
  • the flag C, which is set in steps S3005 and S3008 in Fig. 69 or in steps S3105 and S3108 in Fig. 70, has an initial value "0", is set to "1" when the sound source method is changed from the TM method to the DPCM method upon an ON event, and is set to "2" when the sound source method is changed from the DPCM method to the TM method.
  • when the sound source method is not changed upon the ON event, the flag C is left at the initial value "0".
  • If it is determined in step S3201 in the OFF event processing in Fig. 71 that the value of the flag C is "0", since the sound source method is left unchanged in accordance with a key range or a velocity, no special processing is executed, and normal OFF event processing is performed.
  • If it is determined in step S3201 that the value of the flag C is "1", the sound source method has been changed from the TM method to the DPCM method upon an ON event.
  • In this case, TM data in Fig. 49 are set in the tone generation channel area on the RAM 2062 or 3062 (Fig. 35 or 36) where the ON key is assigned, to restore the sound source method to the TM method.
  • More specifically, sound source method No. data G indicating the TM method is set in the start area of the corresponding tone generation channel area.
  • various parameters corresponding to the presently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S3202).
  • If it is determined in step S3201 that the value of the flag C is "2", the sound source method has been changed from the DPCM method to the TM method upon an ON event.
  • In this case, DPCM data in Fig. 49 are set in the tone generation channel area on the RAM 2062 or 3062 where the ON key is assigned, to restore the sound source method from the TM method to the DPCM method.
  • sound source method No. data G indicating the DPCM method is set in the start area of the corresponding tone generation channel area.
  • various parameters corresponding to the presently designated tone color are respectively set in the second and subsequent areas of the corresponding tone generation channel area (S3203).
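Finally, the OFF event processing of Fig. 71 only has to look at the flag C and, if it is "1" or "2", write the originally selected format back into the channel area, as in the following sketch; resetting the flag afterwards is an assumption not spelled out in the text.

```c
/* A minimal sketch of the OFF-event restoration of Fig. 71: the flag C in the
 * tone generation channel area tells whether the ON event changed the sound
 * source method, and if so the originally selected method is written back.
 * Clearing the flag afterwards is an assumption. */
#include <stdio.h>

enum Method { DPCM = 0, TM = 1 };

struct ChannelArea { enum Method method_no; int flag_c; };

static void off_event(struct ChannelArea *ch)
{
    switch (ch->flag_c) {             /* cf. S3201                              */
    case 1:                           /* TM had been replaced by DPCM at key-on */
        ch->method_no = TM;           /* restore the TM format (cf. S3202)      */
        break;
    case 2:                           /* DPCM had been replaced by TM at key-on */
        ch->method_no = DPCM;         /* restore the DPCM format (cf. S3203)    */
        break;
    default:                          /* flag C == 0: nothing was changed       */
        break;
    }
    ch->flag_c = 0;                   /* assumption: reset for the next ON event */
}

int main(void)
{
    struct ChannelArea ch = { DPCM, 1 };   /* DPCM was forced at key-on */
    off_event(&ch);
    printf("restored method = %s\n", ch.method_no == TM ? "TM" : "DPCM");
    return 0;
}
```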
  • the two CPUs, i.e., the MCPU 1012 and the SCPU 1022, share processing of different tone generation channels.
  • the number of CPUs may, however, be one, or three or more.
  • if the control ROMs 2012 and 3012 shown in Figs. 35 and 36 and the external memory 1162 are constituted by, e.g., ROM cards, various sound source methods can be presented to a user by means of the ROM cards.
  • the input port 2102 of the MCPU 1012 shown in Fig. 35 can be connected to various other operation units in addition to the instrument operation unit shown in Fig. 45.
  • various other electronic musical instruments can be realized.
  • the present invention may be realized as a sound source module for executing only the sound source processing while receiving performance data from another electronic musical instrument.
  • the present invention may be applied to various other modulation methods.
  • the above embodiment exemplifies a 4-operator system.
  • the number of operators is not limited to this.
  • a musical tone waveform generation apparatus can be constituted by versatile processors without requiring a special-purpose sound source circuit at all. For this reason, the circuit scale of the overall musical tone waveform generation apparatus can be reduced, and the apparatus can be manufactured in the same manufacturing technique as a conventional microprocessor when the apparatus is constituted by an LSI, thus improving the yield of chips. Therefore, manufacturing cost can be greatly reduced.
  • a musical tone signal output unit can be constituted by a simple latch circuit, resulting in almost no increase in manufacturing cost after the output unit is added.
  • a sound source processing program to be stored in a program storage means need only be changed to meet the above requirements. Therefore, development cost of a new musical tone waveform generation apparatus can be greatly decreased, and a new sound source method can be presented to a user by means of, e.g., a ROM card.
  • the present invention has, as an architecture of the sound source processing program, a processing architecture for simultaneously executing algorithm processing operations as I/O processing among operator processing operations before or after simultaneous execution of at least one operator processing as a modulation processing unit. For this reason, when one of a plurality of algorithms is selected to execute sound source processing, a plurality of types of algorithm processing portions are prepared, and need only be switched as needed. Therefore, the sound source processing program can be rendered very compact. The small program size can greatly contribute to a compact, low-cost musical tone waveform generation apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • General Engineering & Computer Science (AREA)
  • Electrophonic Musical Instruments (AREA)

Claims (10)

  1. A musical tone waveform generation apparatus, comprising:
    program storage means (201, 301, 2011) for storing a number of sound source processing programs corresponding to a number of types of sound source methods;
    musical tone generation means (101, 102, 1011) including a number of tone generation channels, each of which generates musical tone signals;
    assignment means (1501, 1502, 2001, 2002, 2121, 116, S1801-9, S1901-8, S2101-9, S2201-8, S2301) for selecting the sound source processing programs from the program storage means (201, 301, 2011) and for assigning selected sound source programs to the respective ones of said tone generation channels, each of the tone generation channels generating a tone by executing the sound source program assigned to it by said assignment means (1501, 1502, 2001, 2002, 2121, 116, S1801-9, S1901-8, S2101-9, S2201-8, S2301); and
    musical tone signal output means (601, 602, 3011, 3021) for outputting, at predetermined output time intervals, the musical tone signals generated by the musical tone generation means (101, 102, 1011).
  2. An apparatus according to claim 1, characterized in that the musical tone signal output means (601, 602, 3011, 3021) comprise:
    timing signal generation means (2031) for generating a timing signal for every predetermined sampling period;
    first latch means (601, 3011) for latching a digital musical tone signal generated by the musical tone generation means (101, 102, 1011) at an output time of the digital musical tone signal from the musical tone generation means (101, 102, 1011); and
    second latch means (602, 3021) for outputting the digital musical tone signal by latching an output signal of the first latch means when the timing signal is generated by the timing signal generation means (2031).
  3. An apparatus according to claim 1, characterized in that said program storage means (201, 301, 2011) further store a performance data processing program for processing the generation data, and in that the musical tone generation means comprise address control means (205, 305, 2051) for controlling an address of data storage means (206, 306, 2061) storing the musical tone generation data necessary for generating a musical tone signal, arithmetic processing means (2081, 2091, 208, 209, 308, 309) for performing a predetermined arithmetic operation, and program execution means (207, 307, 2071) for executing the performance data processing program or the sound source processing program stored in the program storage means (201, 301, 2011) while controlling the address control means (205, 305, 2051), the data storage means (206, 306, 2061), and the arithmetic processing means (2081, 2091, 208, 209, 308, 309), so as normally to execute the performance data processing program for controlling the musical tone generation data in the data storage means (206, 306, 2061), to execute the sound source processing program at predetermined time intervals, to execute the performance data processing program again upon completion of the sound source processing program, and to execute time-divisional processing on the basis of the musical tone generation data in the data storage means (206, 306, 2061) immediately after execution of the sound source processing program, so that musical tone signals are generated by the sound source methods assigned to the tone generation channels.
  4. An apparatus according to claim 1, characterized in that the assignment means (1501, 1502, 2001, 2002, 2121, 116, S1801-9, S1901-8, S2101-9, S2201-8, S2301) comprise tone color designation means (S1801-9, S1901-8, S2101-9, S2201-8) for designating a tone color of the musical tone signal to be generated in the respective tone generation channel in accordance with the performance data, whereby the sound source program corresponding to the tone color designated by the tone color designation means (S1801-9, S1901-8, S2101-9, S2201-8) is selected and assigned to a tone generation channel.
  5. An apparatus according to claim 4, characterized in that the performance data are data indicating a pitch of a musical tone signal to be generated.
  6. An apparatus according to claim 4, characterized in that the performance data are data indicating a touch or a stroke of an operation element during a performance.
  7. An apparatus according to claim 1, characterized in that the assignment means (1501, 1502, 2001, 2002, 2121, 116, S1801-9, S1901-8, S2101-9, S2201-8, S2301) comprise output means (2121, 116) for outputting performance data of a number of parts constituting a piece of music, and tone color designation means (S2301) for designating a tone color of the musical tone signal to be generated in the respective tone generation channel in accordance with the one of the several parts to which the performance data output by the output means (2121, 116) belong, whereby the sound source program corresponding to the tone color designated by the tone color designation means (S2301) is selected and assigned to the tone generation channel.
  8. An apparatus according to claim 1, characterized in that the assignment means comprise split point designation means (1501, 2001) for causing a player to designate a split point for dividing a range of a performance data value into a plurality of regions, and tone color designation means (1502, 2002) for designating a tone color for each of the plurality of regions separated by the split point designated by the split point designation means (1501, 2001), whereby the sound source program corresponding to the tone color designated by the tone color designation means (1502, 2002) is selected and assigned to the tone generation channel.
  9. An apparatus according to claim 8, characterized in that the performance data are data indicating a pitch of a musical tone signal to be generated.
  10. An apparatus according to claim 8, characterized in that the performance data are data indicating a touch or a stroke of an operation element during a performance.
EP91109140A 1990-06-28 1991-06-04 Vorrichtung zur Erzeugung von Musikwellenformen Expired - Lifetime EP0463411B1 (de)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP171215/90 1990-06-28
JP2171215A JP2869573B2 (ja) 1990-06-28 1990-06-28 楽音波形発生装置
JP2172200A JP2869574B2 (ja) 1990-06-29 1990-06-29 楽音波形発生装置
JP172200/90 1990-06-29

Publications (3)

Publication Number Publication Date
EP0463411A2 EP0463411A2 (de) 1992-01-02
EP0463411A3 EP0463411A3 (en) 1993-09-22
EP0463411B1 true EP0463411B1 (de) 1999-01-13

Family

ID=26494010

Family Applications (1)

Application Number Title Priority Date Filing Date
EP91109140A Expired - Lifetime EP0463411B1 (de) 1990-06-28 1991-06-04 Vorrichtung zur Erzeugung von Musikwellenformen

Country Status (4)

Country Link
EP (1) EP0463411B1 (de)
KR (1) KR950000841B1 (de)
DE (1) DE69130748T2 (de)
HK (1) HK1013349A1 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6272465B1 (en) 1994-11-02 2001-08-07 Legerity, Inc. Monolithic PC audio circuit

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW281745B (de) * 1994-03-31 1996-07-21 Yamaha Corp
TW279219B (de) * 1994-03-31 1996-06-21 Yamaha Corp
DE69517896T2 (de) * 1994-09-13 2001-03-15 Yamaha Corp Elektronisches Musikinstrument und Vorrichtung zum Hinzufügen von Klangeffekten zum Tonsignal
JP3358324B2 (ja) * 1994-09-13 2002-12-16 ヤマハ株式会社 電子楽器
US6047073A (en) * 1994-11-02 2000-04-04 Advanced Micro Devices, Inc. Digital wavetable audio synthesizer with delay-based effects processing
US6246774B1 (en) 1994-11-02 2001-06-12 Advanced Micro Devices, Inc. Wavetable audio synthesizer with multiple volume components and two modes of stereo positioning
US5742695A (en) * 1994-11-02 1998-04-21 Advanced Micro Devices, Inc. Wavetable audio synthesizer with waveform volume control for eliminating zipper noise
US5668338A (en) * 1994-11-02 1997-09-16 Advanced Micro Devices, Inc. Wavetable audio synthesizer with low frequency oscillators for tremolo and vibrato effects
JPH08160959A (ja) * 1994-12-02 1996-06-21 Sony Corp 音源制御装置
EP0801784A1 (de) * 1994-12-12 1997-10-22 Advanced Micro Devices, Inc. Pc klangsystem mit wellenformspeicher
US5753841A (en) * 1995-08-17 1998-05-19 Advanced Micro Devices, Inc. PC audio system with wavetable cache
US5847304A (en) * 1995-08-17 1998-12-08 Advanced Micro Devices, Inc. PC audio system with frequency compensated wavetable data
US5959231A (en) * 1995-09-12 1999-09-28 Yamaha Corporation Electronic musical instrument and signal processor having a tonal effect imparting function
US6326537B1 (en) 1995-09-29 2001-12-04 Yamaha Corporation Method and apparatus for generating musical tone waveforms by user input of sample waveform frequency
CN1232945C (zh) 2000-04-03 2005-12-21 雅马哈株式会社 便携机、该便携机的节电方法和音量补偿方法
CN113112971B (zh) * 2021-03-30 2022-08-05 上海锣钹信息科技有限公司 一种midi残缺音播放方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5375919A (en) * 1976-12-17 1978-07-05 Nippon Gakki Seizo Kk Electronic instrument
JPS5567799A (en) * 1978-11-16 1980-05-22 Nippon Musical Instruments Mfg Electronic musical instrument
JPS58211789A (ja) * 1982-06-04 1983-12-09 ヤマハ株式会社 楽音合成装置
US4862784A (en) * 1988-01-14 1989-09-05 Yamaha Corporation Electronic musical instrument

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6272465B1 (en) 1994-11-02 2001-08-07 Legerity, Inc. Monolithic PC audio circuit

Also Published As

Publication number Publication date
EP0463411A2 (de) 1992-01-02
KR950000841B1 (ko) 1995-02-02
DE69130748D1 (de) 1999-02-25
HK1013349A1 (en) 1999-08-20
KR920001424A (ko) 1992-01-30
DE69130748T2 (de) 1999-09-30
EP0463411A3 (en) 1993-09-22

Similar Documents

Publication Publication Date Title
EP0463411B1 (de) Vorrichtung zur Erzeugung von Musikwellenformen
US5319151A (en) Data processing apparatus outputting waveform data in a certain interval
US5192824A (en) Electronic musical instrument having multiple operation modes
JPH0760310B2 (ja) タッチコントロール装置
US5354948A (en) Tone signal generation device for generating complex tones by combining different tone sources
EP0463409B1 (de) Vorrichtung zur Erzeugung von Musikwellenformen
EP0169659A2 (de) Tongenerator für ein elektroniches Musikinstrument
JPH04306697A (ja) ステレオ方式
JPH0612069A (ja) ディジタル信号処理装置
JP2869573B2 (ja) 楽音波形発生装置
US5074183A (en) Musical-tone-signal-generating apparatus having mixed tone color designation states
JP3035991B2 (ja) 楽音波形発生装置
JP2797139B2 (ja) 楽音波形発生装置
JP3010693B2 (ja) 楽音波形発生装置
EP0201998B1 (de) Elektronisches Musikinstrument
US4612839A (en) Waveform data generating system
JPS62208099A (ja) 楽音発生装置
JP2678974B2 (ja) 楽音波形発生装置
JP2877012B2 (ja) 楽音合成装置
JP3134840B2 (ja) 波形サンプルの補間装置
JPH0460698A (ja) 楽音波形発生装置
JPH064079A (ja) 楽音合成装置
JP3094759B2 (ja) 楽音信号分配処理装置
JP3430575B2 (ja) 電子楽音信号合成装置
JP2819609B2 (ja) 音像定位制御装置

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19910702

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB IT

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB IT

17Q First examination report despatched

Effective date: 19961022

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB IT

REF Corresponds to:

Ref document number: 69130748

Country of ref document: DE

Date of ref document: 19990225

ITF It: translation for a ep patent filed

Owner name: STUDIO TORTA S.R.L.

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20010528

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20010530

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20010611

Year of fee payment: 11

REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20020604

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030101

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20020604

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030228

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050604