EP1017039B1 - MIDI Schnittstelle mit Sprachfähigkeit (MIDI interface with speech capability) - Google Patents

MIDI Schnittstelle mit Sprachfähigkeit (MIDI interface with speech capability)

Info

Publication number
EP1017039B1
Authority
EP
European Patent Office
Prior art keywords
sounds
notes
musical
sequence
distinct
Prior art date
Legal status
Expired - Lifetime
Application number
EP19990480122
Other languages
English (en)
French (fr)
Other versions
EP1017039A1 (de)
Inventor
Maurice Flam
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to EP19990480122 priority Critical patent/EP1017039B1/de
Publication of EP1017039A1 publication Critical patent/EP1017039A1/de
Application granted granted Critical
Publication of EP1017039B1 publication Critical patent/EP1017039B1/de
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/18 Selecting circuits
    • G10H1/26 Selecting circuits for automatically producing a series of tones
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0033 Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0058 Transmission between separate instruments or between individual components of a musical system
    • G10H1/0066 Transmission between separate instruments or between individual components of a musical system using a MIDI interface
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2230/00 General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/045 Special instrument [spint], i.e. mimicking the ergonomy, shape, sound or other characteristic of a specific acoustic musical instrument category
    • G10H2230/155 Spint wind instrument, i.e. mimicking musical wind instrument features; Electrophonic aspects of acoustic wind instruments; MIDI-like control therefor.
    • G10H2230/195 Spint flute, i.e. mimicking or emulating a transverse flute or air jet sensor arrangement therefor, e.g. sensing angle, lip position, etc, to trigger octave change
    • G10H2230/201 Spint piccolo, i.e. half-size transverse flute, e.g. ottavino
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171 Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/201 Physical layer or hardware aspects of transmission to or from an electrophonic musical instrument, e.g. voltage levels, bit streams, code words or symbols over a physical link connecting network nodes or instruments
    • G10H2240/241 Telephone transmission, i.e. using twisted pair telephone lines or any type of telephone network
    • G10H2240/251 Mobile telephone transmission, i.e. transmitting, accessing or controlling music data wirelessly via a wireless or mobile telephone receiver, analog or digital, e.g. DECT GSM, UMTS

Definitions

  • the present invention relates generally to digital interfaces for musical instruments, and specifically to methods and devices for representing musical notes using a digital interface.
  • MIDI (Musical Instrument Digital Interface)
  • Information regarding implementing the MIDI standard is widely available, and can be found, for instance, in a publication entitled "Official MIDI Specification” (MIDI Manufacturers Association, La Habra, California).
  • Data used in the MIDI standard typically include times of depression and release of a specified key on a digital musical instrument, the velocity of the depression, optional post-depression pressure measurements, vibrato, tremolo, etc.
  • a performance by one or more digital instruments using the MIDI protocol can be processed at any later time using standard editing tools, such as insert, delete, and cut-and-paste, until all aspects of the performance are in accordance with the desires of a user of the musical editor.
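The note-event data described above travel as simple byte triplets. As a minimal sketch (the function name and return shape are illustrative, not part of the MIDI specification), decoding a channel-voice message might look like this:

```python
def parse_midi_event(status: int, data1: int, data2: int) -> dict:
    """Decode a 3-byte MIDI channel-voice message (illustrative sketch).

    The high nibble of the status byte is the message type and the low
    nibble the channel (0-15). For note messages, data1 is the key
    number and data2 the velocity.
    """
    kind = status & 0xF0
    channel = status & 0x0F
    if kind == 0x90 and data2 > 0:
        return {"type": "note_on", "channel": channel,
                "note": data1, "velocity": data2}
    # Note-off, or note-on with velocity 0 (a common shorthand for off)
    if kind == 0x80 or (kind == 0x90 and data2 == 0):
        return {"type": "note_off", "channel": channel, "note": data1}
    return {"type": "other", "channel": channel}

# Middle C (note 60) played on the first channel at velocity 100
event = parse_midi_event(0x90, 60, 100)
```

A sequencer or editor stores streams of such decoded events with timestamps, which is what makes insert/delete/cut-and-paste editing possible.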
  • a MIDI computer file which contains the above-mentioned data representing a musical performance, does not contain a representation of the actual wave forms generated by an output module of the original performing musical instrument. Rather, the file may contain an indication that, for example, certain musical notes should be played by a simulated acoustic grand piano.
  • a MIDI-compatible output device subsequently playing the file would then retrieve from its own memory a representation of an acoustic grand piano, which representation may be the same as or different from that of the original digital instrument. The retrieved representation is used to generate the musical wave forms, based on the data in the file.
  • MIDI files and MIDI devices which process MIDI information designate a desired simulated musical instrument to play forthcoming notes by indicating a patch number corresponding to the instrument.
  • patch numbers are specified by the GM (General MIDI) protocol, which is a standard widely known and accepted in the art.
  • the GM protocol specification is available from the International MIDI Association (Los Angeles, California), and was originally described in an article, "General MIDI (GM) and Roland's GS Standard," by Chris Meyer, in the August, 1991, issue of Electronic Musician.
  • Any given patch will produce qualitatively the same type of sound, from the point of view of human auditory perception, for any one key on the keyboard of the digital musical instrument as for any other key.
  • If the Acoustic Grand Piano patch is selected, then playing middle C and several neighboring notes produces piano-like sounds which are, in general, similar to each other in tonal quality, and which vary essentially only in pitch. (In fact, if the musical sounds produced were substantially different in any respect other than pitch, the effect on a human listener would be jarring and undesirable.)
  • MIDI allows information governing the performance of 16 independent simulated instruments to be transmitted effectively simultaneously through 16 logical channels defined by the MIDI standard.
  • Channel 10 is uniquely defined as a percussion channel which, in contrast to the patches described hereinabove, has qualitatively distinct sounds defined for each successive key on the keyboard.
  • depressing MIDI notes 40, 41, and 42 yields respectively an Electric Snare, a Low Floor Tom, and a Closed Hi-Hat.
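The percussion channel thus behaves like a lookup table rather than a pitched instrument. The three GM assignments just mentioned can be sketched as follows (only these three entries are taken from the text; the fallback string is an arbitrary placeholder):

```python
# Subset of the General MIDI percussion map (channel 10): each MIDI
# note number selects a qualitatively distinct sound.
GM_PERCUSSION = {
    40: "Electric Snare",
    41: "Low Floor Tom",
    42: "Closed Hi-Hat",
}

def percussion_sound(note: int) -> str:
    # Unlike a pitched patch, neighboring keys give unrelated sounds.
    return GM_PERCUSSION.get(note, "unassigned in this sketch")
```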
  • MIDI cannot generally be used to set words to music.
  • Some MIDI patches are known in the art to use a "split-keyboard" feature, whereby notes below a certain threshold MIDI note number (the "split-point" on the keyboard) have a first sound (e.g., organ), and notes above the split-point have a second sound (e.g., flute).
  • the split-keyboard feature thus allows a single keyboard to be used to reproduce two different instruments.
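The split-keyboard behavior reduces to a threshold test on the note number; a minimal sketch, where the split point at middle C and the two patch names are arbitrary examples rather than values from the patent:

```python
SPLIT_POINT = 60  # arbitrary example split point (middle C)

def patch_for_note(note: int) -> str:
    """Return the simulated instrument for a note under a split-keyboard
    patch: one sound below the split point, another at or above it."""
    return "organ" if note < SPLIT_POINT else "flute"
```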
  • US patent US4733591 discloses an electronic musical instrument able to generate a multiplexed signal of a musical tone and a human voice, that is, an instrument able to speak the names of the tones as it produces the musical tones.
  • With this instrument, however, it is not possible to take into account parameters qualifying the sounds, such as the velocity with which a sequence of notes is played or the duration of the notes, when creating the sound for a key that has been played.
  • With the solution of that US patent, it is also not possible to create the sounds from a data file, which could be accessed through a network, rather than from a sequence of keys played on an instrument.
  • an electronic musical device generates qualitatively distinct sounds, such as different spoken words, responsive to different musical notes that are input to the device.
  • the pitch and/or other tonal qualities of the generated sounds are preferably also determined by the notes.
  • the device is MIDI-enabled and uses a specially-programmed patch on a non-percussion MIDI channel to generate the distinct sounds.
  • the musical notes may be input to the device using any suitable method known in the art. For example, the notes may be retrieved from a file, or may be created in real-time on a MIDI-enabled digital musical instrument coupled to the device.
  • the distinct sounds comprise representations of a human voice which, most preferably, sings the names of the notes, such as "Do/Re/Mi/Fa/Sol/La/Si/Do” or "C/D/E/F/G/A/B/C,” responsive to the corresponding notes generated by the MIDI instrument.
  • the voice may say, sing, or generate other words, phrases, messages, or sound effects, whereby any particular one of these is produced responsive to selection of a particular musical note, preferably by depression of a pre-designated key.
  • one or more parameters such as key velocity, key after-pressure, note duration, sustain pedal activation, modulation settings, etc., are produced or selected by a user of the MIDI instrument and are used to control respective qualities of the distinct sounds.
  • music education software running on a personal computer or a server has the capability to generate the qualitatively distinct sounds responsive to either the different keys pressed on the MIDI instrument or different notes stored in a MIDI file.
  • the software and/or MIDI file is accessed from a network such as the Internet, preferably from a Web page.
  • the music education software preferably enables a student to learn solfege (the system of using the syllables, "Do Re Mi" to refer to musical tones) by playing notes on a MIDI instrument and hearing them sung according to their respective musical syllables, or by hearing songs played back from a MIDI file, one of the channels being set to play a specially-programmed solfege patch, as described hereinabove.
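The key-to-syllable assignment described above can be sketched as a mapping from pitch class to syllable. This assumes a C major mapping with MIDI note 60 as middle C; chromatic notes outside the scale are simply left unmapped in this sketch:

```python
# Pitch classes of the C major scale mapped to solfege syllables.
# MIDI note 60 is middle C; (note % 12) gives the pitch class.
SOLFEGE = {0: "Do", 2: "Re", 4: "Mi", 5: "Fa", 7: "Sol", 9: "La", 11: "Si"}

def syllable_for_note(note: int):
    """Return the solfege syllable sung for a MIDI note, or None for
    chromatic notes not handled in this sketch."""
    return SOLFEGE.get(note % 12)

scale = [60, 62, 64, 65, 67, 69, 71, 72]  # the C major scale
sung = [syllable_for_note(n) for n in scale]
# sung is ["Do", "Re", "Mi", "Fa", "Sol", "La", "Si", "Do"]
```

Because the mapping uses the pitch class, notes an octave apart share a syllable, consistent with the octave behavior described later for data structure 50.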
  • the electronic musical device is enabled to produce clearly perceivable solfege sounds even when a pitch wheel of the device is being used to modulate the solfege sounds' pitch or when the user is rapidly playing notes on the device. Both of these situations could, if uncorrected, distort the solfege sounds or render them incomprehensible.
  • the digitized sounds are preferably modified to enable them to be recognized by a listener although played for a very short time.
  • a method for electronic generation of sounds, based on the notes in a musical scale including:
  • At least one of the qualitatively distinct sounds includes a representation of a human voice.
  • the distinct sounds include solfege syllables respectively associated with the notes.
  • assigning includes creating a MIDI (Musical Instrument Digital Interface) patch which includes the distinct sounds.
  • creating the patch includes:
  • receiving the input includes playing the sequence of musical notes on a musical instrument, while in another preferred embodiment, receiving the input includes retrieving the sequence of musical notes from a file.
  • retrieving the sequence includes accessing a network and downloading the file from a remote computer.
  • generating the output includes producing the distinct sounds responsive to respective velocity parameters and/or duration parameters of notes in the sequence of notes.
  • generating the output includes accelerating the output of a portion of the sounds responsive to an input action.
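One way the velocity and duration parameters might shape a stored sample, as a sketch: velocity scales amplitude and note duration truncates playback. The linear velocity-to-gain law is an assumption for illustration; real patches often apply nonlinear velocity curves.

```python
def shape_sample(samples, velocity, duration_samples):
    """Scale a stored digital sample by key velocity (0-127) and trim it
    to the note's duration. Linear velocity-to-gain is an assumed,
    simplified response curve."""
    gain = velocity / 127.0
    return [gain * s for s in samples[:duration_samples]]

# A loud (full-velocity) but short note keeps only two samples at full gain.
loud_short = shape_sample([0.5, 0.5, 0.5, 0.5], velocity=127, duration_samples=2)
```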
  • a method for electronic generation of sounds, based on the notes in a musical scale including:
  • assigning the sounds includes assigning respective representations of a human voice pronouncing one or more words.
  • apparatus for electronic generation of sounds, based on notes in a musical scale including:
  • At least one of the qualitatively distinct sounds includes a representation of a human voice.
  • the distinct sounds include respective solfege syllables.
  • the data are stored in a MIDI patch.
  • the sounds are played at respective musical pitches associated with the respective notes in the scale.
  • a system for musical instruction includes apparatus as described hereinabove.
  • the sounds preferably include words descriptive of the notes.
  • Fig. 1 is a schematic illustration of a system 20 for generating sounds, comprising a processor 24 coupled to a digital musical instrument 22, an optional amplifier 28, which preferably includes an audio speaker, and an optional music server 40, in accordance with a preferred embodiment of the present invention.
  • Processor 24 and instrument 22 generally act as music generators in this embodiment.
  • Processor 24 preferably comprises a personal computer, a sequencer, and/or other apparatus known in the art for processing MIDI information. It will be understood by one skilled in the art that the principles of the present invention, as described hereinbelow, may also be implemented by using instrument 22 or processor 24 independently.
  • instrument 22 and processor 24 are connected by standard cables and connectors to amplifier 28, while a MIDI cable 32 is used to connect a MIDI port 30 on instrument 22 to a MIDI port 34 on processor 24.
  • processor 24 is coupled to a network 42 (for example, the Internet) which allows processor 24 to download MIDI files from music server 40, also coupled to the network.
  • digital musical instrument 22 is MIDI-enabled.
  • a user 26 of instrument 22 plays a series of notes on the instrument, for example, the C major scale, and the instrument causes amplifier 28 to generate, responsive thereto, the words "Do Re Mi Fa Sol La Si Do," each word “sung,” i.e., pitched, at the corresponding tone.
  • the solfege thereby produced varies according to some or all of the same keystroke parameters or other parameters that control most MIDI instrumental patches, e.g., key velocity, key after-pressure, note duration, sustain pedal activation, modulation settings, etc.
  • user 26 downloads from server 40 into processor 24 a standard MIDI file, not necessarily prepared specifically for use with this invention.
  • the user may find an American history Web page with a MIDI file containing a monophonic rendition of "Yankee Doodle," originally played and stored using GM patch 73 (Piccolo).
  • (“Monophonic” means that an instrument outputs only one tone at a time.)
  • After downloading the file, processor 24 preferably changes the patch selection from 73 to a patch which is specially programmed according to the principles of the present invention (and not according to the GM standard).
  • a patch relating each key on the keyboard to a respective solfege syllable is downloaded from server 40 to a memory 36 in processor 24.
  • User 26 preferably uses the downloaded patch in processor 24, and/or optionally transfers the patch to instrument 22, where it typically resides in an electronic memory 38 thereof. From the user's perspective, operation of the patch is preferably substantially the same as that of other MIDI patches known in the art.
  • the specially-programmed MIDI patch described hereinabove is used in conjunction with educational software to teach solfege and/or to use solfege as a tool to teach other aspects of music, e.g., pitch, duration, consonance and dissonance, sight-singing, etc.
  • MIDI-enabled Web pages stored on server 40 comprise music tutorials which utilize the patch and can be downloaded into processor 24 and/or run remotely by user 26.
  • Fig. 2 is a schematic illustration of a data structure 50 for storing sounds, utilized by system 20 of Fig. 1, in accordance with a preferred embodiment of the present invention.
  • Data structure 50 is preferably organized in the same general manner as MIDI patches which are known in the art. Consequently, each block 52 in structure 50 preferably corresponds to a particular key on digital musical instrument 22 and contains a functional representation relating one or more of the various MIDI input parameters (e.g., MIDI note, key depression velocity, after-pressure, sustain pedal activation, modulation settings, etc.) to an output.
  • the output typically consists of an electrical signal which is sent to amplifier 28 to produce a desired sound.
  • structure 50 comprises qualitatively distinct sounds for a set of successive MIDI notes.
  • The term “qualitatively distinct sounds” is used in the present patent application and in the claims to refer to a set of sounds which are perceived by a listener to differ from each other most recognizably based on a characteristic that is not inherent in the pitch of each of the sounds in the set.
  • Illustrative examples of sets of qualitatively different sounds are given in Table I. In each of the sets in the table, each of the different sounds is assigned to a different MIDI note and (when appropriate) is preferably "sung" by amplifier/speaker 28 at the pitch of that note when the note is played.
  • a MIDI patch made according to the principles of the present invention is different from MIDI patches known in the art, in which pitch is the most recognizable characteristic (and typically the only recognizable characteristic) which perceptually differentiates the sounds generated by playing different notes, particularly several notes within one octave.
  • Each block 52 in data structure 50 preferably comprises a plurality of wave forms to represent the corresponding MIDI note.
  • Wave Table Synthesis, as is known in the art of computerized music synthesis, is the preferred method for generating data structure 50.
  • A given block 52 in structure 50, for example "Fa," is prepared by digitally sampling a human voice singing "Fa" at a plurality of volume levels and for a plurality of durations. Interpolation between the various sampled data sets, or extrapolation from the sampled sets, is used to generate appropriate sounds for non-sampled inputs.
  • only one sampling is made for each entry in structure 50, and its volume or other playback parameters are optionally altered in real-time to generate solfege based on the MIDI file or keys being played.
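The interpolation between sampled data sets might be sketched as a per-sample linear blend of the two nearest sampled volume levels. The two-level setup and linear blend here are simplifying assumptions, not the patent's actual wavetable format:

```python
def interpolate_sample(level_lo, level_hi, samples_lo, samples_hi, target):
    """Linearly blend two recordings of the same syllable, sampled at
    volume levels level_lo and level_hi, to approximate a recording at
    a non-sampled target level (illustrative sketch)."""
    t = (target - level_lo) / (level_hi - level_lo)
    return [(1 - t) * lo + t * hi for lo, hi in zip(samples_lo, samples_hi)]

# "Fa" sampled at volume levels 0.2 and 1.0; approximate level 0.6,
# which lies halfway between the two recordings.
mid = interpolate_sample(0.2, 1.0, [0.2, 0.2], [1.0, 1.0], 0.6)
```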
  • blocks corresponding to notes separated by exactly one octave have substantially the same wave forms.
  • preparation of structure 50 in order to make a solfege patch is analogous to preparation of any digitally sampled instrumental patch known in the art (e.g., acoustic grand piano), except that, as will be understood from the disclosure hereinabove, no interpolation is generally performed between two relatively near MIDI notes to determine the sounds of intermediate notes.
  • instrument 22 includes a pitch wheel, known in the art as a means for smoothly modulating the pitch of a note, typically in order to allow user 26 to cause a transition between one solfege sound and a following solfege sound.
  • Spoken words generally have a "voiced" part, predominantly generated by the larynx, and an "unvoiced" part, predominantly generated by the teeth, tongue, palate, and lips.
  • the voiced part of speech can vary significantly in pitch, while the unvoiced part is relatively unchanged with modulations in the pitch of a spoken word.
  • synthesis of the sounds is adapted in order to enhance the ability of a listener to clearly perceive each solfege sound as it is being output by amplifier 28, even when the user is operating the pitch wheel (which can distort the sounds) or playing notes very quickly (e.g., faster than about 6 notes/second).
  • instrument 22 regularly checks for input actions such as fast key-presses or use of the pitch wheel. Upon detecting one of these conditions, instrument 22 preferably accelerates the output of the voiced part of the solfege sound, most preferably generating a substantial portion of the voiced part in less than about 100 ms (typically in about 15 ms). The unvoiced part is generally not modified in these cases.
  • the responsiveness of instrument 22 to pitch wheel use is preferably deferred until after the accelerated sound is produced.
  • Dividing a spoken sound into its voiced and unvoiced parts, optionally altering one or both of the parts, and subsequently recombining the parts is a technique well known in the art. Using known techniques, acceleration of the voiced part is typically performed in such a manner that the pitch of the voiced part is not increased by the acceleration of its playback.
  • the voiced and unvoiced parts of each solfege note are evaluated prior to playing instrument 22, most preferably at the time of initial creation of data structure 50.
  • both the unmodified digital representation of a solfege sound and the specially-created "accelerated" solfege sound are typically stored in block 52, and instrument 22 selects whether to retrieve the unmodified or accelerated solfege sound based on predetermined selection parameters.
  • acceleration of the solfege sound is performed without separation of the voiced and unvoiced parts. Instead, substantially the entire representation of the solfege sound is accelerated, preferably without altering the pitch of the sound, such that the selected solfege sound is clearly perceived by a listener before the sound is altered by the pitch wheel or replaced by a subsequent solfege sound.
  • In this way, the most recognizable part of the solfege sound (e.g., the “D” in “Do”) is heard by a listener before the sound is distorted or a subsequent key is pressed.
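The whole-representation acceleration without pitch change might be sketched with a crude overlap-add scheme: analysis frames are taken with a hop twice the synthesis hop, so the sound plays in roughly half the time while each frame's local waveform (and hence its perceived pitch) is untouched. The frame size, cross-fade length, and factor-of-two speedup are arbitrary illustrative choices, not values from the patent:

```python
def accelerate(samples, frame=64, speed=2):
    """Time-compress a sound by keeping one analysis frame out of every
    `speed`, cross-fading adjacent kept frames so the local waveform
    (and thus the perceived pitch) within each frame is preserved.
    A crude overlap-add sketch, not a production time-stretcher."""
    out = []
    for start in range(0, len(samples) - frame + 1, frame * speed):
        chunk = samples[start:start + frame]
        if out:
            # Short linear cross-fade over the seam between kept frames.
            fade = min(8, frame)
            for i in range(fade):
                w = i / fade
                out[-fade + i] = (1 - w) * out[-fade + i] + w * chunk[i]
            out.extend(chunk[fade:])
        else:
            out.extend(chunk)
    return out

fast = accelerate([0.0] * 512)
# Output is roughly half the length of the input
```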

Claims (18)

  1. A method for the electronic generation of sounds on the basis of the successive notes of a musical scale, the method comprising:
    assigning respective sounds to at least some of the successive notes, the sounds differing from one another on the basis of the pitch of the note and by a further characteristic that is not inherent in the pitch of the individual sounds of the scale;
    storing the assigned sounds in a program executed on a computer that is connected via a hardware interface to an instrument keyboard, enabling parameters to be captured and encoded from a key struck on the keyboard and transmitted to the computer, which is furthermore able to decode corresponding sounds and produce them on an amplifier, each struck key corresponding to a generated sound that may differ from another;
    receiving a first input indicating a sequence of musical notes selected from the notes of the scale;
    receiving a second input indicating one or more keystroke parameters corresponding to one or more of the notes in the sequence; and
    generating an output responsive to the sequence, in which the distinct sounds are produced responsive to the respective notes defined by the first input, in the sequence and at the respective pitches assigned to the respective notes, and responsive to the keystroke parameters of the second input for the respective notes.
  2. The method of claim 1, wherein at least one of the distinct sounds comprises a representation of a human voice.
  3. The method of claim 2, wherein the distinct sounds comprise solfege syllables respectively assigned to the notes.
  4. The method of any preceding claim, wherein the assigning comprises creating a MIDI (Musical Instrument Digital Interface) patch (50) comprising the distinct sounds.
  5. The method of claim 4, wherein creating the patch comprises:
    creating a digital representation of the sounds by digitally sampling the distinct sounds; and
    storing the representation in the patch.
  6. The method of any preceding claim, wherein receiving the first and second inputs comprises playing the sequence of musical notes on a musical instrument (22).
  7. The method of any of claims 1 to 5, wherein receiving the first and second inputs comprises retrieving the sequence of musical notes from a file.
  8. The method of claim 7, wherein receiving the first and second inputs comprises accessing a network (42) and downloading the file from a remote computer (40).
  9. The method of any preceding claim, wherein generating the output comprises producing the distinct sounds responsive to the respective note duration parameters in the sequence of notes.
  10. The method of any preceding claim, wherein generating the output comprises producing the distinct sounds responsive to the respective note velocity parameters in the sequence of notes.
  11. The method of any preceding claim, wherein generating the output comprises accelerating the output of a portion of the sounds responsive to an input action.
  12. The method of any of claims 1 to 11, wherein assigning the sounds comprises assigning respective representations of a human voice pronouncing one or more words.
  13. Apparatus (20) for the electronic generation of sounds on the basis of successive notes of a musical scale, comprising:
    an electronic music generation unit (22, 24) comprising a memory (38, 36) in which data indicating the respective sounds assigned to the successive notes are stored for a program executed on a computer that is connected via a hardware interface to an instrument keyboard and that enables the parameters for a key struck on the keyboard by an instrument player to be captured and encoded and transmitted to the computer, which can decode corresponding sounds and produce them on an amplifier, each struck key corresponding to a generated sound that may differ from another sound on the basis of the pitch of the note and by a further characteristic that is not inherent in the pitch of the individual sounds of the scale, the electronic music generation unit being adapted to
    (a) receive a first input indicating a sequence of musical notes selected from the notes of the scale; and being adapted to
    (b) receive a second input indicating one or more keystroke parameters corresponding to one or more of the notes in the sequence; the apparatus further comprising:
    a loudspeaker (28) driven by the electronic music generation unit to generate an output responsive to the sequence, the electronic music generation unit being adapted to produce, responsive to the first and second inputs, distinct sounds assigned to the notes of the scale.
  14. The apparatus of claim 13, wherein at least one of the distinct sounds comprises a representation of a human voice.
  15. The apparatus of claim 14, wherein the distinct sounds comprise respective solfege syllables.
  16. The apparatus of any of claims 13 to 15, wherein the data are stored in a MIDI patch (50).
  17. The apparatus of any of claims 13 to 16, wherein, in the output produced by the loudspeaker, the sounds are played at the respective pitches assigned to the respective notes of the scale.
  18. A system for musical instruction, comprising apparatus according to claim 17, wherein the sounds comprise words descriptive of the notes.
EP19990480122 1998-12-29 1999-11-25 MIDI Schnittstelle mit Sprachfähigkeit Expired - Lifetime EP1017039B1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19990480122 EP1017039B1 (de) 1998-12-29 1999-11-25 MIDI Schnittstelle mit Sprachfähigkeit

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP98480102 1998-12-29
EP98480102 1998-12-29
EP19990480122 EP1017039B1 (de) 1998-12-29 1999-11-25 MIDI Schnittstelle mit Sprachfähigkeit

Publications (2)

Publication Number Publication Date
EP1017039A1 EP1017039A1 (de) 2000-07-05
EP1017039B1 true EP1017039B1 (de) 2006-08-16

Family

ID=26151826

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19990480122 Expired - Lifetime EP1017039B1 (de) 1998-12-29 1999-11-25 MIDI Schnittstelle mit Sprachfähigkeit

Country Status (1)

Country Link
EP (1) EP1017039B1 (de)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4733591A (en) * 1984-05-30 1988-03-29 Nippon Gakki Seizo Kabushiki Kaisha Electronic musical instrument
JPH05341793A (ja) * 1991-04-19 1993-12-24 Pioneer Electron Corp Karaoke performance apparatus
JP3381074B2 (ja) * 1992-09-21 2003-02-24 Sony Corp Sound configuration apparatus
US5915237A (en) * 1996-12-13 1999-06-22 Intel Corporation Representing speech using MIDI

Also Published As

Publication number Publication date
EP1017039A1 (de) 2000-07-05

Similar Documents

Publication Publication Date Title
US6191349B1 (en) Musical instrument digital interface with speech capability
US6506969B1 (en) Automatic music generating method and device
KR0149251B1 (ko) Method and system for generating musical instrument sounds, and musical instrument sound generation control system
JP3527763B2 (ja) Tonality control device
JPH0744183A (ja) Karaoke performance device
JP3915807B2 (ja) Automatic rendition-style determination device and program
JPH10214083A (ja) Musical tone generation method and storage medium
JP3116937B2 (ja) Karaoke device
JP4407473B2 (ja) Rendition-style determination device and program
JP4036952B2 (ja) Karaoke device featuring a singing-scoring system
JP2001324987A (ja) Karaoke device
EP1017039B1 (de) MIDI interface with speech capability
JPH06332449A (ja) Singing-voice reproduction device for an electronic musical instrument
JP2605885B2 (ja) Musical tone generating device
JP3618203B2 (ja) Karaoke device allowing the user to play accompaniment music
JP4802947B2 (ja) Rendition-style determination device and program
JP3637196B2 (ja) Music playback device
Menzies New performance instruments for electroacoustic music
JP3719129B2 (ja) Musical tone signal synthesis method, musical tone signal synthesis device, and recording medium
JP2002221978A (ja) Vocal data generation device, vocal data generation method, and singing-sound synthesis device
JP2002297139A (ja) Performance data modification processing device
JP3873914B2 (ja) Performance practice device and program
JP6981239B2 (ja) Apparatus, method, and program
JP2000003175A (ja) Musical tone generation method, musical tone data creation method, musical tone waveform data creation method, musical tone data generation method, and storage medium
JP2002041035A (ja) Method for creating encoded data for reproduction

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17P Request for examination filed

Effective date: 20000617

AKX Designation fees paid

Free format text: AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 20040203

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

Ref country code: LI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20060816

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

Ref country code: CH

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: INTERNATIONAL BUSINESS MACHINES CORPORATION

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 69932796

Country of ref document: DE

Date of ref document: 20060928

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061116

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061116

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061127

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061127

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20070116

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20070518

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061117

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20061125

REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 20081017

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060816

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20121206

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20140731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131202

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20181031

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20181203

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 69932796

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20191124

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20191124