JP3646599B2 - Playing interface - Google Patents


Publication number
JP3646599B2
JP3646599B2 (application JP2000002077A)
Authority
JP
Japan
Prior art keywords
performance
motion
operator
sensor
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP2000002077A
Other languages
Japanese (ja)
Other versions
JP2001195059A (en)
Inventor
雅樹 佐藤
聡史 宇佐
善樹 西谷
Original Assignee
ヤマハ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ヤマハ株式会社 (Yamaha Corporation)
Priority to JP2000002077A
Priority claimed from EP07110789.0A (EP1855267B1)
Publication of JP2001195059A
Application granted
Publication of JP3646599B2
Application status: Expired - Lifetime

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a performance interface, and more specifically to a performance interface that is interposed between a performance participant and a musical sound generating device, such as an electronic musical instrument or a musical sound reproducing device, and that variously controls the musical sound generating device in accordance with the movements of the performance participant.
[0002]
[Prior art]
Generally, in an electronic musical instrument, a desired musical tone can be generated once the four performance parameters of timbre, pitch, volume, and effect are determined. In a musical sound reproducing apparatus that reproduces acoustic information from a source such as a CD, MD, DVD, DAT, or MIDI (Musical Instrument Digital Interface) file, a desired musical sound can be reproduced once the three performance parameters of tempo, volume, and effect are determined. Therefore, if a performance interface is provided between an operator and a musical sound generating device such as an electronic musical instrument or a musical sound reproducing device, and the above four or three performance parameters are determined through this performance interface by the operator's actions, a desired musical sound corresponding to those actions can be output.
[0003]
Conventionally, interfaces of this type have been proposed that control the performance parameters of musical sounds output from an electronic musical instrument or musical sound reproducing device in accordance with the movement of an operator. However, because such interfaces are limited to a single operator, a single musical tone generator, and a single set of performance parameters, many people cannot participate, and varied musical tone output cannot be enjoyed.
[0004]
[Problems to be solved by the invention]
The present invention adds a variety of functions to a performance interface that controls the performance parameters of a musical sound generator, such as an electronic musical instrument, in accordance with the movements and physical state of a performance participant. Its purpose is thereby to provide a new type of performance interface, serving as a new musical sound controller for music ensembles, theater, music education, sports events, concerts, theme parks, and music games, that allows anyone from infants to the elderly to participate easily in musical sound production and enjoy performing.
[0005]
[Means for Solving the Problems]
According to one aspect of the present invention, there is provided a performance interface for generating performance control information that controls a musical sound generated by a musical sound generator in accordance with an operator's physical actions. The performance interface comprises: a motion detector, which can be worn or held by the operator, that outputs a motion detection signal composed of three directional components corresponding to three directions based on the operator's motion; motion determination means that compares the three directional components of the motion detection signal received from the motion detector with one another and, when the directional components match one of a plurality of predetermined relationships, determines that the operator is performing the specific action corresponding to that relationship; and performance control information generating means that generates performance control information corresponding to the specific action based on the determination result (claim 1). The motion detector in this performance interface is typically of a type that is held in the operator's hand or worn on the operator's hand or foot.
[0006]
According to another aspect of the present invention, there is provided a performance interface for generating performance control information that controls a musical sound generated by a musical sound generator in accordance with an operator's physical state. The performance interface comprises: a motion detector, which can be worn or held by the operator, that outputs a motion detection signal composed of three directional components corresponding to three directions based on the operator's motion; a state detector, which can be worn or held by the operator, that detects the physical state of the operator and outputs a corresponding state detection signal; motion determination means that compares the three directional components of the motion detection signal received from the motion detector with one another and, when the directional components match one of a plurality of predetermined relationships, determines that the operator is performing the specific action corresponding to that relationship; body state analysis means that analyzes the physical state of the operator based on the state detection signal received from the state detector; and performance control information generating means that generates performance control information corresponding to the identified specific action and the analyzed physical state, based on the determination result of the motion determination means and the analysis result of the body state analysis means (claim 2).
[0007]
In the performance interface according to the present invention, the state detector is configured to detect at least one of pulse, body temperature, skin resistance, brain waves, respiratory rate, and the viewpoint position of the eyeball, and to output a corresponding state detection signal (claim 3). The motion detector may be one that is held in the operator's hand or attached to the operator's hand or foot.
[0008]
In the performance interface according to the present invention, the performance control information can be information for controlling the volume, tempo, timing, tone color, effect, or pitch of the musical sound. Furthermore, the motion detector can be a three-dimensional acceleration sensor that detects motion in three orthogonal directions based on the motion of the operator and outputs a corresponding motion detection signal with three directional components.
[0009]
[Effects of the Invention]
According to one aspect of the present invention, the performance interface comprises a motion detector that can be held by the operator or attached to the operator (hand, foot, etc.), and a main body system that generates performance control information for controlling the musical sound generated by the musical sound generator. Here, the performance control information controls, for example, the volume, tempo, timing, tone color, effect, or pitch of the musical sound, and is determined from the motion detection signal. The motion detection signal is output by a motion detector such as a three-dimensional acceleration sensor that detects motion in three directions based on the motion of the operator. The main body system receives the three-direction motion detection signal from the motion detector; when the directional components of the motion detection signal match one of a plurality of predetermined relationships, the motion determination means determines that the operator is performing the specific action corresponding to that relationship, and the performance control information generating means generates performance control information corresponding to that specific action based on the determination result.
[0010]
As described above, in the performance interface according to the present invention, when the operator (performance participant) moves the motion detector in various ways, the content of the motion detection signal from the motion detector is analyzed to determine the operator's specific action. That is, when the directional components of the three-direction motion detection signal received from the motion detector match one of a plurality of predetermined relationships, it is determined that the operator is performing the specific action corresponding to that relationship. For example, when the Z direction component is larger than the X and Y direction components, it is determined that a "push" action is being performed, and when the relationship is the opposite, it is determined that a "cut" action is being performed. In the latter case, the direction of the "cut" action is further determined according to the magnitude relationship between the X and Y direction components. Performance control information is then generated according to the specific action identified by this determination. Therefore, according to the present invention, a specific action of the operator can be determined by comparing the directional components of the motion detection signal detected from the operator's motion, and the musical performance can be controlled in various ways according to the determination result.
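The component comparison described above can be sketched in a few lines of Python. This is only an illustrative example: the function name, the use of absolute values, and the labels for the two "cut" directions are assumptions, since the patent specifies only that the relative magnitudes of the three directional components are compared against predetermined relationships.

```python
def classify_gesture(ax: float, ay: float, az: float) -> str:
    """Classify a gesture from the three directional components of a
    motion detection signal, by comparing their magnitudes.

    Labels and thresholds are illustrative, not from the patent.
    """
    x, y, z = abs(ax), abs(ay), abs(az)
    if z > x and z > y:
        return "push"  # Z component dominates: a forward "push" action
    # Otherwise a "cut"; its direction follows from comparing X and Y
    return "cut-horizontal" if x >= y else "cut-vertical"
```

In an actual system, each recognized action would then be mapped to performance control information (e.g., a volume or timing change) by the performance control information generating means.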
[0011]
According to another feature of the present invention, the performance interface comprises a motion detector and a state detector, each of which can be held by or attached to the operator, and a main body system that generates performance control information for controlling the musical sound generated by the musical sound generator. A motion detector such as a three-dimensional acceleration sensor detects motion in three directions based on the motion of the operator and outputs a corresponding motion detection signal. The state detector detects a physical state of the operator such as pulse, body temperature, skin resistance, brain waves, respiration rate, or eyeball viewpoint movement, and outputs a corresponding state detection signal. The main body system receives the detection signals from the motion detector and the state detector; when the directional components of the motion detection signal match one of a plurality of predetermined relationships, the motion determination means determines that the operator is performing the specific action corresponding to that relationship, the body state analysis means analyzes the operator's physical state from the state detection signal, and the performance control information generating means generates performance control information corresponding to the determined specific action and the analysis result.
[0012]
Thus, in the performance interface of the present invention, when the operator moves the motion detector, the operator's specific action is determined from the content of the motion detection signal of the motion detector, and at the same time the physical state of the operator is analyzed from the content of the state detection signal of the state detector [state information (biological information, physiological information)], and performance control information is generated according to these results. Therefore, according to the present invention, it is possible not only to respond to a specific action of the operator but also to control the musical performance in various ways in consideration of the operator's physical state.
[0013]
In other words, the performance interface of the present invention has the function of generating control information for the musical sound generating device in accordance with motion (gesture) information representing a specific action of the performance participant and with body information representing the physical (biological and physiological) state of the performance participant, and of controlling the performance parameters of the musical sound generator based on this control information. As a result, musical sounds shaped by the gestures and biological states of the performance participants can be output, and anyone can easily participate in the production of musical sounds.
[0014]
In order to acquire such physical information, it is preferable to use, for example, a motion sensor that detects motion in three orthogonal directions, such as a three-dimensional acceleration sensor, for the motion (gesture) information, and a biological information sensor that produces a measurement output such as pulse or skin resistance for the state information. Two or more performance parameters of the musical sound generating device can then be controlled by the acquired physical information.
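The idea of driving two or more performance parameters from the acquired physical information can be sketched as below. The function name, the scaling constants, and the choice of mapping motion magnitude to volume and pulse to tempo are all illustrative assumptions; the patent leaves the specific mapping open.

```python
def map_body_info_to_params(accel_mag: float, pulse_bpm: float) -> dict:
    """Map physical information onto two performance parameters at once:
    motion-sensor magnitude drives volume, and the measured pulse drives
    tempo. Scaling constants and ranges are illustrative assumptions.
    """
    volume = min(127, int(accel_mag * 32))           # MIDI-style 0-127 volume
    tempo = max(40.0, min(208.0, float(pulse_bpm)))  # clamp to a musical BPM range
    return {"volume": volume, "tempo": tempo}
```

A real implementation would feed such values into the performance parameter determination unit rather than returning them directly.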
[0016]
In one embodiment of the present invention, a system in which a plurality of performance participants can simultaneously share and control a musical sound generating device such as an electronic musical instrument or a musical sound creating device can be provided.
[0017]
Specifically, one or more performance participants wear a motion sensor or a biological information sensor on a predetermined body part (for example, a hand or a foot), detection data from the sensors is transmitted wirelessly to the receiver of the musical sound generator, and the musical sound generator analyzes the detected data and controls its performance parameters based on the analysis result.
[0018]
In one embodiment of the present invention, a motion sensor such as a one-dimensional to three-dimensional sensor is used as the physical information input means of the performance interface to control two or more performance parameters of the musical sound generator, and biological information is input to control some of the performance parameters. The performance parameters can also be controlled using the motion sensor output and the biological information at the same time.
[0019]
In another embodiment of the present invention, a motion sensor such as a one-dimensional to three-dimensional sensor is used as the physical information input means of the performance interface to control the tempo of the output musical sound. In this case, the periodicity of the motion sensor output is used to derive the performance parameter. The tempo of the musical sound can also be controlled by inputting biological information, and the performance parameters can be controlled using the motion sensor output and the biological information at the same time.
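Deriving a tempo from the periodicity of the motion sensor output could look roughly like the following sketch. The rising-edge detection and fixed threshold are illustrative choices; the patent says only that the repetition cycle of the detection signal is used.

```python
def estimate_tempo_bpm(samples, sample_rate_hz, threshold=1.0):
    """Estimate a beat tempo from the periodicity of a motion sensor
    output: find rising threshold crossings and convert the mean
    interval between them to beats per minute.
    """
    beats = [i for i in range(1, len(samples))
             if samples[i - 1] < threshold <= samples[i]]
    if len(beats) < 2:
        return None  # not enough periodicity to estimate a tempo
    intervals = [(b - a) / sample_rate_hz
                 for a, b in zip(beats, beats[1:])]
    mean_period = sum(intervals) / len(intervals)
    return 60.0 / mean_period  # seconds per beat -> beats per minute
```

For example, a signal that spikes every 0.5 seconds would yield an estimate of 120 BPM.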
[0020]
In another embodiment of the present invention, detection data is gathered from a plurality of performance participants by physical information detection sensors, such as motion sensors worn on two body parts (for example, both hands) or one (for example, one hand). For all of the detection data, or for a plurality of arbitrarily selected detection data, a simple average, a weighted average, or a data value selected according to a predetermined rule is calculated, and the performance parameters are controlled using the calculated value.
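The aggregation rules just described (simple average, weighted average, or rule-based selection) can be sketched as follows. The function signature and rule names are illustrative assumptions.

```python
def aggregate(values, weights=None, rule="average"):
    """Combine detection data values from several performance
    participants into one control value: a simple or weighted
    average, or selection of the first/last value in time.
    """
    if rule == "first":
        return values[0]
    if rule == "last":
        return values[-1]
    if weights is None:
        weights = [1.0] * len(values)  # simple average as default
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total
```

With unit weights this reduces to the simple average; assigning a larger weight to, say, a teacher's master sensor gives that participant proportionally more control.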
[0021]
The present invention can be applied not only to pure musical performances but also to various musical performance environments. Examples of the application environment of the present invention include the following:
(1) Music performance control (conductor mode = professional mode, semi-auto mode).
(2) Accompaniment sound, external sound control
Various percussion instrument sounds, bells, and natural sounds are built in, or an external sound source is used, and the music performance is controlled by one person or several people. For example, the sound source of a predetermined performance track is set to the sound of a handbell, a Japanese musical instrument, a gamelan, or a percussion ensemble, and is inserted into a music piece (the main melody performance track).
(3) Performance by multiple people (music ensemble)
The music performance is controlled based on average data obtained by a simple average or weighted average of the sensor output values from two or more persons, or on data selected by a precise rule, such as the last or first data in time within a certain time range.
(Example of use) Music performance at a music education site: for example, a teacher holds a master sensor and controls the tempo and volume of the music, while students use handset sensors to insert various selected sounds into the music, such as handbell-like parts or traditional Japanese drums and bells (with natural wind and water sounds played simultaneously). Teachers and students can thus enjoy the musical performance together with a shared sense of participation.
(4) Accompaniment of tap dance.
(5) Network music performance between remote locations (with video) (music game)
People in different locations simultaneously control a music performance through a communication network. For example, a person at a remote location can join a musical performance in a music classroom at the same time, while watching video, through the network.
(6) Excited state response tone control in the game.
(7) Background music (BGM) control in sports such as jogging and aerobics (bio mode, health mode)
For example, the heart rate is monitored during jogging, aerobics, or similar exercise, and when a preset heart rate is exceeded, the tempo, volume, and so on of the music are naturally lowered.
(8) Theater.
During a theatrical performance, sound effects such as cutting sounds and wind noises are controlled according to the movements of the sword during a sword dance.
(9) Event.
In various events, the interface can serve, for example, as a participation-type remote controller, participation-type controller, participation-type input device, or participation-type game.
(10) Concert.
At a concert venue, the performer carries out the main control of the song (tempo, dynamics, etc.), while audience members participate in the music performance in a simple manner through sub-control units, guided by light emitted from LEDs and the like.
(11) Theme park.
In the parade at the theme park, the performance of the song is controlled and the lighting effect by the light emitting device is controlled.
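The heart-rate-responsive BGM control of application example (7) above can be sketched as a simple feedback rule. The step size, floor tempo, and function name are illustrative assumptions; the patent describes only that the tempo and volume are lowered when a set heart rate is exceeded.

```python
def adjust_tempo(current_bpm, heart_rate, limit_hr,
                 floor_bpm=60.0, step=2.0):
    """Lower the music tempo gradually whenever the measured heart
    rate exceeds a preset limit, as in the jogging/aerobics BGM
    example; otherwise leave the tempo unchanged.
    """
    if heart_rate > limit_hr:
        return max(floor_bpm, current_bpm - step)  # never drop below floor
    return current_bpm
```

Called once per measurement cycle, this produces the gradual slowing described in the bio/health mode rather than an abrupt change.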
[0022]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the drawings. The following embodiments are merely examples, and various modifications can be made without departing from the spirit of the present invention.
[0023]
[System configuration]
FIG. 1 shows the schematic configuration of an entire performance system including a performance interface according to an embodiment of the present invention. In this example, the system includes a plurality of body information detection transmitters 1T1 to 1Tn; a main body system 1M comprising an information reception / musical sound controller 1R and a musical sound reproducing unit 1S; a host computer 2; a sound system 3; and a speaker system 4. The body information detection transmitters 1T1 to 1Tn and the information reception / musical sound controller 1R constitute the performance interface.
[0024]
The plurality of body information detection transmitters 1T1 to 1Tn include motion sensors MS1 to MSn and/or state sensors SS1 to SSn. Each pair of sensors MSa and SSa (a = 1 to n) is held in the hand of one of the plurality of operators participating in the performance, or attached to a predetermined part of that performance participant's body. Each motion sensor MSa detects the gestures and movements of its performance participant and outputs a motion signal in response; it may be, for example, a so-called three-dimensional sensor (such as a three-dimensional acceleration sensor or three-dimensional velocity sensor) with outputs (x, y, z), a two-dimensional sensor (x, y), or a distortion detector. The state sensor SSa is a so-called biological information sensor that outputs a state signal obtained by measuring the pulse (pulse wave), skin resistance, brain waves, respiration, pupil movement, and the like of each participant's body.
[0025]
Each of the plurality of body information detection transmitters 1T1 to 1Tn transmits the motion signals and state signals from the body information sensors MS1 to MSn and SS1 to SSn as detection signals, via a signal processing and transmission device (not shown), to the information reception / musical sound controller 1R. The information reception / musical sound controller 1R includes a reception processing unit RP, an information analysis unit AN, and a performance parameter determination unit PS, can communicate with the host computer 2 composed of a personal computer (PC), and performs data processing for performance parameter control.
[0026]
That is, when the information reception / musical sound controller 1R receives the detection signals from the body information detection transmitters 1T1 to 1Tn, the reception processing unit RP extracts the corresponding data under predetermined conditions and hands the extracted motion data or state data to the information analysis unit AN as detection data. The information analysis unit AN analyzes the detection data; for example, the body tempo is detected from the repetition cycle of the detection signal. The performance parameter determination unit PS then determines the performance parameters of the musical sound based on the analysis results.
[0027]
The musical tone reproduction unit 1S includes a performance data control unit MC and a sound source unit SB, and generates a musical tone signal based on, for example, MIDI format performance data. The performance data control unit MC changes the performance data generated by the main system 1M or the performance data prepared in advance according to the performance parameters set by the performance parameter determination unit PS. The sound source unit SB generates a musical sound signal based on the changed performance data, transmits it to the sound system 3, and emits the corresponding musical sound from the speaker system 4.
[0028]
In the performance interface (1T1 to 1Tn, 1M) according to one embodiment of the present invention, with this configuration, when an operator moves the motion sensors MS1 to MSn, the information analysis unit AN analyzes the operator's movement from the detection data of the motion sensors MS1 to MSn. The performance parameter determination unit PS determines performance parameters according to the analysis result, and the musical tone reproduction unit 1S generates performance data based on those parameters. A musical tone controlled as desired, reflecting the movement of the motion sensor, is therefore emitted through the sound and speaker systems 3 and 4. Simultaneously with the motion analysis, the information analysis unit AN also analyzes the physical state of the operator from the state information (biological information, physiological information) provided by the state sensors SS1 to SSn, and performance parameters corresponding to these analysis results are generated. Therefore, it is possible not only to respond to the movement of the operator but also to control the music performance in various ways in consideration of the operator's physical state.
[0029]
[Configuration of body information detection transmitter]
FIG. 2 shows a configuration example of the body information detection transmitter according to an embodiment of the present invention. Each body information detection transmitter 1Ta (a = 1 to n) includes a signal processing and transmission device in addition to the body information sensors such as the motion sensor MSa and the state sensor SSa. The signal processing and transmission device comprises a transmitter central processing unit (transmitter CPU) T0, a memory T1, a high-frequency transmitter T2, a display unit T3, a charge controller T4, a transmission power amplifier T5, an operation switch T6, and the like. The motion sensor MSa has a structure that can be held by a performance participant or attached to an arbitrary location on the performance participant. When the motion sensor MSa is a hand-held type, the signal processing and transmission device can be incorporated into the sensor housing together with the motion sensor MSa. The state sensor SSa is attached to a predetermined part of the body according to the state to be detected.
[0030]
The transmitter CPU T0 controls the operation of the motion sensor MSa, the state sensor SSa, the high-frequency transmitter T2, the display unit T3, and the charge controller T4 based on a transmitter operation program recorded in the memory T1. The detection signals from the body information sensors MSa and SSa are subjected to predetermined processing, such as the addition of an ID number, by the transmitter CPU T0, passed to the high-frequency transmitter T2, further amplified by the transmission power amplifier T5, and transmitted to the main body system 1M via the transmission antenna TA.
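The ID-number addition performed by the transmitter CPU could be framed as below. The byte layout is an assumption made for illustration; the patent specifies only that an ID number is added so the main body system can distinguish the participants' transmitters.

```python
import struct

def frame_packet(sensor_id: int, ax: float, ay: float, az: float) -> bytes:
    """Prepend the transmitter's ID number to a three-component
    motion detection sample. Layout (1 unsigned byte ID followed by
    three little-endian float32 values) is an illustrative choice.
    """
    return struct.pack("<Bfff", sensor_id, ax, ay, az)

def parse_packet(packet: bytes):
    """Receiver-side counterpart: recover the ID and the sample."""
    sensor_id, ax, ay, az = struct.unpack("<Bfff", packet)
    return sensor_id, (ax, ay, az)
```

On the main body system side, the reception processing unit would use the recovered ID to route each sample to the processing or function assigned to that participant.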
[0031]
The display unit T3 includes, for example, a 7-segment LED or LCD display and one or more LED light emitters (none of which are shown). The display shows the sensor number, the operating power state, and alarms. Under the control of the transmitter CPU T0, the LED light emitters either emit light continuously according to the state of the operation switch T6 or blink according to the detection output of the motion sensor MSa. The operation switch T6 is used not only for blink control of the LED light emitters but also for various settings such as mode setting. The charge controller T4 controls charging of the battery power supply T8 when a commercial power supply is connected via the AC adapter T7. When the power switch (not shown) provided on the battery power supply T8 is turned on, power is supplied to each part of the transmitter.
[0032]
[System configuration]
FIG. 3 is a block diagram of the hardware configuration of the main body system according to one embodiment of the present invention. In this example, the main body system 1M includes a main body central processing unit (main body CPU) 10, a read-only memory (ROM) 11, a random access memory (RAM) 12, an external storage device 13, a timer 14, first and second detection circuits 15 and 16, a display circuit 17, a sound source circuit 18, an effect circuit 19, a reception processing circuit 1A, and the like; these devices 10 to 1A are connected to one another via a bus 1B. A communication interface (I/F) 1C for communicating with the host computer 2 and a MIDI interface (I/F) 1D are also connected to the bus 1B.
[0033]
The main body CPU 10, which controls the entire main body system 1M, performs various controls according to predetermined programs under the time management of the timer 14, which is used to generate a tempo clock, an interrupt clock, and the like; in particular, it centrally executes the performance interface processing program for performance parameter determination, performance data modification, and playback control. The ROM 11 stores predetermined control programs for controlling the main body system 1M; these control programs can include the performance interface processing program for performance parameter determination, performance data modification, and playback control, as well as various data and tables. The RAM 12 stores data and parameters necessary for these processes and is used as a work area for temporarily storing various data being processed.
[0034]
A keyboard 1E is connected to the first detection circuit 15, a pointing device 1F such as a mouse is connected to the second detection circuit 16, and a display 1G is connected to the display circuit 17. As a result, while viewing the various screens displayed on the display 1G, the user can operate the keyboard 1E and the pointing device 1F to set the various modes necessary for performance data control in the main body system 1M and to perform various setting operations, such as assigning processing and functions to ID numbers and assigning tone colors (sound sources) to performance tracks.
[0035]
According to the present invention, an antenna distribution circuit 1H is connected to the reception processing circuit 1A. The antenna distribution circuit 1H is composed of, for example, a multi-channel high-frequency receiver, and receives the motion signals and state signals from the plurality of body information detection transmitters 1T1 to 1Tn via the receiving antenna RA. The reception processing circuit 1A converts the received signals into motion data and state data that can be processed by the main body system 1M, takes them into the system, and stores them in a predetermined area of the RAM 12.
[0036]
Through the performance interface processing function of the main body CPU 10, the motion data and state data representing the motion and state of each performance participant are analyzed, and performance parameters are determined based on the analysis results. The effect circuit 19, composed of a DSP or the like, realizes the function of the sound source unit SB together with the sound source circuit 18 and the main body CPU 10, and controls the performance data to be played based on the set performance parameters, generating performance data shaped by the physical information of the performance participants. The sound system 3 connected to the effect circuit 19 then emits the performance musical sound through the speaker system 4, according to the musical sound signal based on the effect-processed performance data.
[0037]
The external storage device 13 is composed of storage devices such as a hard disk drive (HDD), a compact disc read-only memory (CD-ROM) drive, a floppy disk drive (FDD), a magneto-optical (MO) disk drive, and a digital versatile disc (DVD) drive, and can store various control programs and various data. Accordingly, the performance interface processing program and the various data necessary for performance parameter determination, performance data modification, and playback control can be read from the external storage device 13 into the RAM 12 rather than only from the ROM 11, and processing results can also be recorded in the external storage device 13 as required. Furthermore, on media in the external storage device 13 such as CD-ROM, FD, MO, and DVD, various performance music data in MIDI format can be stored as MIDI files, and this performance music data can be loaded into the main body system.
[0038]
Further, such processing programs and performance music data can be taken into the main system 1M from the host computer 2 connected via the communication I/F 1C and a communication network, or sent out to the host computer 2. For example, software such as sound source software and performance music data can be distributed through the communication network. Furthermore, the main system communicates with another MIDI device 1J connected to the MIDI I/F 1D, receiving performance data and the like for its own use, or conversely sending out performance data processed by the performance interface function of the present invention. As a result, the sound source unit (FIG. 1: SB; FIG. 3: 18/19) of the main system 1M can be omitted and its function given to the other MIDI device 1J.
[0039]
[Configuration example of motion sensor]
FIGS. 4 and 5 show examples of body information detection mechanisms that can be used in the performance interface of the present invention. FIG. 4 (1) shows an example of a baton-shaped hand-held body information detection transmitter. This body information detection transmitter incorporates the devices shown in FIG. 2 except the operation unit and the display unit (it does not include the state sensor SSa). As the built-in motion sensor MSa, a three-dimensional sensor such as a three-dimensional acceleration sensor or a three-dimensional velocity sensor is used, for example, so that a performance participant holding the baton-shaped transmitter in the hand can output a motion signal corresponding to the direction and magnitude of the motion.
[0040]
As shown in the figure, the external structure of this transmitter consists of a base portion (leftward in the drawing) and an end portion (rightward in the drawing); both ends have a large diameter and the center a small diameter, so that it is easy to grip by hand and functions as a grip. An LED display TD of the display unit T3 and a power switch TS of the battery power source T8 are provided on the outer surface of the base end (left end in the figure), an operation switch T6 on the outer surface of the center, and a plurality of LED light emitters TL of the display unit T3 near the tip of the end portion.
[0041]
In the baton-shaped transmitter of FIG. 4 (1), when a performance participant holds the base of the baton and moves it, a motion signal corresponding to the motion direction and motion force is output from the built-in three-dimensional sensor. For example, if a three-dimensional acceleration sensor is built in with its x-direction detection axis aligned with the mounting direction of the operation switch T6, then, while the mounting position of the operation switch T6 is held facing upward, swinging the baton up and down generates a signal output representing the acceleration αx in the x direction corresponding to the swing acceleration (force); swinging the baton left and right (perpendicular to the paper surface) generates a signal output representing the acceleration αy in the y direction corresponding to the swing acceleration (force); and thrusting the baton back and forth (left and right of the paper surface) generates a signal output representing the acceleration αz in the z direction corresponding to the thrust or pulling acceleration.
[0042]
FIG. 4 (2) shows an example of a shoe type in which a motion sensor is embedded in the heel portion of a shoe. The motion sensor MSa is, for example, a strain detector (a one-dimensional sensor in the vertical x-axis direction) or a two- to three-dimensional sensor (adding the left-right y-axis direction and the toe z-axis direction), and is embedded in the shoe heel. In this example, the devices other than the sensor unit in the body information detection transmitter 1Ta of FIG. 2 are incorporated into a signal processing and transmitter device (not shown) attached to, for example, a waist belt, and the motion detection output of the motion sensor MSa is input to that signal processing and transmitter device via a wire (not shown). Such a shoe-type motion sensor can, for example, control the tempo of a performance song according to the period of the detection signal when tap dancing to a performance such as Latin music, raise the percussion instrument volume according to the detection timing, or insert tap sounds (using a specific performance track).
[0043]
On the other hand, if the state information can be obtained while the state sensor SSa is held in the hand, the state sensor SSa can be implemented in a hand-held type similar to the baton shape described above. Usually, however, the state sensor SSa is attached to the body part from which the desired type of state information is detected, and its state detection output is given via a wire to a signal processing and transmitter device attached to another required part (clothes, hat, glasses, collar, waist belt, etc.).
[0044]
FIG. 5 shows another configuration example of the body information detection mechanism. In this example, a body information detection transmitter 1Ta is configured by a ring-type body information sensor IS and a signal processing and transmitter device TTa. As the body information sensor IS, for example, a motion sensor MSa such as a two-dimensional or three-dimensional sensor or a strain detector, or a state sensor SSa such as a pulse (pulse wave) detector can be used, and it can be installed not only on one finger (index finger) but on a plurality of fingers. The devices other than the sensor unit in the body information detection transmitter 1Ta of FIG. 2 are incorporated into the bracelet-shaped signal processing and transmitter device TTa, and the detection output of the body information sensor IS is input to the signal processing and transmitter device TTa via a wire (not shown).
[0045]
As in FIG. 4 (1), the signal processing and transmitter device TTa is provided with an LED display TD, a power switch TS, and an operation switch T6, but does not include LED light emitters TL. When the motion sensor MSa is used as the body information sensor IS, a state sensor SSa may, as necessary, be provided at another part where the body state can be detected; conversely, when the state sensor SSa is used as the body information sensor IS, a motion sensor MSa [for example, one as shown in FIG. 4 (2)] may, as necessary, be provided at another part where motion can be detected.
[0046]
[Format of sensor data]
In one embodiment of the present invention, an ID number of each sensor is attached to the data represented by the detection signal of the motion sensor or state sensor, and is used on the main system 1M side to identify the sensor and perform the corresponding processing. FIG. 6 shows a format example of the sensor data. The most significant 5 bits (bits 0 to 4) are used as the ID number, so that a maximum of 32 numbers can be assigned.
[0047]
The next 3 bits (bits 5 to 7) are switch (SW) bits and can be used to specify up to 8 items such as mode designation, start/stop, song selection, and cueing; they are decoded according to a switch table set for each ID number on the main system 1M side. As for the SW bits, all of them may be designated by the operation switch T6, only some of them may be settable with the rest preset for each sensor, or all of them may be preset. Normally, it is preferable that at least the first SW bit A (bit 5) can be used by the operation switch T6 to designate play mode on (A = "1") / off (A = "0").
[0048]
The next three bytes (8 bits each) are data bytes. When a three-dimensional sensor is used, x-axis data (bits 8 to 15), y-axis data (bits 16 to 23), and z-axis data (bits 24 to 31) are allocated as shown in the figure; in the case of a two-dimensional sensor, the third data byte (bits 24 to 31) can be used as an extended data area, and in the case of a one-dimensional sensor, the second and third data bytes (bits 16 to 31) can be used as the extended data area. For other types of information sensor, data values corresponding to the respective detection modes can be assigned to these data bytes.
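As an illustrative sketch (not part of the patent), the sensor data format above — a 5-bit ID, 3 SW bits, and three 8-bit data bytes in a 32-bit word, with bit 0 as the most significant bit — might be packed and unpacked as follows; all function names are hypothetical:

```python
def pack_sensor_word(sensor_id, sw, x, y, z):
    """Pack one 32-bit sensor word: bits 0-4 (most significant) carry the
    ID number, bits 5-7 the SW bits, then three 8-bit data bytes for the
    x, y, and z axes."""
    assert 0 <= sensor_id < 32 and 0 <= sw < 8
    assert all(0 <= v < 256 for v in (x, y, z))
    return (sensor_id << 27) | (sw << 24) | (x << 16) | (y << 8) | z

def unpack_sensor_word(word):
    """Split a 32-bit sensor word back into its fields."""
    return {
        "id": (word >> 27) & 0x1F,
        "sw": (word >> 24) & 0x07,
        "x": (word >> 16) & 0xFF,
        "y": (word >> 8) & 0xFF,
        "z": word & 0xFF,
    }

def play_mode_on(word):
    """SW bit A (bit 5 of the word) designates play mode on/off."""
    return bool((word >> 26) & 1)
```

For a two- or one-dimensional sensor, the unused trailing data bytes would simply be treated as the extended data area described above.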
[0049]
[Use of motion sensor = Use of multiple analysis outputs]
In one embodiment of the present invention, desired renditions can be added to the music performance by a plurality of analysis outputs obtained by processing the motion sensor output generated when a performance participant moves the operating body. For example, even when a one-dimensional acceleration sensor that detects acceleration (force) in only one direction is used as the motion sensor, a plurality of performance parameters related to the music performance can be controlled with the basic configuration shown in FIG. 7. Here, the one-dimensional acceleration sensor MSa is configured as an operating body in which an acceleration detector (x-axis detector) Sa detecting only a predetermined unidirectional component (for example, the vertical x-axis direction) of the acceleration (force) is built into the baton-shaped body information detection transmitter shown in FIG. 4 (1).
[0050]
In FIG. 7, when a performance participant holds such an operating body in the hand and moves it, the one-dimensional acceleration sensor MSa outputs to the main system 1M a detection signal representing a predetermined one-direction (x-axis direction) component of the motion acceleration (force). In the main system 1M, once it is confirmed that the preset ID number is attached to the detection signal, valid data representing the acceleration α is handed over to the information analysis unit AN via the reception processing unit RP, which has a band-pass filter function that removes noise components through low-frequency and high-frequency removal and passes the effective components, and a DC-cut function that removes the gravity component.
[0051]
The information analysis unit AN analyzes this acceleration data and extracts the peak time Tp representing the occurrence time of a local peak in the time-lapse waveform |α|(t) of the absolute acceleration |α|, the peak value Vp representing the height of the local peak, the peak Q value Qp expressed by the following formula (1) indicating the sharpness of the local peak, the peak interval indicating the time interval between local peaks, the depth of the valley between local peaks, the strength of the high-frequency components of the peak, the polarity of the local peak of the acceleration α(t), and so on:
Qp = Vp / w (1)
Here, w is the time width over which the acceleration waveform α(t) around the local peak stays at or above half of the peak value Vp (the half-height width).
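The peak extraction described above can be sketched in a few lines (a minimal illustration, not the patent's implementation; the sampling interval, threshold, and function name are assumptions):

```python
def local_peaks(wave, dt=0.01, threshold=0.1):
    """Extract local peaks of the absolute-acceleration waveform |alpha|(t):
    peak time Tp, peak value Vp, and peak Q value Qp = Vp / w per formula (1),
    where w is the width of the peak at half its height."""
    peaks = []
    for i in range(1, len(wave) - 1):
        if wave[i] >= threshold and wave[i - 1] < wave[i] >= wave[i + 1]:
            half = wave[i] / 2.0
            lo, hi = i, i
            while lo > 0 and wave[lo] > half:
                lo -= 1
            while hi < len(wave) - 1 and wave[hi] > half:
                hi += 1
            w = (hi - lo) * dt  # half-height width
            peaks.append({"Tp": i * dt, "Vp": wave[i], "Qp": wave[i] / w})
    return peaks
```

A narrow peak of a given height yields a small w and hence a large Qp (sharp, staccato-like motion); a broad peak of the same height yields a small Qp (soft, espressivo-like motion).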
[0052]
The performance parameter determination unit PS determines various performance parameters such as the beat timing BT, dynamics (velocity or volume) DY, articulation AR, pitch, and timbre according to the detected outputs Tp, Vp, Qp, ..., and the performance data control unit of the musical tone reproducing unit 1S controls the performance data based on the determined performance parameters and outputs the musical performance via the sound system 3. For example, the beat timing BT is controlled according to the peak occurrence time Tp, the dynamics DY according to the peak value Vp, and the articulation AR according to the peak Q value Qp, while the beat number is discriminated by determining from the polarity of the local peak whether it is a downbeat or an upbeat.
[0053]
FIG. 8 very schematically shows an example of a hand movement trajectory and the corresponding waveform of the acceleration data α when the one-dimensional acceleration sensor MSa is held and a conducting action is performed; the value "α(t)" on the vertical axis represents the absolute value (without polarity) of the acceleration data α, that is, the absolute acceleration "|α|(t)". FIG. 8 (1) shows an example of a motion trajectory (a) and an acceleration waveform (a) during a 2-beat espressivo (= expressively) conducting action. In this trajectory example (a), the conducting action does not stop at the motion points P1 and P2 indicated by black circles but always moves smoothly and softly. On the other hand, FIG. 8 (2) shows an example of a motion trajectory (b) and an acceleration waveform (b) during a 2-beat staccato ("sounds clearly separated") conducting action; trajectory example (b) represents a quick and sharp conducting action that stops momentarily at the motion points P3 and P4 indicated by x marks.
[0054]
Therefore, from such a conducting action, for example, the beat timing BT is determined by the peak occurrence times Tp = t1, t2, t3, ..., t4, t5, t6, ..., the dynamics DY is determined by the peak value Vp, and the articulation parameter AR is determined by the Q value Qp of the local peak. That is, between the espressivo action of FIG. 8 (1) and the staccato action of FIG. 8 (2) there is not much difference in the peak value Vp, but the Q values of the local peaks differ sufficiently, so this Q value Qp is used to control the degree of articulation between staccato and espressivo. How to use this articulation parameter AR is explained in more detail below.
[0055]
MIDI music data has, for each of a large number of musical tones, information indicating the sound generation start timing and the mute start (note-off) timing together with pitch information. The time from the start to the end of sound generation, that is, the length of time during which the tone is sounding, is called the "gate time". Using a coefficient Agt by which the gate time GT0 of the music data is multiplied, making the gate time GT shorter than the stored gate time value — for example, setting Agt = 0.5 so that GT is 0.5 times GT0 — yields a staccato performance. Conversely, setting it longer than the original gate time value of the music data, for example Agt = 1.8, that is, 1.8 times, yields an espressivo performance.
[0056]
Therefore, this gate time coefficient Agt is adopted as the articulation parameter AR and is changed in accordance with the Q value Qp of the local peak. For example, by linearly converting the Q value Qp of the local peak as shown in the following equation (2) and adjusting the gate time GT using the coefficient Agt that changes with Qp, the articulation AR can be controlled:
Agt = k1 × Qp + k2 (2)
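A minimal sketch of equation (2) in code; the constants k1 and k2 and the clamp range are invented for illustration, since the patent gives no concrete values:

```python
def articulation_gate_time(gt0, qp, k1=-0.026, k2=1.93, lo=0.5, hi=1.8):
    """Equation (2): Agt = k1 * Qp + k2, then GT = Agt * GT0.
    With k1 negative, a sharp peak (high Qp) shortens the note toward
    staccato (Agt near 0.5) and a soft peak lengthens it toward
    espressivo (Agt near 1.8).  Constants are illustrative."""
    agt = max(lo, min(hi, k1 * qp + k2))
    return gt0 * agt
```

The clamp keeps Agt inside the staccato-to-espressivo range discussed in the text even for extreme Qp values.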
[0057]
For controlling performance parameters, in addition to the peak Q value Qp, the depth of the valleys in the time waveform of the absolute acceleration |α| shown in the waveform examples (a) and (b) of FIG. 8 may be used instead or in combination. In the illustrated trajectory example (b), the pause time is longer than in trajectory example (a), so the valleys of the waveform are deeper and their values closer to "0". Further, since the motion in trajectory example (b) is sharper than in trajectory example (a), its high-frequency components are stronger.
[0058]
Further, for example, the timbre can be controlled by the peak Q value Qp. In general, in a synthesizer, the envelope shape of a sound waveform is determined by the attack (rise) portion A, decay portion D, sustain S, and release R; when the rise speed (slope) of the attack portion A is low the tone tends to be soft, and conversely when it is high the tone tends to be sharp. Therefore, when the performance participant swings the operating body equipped with the one-dimensional acceleration sensor MSa by hand, the rise speed of the attack portion A is controlled by the Q value of the local peak in the time waveform of the hand motion acceleration (αx), making equivalent timbre control possible.
[0059]
In this example, the timbre is controlled equivalently by controlling part of the envelope shape ADSR of the sound waveform; however, the timbre itself (the so-called "voice", for example from a double bass timbre to a violin timbre) may be switched, or this method may be used in combination with the ADSR control method. Further, as the source information, not only the Q value of the local peak but also the strength of the high-frequency components of the waveform or other information may be used.
[0060]
It is also possible to control an effect parameter such as reverb according to the detection output. For example, the reverb time is controlled using the Q value of the local peak. A high Q value corresponds to the performance participant swinging the operating body sharply; for such a quick, sharp action the reverb is shortened to make the sound crisp, and conversely, when the Q value is low, the reverb is lengthened to make the sound gentle and relaxed. Of course, this relationship may be intentionally reversed, another effect parameter (for example, the filter cutoff frequency of the sound source unit SB) may be controlled, or a plurality of effect parameters may be controlled. In this case too, the source information is not limited to the Q value of the local peak; the strength of the high-frequency components of the waveform and other information can be used.
[0061]
Further, using the peak interval, it is possible to provide a percussion instrument sounding mode in which a percussion instrument sound is generated when a local peak occurs. In this percussion instrument sounding mode, when the extracted peak interval is long, a percussion instrument with a low pitch, such as a bass drum, is sounded, and when the peak interval is short due to a quick swinging action, a percussion instrument with a high pitch, such as a triangle, is sounded. Of course, the relationship may be reversed, and instead of switching the timbre (voice), only the pitch may be changed continuously or stepwise without changing the timbre. Furthermore, the timbre (voice) may be changed not merely between two types but among various types, or changed gradually while performing a volume crossfade. In addition, the use of the peak interval is not limited to percussion instruments; for example, the timbres and pitches of other instruments may be changed, such as shifting a stringed instrument from double bass to violin while changing the pitch.
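The peak-interval-to-voice mapping might be sketched as below; the interval thresholds and the intermediate "snare" voice are invented for illustration:

```python
def percussion_for_interval(interval_s, short=0.3, long_=0.8):
    """Map the extracted peak interval (seconds) to a percussion voice:
    quick swings (short interval) play a high-pitched instrument such as
    a triangle, slow swings a low-pitched one such as a bass drum.
    Thresholds and the intermediate voice are illustrative assumptions."""
    if interval_s <= short:
        return "triangle"
    if interval_s >= long_:
        return "bass_drum"
    return "snare"
```

As the text notes, the same interval could instead drive a continuous pitch change within one voice, or a volume crossfade between voices.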
[0062]
[Use of multiple motion sensor outputs]
In one embodiment of the present invention, desired renditions can be added to the music performance by processing a plurality of motion sensor outputs generated when a performance participant moves the operating body. As such a motion sensor, it is preferable to use, for example, a baton-shaped structure as shown in FIG. 4 (1) with a built-in two-dimensional sensor having x-axis and y-axis detectors, or a three-dimensional sensor further provided with a z-axis detector. When the operating body equipped with this motion sensor is gripped and moved in the x-axis to y-axis directions, and further in the z-axis direction, the motion detection output of each axis detector is analyzed to determine the type of motion, and a plurality of performance parameters such as tempo and volume are controlled according to the determination result. The performance participant can thus behave like a conductor with respect to the music performance (conductor mode).
[0063]
The conductor mode can be set either to a mode in which the performance parameters are always controlled by the sensor output (pro mode), or to a mode in which the performance parameters are controlled by the sensor output while it is present and the original MIDI data is reproduced as-is when there is no sensor output (semi-auto mode).
[0064]
Here, when a two-dimensional sensor is used as the motion sensor for conducting, various analyses can be performed and various performance parameters controlled accordingly, as when a one-dimensional sensor is used; compared with the one-dimensional sensor, however, an analysis output that more accurately reflects the swinging motion can be obtained. For example, when a conducting gesture is performed with the operating body (conductor's baton) equipped with the above-described two-dimensional acceleration sensor, in the same manner as with the one-dimensional sensor of FIGS. 7 and 8, the x-axis and y-axis detectors of the two-dimensional sensor output to the main system 1M signals representing the acceleration αx in the x-axis (vertical) direction and the acceleration αy in the y-axis (left-right/horizontal) direction. In the main system 1M, the acceleration data of each axis is handed to the information analysis unit via the reception processing unit and analyzed, and the absolute acceleration represented by the following expression (3), that is, the absolute value |α| of the acceleration, is obtained:
|α| = √(αx² + αy²) (3)
[0065]
FIG. 9 shows examples of the trajectory and acceleration waveform of a hand motion obtained when a two-dimensional acceleration sensor using two acceleration detectors (electrostatic acceleration sensors: TOPRE "TPR70G-100") for the x-axis and y-axis is attached to the baton and the performance operator holds the baton in the right hand and conducts. The conducting trajectory is represented as a two-dimensional motion trajectory; as shown in FIG. 9 (1), for example, trajectories of (a) a 2-beat espressivo conducting action, (b) a 2-beat staccato conducting action, (c) a 3-beat espressivo conducting action, and (d) a 3-beat staccato conducting action are obtained. In the figure, "(1)" to "(3)" represent the swing motion (beating motion) divisions; (a) and (b) are "two swings" (two beats), and (c) and (d) are "three swings" (three beats). FIG. 9 (2) shows the detection outputs obtained from the respective axis detectors of the sensor corresponding to the trajectory examples (a) to (d) from the conducting action of the performance operator.
[0066]
Here, as with the one-dimensional sensor described above, the detection output of each axis detector is passed in the reception processing unit of the main system 1M through a band-pass filter that removes frequency components unnecessary for conducting-action recognition. Even if the sensor is fixed on a desk or the like, the acceleration sensor outputs αx, αy, and |α| do not become zero because of the earth's gravity, but these components are also unnecessary for conducting recognition and are eliminated by the DC-cut filter. The direction of the conducting action appears as the sign and strength of the detection output of the two-dimensional acceleration sensor, and the timing of the swing motion (beating motion) appears as a local peak of the acceleration absolute value |α|, which is used for beat timing. Therefore, the two-dimensional acceleration data αx and αy, which take positive and negative values, are used for discriminating the beat number, while only the acceleration absolute value |α| is used for beat timing detection.
[0067]
The accelerations αx and αy during the beating motion actually vary greatly in polarity and strength depending on the direction of the beating motion, and form complex waveforms containing many pseudo peaks, so it is difficult to obtain the beat timing stably from them as they are. Therefore, in addition to the filtering described above, the absolute acceleration value is passed through a 12th-order moving-average filter that removes unnecessary high-frequency components. Waveforms (a) to (d) of FIG. 9 (2) show examples of the acceleration waveforms after passing through the band-pass filter constituted by these two filters; they are the signals obtained when conducting carefully according to the trajectory examples (a) to (d). Each waveform shown on the right side of FIG. 9 (2) represents a vector locus for one cycle of the two-dimensional acceleration signals αx and αy, and each waveform shown on the left side represents 3 seconds of the time-domain waveform |α|(t) of the acceleration absolute value |α|, in which each local peak corresponds to a beating motion.
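The two smoothing stages above can be sketched as follows (a minimal illustration under assumed function names; subtracting the mean stands in for the DC-cut filter, and the causal window stands in for the 12th-order moving average):

```python
def dc_cut(samples):
    """Crude DC-cut: subtract the mean to remove the constant gravity
    component from an acceleration record."""
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def moving_average(samples, order=12):
    """12th-order moving-average filter that smooths |alpha|(t) and
    suppresses pseudo peaks before local-peak extraction."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - order + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```

A single-sample spike in |α|(t) is spread over the filter window and sharply attenuated, while genuine beat peaks, which span many samples, survive.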
[0068]
When extracting local peaks for beat detection, it is necessary to avoid false detections such as picking up pseudo peaks or missing beat peaks, and appropriate methods should be applied. Further, the accelerations αx and αy take positive and negative values, as seen on the right side of FIG. 9 (2), but in a human conducting action the hand is always moving delicately and never stops completely. For this reason, there is no moment when both accelerations αx and αy become zero simultaneously and rest at the origin, and therefore the time-domain waveform |α|(t) also does not become zero during conducting, as shown on the left side of FIG. 9 (2).
[0069]
[Three-dimensional sensor use mode = 3-axis processing]
When a three-dimensional sensor is used as the motion sensor MSa by further increasing the number of detection axes, various motions of the sensor can be analyzed and various performance controls corresponding to the motion can be performed. FIG. 10 is a functional block diagram for the case of performing a musical performance using a three-dimensional sensor. In the three-dimensional sensor use mode of FIG. 10, the baton-type detection transmitter 1Ta described in FIG. 4 (1) incorporates the three-dimensional sensor MSa, and the performance participant holding the baton-type detection transmitter 1Ta in one hand or both hands can output a motion signal according to the motion direction and motion force.
[0070]
When a three-dimensional acceleration sensor is used as the three-dimensional sensor, the x-axis, y-axis, and z-axis detectors SX, SY, and SZ of the three-dimensional sensor MSa in the baton-type body information detection transmitter 1Ta output the x (up-down) direction acceleration αx, the y (left-right) direction acceleration αy, and the z (front-rear) direction acceleration αz, respectively, to the main system 1M. In the main system 1M, when it is confirmed that the preset ID number is attached to these signals, the acceleration data of each axis is handed to the information analysis unit AN via the reception processing unit RP. The information analysis unit AN analyzes each axis's acceleration data and first obtains the absolute value |α| of the acceleration expressed by the following equation (4):
|α| = √(αx² + αy² + αz²) (4)
[0071]
Next, the accelerations αx and αy are compared with the acceleration αz. For example,
αx < αz and αy < αz (5)
When the above relationship holds, that is, when the z-direction acceleration αz is larger than the x- and y-direction accelerations αx and αy, it is determined that the motion is a "pushing motion" of thrusting the baton forward.
[0072]
Conversely, when the z-direction acceleration αz is smaller than the x- and y-direction accelerations αx and αy, it is determined that the motion is a "cutting motion" of cutting the air with the baton. In this case, by further comparing the values of the accelerations αx and αy in the x and y directions, it can be determined whether the direction of the "cutting motion" is "vertical" (x) or "horizontal" (y).
[0073]
Further, not only by comparing the x-, y-, and z-direction components of each axis, but also by comparing the magnitude of each direction component αx, αy, αz itself with a predetermined threshold value, a "combined motion" can be determined. For example, if αz > αx, αy and αx > "x-component threshold", it is determined to be a "pushing motion while cutting in the vertical (x) direction"; if αz < αx, αy and αx > "x-component threshold" and αy > "y-component threshold", it is determined to be an "oblique (both x and y direction) cutting motion". Further, by detecting a phenomenon in which the values of the x- and y-direction accelerations αx and αy change relative to each other so as to draw a circular locus, a "turning motion" of swinging the baton around can be determined.
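The axis-comparison rules above might be sketched as follows (a minimal illustration, not the patent's implementation; the threshold values and returned labels are invented, and the circular-locus "turning motion" is omitted):

```python
import math

def absolute_acceleration(ax, ay, az):
    """Equation (4): |alpha| = sqrt(ax^2 + ay^2 + az^2)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_motion(ax, ay, az, x_th=0.5, y_th=0.5):
    """Classify one 3-D acceleration sample: relation (5) (z dominant)
    -> pushing motion; otherwise a cutting motion whose direction
    follows the larger of |ax| and |ay|; both above their illustrative
    thresholds -> oblique cut."""
    ax, ay, az = abs(ax), abs(ay), abs(az)
    if ax < az and ay < az:          # relation (5)
        return "push"
    if ax > x_th and ay > y_th:
        return "oblique_cut"
    return "vertical_cut" if ax >= ay else "horizontal_cut"
```

Each label would then select the corresponding performance control described in the following paragraphs (staccato insertion for a push, slur for a horizontal cut, and so on).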
[0074]
The performance parameter determination unit PS determines various performance parameters according to these determination outputs, and the performance data control unit of the musical tone reproducing unit 1S controls the performance data based on the determined performance parameters, so that the musical performance is output via the sound system 3. For example, the volume of the performance data is controlled according to the acceleration absolute value |α| or the magnitude of the largest of the direction components αx, αy, αz. Further, other performance parameters are controlled based on the determination results of the analysis unit AN.
[0075]
For example, the tempo is controlled in accordance with the period of the "vertical (x direction) cutting motion". Apart from this, if this cutting motion is short and quick, articulation is given, and if it is long and slow, the pitch is lowered. In addition, a slur effect is given when a "horizontal (y direction) cutting motion" is determined. When a "pushing motion" is determined, the musical sound generation timing is shortened to give a staccato effect, or a single sound (percussion instrument sound, shout, etc.) corresponding to its magnitude is inserted into the musical performance. When a "combined motion" is determined, the above controls are used together. When a "turning motion" is determined, control is performed so that, if the period is long, the reverberation effect is enhanced according to the magnitude, and if the period is short, a trill is generated according to the period.
[0076]
Of course, the same controls as described for the one-dimensional and two-dimensional sensors can also be performed. That is, in the three-dimensional acceleration sensor, letting the absolute acceleration projected on the xy plane represented by expression (3) be the "xy absolute acceleration |αxy|", the occurrence time of the local peak of the time-lapse waveform |αxy|(t), the local peak value, the peak Q value indicating the sharpness of the local peak, the peak interval indicating the time interval between local peaks, the depth of the valley between local peaks, the strength of the high-frequency components of the peak, the polarity of the local peak of the acceleration, and so on are extracted; the beat timing of the musical piece is controlled according to the peak occurrence time, the dynamics according to the local peak value, the articulation AR according to the peak Q value Qp, and so forth. When expression (5) holds and a "pushing motion" is determined, in parallel with these controls, a single sound (percussion instrument sound, shout, etc.) is inserted into the musical performance, a timbre change or reverb effect is given according to the magnitude of the z-direction acceleration αz, or other performance control not performed by the xy absolute acceleration |αxy| is carried out.
[0077]
In addition, when a one- to three-dimensional sensor is installed in a sword and a sword dance with musical accompaniment is performed in a play or the like, it can be used to control sound effects according to the detection output of each axis produced by the sword's movement, such as a cutting sound (x-axis or y-axis), a wind noise (y-axis or x-axis), and further a striking sound (z-axis).
[0078]
[Other use examples of motion sensors]
With a one- to three-dimensional acceleration sensor, if the output of each axis is integrated, or if a velocity sensor is used instead of the acceleration sensor, performance parameters can similarly be controlled according to the motion (movement) speed of the sensor. Also, by further integrating the integrated output of each axis of the acceleration sensor, or by integrating the output of each axis of the velocity sensor, the motion (movement) position of the sensor can be estimated, and another performance parameter can be controlled according to this estimated position (for example, the pitch can be controlled according to the height in the x direction). In addition, two motion sensors such as one- to three-dimensional sensors may be prepared as baton-shaped operating bodies as shown in FIG. 4 (1), and one operator may move each operating body with the left and right hands, thereby adding separate left and right controls to the music performance. For example, the musical performance tracks (parts) are divided into two groups, and each group of performance tracks (parts) is individually controlled based on the analysis of the left and right motions.
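The integration chain described above (acceleration → velocity → position) can be sketched with simple trapezoidal integration; the sampling interval and function name are assumptions:

```python
def integrate(samples, dt=0.01, initial=0.0):
    """Trapezoidal integration.  Applied once, it turns axis acceleration
    into an estimated velocity; applied again (or applied to a velocity
    sensor's output), it yields an estimated motion position, e.g. the
    x-direction height used to control pitch."""
    out = [initial]
    for i in range(1, len(samples)):
        out.append(out[-1] + 0.5 * (samples[i - 1] + samples[i]) * dt)
    return out
```

In practice the accumulated drift of a doubly integrated acceleration signal would need periodic re-zeroing, which is why the text speaks of an estimated position.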
[0079]
[Use of state sensor = biomode]
According to one embodiment of the present invention, the state of a performance participant can be reflected in the performed musical sound by detecting biological information, making the music performance more enjoyable. For example, when a plurality of participants perform aerobic exercise or the like while listening to a music performance, a pulse (pulse wave) detector or a heart rate detector is attached to each participant as the information sensor IS to detect the heart rate, and when a predetermined heart rate is exceeded, the tempo of the music being played is naturally lowered in consideration of the participants' health. In this way, a music performance is realized that takes into account both motion, such as aerobics, and physical states, such as heart rate. In this case, the performance tempo is controlled according to the average value of measurement data such as the heart rate, and in calculating this average it is preferable to assign larger weights to higher heart rates. In addition, the volume may be decreased as the tempo decreases.
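The weighting scheme suggested here, in which higher heart rates contribute more strongly to the average that governs the tempo, might look like the following sketch; the particular weighting function (each rate weighted by itself) and the tempo-reduction rule are illustrative assumptions, not the patent's formula:

```python
def weighted_mean_hr(heart_rates):
    # Weight each participant's heart rate by the rate itself, so that
    # higher rates pull the average up more than a plain mean would.
    total_weight = sum(heart_rates)
    return sum(hr * hr for hr in heart_rates) / total_weight

def adjust_tempo(base_tempo_bpm, mean_hr, limit_hr=140.0):
    # Lower the performance tempo once the weighted mean exceeds the
    # predetermined heart rate; the volume may be reduced in step.
    if mean_hr <= limit_hr:
        return base_tempo_bpm
    return base_tempo_bpm * limit_hr / mean_hr
```

With participants at 100 and 160 beats per minute, the weighted mean comes out above the plain mean of 130, so the tempo backs off sooner for a group containing one highly exerted participant.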
[0080]
In the above example, when the rate of increase in heart rate is within a predetermined range specified in advance, a sound is emitted from the speaker 4 and the LED light emitter of the display unit is lit so that the participant can recognize that the state is normal; if the rate deviates from this range, a performance-stop function according to physical condition can be added, such that the generation of musical sound and the light emission are stopped. Other equivalent biological information can be used instead of the heart rate; for example, the same effect can be obtained with the respiration rate. As a respiration rate sensor, a pressure sensor may be attached to the chest or abdomen, or a temperature sensor that detects the movement of air may be attached near a nostril.
[0081]
As another example of responding to biological information, the state of excitement (increase in pulse and respiration rate, decrease in skin resistance, increase in blood pressure and body temperature, etc.) can be analyzed from the biological information, and musical tone control responsive to the degree of excitement can be performed in which, in contrast to the above example, the performance parameters are controlled in the opposite direction within the bounds of health, such as increasing the performance tempo and volume. This method is suitable for BGM performance in various games in which a plurality of people participate, and for music performances that a plurality of participants enjoy while dancing in a hall; the degree of excitement is calculated, for example, from the average value over the plurality of participants.
[0082]
[Operation / state combination mode]
According to one embodiment of the present invention, by using a motion sensor and a state sensor together to detect and process the motion and biological information of a performance participant, a musical performance can be produced in which a plurality of aspects of the participant's state are reflected in the performed musical sound. FIG. 11 shows a functional block diagram of the case where a motion sensor and a state sensor are combined to produce a musical performance. In this example, the motion sensor MSa is a two-dimensional sensor comprising the x-axis and y-axis detectors SX and SY already described, but a one-dimensional or three-dimensional sensor may be used as necessary. This motion sensor MSa is built into a baton-shaped structure as shown in FIG. 4(1), and a musical performance command operation is performed, for example, by swinging it with the right hand of the performance operator. The state sensor SSa includes a viewpoint (line-of-sight) tracking unit (Eye Tracking System) SE and a breath detection unit (Breath Sensor) SB, which are attached to predetermined body parts of the performance participant and perform viewpoint tracking and respiration detection, respectively.
[0083]
The detection signals from the x-axis and y-axis detection units SX and SY of the two-dimensional motion sensor MSa, and from the viewpoint tracking unit SE and respiration detection unit SB of the state sensor SSa, are each assigned an individual ID number and output to the main body system 1M through their respective signal processing and transmission units. When the main body system 1M confirms that a preset ID number is attached, the reception processing unit RP processes the detection signals from the two-dimensional motion sensor MSa, the viewpoint tracking unit SE, and the respiration detection unit SB, and supplies the corresponding two-dimensional motion data Dm, viewpoint position data De, and respiration data Db to the analysis blocks AM, AE, and AB of the information analysis unit AN according to the ID number. The motion analysis block AM analyzes the motion data Dm to recognize the magnitude of the data values, the beat timing, the beat count, and the articulation; the viewpoint analysis block AE analyzes the viewpoint position data De to detect the gaze area; and the respiration analysis block AB analyzes the respiration data Db to detect the exhalation and inhalation states.
[0084]
In the performance parameter determination unit PS at the next stage, the first data processing block PA estimates the beat position on the musical score for the performance data selected, in accordance with the SW bits (bits 5 to 7: FIG. 6), from a MIDI file on the performance data medium (external storage device 13), predicts the beat generation time from the currently set performance tempo, and integrates the estimated beat position and predicted beat generation time with the data value magnitude, beat timing, beat count, and articulation supplied by the motion analysis block AM. In the second data processing block PB, the volume, performance tempo, and tone generation timing are determined based on the result of this integration processing, a specific performance part is determined according to the gaze area detected by the viewpoint analysis block AE, and control by breath is determined based on the exhalation and inhalation states detected by the respiration analysis block AB. The musical tone reproduction unit 1S controls the performance data based on the determined performance parameters, and a desired musical tone performance is produced via the sound system 3.
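The beat-time prediction mentioned above, extrapolating when the next beats should fall from the currently set performance tempo, reduces to simple arithmetic; the function below is an illustrative sketch, not the patent's implementation:

```python
def predict_beat_times(last_beat_time, tempo_bpm, n_beats=4):
    # At tempo_bpm beats per minute, successive beats are 60/tempo_bpm
    # seconds apart; extrapolate the next n_beats from the last known beat.
    interval = 60.0 / tempo_bpm
    return [last_beat_time + i * interval for i in range(1, n_beats + 1)]
```

At 120 BPM the beat interval is 0.5 s, so a beat observed at t = 10 s predicts beats at 10.5 s, 11.0 s, and so on; comparing these predictions against the peak times from the motion analysis is what lets the system follow the operator's conducting.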
[0085]
[Operation mode by multiple performance operators]
According to one embodiment of the present invention, a music performance can be controlled by a plurality of performance operators operating a plurality of body information detection transmitters. Here, each performance operator can operate one or more body information detection transmitters, and each body information detection transmitter can adopt any of the motion sensor or state sensor configurations described so far with reference to the figures above (including the bio mode and the operation/state combination mode).
[0086]
[Ensemble mode]
For example, the plurality of body information detection transmitters can be composed of one master unit and a plurality of slave units, and an ensemble mode can be set in which certain performance parameters are controlled according to the body information detection signal of the master unit while other performance parameters are controlled by the body information detection signals of the slave units. FIG. 12 shows a functional block diagram of an ensemble mode according to one embodiment of the present invention. With one master unit 1T1 and a plurality of slave units 1T2 to 1Tn (for example, n = 24), the tempo and volume among the performance parameters are controlled, for example, according to the body information detection signal of the master unit 1T1, and the timbre is controlled by the body information detection signals of the slave units 1T2 to 1Tn. In this case, each body information detection transmitter 1Ta (a = 1 to n) is preferably configured in the baton shape of FIG. 4(1) so as to detect motion and transmit a motion signal Ma (a = 1 to n).
[0087]
In FIG. 12, the operation signals M1 to Mn (n = 24) from the body information detection transmitters 1T1 to 1Tn undergo selective reception processing by the information reception and reception processing unit RP of the main body system 1M in the tone controller 1R. That is, the selector SL of the reception processing unit RP identifies these operation signals M1 to Mn according to the ID numbers attached to them, using a preset "ID number assignment" ("group setting for each ID number"), thereby sorting the operation signal M1 from the master unit 1T1 and the operation signals M2 to Mn from the slave units 1T2 to 1Tn into master data MD corresponding to the master operation signal M1 and slave unit data corresponding to the slave operation signals M2 to Mn; the slave unit data are further classified into first to m-th groups SD1 to SDm.
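The selector SL's sorting, with master data taken from the master ID and the remaining signals grouped by a preset ID-to-group table, can be sketched as follows; the table contents and data representation are illustrative assumptions:

```python
def sort_by_id(signals, group_of_id, master_id=0):
    # signals: {id_number: operation signal}
    # group_of_id: the preset "group setting for each ID number"
    master_data = signals.get(master_id)
    groups = {}
    for id_no, sig in signals.items():
        if id_no == master_id:
            continue  # master unit handled separately as MD
        g = group_of_id[id_no]
        groups.setdefault(g, []).append(sig)
    return master_data, groups

# Illustrative assignment: slave IDs 1-2 form group 1, ID 3 forms group 2.
master, groups = sort_by_id({0: "M1", 1: "M2", 2: "M3", 3: "M4"},
                            {1: 1, 2: 1, 3: 2})
```

As the text notes, the table may put several slave units in one group, a single slave unit in a group of its own, or everything in one group.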
[0088]
For example, by operating the operation switch T6 on the master unit 1T1 with ID number "0", the first SW bit A in FIG. 6 is set to play mode ON (A = "1"), the second SW bit B designating the "group/individual mode" is set to group mode (B = "1") or individual mode (B = "0"), and the third SW bit C designating the "whole/partial lead mode" is set to whole lead mode (C = "1") or partial lead mode (C = "0"). On the other hand, in the slave units 1T2 to 1T24 (= n) with ID numbers "1" to "23 (= n − 1)", it is assumed that the first SW bit A is set to play mode ON (A = "1") and that the second and third SW bits B and C take arbitrary values (B = "X", C = "X", where "X" represents an arbitrary value).
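Decoding the SW bits from a status byte is straightforward; FIG. 6 places the SW bits at bits 5 to 7, but which of those positions carries A, B, or C is not reproduced here, so the assignment below is an illustrative assumption:

```python
# Bits 5-7 hold the SW bits per FIG. 6; the mapping of A, B, C to
# these positions is assumed for illustration.
BIT_A, BIT_B, BIT_C = 5, 6, 7

def decode_sw_bits(status_byte):
    # A: play mode ON/OFF, B: group ("1") / individual ("0") mode,
    # C: whole ("1") / partial ("0") lead mode
    return {
        "play_on":    bool(status_byte >> BIT_A & 1),
        "group_mode": bool(status_byte >> BIT_B & 1),
        "whole_lead": bool(status_byte >> BIT_C & 1),
    }

# Master set to A = "1", B = "1", C = "0": play on, group mode, partial lead
flags = decode_sw_bits(0b01100000)
```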
[0089]
Meanwhile, the selector SL refers to the "ID number assignment" and, from the ID number "0" attached to the operation signal M1, determines that this signal is from the master unit 1T1 and outputs the corresponding master data MD; from the ID numbers "1" to "23", it determines that the operation signals M2 to Mn are from the slave units 1T2 to 1Tn and selects the corresponding slave unit data. At this time, the slave unit data are further classified and output for each of the first to m-th groups SD1 to SDm in accordance with the "group setting for each ID number". Note that the group classification by "group setting for each ID number" changes according to the settings in the main body system 1M: one group may include the data of a plurality of slave units, one group may contain the data of only a single slave unit, or there may be only one group in total.
[0090]
The master data MD and the slave unit data of the first to m-th groups SD1 to SDm are handed to the information analysis unit AN. In the master data analysis block MA, the master data MD is analyzed, the contents of the second and third SW bits B and C are examined, and the magnitude and period of the data values are determined. For example, it is checked whether "group mode" or "individual mode" is designated by the second SW bit B of the master data MD, and whether "whole lead mode" or "partial lead mode" is designated by the third SW bit C. Further, the operation content indicated by the data, together with the magnitude and period of each content, is determined from the contents of the data bytes of the master data MD.
[0091]
In the slave unit data analysis block SA, the slave unit data included in the first to m-th groups SD1 to SDm are analyzed, and the magnitude, period, etc. of the data values are determined according to the mode designated by the second SW bit B of the master data MD. For example, if "group mode" is set, the average values of the magnitudes and periods of the slave unit data belonging to each of the groups SD1 to SDm are calculated; if "individual mode" is set, the magnitude and period of each individual slave unit's data are calculated.
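The contrast drawn here, per-group averages in group mode versus per-unit values in individual mode, can be written out directly; treating each group as a list of per-unit data magnitudes is an illustrative simplification:

```python
def analyse_slave_data(groups, group_mode):
    # groups: {group_no: [data value magnitudes of that group's slave units]}
    if group_mode:
        # Group mode (B = "1"): one average magnitude per group SD1..SDm
        return {g: sum(vals) / len(vals) for g, vals in groups.items()}
    # Individual mode (B = "0"): keep every slave unit's value separate
    return {g: list(vals) for g, vals in groups.items()}
```

The same split would apply to the period determination, averaging periods per group or keeping them per unit.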
[0092]
The performance parameter determination unit PS at the next stage is composed of a main setting block MP and a sub setting block AP corresponding to the master data analysis block MA and the slave unit data analysis block SA, and determines the performance parameters of each performance track for the performance data selected from a MIDI file recorded on the medium (external storage device 13). First, in the main setting block MP, the performance parameters of predetermined performance tracks are determined based on the determination results of the master data analysis block MA. For example, when "whole lead mode" is designated by the third SW bit C, the volume parameter value is determined according to the data value magnitude determination result for all performance tracks (tr), and the tempo parameter value is set according to the period determination result. When "partial lead mode" is designated, the volume parameter value and tempo parameter value are similarly determined according to the respective determination results for first performance tracks (one or more performance tracks, for example a melody sound track) set in advance in correspondence with this mode.
[0093]
In the sub setting block AP, on the other hand, a preset timbre is set for the performance tracks corresponding to the mode designated by the third SW bit C, and performance parameters are determined based on the determination results of the slave unit data analysis block SA. For example, when "whole lead mode" is designated, predetermined timbre parameters are set for predetermined performance tracks (for example, all accompaniment sound tracks or sound-effect tracks) set in advance in correspondence with this mode, and the performance parameters of these tracks are further changed according to the determination results of the slave unit data, in addition to the setting based on the determination results of the master data (that is, the volume parameter value is further changed according to the magnitude of the slave unit data values, and the tempo parameter value is further changed according to the period of the slave unit data). In this case, the volume parameter value is preferably calculated by multiplication with a variable based on the master data determination result, and the tempo parameter by an arithmetic mean with the variable based on the master data analysis output. When "partial lead mode" is designated, the volume parameter value and tempo parameter value are determined independently according to the respective determination results for second performance tracks (for example, performance tracks other than the first performance tracks) set in advance in correspondence with this mode.
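The two combination rules preferred here, volume by multiplication with the master-based variable and tempo by an arithmetic mean with it, can be written directly; the scaling of the slave-side inputs is an illustrative assumption:

```python
def combine_volume(master_vol_var, slave_magnitude_factor):
    # Volume: multiply the master-derived variable by the factor obtained
    # from the magnitude of the slave unit data values.
    return master_vol_var * slave_magnitude_factor

def combine_tempo(master_tempo_bpm, slave_tempo_bpm):
    # Tempo: arithmetic mean of the master-derived and slave-derived values,
    # so neither side alone dictates the tempo.
    return 0.5 * (master_tempo_bpm + slave_tempo_bpm)
```

Multiplication lets the slave units scale the master's dynamics without overriding them, while averaging keeps the tempo anchored between the two analysis outputs.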
[0094]
Based on the performance parameters determined as described above, the musical tone reproduction unit 1S applies the performance parameters of each performance track to the performance data selected from the MIDI file, and assigns a preset timbre (sound source) to each performance track. As a result, musical tones with predetermined timbres corresponding to the motions of the performance participants can be generated.
[0095]
For example, in a music class, a teacher can hold the single master unit 1T1 to control the volume and tempo of the main melody of the selected piece, while a plurality of students hold the respective slave units 1T2 to 1Tn to generate accompaniment sounds or percussion sounds whose volume or tempo follows their operations; the performance can thus be enjoyed in various ways. In this case, the percussion sounds are not limited to drums, gongs, and the like set by timbre selection: if sound sources of other kinds, such as natural wind and water sounds, are also prepared and assigned to performance tracks, drum sounds, gong sounds, wind sounds, water sounds, and so on can be generated simultaneously as the music progresses. It is therefore possible to enjoy a variety of performance forms in which anyone can participate.
[0096]
Further, in the master unit 1T1 and the slave units 1T2 to 1Tn, the LED light emitter TL can be made to emit light continuously by operating the operation switch T6, or to blink according to the detection output of the motion sensor MSa. As a result, LED light sweeps or blinks from the baton as the music performance progresses, so that a visual effect can be enjoyed in addition to the musical effect.
[0097]
[Various music performance control by multiple operators]
Of course, the plurality of body information detection transmitters 1T1 to 1Tn may also be configured without a master unit (that is, with no distinction between master and slave units). The simplest example is a case where two operators are each provided with a body information detection transmitter and a music performance is controlled as a duet. Each operator may be provided with one body information detection transmitter, or with a plurality of them. For example, if each operator holds two baton-type motion sensors as shown in FIG. 4(1), the performance tracks (parts) of a piece can be divided into two groups per operator, and further into two by the left- and right-hand operations, so that the performance tracks (parts) can be controlled individually by a total of four motion sensors.
[0098]
Further examples of controlling a music performance with a plurality of operators include network music performances and music games between remote locations. For example, participants in different places, such as music classrooms, can take part in the same music performance at the same time by controlling the performance with the body information detection transmitters provided to them while watching video through a communication network. At various events, a plurality of event participants can likewise be provided with body information detection transmitters, and participation in the control of the music performance can be realized through each body information detection output.
[0099]
As another example, at a concert venue, one or more performers can carry out the main control of the music, governing its tempo and dynamics with body information detection transmitters for main control, while a plurality of audience members hold body information detection transmitters for sub-control and, following cues such as light emitted from an LED, insert sounds such as hand claps; an effect in which the audience participates in the music performance can thus be realized. Furthermore, in a parade at a theme park, a plurality of parade participants can similarly control the performance parameters of the music through main control, while sub-control is used to insert cheers and to produce lighting effects with light-emitting devices.
[0100]
[Effect of the invention]
As described above, in the performance interface according to the present invention, when a performance participant moves the motion detector in various ways, the direction components of the motion detection signal of the motion detector are compared with one another to discriminate the specific motion of the operator, and performance control information corresponding to the specific motion is generated based on each discrimination result; therefore, a music performance can be controlled in a variety of ways according to the specific motions of the operator by comparing the direction components of the motion detection signal. Further, in another performance interface according to the present invention, when the operator moves the motion detector, the direction components of the motion detection signal of the motion detector are compared to discriminate the specific motion of the operator, the operator's physical state is analyzed from the contents of the state detection signal of the state detector [state information (biological information, physiological information)], and performance control information corresponding to these results is generated; therefore, a music performance can be controlled in a variety of ways in consideration of the operator's specific motions and physical state.
[Brief description of the drawings]
FIG. 1 is a block diagram showing a schematic configuration of an entire performance system including a performance interface according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a configuration example of a physical information detection transmitter according to an embodiment of the present invention.
FIG. 3 is a block diagram showing a hardware configuration of a main system according to an embodiment of the present invention.
FIG. 4 is a diagram showing an example of a physical information detection mechanism that can be used in the performance interface of the present invention. FIG. 4 (1) shows an example of a baton-type hand-held physical information detection transmitter. FIG. 4 (2) shows an example of a shoe shape.
FIG. 5 is a diagram showing another configuration example of the body information detection mechanism that can be used in the performance interface according to the present invention.
FIG. 6 shows a format example of sensor data used in one embodiment of the present invention.
FIG. 7 is a functional block diagram of a system that uses a plurality of analysis outputs based on a one-motion sensor (one-dimensional sensor) output according to an embodiment of the present invention.
FIG. 8 is a diagram schematically showing an example of a hand-movement motion trajectory and an example of an acceleration data waveform during a command operation with the one-dimensional acceleration sensor according to an embodiment of the present invention.
FIG. 9 is a diagram illustrating an example of a locus of hand movement and an acceleration output waveform of a sensor according to an embodiment of the present invention.
FIG. 10 is a diagram showing an example of a locus of hand movement and an acceleration output waveform of a two-dimensional sensor in one embodiment of the present invention.
FIG. 11 is a functional block diagram in an operation sensor and state sensor combined mode according to an embodiment of the present invention.
FIG. 12 shows a functional block diagram in an ensemble mode according to one embodiment of the present invention.
[Explanation of symbols]
TD LED or LCD display,
TS power switch,
TL LED emitter,
IS body information sensor,
TTa signal processing and transmission unit,
Mx, My, Mz; M1 to Mn motion detection signals,
Se, Sb state detection signal,
Dm Master machine operation detection data,
De, Db state detection data,
AM; MA, SA motion analysis block,
AE, AB state (viewpoint, breathing) analysis block,
PA, PB first and second data processing blocks,
MP, AP Main setting block and sub setting block.

Claims (6)

  1. A performance interface for generating performance control information that controls a musical sound generated from a musical sound generator in accordance with the bodily movement of an operator, comprising:
    a motion detector, installed on or carried by the operator, that detects motion in three directions based on the motion of the operator and outputs a motion detection signal composed of the corresponding three direction components;
    motion discriminating means for comparing the motion detection signals of the three direction components received from the motion detector with one another and for discriminating, based on the direction components matching each of a plurality of predetermined relationships, that a specific motion corresponding to each relationship is being performed by the operator; and
    performance control information generating means for generating performance control information corresponding to the specific motion based on the discrimination result.
  2. A performance interface for generating performance control information that controls a musical sound generated from a musical sound generator in accordance with the physical state of an operator, comprising:
    a motion detector, installed on or carried by the operator, that detects motion in three directions based on the motion of the operator and outputs a motion detection signal composed of the corresponding three direction components;
    a state detector, installable on or carried by the operator, that detects the physical state of the operator and outputs a corresponding state detection signal;
    motion discriminating means for comparing the motion detection signals of the three direction components received from the motion detector with one another and for discriminating, based on the direction components matching each of a plurality of predetermined relationships, that a specific motion corresponding to each relationship is being performed by the operator;
    physical state analysis means for analyzing the operator's physical state based on the state detection signal received from the state detector; and
    performance control information generating means for generating performance control information corresponding to the discriminated specific motion and the analyzed physical state, based on the discrimination result of the motion discriminating means and the analysis result of the physical state analysis means.
  3.   The performance interface according to claim 2, wherein the state detector detects at least one of a pulse, a body temperature, a skin resistance, an electroencephalogram, a respiration rate, and an eyeball viewpoint position, and outputs a corresponding state detection signal.
  4.   The performance interface according to any one of claims 1 to 3, wherein the motion detector is gripped by the operator's hand or attached to the operator's hand or foot.
  5.   The performance interface according to any one of claims 1 to 4, wherein the performance control information controls a volume, tempo, timing, timbre, effect, or pitch of a musical sound.
  6.   The performance interface according to any one of claims 1 to 5, wherein the motion detector is a three-dimensional acceleration sensor that detects motion in three mutually orthogonal directions based on the motion of the operator and outputs corresponding motion detection signals in the three directions.
JP2000002077A 2000-01-11 2000-01-11 Playing interface Expired - Lifetime JP3646599B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2000002077A JP3646599B2 (en) 2000-01-11 2000-01-11 Playing interface

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
JP2000002077A JP3646599B2 (en) 2000-01-11 2000-01-11 Playing interface
EP07110789.0A EP1855267B1 (en) 2000-01-11 2001-01-10 Apparatus and method for detecting performer´s motion to interactively control performance of music or the like
EP20070110770 EP1860642A3 (en) 2000-01-11 2001-01-10 Apparatus and method for detecting performer´s motion to interactively control performance of music or the like
US09/758,632 US7183480B2 (en) 2000-01-11 2001-01-10 Apparatus and method for detecting performer's motion to interactively control performance of music or the like
EP07110784.1A EP1837858B1 (en) 2000-01-11 2001-01-10 Apparatus and method for detecting performer´s motion to interactively control performance of music or the like
EP20010100081 EP1130570B1 (en) 2000-01-11 2001-01-10 Apparatus and method for detecting performer's motion to interactively control performance of music or the like
DE2001630822 DE60130822T2 (en) 2000-01-11 2001-01-10 Apparatus and method for detecting movement of a player to control interactive music performance
US10/291,134 US7179984B2 (en) 2000-01-11 2002-11-08 Apparatus and method for detecting performer's motion to interactively control performance of music or the like
US10/387,811 US7135637B2 (en) 2000-01-11 2003-03-13 Apparatus and method for detecting performer's motion to interactively control performance of music or the like
US11/400,710 US7781666B2 (en) 2000-01-11 2006-04-07 Apparatus and method for detecting performer's motion to interactively control performance of music or the like
US12/780,745 US8106283B2 (en) 2000-01-11 2010-05-14 Apparatus and method for detecting performer's motion to interactively control performance of music or the like

Publications (2)

Publication Number Publication Date
JP2001195059A JP2001195059A (en) 2001-07-19
JP3646599B2 true JP3646599B2 (en) 2005-05-11

Family

ID=18531224

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2000002077A Expired - Lifetime JP3646599B2 (en) 2000-01-11 2000-01-11 Playing interface

Country Status (1)

Country Link
JP (1) JP3646599B2 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4759888B2 (en) * 2001-09-07 2011-08-31 ヤマハ株式会社 Karaoke System
JP4646099B2 (en) * 2001-09-28 2011-03-09 パイオニア株式会社 Audio information reproducing apparatus and audio information reproducing system
JP4052274B2 (en) 2004-04-05 2008-02-27 ソニー株式会社 Information presentation device
JP2006114174A (en) 2004-10-18 2006-04-27 Sony Corp Content reproducing method and content reproducing device
JP2006171133A (en) 2004-12-14 2006-06-29 Sony Corp Apparatus and method for reconstructing music piece data, and apparatus and method for reproducing music content
JP4389821B2 (en) 2005-03-22 2009-12-24 ソニー株式会社 Body motion detection device, content playback device, body motion detection method and content playback method
JP2006304167A (en) 2005-04-25 2006-11-02 Sony Corp Key generating method and key generating apparatus
JP2007058752A (en) * 2005-08-26 2007-03-08 Sony Corp Information processor, information processing method and program
JP4757089B2 (en) 2006-04-25 2011-08-24 任天堂株式会社 Music performance program and music performance apparatus
JP2008008924A (en) * 2006-06-27 2008-01-17 Yamaha Corp Electric stringed instrument system
JP4826509B2 (en) * 2007-02-28 2011-11-30 ブラザー工業株式会社 Karaoke equipment
US20090265671A1 (en) * 2008-04-21 2009-10-22 Invensense Mobile devices with motion gesture recognition
JP6064713B2 (en) * 2013-03-19 2017-01-25 ヤマハ株式会社 Signal output device
JP6305958B2 (en) * 2015-04-10 2018-04-04 日本電信電話株式会社 Ensemble device, ensemble system, method and program thereof
WO2017195343A1 (en) * 2016-05-13 2017-11-16 株式会社阪神メタリックス Musical sound generation system



Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20040928

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20041012

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20041213

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20050118

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20050131

R150 Certificate of patent (=grant) or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313532

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080218

Year of fee payment: 3

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090218

Year of fee payment: 4

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100218

Year of fee payment: 5

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110218

Year of fee payment: 6

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120218

Year of fee payment: 7

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130218

Year of fee payment: 8

FPAY Renewal fee payment (prs date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140218

Year of fee payment: 9