WO2023195333A1 - Control device - Google Patents

Control device

Info

Publication number
WO2023195333A1
Authority
WO
WIPO (PCT)
Prior art keywords: key, sound, communication base, performance, signal
Application number
PCT/JP2023/010952
Other languages: French (fr), Japanese (ja)
Inventor
善政 磯崎
克典 鈴木
隆洋 寺田
隆志 森
潤 石井
伊吹 半田
琢哉 藤島
幸司 谷高
幸夫 涌井
宗一 瀧川
陽 前澤
陽貴 大川
吉勝 松原
保彦 大場
福太郎 奥山
智也 佐々木
Original Assignee
ヤマハ株式会社
Priority date
Filing date
Publication date
Application filed by ヤマハ株式会社
Publication of WO2023195333A1 publication Critical patent/WO2023195333A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/32 Constructional details

Definitions

  • the present invention relates to a control device.
  • Patent Document 1 discloses a technique for reducing the influence of communication delay in order to realize a comfortable ensemble performance.
  • One of the purposes of the present invention is to enable a plurality of musicians playing in an ensemble to feel a sense of unity.
  • a control device in one embodiment includes a first transmitter, a first receiver, and a first generator.
  • the first transmitter transmits first performance data including performance details for a keyboard instrument at the first communication base to the second communication base.
  • the first receiving section receives the second performance data from the second communication base.
  • the first generation section generates a drive signal for producing sound according to the second performance data, and outputs it to the sound production device at the first communication base.
  • At least one of the first performance data and the second performance data includes a key position signal indicating the amount of depression of a key on the keyboard instrument.
  • FIG. 1 is a diagram illustrating the configuration of the communication system in the first embodiment. FIG. 2 is a diagram illustrating the internal configuration of the automatic performance piano in the first embodiment.
  • FIG. 3 is a diagram illustrating the configuration of the control device in the first embodiment. FIG. 4 is a diagram illustrating the configuration of the ensemble control function in the first embodiment. FIG. 5 is a diagram illustrating the positional relationship between the vibrator and the pickup sensor in the second embodiment.
  • FIG. 6 is a diagram illustrating the configuration of the drive signal generation section in the second embodiment.
  • FIG. 7 is a diagram illustrating the relationship between velocity and delay time in the third embodiment.
  • FIG. 8 is a diagram illustrating the relationship between velocity and correction value in the third embodiment.
  • FIG. 1 is a diagram illustrating the configuration of a communication system in one embodiment.
  • the communication system includes a server 1000 connected to a network NW such as the Internet.
  • Server 1000 includes a control unit such as a CPU, a storage unit, and a communication unit.
  • the control unit provides a service for realizing an ensemble performance between communication bases by executing a predetermined program.
  • the server 1000 controls communication between a plurality of communication bases connected to the network NW, and executes processing necessary for the automatic performance pianos 1 at each communication base to realize P2P type communication with each other. This processing may be realized by a known method.
  • in FIG. 1, two communication bases T1 and T2 are illustrated, but the number is not limited to this, and more communication bases may exist. In the following description, when the communication bases T1 and T2 are described without distinction, they are simply referred to as communication bases.
  • information related to the performance at each communication base is exchanged between the communication base T1 and the communication base T2 by P2P communication.
  • through this P2P communication, an ensemble performance is realized among a plurality of communication bases.
  • an automatic performance piano 1 is arranged at each communication base.
  • an environment collecting device 82 and an environment providing device 88 are connected to the automatic performance piano 1.
  • the environment collecting device 82 includes a sensor for collecting information on the surrounding environment of the automatic performance piano 1, and outputs a collection signal indicating the measurement result of the sensor.
  • the surrounding environment includes, for example, sound, light, vibration, temperature, air flow, and the like.
  • upon acquiring a control signal indicating the surrounding environment, the environment providing device 88 provides an environment based on the control signal.
  • the environment collecting device 82 and the environment providing device 88 may be configured integrally.
  • the environment providing device 88 may be provided depending on the number of other communication bases. For example, if there are three communication bases in addition to the communication base T1, three environment providing devices 88 may be provided in the communication base T1 corresponding to the respective communication bases. At least one of the environment collecting device 82 and the environment providing device 88 may be built into the automatic performance piano 1. Specific examples of the environment collecting device 82 and the environment providing device 88 will be described later.
  • the automatic performance piano 1 includes a keyboard instrument 10, a control device 20, a sensor 30, and a drive device 40.
  • FIG. 2 is a diagram illustrating the internal configuration of the automatic performance piano in the first embodiment.
  • the keyboard instrument 10 of the automatic performance piano 1 corresponds to, for example, a grand piano.
  • Keyboard instrument 10 includes a plurality of keys 12.
  • the keyboard instrument 10 includes a hammer 14, a string 15, and a damper 18 provided corresponding to each key 12.
  • in FIG. 2, the components provided corresponding to each key 12 are shown focusing on those provided for one key 12. Therefore, descriptions of the components provided corresponding to the other keys 12 are omitted. Some components, such as the damper 18, may not be provided for some keys 12.
  • the keyboard instrument 10 includes a plurality of pedals 13.
  • the plurality of pedals 13 are, for example, a damper pedal, a shift pedal, and a sostenuto pedal.
  • in FIG. 2, the components provided corresponding to each pedal 13 are shown focusing on those provided for one pedal 13. Therefore, descriptions of the components provided corresponding to the other pedals 13 are omitted.
  • the keyboard instrument 10 further includes a keyboard lid 11, a bridge 16, a soundboard 17, a straight post 19, and the like.
  • the sensor 30 includes a key sensor 32, a pedal sensor 33, and a hammer sensor 34.
  • the key sensor 32 is provided corresponding to each key 12 and outputs a measurement signal according to the behavior of the key 12 to the control device 20.
  • the key sensor 32 outputs a measurement signal according to the position (depression amount) of the key 12 to the control device 20.
  • the position of the key 12 may be measured in continuous quantities (fine resolution) or by detecting that the key 12 passes a predetermined position.
  • the key 12 may be detected at a plurality of positions within the pressing range of the key 12 (range from the rest position to the end position).
  • the hammer sensor 34 is provided corresponding to each hammer 14 and outputs a measurement signal according to the behavior of the hammer 14 to the control device 20.
  • the hammer sensor 34 measures the position (rotation amount) of the hammer shank immediately before the hammer 14 hits the string 15, and outputs a measurement signal to the control device 20 in accordance with the measurement result.
  • the position of the hammer shank may be measured in continuous quantities (fine resolution) or by detecting when the hammer shank passes a predetermined position.
  • the position where the hammer shank is detected may be a plurality of positions within the range immediately before the hammer 14 hits the string 15.
  • the pedal sensor 33 is provided corresponding to each pedal 13 and outputs a measurement signal according to the behavior of the pedal 13 to the control device 20.
  • the pedal sensor 33 outputs a measurement signal according to the position (depression amount) of the pedal 13 to the control device 20.
  • the position of the pedal 13 may be detected as a continuous quantity (fine resolution), or may be detected when the pedal 13 passes a predetermined position.
  • the position where the pedal 13 is detected may be any of a plurality of positions within the depression range of the pedal 13 (range from the rest position to the end position).
  • the drive device 40 includes a key drive device 42, a pedal drive device 43, a stopper 44, a vibrator 47, and a damper drive device 48.
  • the key drive device 42 is provided corresponding to each key 12, and is driven to press down the key 12 under control using a drive signal from the control device 20. This mechanically reproduces the same situation as when the player presses the key 12.
  • a pedal drive device 43 is provided corresponding to each pedal 13, and is driven to press down the pedal 13 under control using a drive signal from the control device 20. This mechanically reproduces the same situation as when the player depresses the pedal 13.
  • the damper driving device 48 is provided corresponding to each damper 18, and is driven so as to separate the damper 18 from the string 15 under control using a drive signal by the control device 20.
  • the damper drive device 48 may have a configuration that drives all dampers 18 simultaneously.
  • the stopper 44 is driven by control from the control device 20 so as to be in either a position where it collides with the hammer shank (blocking position) or a position where it does not collide with the hammer shank (retracted position).
  • the vibrator 47 is supported by a support section connected to the straight post 19 so as to be in contact with the surface of the soundboard 17 opposite to the side on which the bridge 16 is arranged.
  • the vibrator 47 vibrates the soundboard 17 under the control of the control device 20 using a drive signal.
  • when a drive signal containing a piano sound is supplied from the control device 20, the vibrator 47 applies vibrations to the soundboard 17 according to the drive signal, and the piano sound is emitted from the soundboard 17.
  • a plurality of vibrators 47 may be arranged so as to be in contact with the soundboard 17.
  • instead of the vibrator 47, a speaker that emits sound may be used.
  • Sound generation by the keyboard instrument 10 includes cases where it is realized by hitting the strings 15 with the hammer 14 and cases where it is realized by vibrating the soundboard 17 with the vibrator 47. Therefore, the keyboard instrument 10 can also be said to include a sounding device that generates a string-striking sound by driving the keys 12, and a sounding device that generates a sound from the soundboard 17 by driving the vibrator 47.
  • the driving of the key 12 and the driving of the vibrator 47 are realized by outputting a driving signal to the driving device 40 as described later.
  • control device 20 is attached to the keyboard instrument 10.
  • the control device 20 does not need to be a device attached to the keyboard instrument 10, and may be, for example, a personal computer, a tablet computer, a smartphone, or the like.
  • FIG. 3 is a diagram illustrating the configuration of the control device in the first embodiment.
  • the control device 20 includes a control section 21, a storage section 22, an operation panel 23, a communication section 24, a sound source section 25, and an interface 26. Each of these components is connected via a bus 27.
  • the control unit 21 is an example of a computer including a processor such as a CPU and a storage device such as a RAM.
  • the control unit 21 executes a program stored in the storage unit 22 using a CPU (processor), and causes the control device 20 to implement functions for executing various processes.
  • the functions realized by the control device 20 include an ensemble control function to be described later. This ensemble control function controls each part of the control device 20 and each component connected to the interface 26.
  • a sensor 30 and a drive device 40 are connected to the interface 26 .
  • an external device 80 is further connected to the interface 26.
  • the interface 26 transmits drive signals, control signals, etc. generated by the control unit 21 to the target configurations, and receives measurement signals, collection signals, etc. from each target configuration.
  • the storage unit 22 is a storage device such as a nonvolatile memory or a hard disk drive.
  • the storage unit 22 stores a program executed by the control unit 21 and various data required when executing this program.
  • the operation panel 23 has operation buttons and the like that accept user operations. When a user's operation is accepted using this operation button, an operation signal corresponding to the operation is output to the control unit 21.
  • the operation panel 23 may have a display screen. In this case, the operation panel 23 may be a touch panel in which a touch sensor is combined with a display screen.
  • the communication unit 24 is a communication module that communicates with other devices wirelessly, wired, etc.
  • the other device with which the communication unit 24 communicates is the server 1000 or the automatic performance piano 1 at another communication base.
  • performance data, environment data, etc. indicating the contents of a performance on the keyboard instrument 10 are communicated between the communication bases.
  • the sound source section 25 generates a sound signal under control from the control section 21.
  • the sound signal is used as a drive signal for driving the vibrator 47 (vibration drive signal to be described later).
  • the sound signal includes, in this example, a signal representing the sound of a piano.
  • the control unit 21 controls the sound source unit 25 to generate a sound signal representing a piano sound according to the performance content corresponding to the performance data.
  • the performance data may be data generated based on a measurement signal generated by the sensor 30.
  • the performance data may be, for example, MIDI format data including sound production control information such as note-on, note-off, note number, velocity, etc., or may be information directly indicated by a measurement signal.
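  • As an illustration of what such performance data might carry, the following sketch defines a minimal message type in Python. The field names and types are assumptions made for illustration only; the embodiment specifies only that the data may be MIDI-style sound production control information (note-on, note-off, note number, velocity) or the values of the measurement signals themselves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceEvent:
    """One performance-data message exchanged between communication bases.

    Field names and types are illustrative assumptions; the embodiment only
    says the data may be MIDI-style sound production control information
    (note-on/off, note number, velocity) or the measurement signals themselves.
    """
    timestamp_ms: float                       # capture time at the sending base
    note_number: Optional[int] = None         # MIDI note number, if key-related
    key_depression: Optional[float] = None    # 0.0 (rest position) .. 1.0 (end position)
    pedal_type: Optional[str] = None          # "damper", "shift", or "sostenuto"
    pedal_depression: Optional[float] = None
    velocity: Optional[int] = None            # present only once a note-on is known

# A key position sample sent while the key is still travelling,
# i.e. before any note-on has been determined.
sample = PerformanceEvent(timestamp_ms=1234.5, note_number=60, key_depression=0.35)
print(sample)
```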
  • the interface 26 is an interface that connects the control device 20 and each external component.
  • Each component connected to interface 26 includes, in this example, sensor 30, drive device 40, and external device 80, as described above.
  • the interface 26 outputs the measurement signal output from the sensor 30 to the control unit 21.
  • the interface 26 outputs a drive signal for driving each device to the drive device 40.
  • the drive signal is generated in an ensemble control function 100, which will be described later.
  • the interface 26 may include a headphone terminal or the like to which a sound signal representing the piano sound generated by the sound source section 25 is supplied.
  • the structure for realizing the ensemble control function is not limited to being realized by executing a program, and at least a part of the structure may be realized by hardware.
  • the configuration for realizing the ensemble control function may be realized not by the control device 20 but by a device connected to the interface 26 (for example, a computer in which this program is installed).
  • the control unit 21 controls the stopper 44 to be placed at the blocking position.
  • in this case, the stopper 44 prevents the string from being struck, while a sound signal corresponding to the performance operation (for example, the sound of a piano performance) is generated in the sound source section 25.
  • the vibrator 47 vibrates the soundboard 17, thereby emitting sound.
  • a signal for driving the vibrator 47 is generated by a drive signal generation section 145 described below.
  • FIG. 4 is a diagram illustrating the configuration of the ensemble control function in the first embodiment.
  • the ensemble control function 100 includes a performance data generation section 131, a performance data transmission section 133, a performance data reception section 143, and a drive signal generation section 145.
  • the ensemble control function 100 further includes an environmental data generation section 121, an environmental data transmission section 123, an environmental data receiving section 183, and a control signal generation section 185 as functions for sharing the surrounding environment of the automatic performance piano 1 between communication bases in conjunction with the ensemble performance.
  • the performance data generation unit 131 generates performance data indicating the content of the performance on the keyboard instrument 10 based on the measurement signal output from the sensor 30.
  • the performance data includes a measurement signal output from the key sensor 32 (hereinafter referred to as a key position signal) and a measurement signal output from the pedal sensor 33 (hereinafter referred to as a pedal position signal).
  • the key position signal includes the pitch of the pressed key 12 and the amount of pressing of the key 12. If the key sensor 32 is a sensor that measures the amount of depression of the key 12 at four locations, the information of the amount of depression of the key 12 included in the key position signal indicates the position of one of the four locations.
  • the pedal position signal includes the type of pedal 13 that was pressed and the amount of depression of the pedal 13. If the pedal sensor 33 is a sensor that measures the amount of pedal depression at three locations, the information on the amount of depression of the pedal 13 indicates the position of one of the three locations.
  • the performance data may further include a measurement signal (hereinafter referred to as a hammer position signal) output from the hammer sensor 34.
  • the hammer position signal includes, for example, the pitch of the key and the rotational position of the hammer 14.
  • the performance data generated by the performance data generation section 131 is data (for example, in MIDI format) that includes sound production control information generated based on the measurement results of the key sensor 32 and the pedal sensor 33.
  • on the other hand, the amount of depression of the key 12 can be sequentially transmitted while the key 12 is being depressed, even before the amount of depression reaches the state where a note-on occurs. Therefore, it is possible to make the automatic performance piano 1 at another communication base recognize that the key 12 has started to be pressed even before the note-on is reached.
  • for example, when the key 12 on the automatic performance piano 1 at the communication base T1 starts to be pressed, the automatic performance piano 1 at the communication base T2 can start driving the corresponding key 12 down to the recognized depression amount even before a note-on occurs. By doing so, it is possible to drive the keys 12 at the communication base T2 so as to follow the playing operation on the keys 12 at the communication base T1 with a short delay time.
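  • A minimal sketch of this receiving-side behavior, assuming a hypothetical KeyDriver interface to the key drive device 42: each key position sample from the remote base immediately drives the local key toward the reported depression amount, without waiting for a note-on.

```python
class KeyDriver:
    """Hypothetical stand-in for the key drive device 42 (one actuator per key)."""
    def move_to(self, note_number: int, depression: float) -> None:
        # In a real device this would command the actuator; here we just log it.
        print(f"drive key {note_number} toward depression {depression:.2f}")

def follow_remote_key(driver: KeyDriver, note_number: int, depression: float) -> None:
    """Drive the local key toward the depression amount reported by the other
    base as each key position sample arrives, instead of waiting for a note-on.
    A later note-on only confirms the strike; it does not start the motion."""
    driver.move_to(note_number, depression)

# Samples streamed from communication base T1 while its key 12 is still travelling.
driver = KeyDriver()
for depression in (0.1, 0.3, 0.5, 0.8):
    follow_remote_key(driver, note_number=60, depression=depression)
```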
  • the performance data transmitter 133 transmits the performance data generated by the performance data generator 131 to other communication bases.
  • the performance data receiving unit 143 receives performance data transmitted from other communication bases.
  • the drive signal generation section 145 generates a drive signal used in the drive device 40 based on the performance data received by the performance data reception section 143.
  • This drive signal includes a signal supplied to the key drive device 42 (key drive signal), a signal supplied to the pedal drive device 43 (pedal drive signal), and a signal supplied to the vibrator 47 (vibration drive signal).
  • the key drive signal is generated based on the performance data, and more specifically, based on the key position signal included in the performance data.
  • the key drive signal is a signal for controlling the key drive device 42 to drive the key 12 so as to reproduce the amount of depression corresponding to the key position signal.
  • the key 12 to be driven is a key corresponding to the pitch specified by the key position signal.
  • the pedal drive signal is generated based on the performance data, and more specifically, based on the pedal position signal.
  • the pedal drive signal is a signal for controlling the pedal drive device 43 to move the pedal corresponding to the type specified by the pedal position signal to a position corresponding to the amount of depression.
  • the vibration driving signal is generated based on the performance data, and more specifically, is a signal generated by the sound source section 25 based on the key position signal and the pedal position signal.
  • when the vibrator 47 vibrates the soundboard 17 in response to the vibration drive signal, the sound corresponding to the signal generated in the sound source section 25 (piano sound in this example) spreads around the keyboard instrument 10 via the soundboard 17.
  • when generating a sound signal in the sound source section 25, the drive signal generation section 145 may generate sound generation control information based on the key position signal and the pedal position signal, and cause the sound source section 25 to generate a sound signal based on the sound generation control information. At this time, the drive signal generation section 145 may generate the sound generation control information using a calculation that predicts the note-on timing and velocity from the change in the amount of depression of the key 12 indicated by the key position signal in the performance data. Changes in the rotational position of the hammer 14 indicated by the hammer position signal in the performance data may also be used for this predictive calculation.
  • the predictive calculation may use a learned model obtained in advance by machine learning, or may use a fitting process that assumes a constant velocity trajectory, a constant acceleration trajectory, etc. based on changes in the amount of depression. This makes it possible to improve prediction accuracy even when the movements of the key 12 and the hammer 14 are not aligned.
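  • The following sketch illustrates one such fitting process under a constant-acceleration assumption, using only NumPy; the sample values and the choice of a quadratic fit are illustrative, not the embodiment's prescribed method.

```python
import numpy as np

def predict_note_on(times_ms, depressions, end_position=1.0):
    """Predict note-on time and a rough velocity from partial key travel.

    Fits a constant-acceleration trajectory x(t) = a*t^2 + b*t + c to the key
    position samples received so far, then solves for the time at which the key
    reaches the end position and returns the key speed at that instant.
    """
    t = np.asarray(times_ms, dtype=float)
    x = np.asarray(depressions, dtype=float)
    a, b, c = np.polyfit(t, x, 2)             # constant-acceleration model
    roots = np.roots([a, b, c - end_position])
    real = roots[np.isreal(roots)].real
    future = real[real >= t[-1]]              # only times after the last sample
    if future.size == 0:
        return None                           # key is unlikely to reach the end position
    t_hit = future.min()
    speed = 2 * a * t_hit + b                 # dx/dt at impact, a proxy for velocity
    return t_hit, speed

# Five position samples of a key being pressed (times in ms, depression 0..1).
print(predict_note_on([0, 5, 10, 15, 20], [0.02, 0.10, 0.22, 0.40, 0.62]))
```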
  • when the key 12 is driven based on performance data and the hammer 14 that operates as a result hits the string 15, the timing at which the sound is produced is delayed. Therefore, the timing of sound generation is affected not only by the communication delay between communication bases but also by the delay when the key 12 is driven.
  • on the other hand, when the key 12 and the pedal 13 are driven by the key drive signal and the pedal drive signal but the stopper 44 prevents the hammer 14 from hitting the string 15, no string-striking sound is generated. Instead, sound is generated from the soundboard 17 by driving the vibrator 47 with the vibration drive signal. Producing sound using the vibrator 47 does not require driving the key 12. Therefore, regarding the time from the sound generation instruction (for example, note-on) to the actual sound generation, the time for sound generation by the vibrator 47 is shorter than the time for sound generation by string striking.
  • the performance data and sound generation method to be transmitted are not limited to the above combinations.
  • sound generation control information may be transmitted as the performance data.
  • the time difference in sound generation between the communication bases can be made shorter than when the sound generation by string striking is used.
  • Each drive signal is generated based on the sound production control information in the performance data.
  • the velocity value in the sound generation control information may be increased to a predetermined value or more.
  • this is because the slower the driving speed is set, the more the key 12 may move with a delay from the scheduled timing.
  • in this case, the string striking of the hammer 14 is prevented by the stopper 44 and no string-striking sound is produced, so increasing the velocity has no effect on the sound production. Therefore, the velocity value can be increased for keys 12 that do not contribute to sound production.
  • the velocity value is not changed so that the sound content remains unchanged. At this time, some of the keys 12 may not be driven so as not to affect the user's performance.
  • the keys 12 that are not driven may be those whose pitches are used in the piece to be performed, by setting the piece in advance.
  • the stopper 44 may be controlled to the retracted position to generate sound by driving the key 12 and striking the string.
  • even in this case, by transmitting the key position signal as the performance data, the time difference in sound production between communication bases can be made shorter than when sound generation control information is transmitted as the performance data.
  • in this case, both the sound generated by the user's performance operation (for example, the performance at the communication base T1) and the sound generated by the performance at another communication base (for example, the communication base T2) include string-striking sounds.
  • the pedal 13 may not be driven by the pedal drive device 43, while the damper 18 may be driven by the damper drive device 48.
  • sound generation by the vibrator 47 may be used while the stopper 44 is controlled to the retracted position.
  • the drive signal generation section 145 may keep the key 12 and the pedal 13 from moving by not generating the key drive signal and the pedal drive signal. In this way, while the sound generated by the user's performance operation includes the string-striking sound, the sound generated by the performance at another communication base can be the sound generated by the vibrator 47 with less delay.
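  • The trade-offs above can be summarized as a mode-selection step. The sketch below is purely illustrative (the mode names, flags, and return format are assumptions); it only encodes the combinations described in this section: soundboard (vibrator) sound with the stopper blocking the hammers versus string-striking sound with the stopper retracted.

```python
from enum import Enum

class SoundMode(Enum):
    SOUNDBOARD_ONLY = "vibrator drives the soundboard, stopper blocks the hammers"
    STRING_STRIKE = "keys are driven and the hammers strike the strings"

def plan_remote_playback(mode: SoundMode) -> dict:
    """Return which drive signals to generate for performance data received
    from another base. The dict keys and values are illustrative only."""
    if mode is SoundMode.SOUNDBOARD_ONLY:
        return {
            "stopper": "blocking position",
            "generate_key_drive_signal": True,     # keys may still move visually
            "generate_vibration_drive_signal": True,
            "boost_velocity_for_key_drive": True,  # driven keys do not sound, so speed them up
        }
    return {
        "stopper": "retracted position",
        "generate_key_drive_signal": True,
        "generate_vibration_drive_signal": False,
        "boost_velocity_for_key_drive": False,     # velocity now affects the sound itself
    }

print(plan_remote_playback(SoundMode.SOUNDBOARD_ONLY))
```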
  • the environmental data generation unit 121 generates environmental data indicating the surrounding environment based on the collection signal output from the environment collection device 82.
  • the ambient environment includes images and sounds around the device. Therefore, the environment collecting device 82 includes a device for collecting the surrounding environment, that is, a camera (imaging device) for obtaining images and a microphone (sound collecting device) for obtaining sound. In this example, the camera acquires an image of a range that includes the player of the keyboard instrument 10.
  • the information regarding the image included in the environmental data may be image information indicating the image (video) itself, but in this example, the information regarding the image is information obtained by capturing the movements of the performer using motion capture technology.
  • the sensor that measures the movement of the performer is not limited to a camera, and may include an IMU (inertial measurement unit), a pressure sensor, a displacement sensor, and the like.
  • the motion information is, for example, information about a plurality of parts having predetermined features extracted from an image and indicated by the coordinates of each part.
  • the environmental data may be transmitted in the form of audio data representing sound signals. In this case, the motion information can be synchronized with the sound signal included in the audio data by being transmitted as data on a predetermined channel in the audio data.
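  • One way to keep motion information sample-synchronized with the sound, as suggested above, is to carry it on dedicated channels of the audio stream. The sketch below sample-and-holds a low-rate motion track onto the audio time base; the channel layout, rates, and scaling are assumptions, not values from the embodiment.

```python
import numpy as np

def pack_motion_channel(audio_stereo, motion_xy, frame_rate=48000, motion_rate=60):
    """Attach motion-capture samples as extra channels of an audio block.

    audio_stereo : (n_samples, 2) float array of the collected sound
    motion_xy    : (n_motion, 2) float array of, e.g., a tracked wrist position
    The motion stream is sample-and-held onto the audio time base so the
    receiver can read it back aligned with the sound sample by sample.
    """
    n = audio_stereo.shape[0]
    step = frame_rate // motion_rate            # audio samples per motion sample
    motion_channel = np.zeros((n, 2), dtype=audio_stereo.dtype)
    for i in range(n):
        j = min(i // step, len(motion_xy) - 1)
        motion_channel[i] = motion_xy[j]
    return np.hstack([audio_stereo, motion_channel])   # (n_samples, 4)

block = pack_motion_channel(np.zeros((4800, 2)), np.random.rand(6, 2))
print(block.shape)   # (4800, 4): two audio channels plus two motion channels
```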
  • the environmental data may be converted into an existing data format, such as a format indicating sound generation control information (for example, MIDI format) or a video data format, and then transmitted to another communication base as part of other data.
  • the sounds collected by the environment collection device 82 may include sounds (piano sounds) generated by playing the keyboard instrument 10.
  • the period during which the sound produced by playing the keyboard instrument 10 exists can be specified from the key position signal or the like. If the sound produced by the performance is produced by the vibrator 47, the sound can be identified by the sound source section 25. Therefore, when generating the environmental data, the environmental data generating section 121 may perform signal processing to cancel the sound component generated by the sound source section 25 from the sound included in the collected signal.
  • the environmental data generation unit 121 may perform signal processing to cancel the string striking sound component from the sound included in the collected signal.
  • the string striking sound component may be generated by the sound source section 25 using a key position signal and a pedal drive signal.
  • the environmental data generation unit 121 may generate environmental data for a period in which sounds generated by playing the keyboard instrument 10 exist without using the sounds included in the collected signals. At this time, the environment collecting device 82 may recognize the period during which the performance is being performed, and may not collect sounds during that period.
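  • Because the sound produced via the vibrator 47 is known from the signal generated by the sound source section 25, that signal can serve as a reference for the cancellation mentioned above. The sketch below uses a standard NLMS adaptive filter for this purpose; this is one conventional way to realize such cancellation, not necessarily the embodiment's own method, and the tap count and step size are arbitrary.

```python
import numpy as np

def nlms_cancel(mic, reference, taps=64, mu=0.5, eps=1e-6):
    """Remove the component of `reference` (the signal produced by the sound
    source section) from `mic` (the collected signal) with an NLMS adaptive
    filter, returning the residual ambient-sound estimate."""
    w = np.zeros(taps)              # adaptive filter coefficients
    buf = np.zeros(taps)            # recent reference samples
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w @ buf                 # estimate of the unwanted component
        e = mic[n] - y              # residual = ambient sound estimate
        w += mu * e * buf / (buf @ buf + eps)
        out[n] = e
    return out

# Toy check: the microphone hears ambient noise plus a filtered copy of the reference.
rng = np.random.default_rng(0)
ref = rng.standard_normal(8000)
ambient = 0.1 * rng.standard_normal(8000)
mic = ambient + np.convolve(ref, [0.6, 0.3, 0.1])[:8000]
print(np.std(mic), np.std(nlms_cancel(mic, ref)))   # residual power drops toward the ambient level
```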
  • the environmental data transmitting unit 123 transmits the environmental data generated by the environmental data generating unit 121 to other communication bases.
  • the environmental data receiving unit 183 receives environmental data transmitted from other communication bases.
  • the control signal generating unit 185 generates a control signal used in the environment providing device 88 based on the environmental data received by the environmental data receiving unit 183.
  • This control signal is a signal for reproducing information about the surrounding environment included in the environmental data.
  • for example, a signal for displaying an image on a display (display device) and a signal for emitting sound from a speaker (sound emitting device) are generated as the control signal.
  • the display may be placed at a position that the player can easily see, such as on the keyboard lid 11 of the keyboard instrument 10 or the music stand.
  • a projector that projects an image onto the keyboard lid 11 may be used instead of the display.
  • an environment providing device 88 may be provided corresponding to each communication base.
  • the control signal supplied to the environment providing device 88 is generated based on the environmental data received from the communication base corresponding to the environment providing device 88.
  • the control signal generation unit 185 may generate an image imitating the performer using the motion information included in the environmental data, and generate a signal for displaying the generated image on a display. At this time, an image may be generated that emphasizes a specific part or action.
  • the specific part may be, for example, the player's eyes, face, fingers, or the like.
  • the specific motion may be, for example, a movement of the line of sight, a movement of the face, a movement of fingers during a musical performance, or the like.
  • the control signal generation section 185 may generate an image, such as a graph, that numerically represents the movement of the performer using the motion information included in the environmental data, and may generate a signal for displaying the generated image on the display. The performer can use the displayed information to adjust the performance.
  • the control signal generation unit 185 may generate a signal for displaying an image on the display based on the performance data received by the performance data reception unit 143.
  • the image based on the performance data may include an image showing the performance content included in the performance data, for example, an image showing keys and pedals being operated.
  • the time difference in sound generation between communication bases can be reduced, and users can feel closer to each other's surrounding environments. Therefore, a plurality of musicians playing in an ensemble can feel a sense of unity.
  • <Second embodiment> In the second embodiment, the performance content included in the performance data is not limited to indicating performance operations on the keys 12 and the like.
  • the performance data includes a signal indicating the vibration of the soundboard 17 to which the string-striking sound caused by the performance is transmitted.
  • the vibration of the soundboard 17 is measured by a pickup sensor included in the sensor 30 in this example.
  • FIG. 5 is a diagram illustrating the positional relationship between the vibrator and the pickup sensor in the second embodiment.
  • FIG. 5 is a diagram of the keyboard instrument 10 viewed from below.
  • the soundboard 17 is provided with two vibrators 47 (vibrators 47H and 47L).
  • the vibrators 47H and 47L are arranged between the plurality of sound bars 17a of the soundboard 17.
  • the vibrator 47H is provided at a position corresponding to the bridge 16H of the two bridges 16 (the bridge 16H (long bridge) and the bridge 16L (short bridge)).
  • the vibrator 47L is provided at a position corresponding to the bridge 16L.
  • the bridge 16H is a bridge that supports the strings 15 on the treble side
  • the bridge 16L is a bridge that supports the strings 15 on the bass side.
  • the vibrator 47H is supported by a support portion 97H connected to the straight post 19.
  • the vibrator 47L is supported by a support portion 97L connected to the straight post 19.
  • the vibrator 47 is not limited to being provided at a position on the soundboard 17 corresponding to the bridge 16, and may be provided at a position away from the bridge 16 or at a position corresponding to a sound bar 17a. When provided at a position corresponding to a sound bar 17a, the vibrator 47 may be provided on the string 15 side of the soundboard 17.
  • the pickup sensor 37H is attached to the soundboard 17 near the vibrator 47H, measures the vibration of the soundboard 17, and outputs a measurement signal indicating the measurement result.
  • the pickup sensor 37L is attached to the soundboard 17 near the vibrator 47L, measures vibrations of the soundboard 17, and outputs a measurement signal indicating the measurement result. Therefore, the performance data that the performance data transmission section 133 transmits to other communication bases, and the performance data that the performance data reception section 143 receives from other communication bases, include the measurement signal PU1 from the pickup sensor 37H and the measurement signal PU2 from the pickup sensor 37L.
  • FIG. 6 is a diagram illustrating the configuration of the drive signal generation section in the second embodiment.
  • the drive signal generation section 145A generates vibration drive signals DS1 and DS2 from the measurement signals PU1 and PU2 included in the performance data received by the performance data reception section 143.
  • the vibration drive signal DS1 is supplied to the vibrator 47H.
  • the vibration drive signal DS2 is supplied to the vibrator 47L.
  • the drive signal generation section 145A includes a crosstalk processing section 1451, a sound imparting section 1453, and an amplification section 1455.
  • the crosstalk processing unit 1451 performs a predetermined delay process and a predetermined filter process on the measurement signal PU1, and adds the processed signal to the measurement signal PU2.
  • the crosstalk processing unit 1451 performs predetermined delay processing and predetermined filter processing on the measurement signal PU2, and adds the processed signal to the measurement signal PU1. This reduces the crosstalk components included in each of the measurement signals PU1 and PU2.
  • the sound imparting unit 1453 performs signal processing to impart acoustic effects such as a delay, compressor, expander, and equalizer to the measurement signals PU1 and PU2.
  • the amplifying section 1455 amplifies the measurement signals PU1 and PU2 to output vibration drive signals DS1 and DS2 to be supplied to the vibrators 47H and 47L.
  • the vibration of the soundboard 17 measured by the pickup sensors 37H and 37L includes not only the vibration caused by the string-striking sound but also the vibration caused by the vibrators 47H and 47L driven by the vibration drive signals DS1 and DS2. Therefore, before transmitting the performance data to another communication base, the performance data transmission section 133 may apply signal processing to the measurement signals PU1 and PU2 included in the performance data to reduce the components of the vibration drive signals DS1 and DS2.
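  • A minimal sketch of the crosstalk processing described for the crosstalk processing section 1451: each pickup signal is delayed, FIR-filtered, and added to the other. The delay and filter coefficients below are placeholders; in practice they would be tuned so that the added term cancels the component that leaked across the soundboard.

```python
import numpy as np

def crosstalk_reduce(pu1, pu2, delay=12, fir=(-0.30, -0.15, -0.05)):
    """Reduce mutual crosstalk between the two pickup signals.

    Each signal is delayed, FIR-filtered, and added to the other. The delay
    (in samples) and the filter coefficients are placeholders chosen for
    illustration, not values specified by the embodiment.
    """
    def delayed_filtered(x):
        d = np.concatenate([np.zeros(delay), x])[:len(x)]   # delay processing
        return np.convolve(d, fir)[:len(x)]                 # filter processing
    out1 = pu1 + delayed_filtered(pu2)   # reduce PU2's leakage in PU1
    out2 = pu2 + delayed_filtered(pu1)   # reduce PU1's leakage in PU2
    return out1, out2

rng = np.random.default_rng(1)
pu1, pu2 = rng.standard_normal(1000), rng.standard_normal(1000)
out1, out2 = crosstalk_reduce(pu1, pu2)
print(out1.shape, out2.shape)   # (1000,) (1000,)
```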
  • <Third embodiment> As described above, when the velocity value is small, it takes time for the key drive device 42 to drive the key 12, and the movement of the key 12 is delayed compared to when the velocity value is large. As mentioned above, if no string-striking sound is produced, the velocity value can simply be increased, but if a string-striking sound is produced, increasing the velocity value too much significantly changes the sound content. In the third embodiment, an example for reducing such changes in the sound content as much as possible will be described.
  • FIG. 7 is a diagram illustrating the relationship between velocity and delay time in the third embodiment.
  • the horizontal axis is the velocity value, which is a value used as a calculation parameter in the drive signal generation section 145 to generate the key drive signal.
  • the vertical axis corresponds to the delay time when the key driving device 42 drives the key 12 based on the key driving signal.
  • the velocity value is obtained, for example, from information included in the performance data used by the drive signal generation section 145. More specifically, the velocity value may be obtained by calculation from a key position signal or a hammer position signal included in the performance data, or, if the performance data includes sound production control information, it may be acquired from that information. In this example, the velocity takes values from "1" to "127".
  • when the velocity is smaller than Vt, the drive signal generation section 145 corrects it to a larger value.
  • FIG. 8 is a diagram illustrating the relationship between velocity and correction value in the third embodiment.
  • in this case, a correction value obtained by increasing the velocity is used as the calculation parameter in the drive signal generation section 145. The correction value is set to change from "Va" to "Vt" as the velocity changes from "1" to "Vt".
  • by this correction, the delay time can be reduced when the velocity is small. Since the correction value is larger than the value before correction, the sound will be louder than expected; on the other hand, compared to the effect of reducing the delay time, the change in sound volume has a smaller impact on the listener.
  • the correction value may be set to "Vb" which is larger than "Va" when the velocity is "1".
  • when playing in an ensemble, it is preferable to minimize the delay as much as possible. On the other hand, if the player is not performing in an ensemble but only listening to sound generated based on performance data, it is preferable that the delay time does not change over the entire range of input values rather than being reduced. Therefore, in such a case, the drive signal generation section 145 may generate a key drive signal that intentionally delays the timing at which the key 12 starts to be pressed in a range where the velocity value is large (a range equal to or greater than a predetermined value). At this time, the time for delaying the timing may be gradually reduced as the input value decreases from "Vt" to "1".
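  • The velocity correction of FIG. 8 can be sketched as a piecewise-linear mapping: inputs of Vt or more pass through unchanged, while inputs below Vt are raised onto a line from Va (at velocity 1) up to Vt. The numeric values of Vt and Va below are arbitrary examples.

```python
def correct_velocity(v, vt=40, va=25):
    """Raise small velocity values so the key drive device moves the key fast enough.

    Inputs of vt or more are returned unchanged; inputs below vt are mapped
    linearly from va (at velocity 1) up to vt (at velocity vt). The numbers
    vt=40 and va=25 are arbitrary examples, not values from the embodiment.
    """
    if v >= vt:
        return v
    return round(va + (vt - va) * (v - 1) / (vt - 1))

print([correct_velocity(v) for v in (1, 10, 20, 40, 80)])   # [25, 28, 32, 40, 80]
```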
  • <Fourth embodiment> In the fourth embodiment, the communication base T1 is a large hall where an orchestra can perform.
  • the communication base T2 is a small studio like a soundproof room.
  • an orchestra performs at the communication base T1
  • a piano is played at the communication base T2. That is, there is no piano player in the orchestra at the communication base T1, but there is a piano player at the remote communication base T2.
  • the orchestra performance sound at the communication base T1 is transmitted to the communication base T2.
  • the player at the communication base T2 plays the automatic performance piano 1 while listening to the performance sound received from the communication base T1.
  • the content of the user's performance is transmitted as performance data from the automatic performance piano 1 at the communication base T2 to the automatic performance piano 1 at the communication base T1. Therefore, the automatic performance piano 1 at the communication base T1 generates sound so as to reproduce the performance at the communication base T2. That is, at the communication base T1, even if there is no piano player, the sound of the piano can be heard together with the sound of the orchestra.
  • the sense of presence of the orchestral performance at the communication base T1 can be conveyed to the player of the automatic performance piano 1 at the communication base T2. The configuration for realizing this will be described in detail below.
  • FIG. 9 is a diagram illustrating the configuration of the environment collection device at the communication base T1 in the fourth embodiment.
  • a conductor stand CS, an automatic performance piano 1, and a chair 50 are installed on the stage ST1 of the hall.
  • the vibration measurement plate 821 is placed at a location on the stage ST1 where the chair 50 is installed. Vibration measurement plate 821 measures vibrations transmitted via stage ST1 and outputs a collection signal indicating the measured vibrations.
  • the vibration measurement plate 822 is placed at a location on the stage ST1 where the automatic performance piano 1 is installed. Vibration measurement plate 822 measures vibrations transmitted via stage ST1 and outputs a collection signal indicative of the measured vibrations.
  • Microphone 823 is placed near chair 50 in this example. Microphone 823 collects the arriving sound and outputs a collected signal indicative of the sound.
  • FIG. 10 is a diagram illustrating the configuration of the environment providing device at the communication base T1 in the fourth embodiment.
  • an automatic performance piano 1 and a chair 50 are installed on the stage ST2 of the studio.
  • An environment providing device 88 including vibration generating plates 881 and 882 and a speaker 883 is provided on the stage ST2.
  • the vibration generating plate 881 is placed at the location where the chair 50 is installed on the stage ST2.
  • the vibration generating plate 881 vibrates based on the control signal.
  • the vibration generating plate 882 is placed at the location where the automatic performance piano 1 is installed on the stage ST2.
  • the vibration generating plate 882 vibrates based on the control signal.
  • the speaker 883 is placed near the chair 50 (near the performer). Speaker 883 emits sound based on the control signal.
  • vibrations accompanying the performance are transmitted to the vibration measurement plates 821 and 822 via the stage ST1, and the sound of the performance is collected by the microphone 823.
  • the collected signals output from each of the vibration measurement plates 821 and 822 and the microphone 823 are transmitted to the communication base T2 as environmental data.
  • the automatic performance piano 1 is played at the communication base T2
  • the content of the performance is transmitted as performance data to the communication base T1.
  • the automatic performance piano 1 is driven to produce sound based on the performance data from the communication base T2. That is, the automatic performance piano 1 at the communication base T1 is driven according to the performance on the automatic performance piano 1 at the communication base T2.
  • speaker 883 generates sound based on the environmental data from communication base T1. This sound is the sound collected by the microphone 823, and corresponds to the sound of an orchestra performance at the communication base T1.
  • the vibration generating plate 881 and the vibration generating plate 882 are driven to vibrate based on the environmental data from the communication base T1.
  • the vibration at the vibration generating plate 881 corresponds to the vibration measured by the vibration measuring plate 821 at the communication base T1. That is, vibrations that are transmitted to the chair 50 at the communication base T1 are also transmitted to the chair 50 at the communication base T2.
  • the vibration at the vibration generating plate 882 corresponds to the vibration measured by the vibration measuring plate 822 at the communication base T1. That is, vibrations that are transmitted to the automatic performance piano 1 at the communication base T1 are also transmitted to the automatic performance piano 1 at the communication base T2. Therefore, the performer at the communication base T2 can feel as if he or she were playing at the communication base T1.
  • the vibration measurement plates 821, 822 and microphone 823 at the communication base T1 will also collect the components of the piano sound when the automatic performance piano 1 is driven. Therefore, signal processing for reducing the piano sound component is performed on the path up to the drive of the vibration generating plates 881, 882 and the speaker 883 at the communication base T2.
  • the piano sound component can be generated from a signal for driving the automatic performance piano 1 at the communication base T1. Therefore, this signal processing may be executed by the environmental data generation unit 121, for example, when the environmental data transmitted from the communication base T1 is generated. By doing so, the influence of the performance at the communication base T2 can be reduced in the environment provided by the environment providing device 88 at the communication base T2.
  • <Fifth embodiment> In the fifth embodiment, a configuration will be described in which, when the environment providing device 88 displays images of performers at other communication bases, it also displays its own image (the image of the performer transmitted to the other communication bases).
  • FIG. 11 is a diagram illustrating the configuration of the control signal generation section in the fifth embodiment.
  • the control signal generation section 185B in the fifth embodiment includes a self-portrait acquisition section 1851, a remote image acquisition section 1853, and an image composition section 1855.
  • the self-portrait acquisition unit 1851 acquires self-portrait information regarding images including the performer based on the collection signal output from the environment collection device 82. Based on the environmental data received by the environmental data receiving section 183, the remote image acquisition section 1853 acquires remote image information regarding images including the performer collected by the environment collection device 82 of another communication base. Both the self-portrait information and the remote image information are images including the player and the keyboard portion of the keyboard instrument 10.
  • the image composition unit 1855 generates a composite image based on self-portrait information and remote image information.
  • the composite image is an image in which an image region of the performer included in the remote image information is extracted and superimposed on the image of the self-portrait information.
  • the image synthesis unit 1855 identifies the keyboard part from the images in each of the self-portrait information and the remote image information, and determines the superimposition position of the performer's image in the remote image information so that the keyboard parts match each other.
  • the image synthesis unit 1855 superimposes, for example, an image obtained by applying a transformation matrix to the remote image information so as to maximize the cross-correlation between the keyboard parts, on the image of the self-portrait information.
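  • The alignment step above can be sketched, in simplified form, as a search for the translation that maximizes the cross-correlation between the two keyboard regions. The embodiment speaks more generally of a transformation matrix; restricting it to a pure shift and using FFT-based correlation are simplifications made here for illustration.

```python
import numpy as np

def best_shift(local_kb, remote_kb):
    """Return the integer (dy, dx) shift that, applied to the remote keyboard
    image with np.roll, best aligns it with the local keyboard image, found by
    maximizing the circular cross-correlation computed with FFTs."""
    a = local_kb - local_kb.mean()
    b = remote_kb - remote_kb.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b, s=a.shape))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:      # convert wrap-around indices to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

local = np.zeros((64, 256)); local[30:40, 100:200] = 1.0   # fake keyboard stripe
remote = np.roll(local, (3, -7), axis=(0, 1))
print(best_shift(local, remote))   # (-3, 7): undoes the (3, -7) displacement
```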
  • the image composition unit 1855 generates a control signal for displaying the composite image and outputs it to the environment providing device 88.
  • the environment providing device 88 may be a display that shows the composite image, or may be a projector that projects at least a portion of the image of the remote image information onto the keyboard portion. When projecting using a projector, a predetermined transformation matrix depending on the position of the keyboard portion may be applied to the remote image information.
  • the image composition section 1855 determines whether the images of the two performers have touched each other, and if they have, modifies the composite image so that the area corresponding to the contact can be identified, for example by making it emit light.
  • the light emission upon contact may be limited to a specific part (for example, the arm or hand).
  • the performers may be made to recognize that the images of the two performers have come into contact with each other using something other than the images. For example, if the images of two performers touch each other, the seat of the chair used by the performers may be vibrated.
  • a configuration for vibrating the seat surface of the chair is included in the environment providing device 88, and is controlled by a control signal from the control signal generation unit 185B. In this way, even if two performers perform at different communication bases, they can experience the situation as if they were actually performing at the same location.
  • the remote image acquisition section 1853 may acquire the above-mentioned motion information (remote motion information) instead of remote image information.
  • the image synthesis unit 1855 may generate an image that resembles the performer using the motion information, and synthesize it with the image of the self-portrait information to generate a composite image.
  • the image composition section 1855 may take into account the time lag (communication delay, etc.) between the self-portrait information and the remote image information; when generating the composite image, it may delay the image of the self-portrait information before superimposing the image of the remote image information. Alternatively, a future predicted image corresponding to the delay time may be generated from the image of the remote image information or the remote motion information and superimposed on the image of the self-portrait information.
  • <Sixth embodiment> FIG. 12 is a diagram illustrating an example of screen display in the sixth embodiment.
  • an automatic performance piano 1a corresponding to the communication base T1 and an automatic performance piano 1b corresponding to the communication base T2 are arranged in one room.
  • a screen SC on which an image is projected by a projector PJ is arranged between the automatic performance piano 1a and the automatic performance piano 1b.
  • an image corresponding to the performance data communicated between the automatic performance piano 1a and the automatic performance piano 1b is displayed on the screen SC.
  • the displayed image is an image related to the sound according to the content of the performance; in this example, it is a band-shaped image displayed at a position determined by the pitch and the timing of sound generation, with a length corresponding to the length of the sound.
  • the band-shaped image sba is an image showing a sound according to the performance content on the automatic performance piano 1a
  • the band-shaped image sbb is an image showing a sound according to the performance content on the automatic performance piano 1b.
  • the band images sba and sbb are displayed in a flowing manner depending on the direction of communication.
  • when a key 12 is played on the automatic performance piano 1a, a band-shaped image sba corresponding to the sound of that key 12 is displayed on the screen SC so as to move toward the automatic performance piano 1b.
  • when the band-shaped image sba reaches the automatic performance piano 1b side, a sound corresponding to the image may be generated at the automatic performance piano 1b.
  • in this case, the automatic performance piano 1b may delay driving the key 12 until that timing is reached. The relationship between the automatic performance piano 1a and the automatic performance piano 1b remains the same even if they are interchanged.
  • Such a projector PJ and screen SC can also be said to be an example of the environment providing device 88.
  • the environment providing device 88 is shared by the two automatic performance pianos 1a and 1b.
  • the present invention is not limited to the embodiments described above, and includes various other modifications.
  • the embodiments described above have been described in detail to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to having all the configurations described.
  • Some modified examples will be described below.
  • Although modifications of the first embodiment are described below, they can also be applied as modifications of the other embodiments. It is also possible to combine a plurality of modifications and apply them to each embodiment.
  • the drive signal generation section 145 is not limited to predicting sound production control information such as note-on from the change in the amount of depression of the key 12 indicated by the key position signal in the performance data; it may also predict the sound production control information using other information. For example, the drive signal generation section 145 may extract the movement of a finger toward the key 12 from the image of the performer included in the environmental data, and predict the sound production control information by estimating the subsequent movement when the key 12 is pressed based on the change in the finger movement. The information indicating the movement of the finger is also information regarding the depression of the key 12. Therefore, the finger image or finger movement may be obtained by the sensor 30. In this case, information indicating finger movements may be transmitted as performance data.
  • the sensor 30 may have a configuration that detects contact with or proximity to the key 12.
  • when the performance data generation section 131 generates and transmits performance data based on this detection result, another communication base can recognize that the key 12 is about to be pressed before the key 12 actually starts being pressed. This may improve the prediction accuracy of the sound production control information.
  • a trained model may be generated to correspond to each performer.
  • Predicting the sound production control information in this way is not limited to being applied to the case of controlling the automatic performance piano 1 at another communication base, but may be used for various interlocking operations.
  • the present invention can be applied to a configuration in which a keyboard device and a sound source device are connected wirelessly. For example, if a note-on is generated by pressing a key on a keyboard device and then transmitted to a sound source device, the timing of sound generation will be delayed due to communication delay. On the other hand, by transmitting the key movement to the sound source device before a note-on occurs on the keyboard device, the influence of communication delay can be reduced by predictive calculation using the movement.
  • the key position signal generated in response to the depression of a key 12 on the automatic performance piano 1 may be used to control other keys 12 on the same automatic performance piano 1. For example, control can be performed such that, in response to the depression of a key 12, a key 12 corresponding to a tone one octave higher than that key 12 is linked.
  • the interlocking key 12 is not limited to a tone one octave higher, but may be any predetermined tone.
  • the predetermined tone may be determined relative to the pitch of the depressed key 12, or may be determined absolutely regardless of the pitch. At this time, by using the key position signal instead of the sound generation control information, it is possible to reduce the time difference between pressing the key 12 to be played and driving the interlocking key 12.
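As one way to picture the interlocking described above, the following hedged sketch maps a key position signal onto the key to be driven in linkage, either a fixed interval above the played key or an absolute target key; the MIDI-style note numbering, the 88-key range check, and the names are assumptions.

```python
OCTAVE = 12  # semitone offset for the linked key (one octave up)

def interlocked_key_targets(key_number, depth, offset=OCTAVE, absolute_target=None):
    """Return (target_key, depth) pairs describing how a linked key could be
    driven in response to a key position signal on the same instrument.

    key_number      -- MIDI-style note number of the depressed key
    depth           -- current depression amount of that key
    offset          -- relative interval for the linked key (default: one octave up)
    absolute_target -- if given, drive this fixed key regardless of the played pitch
    """
    target = absolute_target if absolute_target is not None else key_number + offset
    if not 21 <= target <= 108:          # keep the target inside an 88-key range
        return []
    # The linked key mirrors the raw depression amount, so it starts moving as
    # soon as the played key does, without waiting for a note-on.
    return [(target, depth)]
```

Because the raw depression amount is forwarded rather than a note-on message, the linked key can begin moving with almost no lag behind the played key, which is the point made in the item above.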
  • the data to be recorded may be data based on the sound production control information, or may be data corresponding to signals output from the sensor 30, such as a key position signal, a hammer position signal, and a pedal position signal.
  • the performance data transmitting unit 133 may include, in the performance data, information that sets, for each key 12, whether or not the corresponding key 12 of the player piano 1 at another communication base is to be driven, and transmit the data.
  • the presence or absence of driving may be set by the performer at the transmission side communication base during the performance, or may be predetermined based on a specific key or range.
  • the automatic performance piano 1 does not drive the key 12 but drives the vibrator 47 in response to a key position signal regarding the key 12 which is set not to be driven.
  • the environment collection device 82 may include a sensor attached to the performer, for example, a sensor that measures the performer's breathing.
  • the player piano 1 may transmit the measurement results of the player's breathing to other communication bases as environmental data, and information according to changes in breathing may be displayed on the display of the environment providing device 88 at the other communication base.
  • a performer's breathing is closely related to the performance movement. For example, just before starting a performance, performers often take a deep breath. Accordingly, when a deep inhalation is measured, information indicating this, or the time until the performance is expected to begin, may be displayed on the display. Since the time from when a player takes a deep breath to the start of a performance differs from player to player, this time may be set to vary depending on the player. In predicting this time, a trained model may be used that has learned, by machine learning, the correlation between the timing of a deep breath and the time until the start of the performance.
  • the environment providing device 88 may be a small movable device capable of providing various environments, and may be, for example, in a humanoid shape imitating some kind of character.
  • the environment providing device 88 may be a humanoid robot that moves its arms and hands based on control signals.
  • the environment providing device 88 may have a shape that can be attached to the performer (a wristwatch type, a shoulder type, a neck type, etc.).
  • the configuration that provides various environments may be the above-mentioned display or speaker; a heat source, cooling source, fan, or the like for controlling the temperature; or a light, a projector, or the like for controlling the brightness, color, and other aspects of the room.
  • the environment providing device 88 may include, for example, a structure such as a robot arm for changing the position of a heat source or the like, or the position of the heat source may be substantially changed by arranging a plurality of heat sources and driving one of them.
  • the heat source may be used, for example, to recreate the position of the performer at another communication location.
  • the environment collecting device 82 only needs to include a sensor compatible with the environment providing device 88, and may include, for example, a temperature sensor, an air volume sensor, an illuminance sensor, and the like.
  • the environment providing device 88 may be able to localize a sound image or reproduce a predetermined sound field by including a plurality of speakers. At this time, predetermined reverb processing or filter processing such as FIR may be added to the sound signal included in the environmental data.
  • the environment collecting device 82 may collect information for reproducing the sound field characteristics of the room and transmit the collected information to other communication bases as environmental data. Thereby, the environment providing device 88 at the communication base on the receiving side may reproduce the sound field of the room at the communication base on the transmitting side based on the information included in the environmental data. At this time, the sound field of the room at the transmitting side communication base may be more accurately reproduced by including signal processing for canceling the sound field characteristics of the room at the receiving side communication base. Processing to reproduce such a sound field may be added to the excitation drive signal.
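The FIR-style processing mentioned above could look like the following sketch, assuming the transmitting room's response is available as an impulse-response sample array; it blends the dry received signal with a convolved copy. Cancellation of the receiving room's own characteristics, also mentioned above, is omitted, and the function name and wet/dry ratio are assumptions.

```python
import numpy as np

def apply_remote_room(signal, remote_impulse_response, wet=0.5):
    """Convolve the received sound signal with the remote room's impulse
    response (an FIR filter) and mix it with the dry signal.
    All inputs are 1-D float sample buffers at the same sample rate."""
    wet_signal = np.convolve(signal, remote_impulse_response, mode="full")[: len(signal)]
    return (1.0 - wet) * np.asarray(signal, dtype=float) + wet * wet_signal
```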
  • a common metronome synchronized at each communication base may be realized using sound, light, vibration, etc.
  • time information used in a satellite positioning system such as a GPS signal may be used, or a time synchronization technique based on NTP (Network Time Protocol) may be used.
  • the BPM value may be set and the beat start timing may be determined based on time information.
  • the BPM value may be determined based on a preset performance piece, or may be set by the performer.
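A minimal sketch of how each base could derive the same beat grid from a shared clock, an agreed BPM, and an agreed start time; the use of time.time() as the NTP/GPS-disciplined clock and the function name are assumptions. Each base would sleep until the returned time and then flash, vibrate, or otherwise mark the beat, without exchanging any beat messages.

```python
import time

def next_beat_time(bpm, start_time, now=None):
    """Return the absolute time (seconds on the shared clock) of the next beat,
    given an agreed BPM and an agreed beat-grid start time."""
    if now is None:
        now = time.time()                 # assumed to be NTP/GPS-disciplined
    beat_period = 60.0 / bpm
    beats_done = int((now - start_time) // beat_period)
    return start_time + (beats_done + 1) * beat_period
```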
  • any one of a plurality of communication bases may be used as a metronome reference.
  • the beat position may be analyzed from the performance content at the reference communication base and used as a metronome.
  • the beat position that can be matched and handled by the largest number of communication bases may be used as a metronome at other communication bases.
  • predetermined data (data including sound production control information, sound data, video data, etc.)
  • the predetermined data may be obtained by recording the performance. For example, a drum rhythm pattern may be played by setting a metronome.
  • the player may be made to recognize the beat of the metronome by vibrating the movable components of the automatic performance piano 1.
  • the drive signal generation unit 145 may generate a drive signal to move the pedal 13 a little with each beat of the metronome. If the pedal 13 is a damper pedal, the amount by which the pedal 13 is moved is so small that the damper 18 does not separate from the strings 15.
  • the structure that moves with each beat of the metronome is not limited to the pedal 13, but may be any key 12. In this case, it is preferable that the key 12 be pressed down only to the extent that it does not produce a string strike or a note-on.
  • the performance data may include time information from the time of its transmission.
  • the performance data may be adjusted according to the time information.
  • the timing at which performance data is received from multiple other communication bases may differ depending on the communication delay. By shifting the received performance data on the time axis so that the time information is aligned, it is possible to drive the automatic performance piano 1 as if the delay amount were the same for all of them, even if the data were not received at the same timing.
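One way to realize the time-axis alignment described above, sketched under the assumption that every received event carries its transmission time on the shared clock: each event is scheduled at its send time plus a common playback offset, so differing network delays collapse into one uniform latency. The offset must exceed the worst expected delay; names are assumptions.

```python
import heapq
from itertools import count

def schedule_events(received_events, playback_offset):
    """received_events: iterable of (send_time, event) pairs from any number of
    remote bases, where send_time is stamped on the shared clock.
    Returns a min-heap of (play_time, seq, event) so that every event is
    rendered playback_offset seconds after it was sent, regardless of how long
    its individual network path actually took."""
    heap = []
    tie = count()  # tie-breaker so events never need to be compared directly
    for send_time, event in received_events:
        heapq.heappush(heap, (send_time + playback_offset, next(tie), event))
    return heap
```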
  • the drive signal generation unit 145 may generate the drive signal such that the longer the delay time, the smaller the velocity value.
  • the drive signal generation unit 145 may generate more reverberation as the delay time increases. In this way, the automatic performance piano 1 can realize sound production in which the length of the delay time is expressed as an effect of distance. In other words, a long delay time can give the listener the feeling that the performance is being performed at a distant location.
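A sketch of the distance-like rendering described in the two items above; the specific curves mapping delay to velocity attenuation and reverb amount are assumptions chosen only for illustration.

```python
def distance_effect(velocity, delay_s, max_delay_s=0.5):
    """Scale the velocity down and raise the reverb mix as the delay grows, so
    a long communication delay is rendered as distance rather than as a glitch.
    Returns (scaled_velocity, reverb_mix)."""
    ratio = min(delay_s / max_delay_s, 1.0)
    scaled_velocity = max(1, int(velocity * (1.0 - 0.5 * ratio)))  # never below 1
    reverb_mix = 0.1 + 0.6 * ratio                                  # 10% .. 70% wet
    return scaled_velocity, reverb_mix
```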
  • an image may be displayed on the display that visually indicates the magnitude of the delay time.
  • an image visually indicating the magnitude of the delay time may be presented using AR (Augmented Reality).
  • images related to each communication base may be presented by converting the delay time into a position/distance relationship in the AR space.
  • the control device 20 may calculate the degree of correlation by comparing performance data between a plurality of communication bases, and display the degree of correlation on the display.
  • the degree of correlation may be calculated using signal processing or DNN (Deep Neural Network), for example.
  • the degree of correlation may be calculated using performance data that has been adjusted so that time information is consistent between a plurality of communication bases.
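As a simple signal-processing stand-in for the degree of correlation (a DNN-based estimator, as mentioned above, would replace it), this sketch bins the time-aligned note-on times from two bases into a grid and takes their Pearson correlation; the bin width, window length, and names are assumptions.

```python
import numpy as np

def correlation_degree(onsets_a, onsets_b, bin_s=0.05, duration_s=10.0):
    """onsets_a / onsets_b: note-on times (seconds, already aligned to the
    shared clock) from two communication bases over the same window.
    Returns a value in [-1, 1]; higher means the performances move together."""
    bins = np.arange(0.0, duration_s + bin_s, bin_s)
    hist_a, _ = np.histogram(onsets_a, bins=bins)
    hist_b, _ = np.histogram(onsets_b, bins=bins)
    if hist_a.std() == 0 or hist_b.std() == 0:
        return 0.0                     # no variation, correlation undefined
    return float(np.corrcoef(hist_a, hist_b)[0, 1])
```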
  • the control device 20 may analyze the received performance data to identify chords or beat positions, and display the identified information on the display. At this time, the most likely chord and beat position among the performance data at the plurality of communication bases may be displayed on the display.
  • the keyboard may be illuminated with light so that the player can recognize the keys 12 that correspond to the constituent notes of this chord.
  • the control device 20 analyzes the chord from the received performance data, and if the likelihood of the chord is higher than a predetermined value, it identifies it as the current chord.
  • the control device 20 may control the vibrator 47 so that it is not driven by a playing operation on any key 12 other than those corresponding to the constituent notes of the chord.
  • the control device 20 analyzes the beat position from the received performance data, and if the likelihood of the beat position is higher than a predetermined value, it identifies it as the current beat position.
  • when the key 12 is pressed within a predetermined time range before the next predicted beat position is reached, the control device 20 performs control so that the sound is generated at that beat position.
  • that is, the sound generation by the vibrator 47 is delayed until the predicted beat position. In this way, the performance sound may be matched to the beat position.
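A minimal sketch of the beat-matching behaviour described above, assuming the next predicted beat time is already known: a key press landing within a short window before the beat has its vibrator sound deferred to the beat, and otherwise sounds immediately. The window length and names are assumptions.

```python
def quantize_to_beat(press_time, next_beat_time, window_s=0.12):
    """If the key press falls inside the window just before the predicted beat,
    return the beat time as the moment to drive the vibrator; otherwise return
    the press time unchanged."""
    if 0.0 <= next_beat_time - press_time <= window_s:
        return next_beat_time
    return press_time
```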
  • the control device 20 specifies the volume from the received performance data, and also specifies the volume of the user's performance on the automatic performance piano 1.
  • the volume is specified, for example, by the average value of velocity over a predetermined period of time in the past.
  • the drive signal generation unit 145 generates a key drive signal or an excitation drive signal by adjusting the volume of the received performance data so that it approaches the volume of the player's own performance. When adjusting the volume, the volume may be changed gradually rather than suddenly. In this way, the volume balance in the ensemble can be adjusted.
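One possible shape for the gradual volume balancing described above (class name, window size, and step size are assumptions): running averages of the local and remote velocities drive a gain that is eased toward the target a small step at a time, so the balance changes gradually rather than suddenly.

```python
from collections import deque

class VolumeBalancer:
    """Eases the gain applied to the remote part toward the ratio between the
    local player's average velocity and the remote part's average velocity."""

    def __init__(self, window=32, step=0.05):
        self.local = deque(maxlen=window)    # recent local velocities
        self.remote = deque(maxlen=window)   # recent remote velocities
        self.gain = 1.0                      # multiplier applied to remote velocities
        self.step = step                     # fraction of the remaining gap closed per note

    def note_local(self, velocity):
        self.local.append(velocity)

    def adjust_remote(self, remote_velocity):
        self.remote.append(remote_velocity)
        if self.local and self.remote:
            target = (sum(self.local) / len(self.local)) / (sum(self.remote) / len(self.remote))
            self.gain += self.step * (target - self.gain)   # change gradually
        return max(1, min(127, int(remote_velocity * self.gain)))
```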
  • the volume balance may be set in advance so that one of the volumes is relatively loud.
  • when the control device 20 causes the vibrator 47 to generate sound in response to a performance operation on the key 12, the control device 20 may delay the timing of the sound generation relative to the depression of the key 12. At this time, performance data transmitted to other communication bases and performance data transmitted from other communication bases are not delayed. Thereby, the performer plays by pressing down the key 12 early in consideration of the delay time, so the influence of communication delay on the ensemble performance can be reduced.
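A sketch of the local-monitoring delay described above, assuming a fixed delay constant tuned to the network delay: the performance event is sent to the other bases immediately, while the local vibrator sound is deferred slightly. The constant and the callback names are assumptions.

```python
import threading

MONITOR_DELAY_S = 0.05   # assumed local monitoring delay; tuned to the network delay

def handle_key_press(event, send_to_remote, sound_locally):
    """Send the performance event to other bases right away, but delay the
    local vibrator sound by MONITOR_DELAY_S seconds."""
    send_to_remote(event)                                            # not delayed
    threading.Timer(MONITOR_DELAY_S, sound_locally, args=(event,)).start()
```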
  • the control device 20 is not limited to a device that controls the sensor 30 and the drive device 40; it may be a device that does not include the components related to them, and may be composed of a desktop computer, a tablet computer, or the like.
  • control device 20 may convert the performance sound into performance data and transmit it to another communication base.
  • the performance sounds may be collected by a microphone, and the control device 20 may generate performance data by analyzing constituent sounds included in the collected performance sounds and converting them into sound production control information. Such processing can also be applied to musical instruments other than pianos.
  • the environment collecting device 82 may include a sensor that detects the opening and closing of the keyboard lid 11, a sensor that detects the seating of the performer on the chair, and the like.
  • the environment providing device 88 may have a display that displays the opening and closing of the keyboard lid 11 and the seating of the performer on the chair.
  • the environment providing device 88 may have a structure that opens and closes the keyboard lid 11 in response to a control signal. In this case, in response to the opening and closing of the keyboard lid 11 at a specific communication base, the keyboard lids 11 at other communication bases may be linked.
  • the keyboard instrument 10 in the automatic performance piano 1 is not limited to an acoustic piano such as a grand piano, but may be an electronic keyboard instrument.
  • the electronic keyboard instrument may be a keyboard device having a structure corresponding to the keys 12, or may be a keyboard device in which the keys 12 have a sheet-like structure.
  • in the case of a keyboard device having a sheet-like structure, it can be placed on the floor and played by stepping on it with the feet, so it can be played even in situations where the hands cannot be used.
  • the playable range may be narrow.
  • a plurality of keyboard devices having different ranges set in advance may be used to allow a plurality of people to perform.
  • a keyboard device having a sheet-like structure may be placed on the back surface of a side table.
  • a rotation mechanism may be provided in the support member that supports the side table so that the side table can be flipped, with either its front surface or its back surface facing up.
  • At least part of the functions of the control device 20 may be provided as a plug-in in software that implements the video conference system.
  • the network NW that connects the communication bases may be a dedicated line realized by an optical cable or the like.
  • the environment collecting device 82 and the environment providing device 88 may include a configuration for detachably attaching them to the automatic performance piano 1.
  • the player piano 1 may also include a structure for attaching the environment collecting device 82 and the environment providing device 88. In this case, the environment collecting device 82 or the environment providing device 88 may be connected to the interface 26 by being attached to the automatic performance piano 1.
  • a first transmitting unit that transmits first performance data, including the performance content for the keyboard instrument at the first communication base, to the second communication base; a first receiving unit that receives second performance data from the second communication base; and a first generating unit that generates a drive signal for producing sound according to the second performance data and outputs it to the sound production device at the first communication base;
  • a control device wherein at least one of the first performance data and the second performance data includes a key position signal indicating a depression amount of a key on the keyboard instrument.
  • the sounding device may include a vibrator connected to a soundboard of the keyboard instrument.
  • the sound according to the second performance data may be generated by vibration of the vibrator according to the drive signal.
  • the sounding device may include a key of the keyboard instrument, a hammer interlocked with the key, and a string struck by the hammer.
  • the sound according to the second performance data may be generated by driving the key according to the drive signal.
  • the drive signal may be a signal for driving the key so as to reproduce the depression amount according to the key position signal.
  • a second transmitting unit that acquires first environmental data according to information on the surrounding environment collected by the environment collecting device at the first communication base and transmits it to the second communication base; a second receiving unit that receives second environmental data from the second communication base; and a second generating unit that generates a control signal for providing a surrounding environment according to the second environmental data and outputs it to the environment providing device at the first communication base.

Abstract

A control device according to one embodiment includes a first transmission unit, a first reception unit, and a first generation unit. The first transmission unit transmits, to a second communication base, first performance data including the contents of performance for a keyboard instrument at a first communication base. The first reception unit receives second performance data from the second communication base. The first generation unit generates a driving signal for producing sound corresponding to the second performance data, and outputs the driving signal to a sound production device at the first communication base. The first performance data and/or the second performance data includes key position signals indicating pressing amounts of keys in the keyboard instrument.

Description

Control device

The present invention relates to a control device.

Technology has been developed that allows multiple communication bases where musical instruments are played to be connected via a network, making it possible to perform in an ensemble even when the instruments are placed in remote locations. For example, Patent Document 1 discloses a technique for reducing the influence of communication delay in order to realize a comfortable ensemble performance.

Japanese Patent Application Publication No. 2005-195982

When performing in an ensemble at multiple communication bases, it is difficult for multiple performers to feel a sense of unity from various viewpoints, compared to when performing in an ensemble at the same location.

One of the purposes of the present invention is to enable a plurality of musicians playing in an ensemble to feel a sense of unity.

A control device in one embodiment includes a first transmitter, a first receiver, and a first generator. The first transmitter transmits first performance data including performance details for a keyboard instrument at the first communication base to the second communication base. The first receiving section receives the second performance data from the second communication base. The first generation section generates a drive signal for producing sound according to the second performance data, and outputs it to the sound production device at the first communication base. At least one of the first performance data and the second performance data includes a key position signal indicating the amount of depression of a key on the keyboard instrument.

According to the present invention, it is possible for a plurality of performers playing in an ensemble to feel a sense of unity.
FIG. 1 is a diagram for explaining the communication system configuration in the first embodiment.
FIG. 2 is a diagram illustrating the internal configuration of the automatic performance piano in the first embodiment.
FIG. 3 is a diagram illustrating the configuration of the control device in the first embodiment.
FIG. 4 is a diagram illustrating the configuration of the ensemble control function in the first embodiment.
FIG. 5 is a diagram illustrating the positional relationship between the vibrator and the pickup sensor in the second embodiment.
FIG. 6 is a diagram illustrating the configuration of the drive signal generation section in the second embodiment.
FIG. 7 is a diagram illustrating the relationship between velocity and delay time in the third embodiment.
FIG. 8 is a diagram illustrating the relationship between velocity and correction value in the third embodiment.
FIG. 9 is a diagram illustrating the configuration of the environment collecting device at the communication base T1 in the fourth embodiment.
FIG. 10 is a diagram illustrating the configuration of the environment providing device at the communication base T2 in the fourth embodiment.
FIG. 11 is a diagram illustrating the configuration of the control signal generation section in the fifth embodiment.
FIG. 12 is a diagram illustrating a display example of the screen in the sixth embodiment.
Hereinafter, one embodiment of the present invention will be described in detail with reference to the drawings. The embodiments shown below are merely examples, and the present invention should not be construed as being limited to these embodiments. The configurations described in each embodiment can also be applied to other embodiments. In the drawings referred to in the plurality of embodiments described below, the same parts or parts having similar functions are denoted by the same or similar reference signs (the same numeral followed by A, B, etc.), and repeated explanations thereof may be omitted. In order to clarify the explanation, the drawings may be schematic, with some components omitted.
<First embodiment> [Communication system]
FIG. 1 is a diagram illustrating the configuration of a communication system in one embodiment. The communication system includes a server 1000 connected to a network NW such as the Internet. Server 1000 includes a control unit such as a CPU, a storage unit, and a communication unit. The control unit provides a service for realizing an ensemble performance between communication bases by executing a predetermined program. The server 1000 controls communication between a plurality of communication bases connected to the network NW, and executes processing necessary for the automatic performance pianos 1 at each communication base to realize P2P type communication with each other. This processing may be realized by a known method. In FIG. 1, two communication bases T1 and T2 are illustrated, but the number is not limited to this, and even more communication bases may exist. In the following description, when the communication bases T1 and T2 are described without distinction, they are simply referred to as communication bases.
In this example, information related to the performance at each communication base is exchanged between the communication base T1 and the communication base T2 by P2P communication. Through this communication, an ensemble performance is realized between a plurality of communication bases. An automatic performance piano 1 is arranged at each communication base. In this example, an environment collecting device 82 and an environment providing device 88 are connected to the automatic performance piano 1.

The environment collecting device 82 includes a sensor for collecting information on the surrounding environment of the automatic performance piano 1, and outputs a collection signal indicating the measurement result of the sensor. The surrounding environment includes, for example, sound, light, vibration, temperature, air flow, and the like. Upon acquiring a control signal indicating a surrounding environment, the environment providing device 88 provides an environment based on the control signal. The environment collecting device 82 and the environment providing device 88 may be configured integrally. The environment providing device 88 may be provided depending on the number of other communication bases. For example, if there are three communication bases in addition to the communication base T1, three environment providing devices 88 may be provided at the communication base T1 corresponding to the respective communication bases. At least one of the environment collecting device 82 and the environment providing device 88 may be built into the automatic performance piano 1. Specific examples of the environment collecting device 82 and the environment providing device 88 will be described later.

The automatic performance piano 1 includes a keyboard instrument 10, a control device 20, a sensor 30, and a drive device 40.
[Automatic piano]
Next, the configuration of the automatic performance piano 1 will be explained.
FIG. 2 is a diagram illustrating the internal configuration of the automatic performance piano in the first embodiment. The keyboard instrument 10 of the automatic performance piano 1 corresponds to, for example, a grand piano. The keyboard instrument 10 includes a plurality of keys 12. The keyboard instrument 10 includes a hammer 14, a string 15, and a damper 18 provided corresponding to each key 12. In the automatic performance piano 1, the configuration provided corresponding to each key 12 is shown focusing on the configuration provided corresponding to the one key 12 shown in FIG. 2. Therefore, descriptions of the configurations provided corresponding to the other keys 12 are omitted. Some components, such as the damper 18, may not be provided for some keys 12.

The keyboard instrument 10 includes a plurality of pedals 13. The plurality of pedals 13 are, for example, a damper pedal, a shift pedal, and a sostenuto pedal. In the automatic performance piano 1, the configuration provided corresponding to each pedal 13 is shown focusing on the configuration provided corresponding to the one pedal 13 shown in FIG. 2. Therefore, descriptions of the components provided corresponding to the other pedals 13 are omitted. The keyboard instrument 10 further includes a keyboard lid 11, a bridge 16, a soundboard 17, a straight post 19, and the like.

The sensor 30 includes a key sensor 32, a pedal sensor 33, and a hammer sensor 34. The key sensor 32 is provided corresponding to each key 12 and outputs a measurement signal according to the behavior of the key 12 to the control device 20. In this example, the key sensor 32 outputs a measurement signal according to the position (depression amount) of the key 12 to the control device 20. The position of the key 12 may be measured as a continuous quantity (with fine resolution), or may be measured by detecting that the key 12 has passed a predetermined position. The positions at which the key 12 is detected may be a plurality of positions within the depression range of the key 12 (the range from the rest position to the end position).

The hammer sensor 34 is provided corresponding to each hammer 14 and outputs a measurement signal according to the behavior of the hammer 14 to the control device 20. In this example, the hammer sensor 34 measures the position (rotation amount) of the hammer shank immediately before the hammer 14 strikes the string 15, and outputs a measurement signal according to the measurement result to the control device 20. The position of the hammer shank may be measured as a continuous quantity (with fine resolution), or may be measured by detecting that the hammer shank has passed a predetermined position. The positions at which the hammer shank is detected may be a plurality of positions within the range immediately before the hammer 14 strikes the string 15.

The pedal sensor 33 is provided corresponding to each pedal 13 and outputs a measurement signal according to the behavior of the pedal 13 to the control device 20. In this example, the pedal sensor 33 outputs a measurement signal according to the position (depression amount) of the pedal 13 to the control device 20. The position of the pedal 13 may be detected as a continuous quantity (with fine resolution), or may be detected when the pedal 13 passes a predetermined position. The positions at which the pedal 13 is detected may be a plurality of positions within the depression range of the pedal 13 (the range from the rest position to the end position).
The drive device 40 includes a key drive device 42, a pedal drive device 43, a stopper 44, a vibrator 47, and a damper drive device 48. The key drive device 42 is provided corresponding to each key 12, and drives the key 12 so as to press it down under control using a drive signal from the control device 20. This mechanically reproduces the same situation as when the player presses the key 12. The pedal drive device 43 is provided corresponding to each pedal 13, and drives the pedal 13 so as to press it down under control using a drive signal from the control device 20. This mechanically reproduces the same situation as when the player depresses the pedal 13. The damper drive device 48 is provided corresponding to each damper 18, and drives the damper 18 so as to separate it from the string 15 under control using a drive signal from the control device 20. The damper drive device 48 may have a configuration that drives all the dampers 18 simultaneously.

The stopper 44 is driven under control from the control device 20 so as to be in either a position where it collides with the hammer shank (blocking position) or a position where it does not collide with the hammer shank (retracted position). When the stopper 44 is in the blocking position, the movement of the hammer shank is restricted and the hammer 14 does not strike the string 15 even if the key 12 is pressed down. When the stopper 44 is in the retracted position and the key 12 is pressed down, the hammer 14 interlocked with the key 12 strikes the string 15. When the string 15 is struck, the keyboard instrument 10 generates sound.

In this example, the vibrator 47 is supported by a support section connected to the straight post 19 so as to be in contact with the surface of the soundboard 17 opposite to the part where the bridge 16 is arranged. The vibrator 47 vibrates the soundboard 17 under the control of the control device 20 using a drive signal. For example, when a drive signal containing a piano sound is supplied from the control device 20, the vibrator 47 applies vibration corresponding to the drive signal to the soundboard 17. As a result, the piano sound is emitted from the soundboard 17. A plurality of vibrators 47 may be arranged so as to be in contact with the soundboard 17. Instead of the vibrator 47 that vibrates the soundboard 17, a speaker that emits sound may be used.

Sound production by the keyboard instrument 10 includes cases where it is realized by striking the string 15 with the hammer 14 and cases where it is realized by vibrating the soundboard 17 with the vibrator 47. Therefore, the keyboard instrument 10 can also be said to include a sound production device that generates a string-striking sound by driving the keys 12, and a sound production device that generates sound from the soundboard 17 by driving the vibrator 47. The driving of the key 12 and the driving of the vibrator 47 are realized by outputting drive signals to the drive device 40, as described later.
The configuration of the control device 20 will be explained. In this example, the control device 20 is attached to the keyboard instrument 10. The control device 20 does not need to be a device attached to the keyboard instrument 10, and may be, for example, a personal computer, a tablet computer, a smartphone, or the like.

FIG. 3 is a diagram illustrating the configuration of the control device in the first embodiment. The control device 20 includes a control section 21, a storage section 22, an operation panel 23, a communication section 24, a sound source section 25, and an interface 26. These components are connected via a bus 27.

The control section 21 is an example of a computer including a processor such as a CPU and a storage device such as a RAM. The control section 21 executes a program stored in the storage section 22 using the CPU (processor), and causes the control device 20 to realize functions for executing various processes. The functions realized by the control device 20 include an ensemble control function to be described later. This ensemble control function controls each part of the control device 20 and each component connected to the interface 26. The sensor 30 and the drive device 40 are connected to the interface 26. In this example, an external device 80 is further connected to the interface 26. The interface 26 transmits drive signals, control signals, etc. generated by the control section 21 to the target components, and receives measurement signals, collection signals, etc. from each target component.

The storage section 22 is a storage device such as a nonvolatile memory or a hard disk drive. The storage section 22 stores the program executed by the control section 21 and various data required when executing this program.

The operation panel 23 has operation buttons and the like that accept user operations. When a user operation is accepted via an operation button, an operation signal corresponding to the operation is output to the control section 21. The operation panel 23 may have a display screen. In this case, the operation panel 23 may be a touch panel in which a touch sensor is combined with the display screen.
The communication section 24 is a communication module that communicates with other devices wirelessly or by wire. In this example, the other device with which the communication section 24 communicates is the server 1000 or the automatic performance piano 1 at another communication base. In this example, performance data indicating the content of a performance on the keyboard instrument 10, environmental data, and the like are communicated between the communication bases.

The sound source section 25 generates a sound signal under control from the control section 21. The sound signal is used as a drive signal for driving the vibrator 47 (an excitation drive signal to be described later) or the like. In this example, the sound signal includes a signal representing the sound of a piano. The control section 21 controls the sound source section 25 to generate, for example, a sound signal representing a piano sound according to the performance content corresponding to the performance data. The performance data may be data generated based on the measurement signals generated by the sensor 30. The performance data may be, for example, MIDI-format data including sound production control information such as note-on, note-off, note number, and velocity, or may be information directly indicated by the measurement signals.

The interface 26 is an interface that connects the control device 20 and each external component. As described above, the components connected to the interface 26 include, in this example, the sensor 30, the drive device 40, and the external device 80. The interface 26 outputs the measurement signals output from the sensor 30 to the control section 21. The interface 26 outputs drive signals for driving each device to the drive device 40. The drive signals are generated by the ensemble control function 100, which will be described later. The interface 26 may include a headphone terminal or the like to which the sound signal representing the piano sound generated by the sound source section 25 is supplied.
[Ensemble control function]
Next, the ensemble control function realized by the control unit 21 executing the program will be described. The structure for realizing the ensemble control function is not limited to being realized by executing a program, and at least a part of the structure may be realized by hardware. The configuration for realizing the ensemble control function may be realized not by the control device 20 but by a device connected to the interface 26 (for example, a computer in which this program is installed).
In this example, when the ensemble control function is realized, the control section 21 controls the stopper 44 so that it is placed at the blocking position. In this case, when the user inputs a performance operation on the keys 12 and the pedal 13, the stopper 44 prevents the strings from being struck, while a sound signal corresponding to the performance operation (for example, a piano performance sound) is generated in the sound source section 25. The vibrator 47 vibrates the soundboard 17 using this sound signal, whereby the sound is emitted. The signal for driving the vibrator 47 is generated by the drive signal generation section 145 described below.

FIG. 4 is a diagram illustrating the configuration of the ensemble control function in the first embodiment. The ensemble control function 100 includes a performance data generation section 131, a performance data transmission section 133, a performance data reception section 143, and a drive signal generation section 145. In this example, the ensemble control function 100 further includes, as functions for sharing the surrounding environment of the automatic performance piano 1 between the communication bases in conjunction with the ensemble, an environment data generation section 121, an environment data transmission section 123, an environment data reception section 183, and a control signal generation section 185.

The performance data generation section 131 generates performance data indicating the content of the performance on the keyboard instrument 10 based on the measurement signals output from the sensor 30. In this example, the performance data includes the measurement signal output from the key sensor 32 (hereinafter referred to as the key position signal) and the measurement signal output from the pedal sensor 33 (hereinafter referred to as the pedal position signal). In this example, the key position signal includes the pitch of the pressed key 12 and the amount of depression of the key 12. If the key sensor 32 is a sensor that measures the amount of depression of the key 12 at four positions, the information on the amount of depression of the key 12 included in the key position signal indicates one of the four positions.

In this example, the pedal position signal includes the type of the pressed pedal 13 and the amount of depression of the pedal 13. If the pedal sensor 33 is a sensor that measures the amount of pedal depression at three positions, the information on the amount of depression of the pedal 13 indicates one of the three positions. The performance data may further include the measurement signal output from the hammer sensor 34 (hereinafter referred to as the hammer position signal). The hammer position signal includes, for example, the pitch of the key and the rotational position of the hammer 14.

Assume that the performance data generated by the performance data generation section 131 is data (for example, in MIDI format) that includes sound production control information generated based on the measurement results of the key sensor 32 and the pedal sensor 33. In this case, for example, in order to transmit a note-on, the amount of depression of the key 12 must progress to the state at which a note-on is generated.

On the other hand, according to the performance data generation section 131 in this example, the amount of depression of the key 12 can be transmitted successively while the key 12 is still being depressed. Therefore, the automatic performance piano 1 at another communication base can be made to recognize that the key 12 has started to be pressed even before a note-on is reached. For example, when a key 12 on the automatic performance piano 1 at the communication base T1 starts to be pressed, the corresponding key 12 on the automatic performance piano 1 at the communication base T2 can start to be driven toward the recognized amount of depression, even before a note-on occurs. In this way, the key 12 at the communication base T2 can be driven so as to follow the performance operation on the key 12 at the communication base T1 with a short delay time.
The performance data transmission section 133 transmits the performance data generated by the performance data generation section 131 to the other communication bases.

The performance data reception section 143 receives performance data transmitted from the other communication bases.

The drive signal generation section 145 generates the drive signals used in the drive device 40 based on the performance data received by the performance data reception section 143. These drive signals include a signal supplied to the key drive device 42 (key drive signal), a signal supplied to the pedal drive device 43 (pedal drive signal), and a signal supplied to the vibrator 47 (excitation drive signal).

The key drive signal is generated based on the performance data, and more specifically, based on the key position signal included in the performance data. The key drive signal is a signal for controlling the key drive device 42 to drive the key 12 so as to reproduce the amount of depression indicated by the key position signal. The key 12 to be driven is the key corresponding to the pitch specified by the key position signal. The pedal drive signal is generated based on the performance data, and more specifically, based on the pedal position signal. The pedal drive signal is a signal for controlling the pedal drive device 43 to move the pedal of the type specified by the pedal position signal to the position corresponding to the amount of depression.

The excitation drive signal is generated based on the performance data; more specifically, it is a signal generated by the sound source section 25 based on the key position signal and the pedal position signal. When the vibrator 47 vibrates the soundboard 17 in response to the excitation drive signal, the sound corresponding to the signal generated in the sound source section 25 (in this example, a piano sound) spreads around the keyboard instrument 10 via the soundboard 17.
When generating the sound signal in the sound source section 25, the drive signal generation section 145 may generate sound production control information based on the key position signal and the pedal position signal, and cause the sound source section 25 to generate the sound signal based on the sound production control information. At this time, the drive signal generation section 145 may generate the sound production control information using a calculation that predicts the note-on timing and velocity from the change in the amount of depression of the key 12 indicated by the key position signal in the performance data. The change in the rotational position of the hammer 14 indicated by the hammer position signal in the performance data may also be used for this predictive calculation. The predictive calculation may use a trained model obtained in advance by machine learning, or may use a fitting process that assumes a constant-velocity trajectory, a constant-acceleration trajectory, or the like based on the change in the amount of depression. This makes it possible to improve the prediction accuracy even when the movements of the key 12 and the hammer 14 are not aligned.

Assume that the key 12 is driven based on the performance data and that the hammer 14 operating as a result strikes the string 15. In this case, because time is required to drive the key 12 after the sound production instruction (for example, a note-on), the timing at which the sound is produced is delayed. Therefore, the timing of sound production is affected not only by the communication delay between the communication bases but also by the delay that occurs when the key 12 is driven.
On the other hand, according to the drive signal generation section 145, the key 12 and the pedal 13 are driven by the key drive signal and the pedal drive signal, but because the stopper 44 prevents the hammer 14 from striking the string 15, no string-striking sound is produced. Instead, the vibrator 47 is driven by the excitation drive signal, so that sound is produced from the soundboard 17. Sound production using the vibrator 47 does not require driving the key 12. Therefore, regarding the time from the sound production instruction (for example, a note-on) to the actual sound production, the time in the case of sound production by the vibrator 47 is shorter than the time in the case of sound production by string striking.

At this time, because the sound production by the vibrator 47 and the driving of the key 12 are controlled separately, a difference arises between their timings. However, because the amount of this difference is small, it has little effect on the user's perception.

The combination of the performance data to be transmitted and the sound production method is not limited to the one described above. For example, instead of transmitting the amount of depression of the key 12 as the performance data, the sound production control information may be transmitted as the performance data. Even in this case, by using sound production by the vibrator 47 as described above, the time difference in sound production between the communication bases can be made shorter than when sound production by string striking is used.

Each drive signal is generated based on the sound production control information in the performance data. At this time, for the key drive signal, the velocity value in the sound production control information may be increased so as to be equal to or greater than a predetermined value. When the velocity value is small, that is, when the pressing speed of the key 12 is slow (when the rotation speed of the hammer 14 is small), the key drive device 42 slows down the driving speed of the key 12.
 このとき、ソレノイドの特性によって、駆動速度が遅く設定されるほど、鍵12が予定されたタイミングよりも遅延して動く場合がある。その遅延を補償するためベロシティの値を大きくすることが考えられる。ベロシティの値を大きくすると、鍵12が駆動されるときの遅延時間が短くなるが、打弦音が大きくなってしまう。一方、この例では、ハンマ14の打弦がストッパ44により阻止されて打弦音は生じないため、発音への影響がない。したがって、発音に寄与しない鍵12については、ベロシティの値を大きくすることができる。加振駆動信号については、発音内容が変わらないように、ベロシティの値を変更しない。このときには、ユーザの演奏に影響を与えないようにするために、一部の鍵12については駆動されないようにしてもよい。駆動されない鍵12は、予め演奏曲が設定されることで、演奏曲に使用される音高の鍵を対象としてもよい。 At this time, depending on the characteristics of the solenoid, the slower the driving speed is set, the more the key 12 may move with a delay from the scheduled timing. In order to compensate for the delay, it is conceivable to increase the velocity value. If the velocity value is increased, the delay time when the key 12 is driven becomes shorter, but the string striking sound becomes louder. On the other hand, in this example, the string striking of the hammer 14 is prevented by the stopper 44 and no string striking sound is produced, so that there is no effect on sound production. Therefore, the velocity value can be increased for keys 12 that do not contribute to sound production. Regarding the excitation drive signal, the velocity value is not changed so that the sound content remains unchanged. At this time, some of the keys 12 may not be driven so as not to affect the user's performance. The keys 12 that are not driven may be keys of pitches used for the performance music by setting the performance music in advance.
 As another example, instead of using sound production by the vibrator, the stopper 44 may be controlled to the retracted position so that sound is produced by driving the key 12 and striking the string. Even in this case, as described above, transmitting the amount of depression of the key 12 as performance data makes the time difference in sound production between communication bases shorter than when sound generation control information is transmitted as performance data. In this case, the sound generated by the user's performance operation (for example, the sound generated by the performance at the communication base T1) and the sound generated by the performance at another communication base (for example, the communication base T2) both include string-striking sounds.
 このときには、ユーザの演奏に影響を与えないようにするために、ペダル13がペダル駆動装置43によって駆動されないようにする一方、ダンパ駆動装置48によってダンパ18が駆動されてもよい。 At this time, in order to avoid affecting the user's performance, the pedal 13 may not be driven by the pedal drive device 43, while the damper 18 may be driven by the damper drive device 48.
 さらに他の例として、ストッパ44を待避位置に制御しつつも加振器47による発音を用いてもよい。この場合には、駆動信号生成部145は、鍵駆動信号およびペダル駆動信号を生成せずに鍵12およびペダル13が動かないようにしてもよい。このようにすると、ユーザの演奏操作により生じる音は打弦音を含む一方、他の通信拠点における演奏により生じる音は、遅延の少ない加振器47による音とすることができる。送信される演奏データと発音方法との組み合わせについては、複数の例を説明したが、いずれを選択するかを操作パネル23への操作によって設定できるようにしてもよい。 As yet another example, sound generation by the vibrator 47 may be used while the stopper 44 is controlled to the retracted position. In this case, the drive signal generation unit 145 may prevent the key 12 and the pedal 13 from moving without generating the key drive signal and the pedal drive signal. In this way, while the sound generated by the user's performance operation includes the sound of string hitting, the sound generated by the performance at another communication base can be the sound generated by the vibrator 47 with less delay. Although a plurality of examples have been described regarding combinations of performance data and sound generation methods to be transmitted, which one to select may be set by operating the operation panel 23.
 環境データ生成部121は、環境収集装置82から出力される収集信号に基づいて、周囲環境を示す環境データを生成する。この例では、周囲環境は、装置周辺の画像および音を含む。そのため、環境収集装置82は、周囲環境を収集するための装置、すなわち、画像を取得するためのカメラ(撮像装置)および音を取得するためのマイクロフォン(収音装置)を含む。カメラは、この例では、鍵盤楽器10の演奏者が含まれる範囲の画像を取得する。 The environmental data generation unit 121 generates environmental data indicating the surrounding environment based on the collection signal output from the environment collection device 82. In this example, the ambient environment includes images and sounds around the device. Therefore, the environment collecting device 82 includes a device for collecting the surrounding environment, that is, a camera (imaging device) for obtaining images and a microphone (sound collecting device) for obtaining sound. In this example, the camera acquires an image of a range that includes the player of the keyboard instrument 10.
 The information regarding the image included in the environmental data may be image information representing the image (video) itself, but in this example it includes motion information obtained by capturing the performer's movements with motion capture technology. The sensor that measures the performer's movements is not limited to a camera, and may include an IMU (Inertial Measurement Unit), a pressure sensor, a displacement sensor, or the like. The motion information is, for example, information indicating, for a plurality of parts having predetermined features extracted from the image, the coordinates of each part. The environmental data may be transmitted in the form of audio data representing a sound signal. In this case, by transmitting the motion information as the data of a predetermined channel in the audio data, it can also be synchronized with the sound signal included in the audio data. Similarly, the environmental data may be converted into a data format that allows it to be transmitted as part of already existing data, such as a format indicating sound generation control information (for example, MIDI format) or a video data format, before being transmitted to another communication base.
 環境収集装置82が収集する音は、鍵盤楽器10に対する演奏により生じる音(ピアノ音)が含まれる場合がある。鍵盤楽器10に対する演奏により生じる音が存在する期間は、鍵位置信号等から特定することができる。演奏により生じる音が加振器47による発音である場合には、音源部25においてその音が特定できる。したがって、環境データ生成部121は、環境データを生成するときに、収集信号に含まれる音から、音源部25において生成した音の成分をキャンセルするように信号処理を施してもよい。 The sounds collected by the environment collection device 82 may include sounds (piano sounds) generated by playing the keyboard instrument 10. The period during which the sound produced by playing the keyboard instrument 10 exists can be specified from the key position signal or the like. If the sound produced by the performance is produced by the vibrator 47, the sound can be identified by the sound source section 25. Therefore, when generating the environmental data, the environmental data generating section 121 may perform signal processing to cancel the sound component generated by the sound source section 25 from the sound included in the collected signal.
 環境データ生成部121は、演奏により生じる音が打弦音であったとしても、収集信号に含まれる音から打弦音の成分をキャンセルするように信号処理を施してもよい。打弦音の成分は、音源部25において鍵位置信号およびペダル駆動信号を用いて生成されればよい。環境データ生成部121は、鍵盤楽器10に対する演奏により生じる音が存在する期間について、収集信号に含まれる音を用いずに環境データを生成してもよい。このとき、環境収集装置82は、演奏中の期間を認識させることで、その期間は音を収集しないようにしてもよい。 Even if the sound generated by the performance is a string striking sound, the environmental data generation unit 121 may perform signal processing to cancel the string striking sound component from the sound included in the collected signal. The string striking sound component may be generated by the sound source section 25 using a key position signal and a pedal drive signal. The environmental data generation unit 121 may generate environmental data for a period in which sounds generated by playing the keyboard instrument 10 exist without using the sounds included in the collected signals. At this time, the environment collecting device 82 may recognize the period during which the performance is being performed, and may not collect sounds during that period.
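 The description does not specify how the instrument's own sound is cancelled from the collected signal; one common way to sketch it, assuming the sound source section's output is available as a reference signal, is a normalized-LMS adaptive canceller such as the following (function and parameter names are illustrative only).

```python
import numpy as np


def cancel_known_component(mic: np.ndarray, ref: np.ndarray,
                           taps: int = 64, mu: float = 0.1) -> np.ndarray:
    """Subtract the part of `mic` that is linearly predictable from `ref`
    (the signal generated by the sound source section itself) using a
    normalized-LMS adaptive filter, and return the residual ambient signal."""
    w = np.zeros(taps)           # adaptive filter coefficients
    buf = np.zeros(taps)         # most recent reference samples, newest first
    out = np.zeros(len(mic))
    eps = 1e-8
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = ref[n] if n < len(ref) else 0.0
        estimate = float(w @ buf)            # estimated leaked piano component
        residual = float(mic[n]) - estimate  # what remains is the environment
        w += (mu / (eps + float(buf @ buf))) * residual * buf  # NLMS update
        out[n] = residual
    return out
```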
 環境データ送信部123は、環境データ生成部121において生成された環境データを他の通信拠点へ送信する。 The environmental data transmitting unit 123 transmits the environmental data generated by the environmental data generating unit 121 to other communication bases.
 環境データ受信部183は、他の通信拠点から送信された環境データを受信する。 The environmental data receiving unit 183 receives environmental data transmitted from other communication bases.
 The control signal generation unit 185 generates a control signal used in the environment providing device 88 based on the environmental data received by the environmental data receiving unit 183. This control signal is a signal for reproducing the information about the surrounding environment included in the environmental data; in this example, it includes a signal for displaying an image on a display (display device) and a signal for outputting sound from a speaker (sound emitting device). Therefore, the environment providing device 88 includes a display for displaying images and a speaker for outputting sound. The display may be placed at a position that is easy for the performer to see, such as the keyboard lid 11 of the keyboard instrument 10 or the music stand. When an image is displayed on the keyboard lid 11, a projector that projects the image onto the keyboard lid 11 may be used instead of the display. When a plurality of communication bases are communication targets, an environment providing device 88 may be provided for each communication base. In this case, the control signal supplied to each environment providing device 88 is generated based on the environmental data received from the communication base corresponding to that environment providing device 88.
 The control signal generation unit 185 may generate an image imitating the performer using the motion information included in the environmental data, and generate a signal for displaying the generated image on the display. At this time, an image that emphasizes a specific part or motion may be generated. The specific part may be, for example, the performer's eyes, face, or fingers. The specific motion may be, for example, a movement of the line of sight, a movement of the face, or a finger movement during the performance. The control signal generation unit 185 may also use the motion information included in the environmental data to generate an image, such as a graph, that numerically represents the performer's movements, and generate a signal for displaying that image on the display. The performer can use the displayed information to keep the performances together.
 制御信号生成部185は、演奏データ受信部143によって受信された演奏データに基づく画像をディスプレイに表示するための信号を生成してもよい。演奏データに基づく画像は、その演奏データに含まれる演奏内容を示す画像、例えば、操作されている鍵、ペダルを示す画像を含んでもよい。 The control signal generation unit 185 may generate a signal for displaying an image on the display based on the performance data received by the performance data reception unit 143. The image based on the performance data may include an image showing the performance content included in the performance data, for example, an image showing keys and pedals being operated.
 In this way, according to the ensemble control function 100 in the first embodiment, the time difference in sound production between communication bases can be reduced, and the performers can feel each other's surrounding environments as if they were nearby. Therefore, a plurality of musicians playing together can feel a sense of unity.
<第2実施形態>
 演奏データに含まれる演奏内容は、鍵12等への演奏操作を示す場合に限らない。第2実施形態では、演奏による打弦音が伝達された響板17の振動を示す信号が、演奏データに含まれている例について説明する。響板17の振動は、この例では、センサ30に含まれるピックアップセンサによって測定される。
<Second embodiment>
The performance content included in the performance data is not limited to indicating performance operations on the keys 12 and the like. In the second embodiment, an example will be described in which the performance data includes a signal indicating the vibration of the soundboard 17 to which the string-striking sound caused by the performance is transmitted. The vibration of the soundboard 17 is measured by a pickup sensor included in the sensor 30 in this example.
 FIG. 5 is a diagram illustrating the positional relationship between the vibrators and the pickup sensors in the second embodiment. FIG. 5 shows the keyboard instrument 10 viewed from below. As shown in FIG. 5, the soundboard 17 is provided with two vibrators 47 (vibrators 47H and 47L). The vibrators 47H and 47L are attached to the soundboard 17 between the plurality of sound bars 17a. The vibrator 47H is provided at a position corresponding to the bridge 16H of the two bridges 16 (bridge 16H (long bridge) and bridge 16L (short bridge)). The vibrator 47L is provided at a position corresponding to the bridge 16L. The bridge 16H supports the strings 15 on the treble side, and the bridge 16L supports the strings 15 on the bass side. The vibrator 47H is supported by a support portion 97H connected to the straight column 19. The vibrator 47L is supported by a support portion 97L connected to the straight column 19.
 The vibrator 47 is not limited to being provided at a position on the soundboard 17 corresponding to the bridge 16; it may be provided at a position away from the bridge 16, or at a position corresponding to a sound bar 17a. When it is provided at a position corresponding to a sound bar 17a, the vibrator 47 may be provided on the string 15 side of the soundboard 17.
 The pickup sensor 37H is attached to the soundboard 17 near the vibrator 47H, measures the vibration of the soundboard 17, and outputs a measurement signal indicating the measurement result. The pickup sensor 37L is attached to the soundboard 17 near the vibrator 47L, measures the vibration of the soundboard 17, and outputs a measurement signal indicating the measurement result. Therefore, the performance data that the performance data transmitting unit 133 transmits to other communication bases, and the performance data that the performance data receiving unit 143 receives from other communication bases, include the measurement signal PU1 from the pickup sensor 37H and the measurement signal PU2 from the pickup sensor 37L.
 図6は、第2実施形態における駆動信号生成部の構成を説明する図である。駆動信号生成部145Aは、演奏データ受信部143によって受信された演奏データに含まれる測定信号PU1、PU2から加振駆動信号DS1、DS2を生成する。加振駆動信号DS1は、加振器47Hに供給される。加振駆動信号DS2は、加振器47Lに供給される。 FIG. 6 is a diagram illustrating the configuration of the drive signal generation section in the second embodiment. The drive signal generation section 145A generates vibration drive signals DS1 and DS2 from the measurement signals PU1 and PU2 included in the performance data received by the performance data reception section 143. The vibration drive signal DS1 is supplied to the vibrator 47H. The vibration drive signal DS2 is supplied to the vibrator 47L.
 駆動信号生成部145Aは、クロストーク処理部1451、音響付与部1453および増幅部1455を含む。クロストーク処理部1451は、測定信号PU1に対して所定のディレイ処理および所定のフィルタ処理を施して測定信号PU2に加算する。クロストーク処理部1451は、測定信号PU2に対して所定のディレイ処理および所定のフィルタ処理を施して測定信号PU1に加算する。これによって、それぞれの測定信号PU1、PU2に含まれるクロストーク成分を低減する。 The drive signal generation section 145A includes a crosstalk processing section 1451, a sound imparting section 1453, and an amplification section 1455. The crosstalk processing unit 1451 performs a predetermined delay process and a predetermined filter process on the measurement signal PU1, and adds the processed signal to the measurement signal PU2. The crosstalk processing unit 1451 performs predetermined delay processing and predetermined filter processing on the measurement signal PU2, and adds the processed signal to the measurement signal PU1. This reduces the crosstalk components included in each of the measurement signals PU1 and PU2.
 音響付与部1453は、測定信号PU1、PU2に対して、ディレイ、コンプレッサ、エキスパンダ、イコライザなどの音響効果を付与するための信号処理を施す。増幅部1455は、測定信号PU1、PU2を増幅することによって、加振器47H、47Lに供給される加振駆動信号DS1、DS2を出力する。 The sound imparting unit 1453 performs signal processing to impart acoustic effects such as a delay, compressor, expander, and equalizer to the measurement signals PU1 and PU2. The amplifying section 1455 amplifies the measurement signals PU1 and PU2 to output vibration drive signals DS1 and DS2 to be supplied to the vibrators 47H and 47L.
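 A minimal sketch of the processing chain described for the drive signal generation unit 145A follows, assuming the delays, FIR coefficients, and gains are instrument-specific tuning parameters (all parameter and function names here are hypothetical); the "sound imparting" stage is reduced to a single FIR filter standing in for the delay, compressor, expander, and equalizer effects.

```python
import numpy as np
from scipy.signal import lfilter


def reduce_crosstalk(pu1, pu2, delay_12, fir_12, delay_21, fir_21):
    """Add a delayed, FIR-filtered copy of each measurement signal to the
    other one, as in the crosstalk processing described above. The delays
    (in samples) and filter coefficients, including their sign, are assumed
    to be tuned per instrument so that the added component cancels the
    leaked one."""
    n = len(pu1)
    delayed_1 = np.concatenate([np.zeros(delay_12), pu1])[:n]  # PU1 -> PU2 path
    delayed_2 = np.concatenate([np.zeros(delay_21), pu2])[:n]  # PU2 -> PU1 path
    out1 = pu1 + lfilter(fir_21, [1.0], delayed_2)
    out2 = pu2 + lfilter(fir_12, [1.0], delayed_1)
    return out1, out2


def make_excitation_drive_signals(pu1, pu2, params):
    """Sketch of the whole chain: crosstalk reduction, a simple FIR stage
    standing in for the acoustic-effect processing, and a final gain
    standing in for the amplification stage."""
    out1, out2 = reduce_crosstalk(pu1, pu2,
                                  params["delay_12"], params["fir_12"],
                                  params["delay_21"], params["fir_21"])
    out1 = lfilter(params["eq_1"], [1.0], out1)   # "sound imparting"
    out2 = lfilter(params["eq_2"], [1.0], out2)
    return params["gain_1"] * out1, params["gain_2"] * out2  # amplification
```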
 For example, when the size and shape of the soundboard 17 and the like differ between the keyboard instrument 10 at the communication base T1 and the keyboard instrument 10 at the communication base T2, differences arise in the vibration modes of the soundboards 17 and the like. As a result, the sounds emitted from the respective keyboard instruments 10 differ. The parameters for the signal processing in the crosstalk processing unit 1451 and the sound imparting unit 1453 are set in accordance with the differing configurations of the keyboard instruments 10. Thereby, even if the shape or the like of the keyboard instrument 10 at another communication base is different, the difference in sound production caused by that difference can be reduced.
 When an ensemble is performed between a plurality of communication bases, the vibration of the soundboard 17 measured by the pickup sensors 37H and 37L includes not only the vibration caused by the string-striking sound but also the vibration produced by the vibrators 47H and 47L based on the vibration drive signals DS1 and DS2. Therefore, before transmitting the performance data to another communication base, the performance data transmitting unit 133 may apply signal processing to the measurement signals PU1 and PU2 included in the performance data so as to reduce the components of the vibration drive signals DS1 and DS2.
<Third embodiment>
 As described above, when the velocity value is small, it takes time for the key driving device 42 to drive the key 12, and the movement of the key 12 is delayed compared with when the velocity value is large. As described above, the velocity value can be increased in situations where no string-striking sound is produced; when a string-striking sound is produced, however, increasing the velocity value too much greatly changes the resulting sound. The third embodiment describes an example for reducing such changes in the sound as much as possible.
 図7は、第3実施形態におけるベロシティと遅延時間との関係を説明する図である。横軸は、ベロシティの値であり、鍵駆動信号の生成のために駆動信号生成部145における演算パラメータとして用いられる値である。縦軸は、鍵駆動信号に基づいて鍵駆動装置42が鍵12を駆動するときの遅延時間に対応する。ベロシティの値は、例えば、駆動信号生成部145が用いる演奏データに含まれる情報から取得される。より具体的には、ベロシティの値は、演奏データに含まれる鍵位置信号またはハンマ位置信号から演算されることによって取得されてもよい。演奏データに発音制御情報が含まれている場合には、そのベロシティの値から取得されてもよい。この例では、ベロシティは、「1」から「127」までの値をとる。 FIG. 7 is a diagram illustrating the relationship between velocity and delay time in the third embodiment. The horizontal axis is the velocity value, which is a value used as a calculation parameter in the drive signal generation section 145 to generate the key drive signal. The vertical axis corresponds to the delay time when the key driving device 42 drives the key 12 based on the key driving signal. The velocity value is obtained, for example, from information included in the performance data used by the drive signal generation section 145. More specifically, the velocity value may be obtained by calculation from a key position signal or a hammer position signal included in the performance data. If the performance data includes sound production control information, it may be acquired from the velocity value. In this example, the velocity takes values from "1" to "127".
 図7に示すようにベロシティが小さくなると、遅延時間が増加する。ベロシティがVtより小さくなると、遅延時間が急激に増加する。そのため、この例では、駆動信号生成部145は、ベロシティがVtより小さくなる場合にその値が大きくなるように補正する。 As shown in FIG. 7, as the velocity decreases, the delay time increases. When the velocity becomes smaller than Vt, the delay time increases rapidly. Therefore, in this example, the drive signal generation unit 145 corrects the velocity to become larger when the velocity becomes smaller than Vt.
 FIG. 8 is a diagram illustrating the relationship between velocity and correction value in the third embodiment. As shown in FIG. 8, when the velocity is smaller than Vt, a correction value obtained by increasing that value is used as the calculation parameter in the drive signal generation unit 145. For example, as the velocity varies from "1" to "Vt", the correction value is set so that it varies from "Va" to "Vt". In this way, the delay time can be reduced when the velocity is small. Since the correction value is larger than the value before correction, a louder sound than intended is produced; however, compared with the effect of reducing the delay time, the change in loudness has only a small influence on the listener. To make the effect of reducing the delay time even stronger, the correction value may be set to "Vb", which is larger than "Va", when the velocity is "1".
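 A minimal sketch of the correction curve of FIG. 8, assuming a simple linear mapping below the knee Vt; the numeric defaults for vt and va are placeholders, not values taken from the figures.

```python
def correct_velocity(velocity: float, vt: float = 40.0, va: float = 20.0) -> float:
    """Map drive velocities below the knee `vt` onto [va, vt] so that the
    solenoid delay stays small; above `vt` the value is passed through.
    vt and va are placeholders for values read off FIG. 7 / FIG. 8."""
    if velocity >= vt:
        return velocity
    # linear correction: velocity 1 -> va, velocity vt -> vt
    return va + (velocity - 1.0) * (vt - va) / (vt - 1.0)


if __name__ == "__main__":
    for v in (1, 10, 30, 40, 100):
        print(v, round(correct_velocity(v), 1))
```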
 When performing in an ensemble, it is preferable to keep the delay as small as possible. On the other hand, if one is not performing in an ensemble but merely listening to sound produced based on performance data, it is preferable that the delay time does not vary over the whole range of input values rather than that the delay be minimized. Therefore, in such a case, the drive signal generation unit 145 may generate a key drive signal that intentionally delays the timing at which the depression of the key 12 starts in the range where the velocity value is large (the range equal to or greater than the predetermined value). At this time, the amount by which the timing is delayed may be gradually reduced as the input value decreases from "Vt" toward "1".
<第4実施形態>
 第4実施形態では、異なる通信拠点において、演奏される楽器が異なる例について説明する。ここでは、通信拠点T1は、オーケストラによる演奏が可能な大きなホールである。通信拠点T2は、防音室のような小さなスタジオである。この例では、通信拠点T1においてオーケストラによる演奏が行われ、通信拠点T2においてピアノによる演奏が行われる。すなわち、通信拠点T1におけるオーケストラにはピアノの演奏者は存在せず、遠隔の通信拠点T2においてピアノの演奏者が存在する。
<Fourth embodiment>
In the fourth embodiment, an example will be described in which different musical instruments are played at different communication bases. Here, the communication base T1 is a large hall where an orchestra can perform. The communication base T2 is a small studio like a soundproof room. In this example, an orchestra performs at the communication base T1, and a piano performs at the communication base T2. That is, there is no piano player in the orchestra at the communication base T1, but there is a piano player at the remote communication base T2.
 The sound of the orchestra's performance at the communication base T1 is transmitted to the communication base T2. The performer at the communication base T2 plays the automatic performance piano 1 while listening to the performance sound received from the communication base T1. The content of the user's performance is transmitted as performance data from that automatic performance piano 1 to the automatic performance piano 1 at the communication base T1. The automatic performance piano 1 at the communication base T1 therefore produces sound so as to reproduce the performance at the communication base T2. That is, at the communication base T1, the sound of the piano performance can be heard together with the sound of the orchestra even though no piano player is present there. In the fourth embodiment, the sense of presence of the orchestral performance at the communication base T1 can be conveyed to the player of the automatic performance piano 1 at the communication base T2. The configuration for realizing this is described in detail below.
 図9は、第4実施形態における通信拠点T1における環境収集装置の構成を説明する図である。通信拠点T1において、指揮台CS、自動演奏ピアノ1および椅子50が、ホールのステージST1に設置されている。ステージST1には、振動測定板821、822およびマイクロフォン823を含む環境収集装置82が設けられている。振動測定板821は、ステージST1における椅子50が設置される場所に配置されている。振動測定板821は、ステージST1を介して伝達される振動を測定し、測定した振動を示す収集信号を出力する。振動測定板822は、ステージST1における自動演奏ピアノ1が設置される場所に配置されている。振動測定板822は、ステージST1を介して伝達される振動を測定し、測定した振動を示す収集信号を出力する。マイクロフォン823は、この例では、椅子50の近傍に配置される。マイクロフォン823は、到達した音を収集して、その音を示す収集信号を出力する。 FIG. 9 is a diagram illustrating the configuration of the environment collection device at the communication base T1 in the fourth embodiment. At the communication base T1, a conductor stand CS, a player piano 1, and a chair 50 are installed on a stage ST1 of a hall. An environment collecting device 82 including vibration measurement plates 821 and 822 and a microphone 823 is provided on the stage ST1. The vibration measurement plate 821 is placed at a location on the stage ST1 where the chair 50 is installed. Vibration measurement plate 821 measures vibrations transmitted via stage ST1 and outputs a collection signal indicating the measured vibrations. The vibration measurement plate 822 is placed at a location on the stage ST1 where the automatic performance piano 1 is installed. Vibration measurement plate 822 measures vibrations transmitted via stage ST1 and outputs a collection signal indicative of the measured vibrations. Microphone 823 is placed near chair 50 in this example. Microphone 823 collects the arriving sound and outputs a collected signal indicative of the sound.
 FIG. 10 is a diagram illustrating the configuration of the environment providing device at the communication base T2 in the fourth embodiment. At the communication base T2, an automatic performance piano 1 and a chair 50 are installed on a stage ST2 of a studio. The stage ST2 is provided with an environment providing device 88 that includes vibration generating plates 881 and 882 and a speaker 883. The vibration generating plate 881 is placed at the location on the stage ST2 where the chair 50 is installed, and vibrates based on a control signal. The vibration generating plate 882 is placed at the location on the stage ST2 where the automatic performance piano 1 is installed, and vibrates based on a control signal. In this example, the speaker 883 is placed near the chair 50 (near the performer) and emits sound based on a control signal. When the control signals used by the vibration generating plates 881 and 882 and the speaker 883 are generated, signal processing corresponding to the vibration characteristics, signal processing for changing the transfer characteristics, and the like may be applied.
 通信拠点T1においてオーケストラが演奏すると、演奏に伴う振動がステージST1を介して振動測定板821、822に伝達され、演奏音がマイクロフォン823において収集される。振動測定板821、822およびマイクロフォン823のそれぞれから出力された収集信号は、環境データとして通信拠点T2に送信される。通信拠点T2において自動演奏ピアノ1を演奏すると、演奏内容が演奏データとして通信拠点T1に送信される。 When the orchestra performs at the communication base T1, vibrations accompanying the performance are transmitted to the vibration measurement plates 821 and 822 via the stage ST1, and the sound of the performance is collected by the microphone 823. The collected signals output from each of the vibration measurement plates 821 and 822 and the microphone 823 are transmitted to the communication base T2 as environmental data. When the automatic performance piano 1 is played at the communication base T2, the content of the performance is transmitted as performance data to the communication base T1.
 これにより、通信拠点T1においては、自動演奏ピアノ1が、通信拠点T2からの演奏データに基づいて駆動されて発音する。すなわち、通信拠点T1における自動演奏ピアノ1は、通信拠点T2における自動演奏ピアノ1に対する演奏に従って駆動される。通信拠点T2においては、スピーカ883が、通信拠点T1からの環境データに基づいて音を発生する。この音は、マイクロフォン823によって収集された音であり、通信拠点T1におけるオーケストラの演奏音に対応する。 As a result, at the communication base T1, the automatic performance piano 1 is driven to produce sound based on the performance data from the communication base T2. That is, the automatic performance piano 1 at the communication base T1 is driven according to the performance on the automatic performance piano 1 at the communication base T2. At communication base T2, speaker 883 generates sound based on the environmental data from communication base T1. This sound is the sound collected by the microphone 823, and corresponds to the sound of an orchestra performance at the communication base T1.
 Furthermore, the vibration generating plate 881 and the vibration generating plate 882 are driven to vibrate based on the environmental data from the communication base T1. The vibration of the vibration generating plate 881 corresponds to the vibration measured by the vibration measurement plate 821 at the communication base T1. That is, vibrations such as those transmitted to the chair 50 at the communication base T1 are also transmitted to the chair 50 at the communication base T2. The vibration of the vibration generating plate 882 corresponds to the vibration measured by the vibration measurement plate 822 at the communication base T1. That is, vibrations such as those transmitted to the automatic performance piano 1 at the communication base T1 are also transmitted to the automatic performance piano 1 at the communication base T2. Therefore, the performer at the communication base T2 can feel as if he or she were performing at the communication base T1.
 通信拠点T1における振動測定板821、822およびマイクロフォン823は、自動演奏ピアノ1が駆動されることによりピアノ音の成分についても収集することになる。したがって、通信拠点T2における振動発生板881、882およびスピーカ883が駆動されるまでの経路において、ピアノ音の成分を減少させるための信号処理が行われる。ピアノ音の成分は、通信拠点T1において自動演奏ピアノ1を駆動するための信号から生成することができる。したがって、この信号処理は、例えば、通信拠点T1から送信される環境データが生成されるときに環境データ生成部121が実行してもよい。このようにすることで、通信拠点T2における環境提供装置88によって提供される環境において、通信拠点T2における演奏の影響を低減することができる。 The vibration measurement plates 821, 822 and microphone 823 at the communication base T1 will also collect the components of the piano sound when the automatic performance piano 1 is driven. Therefore, signal processing for reducing the piano sound component is performed on the path up to the drive of the vibration generating plates 881, 882 and the speaker 883 at the communication base T2. The piano sound component can be generated from a signal for driving the automatic performance piano 1 at the communication base T1. Therefore, this signal processing may be executed by the environmental data generation unit 121, for example, when the environmental data transmitted from the communication base T1 is generated. By doing so, the influence of the performance at the communication base T2 can be reduced in the environment provided by the environment providing device 88 at the communication base T2.
<Fifth embodiment>
 In the fifth embodiment, a configuration is described in which, when the environment providing device 88 displays images of the performers at other communication bases, the local image (the image of the performer that is transmitted to the other communication bases) is also displayed.
 図11は、第5実施形態における制御信号生成部の構成を説明する図である。第5実施形態における制御信号生成部185Bは、自画像取得部1851、遠隔画像取得部1853および画像合成部1855を含む。 FIG. 11 is a diagram illustrating the configuration of the control signal generation section in the fifth embodiment. The control signal generation section 185B in the fifth embodiment includes a self-portrait acquisition section 1851, a remote image acquisition section 1853, and an image composition section 1855.
 自画像取得部1851は、環境収集装置82から出力される収集信号に基づいて、演奏者を含む画像に関する自画像情報を取得する。遠隔画像取得部1853は、環境データ受信部183によって受信された環境データに基づいて、他の通信拠点の環境収集装置82において収集された演奏者を含む画像に関する遠隔画像情報を取得する。自画像情報および遠隔画像情報は、いずれも演奏者および鍵盤楽器10の鍵盤部分を含む画像である。 The self-portrait acquisition unit 1851 acquires self-portrait information regarding images including the performer based on the collection signal output from the environment collection device 82. Based on the environmental data received by the environmental data receiving section 183, the remote image acquisition section 1853 acquires remote image information regarding images including the performer collected by the environment collection device 82 of another communication base. Both the self-portrait information and the remote image information are images including the player and the keyboard portion of the keyboard instrument 10.
 The image synthesis unit 1855 generates a composite image based on the self-portrait information and the remote image information. The composite image is an image in which the image region of the performer included in the remote image information is extracted and superimposed on the image of the self-portrait information. At this time, the image synthesis unit 1855 identifies the keyboard portion in each of the images of the self-portrait information and the remote image information, and determines the superimposition position of the performer's image in the remote image information so that the two keyboard portions coincide. For example, the image synthesis unit 1855 superimposes, on the image of the self-portrait information, an image obtained by applying a transformation matrix to the remote image information so that the cross-correlation between the two keyboard portions is maximized. The image synthesis unit 1855 generates a control signal for displaying the composite image and outputs it to the environment providing device 88. The environment providing device 88 may be a display that shows the composite image, or may be a projector that projects at least part of the image of the remote image information onto the keyboard portion. When a projector is used for projection, a predetermined transformation matrix corresponding to the position of that keyboard portion may be applied to the remote image information.
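 The alignment is described only as applying a transformation matrix that maximizes the cross-correlation of the keyboard portions; the following sketch simplifies this to a translation-only search over integer pixel shifts (a full affine or projective fit would be needed in practice), with all function names hypothetical.

```python
import numpy as np


def best_shift(local_kb: np.ndarray, remote_kb: np.ndarray,
               max_shift: int = 20) -> tuple[int, int]:
    """Search integer (dy, dx) shifts that maximize the normalized
    cross-correlation between two grayscale keyboard regions; a
    translation-only simplification of the transformation-matrix search.
    Wrap-around introduced by np.roll is ignored for this sketch."""
    a = (local_kb - local_kb.mean()) / (local_kb.std() + 1e-8)
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(remote_kb, (dy, dx), axis=(0, 1))
            b = (shifted - shifted.mean()) / (shifted.std() + 1e-8)
            score = float((a * b).mean())
            if score > best_score:
                best, best_score = (dy, dx), score
    return best


def overlay(local_img: np.ndarray, remote_img: np.ndarray,
            remote_mask: np.ndarray, shift: tuple[int, int]) -> np.ndarray:
    """Paste the remote performer region (given by a boolean mask) onto the
    local image after applying the shift found by best_shift()."""
    moved_img = np.roll(remote_img, shift, axis=(0, 1))
    moved_mask = np.roll(remote_mask, shift, axis=(0, 1))
    result = local_img.copy()
    result[moved_mask] = moved_img[moved_mask]
    return result
```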
 2人で1つのピアノを演奏する連弾では、それぞれの演奏者が演奏する音域が異なるため、鍵盤に対する演奏者の位置も異なる。連弾を2つの通信拠点に分けて1人ずつ演奏する場合であっても、鍵盤に対する演奏者の位置は、1つのピアノを演奏する場合とほぼ同じである。したがって、生成される合成画像は、1つのピアノを演奏する2人の演奏者の画像として得られる。 In a duet in which two people play one piano, each player plays in a different range, so the positions of the players relative to the keyboard are also different. Even when a duet is divided into two communication bases and played by one player at a time, the position of the player relative to the keyboard is almost the same as when playing one piano. Therefore, the generated composite image is obtained as an image of two players playing one piano.
 この例では、画像合成部1855は、2人の演奏者の画像が互いに接触したかどうかを判定し、接触した場合には接触部分に対応する領域を発光させる等、その領域を特定できるように合成画像を修正する。このとき、発光させる接触部分を一部(例えば、腕または手など)に限定してもよい。2人の演奏者の画像が接触したことを画像以外で演奏者に認識させてもよい。例えば、2人の演奏者の画像が互いに接触した場合には、演奏者が利用する椅子の座面を振動させてもよい。この場合には、椅子の座面を振動させる構成は環境提供装置88に含まれ、制御信号生成部185Bからの制御信号によって制御される。このようにすることで、2人の演奏者が互いにことなる通信拠点で演奏したとしても、実際に同じ場所で演奏しているような状況を体験することができる。 In this example, the image compositing unit 1855 determines whether the images of the two performers have touched each other, and if they have touched each other, the area corresponding to the contact area can be made to emit light, etc., so that the area can be identified. Modify the composite image. At this time, the contact portion that emits light may be limited to a portion (for example, the arm or hand). The performers may be made to recognize that the images of the two performers have come into contact with each other using something other than the images. For example, if the images of two performers touch each other, the seat of the chair used by the performers may be vibrated. In this case, a configuration for vibrating the seat surface of the chair is included in the environment providing device 88, and is controlled by a control signal from the control signal generation unit 185B. In this way, even if two performers perform at different communication bases, they can experience the situation as if they were actually performing at the same location.
 The remote image acquisition unit 1853 may acquire the above-described motion information (remote motion information) instead of the remote image information. In this case, the image synthesis unit 1855 may generate an image imitating the performer using the motion information and combine it with the image of the self-portrait information to generate the composite image. Taking into account the temporal lag (communication delay or the like) between the self-portrait information and the remote image information, the image synthesis unit 1855 may, when generating the composite image, delay the image of the self-portrait information and then superimpose the image of the remote image information on it, or it may generate, from the image of the remote image information or from the remote motion information, a predicted image that is ahead by the delay time and superimpose it on the image of the self-portrait information.
<第6実施形態>
 第6実施形態では、2つの通信拠点が同じ部屋に存在する場合について説明する。この場合には、1つの部屋に互いに通信可能な2つの自動演奏ピアノ1が存在する。このような場合に、2つの自動演奏ピアノ1の間に、それぞれの演奏内容に関連する画像が表示されるスクリーンが配置されてもよい。
<Sixth embodiment>
In the sixth embodiment, a case will be described in which two communication bases exist in the same room. In this case, two player pianos 1 that can communicate with each other exist in one room. In such a case, a screen may be placed between the two automatic performance pianos 1 on which images related to the content of each performance are displayed.
 図12は、第6実施形態におけるスクリーンの表示例を説明する図である。通信拠点T1に対応する自動演奏ピアノ1aと通信拠点T2に対応する自動演奏ピアノ1bとが、1つの部屋に配置されている。自動演奏ピアノ1aと自動演奏ピアノ1bとの間には、プロジェクタPJによって画像が投影されるスクリーンSCが配置されている。この例では、スクリーンSCには、自動演奏ピアノ1aと自動演奏ピアノ1b間で通信される演奏データに応じた画像が表示される。 FIG. 12 is a diagram illustrating an example of screen display in the sixth embodiment. A player piano 1a corresponding to the communication base T1 and a player piano 1b corresponding to the communication base T2 are arranged in one room. A screen SC on which an image is projected by a projector PJ is arranged between the automatic performance piano 1a and the automatic performance piano 1b. In this example, an image corresponding to the performance data communicated between the automatic performance piano 1a and the automatic performance piano 1b is displayed on the screen SC.
 The displayed images are images related to the sounds corresponding to the performance content; in this example, each is a band-shaped image whose position is determined by the pitch and the sound generation timing and whose length corresponds to the duration of the sound. In FIG. 12, the band-shaped image sba is an image representing a sound corresponding to the performance content on the automatic performance piano 1a, and the band-shaped image sbb is an image representing a sound corresponding to the performance content on the automatic performance piano 1b. The band-shaped images sba and sbb are displayed so as to flow in accordance with the direction of communication.
 すなわち、自動演奏ピアノ1aにおいて鍵12が押下されると、その鍵12に応じた音に相当する帯状画像sbaが、自動演奏ピアノ1bに向けて移動していくようにスクリーンに表示される。ここで、帯状画像sbaが自動演奏ピアノ1b側に到達したときに、その画像に対応した音が自動演奏ピアノ1bにおいて発音されるようにしてもよい。この場合には、自動演奏ピアノ1bは、帯状画像sbaに対応する演奏データを受信した後に、そのタイミングに到達するまで遅延させてから、鍵12を駆動すればよい。自動演奏ピアノ1aと自動演奏ピアノ1bとの関係は入れ替えても同じである。 That is, when a key 12 is pressed on the automatic performance piano 1a, a band-shaped image sba corresponding to the sound corresponding to the key 12 is displayed on the screen so as to move toward the automatic performance piano 1b. Here, when the band-shaped image sba reaches the automatic performance piano 1b side, a sound corresponding to the image may be generated at the automatic performance piano 1b. In this case, after receiving the performance data corresponding to the band-shaped image sba, the automatic performance piano 1b may delay the timing until the timing is reached, and then drive the key 12. The relationship between the automatic performance piano 1a and the automatic performance piano 1b remains the same even if they are replaced.
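 One way to sketch the timing relationship between the scrolling band image and the delayed key drive, assuming a fixed scroll duration and screen width (both values are placeholders, not specified in the description):

```python
SCROLL_SECONDS = 2.0   # assumed time for a note bar to cross the screen
SCREEN_WIDTH = 1920    # assumed projected image width in pixels


def bar_x(t_now: float, t_received: float) -> float:
    """Horizontal position of a note bar travelling from piano 1a (x = 0)
    toward piano 1b (x = SCREEN_WIDTH)."""
    frac = (t_now - t_received) / SCROLL_SECONDS
    frac = min(max(frac, 0.0), 1.0)
    return frac * SCREEN_WIDTH


def drive_time(t_received: float) -> float:
    """Time at which piano 1b drives the key, i.e. when the bar arrives."""
    return t_received + SCROLL_SECONDS
```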
 このようなプロジェクタPJとスクリーンSCとは、環境提供装置88の一例ということもできる。この場合には、環境提供装置88は、2つの自動演奏ピアノ1a、1bに共有されている。 Such a projector PJ and screen SC can also be said to be an example of the environment providing device 88. In this case, the environment providing device 88 is shared by the two automatic performance pianos 1a and 1b.
<変形例>
 本発明は上述した実施形態に限定されるものではなく、他の様々な変形例が含まれる。例えば、上述した実施形態は本発明を分かりやすく説明するために詳細に説明したものであり、必ずしも説明した全ての構成を備えるものに限定されるものではない。以下、一部の変形例について説明する。第1実施形態を変形した例として説明するが、他の実施形態を変形する例としても適用することができる。複数の変形例を組み合わせて各実施形態に適用することもできる。
<Modified example>
The present invention is not limited to the embodiments described above, and includes various other modifications. For example, the embodiments described above have been described in detail to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to having all the configurations described. Some modified examples will be described below. Although the first embodiment will be described as a modified example, the present invention can also be applied as a modified example of other embodiments. It is also possible to combine a plurality of modifications and apply them to each embodiment.
 (1) The drive signal generation unit 145 is not limited to predicting sound generation control information such as a note-on from the change in the amount of depression of the key 12 indicated by the key position signal in the performance data; it may predict the sound generation control information using other information. For example, the drive signal generation unit 145 may extract, from the image of the performer included in the environmental data, the movement of a finger toward the key 12, estimate from the change in that finger movement the subsequent movement when the key 12 is pressed, and thereby predict the sound generation control information. Information indicating the movement of a finger is also information relating to the depression of the key 12. Therefore, the image of the finger or the finger movement may be acquired by the sensor 30. In this case, the information indicating the finger movement may be transmitted as part of the performance data.
 The sensor 30 may have a configuration that detects contact with or proximity to the key 12. In this case, by generating and transmitting performance data based on this detection result, the performance data generation unit 131 can make another communication base aware that the key 12 is about to be pressed before the key 12 actually starts being pressed. This may be used to improve the prediction accuracy of the sound generation control information.
 In these predictions, a trained model may be used in which the correlation between movement history information, such as the movement of the key 12 or of the fingers, and sound generation control information, such as the note-on timing and the velocity value, has been machine-learned. A trained model may be generated for each performer.
 このように発音制御情報を予測することは、他の通信拠点における自動演奏ピアノ1を制御する場合に適用される場合に限られず、様々な連動に用いてもよい。例えば、鍵盤装置と音源装置とが無線で接続されているような構成の場合に適用することできる。例えば、鍵盤装置での鍵の押下によりノートオンが発生してから音源装置にノートオンが送信されると、通信遅延の影響で発音タイミングが遅くなる。一方、鍵盤装置でノートオンが発生する前に音源装置に鍵の動きを送信することで、その動きを用いた予測演算によって、通信遅延の影響を小さくすることができる。 Predicting the sound production control information in this way is not limited to being applied to the case of controlling the automatic performance piano 1 at another communication base, but may be used for various interlocking operations. For example, the present invention can be applied to a configuration in which a keyboard device and a sound source device are connected wirelessly. For example, if a note-on is generated by pressing a key on a keyboard device and then transmitted to a sound source device, the timing of sound generation will be delayed due to communication delay. On the other hand, by transmitting the key movement to the sound source device before a note-on occurs on the keyboard device, the influence of communication delay can be reduced by predictive calculation using the movement.
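 As an illustration of predicting a note-on before it occurs, the following sketch linearly extrapolates recent key-position samples to estimate when the key will reach an assumed note-on depth; the threshold and the linear model are simplifications of the trained-model approach mentioned above, and all names are hypothetical.

```python
import numpy as np

NOTE_ON_DEPTH = 0.9  # assumed normalized key depth at which a note-on occurs


def predict_note_on(times: np.ndarray, depths: np.ndarray):
    """From recent (time, key-depth) samples, linearly extrapolate when the
    key will reach NOTE_ON_DEPTH and how fast it is moving.
    Returns (predicted_time, depth_rate) or None if the key is not closing."""
    if len(times) < 2:
        return None
    slope, intercept = np.polyfit(times, depths, 1)  # least-squares line
    if slope <= 0:
        return None                                  # key is not moving down
    t_hit = (NOTE_ON_DEPTH - intercept) / slope
    t_hit = max(t_hit, float(times[-1]))             # never predict the past
    return t_hit, float(slope)
```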
(2)自動演奏ピアノ1への鍵12の押下に応じて生成される鍵位置信号を用いて、同じ自動演奏ピアノ1における他の鍵12を制御してもよい。例えば、鍵12の押下に応じて、その鍵12の1オクターブ高い音に対応する鍵12が連動するように制御することもできる。連動する鍵12は、1オクターブ高い音に限らず、予め決められた音であればよい。予め決められた音は、押下された鍵12の音高に対して相対的に決められてもよいし、その音高とは関係なく絶対的に決められてもよい。このとき、発音制御情報ではなく鍵位置信号を用いることにより、演奏対象の鍵12の押下と、連動する鍵12の駆動との時間差を小さくすることができる。 (2) The key position signal generated in response to the depression of a key 12 on the automatic performance piano 1 may be used to control other keys 12 on the same automatic performance piano 1. For example, control can be performed such that, in response to the depression of a key 12, a key 12 corresponding to a tone one octave higher than that key 12 is linked. The interlocking key 12 is not limited to a tone one octave higher, but may be any predetermined tone. The predetermined tone may be determined relative to the pitch of the depressed key 12, or may be determined absolutely regardless of the pitch. At this time, by using the key position signal instead of the sound generation control information, it is possible to reduce the time difference between pressing the key 12 to be played and driving the interlocking key 12.
(3)制御装置20において、自動演奏ピアノ1への演奏内容が記録されるようにしてもよい。記録されるデータは、発音制御情報に基づくデータであってもよいし、鍵位置信号、ハンマ位置信号およびペダル位置信号などのセンサ30から出力される信号に応じたデータであってもよい。 (3) In the control device 20, the content of the performance on the automatic performance piano 1 may be recorded. The data to be recorded may be data based on the sound production control information, or may be data corresponding to signals output from the sensor 30, such as a key position signal, a hammer position signal, and a pedal position signal.
(4)演奏データ送信部133は、他の通信拠点における自動演奏ピアノ1の鍵12を駆動するかしないかを、鍵12毎に設定するための情報を演奏データに含めて送信してもよい。駆動の有無は、送信側の通信拠点において演奏者によって、演奏中に設定されてもよいし、特定の鍵または音域によって予め決められていてもよい。受信側の通信拠点において自動演奏ピアノ1は、駆動しない設定の鍵12に関する鍵位置信号に対しては、鍵12の駆動をせずに、加振器47を駆動する。 (4) The performance data transmitting unit 133 may include in the performance data information for setting whether or not to drive the keys 12 of the player piano 1 at another communication base for each key 12 and transmit the data. . The presence or absence of driving may be set by the performer at the transmission side communication base during the performance, or may be predetermined based on a specific key or range. At the communication base on the receiving side, the automatic performance piano 1 does not drive the key 12 but drives the vibrator 47 in response to a key position signal regarding the key 12 which is set not to be driven.
 (5) The environment collection device 82 may include a sensor attached to the performer, for example, a sensor that measures the performer's breathing. The automatic performance piano 1 may transmit the measurement result of the performer's breathing to another communication base as environmental data, and information corresponding to changes in breathing may be displayed on the display or the like of the environment providing device 88 at the other communication base. A performer's breathing is closely related to the performance motion. For example, a performer often takes a deep breath immediately before starting to play. Therefore, when a deep inhalation is detected, information indicating this, or the time remaining until the performance is expected to start, may be shown on the display. Assuming that the time from taking a deep breath to starting to play differs from performer to performer, this time may be set differently for each performer. In predicting this time, a trained model may be used in which the correlation between the timing of a deep breath and the time until the start of the performance has been machine-learned.
 (6) The environment providing device 88 may be a small movable device capable of providing various environments, and may, for example, have a humanoid shape imitating some character. For example, the environment providing device 88 may be a humanoid robot whose arms and hands move based on control signals. The environment providing device 88 may have a shape that can be worn by the performer (a wristwatch type, a shoulder-mounted type, a neck-hung type, or the like). The components that provide the various environments may be the display and speaker described above; they may also be, for example, a heat source, a cooling source, or a fan for controlling the temperature, or lighting, a projector, or the like for controlling the brightness, color, or patterns of the room. The environment providing device 88 may include, for example, a structure such as a robot arm for changing the position of the heat source or the like, or a plurality of heat sources may be arranged and one of them driven so that the position of the heat source effectively changes. The heat source may be used, for example, to reproduce the position of the performer at another communication base. The environment collection device 82 only needs to include sensors corresponding to the environment providing device 88, and may include, for example, a temperature sensor, an airflow sensor, an illuminance sensor, and the like.
(7)環境提供装置88は、複数のスピーカを含むことで、音像を定位させたり所定の音場を再現したりすることができてもよい。このとき、所定のリバーブ処理またはFIR等のフィルタ処理を環境データに含まれる音信号に付加してもよい。環境収集装置82において部屋の音場特性を再現するための情報を収集して環境データとして他の通信拠点に送信してもよい。これにより、受信側の通信拠点における環境提供装置88は、環境データに含まれる情報に基づいて、送信側の通信拠点における部屋の音場を再現するようにしてもよい。このとき、受信側の通信拠点における部屋の音場特性をキャンセルするための信号処理を含めることで、送信側の通信拠点における部屋の音場をより正確に再現するようにしてもよい。このような音場を再現する処理は、加振駆動信号に付加されてもよい。 (7) The environment providing device 88 may be able to localize a sound image or reproduce a predetermined sound field by including a plurality of speakers. At this time, predetermined reverb processing or filter processing such as FIR may be added to the sound signal included in the environmental data. The environment collecting device 82 may collect information for reproducing the sound field characteristics of the room and transmit the collected information to other communication bases as environmental data. Thereby, the environment providing device 88 at the communication base on the receiving side may reproduce the sound field of the room at the communication base on the transmitting side based on the information included in the environmental data. At this time, the sound field of the room at the transmitting side communication base may be more accurately reproduced by including signal processing for canceling the sound field characteristics of the room at the receiving side communication base. Processing to reproduce such a sound field may be added to the excitation drive signal.
(8)各通信拠点において同期された共通のメトロノームを音、光、振動等によって実現してもよい。絶対的な時刻で同期する場合には、例えば、GPS信号など衛星測位システムで用いられる時刻情報が用いられてもよいし、NTP(Network Time Protocol)による時刻同期技術が用いられてもよい。この場合には、BPMの値が設定され、拍の開始タイミングが時刻情報に基づいて決定されればよい。BPMの値は、予め設定された演奏曲に基づいて決められてもよいし、演奏者によって設定されてもよい。 (8) A common metronome synchronized at each communication base may be realized using sound, light, vibration, etc. When synchronizing using absolute time, for example, time information used in a satellite positioning system such as a GPS signal may be used, or a time synchronization technique based on NTP (Network Time Protocol) may be used. In this case, the BPM value may be set and the beat start timing may be determined based on time information. The BPM value may be determined based on a preset performance piece, or may be set by the performer.
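 A minimal sketch of a clock-based shared metronome, assuming the bases' clocks are already synchronized (for example via NTP or GNSS time) and that a common "beat zero" time has been agreed on; names and defaults are illustrative.

```python
import time


def next_beat_time(bpm: float, beat_zero_epoch: float, now=None) -> float:
    """Return the absolute time of the next metronome beat, given a shared
    synchronized clock and an agreed epoch that marks beat 0."""
    if now is None:
        now = time.time()
    period = 60.0 / bpm
    if now <= beat_zero_epoch:
        return beat_zero_epoch
    beats_elapsed = int((now - beat_zero_epoch) / period)
    return beat_zero_epoch + (beats_elapsed + 1) * period
```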
 Instead of using absolute time as the reference, one of the plurality of communication bases may be used as the reference for the metronome. In this case, the beat positions may be analyzed from the performance at the reference communication base and used as the metronome. When beat positions are detected from the performances at a plurality of communication bases, the beat positions that can be regarded as agreeing at the largest number of communication bases may be used as the metronome at the other communication bases as well. Predetermined data (data including sound generation control information, sound data, video data, and so on) may be played back in accordance with such a metronome. The predetermined data may be obtained by recording a performance. For example, a drum rhythm pattern may be played back according to the metronome settings.
 When the metronome is realized by vibration, the beat of the metronome may be conveyed to the performer by vibrating a movable component of the automatic performance piano 1. For example, the drive signal generation unit 145 may generate a drive signal so that the pedal 13 moves slightly at each beat of the metronome. In the case of a damper pedal, the amount by which the pedal 13 is moved is a slight amount such that the damper 18 does not separate from the strings 15. The component that moves at each beat of the metronome is not limited to the pedal 13 and may be any of the keys 12; in this case, it is preferable that the key 12 be depressed only by an amount small enough that no string strike or note-on occurs and no sound is produced.
(9)送信される演奏データにおいて演奏データを送信するときの時刻情報が含まれてもよい。このようにすると、複数の通信拠点から受信された演奏データを、時刻情報に応じて時間軸上で調整することで、通信遅延における時間軸上のずれを無くすように補正することできる。例えば、自動演奏ピアノ1を演奏しなかったり、演奏したとしても他の通信拠点へ演奏データを送信しなかったりする場合であれば、他の複数の通信拠点からの演奏データが通信遅延によってそれぞれ異なるタイミングで受信されたとしても、時刻情報が揃うように時間軸上で演奏データをずらすことで、同じ遅延量であるものとして自動演奏ピアノ1を駆動することができる。 (9) The transmitted performance data may include time information indicating when the performance data was transmitted. In this way, by adjusting the performance data received from a plurality of communication bases on the time axis according to the time information, deviations on the time axis caused by communication delays can be corrected. For example, in a case where the automatic performance piano 1 is not played, or is played but its performance data is not transmitted to the other communication bases, even if performance data from a plurality of other communication bases is received at different timings because of communication delays, the automatic performance piano 1 can be driven as if the delay amounts were the same by shifting the performance data on the time axis so that the time information is aligned.
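One possible realization of this time-axis alignment is sketched below in Python; the event format (a dict carrying the sender's timestamp) and the fixed playback latency are assumptions for illustration, and the latency must exceed the worst expected network delay for the scheme to work.

```python
def align_events(events, playback_latency=0.2):
    """Reschedule received performance events on a common time axis.

    Each event is assumed to be a dict with
      'sent_at'          - the sender's timestamp carried in the data
      'note', 'velocity' - the sounding content.
    Every event is scheduled at sent_at + playback_latency, so differing
    network delays no longer disturb the relative timing between bases.
    """
    return sorted(
        ({"play_at": ev["sent_at"] + playback_latency, **ev} for ev in events),
        key=lambda ev: ev["play_at"],
    )

received = [
    {"sent_at": 10.000, "note": 60, "velocity": 80},  # arrived late
    {"sent_at": 10.050, "note": 64, "velocity": 72},  # arrived early
]
for ev in align_events(received):
    print(ev["play_at"], ev["note"])
```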
 この時刻情報を利用して、各通信拠点から届く演奏データの遅延時間が認識できる。駆動信号生成部145は、この遅延時間が大きいほど、ベロシティの値を小さくするようにして駆動信号を生成してもよい。駆動信号生成部145は、遅延時間が大きいほど、残響を付すようにしてもよい。このようにすると、遅延時間の長さを距離の大きさによる効果とした発音を、自動演奏ピアノ1において実現することができる。すなわち、遅延時間が大きいことを遠い場所での演奏とした感覚を聴取者に与えることができる。各通信拠点に関して、遅延時間の大きさを視覚的に示す画像がディスプレイに表示されてもよい。各通信拠点に関して、AR(Augmented Reality)を用いて遅延時間の大きさを視覚的に示す画像が提示されてもよい。例えば、遅延時間をAR空間上の位置・距離関係に変換して、各通信拠点に関する画像が提示されればよい。 Using this time information, the delay time of the performance data arriving from each communication base can be recognized. The drive signal generation unit 145 may generate the drive signal such that the longer the delay time, the smaller the velocity value. The drive signal generation unit 145 may also add more reverberation as the delay time increases. In this way, the automatic performance piano 1 can realize sound production in which the length of the delay time is perceived as an effect of distance. In other words, a long delay time can give the listener the feeling that the performance is being played at a distant location. For each communication base, an image that visually indicates the magnitude of the delay time may be displayed on the display. For each communication base, an image visually indicating the magnitude of the delay time may also be presented using AR (Augmented Reality). For example, the delay time may be converted into a position/distance relationship in the AR space, and an image for each communication base may be presented accordingly.
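A hypothetical Python sketch of such delay-dependent shaping is shown below; the linear mapping and the 0.5-second reference delay are assumed values chosen only to illustrate that a longer delay sounds softer and more reverberant, as if played from farther away.

```python
def shape_by_delay(velocity: int, delay_s: float,
                   max_delay_s: float = 0.5) -> tuple[int, float]:
    """Scale velocity down and raise a reverb-send level as delay grows."""
    ratio = min(delay_s / max_delay_s, 1.0)
    scaled_velocity = max(1, int(velocity * (1.0 - 0.5 * ratio)))
    reverb_send = ratio  # 0.0 (dry) .. 1.0 (maximum reverberation)
    return scaled_velocity, reverb_send

print(shape_by_delay(100, 0.05))  # short delay: nearly unchanged
print(shape_by_delay(100, 0.40))  # long delay: softer, more reverb
```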
(10)制御装置20は、複数の通信拠点間での演奏データを互いに比較して相関度を演算して、相関度をディスプレイに表示するようにしてもよい。相関度は、例えば、信号処理またはDNN(Deep Neural Network)を用いて演算されてもよい。このとき、上述のように、複数の通信拠点間で時刻情報が揃うように調整された演奏データを用いて相関度を演算してもよい。制御装置20は、受信した演奏データを解析してコードを特定したり、ビート位置を特定したりして、特定した情報をディスプレイに表示してもよい。このとき、複数の通信拠点における演奏データのうち、最も尤度が高いコード、ビート位置をディスプレイに表示するようにしてもよい。このコードの構成音に相当する鍵12が演奏者に認識させるように、鍵盤に光が照射されてもよい。 (10) The control device 20 may calculate the degree of correlation by comparing performance data between a plurality of communication bases, and display the degree of correlation on the display. The degree of correlation may be calculated using signal processing or DNN (Deep Neural Network), for example. At this time, as described above, the degree of correlation may be calculated using performance data that has been adjusted so that time information is consistent between a plurality of communication bases. The control device 20 may analyze the received performance data to identify chords or beat positions, and display the identified information on the display. At this time, the most likely chord and beat position among the performance data at the plurality of communication bases may be displayed on the display. The keyboard may be illuminated with light so that the player can recognize the keys 12 that correspond to the constituent notes of this chord.
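As one hedged example of computing such a degree of correlation with plain signal processing (rather than a DNN), the recent notes from two bases could be reduced to pitch-class profiles and compared with a Pearson correlation; the profile representation is an assumption made for illustration.

```python
import math

def pitch_class_profile(notes):
    """Histogram of the 12 pitch classes for a window of MIDI note numbers."""
    profile = [0.0] * 12
    for n in notes:
        profile[n % 12] += 1.0
    return profile

def correlation(xs, ys):
    """Pearson correlation between two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

base_a = pitch_class_profile([60, 64, 67, 72])  # C major notes
base_b = pitch_class_profile([48, 52, 55, 60])  # C major, an octave lower
print(round(correlation(base_a, base_b), 2))    # -> 1.0 (highly correlated)
```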
(11)制御装置20は、受信した演奏データからコードを解析し、そのコードの尤度が所定値より高い場合には、現在のコードとして特定する。制御装置20は、鍵12への演奏操作に応じて加振器47による発音をする場合、コードに対応する音以外の鍵12への演奏操作は、加振器47を駆動しないように制御する。 (11) The control device 20 analyzes the chord from the received performance data and, if the likelihood of that chord is higher than a predetermined value, identifies it as the current chord. When sound is produced by the vibrator 47 in response to playing operations on the keys 12, the control device 20 performs control so that playing operations on keys 12 other than those corresponding to the notes of the chord do not drive the vibrator 47.
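A minimal Python sketch of this chord-based gating, assuming the current chord is held as a set of pitch classes, might look as follows; the representation and names are hypothetical.

```python
# Pitch classes of the chord currently identified from the received data
# (C major in this example).
CURRENT_CHORD_PITCH_CLASSES = {0, 4, 7}

def should_drive_exciter(midi_note: int,
                         chord_pcs=CURRENT_CHORD_PITCH_CLASSES) -> bool:
    """Return True only for keys whose pitch class belongs to the chord."""
    return (midi_note % 12) in chord_pcs

print(should_drive_exciter(64))  # E over C major -> True, drive the vibrator
print(should_drive_exciter(61))  # C#             -> False, suppress the sound
```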
(12)制御装置20は、受信した演奏データからビート位置を解析し、そのビート位置の尤度が所定値より高い場合には、現在のビート位置として特定する。制御装置20は、鍵12への演奏操作に応じて加振器47による発音をする場合、次に予測されるビート位置に至るまでに所定時間の範囲内に鍵12への押下が発生したときには、予測されるビート位置まで遅延させることで加振器47による発音を実現する。このようにして、演奏音をビート位置に合わせるようにしてもよい。 (12) The control device 20 analyzes the beat position from the received performance data and, if the likelihood of that beat position is higher than a predetermined value, identifies it as the current beat position. When sound is produced by the vibrator 47 in response to a playing operation on the key 12, if the key 12 is pressed within a predetermined time range before the next predicted beat position, the control device 20 delays the sound production by the vibrator 47 until the predicted beat position. In this way, the performance sound may be matched to the beat position.
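The described quantization toward the predicted beat could be sketched in Python as follows; the 80 ms window is an assumed value, not one given in the description.

```python
def schedule_note(press_time: float, next_beat: float,
                  window: float = 0.08) -> float:
    """Return the time at which the vibrator should actually sound.

    If the key press lands within `window` seconds before the predicted
    beat, sounding is postponed to the beat; otherwise it sounds at once.
    """
    if 0.0 <= next_beat - press_time <= window:
        return next_beat
    return press_time

print(schedule_note(press_time=9.95, next_beat=10.0))  # -> 10.0 (snapped)
print(schedule_note(press_time=9.70, next_beat=10.0))  # -> 9.7  (unchanged)
```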
(13)制御装置20は、受信した演奏データから音量を特定するとともに、自動演奏ピアノ1に対する自分の演奏による音量を特定する。音量は、例えば、過去の所定時間におけるベロシティの平均値によって特定される。駆動信号生成部145は、受信した演奏データの音量が自分の演奏による音量に近づくように調整して、鍵駆動信号または加振駆動信号を生成する。音量を調整するときは、急激に変化させるのではなく、徐々に変化させてもよい。このようにすると、合奏における音量バランスを調整することができる。音量バランスは、予め設定することによって、いずれか一方の音量が相対的に大きくなるようにしてもよい。 (13) The control device 20 specifies the volume from the received performance data, and also specifies the volume of the player's own performance on the automatic performance piano 1. The volume is specified, for example, by the average value of velocity over a predetermined period of time in the past. The drive signal generation unit 145 generates a key drive signal or an excitation drive signal by adjusting the volume of the received performance data so that it approaches the volume of the player's own performance. When adjusting the volume, the volume may be changed gradually rather than abruptly. In this way, the volume balance in the ensemble can be adjusted. The volume balance may be set in advance so that one of the volumes becomes relatively louder.
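A hypothetical Python sketch of this gradual volume balancing, taking the average velocity over a recent window as the loudness measure, is shown below; the window length and smoothing factor are assumed values.

```python
from collections import deque

class VolumeBalancer:
    """Gradually scale remote velocities toward the local loudness."""

    def __init__(self, window: int = 32, smoothing: float = 0.1):
        self.local = deque(maxlen=window)
        self.remote = deque(maxlen=window)
        self.gain = 1.0
        self.smoothing = smoothing

    def on_local_note(self, velocity: int) -> None:
        self.local.append(velocity)

    def on_remote_note(self, velocity: int) -> int:
        self.remote.append(velocity)
        if self.local and self.remote:
            local_avg = sum(self.local) / len(self.local)
            remote_avg = sum(self.remote) / len(self.remote)
            # move the gain a little toward the target each note
            self.gain += self.smoothing * (local_avg / remote_avg - self.gain)
        return max(1, min(127, int(velocity * self.gain)))

balancer = VolumeBalancer()
balancer.on_local_note(60)           # the local player is fairly soft
print(balancer.on_remote_note(110))  # the loud remote note is eased down
```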
(14)制御装置20は、鍵12への演奏操作に応じて加振器47による発音をする場合、鍵12の押下に応じた発音のタイミングを遅延させてもよい。このとき、他の通信拠点に送信される演奏データおよび他の通信拠点から送信された演奏データは遅延させない。これにより、演奏者は遅延時間を考慮して早めに鍵12を押下するように演奏するため、合奏における通信遅延の影響を少なくすることができる。 (14) When the control device 20 causes the vibrator 47 to generate sound in response to a performance operation on the key 12, the control device 20 may delay the timing of the sound generation in response to the depression of the key 12. At this time, performance data transmitted to other communication bases and performance data transmitted from other communication bases are not delayed. Thereby, the performer plays by pressing down the key 12 early in consideration of the delay time, so that the influence of communication delay on the ensemble performance can be reduced.
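One way this local-only delay could be wired up is sketched below in Python; the callback names and the fixed delay estimate are placeholders for illustration and not part of the description.

```python
import threading

NETWORK_DELAY_ESTIMATE = 0.15  # seconds; an assumed, measured value

def on_key_pressed(note: int, velocity: int, send_to_peers, drive_exciter) -> None:
    """Route a local key press as in modification (14).

    The performance data is sent to the other bases immediately, while
    local sounding through the vibrator is deferred by roughly the
    network delay, so the player learns to play slightly ahead.
    """
    send_to_peers(note, velocity)  # no added delay toward the network
    threading.Timer(NETWORK_DELAY_ESTIMATE,
                    drive_exciter, args=(note, velocity)).start()

# Minimal usage with print() standing in for the real callbacks.
on_key_pressed(60, 90,
               send_to_peers=lambda n, v: print("sent", n, v),
               drive_exciter=lambda n, v: print("sounded", n, v))
```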
(15)いずれかの通信拠点において自動演奏ピアノ1が設置されているのではなく、センサ30および駆動装置40が配置されていないアコースティックピアノである場合には、制御装置20は、センサ30および駆動装置40に関連する構成を含んでいなくてもよく、デスクトップパソコン、タブレットコンピュータ等で構成されてもよい。 (15) If what is installed at one of the communication bases is not the automatic performance piano 1 but an acoustic piano in which the sensor 30 and the drive device 40 are not provided, the control device 20 need not include the components related to the sensor 30 and the drive device 40, and may be configured as a desktop computer, a tablet computer, or the like.
 この場合には、制御装置20は、演奏音を演奏データに変換して、他の通信拠点に送信してもよい。演奏音はマイクロフォンにより収集されればよく、制御装置20は、収集された演奏音に含まれる構成音を解析して、発音制御情報に変換することで、演奏データを生成してもよい。このような処理によれば、ピアノ以外の楽器にも適用することができる。 In this case, the control device 20 may convert the performance sound into performance data and transmit it to another communication base. The performance sounds may be collected by a microphone, and the control device 20 may generate performance data by analyzing constituent sounds included in the collected performance sounds and converting them into sound production control information. Such processing can also be applied to musical instruments other than pianos.
(16)環境収集装置82は、鍵盤蓋11の開閉を検出するセンサ、椅子への演奏者の着座を検出するセンサなどを有してもよい。この場合には、環境提供装置88は、鍵盤蓋11の開閉、椅子への演奏者の着座を表示するディスプレイを有してもよい。環境提供装置88は、制御信号に応じて鍵盤蓋11を開閉する構造を有してもよい。この場合には、特定の通信拠点における鍵盤蓋11の開閉に対応して、他の通信拠点における鍵盤蓋11が連動してもよい。 (16) The environment collecting device 82 may include a sensor that detects the opening and closing of the keyboard lid 11, a sensor that detects the seating of the performer on the chair, and the like. In this case, the environment providing device 88 may have a display that displays the opening and closing of the keyboard lid 11 and the seating of the performer on the chair. The environment providing device 88 may have a structure that opens and closes the keyboard lid 11 in response to a control signal. In this case, in response to the opening and closing of the keyboard lid 11 at a specific communication base, the keyboard lids 11 at other communication bases may be linked.
(17)自動演奏ピアノ1における鍵盤楽器10は、グランドピアノ等のアコースティックピアノに限らず、電子鍵盤楽器であってもよい。電子鍵盤楽器は、鍵12に相当する構造を有する鍵盤装置であってもよいし、鍵12がシート状の構造を有する鍵盤装置であってもよい。シート状の構造を有する鍵盤装置の場合には、床に置いて足で踏んで演奏することもできるから、手を使用することができない状況においても演奏することができる。足で演奏するような鍵盤装置の場合には、演奏可能な音域が狭いことがある。このような場合には、互いに異なる音域が予め設定された複数の鍵盤装置を用いることにより、複数人で演奏するようにしてもよい。シート状の構造を有する鍵盤装置の場合、ベッドのサイドテーブルの裏面に配置されてもよい。この場合には、サイドテーブルを支持する支持部材において、サイドテーブルの表面と裏面とのいずれかが上面を向くように切り替えられる回転機構が設ければよい。 (17) The keyboard instrument 10 in the automatic performance piano 1 is not limited to an acoustic piano such as a grand piano, and may be an electronic keyboard instrument. The electronic keyboard instrument may be a keyboard device having a structure corresponding to the keys 12, or a keyboard device in which the keys 12 have a sheet-like structure. A keyboard device having a sheet-like structure can be placed on the floor and played by stepping on it with the feet, so it can be played even in situations where the hands cannot be used. A keyboard device played with the feet may have a narrow playable range. In such a case, a plurality of keyboard devices, each preset with a different range, may be used so that a plurality of people can perform together. A keyboard device having a sheet-like structure may be placed on the back surface of a bedside table. In this case, the support member that supports the side table may be provided with a rotation mechanism that switches the side table so that either its front surface or its back surface faces upward.
(18)制御装置20の機能の少なくとも一部は、ビデオ会議システムを実現するソフトウエアにおけるプラグインとして用意されてもよい。 (18) At least part of the functions of the control device 20 may be provided as a plug-in in software that implements the video conference system.
(19)通信拠点間を接続するネットワークNWは、光ケーブル等で実現される専用回線であってもよい。 (19) The network NW that connects the communication bases may be a dedicated line realized by an optical cable or the like.
(20)環境収集装置82および環境提供装置88は、自動演奏ピアノ1に着脱可能に取り付けるための構成を含んでいてもよい。自動演奏ピアノ1においても環境収集装置82および環境提供装置88を取り付けるための構成を含んでいてもよい。この場合には、環境収集装置82または環境提供装置88は、自動演奏ピアノ1に取り付けられることで、インターフェイス26に接続されてもよい。 (20) The environment collecting device 82 and the environment providing device 88 may include a configuration for detachably attaching them to the automatic performance piano 1. The player piano 1 may also include a structure for attaching the environment collecting device 82 and the environment providing device 88. In this case, the environment collecting device 82 or the environment providing device 88 may be connected to the interface 26 by being attached to the automatic performance piano 1.
 以上が変形例に関する説明である。 This concludes the description of the modifications.
 以上のとおり、本発明の一実施形態によれば、第1通信拠点における鍵盤楽器に対する演奏内容を含む第1演奏データを第2通信拠点に送信する第1送信部と、前記第2通信拠点から第2演奏データを受信する第1受信部と、前記第2演奏データに応じた発音をするための駆動信号を生成して、前記第1通信拠点における発音装置に出力する第1生成部と、を含み、前記第1演奏データおよび前記第2演奏データの少なくとも一方は、前記鍵盤楽器における鍵の押下量を示す鍵位置信号を含む、制御装置が提供される。 As described above, according to one embodiment of the present invention, there is provided a control device including: a first transmitting unit that transmits first performance data, including performance content for a keyboard instrument at a first communication base, to a second communication base; a first receiving unit that receives second performance data from the second communication base; and a first generating unit that generates a drive signal for producing sound according to the second performance data and outputs it to a sound production device at the first communication base, wherein at least one of the first performance data and the second performance data includes a key position signal indicating a depression amount of a key of the keyboard instrument.
 前記発音装置は、前記鍵盤楽器の響板に接続された加振器を含んでもよい。前記第2演奏データに応じた発音は、前記駆動信号に応じた前記加振器の振動によって生じてもよい。 The sounding device may include a vibrator connected to a soundboard of the keyboard instrument. The sound generation according to the second performance data may be generated by vibration of the vibrator according to the drive signal.
 前記発音装置は、前記鍵盤楽器の鍵、前記鍵に連動するハンマおよび前記ハンマに打撃される弦を含んでもよい。前記第2演奏データに応じた発音は、前記駆動信号に応じた前記鍵の駆動によって生じてもよい。前記駆動信号は、前記鍵位置信号に応じた前記押下量を再現するように前記鍵を駆動するための信号であってもよい。 The sounding device may include a key of the keyboard instrument, a hammer interlocked with the key, and a string struck by the hammer. The sound generation according to the second performance data may be generated by driving the key according to the drive signal. The drive signal may be a signal for driving the key so as to reproduce the depression amount according to the key position signal.
 前記第1通信拠点における環境収集装置によって収集された周囲環境の情報に応じた第1環境データを取得して前記第2通信拠点に送信する第2送信部と、前記第2通信拠点から第2環境データを受信する第2受信部と、前記第2環境データに応じた周囲環境を提供するための制御信号を生成して、前記第1通信拠点における環境提供装置に出力する第2生成部と、をさらに含んでもよい。 The control device may further include: a second transmitting unit that acquires first environmental data according to information on the surrounding environment collected by an environment collecting device at the first communication base and transmits the first environmental data to the second communication base; a second receiving unit that receives second environmental data from the second communication base; and a second generating unit that generates a control signal for providing a surrounding environment according to the second environmental data and outputs the control signal to an environment providing device at the first communication base.
1,1a,1b:自動演奏ピアノ、10:鍵盤楽器、11:鍵盤蓋、12:鍵、13:ペダル、14:ハンマ、15:弦、16,16H,16L:駒、17:響板、17a:響棒、18:ダンパ、19:直支柱、20:制御装置、21:制御部、22:記憶部、23:操作パネル、24:通信部、25:音源部、26:インターフェイス、27:バス、30:センサ、32:鍵センサ、33:ペダルセンサ、34:ハンマセンサ、37H,37L:ピックアップセンサ、40:駆動装置、42:鍵駆動装置、43:ペダル駆動装置、44:ストッパ、47,47H,47L:加振器、48:ダンパ駆動装置、49H,49L:支持部、50:椅子、82:環境収集装置、88:環境提供装置、100:合奏制御機能、121:環境データ生成部、123:環境データ送信部、131:演奏データ生成部、133:演奏データ送信部、143:演奏データ受信部、145,145A:駆動信号生成部、183:環境データ受信部、185,185B:制御信号生成部、821,822:振動測定板、823:マイクロフォン、881,882:振動発生板、883:スピーカ、1000:サーバ、1451:クロストーク処理部、1453:音響付与部、1455:増幅部、1851:自画像取得部、1853:遠隔画像取得部、1855:画像合成部 1, 1a, 1b: automatic piano, 10: keyboard instrument, 11: keyboard lid, 12: key, 13: pedal, 14: hammer, 15: strings, 16, 16H, 16L: bridge, 17: soundboard, 17a : Sound bar, 18: Damper, 19: Straight column, 20: Control device, 21: Control section, 22: Storage section, 23: Operation panel, 24: Communication section, 25: Sound source section, 26: Interface, 27: Bus , 30: sensor, 32: key sensor, 33: pedal sensor, 34: hammer sensor, 37H, 37L: pickup sensor, 40: drive device, 42: key drive device, 43: pedal drive device, 44: stopper, 47, 47H, 47L: Vibrator, 48: Damper drive device, 49H, 49L: Support section, 50: Chair, 82: Environment collection device, 88: Environment provision device, 100: Ensemble control function, 121: Environment data generation section, 123: Environmental data transmitting section, 131: Performance data generating section, 133: Performance data transmitting section, 143: Performance data receiving section, 145, 145A: Drive signal generating section, 183: Environmental data receiving section, 185, 185B: Control signal Generation unit, 821, 822: Vibration measurement plate, 823: Microphone, 881, 882: Vibration generation plate, 883: Speaker, 1000: Server, 1451: Crosstalk processing unit, 1453: Sound imparting unit, 1455: Amplification unit, 1851 : Self-portrait acquisition unit, 1853: Remote image acquisition unit, 1855: Image composition unit

Claims (4)

  1.  第1通信拠点における鍵盤楽器に対する演奏内容を含む第1演奏データを第2通信拠点に送信する第1送信部と、
     前記第2通信拠点から第2演奏データを受信する第1受信部と、
     前記第2演奏データに応じた発音をするための駆動信号を生成して、前記第1通信拠点における発音装置に出力する第1生成部と、
     を含み、
     前記第1演奏データおよび前記第2演奏データの少なくとも一方は、前記鍵盤楽器における鍵の押下量を示す鍵位置信号を含む、
     制御装置。
    A control device comprising:
    a first transmitting unit that transmits first performance data, including performance content for a keyboard instrument at a first communication base, to a second communication base;
    a first receiving unit that receives second performance data from the second communication base; and
    a first generation unit that generates a drive signal for producing sound according to the second performance data and outputs the drive signal to a sound production device at the first communication base,
    wherein at least one of the first performance data and the second performance data includes a key position signal indicating a depression amount of a key of the keyboard instrument.
  2.  前記発音装置は、前記鍵盤楽器の響板に接続された加振器を含み、
     前記第2演奏データに応じた発音は、前記駆動信号に応じた前記加振器の振動によって生じる、
     請求項1に記載の制御装置。
    The control device according to claim 1, wherein
    the sounding device includes a vibrator connected to a soundboard of the keyboard instrument, and
    the sound generation according to the second performance data is generated by vibration of the vibrator according to the drive signal.
  3.  前記発音装置は、前記鍵盤楽器の鍵、前記鍵に連動するハンマおよび前記ハンマに打撃される弦を含み、
     前記第2演奏データに応じた発音は、前記駆動信号に応じた前記鍵の駆動によって生じ、
     前記駆動信号は、前記鍵位置信号に応じた前記押下量を再現するように前記鍵を駆動するための信号である、請求項1に記載の制御装置。
    The control device according to claim 1, wherein
    the sounding device includes a key of the keyboard instrument, a hammer interlocked with the key, and a string struck by the hammer,
    the sound generation according to the second performance data is generated by driving the key according to the drive signal, and
    the drive signal is a signal for driving the key so as to reproduce the depression amount according to the key position signal.
  4.  前記第1通信拠点における環境収集装置によって収集された周囲環境の情報に応じた第1環境データを取得して前記第2通信拠点に送信する第2送信部と、
     前記第2通信拠点から第2環境データを受信する第2受信部と、
     前記第2環境データに応じた周囲環境を提供するための制御信号を生成して、前記第1通信拠点における環境提供装置に出力する第2生成部と、
     をさらに含む、請求項1から請求項3のいずれかに記載の制御装置。
    The control device according to any one of claims 1 to 3, further comprising:
    a second transmitting unit that acquires first environmental data according to information on the surrounding environment collected by an environment collecting device at the first communication base and transmits the first environmental data to the second communication base;
    a second receiving unit that receives second environmental data from the second communication base; and
    a second generating unit that generates a control signal for providing a surrounding environment according to the second environmental data and outputs the control signal to an environment providing device at the first communication base.
PCT/JP2023/010952 2022-04-06 2023-03-20 Control device WO2023195333A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022063525A JP2023154288A (en) 2022-04-06 2022-04-06 Control device
JP2022-063525 2022-04-06

Publications (1)

Publication Number Publication Date
WO2023195333A1 true WO2023195333A1 (en) 2023-10-12

Family

ID=88242727

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/010952 WO2023195333A1 (en) 2022-04-06 2023-03-20 Control device

Country Status (2)

Country Link
JP (1) JP2023154288A (en)
WO (1) WO2023195333A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0552869U (en) * 1991-12-25 1993-07-13 カシオ計算機株式会社 Electronic musical instrument system
JP2002091291A (en) * 2000-09-20 2002-03-27 Vegetable House:Kk Data communication system for piano lesson
JP2009098683A (en) * 2007-09-28 2009-05-07 Yamaha Corp Performance system
JP2013015643A (en) * 2011-07-01 2013-01-24 Yamaha Corp Performance data transmitter and performance data receiver
JP2016081039A (en) * 2014-10-17 2016-05-16 ヤマハ株式会社 Acoustic system

Also Published As

Publication number Publication date
JP2023154288A (en) 2023-10-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23784623

Country of ref document: EP

Kind code of ref document: A1