WO2021038833A1 - Acoustic space creation apparatus - Google Patents

Acoustic space creation apparatus Download PDF

Info

Publication number
WO2021038833A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
speaker
processing unit
state
predetermined number
Prior art date
Application number
PCT/JP2019/034122
Other languages
French (fr)
Japanese (ja)
Inventor
潤耶 及川
Original Assignee
ソニフィデア合同会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニフィデア合同会社 filed Critical ソニフィデア合同会社
Priority to EP19942900.2A priority Critical patent/EP4024391A4/en
Priority to JP2019564546A priority patent/JP6710428B1/en
Priority to PCT/JP2019/034122 priority patent/WO2021038833A1/en
Publication of WO2021038833A1 publication Critical patent/WO2021038833A1/en

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/0008 Associated control or indicating means
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/155 Musical effects
    • G10H 2210/265 Acoustic effect simulation, i.e. volume, spatial, resonance or reverberation effects added to a musical sound, usually by appropriate filtering or delays
    • G10H 2210/281 Reverberation or echo
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/155 Musical effects
    • G10H 2210/265 Acoustic effect simulation, i.e. volume, spatial, resonance or reverberation effects added to a musical sound, usually by appropriate filtering or delays
    • G10H 2210/295 Spatial effects, musical uses of multiple audio channels, e.g. stereo
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H 2220/155 User input interfaces for electrophonic musical instruments
    • G10H 2220/201 User input interfaces for electrophonic musical instruments for movement interpretation, i.e. capturing and recognizing a gesture or a specific kind of movement, e.g. to control a musical instrument
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H 2220/155 User input interfaces for electrophonic musical instruments
    • G10H 2220/401 3D sensing, i.e. three-dimensional (x, y, z) position or movement sensing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H 2220/155 User input interfaces for electrophonic musical instruments
    • G10H 2220/441 Image sensing, i.e. capturing images or optical patterns for musical purposes or musical control purposes
    • G10H 2220/455 Camera input, e.g. analyzing pictures from a video camera and using the analysis results as control data

Definitions

  • The present invention relates to an acoustic space generation device.
  • Patent Document 1 discloses a musical tone generator that, in response to the motion of a person captured as the camera's subject, creates musical tone data based on the position and the amount of change of that motion and outputs it as sound. With this generator, the performer needs no knowledge or skill for operating an automatic music-playing device or for playing an instrument; simply by moving in front of the camera, the performer can easily give an improvised performance.
  • A device that outputs sound in reaction to the movement of a subject is thus known; however, a device that recognized the subject's movement from a viewpoint different from the conventional technology, and played the new musical tones generated thereby, would make the musical sound produced in response to the subject's movement even more enjoyable.
  • In view of this, the main purpose of the present invention is to provide a device that recognizes the state of movement of an object (for example, the movement of a performer) from a viewpoint different from the prior art, and plays the new musical tones generated thereby, so that those tones can be enjoyed all the more.
  • The present invention was conceived with a focus on the simple actions of "moving" and "stopping", with the aim of turning the temporal sense (the intervals) of various actions, such as physical expression, breathing, pauses ("ma"), and the on/off of things, into sounds or syllables. The present invention therefore provides a device that focuses mainly on the repetition of "movement" and "stillness" of an object and, based on this, generates and organizes melodies, syllables, and an acoustic space according to a specific scale.
  • Specifically, the present invention is an acoustic space generation device comprising: a camera for photographing an object; a light source for irradiating the object; an input unit for acquiring images from the camera; a processing unit for selecting the sound to be generated according to each frame of the video; and a plurality of speakers for outputting the sound selected by the processing unit. The processing unit compares the light amount of a frame with the light amount of the immediately preceding frame and determines, based on a preset light-amount threshold, whether the object is in a moving state or a stationary state. When the object is determined to be moving, the processing unit generates a motion detection signal, selects the pitch to be output according to a pre-stored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch according to a pre-stored spatial structure program, and outputs the sound from the assigned speaker. When the moving state occurs a first predetermined number of times in succession within a first unit time, the number of such occurrences reaches a second predetermined number, the object is then determined to be moving after having been stationary for at least a second unit time, and a specific speaker among the plurality of speakers is assigned to a predetermined sound, the music structure program generates a sine wave from that specific speaker. When the moving state occurs the first predetermined number of times in succession within the first unit time, the number of such occurrences reaches a third predetermined number, the object is then determined to be moving after having been stationary for at least the second unit time, and the specific speaker is assigned to the selected sound, the music structure program outputs a first reverberation sound from that specific speaker. When the moving state occurs a fourth predetermined number of times in succession within the first unit time, the music structure program outputs a second reverberation sound from the speakers in a predetermined order.
  • The present invention also provides an acoustic space generation device comprising: a sensor capable of detecting the state of an object; an input unit for acquiring a signal from the sensor; a processing unit for selecting the sound to be generated according to the signal; and a plurality of speakers for outputting the sound selected by the processing unit. The processing unit compares a signal with the immediately preceding signal and determines, based on a preset signal threshold, whether the object is in a moving state or a stationary state. When the object is determined to be moving, the processing unit generates a motion detection signal, selects the pitch to be output according to the pre-stored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch according to the pre-stored spatial structure program, and outputs the sound from the assigned speaker. When the moving state occurs a first predetermined number of times in succession within a first unit time, the number of such occurrences reaches a second predetermined number, the object is then determined to be moving after having been stationary for at least a second unit time, and a specific speaker among the plurality of speakers is assigned to a predetermined sound, the music structure program generates a sine wave from that specific speaker. When the moving state occurs the first predetermined number of times in succession within the first unit time, the number of such occurrences reaches a third predetermined number, the object is then determined to be moving after having been stationary for at least the second unit time, and a specific speaker is assigned to the selected sound, the music structure program outputs a first reverberation sound from that speaker. When the moving state occurs a fourth predetermined number of times in succession within the first unit time, the music structure program outputs a second reverberation sound from the speakers in a predetermined order.
  • According to the present invention, the states of motion and rest are each determined from the movement of an object (for example, a subject), a sound is selected according to the frequency, period, number of occurrences, and so on of each state, and the selected sound is output as a musical sound. The present invention therefore makes it easy to play musical sounds that follow the manner of the object's movement, and easy to provide an acoustic space in which such musical sounds are played. Furthermore, the present invention makes it possible to play musical sounds, or sounds of new syllables, generated from the repeated alternation of the object's motion and rest.
  • FIG. 1 is a schematic diagram showing an example of the acoustic space generation device according to the embodiment, where (a) is a view from the front side and (b) is a view from above. FIG. 2 is a block diagram showing the main part of the acoustic space generation device of FIG. 1. FIG. 3 is a diagram showing an example of a usage state of the acoustic space generation device of FIG. 1. FIG. 4 is a flowchart showing an example of the operation of the acoustic space generation device of FIG. 1. FIG. 5 is a flowchart showing the processing of the processing unit when outputting a sound of a predetermined pitch. FIG. 6 is a diagram showing the audio file of pitches. FIG. 7 is a diagram for explaining an example of the operation of the processing unit.
  • FIGS. 1A and 1B show an outline of an example of the configuration of the acoustic space generation device 100 according to the embodiment; FIG. 1A is a view from the front side and FIG. 1B is a view from above. Note that FIG. 1A is drawn with the front wall portion 11b shown as transparent, and FIG. 1B with the ceiling portion 11c shown as transparent.
  • The acoustic space generation device 100 is a device that generates melodies and syllables according to a specific scale and creates the acoustic space 10 in which they are played.
  • The acoustic space 10 is a space for playing music.
  • The acoustic space 10 is the internal space of the structure 11.
  • The structure 11 is a container-like structure, formed hollow and movable. As shown in FIG. 1, the structure 11 comprises, for example, a floor surface 11a that is substantially square in plan view, wall surfaces 11b rising from each side of the floor surface 11a, and a ceiling surface 11c arranged above the floor surface 11a so as to close off the internal space.
  • The acoustic space 10 is the space defined by the floor surface 11a, the wall surfaces 11b, and the ceiling surface 11c of the structure 11.
  • The acoustic space 10 is not limited to the internal space of a container-like structure such as the structure 11 described above; any space capable of propagating sound can be used. That is, the acoustic space 10 may be, for example, the internal space of a building such as a concert hall or an event hall, or an underground space. Nor is the acoustic space 10 limited to a closed space; it may be, for example, the space on an outdoor stage or above the ground surface. In such cases, the camera 20, the light source 30, the stand 40, the plurality of speakers 50, and so on, described later, are installed on, for example, the outdoor stage or the ground.
  • The acoustic space generation device 100 includes a camera 20, a light source 30, a stand 40, a plurality of speakers 50, and an information processing device 60 (see FIG. 2).
  • The camera 20, the light source 30, and the stand 40 are installed inside the acoustic space 10, but they may instead be installed outside the acoustic space 10.
  • The camera 20 and the like may also be installed at a location away from the acoustic space 10.
  • The camera 20 is a device capable of photographing the movement of the subject (object) H, that is, the hand of the user U (see FIG. 3).
  • The camera 20 is installed near the ceiling portion 11c, facing downward.
  • The camera 20 photographs the subject H from above and captures a moving image of the subject H.
  • The camera 20 is arranged in the central portion of the acoustic space 10 in plan view.
  • The camera 20 is supported in the air, for example by a support bracket (not shown) extending horizontally from the wall portion 11b.
  • The camera 20 is not limited to the above configuration; its orientation, installation location, installation method, and so on can be set arbitrarily. For example, the camera 20 may be installed facing upward or horizontally, fixed to the ceiling portion 11c, or suspended from the ceiling portion 11c.
  • The camera 20 continuously photographs the subject H and transmits the captured images to the information processing device 60. Specifically, the camera 20 photographs the subject H at a preset frame rate and inputs each frame as image data to the input unit 61 (see FIG. 2) of the information processing device 60.
  • The camera 20 is connected to the information processing device 60 so as to be capable of data communication, and the image data is sent from the camera 20 to the input unit 61 via a wired connection (for example, USB or Ethernet (registered trademark)) or a wireless connection (for example, various radio communications or the Internet).
  • The frame rate of the camera 20 is set according to the speed of movement of the subject H, the performance of the information processing device 60, and the like.
  • The frame rate of the camera 20 is set to 40 fps (frames per second).
  • That is, the camera 20 photographs the subject H 40 times per second and transmits the acquired frame images to the information processing device 60 at a rate of 40 images per second.
  • The frame rate of the camera 20 is not limited to 40 fps and may be set to, for example, 25 fps, 30 fps, 50 fps, or 60 fps.
  • The light source 30 is an instrument or device that emits the light that irradiates the subject H.
  • The light source 30 is arranged near the ceiling portion 11c, in the central portion of the acoustic space 10 in plan view.
  • The light source 30 is, for example, an LED spotlight.
  • The spotlight is a light that intensively illuminates a part of the acoustic space 10.
  • The light source 30 is installed facing downward and emits the light L straight down. As a result, the light L is projected onto a partial region R (see FIG. 3) of the upper surface 41 of the table 40.
  • The light source 30 is provided integrally with the camera 20.
  • The light source 30 is not limited to the above configuration; it may be, for example, an arc lamp, an incandescent lamp, a fluorescent lamp, or sunlight, or one that uniformly illuminates a wide area including the surroundings of the table 40. The direction of the emitted light L, the installation location, the installation method, and so on of the light source 30 can also be set arbitrarily: for example, the light source 30 may be installed so as to emit the light L upward or horizontally, and it may be installed separately from the camera 20.
  • The stand 40 is installed on the floor portion 11a, below the light source 30.
  • The height of the table 40 is set, for example, to a height corresponding to the waist position of the user (performer) U standing upright (see FIG. 3).
  • The table 40 is installed substantially at the center of the floor portion 11a and has a rectangular parallelepiped shape 90 cm high and 45 cm in both length and width.
  • The upper surface 41 of the table 40 is flat and is arranged so as to contain the entire region R irradiated by the light L emitted from the light source 30.
  • The table 40 is not limited to the above configuration; its shape, size, arrangement, and so on can be changed as appropriate. Whether such a table 40 is installed in the acoustic space generation device 100 at all is also optional.
  • The light L is emitted from the light source 30 straight downward; since the light L travels straight down, it illuminates, for example, a circular region R on the upper surface 41 of the table 40.
  • The substantially conical space with the light source 30 as its apex and the upper surface 41 of the table 40 as its base is thus partially illuminated by the light L within the acoustic space 10, which is otherwise kept dark.
  • In the following, the substantially conical space illuminated by the light L of the light source 30 is referred to as the light irradiation space S (see FIG. 3).
  • The first speaker 51, the second speaker 52, the third speaker 53, the fourth speaker 54, and the fifth speaker 55 have sound emitting portions 51a to 55a, respectively, each emitting sound in a predetermined direction.
  • The fifth speaker 55 is arranged in the central portion of the acoustic space 10 in plan view, on the upper side.
  • The fifth speaker 55 is provided on the upper surface of the camera 20 and is installed with its sound emitting portion 55a facing upward.
  • The first speaker 51, the second speaker 52, the third speaker 53, and the fourth speaker 54 are arranged at the bottom of the acoustic space 10, evenly spaced on the same circumference centered on the fifth speaker 55.
  • The first to fourth speakers 51 to 54 are installed at the four corners of the substantially square floor surface 11a. Each of the first to fourth speakers 51 to 54 is oriented with its sound emitting portion (51a and so on) toward the center of the acoustic space 10 and is tilted upward from the horizontal by about 5° to 25° so as to emit sound slightly upward.
  • The plurality of speakers 50 included in the acoustic space generation device 100 are not limited to the configuration of the first to fifth speakers 51 to 55 described above. That is, the number of speakers installed in the acoustic space 10, their arrangement, the orientation of their sound emitting portions 51a and so on can be changed as appropriate. Specifically, for example, the number of speakers included in the acoustic space generation device 100 may be two or more and four or less, or six or more. All of the speakers 51 to 55 constituting the plurality of speakers 50 may also be arranged at the top, at the bottom, or in the central portion of the acoustic space 10.
  • The fifth speaker 55 may be installed away from the camera 20 with its sound emitting portion 55a facing downward so as to emit sound downward, and the first to fourth speakers 51 and so on may be installed so as to emit sound horizontally. Further, although the first to fifth speakers 51 to 55 all have the same configuration here, some or all of them may have different configurations.
  • The information processing device 60 is configured by, for example, a computer.
  • The information processing device 60 is communicably connected to each of the first to fifth speakers 51 and so on and to the camera 20, via a wired or wireless connection.
  • The information processing device 60 acquires the images from the camera 20 and outputs predetermined sounds from the first to fifth speakers 51 and so on.
  • The information processing device 60 is installed outside the acoustic space 10, but may be installed inside the acoustic space 10.
  • FIG. 2 is a block diagram showing a main part of the acoustic space generation device 100. As shown in FIG. 2, the information processing device 60 includes an input unit 61, a processing unit 62, and a storage unit 63.
  • The input unit 61 acquires the images from the camera 20.
  • Specifically, the input unit 61 acquires, as image data, the successive frames captured by the camera 20.
  • The processing unit 62 performs predetermined processing based on the input image data, selects pitches according to the state of movement of the subject H, and outputs each sound of the selected pitches from the speakers 50.
  • The data processing and so on in the processing unit 62 are described later.
  • The processing unit 62 is realized by a configuration including, for example, a CPU.
  • The storage unit 63 stores the image data input from the camera 20, data generated by the processing unit 62, and the like.
  • The storage unit 63 also stores the program for executing the processing of the processing unit 62, as well as the music structure program and the spatial structure program described later.
  • The storage unit 63 is realized by, for example, a memory or a hard disk.
  • FIG. 3 is a diagram showing an example of a usage state of the acoustic space generation device 100.
  • The user U stands behind the table 40 and, with a hand H in the light irradiation space S, moves or stops the hand.
  • The acoustic space generation device 100 is used in this way.
  • For example, the user U puts both hands H, or one hand H, into the light irradiation space S and performs movements such as moving the hand vertically or horizontally, rotating it, moving the fingers, opening and closing the palm of the hand H, and temporarily stopping the movement of the hand H.
  • Such a series of movements of the hand H is photographed by the camera 20.
  • The sound generated according to the movement of the hand H is then output from the speakers 50.
  • FIG. 4 is a flowchart showing an example of the operation of the acoustic space generation device 100.
  • Here, the operation of the acoustic space generation device 100 is described with reference to the flowchart of FIG. 4.
  • The camera 20 captures a moving image of the subject (object) H (step S01), and the captured image data is transmitted to the information processing device 60 (step S02). For example, in the usage state shown in FIG. 3, the camera 20 continuously photographs the hand H in the light irradiation space S at the preset frame rate and inputs each frame as image data to the input unit 61 of the information processing device 60.
  • The processing unit 62 executes predetermined processing and operations based on the image data (step S03), and a musical sound or the like is output from the speakers 50 (step S04).
  • The sound output from the speakers 50 has a melody or syllables composed of sounds of the predetermined pitches selected by the processing unit 62.
  • If predetermined conditions are satisfied, the sound output from the speakers 50 also includes a sine wave sound and reverberation sounds in addition to such melody or syllable sounds.
  • Through the operations of steps S01 to S04 of the acoustic space generation device 100, a musical sound automatically composed according to the movement of the subject H is played from the speakers 50, whereby the acoustic space 10 is generated.
  • FIG. 5 is a flowchart showing the processing of the processing unit 62 when outputting a sound having a predetermined pitch.
  • The processing of the processing unit 62 is executed automatically, based on, for example, the program stored in the storage unit 63.
  • The processing unit 62 first acquires a frame image (step S11). Next, the processing unit 62 compares the light amount of the image data of that frame with the light amount of the image data of the frame acquired immediately before it (step S12). In step S12 the light amounts of the entire frame images are compared, but the light amounts of a predetermined partial region of the frame images may be compared instead.
  • In step S13, the processing unit 62 determines whether the difference between the light amount of the image data of the frame and that of the immediately preceding frame is equal to or greater than the threshold value, or less than it.
  • The threshold value is a predetermined light-amount value, set in advance and stored in the storage unit 63.
  • When the difference is equal to or greater than the threshold value (YES in step S13), the processing unit 62 determines that the subject H is in the "moving" state (step S14) and generates one motion detection signal (step S15). When the processing unit 62 determines that the difference in light amount is less than the threshold value (NO in step S13), it determines that the subject H is in the "stationary" state (step S24) and may generate one pause/lingering signal (step S25). A code sketch of this determination follows below.
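  • The determination in steps S11 to S15 and S24/S25 boils down to thresholding the difference in total luminance between successive frames. The following is a minimal Python sketch of that loop, not the patent's implementation: OpenCV/NumPy, the interpretation of "light amount" as the summed grayscale intensity, the threshold value, and the two signal stubs are all illustrative assumptions.

        import cv2
        import numpy as np

        LIGHT_THRESHOLD = 5000.0  # preset light-amount threshold (illustrative value)

        def on_motion_signal():
            print("motion detection signal")   # step S15

        def on_pause_signal():
            print("pause/lingering signal")    # step S25

        def run(camera_index=0):
            """Steps S11-S15 / S24-S25: classify each frame as moving or stationary."""
            cap = cv2.VideoCapture(camera_index)
            prev_light = None
            while True:
                ok, frame = cap.read()         # step S11: acquire a frame
                if not ok:
                    break
                # Treat the summed grayscale intensity of the whole frame as its "light amount".
                light = float(np.sum(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
                if prev_light is not None:     # steps S12/S13: compare with the previous frame
                    if abs(light - prev_light) >= LIGHT_THRESHOLD:
                        on_motion_signal()     # step S14: "moving" state
                    else:
                        on_pause_signal()      # step S24: "stationary" state
                prev_light = light
            cap.release()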
  • In step S16, the processing unit 62 selects the pitch of one output sound based on the preset and stored music structure program.
  • In step S16, for example, the pitch of the sound to be output is selected, based on the music structure program, from a file having the data shown in FIG. 6.
  • In the present example, one pitch is selected from the 42 pitch data (A1 to A42) shown in FIG. 6.
  • The pitch selected in step S16 may instead be selected from recorded material such as acoustic noise, rather than from the 42 pitch data (A1 to A42) shown in FIG. 6.
  • A plurality of pitches may also be selected at the same time from the 42 pitch data (A1 to A42) shown in FIG. 6.
  • In step S17, the processing unit 62 assigns a speaker to each sound of the selected pitch by means of the preset and stored spatial structure program. That is, based on the spatial structure program, any one of the five speakers 51 to 55 is assigned to each sound of the selected pitch.
  • FIG. 6 is a diagram showing an audio file of pitches.
  • FIG. 6 shows, as an example, data on 42 pitches.
  • A1 to A42 are the serial numbers of the 42 pitches; these are the pitches that can be selected in step S16.
  • The MIDI number, also called a note number, is a numerical value indicating the pitch and range in MIDI (Musical Instrument Digital Interface).
  • The polyphony is the number of sounds that can be output at the same time, and also corresponds to the number of layers. The appearance frequency of each pitch per 127 motion detection signals indicates how many times each pitch is selected when the motion detection signal is generated 127 times.
  • The output frequency of SP1/SP2/SP3/SP4/SP5 per 20 motion detection signals indicates, when the motion detection signal is generated 20 times and a sound of a predetermined pitch is accordingly selected 20 times, how many of those sounds are assigned to each of the first speaker 51 (SP1), the second speaker 52 (SP2), the third speaker 53 (SP3), the fourth speaker 54 (SP4), and the fifth speaker 55 (SP5).
  • For step S16 and step S17, numerical values for the appearance frequency of each pitch and for the frequency with which each speaker is assigned to the sound of each pitch are set in advance. In step S16 and step S17, the pitch is selected and the speaker 50 is assigned automatically according to these appearance-frequency and assignment-frequency values, as sketched below.
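  • The frequency-driven selection of steps S16 and S17 can be modeled as weighted random choice: each pitch carries an appearance weight (its count per 127 motion detection signals) and per-speaker weights (its counts per 20 signals). The sketch below is an assumption-laden illustration; the MIDI numbers and weights shown do not reproduce the actual table of FIG. 6.

        import random

        # Illustrative excerpt in the spirit of FIG. 6 (values invented):
        # appearance = selections per 127 motion detection signals,
        # speakers   = assignments per 20 motion detection signals.
        PITCHES = {
            "A1": {"midi": 57, "appearance": 6,
                   "speakers": {"SP1": 6, "SP2": 5, "SP3": 4, "SP4": 3, "SP5": 2}},
            "A2": {"midi": 60, "appearance": 3,
                   "speakers": {"SP1": 2, "SP2": 2, "SP3": 6, "SP4": 6, "SP5": 4}},
            # ... A3 to A42
        }

        def select_pitch():
            """Step S16: choose one pitch, weighted by its appearance frequency."""
            names = list(PITCHES)
            weights = [PITCHES[n]["appearance"] for n in names]
            return random.choices(names, weights=weights, k=1)[0]

        def assign_speaker(pitch):
            """Step S17: choose one of the five speakers, weighted per pitch."""
            table = PITCHES[pitch]["speakers"]
            return random.choices(list(table), weights=list(table.values()), k=1)[0]

        pitch = select_pitch()
        print(pitch, "->", assign_speaker(pitch))   # e.g. "A2 -> SP3"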
  • The processing unit 62 outputs the sound of the selected pitch from the assigned speaker 50 (step S18).
  • When a continuous sound is output, the speaker 51 and so on assigned to each sound constituting the continuous sound may be changed (for example, a different speaker 51 and so on may be assigned to each constituent sound), so that the sound output from the speakers 50 appears to move up, down, left, and right within the acoustic space 10 (see the sketch below).
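  • One simple way to realize such movement is to rotate the speaker assignment over the notes of a continuous sound. A sketch; the speaker order chosen here is an assumption, not the spatial structure program's actual path.

        from itertools import cycle

        SPEAKER_ORDER = ["SP1", "SP2", "SP3", "SP4", "SP5"]  # an assumed path around the space

        def assign_moving(notes):
            """Assign successive notes of a continuous sound to successive speakers,
            so the sound appears to travel through the acoustic space."""
            return list(zip(notes, cycle(SPEAKER_ORDER)))

        print(assign_moving(["A2", "A7", "A13", "A20", "A2", "A7"]))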
  • FIG. 7 is a diagram for explaining an example of the operation of the processing unit 62.
  • The sound of the pitch selected based on the music structure program is output from the speaker 50.
  • While no movement is detected, the speaker 50 temporarily pauses its output, so that the acoustic space 10 falls into silence or lingering sound; conversely, when the subject H keeps moving continuously, sounds of the selected pitches are output in succession, producing a texture-like sound.
  • The processing unit 62 also generates a sine wave from the speaker 50 under predetermined conditions.
  • This sine wave is output synthesized with the waveform of the sound of the predetermined pitch described above. The processing of the processing unit 62 for generating such a sine wave is described next.
  • FIG. 8 is a flowchart showing a sine wave generation process in the processing unit 62.
  • When generating a sine wave, the processing unit 62 executes the following processing. As shown in FIG. 8, it is determined whether the motion detection signal has been generated a first predetermined number of times in succession within the first unit time, that is, whether the moving state has occurred the first predetermined number of times in succession within the first unit time (step S31). When it has (YES in step S31), it is determined whether the number of such occurrences has reached the second predetermined number (step S32). When it has (YES in step S32), it is determined whether the motion detection signal is generated after a stationary state lasting at least the second unit time (step S33). When the moving state is determined to follow a stationary state of at least the second unit time (YES in step S33), and furthermore a specific speaker 51 or the like among the plurality of speakers 50 is assigned to the predetermined sound (YES in step S34), a sine wave is generated from that specific speaker 51 or the like (step S35).
  • The stationary state may be recognized from the absence of the motion detection signal, or by detecting the pause/lingering signal. A sketch of this burst-counting condition follows below.
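  • The burst counting of steps S31 to S34 can be tracked with a small state object. The sketch below is one possible reading of the condition, with the predetermined counts and unit times as parameters (concrete example values follow in the next paragraphs); for simplicity a burst is taken to be a fixed number of signals rather than the 1-to-30 range used in the example.

        import time

        class BurstGate:
            """Counts bursts of motion detection signals (steps S31-S34).

            A burst = `burst_len` consecutive signals, each arriving within
            `first_unit` seconds of the previous one."""

            def __init__(self, first_unit=0.18, burst_len=10, burst_target=10, rest_unit=0.5):
                self.first_unit = first_unit      # first unit time (s), e.g. 180 ms
                self.burst_len = burst_len        # signals per burst (simplified)
                self.burst_target = burst_target  # second predetermined number
                self.rest_unit = rest_unit        # second unit time (s), e.g. 500 ms
                self.last_signal = None
                self.run_length = 0
                self.bursts = 0

            def on_motion_signal(self):
                """Return True when the step S33 condition fires on this signal."""
                now = time.monotonic()
                fired = False
                if self.last_signal is None or now - self.last_signal <= self.first_unit:
                    self.run_length += 1          # still inside the current burst
                else:
                    # The run was broken; a long enough gap counts as the rest of step S33.
                    fired = (self.bursts >= self.burst_target
                             and now - self.last_signal >= self.rest_unit)
                    if fired:
                        self.bursts = 0
                    self.run_length = 1
                if self.run_length >= self.burst_len:
                    self.bursts += 1              # one burst completed (steps S31/S32)
                    self.run_length = 0
                self.last_signal = now
                return fired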
  • FIG. 9 is a diagram for explaining an example of the operation of the processing unit 62.
  • The first unit time is set to, for example, 180 ms; the first predetermined number of times to, for example, 1 to 30; the second unit time to, for example, 500 ms; and the second predetermined number of times to, for example, 10. These set values are stored in the storage unit 63.
  • When the processing unit 62 has generated motion detection signals 1 to 30 times in succession at intervals of 180 ms or less, the total number of such cycles has reached 10, and a motion detection signal is then detected after a stationary state of 500 ms or more, the fifth speaker 55, for example, is assigned, and the first sine wave is generated from the fifth speaker 55 for 60 to 90 seconds. Furthermore, when the pitch selected upon generation of the motion detection signal in step S33 is any of the pitches A19 to A25 among the 42 pitch data of FIG. 6, the processing unit 62 generates a second sine wave in addition to the first sine wave.
  • The second sine wave is a combination of the third sine wave and the fourth sine wave, described below, shaped by a 10-second volume curve.
  • The third sine wave is a sine wave with a frequency two octaves above the pitch selected upon generation of the motion detection signal in step S33.
  • The fourth sine wave is a sine wave obtained by adding a frequency of 0.5 to 11 Hz to the third sine wave.
  • The second sine wave is output from, for example, the fifth speaker 55 for 10 seconds.
  • The target value is reached using time interpolation of 1500 to 4000 ms. A sketch of the second sine wave follows below.
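  • In effect, the second sine wave is a pair of sine tones, one two octaves above the selected pitch (the third sine wave) and one offset from it by 0.5 to 11 Hz (the fourth sine wave), whose sum beats slowly under a volume curve. A NumPy sketch under stated assumptions: the sample rate, the MIDI-to-frequency conversion, and the 1.5 s linear ramp (loosely echoing the 1500 to 4000 ms interpolation mentioned above) are illustrative.

        import numpy as np

        SAMPLE_RATE = 44100  # assumption

        def midi_to_hz(note):
            return 440.0 * 2.0 ** ((note - 69) / 12.0)

        def second_sine_wave(midi_note, beat_hz=4.0, seconds=10.0):
            """Third sine = two octaves above the selected pitch; fourth sine = third
            plus 0.5 to 11 Hz. Their sum beats at beat_hz; a volume curve shapes it."""
            t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
            f3 = midi_to_hz(midi_note + 24)   # two octaves up
            f4 = f3 + beat_hz                 # 0.5-11 Hz offset -> audible beating
            third = np.sin(2 * np.pi * f3 * t)
            fourth = np.sin(2 * np.pi * f4 * t)
            # Simple 1.5 s ramp in and out as the 10 s volume curve.
            envelope = np.minimum(t / 1.5, 1.0) * np.minimum((seconds - t) / 1.5, 1.0)
            return 0.5 * (third + fourth) * envelope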
  • The processing unit 62 also outputs the first reverberation sound from the speaker 50 under predetermined conditions. The processing of the processing unit 62 for outputting the first reverberation sound is therefore described next.
  • FIG. 10 is a flowchart showing the output processing of the first reverberation sound.
  • When outputting the first reverberation sound, the processing unit 62 executes the following processing. As shown in FIG. 10, it is determined whether the motion detection signal has been generated the first predetermined number of times in succession within the first unit time (step S41). When it has (YES in step S41), it is determined whether the number of such occurrences has reached the third predetermined number (step S42). When it has (YES in step S42), it is then determined whether the motion detection signal is generated after a stationary state lasting at least the second unit time (step S43). When it is (YES in step S43), and furthermore a specific speaker among the plurality of speakers 50 is assigned to the selected sound (YES in step S44), the first reverberation sound is output from that specific speaker 50 (step S45).
  • FIG. 11 is a diagram for explaining an example of the operation of the processing unit 62.
  • The first unit time, the first predetermined number of times, and the second unit time are set in advance, for example to the values given above.
  • The third predetermined number of times is set to, for example, 3 to 5.
  • These set values are stored in the storage unit 63.
  • The processing unit 62 counts each period in which the motion detection signal is generated 1 to 30 times in succession at intervals of 180 ms or less as one; when the count reaches 3 to 5 and a motion detection signal is then detected after a further stationary state of 500 ms or more, the fifth speaker 55, for example, is assigned, and the first reverberation sound is output from the fifth speaker 55 for 60 to 90 seconds. At this time, if the fifth speaker 55 is the speaker assigned to the sound of the pitch selected upon the motion detection signal following the stationary state of 500 ms or more, the first reverberation sound is output from the fifth speaker 55 combined with that pitch sound.
  • The processing unit 62 also outputs the second reverberation sound from the speaker 50 under predetermined conditions. The processing of the processing unit 62 for outputting the second reverberation sound is therefore described next.
  • FIG. 12 is a flowchart showing the output processing of the second reverberation sound.
  • When outputting the second reverberation sound, the processing unit 62 executes the following processing. As shown in FIG. 12, it is determined whether the motion detection signal has been generated a fourth predetermined number of times in succession within the first unit time (step S51). When it has (YES in step S51), and a specific speaker among the plurality of speakers 50 is assigned to the selected sound (YES in step S52), the second reverberation sound is output from that specific speaker 50 (step S53).
  • FIG. 13 is a diagram for explaining an example of the operation of the processing unit 62.
  • The first unit time is set, for example, to the value given above.
  • The fourth predetermined number of times is set to, for example, 30.
  • These set values are stored in the storage unit 63.
  • The processing unit 62 outputs the second reverberation sound from the speakers 50 when the motion detection signal is generated 30 times in succession at intervals of 180 ms or less.
  • The processing unit 62 synthesizes the second reverberation sound with, for example, the sounds of the pitches A1 to A18 and A70 to A79 among the pitch data shown in FIG. 6.
  • The processing unit 62 outputs the second reverberation sound from, for example, the first speaker 51 and the second speaker 52, and then, after one second of acoustic movement, also from the third speaker 53 and the fourth speaker 54. The wiring of the three triggers is sketched below.
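  • The three triggers can be wired together on top of the BurstGate sketch given earlier; the burst targets, speaker names, and output stubs below are illustrative assumptions, and the requirement that the 30 consecutive signals fall within the first unit time is simplified to a plain run count reset by pauses.

        # Builds on the BurstGate class from the earlier sketch.
        sine_gate = BurstGate(burst_target=10)    # second predetermined number: e.g. 10
        reverb1_gate = BurstGate(burst_target=4)  # third predetermined number: e.g. 3 to 5
        FOURTH_COUNT = 30                         # fourth predetermined number: e.g. 30
        consecutive = 0                           # run length used for the second reverberation

        def generate_sine_wave(speaker):
            print("sine wave from", speaker)              # step S35

        def output_first_reverberation(speaker):
            print("first reverberation from", speaker)    # step S45

        def output_second_reverberation():
            print("second reverberation, speakers in a predetermined order")  # step S53

        def handle_motion_signal(assigned_speaker, specific_speaker="SP5"):
            """Feed one motion detection signal to all three triggers."""
            global consecutive
            consecutive += 1
            if consecutive >= FOURTH_COUNT:               # steps S51 to S53
                output_second_reverberation()
                consecutive = 0
            if sine_gate.on_motion_signal() and assigned_speaker == specific_speaker:
                generate_sine_wave(specific_speaker)      # steps S31 to S35
            if reverb1_gate.on_motion_signal() and assigned_speaker == specific_speaker:
                output_first_reverberation(specific_speaker)  # steps S41 to S45

        def handle_pause_signal():
            """A pause/lingering signal breaks the consecutive run."""
            global consecutive
            consecutive = 0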
  • In this way, the processing unit 62 outputs sounds of predetermined pitches from the speakers 50 through the processing of steps S11 to S18, generates a sine wave from the speaker 50 through steps S31 to S35, outputs the first reverberation sound from the speaker 50 through steps S41 to S45, and outputs the second reverberation sound from the speakers 50 through steps S51 to S53.
  • As described above, the acoustic space generation device 100 generates the acoustic space 10 from the motion detection signals derived from the input of the input unit 61 (with the processing unit 62 determining, through the sensor input, the repetition of "movement/rest", that is, the time variation of movement and rest), via the music structure program and the spatial structure program implemented in the storage unit 63.
  • That is, the states of motion and rest are each determined from the movement of the subject H, and a sound is selected according to the frequency, period, number of occurrences, and so on of each state and output as a musical sound.
  • Therefore, according to the acoustic space generation device 100, musical sounds following the manner of the object's movement can be played easily, and an acoustic space 10 in which such musical sounds are played can be provided easily. Furthermore, according to the acoustic space generation device 100, musical sounds or sounds of new syllables generated from the repeated alternation of the object's motion and rest can be played. In addition, under predetermined conditions, an acoustic space 10 can be provided that plays distinctive musical sounds, including sounds synthesized with a sine wave, the first reverberation sound, the second reverberation sound, and so on.
  • In the acoustic space generation device 100, since the moving subject H is irradiated with the light L, the user U can appreciate improvised music while observing the movement of the subject H illuminated by the light L. The acoustic space generation device 100 can thereby provide an acoustic space 10 with a new kind of entertainment in which the user U enjoys improvised music together with visual changes. Moreover, the acoustic space generation device 100 lets the user U experience or hear sounds, generated from states of movement and rest, related to the body's breathing in contexts such as physical expression, yoga, and welfare; it can therefore contribute not only to the experience of or listening to music but also to mental well-being.
  • The acoustic space generation device 100 described above includes the camera 20 for photographing the subject and the light source 30 for irradiating the subject H. However, instead of the camera 20 and the light source 30, the device may be configured with various other sensors capable of detecting the state of an object. The configuration of the acoustic space generation device according to such a modification is described specifically below.
  • As the sensor capable of detecting the state of an object, for example, a distance sensor capable of detecting the distance to the object can be used.
  • The camera 20 is itself one such sensor capable of detecting the movement of an object.
  • The operations of the input unit and the processing unit of the acoustic space generation device according to this modification are the same as those of the input unit 61 and the processing unit 62 described above.
  • The input unit according to the modification continuously acquires from the sensor, for example, signals corresponding to the distance from the sensor to the object. The processing unit of the modification compares each acquired signal with the signal acquired immediately before it and, if the difference is equal to or greater than the threshold value, determines that the object is in a moving state (see the sketch below).
  • The threshold value here is a value relating to the signal and is set in advance.
  • The subsequent processing of this processing unit is the same as that of the processing unit 62 described above.
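  • In this modification the frame comparison is simply replaced by a comparison of successive sensor readings. A minimal sketch; the threshold value and its unit are assumptions.

        SIGNAL_THRESHOLD = 2.0  # preset signal threshold (illustrative; e.g. a distance change)

        def classify(previous_reading, reading):
            """Same determination as steps S12/S13, applied to generic sensor signals."""
            if abs(reading - previous_reading) >= SIGNAL_THRESHOLD:
                return "moving"       # would generate a motion detection signal
            return "stationary"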
  • That is, the acoustic space generation device according to the modification comprises a sensor capable of detecting the state of an object, an input unit that acquires a signal from the sensor, a processing unit that selects the sound to be generated according to the signal, and a plurality of speakers 50 that output the sound selected by the processing unit. The processing unit compares a signal with the immediately preceding signal, determines, based on the preset signal threshold described above, whether the object is in a moving state or a stationary state, and, when it determines that the object is moving, generates a motion detection signal, selects the pitch to be output according to the pre-stored music structure program, assigns one of the plurality of speakers 50 to each sound of the selected pitch according to the pre-stored spatial structure program, and outputs the sound from the assigned speaker.
  • The other configurations of the acoustic space generation device according to the modification are the same as those of the acoustic space generation device 100 according to the embodiment described above.
  • The processing unit of the acoustic space generation device according to the modification likewise outputs a sine wave, the first reverberation sound, and the second reverberation sound from the plurality of speakers 50 according to the frequency, period, number of occurrences, and so on of each state.
  • According to the modification, the acoustic space generation device can be realized with a simpler configuration. For example, it can be built from a small PC with a video camera function, which makes it easy to apply the device to music teaching materials for children.
  • The acoustic space generation device 100 can also have the following configuration. Specifically, for example, the processing unit may determine whether the object is in a moving state or a stationary state, based on a preset signal threshold, from whether a cell in which a sensor is mounted is being stepped on or the foot has been taken off it, or from whether a buzzer switch is ON or OFF. That is, for example, using a sensor capable of detecting contact with a foot, different signals are produced when the foot is in contact with the cell and when it is separated from it.
  • The processing unit compares the signal acquired from the sensor with the signal acquired immediately before and determines whether the difference is equal to or greater than the threshold value (for example, 1 or more). The processing unit thereby determines whether the object is in a moving state (a state in which the movement has changed) or a stationary state (a state in which the movement is unchanged) (for example, the moving state when the above difference is 1).
  • When the processing unit determines that the object is in a moving state, it generates a motion detection signal, selects the pitch to be output according to the pre-stored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch according to the pre-stored spatial structure program, and outputs the sound from the assigned speaker. In addition to the sound of that pitch, a sine wave, the first reverberation sound, and the second reverberation sound are output from the plurality of speakers 50 according to the frequency, period, number of occurrences, and so on of each state.
  • Alternatively, the processing unit may determine whether the object is in a moving state or a stationary state by detecting, based on a preset signal threshold, the presence or absence of an attack (that is, a sharp onset or rise of a sound) in input voice information (for example, the voice input to a microphone). That is, for example, the acoustic space generation device 100 may include a microphone as a voice recognition device for picking up the sound of an object, and the processing unit may compare the voice signal acquired from the microphone serving as the sensor with the voice signal acquired immediately before it, determine whether the difference (for example, the difference in volume) is equal to or greater than a preset threshold value, and thereby determine whether the object is in a moving state or a stationary state (a sketch follows below).
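  • Attack detection over microphone input can likewise be framed as a difference of successive levels: compare the volume of short windows and flag a moving state when the rise exceeds a preset threshold. A NumPy sketch; the window length and threshold are illustrative assumptions.

        import numpy as np

        ATTACK_THRESHOLD = 0.1  # preset volume-difference threshold (illustrative)

        def detect_attacks(samples, sample_rate=44100, window_ms=20):
            """Yield a moving/stationary decision per short window of microphone input,
            flagging an attack when the RMS volume rises sharply over the previous window."""
            hop = int(sample_rate * window_ms / 1000)
            prev_rms = None
            for start in range(0, len(samples) - hop + 1, hop):
                rms = float(np.sqrt(np.mean(samples[start:start + hop] ** 2)))
                if prev_rms is not None:
                    yield "moving" if rms - prev_rms >= ATTACK_THRESHOLD else "stationary"
                prev_rms = rms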
  • The object whose movement is detected by a sensor such as the camera is not limited to the hand H; it may be, for example, a person's foot or whole body, an object held by a person (for example, a stick), the shadow of a leaf, a wave, or the like.
  • The plurality of speakers 50 included in the acoustic space generation device 100 may consist of a plurality of stationary speakers 51 and so on as in the embodiment, or may be headphones or binaural earphones equipped with a pair of speakers that output sound directly to both ears of the user U.
  • The acoustic space 10 generated by the acoustic space generation device 100 is a real space, but it may be a virtual space. That is, for example, when headphones or binaural earphones are used as the speakers 50 of the acoustic space generation device 100, the acoustic space generation device 100 provides a virtual acoustic space to the user U wearing them.
  • Further, while the embodiment and modification described above use a sensor capable of detecting the movement of an object (subject) to discriminate between a moving state (motion) and a stationary state (stop), the acoustic space generation device of the present invention may instead perform such discrimination on the motion of an object in a specific fictitious place in virtual reality (VR) or in a game (for example, in an OpenGL® drawing) and generate the corresponding sound.
  • The acoustic space generation device 100 may store in the storage unit 63 a plurality of music structure programs having algorithms with different conditions (threshold values and the like), and the processing of step S16 above may then be performed with a music structure program selected as appropriate from among them, or with a plurality of music structure programs executed at the same time.
  • The plurality of music structure programs are stored in the storage unit 63 divided into different layers, for example (see the sketch below). This makes it possible to select a music structure program suited to the usage environment of the acoustic space generation device 100 and to the number of installation sites, and makes it easy to improve or add music structure programs.
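  • Running several music structure programs as selectable layers might look like the following sketch; the layer names, their pitch pools, and the rule for combining them are invented for illustration and are not taken from the patent.

        import random

        # Each "music structure program" is modeled as a callable mapping a
        # motion-signal count to a pitch choice under its own conditions.
        def layer_sparse(signal_count):
            return random.choice(["A1", "A5", "A9"])       # invented pitch pool

        def layer_dense(signal_count):
            return random.choice(["A19", "A22", "A25"])    # invented pitch pool

        LAYERS = {"sparse": layer_sparse, "dense": layer_dense}
        ACTIVE = ["sparse", "dense"]   # several layers may run at the same time

        def select_pitches(signal_count):
            """Step S16 across all active layers: one pitch per layer."""
            return [LAYERS[name](signal_count) for name in ACTIVE]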
  • The acoustic space generation device of the present invention may also be realized as, for example: a system configuration in which many speakers 51 and the like are randomly installed on tree branches in a natural environment; a product configuration in which multiple ultra-small speakers 51 and sensors are studded over a flat surface (for example, a painting, a book, or a wall surface, such as a rectangular wall surface whose vertical and horizontal dimensions are 10 cm and 5 m); or an environmental-system configuration in which sensors and speakers 51 and the like are mounted on multiple streetlights that also serve as light sources 30. Furthermore, the acoustic space generation device of the present invention may constitute a complex improvisational ensemble device or instrument.
  • The acoustic space generation device in such a case may, for example, change or reassign the 42 numbers (A1 to A42) to which the pitch files shown in FIG. 6 are assigned to other recorded material, or associate them with the operation of other music structure programs (layers) mounted in the storage unit 63. That is, for example, when the pitch number "A2" is selected in step S16, a noise recording and the sound picked up by a microphone may be played in real time in the acoustic space, after which the pickup output of the microphone for "A2" and the like may be turned off and another music structure program / layer operated at the same time.
  • Although the acoustic space generation device 100 of the above embodiment includes a plurality of speakers 50, the number of speakers 51 and the like may be one.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

[Problem] To provide an apparatus capable of more easily providing an improvised performance based on a user's motion. [Solution] An acoustic space creation apparatus provided with: a camera 20 for imaging an object; a light source 30 for irradiating the object with light; an input unit 61 for acquiring an image from the camera; a processing unit 62 that selects a sound to be generated in accordance with each frame of a video; and a plurality of speakers 50 that output the sound selected by the processing unit. The processing unit compares the quantity of light in a frame with the quantity of light in the immediately preceding frame to determine whether the object is in motion or stationary on the basis of a predetermined threshold for the light quantity. When determining that the object is in motion, the processing unit: generates a motion detection signal; selects a musical pitch to be output in accordance with a pre-stored musical structure program; allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program; and causes the allocated speaker to output a sound. When a state of motion occurs consecutively a first predetermined number of times during a first unit time and the number of times of the consecutive occurrences for the first predetermined number of times reaches a second predetermined number of times, and the processing unit determines that the object is in motion after being stationary during a second unit time or greater, with a specific speaker among the plurality of speakers allocated to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program. When a state of motion occurs consecutively the first predetermined number of times during the first unit time and the number of times of consecutive occurrences for the first predetermined number of times reaches a third predetermined number of times, and the processing unit determines that the object is in motion after being stationary during the second unit time or greater, with a specific speaker allocated to a selected sound, the processing unit causes the specific speaker to output a first reverberation in accordance with the musical structure program. When a state of motion occurs consecutively a fourth predetermined number of times during the first unit time, the processing unit causes a speaker to output a second reverberation in a predetermined order in accordance with the musical structure program.

Description

音響空間生成装置Acoustic space generator
 本発明は、音響空間生成装置に関する。 The present invention relates to an acoustic space generator.
 カメラの前で人が動作することにより、その動作に応じた楽音データを自動的に作成し、それを演奏することのできる楽音生成装置が知られている。例えば下記特許文献1には、カメラの被写体である人の運動に反応して、その動きの位置と変化量に基づいて楽音データを作成し、これを音として出力する楽音生成装置の構成が開示されている。この楽音生成装置によれば、演奏者は、音楽を自動演奏する装置の操作や楽器の演奏をするための知識や技術を習得する必要がなく、カメラの前で運動するだけで、即興的な演奏を容易に行うことができる。 There is known a musical tone generator that can automatically create musical tone data according to the motion of a person in front of the camera and play it. For example, Patent Document 1 below discloses a configuration of a musical tone generator that creates musical tone data based on the position and amount of change of the motion of a person who is the subject of the camera and outputs the music as sound. Has been done. According to this music generator, the performer does not need to acquire the knowledge and skills to operate the device that automatically plays music and to play the musical instrument, and the performer can improvise by simply exercising in front of the camera. The performance can be easily performed.
特許第3643829号公報Japanese Patent No. 3643829
 上述したように被写体の動きと反応する音を出力する装置が知られているが、被写体の動きについて従来技術とは異なる観点から認識し、これにより生じる新たな楽音を奏でることのできる装置があれば、被写体の動きに応じて生成される楽音をより一層楽しむことができる。このような点に鑑み、本発明では、物体の動きの状態(例えば演奏者の動作)に応じて生成される楽音をより一層楽しむことができるように、物体の動きの状態を従来技術とは異なる観点から認識して、これにより生じる新たな楽音を奏でる装置を提供することを主たる目的とする。 As described above, a device that outputs a sound that reacts with the movement of the subject is known, but there is a device that can recognize the movement of the subject from a viewpoint different from the conventional technology and play a new musical tone generated by the recognition. For example, the musical sound generated according to the movement of the subject can be further enjoyed. In view of these points, in the present invention, the state of movement of the object is different from that of the prior art so that the musical sound generated according to the state of movement of the object (for example, the movement of the performer) can be further enjoyed. The main purpose is to provide a device that recognizes from different viewpoints and plays a new musical tone generated by this.
Furthermore, the musical tone generation device of Patent Document 1 must both identify the position of the movement within the camera's subject and calculate the amount of change of that movement. The device therefore risks a heavy data-processing load during a performance. In view of this, a further object of the present invention is to provide a device that can more easily give an improvised performance based on the state of an object's movement.
The present invention focuses on the simple actions of "moving" and "stopping", and was conceived with the aim of turning the temporal sense (intervals) of various actions, such as physical expression, breathing techniques, "pauses" (ma), and the ON/OFF of things, into sounds or syllables. The present invention therefore provides a device that focuses mainly on the repeated alternation of an object's "motion (movement)" and "stillness" and, based on this, generates and organizes melodies in a specific scale, syllables, and an acoustic space.
That is, the present invention is an acoustic space generation device comprising: a camera that photographs an object; a light source that irradiates the object with light; an input unit that acquires images from the camera; a processing unit that selects the sound to be generated according to a frame of the video; and a plurality of speakers that output the sound selected by the processing unit. The processing unit compares the light quantity of a frame with the light quantity of the immediately preceding frame and determines, based on a preset light-quantity threshold, whether the object is in a state of motion or a state of stillness. When the object is determined to be in motion, the processing unit generates a motion detection signal, selects the pitch to be output according to a pre-stored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch according to a pre-stored spatial structure program, and causes the assigned speaker to output the sound. When a state of motion has occurred a first predetermined number of times in succession within a first unit time, the number of such runs has reached a second predetermined number, the object is determined to be in motion after having been still for a second unit time or longer, and a specific one of the plurality of speakers has been assigned to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave according to the music structure program. When a state of motion has occurred the first predetermined number of times in succession within the first unit time, the number of such runs has reached a third predetermined number, the object is determined to be in motion after having been still for the second unit time or longer, and a specific speaker has been assigned to the selected sound, the processing unit causes the specific speaker to output a first reverberation according to the music structure program. When a state of motion occurs a fourth predetermined number of times in succession within the first unit time, the processing unit causes the speakers to output a second reverberation in a predetermined order according to the music structure program.
The present invention is also an acoustic space generation device comprising: a sensor capable of detecting the state of an object; an input unit that acquires signals from the sensor; a processing unit that selects the sound to be generated according to the signals; and a plurality of speakers that output the sound selected by the processing unit. The processing unit compares a signal with the immediately preceding signal and determines, based on a preset signal threshold, whether the object is in a state of motion or a state of stillness. When the object is determined to be in motion, the processing unit generates a motion detection signal, selects the pitch to be output according to a pre-stored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch according to a pre-stored spatial structure program, and causes the assigned speaker to output the sound. As in the camera-based device above, when a state of motion has occurred a first predetermined number of times in succession within a first unit time, the number of such runs has reached a second predetermined number, the object is determined to be in motion after having been still for a second unit time or longer, and a specific speaker has been assigned to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave according to the music structure program; when the number of such runs has instead reached a third predetermined number, with the corresponding stillness and assignment conditions met, the processing unit causes the specific speaker to output a first reverberation; and when a state of motion occurs a fourth predetermined number of times in succession within the first unit time, the processing unit causes the speakers to output a second reverberation in a predetermined order.
In the acoustic space generation device of the present invention, the states of motion and stillness of an object (for example, a camera subject) are determined from its movement, sounds are selected according to the frequency, period, number of occurrences, and so on of each state, and the selected sounds are output as musical tones. The present invention therefore makes it possible to easily play musical tones that correspond to the manner of the object's movement, and to easily provide an acoustic space in which such tones are played. The present invention can also play the musical tones and sounds of new syllables generated from the repeated alternation of the object's motion and stillness.
FIG. 1 is a schematic diagram showing an example of the acoustic space generation device according to the embodiment, in which (a) is a view from the front side and (b) is a view from above.
FIG. 2 is a block diagram showing the main part of the acoustic space generation device of FIG. 1.
FIG. 3 is a diagram showing an example of the acoustic space generation device of FIG. 1 in use.
FIG. 4 is a flowchart showing an example of the operation of the acoustic space generation device of FIG. 1.
FIG. 5 is a flowchart showing the processing of the processing unit when outputting a sound of a predetermined pitch.
FIG. 6 is a diagram showing an audio file of pitches.
FIG. 7 is a diagram for explaining an example of the operation of the processing unit.
FIG. 8 is a flowchart showing the sine wave generation processing.
FIG. 9 is a diagram for explaining an example of the operation of the processing unit.
FIG. 10 is a flowchart showing the output processing of the first reverberation.
FIG. 11 is a diagram for explaining an example of the operation of the processing unit.
FIG. 12 is a flowchart showing the output processing of the second reverberation.
FIG. 13 is a diagram for explaining an example of the operation of the processing unit.
Hereinafter, embodiments of the present invention will be described with reference to the drawings; however, the present invention is not limited to them. In the drawings, the scale is changed as appropriate to explain the embodiment, for example by enlarging or emphasizing parts. FIG. 1 schematically shows an example of the configuration of the acoustic space generation device 100 according to the embodiment; (a) is a view from the front side and (b) is a view from above. FIG. 1(a) shows the device with the near-side wall portion 11b rendered transparent, and FIG. 1(b) shows it with the ceiling portion 11c rendered transparent.
The acoustic space generation device 100 according to the embodiment is a device that generates melodies and syllables in a specific scale and creates the acoustic space 10 in which they are played. In the embodiment, the acoustic space 10 is a space in which music is played.
In the embodiment, the acoustic space 10 is the internal space of a structure 11. The structure 11 is a container-like structure formed to be hollow and movable. As shown in FIG. 1, the structure 11 includes, for example, a floor surface 11a that is substantially square in plan view, wall surfaces 11b rising from each side of the floor surface 11a, and a ceiling surface 11c arranged above the floor surface 11a so as to close off the internal space. The acoustic space 10 is the space defined by the floor surface 11a, wall surfaces 11b, and ceiling surface 11c of the structure 11. The acoustic space 10 is not limited to the internal space of a container-like structure such as the structure 11; any space through which sound can propagate is applicable. That is, the acoustic space 10 may be, for example, the internal space of a building such as a concert hall or an event hall, or an underground space. Nor is the acoustic space 10 limited to a closed space; it may be, for example, the space above an outdoor stage or at ground level. In that case, the camera 20, the light source 30, the table 40, the plurality of speakers 50, and so on, described later, are installed on, for example, the outdoor stage or the ground.
The acoustic space generation device 100 includes the camera 20, the light source 30, the table 40, the plurality of speakers 50, and an information processing device 60 (see FIG. 2). The camera 20, the light source 30, and the table 40 are installed inside the acoustic space 10, but they may instead be installed outside it. For example, the camera 20 and the like may be installed at a location away from the acoustic space 10.
The camera 20 is a device capable of photographing the movement of a subject (object) H, here the hand of a user U (see FIG. 3). The camera 20 is installed near the ceiling portion 11c, facing downward. It photographs the subject H from above and captures a moving image of the subject H. The camera 20 is arranged at the central portion of the acoustic space 10 in plan view and is supported in mid-air, for example by a support bracket (not shown) extending horizontally from the wall portion 11b. The camera 20 is not limited to this configuration; its orientation, installation location, installation method, and so on can be set arbitrarily. For example, the camera 20 may be installed facing upward or horizontally, may be fixed to the ceiling portion 11c, or may be suspended from the ceiling portion 11c.
The camera 20 continuously images the subject H and transmits the captured images to the information processing device 60. Specifically, the camera 20 photographs the subject H at a preset frame rate and inputs each frame as image data to the input unit 61 (see FIG. 2) of the information processing device 60. The camera 20 is connected to the information processing device 60 so as to be capable of data communication, and the image data is transmitted from the camera 20 to the input unit 61 by wire (for example, USB or Ethernet (registered trademark)) or wirelessly (for example, various radio communications or the Internet). The frame rate of the camera 20 is set according to the speed of the subject H's movement, the performance of the information processing device 60, and so on. In the embodiment, the frame rate of the camera 20 is set to 40 fps (frames per second). In this case, the camera 20 photographs the subject H 40 times per second and transmits the acquired frame images to the information processing device 60 at a rate of 40 frames per second. The frame rate of the camera 20 is not limited to 40 fps and may be set to, for example, 25 fps, 30 fps, 50 fps, or 60 fps.
The light source 30 is an instrument or device that emits the light with which the subject H is irradiated. The light source 30 is arranged inside the acoustic space 10 near the ceiling portion 11c at the center in plan view. The light source 30 is, for example, an LED spotlight, a spotlight being a light that intensively illuminates part of the acoustic space 10. The light source 30 is installed facing downward and emits light L straight down, so that the light L is projected onto a partial region R (see FIG. 3) of the upper surface 41 of the table 40. The light source 30 is provided integrally with the camera 20. The light source 30 is not limited to this configuration; it may be, for example, an arc lamp, an incandescent bulb, a fluorescent lamp, or sunlight, or one that uniformly illuminates a wide area including the surroundings of the table 40. The direction of the emitted light L, the installation location, the installation method, and so on can also be set arbitrarily; for example, the light source 30 may be installed so as to emit the light L upward or horizontally, and it may be installed separately from the camera 20.
The table 40 is installed on the floor portion 11a below the light source 30. The height of the table 40 is set, for example, to a height corresponding to the waist position of a user (performer) U standing upright (see FIG. 3). The table 40 is installed at approximately the center of the floor portion 11a and is a rectangular parallelepiped 90 centimeters (cm) high with a length and width of 45 cm each. The upper surface 41 of the table 40 is flat and arranged so as to contain the entire region R irradiated by the emitted light L of the light source 30. The table 40 is not limited to this configuration; its shape, size, arrangement, and so on can be changed as appropriate. Whether such a table 40 is installed in the acoustic space generation device 100 is optional.
As described above, the light L is emitted straight down from the light source 30. Travelling downward, the light L emitted from the light source 30 illuminates, for example, a circular region R on the upper surface 41 of the table 40. A substantially conical space, with the light source 30 at its apex and the upper surface 41 of the table 40 as its base, is thus partially lit by the light L within the acoustic space 10, which is otherwise kept dark. This substantially conical space lit by the light L of the light source 30 is referred to as the light irradiation space S (see FIG. 3).
Five speakers 50 are installed in the acoustic space 10. These five speakers 50 output sound based on the music data generated by the information processing device 60. The first speaker 51, second speaker 52, third speaker 53, fourth speaker 54, and fifth speaker 55 have sound emitting portions 51a, 52a, 55a that each emit sound in a predetermined direction.
The fifth speaker 55 is arranged at the center in plan view and at the upper side of the acoustic space 10. The fifth speaker 55 is provided on the upper surface of the camera 20 and is installed with its sound emitting portion 55a facing upward.
The first speaker 51, second speaker 52, third speaker 53, and fourth speaker 54, on the other hand, are arranged at the bottom of the acoustic space 10, spaced at equal intervals on a circle centered on the fifth speaker 55. The first to fourth speakers 51 to 54 are installed at the four corners of the substantially square floor surface 11a. Each of the first to fourth speakers 51 to 54 is installed with its sound emitting portion 51a, 52a, 55a facing toward the center of the acoustic space 10 and tilted upward from the horizontal by about 5° to 25°, so as to emit sound slightly upward.
The plurality of speakers 50 provided in the acoustic space generation device 100 is not limited to the configuration of the first to fifth speakers 51 to 55 described above. The number of speakers installed in the acoustic space 10, the arrangement of each speaker, the orientation of the sound emitting portions, and so on can be changed as appropriate. Specifically, for example, the number of speakers provided in the acoustic space generation device 100 may be from two to four, or six or more. All of the speakers 51 to 55 constituting the five speakers 50 may be arranged at the top of the acoustic space 10, at the bottom, or so as to surround its central portion. The fifth speaker 55 may, for example, be installed apart from the camera 20 with its sound emitting portion 55a facing downward so as to emit sound downward, and the first to fourth speakers 51 to 54 may be installed so as to emit sound horizontally. Further, although the first to fifth speakers 51 to 55 are all speakers of the same configuration, some or all of them may be speakers of different configurations.
The information processing device 60 is constituted by, for example, a computer. The information processing device 60 is communicably connected to each of the first to fifth speakers 51 to 55 and to the camera 20 by wired or wireless communication. The information processing device 60 acquires the video from the camera 20 and causes the first to fifth speakers 51 to 55 to output predetermined sounds. The information processing device 60 is installed outside the acoustic space 10, but it may instead be installed inside. FIG. 2 is a block diagram showing the main part of the acoustic space generation device 100. As shown in FIG. 2, the information processing device 60 has an input unit 61, a processing unit 62, and a storage unit 63.
The input unit 61 acquires the video from the camera 20, taking in the frames captured by the camera 20 as image data. The processing unit 62 performs predetermined processing based on the input image data, selects pitches according to the state of the subject H's movement, and causes the speakers 50 to output each sound of the selected pitches. The data processing in the processing unit 62 is described later. The processing unit 62 is realized by a configuration including, for example, a CPU. The storage unit 63 stores the image data input from the camera 20, the data generated by the processing unit 62, and the like. The storage unit 63 also stores the program for executing the processing of the processing unit 62, as well as the music structure program and spatial structure program described later. The storage unit 63 is realized by, for example, a memory or a hard disk.
Next, how the acoustic space generation device 100 is used will be described. FIG. 3 shows an example of the device in use. As shown in FIG. 3, the user U stands at the back side of the table 40 and, with a hand H placed in the light irradiation space S, moves and stops the hand; this is how the acoustic space generation device 100 is used. The user U puts both hands or one hand H into the light irradiation space S and, for example, moves the hand vertically or horizontally, rotates it, moves the fingers, opens and closes the palm, or temporarily stops the hand's movement. This series of hand movements is photographed by the camera 20, and the speakers 50 output the sound generated according to the movement of the hand H.
Next, the operation of the acoustic space generation device 100 will be described. FIG. 4 is a flowchart showing an example of the operation of the acoustic space generation device 100, and the operation is described below following this flowchart.
First, the camera 20 captures a moving image of the subject (object) H (step S01), and the image data captured by the camera 20 is transmitted to the information processing device 60 (step S02). In the usage state shown in FIG. 3, for example, the camera 20 continuously photographs the hand H in the light irradiation space S at the preset frame rate and inputs each frame as image data to the input unit 61 of the information processing device 60.
When the image data for each frame (frame image) is input to the input unit 61, the processing unit 62 executes predetermined processing and predetermined operations based on the image data (step S03). Musical tones and the like are then output from the speakers 50 (step S04). The sound output from the speakers 50 has melodies and syllables composed of sounds of the predetermined pitches selected by the processing unit 62.
When predetermined conditions are satisfied, the sound output from the speakers 50 includes, in addition to these melodic and syllabic sounds, sine wave sounds and reverberations. The conditions under which the speakers 50 output a sine wave sound and those under which they output reverberations are described later.
Through the operations of steps S01 to S04 of the acoustic space generation device 100, musical tones automatically composed according to the movement of the subject H are played from the speakers 50, and the acoustic space 10 is thereby generated.
Next, the content of the processing of the processing unit 62 in step S03 will be described in detail. FIG. 5 is a flowchart showing the processing of the processing unit 62 when outputting a sound of a predetermined pitch. The processing of the processing unit 62 is executed automatically, for example based on the program stored in the storage unit 63.
As shown in FIG. 5, the processing unit 62 first acquires a frame image (step S11). Next, the processing unit 62 compares the light quantity of the image data of that frame with the light quantity of the image data of the frame acquired immediately before it (step S12). In step S12, the light quantities of the entire areas of the frame images are compared; alternatively, the light quantities of a partial, predetermined area of the frame images may be compared instead.
Next, it is determined whether the difference in light quantity is at or above a threshold (step S13). In step S13, the processing unit 62 determines whether the difference between the light quantity of the image data of the current frame and that of the frame acquired immediately before it is at or above the threshold, or below it. The threshold is a predetermined light-quantity value that is set in advance and stored in the storage unit 63.
When the difference in light quantity is at or above the threshold ("YES" in step S13), the processing unit 62 determines that the subject H is in a state of "motion" (step S14). Upon determining that the subject H is in motion, the processing unit 62 generates one motion detection signal (step S15).
When the processing unit 62 determines that the difference in light quantity is below the threshold ("NO" in step S13), it determines that the subject H is in a "stop" (stationary) state (step S24). When it determines that the subject H is stationary, the processing unit 62 may generate one pause/lingering signal (step S25).
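To make the flow of steps S11 to S15 and S24 concrete, the following is a minimal sketch of the per-frame judgment, assuming frames arrive as 8-bit grayscale NumPy arrays and treating the pixel sum as the "light quantity". The class name and the threshold value are illustrative assumptions, not values from the embodiment.

```python
# Minimal sketch of steps S11-S15/S24: compare each frame's light quantity
# with the immediately preceding frame's and judge motion vs. stillness.
# The threshold value here is illustrative, not from the embodiment.
import numpy as np

class MotionJudge:
    def __init__(self, threshold: int = 50_000):
        self.threshold = threshold   # preset light-quantity threshold (S13)
        self.prev_light = None       # light quantity of the preceding frame

    def on_frame(self, frame: np.ndarray) -> str:
        light = int(frame.sum())     # total light quantity of this frame (S12)
        if self.prev_light is None:  # first frame: nothing to compare against
            self.prev_light = light
            return "stop"
        diff = abs(light - self.prev_light)
        self.prev_light = light
        if diff >= self.threshold:   # S13 "YES": state of motion (S14)
            return "motion"          # one "motion" result = one motion detection signal (S15)
        return "stop"                # S13 "NO": stationary state (S24)
```

Each "motion" return corresponds to one motion detection signal; a "stop" return would likewise correspond to the optional pause/lingering signal of step S25.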
When one motion detection signal is generated in step S15, the processing unit 62 selects one pitch for the output sound based on the preset, stored music structure program (step S16). In step S16, the pitch of the sound to be output is selected, based on the music structure program, from a file having, for example, the data shown in FIG. 6. In this case, one pitch is selected from the 42 items of pitch data (A1 to A42) shown in FIG. 6. The pitch selected in step S16 may instead be selected from recorded material, such as acoustic noise, in place of the 42 items of pitch data (A1 to A42) in FIG. 6. In step S16, several of the 42 pitches (A1 to A42) in FIG. 6 may also be selected at the same time.
Following step S16, the processing unit 62 assigns a speaker to the sound of the selected pitch according to the preset, stored spatial structure program (step S17). In step S17, one of the five speakers 51 to 55 is assigned, based on the spatial structure program, to each sound of the selected pitch. When several pitches are selected at the same time in step S16, one of the five speakers 51 to 55 is assigned in step S17 to each of the selected pitches.
FIG. 6 shows an audio file of pitches, with data on, for example, 42 pitches. In FIG. 6, A1 to A42 are the serial numbers of the 42 pitches; these are the pitches selectable in step S16. The MIDI number, also called the note number, is a numerical value representing the pitch and register in MIDI (Musical Instrument Digital Interface). The polyphony is the number of sounds that can be output simultaneously, and also corresponds to the number of layers. The appearance frequency of each pitch when the motion detection signal has occurred 127 times indicates how many times each pitch is selected out of 127 motion detection signals. The output frequency of SP1/SP2/SP3/SP4/SP5 when the motion detection signal has occurred 20 times indicates, for the case where the motion detection signal occurs 20 times and a sound of the given pitch is selected 20 times, how many of those times the sound is assigned to each of the first speaker 51 (SP1), second speaker 52 (SP2), third speaker 53 (SP3), fourth speaker 54 (SP4), and fifth speaker 55 (SP5).
The numerical values for the appearance frequency of each pitch and for how often the sound of each pitch is assigned to each speaker are set in advance. In steps S16 and S17, the pitch selection and speaker assignment are made automatically according to these appearance and assignment frequencies, as sketched below.
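A minimal sketch of how such frequency tables could drive steps S16 and S17 follows. The weight numbers are placeholders standing in for the FIG. 6 columns, which are not reproduced here; only the weighted-random mechanism is illustrated.

```python
# Sketch of steps S16-S17: frequency-weighted pitch selection (music structure
# program) and speaker assignment (spatial structure program). The weights
# below are placeholders for the per-pitch columns of the FIG. 6 audio file.
import random

# how often each pitch appears out of 127 motion detection signals (placeholder)
PITCH_WEIGHTS = {"A1": 6, "A2": 3, "A19": 8, "A25": 4, "A42": 2}

# how often a pitch's sound goes to SP1-SP5 out of 20 selections (placeholder)
SPEAKER_WEIGHTS = {
    "A19": {"SP1": 2, "SP2": 2, "SP3": 2, "SP4": 2, "SP5": 12},
}
DEFAULT_SPEAKER_WEIGHTS = {f"SP{i}": 4 for i in range(1, 6)}

def select_pitch() -> str:
    """S16: choose one pitch according to its preset appearance frequency."""
    pitches, weights = zip(*PITCH_WEIGHTS.items())
    return random.choices(pitches, weights=weights, k=1)[0]

def assign_speaker(pitch: str) -> str:
    """S17: assign one of the five speakers according to the pitch's output frequencies."""
    table = SPEAKER_WEIGHTS.get(pitch, DEFAULT_SPEAKER_WEIGHTS)
    speakers, weights = zip(*table.items())
    return random.choices(speakers, weights=weights, k=1)[0]
```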
Returning to FIG. 5, the processing unit 62 causes the assigned speaker 50 to output the sound of the predetermined pitch (step S18); that is, the sound of the selected pitch is output from the speaker assigned to it. When the speakers 50 output a continuous sound, the speaker assigned to each sound making up the continuous sound may be varied (for example, a different speaker may be assigned to each constituent sound) so that the sound output from the speakers 50 appears to move up, down, left, and right within the acoustic space 10.
FIG. 7 illustrates an example of the operation of the processing unit 62. Through the processing described above, as also shown in FIG. 7, when the subject H is in a state of motion, sounds of the pitches selected based on the music structure program are output from the speakers 50. When the subject H is stationary, the speakers 50 enter a temporary rest state in which no sound is output, and the acoustic space 10 falls silent or resonates with the lingering sound. Accordingly, when the subject H keeps moving continuously, sounds of the predetermined pitches are output continuously, producing sound as texture. When, on the other hand, the subject H interposes moments of stillness within its movement, the intervals between movements are expressed in sound and the lingering sound can be heard. The result is an acoustic space 10 in which melodies and syllables play.
In the processing of step S03 described above, the processing unit 62 also causes the speakers 50 to generate a sine wave under predetermined conditions. This sine wave is output mixed into the waveform of the predetermined-pitch sound described above. The processing of the processing unit 62 for generating such a sine wave is described next.
FIG. 8 is a flowchart of the sine wave generation processing in the processing unit 62, which runs as follows. As shown in FIG. 8, it is determined whether the motion detection signal has occurred a first predetermined number of times in succession within a first unit time (step S31), that is, whether a state of motion has occurred the first predetermined number of times in succession within the first unit time. If so ("YES" in step S31), it is determined whether the number of such consecutive runs has reached a second predetermined number (step S32). If it has ("YES" in step S32), it is determined whether a motion detection signal has occurred after a stationary state lasting a second unit time or longer (step S33). If the object is determined to be in motion after having been stationary for the second unit time or longer ("YES" in step S33), and furthermore a specific one of the plurality of speakers 50 has been assigned to the predetermined sound ("YES" in step S34), a sine wave is generated from that specific speaker (step S35). In this sine wave generation processing, the stationary state may be recognized either from the absence of motion detection signals or by detecting the pause/lingering signal.
Specifically, this works, for example, as follows. FIG. 9 illustrates an example of the operation of the processing unit 62. The first unit time is preset to, for example, 180 ms; the first predetermined number of times to, for example, 1 to 30; the second unit time to, for example, 500 ms; and the second predetermined number of times to, for example, 10. These settings are stored in the storage unit 63. As shown in FIG. 9, when the motion detection signal has occurred 1 to 30 times in succession within periods of 180 ms or less, the total of such cycles has reached 10, and a motion detection signal is then detected after a stationary state of 500 ms or longer, the processing unit 62 assigns, for example, the fifth speaker 55 and causes it to generate the first sine wave for 60 to 90 seconds. In addition, when the pitch selected upon the motion detection signal of step S33 is one of, for example, A19 to A25 among the 42 pitches of FIG. 6, the processing unit 62 generates a second sine wave in addition to the first. The second sine wave is a synthesis of the third and fourth sine waves described below under a 10-second volume curve. The third sine wave is a sine wave at the frequency two octaves above the pitch selected upon the motion detection signal of step S33, and the fourth sine wave is the third sine wave with a frequency of 0.5 to 11 Hz added. The second sine wave is output from, for example, the fifth speaker 55 for 10 seconds. When a numerical value of the fourth sine wave is changed, the target value is reached using a time interpolation of 1500 to 4000 ms.
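Below is a sketch of this trigger and of a possible synthesis of the second sine wave. The embodiment does not state whether the 180 ms window bounds a whole run or the gap between successive signals; this sketch reads it as the maximum gap between consecutive signals, which is consistent with the 40 fps frame period. That reading, the class interface, and the envelope shape are assumptions.

```python
# Sketch of the FIG. 8/9 logic. Timing constants follow the embodiment
# (180 ms, 1-30 signals per run, 10 runs, 500 ms rest); the gap-based
# reading of the 180 ms window is an assumption.
import numpy as np

class SineWaveTrigger:
    FIRST_UNIT = 0.180   # first unit time [s], read as the max gap between signals
    REST_UNIT = 0.500    # second unit time [s]
    RUNS_NEEDED = 10     # second predetermined number of times

    def __init__(self):
        self.run_count = 0      # signals in the current consecutive run
        self.runs = 0           # completed runs of 1-30 consecutive signals
        self.last_signal = None

    def on_motion_signal(self, t: float) -> bool:
        """Return True when the first sine wave should start (S35)."""
        fire = False
        if self.last_signal is not None and t - self.last_signal > self.FIRST_UNIT:
            if 1 <= self.run_count <= 30:   # S31/S32: the previous run closed
                self.runs += 1
            if self.runs >= self.RUNS_NEEDED and t - self.last_signal >= self.REST_UNIT:
                fire = True                  # S33: motion after >=500 ms of stillness
                self.runs = 0                # re-arm after firing
            self.run_count = 0
        self.run_count += 1
        self.last_signal = t
        return fire

def second_sine_wave(f0: float, added_hz: float = 5.0, sr: int = 44100) -> np.ndarray:
    """Possible synthesis of the second sine wave: the third sine wave (two
    octaves above the selected pitch f0, i.e. 4*f0) mixed with the fourth
    (the third plus 0.5-11 Hz) under a 10-second volume curve."""
    t = np.arange(int(10 * sr)) / sr
    envelope = np.hanning(t.size)            # illustrative 10 s volume curve
    third = np.sin(2 * np.pi * (4 * f0) * t)
    fourth = np.sin(2 * np.pi * (4 * f0 + added_hz) * t)
    return envelope * 0.5 * (third + fourth)
```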
Further, in the processing of step S03 described above, the processing unit 62 causes the speakers 50 to output the first reverberation under predetermined conditions. The processing of the processing unit 62 for outputting the first reverberation is described next.
FIG. 10 is a flowchart of the output processing of the first reverberation, which the processing unit 62 runs as follows. As shown in FIG. 10, it is determined whether the motion detection signal has occurred the first predetermined number of times in succession within the first unit time (step S41). If so ("YES" in step S41), it is determined whether the number of such consecutive runs has reached a third predetermined number (step S42). If it has ("YES" in step S42), it is then determined whether a motion detection signal has occurred after a stationary state lasting the second unit time or longer (step S43). If the object is determined to be in motion after having been stationary for the second unit time or longer ("YES" in step S43), and furthermore a specific one of the plurality of speakers 50 has been assigned to the selected sound ("YES" in step S44), the first reverberation is output from that specific speaker (step S45).
Specifically, this works, for example, as follows. FIG. 11 illustrates an example of the operation of the processing unit 62. The first unit time, the first predetermined number of times, and the second unit time are preset to, for example, the values given above, and the third predetermined number of times is set to, for example, 3 to 5. These settings are stored in the storage unit 63. As shown in FIG. 11, the processing unit 62 counts each run in which the motion detection signal occurs 1 to 30 times in succession within a period of 180 ms or less as one cycle; when the 3rd to 5th count is reached and a motion detection signal is then detected after a stationary state of 500 ms or longer, the processing unit 62 assigns, for example, the fifth speaker 55 and causes it to output the first reverberation for 60 to 90 seconds. If the speaker assigned to the pitch selected upon that post-stillness motion detection signal is itself the fifth speaker 55, the first reverberation is output from the fifth speaker 55 mixed with the sound of that pitch.
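The first-reverberation condition differs from the sine wave condition only in the run target (3 to 5 instead of 10) and in the action taken; reusing the SineWaveTrigger sketch above, it could look as follows. The reuse by subclassing and the fixed choice within the 3-to-5 range are assumptions.

```python
# Sketch of the FIG. 10/11 condition, reusing the SineWaveTrigger above with
# the run target set to the third predetermined number of times (3-5).
class FirstReverbTrigger(SineWaveTrigger):
    RUNS_NEEDED = 4  # third predetermined number of times (any of 3-5)

def on_signal(trigger: FirstReverbTrigger, t: float, pitch_speaker: str) -> None:
    if trigger.on_motion_signal(t):
        # S45: 60-90 s of the first reverberation from, e.g., SP5; if SP5 is
        # also the speaker assigned to the selected pitch, the reverberation
        # is mixed with that pitch's sound.
        mixed = " mixed with the pitch" if pitch_speaker == "SP5" else ""
        print(f"first reverberation on SP5{mixed}")
```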
Furthermore, in the processing of step S03 described above, the processing unit 62 causes the speakers 50 to output the second reverberation under predetermined conditions. The processing of the processing unit 62 for outputting the second reverberation is described next.
FIG. 12 is a flowchart of the output processing of the second reverberation, which the processing unit 62 runs as follows. As shown in FIG. 12, it is determined whether the motion detection signal has occurred a fourth predetermined number of times in succession within the first unit time (step S51). If so ("YES" in step S51), and furthermore a specific one of the plurality of speakers 50 has been assigned to the selected sound ("YES" in step S52), the second reverberation is output from that specific speaker (step S53).
Specifically, this works, for example, as follows. FIG. 13 illustrates an example of the operation of the processing unit 62. The first unit time is preset to, for example, the value given above, and the fourth predetermined number of times is set to, for example, 30. These settings are stored in the storage unit 63. As shown in FIG. 13, when the motion detection signal occurs 30 times in succession within a period of 180 ms or less, the processing unit 62 causes the speakers 50 to output the second reverberation. The processing unit 62 outputs the second reverberation for 3 to 10 seconds, mixed with, for example, the sounds of the pitches A1 to A18 and A70 to A79 among the pitch data of FIG. 6. The processing unit 62 also outputs the second reverberation from, for example, the first speaker 51 and second speaker 52, and then, after an acoustic movement of 1 second, from the third speaker 53 and fourth speaker 54 as well.
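A sketch of this last condition follows, under the same gap-based reading of the 180 ms window as above; the scheduling callback is a stand-in for whatever audio engine drives the speakers.

```python
# Sketch of the FIG. 12/13 condition: thirty consecutive motion detection
# signals (gaps of 180 ms or less) trigger the second reverberation.
class SecondReverbTrigger:
    FIRST_UNIT = 0.180
    COUNT_NEEDED = 30  # fourth predetermined number of times

    def __init__(self):
        self.count = 0
        self.last_signal = None

    def on_motion_signal(self, t: float) -> bool:
        if self.last_signal is not None and t - self.last_signal > self.FIRST_UNIT:
            self.count = 0                   # the consecutive run was broken
        self.last_signal = t
        self.count += 1
        if self.count >= self.COUNT_NEEDED:  # S51 "YES"
            self.count = 0
            return True                      # S53: output the second reverberation
        return False

def play_second_reverb(schedule) -> None:
    """Start on SP1/SP2, then reach SP3/SP4 after a 1-second acoustic movement."""
    schedule(delay=0.0, speakers=("SP1", "SP2"))
    schedule(delay=1.0, speakers=("SP3", "SP4"))
```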
As described above, the processing unit 62 causes the speakers 50 to output sounds of predetermined pitches through the processing of steps S11 to S18, to generate a sine wave through the processing of steps S31 to S35, to output the first reverberation through the processing of steps S41 to S45, and to output the second reverberation through the processing of steps S51 to S53.
As described above, the acoustic space generation device 100 generates the acoustic space 10 via the music structure program and spatial structure program implemented in the storage unit 63, driven by the motion detection signals issued by the processing unit 62, which judges the alternation of "moving" and "stationary" (the variations of motion and rest time) from the sensor input acquired through the input unit 61. The acoustic space generation device 100 determines the states of motion and stillness of the subject H from its movement, selects sounds according to the frequency, period, number of occurrences, and so on of each state, and outputs the selected sounds as musical tones. The acoustic space generation device 100 can therefore easily play musical tones corresponding to the manner of an object's movement and easily provide an acoustic space 10 in which such tones play. It can also play the musical tones and sounds of new syllables generated from the repeated alternation of an object's motion and stillness, and, under the predetermined conditions, can provide an acoustic space 10 that plays unique musical tones including the sounds produced by mixing in the sine waves, the first reverberation, the second reverberation, and so on.
Moreover, since the moving subject H is irradiated with the light L in the acoustic space generation device 100, the user U can perform or appreciate improvised music while watching the movement of the subject H lit by the light L. The acoustic space generation device 100 can thus provide an acoustic space 10 with a new form of entertainment in which the user U enjoys improvised music together with visual change. Further, because the acoustic space generation device 100 lets the user U experience and hear sounds that are born from the states of motion and stillness and that relate to bodily breathing, as in physical expression, yoga, and welfare, it can contribute not only to the experience and enjoyment of music but also to the improvement of the user's spirit.
Next, a modified example of the acoustic space generation device 100 according to the above embodiment will be described. The acoustic space generation device 100 described above includes the camera 20 for photographing the subject and the light source 30 for irradiating the subject H; in place of both the camera 20 and the light source 30, however, the device may include a sensor capable of detecting the state of an object. The configuration of the acoustic space generation device according to this modified example is described concretely below.
In the configuration of the acoustic space generation device according to this modified example, the sensor capable of detecting the state of an object may be, for example, a distance sensor capable of detecting the distance to the object. The camera 20 is itself one kind of sensor capable of detecting the movement of an object.
The operations of the input unit and processing unit of the acoustic space generation device according to this modified example are the same as those of the input unit 61 and processing unit 62 described above, except as follows. The input unit of the modified example continuously acquires from the sensor a signal corresponding to, for example, the distance from the sensor to the object. The processing unit of the modified example compares each acquired signal with the immediately preceding one and, if the difference is at or above a threshold, determines that the object is in a state of motion. The threshold is a signal-related value set in advance. The subsequent processing is the same as in the processing unit 62 described above. That is, the modified example comprises a sensor capable of detecting the state of an object, an input unit that acquires signals from the sensor, a processing unit that selects the sound to be generated according to the signals, and a plurality of speakers 50 that output the sound selected by the processing unit; the processing unit compares a signal with the immediately preceding signal, determines, based on the preset signal threshold, whether the object is in a state of motion or stillness, generates a motion detection signal when motion is determined, selects the pitch to be output according to the pre-stored music structure program, assigns one of the plurality of speakers 50 to each sound of the selected pitch according to the pre-stored spatial structure program, and causes the assigned speaker to output the sound. The other parts of the configuration are the same as in the acoustic space generation device 100 of the embodiment described above. In addition to the pitched sounds, the processing unit of the modified example causes the plurality of speakers 50 to output the sine waves, the first reverberation, and the second reverberation according to the frequency, period, number of occurrences, and so on of each state.
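Only the first comparison stage changes in this modified example; everything from pitch selection onward is unchanged. A minimal sketch, assuming a scalar reading such as a distance value and reusing the select_pitch and assign_speaker sketches above; the threshold and sample values are illustrative.

```python
# Sketch of the modified example: a generic scalar sensor reading replaces
# the frame light quantity; the downstream path (S16 onward) is unchanged.
class SensorMotionJudge:
    def __init__(self, threshold: float):
        self.threshold = threshold  # preset signal threshold
        self.prev = None            # immediately preceding signal

    def on_reading(self, value: float) -> str:
        if self.prev is None:
            self.prev = value
            return "stop"
        diff = abs(value - self.prev)
        self.prev = value
        return "motion" if diff >= self.threshold else "stop"

# illustrative use with a distance sensor, feeding the same pipeline as before
judge = SensorMotionJudge(threshold=15.0)      # e.g. a 15 mm change counts as motion
for reading in (500.0, 502.0, 530.0, 531.0):   # illustrative distance values [mm]
    if judge.on_reading(reading) == "motion":
        speaker = assign_speaker(select_pitch())
```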
 According to this modification, the acoustic space generation device can be realized with an even simpler configuration. For example, the device can be built from a small PC with a video camera function, which in turn makes it easy to apply the device to music teaching materials for children and similar uses.
 In addition to the above modification, the acoustic space generation device 100 may also be configured as follows. For example, the processing unit may determine whether an object is in a state of motion or a state of stillness based on a preset signal threshold by discriminating between the state in which a sensor-equipped grid cell is stepped on and the state in which the foot has been lifted from that cell, or by discriminating whether a buzzer switch is ON or OFF. That is, a sensor capable of sensing contact with a foot may output different signals to the processing unit depending on whether the foot is touching the cell or away from it (for example, 1 while touching, 0 while away), or a sensor capable of detecting the ON state of a buzzer may output different signals depending on whether the buzzer switch is ON or OFF (for example, 1 when ON, 0 when OFF). The processing unit then compares each signal acquired from the sensor with the signal acquired immediately before it and determines whether the difference is equal to or greater than the threshold (for example, 1 or greater). In this way, the processing unit determines whether the object is in a state of motion (a state in which the action has changed) or a state of stillness (a state in which the action has not changed); for example, a difference of 1 indicates motion and a difference of 0 indicates stillness. When it determines a state of motion, the processing unit generates a motion sensing signal, selects a pitch to be output by the prestored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch by the prestored spatial structure program, and causes the assigned speaker to output the sound. In addition to the sounds of that pitch, it causes the plurality of speakers 50 to output a sine wave, a first reverberation sound, and a second reverberation sound according to the frequency, period, number of occurrences, and so on of each state.
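 The binary-sensor variant reduces to a very small comparison, sketched below under the signal encoding given as an example above (1 for contact or ON, 0 otherwise); the function name is an assumption.

```python
# Sketch of the binary-sensor variant (grid cell or buzzer switch):
# a change of 1 or more in the 0/1 signal marks a state of motion.

def judge_binary(previous: int, current: int, threshold: int = 1) -> str:
    """'motion' when the contact/ON state has changed, 'stop' otherwise."""
    return "motion" if abs(current - previous) >= threshold else "stop"

assert judge_binary(0, 1) == "motion"  # foot placed on the cell
assert judge_binary(1, 1) == "stop"    # foot kept on the cell
assert judge_binary(1, 0) == "motion"  # foot lifted from the cell
```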
 Further, in the acoustic space generation device 100, the processing unit may determine whether an object is in a state of motion or a state of stillness by detecting, based on a preset signal threshold, the presence or absence of an attack (that is, a decisive, strong onset of a sound: its beginning, its rise) in input audio information such as the sound input to a microphone. For example, the acoustic space generation device 100 may include a microphone as a device for picking up the sound of an object, and the processing unit may compare each audio signal acquired from the microphone, acting as the sensor, with the audio signal acquired immediately before it, determine whether the difference (for example, the difference in volume) is equal to or greater than the preset threshold, and thereby determine whether the object is in a state of motion or a state of stillness.
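 One plausible way to realize such attack detection is to compare the loudness of short successive audio frames; the following sketch does so with a root-mean-square measure. The frame representation, the RMS measure, and the threshold value are assumptions for illustration, not the disclosed method itself.

```python
# Sketch of attack detection from microphone input: the loudness of
# each short frame is compared with that of the frame immediately
# before it, and a sufficient rise counts as an attack.

import math

def rms(frame: list) -> float:
    """Root-mean-square loudness of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def has_attack(prev_frame: list, frame: list, threshold: float = 0.1) -> bool:
    """True when loudness rises by at least `threshold`, i.e. the
    decisive onset (attack) of a sound is detected."""
    return rms(frame) - rms(prev_frame) >= threshold

quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.4, -0.5, 0.45, -0.38]
print(has_attack(quiet, loud))  # True: a state of motion would be judged
print(has_attack(loud, loud))   # False: no rise, a state of stillness
```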
 The embodiments and modifications of the present invention have been described above, but the technical scope of the present invention is not limited to them. For example, the object whose movement is detected by a sensor such as a camera is not limited to the hand H; it may be a person's foot or whole body, an object held by a person (a stick, for example), the shadow of leaves, waves, and so on. The plurality of speakers 50 of the acoustic space generation device 100 may consist of a plurality of stationary speakers 51 and the like, as in the embodiment, or may be headphones, binaural earphones, or the like provided with a pair of speakers that output sound directly to both ears of the user U. Furthermore, although the acoustic space 10 generated by the acoustic space generation device 100 is a real space in the embodiment, it may instead be a virtual space: when headphones or binaural earphones serve as the speakers 50, the acoustic space generation device 100 provides a virtual acoustic space to the user U wearing them. In the embodiment and modifications described above, a sensor capable of detecting the movement of an object (subject) is used to discriminate between a state of motion and a state of stillness (stop), and a sound is generated according to the result; the movement of the object, however, is not limited to the movement of a real object. That is, the acoustic space generation device of the present invention may perform the same discrimination between motion and stillness on the movement of an object at a specific fictitious location, for example in virtual reality (VR) or within a game (such as in OpenGL (registered trademark) rendering), and generate sound accordingly. The acoustic space generation device 100 may also store in its storage unit 63 a plurality of music structure programs whose algorithms are set with mutually different conditions (thresholds and the like), execute the processing of the above step S16 with a program selected from among them as appropriate, or execute a plurality of music structure programs simultaneously. In that case, the plurality of music structure programs are stored in the storage unit 63 divided into separate layers, for example. This makes it possible to select a music structure program to suit the usage environment and the number of installation sites of the acoustic space generation device 100, and also makes it easy to improve or add music structure programs.
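 As a rough sketch of such layered storage, and assuming (purely for illustration) that each music structure program can be reduced to a callable mapping a motion state to a pitch number, the selection and simultaneous execution could look as follows; the class and method names are assumptions, not disclosed identifiers.

```python
# Sketch of holding several music structure programs as separate layers
# and executing one or several of them for the processing of step S16.

from typing import Callable, Dict, List

MusicProgram = Callable[[str], int]  # motion state -> pitch number (1..42)

class LayeredPrograms:
    def __init__(self) -> None:
        self.layers: Dict[str, MusicProgram] = {}
        self.active: List[str] = []

    def register(self, name: str, program: MusicProgram) -> None:
        self.layers[name] = program  # each program stored as its own layer

    def activate(self, *names: str) -> None:
        self.active = list(names)    # select one or several layers

    def run(self, state: str) -> List[int]:
        """Execute every active layer at once and collect the pitches."""
        return [self.layers[name](state) for name in self.active]


programs = LayeredPrograms()
programs.register("layer1", lambda state: 2 if state == "motion" else 0)
programs.register("layer2", lambda state: 17 if state == "motion" else 0)
programs.activate("layer1", "layer2")  # run two programs simultaneously
print(programs.run("motion"))          # [2, 17]
```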
 The acoustic space generation device of the present invention may also be realized, for example, as a system in which a large number of speakers 51 and the like are installed at random on tree branches in a natural environment; as a product in which a plurality of ultra-small speakers 51 and sensors are scattered over a flat surface (for example, a painting, a book, or a wall surface, such as a square wall 10 cm or 5 m on a side); or as an environmental system in which sensors and speakers 51 are mounted on a plurality of street lamps that also serve as the light sources 30. The acoustic space generation device of the present invention may further constitute a composite improvisational ensemble device or instrument. In that case, the device may, for example, change or reassign the 42 numbers (A1 to A42) to which the pitch files shown in Fig. 6 are allocated to other recorded material, or associate them with the operation of other music structure programs (layers) implemented in the storage unit 63. That is, for example, when pitch number "A2" is selected in the above step S16, noise recording material and sound collected from a microphone may be played back in real time within the acoustic space; then, once pitch number "A2" or another number has been selected one or more further times, the microphone output may be muted and another music structure program or layer operated at the same time. Although the acoustic space generation device 100 of the above embodiment includes a plurality of speakers 50, a single speaker 51 or the like may suffice.
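 The ensemble behavior for pitch number "A2" just described can be sketched as a small state machine. Everything below is an illustrative assumption: the class and callback names, the choice of one further selection as the trigger, and the reduction of "operate another layer" to a callback.

```python
# Sketch of the ensemble variant: selecting "A2" starts noise material
# plus live microphone throughput; one further selection of "A2" or any
# other number mutes the microphone and starts another stored layer.

class EnsembleController:
    def __init__(self, start_other_layer):
        self.start_other_layer = start_other_layer  # hook into another layer
        self.mic_live = False
        self.selections_while_live = 0

    def on_pitch_selected(self, number: str) -> None:
        if number == "A2" and not self.mic_live:
            self.mic_live = True        # play noise material and live mic
            self.selections_while_live = 0
        elif self.mic_live:
            self.selections_while_live += 1
            if self.selections_while_live >= 1:  # "once or several times"
                self.mic_live = False            # mute the mic output
                self.start_other_layer()         # operate another layer


controller = EnsembleController(start_other_layer=lambda: None)
controller.on_pitch_selected("A2")  # microphone goes live
controller.on_pitch_selected("A5")  # microphone muted, next layer started
```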
H: subject, hand (object)
20: camera (sensor)
30: light source
50, 51, 52, 53, 54, 55: speakers
61: input unit
62: processing unit
100: acoustic space generation device

Claims (2)

  1.  An acoustic space generation device comprising:
     a camera that photographs an object;
     a light source that illuminates the object;
     an input unit that acquires an image from the camera;
     a processing unit that selects a sound to be generated according to frames of the image; and
     a plurality of speakers that output the sound selected by the processing unit,
     wherein the processing unit:
     compares the light quantity of a frame with the light quantity of the frame immediately preceding it and determines, based on a preset light quantity threshold, whether the object is in a state of motion or a state of stillness;
     when the object is determined to be in a state of motion, generates a motion sensing signal, selects a pitch to be output by a prestored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch by a prestored spatial structure program, and causes the assigned speaker to output the sound;
     when, within a first unit time, the state of motion has occurred a first predetermined number of times in succession, the number of such consecutive occurrences has reached a second predetermined number, the object is determined to be in a state of motion after having been in a state of stillness for a second unit time or longer, and a specific speaker among the plurality of speakers has been assigned to a predetermined sound, causes the specific speaker to generate a sine wave by the music structure program;
     when, within the first unit time, the state of motion has occurred the first predetermined number of times in succession, the number of such consecutive occurrences has reached a third predetermined number, the object is determined to be in a state of motion after having been in a state of stillness for the second unit time or longer, and the specific speaker has been assigned to the selected sound, causes the specific speaker to output a first reverberation sound by the music structure program; and
     when, within the first unit time, the state of motion has occurred a fourth predetermined number of times in succession, causes the speakers to output a second reverberation sound in a predetermined order by the music structure program.
  2.  An acoustic space generation device comprising:
     a sensor capable of detecting the state of an object;
     an input unit that acquires a signal from the sensor;
     a processing unit that selects a sound to be generated according to the signal; and
     a plurality of speakers that output the sound selected by the processing unit,
     wherein the processing unit:
     compares the signal with the signal immediately preceding it and determines, based on a preset signal threshold, whether the object is in a state of motion or a state of stillness;
     when the object is determined to be in a state of motion, generates a motion sensing signal, selects a pitch to be output by a prestored music structure program, assigns one of the plurality of speakers to each sound of the selected pitch by a prestored spatial structure program, and causes the assigned speaker to output the sound;
     when, within a first unit time, the state of motion has occurred a first predetermined number of times in succession, the number of such consecutive occurrences has reached a second predetermined number, the object is determined to be in a state of motion after having been in a state of stillness for a second unit time or longer, and a specific speaker among the plurality of speakers has been assigned to a predetermined sound, causes the specific speaker to generate a sine wave by the music structure program;
     when, within the first unit time, the state of motion has occurred the first predetermined number of times in succession, the number of such consecutive occurrences has reached a third predetermined number, the object is determined to be in a state of motion after having been in a state of stillness for the second unit time or longer, and the specific speaker has been assigned to the selected sound, causes the specific speaker to output a first reverberation sound by the music structure program; and
     when, within the first unit time, the state of motion has occurred a fourth predetermined number of times in succession, causes the speakers to output a second reverberation sound in a predetermined order by the music structure program.
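The layered timing conditions recited in claims 1 and 2 may be easier to follow in procedural form. The following Python fragment is a rough, non-limiting sketch only: the symbols N1 to N4 and T2 stand for the first to fourth predetermined numbers and the second unit time, the concrete values are assumptions, and the bookkeeping of the counters (and of the speaker assignment) is deliberately simplified.

```python
from typing import Optional

# Assumed values for the predetermined numbers and unit times.
N1 = 3    # first predetermined number: length of one consecutive-motion run
N2 = 2    # second predetermined number: runs required for the sine wave
N3 = 4    # third predetermined number: runs required for the first reverberation
N4 = 5    # fourth predetermined number: consecutive motions for the second reverberation
T2 = 2.0  # second unit time (seconds of stillness required beforehand)

def extra_sound(consecutive_motions: int, n1_run_count: int,
                still_seconds: float, moving_now: bool,
                specific_speaker_assigned: bool) -> Optional[str]:
    """Map the claimed conditions, all evaluated within the first unit
    time, to the additional sound to emit. `n1_run_count` counts how
    often the state of motion has occurred N1 times in succession."""
    if consecutive_motions >= N4:
        return "second reverberation sound, output in a predetermined order"
    after_stillness = still_seconds >= T2 and moving_now
    if n1_run_count >= N3 and after_stillness and specific_speaker_assigned:
        return "first reverberation sound, from the specific speaker"
    if n1_run_count >= N2 and after_stillness and specific_speaker_assigned:
        return "sine wave, from the specific speaker"
    return None
```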

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19942900.2A EP4024391A4 (en) 2019-08-30 2019-08-30 Acoustic space creation apparatus
JP2019564546A JP6710428B1 (en) 2019-08-30 2019-08-30 Acoustic space generator
PCT/JP2019/034122 WO2021038833A1 (en) 2019-08-30 2019-08-30 Acoustic space creation apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/034122 WO2021038833A1 (en) 2019-08-30 2019-08-30 Acoustic space creation apparatus

Publications (1)

Publication Number Publication Date
WO2021038833A1 (en)

Family

ID=71079288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/034122 WO2021038833A1 (en) 2019-08-30 2019-08-30 Acoustic space creation apparatus

Country Status (3)

Country Link
EP (1) EP4024391A4 (en)
JP (1) JP6710428B1 (en)
WO (1) WO2021038833A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7038122B2 (en) * 2001-05-08 2006-05-02 Yamaha Corporation Musical tone generation control system, musical tone generation control method, musical tone generation control apparatus, operating terminal, musical tone generation control program and storage medium storing musical tone generation control program
JP2005227628A (en) * 2004-02-13 2005-08-25 Matsushita Electric Ind Co Ltd Control system using rhythm pattern, method and program
DE102005049485B4 (en) * 2005-10-13 2007-10-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Control playback of audio information
US9966051B2 (en) * 2016-03-11 2018-05-08 Yamaha Corporation Sound production control apparatus, sound production control method, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS643829B2 (en) 1981-04-09 1989-01-23 Inoue Japax Res
JP2000276138A (en) * 1999-03-23 2000-10-06 Yamaha Corp Music sound controller
JP2000276139A (en) * 1999-03-23 2000-10-06 Yamaha Corp Method for generating music sound and method for controlling electronic instrument
JP2004205738A (en) * 2002-12-25 2004-07-22 Shunsuke Nakamura Apparatus, program, and method for musical sound generation
JP2005316300A (en) * 2004-04-30 2005-11-10 Kyushu Institute Of Technology Semiconductor device having musical tone generation function, and mobile type electronic equipment, mobil phone, spectacles appliance and spectacles appliance set using the same
JP2016180974A (en) * 2014-09-19 2016-10-13 渡会 寛 Electronic musical instrument using light receiving element
US20160267895A1 (en) * 2015-03-13 2016-09-15 Samsung Electronics Co., Ltd. Electronic device, method for recognizing playing of string instrument in electronic device, and method for providng feedback on playing of string instrument in electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4024391A4

Also Published As

Publication number Publication date
EP4024391A4 (en) 2023-05-03
EP4024391A1 (en) 2022-07-06
JP6710428B1 (en) 2020-06-17
JPWO2021038833A1 (en) 2021-09-27


Legal Events

ENP (Entry into the national phase): Ref document number 2019564546; Country of ref document: JP; Kind code of ref document: A
121 (Ep: the epo has been informed by wipo that ep was designated in this application): Ref document number 19942900; Country of ref document: EP; Kind code of ref document: A1
NENP (Non-entry into the national phase): Ref country code: DE
ENP (Entry into the national phase): Ref document number 2019942900; Country of ref document: EP; Effective date: 20220330