EP4024391A1 - Acoustic space creation apparatus - Google Patents


Info

Publication number
EP4024391A1
Authority
EP
European Patent Office
Prior art keywords
sound
processing unit
musical
predetermined number
output
Prior art date
Legal status
Pending
Application number
EP19942900.2A
Other languages
German (de)
French (fr)
Other versions
EP4024391A4 (en)
Inventor
Junya OIKAWA
Current Assignee
Sonifidea LLC
Original Assignee
Sonifidea LLC
Priority date
Filing date
Publication date
Application filed by Sonifidea LLC filed Critical Sonifidea LLC
Publication of EP4024391A1 publication Critical patent/EP4024391A1/en
Publication of EP4024391A4 publication Critical patent/EP4024391A4/en

Classifications

    All classifications fall under G10H (GPHYSICS > G10 MUSICAL INSTRUMENTS; ACOUSTICS > G10H ELECTROPHONIC MUSICAL INSTRUMENTS):

    • G10H1/0008: Details of electrophonic musical instruments; associated control or indicating means
    • G10H2210/281: Musical effects; acoustic effect simulation; reverberation or echo
    • G10H2210/295: Musical effects; acoustic effect simulation; spatial effects, musical uses of multiple audio channels, e.g. stereo
    • G10H2220/201: User input interfaces for movement interpretation, i.e. capturing and recognizing a gesture or a specific kind of movement, e.g. to control a musical instrument
    • G10H2220/401: 3D sensing, i.e. three-dimensional (x, y, z) position or movement sensing
    • G10H2220/455: Camera input, e.g. analyzing pictures from a video camera and using the analysis results as control data

Definitions

  • the present invention relates to an acoustic space creation apparatus.
  • Japanese Patent No. 3643829 discloses a configuration of a musical sound generating apparatus that reacts to the motion of a person as a subject of a camera, creates musical sound data based on a position and a variation amount of the motion, and outputs it as a sound.
  • According to this apparatus, a performer does not need to acquire the knowledge and skills to operate an apparatus that automatically plays music or to play a musical instrument, and the performer can easily give an improvisational performance simply by performing a motion in front of the camera.
  • Patent Literature 1 Japanese Patent No. 3643829
  • As described above, there is known an apparatus that outputs a sound reacting to a motion of a subject. If there is an apparatus that can recognize the motion of the subject from a viewpoint different from that of the prior art and play a new musical sound generated from this, the musical sound generated corresponding to the motion of the subject can be further enjoyed.
  • In the musical sound generating apparatus of Japanese Patent No. 3643829, it is necessary both to specify a position of the motion of the subject in the camera image and to calculate the variation amount of the motion of the subject. Accordingly, the above-described musical sound generating apparatus may place a large data processing burden on a performance. In view of such a point, it is also an object of the present invention to provide an apparatus that can more easily give an improvisational performance based on a motion state of an object.
  • The present invention focuses on the simple motions of "moving" and "stopping," and is conceived for the purpose of converting the temporal senses/sensations (intervals) of various kinds of motions, such as physical expression, breathing, "pauses," and the ON/OFF of things, into a sound or a syllable. The present invention thus provides an apparatus that mainly focuses on the repetition of "MOTION (movement)" and "STOP" of an object, generates and organizes a melody and a syllable according to a specific musical scale, and creates an acoustic space based on that repetition.
  • the present invention provides an acoustic space creation apparatus that includes a camera, a light source, an input unit, a processing unit, and a plurality of speakers.
  • the camera captures an object.
  • the light source irradiates the object.
  • the input unit acquires images from the camera.
  • the processing unit selects a sound to be generated corresponding to a frame of the video.
  • the plurality of speakers output a sound selected by the processing unit.
  • the processing unit compares a light amount in the frame with a light amount in a frame immediately before the frame and determines whether the object is in a motion state or a stop state based on a predetermined threshold for light amount.
  • When the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound.
  • the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program.
  • the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program.
  • the processing unit causes the speaker to output a second reverberation sound in a predetermined order in accordance with the musical structure program.
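
To make the claimed flow easier to follow, here is a minimal sketch of the capture, compare, select, allocate, and output loop. All names (`camera`, `detector`, `music_program`, `spatial_program`, `speakers`) are hypothetical interfaces invented for illustration; the patent publishes no source code.

```python
def run_pipeline(camera, detector, music_program, spatial_program, speakers):
    """Sketch of the claimed loop (hypothetical interfaces, not the
    patented implementation)."""
    prev_frame = None
    for frame in camera.frames():                     # input unit: acquire frames
        moving = detector.compare(frame, prev_frame)  # light-amount comparison
        prev_frame = frame
        if moving:                                    # motion state detected
            pitch = music_program.select_pitch()      # musical structure program
            index = spatial_program.allocate(pitch)   # spatial structure program
            speakers[index].play(pitch)               # allocated speaker outputs
```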
  • the present invention also provides an acoustic space creation apparatus that includes a sensor, an input unit, a processing unit, and a plurality of speakers.
  • the sensor is capable of detecting a state of an object.
  • the input unit acquires signals from the sensor.
  • the processing unit selects a sound to be generated corresponding to the signal.
  • the plurality of speakers output a sound selected by the processing unit.
  • the processing unit compares the signal with a signal immediately before the signal and determines whether the object is in a motion state or a stop state based on a predetermined threshold for signal.
  • When the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound.
  • the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program.
  • the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program.
  • the processing unit causes the speaker to output a second reverberation sound in a predetermined order in accordance with the musical structure program.
  • The acoustic space creation apparatus of the present invention determines each of a motion state and a stop state based on the motion of the object (for example, a subject), selects a sound in accordance with, for example, a frequency, a cycle, and/or a number of occurrences of each state, and outputs the selected sound as the musical sound. Consequently, with the present invention, it is possible to easily play a musical sound corresponding to the aspect of the motion of the object and to easily provide an acoustic space in which such a musical sound is performed. Furthermore, the present invention allows performing the musical sound and/or the acoustics of the new syllable generated based on the repetition of motion and stop of the object.
  • FIGS. 1A and 1B are drawings illustrating an outline of one example of a configuration of an acoustic space creation apparatus 100 according to the embodiment; FIG. 1A is a drawing viewed from a front side, and FIG. 1B is a drawing viewed from above.
  • FIG. 1A illustrates a state where a wall portion 11b on the front side is shown transparent.
  • FIG. 1B illustrates a state where a ceiling portion 11c is shown transparent.
  • the acoustic space creation apparatus 100 is an apparatus that generates a melody and a syllable by a specific musical scale and creates an acoustic space 10 performing the melody and the syllable.
  • the acoustic space 10 is a space for performing music.
  • the acoustic space 10 is an internal space of a structure 11.
  • the structure 11 is a structural body in a shape of a container and is formed to be hollow and movable.
  • The structure 11 includes a floor portion 11a that is approximately square in plan view, wall portions 11b that stand up from each side of the floor portion 11a, and a ceiling portion 11c arranged above the floor portion 11a so as to cover the above-described internal space.
  • the acoustic space 10 is a space defined by the floor portion 11a, the wall portions 11b, and the ceiling portion 11c of the structure 11.
  • The acoustic space 10 is not limited to the internal space of a container-shaped structural body like the above-described structure 11; any space where sound can propagate is applicable. That is, the acoustic space 10 may be, for example, an internal space or an underground space of a building such as a concert hall or an event hall.
  • the acoustic space 10 is not limited to a closed space, and may be, for example, a space on an outdoor stage and a space on a ground surface. In this case, a camera 20, a light source 30, a table 40, a plurality of speakers 50, and the like, which will be described later, are installed, for example, on the outdoor stage or on the ground surface.
  • the acoustic space creation apparatus 100 includes the camera 20, the light source 30, the table 40, the plurality of speakers 50, and an information processing apparatus 60 (see FIG. 2 ). While the camera 20, the light source 30, and the table 40 are installed inside the acoustic space 10, they may be installed outside the acoustic space 10. For example, the camera 20 and the like may be installed in a place away from the acoustic space 10.
  • The camera 20 is a device that can capture a motion of a subject (an object) H (namely, the hands of a user U) (see FIG. 3 ).
  • the camera 20 is installed in the proximity of the ceiling portion 11c in a state so as to face downward.
  • the camera 20 captures the subject H from above and images a moving image of the subject H.
  • the camera 20 is arranged in a central portion in the acoustic space 10 in plan view.
  • the camera 20 is supported in the air, for example, is supported by a support metal fitting (not illustrated) extending in a horizontal direction from the wall portions 11b.
  • The camera 20 is not limited to the above-described configuration; the direction of the camera 20, the installation place, the installation method, and the like can be set as convenient. For example, the camera 20 may be installed to face upward or in the horizontal direction, may be fixed to the ceiling portion 11c, or may be suspended from the ceiling portion 11c.
  • The camera 20 consecutively images the subject H and transmits the captured images to the information processing apparatus 60. Specifically, the camera 20 captures the subject H at a predetermined frame rate and inputs each frame to an input unit 61 (see FIG. 2 ) of the information processing apparatus 60 as image data.
  • the camera 20 is connected to the information processing apparatus 60 so as to be capable of data communication, and the image data is transmitted to the input unit 61 from the camera 20 via wired communication (for example, USB or Ethernet (registered trademark)) or wireless communication (for example, various kinds of radio wave communication or Internet).
  • the frame rate of the camera 20 is set corresponding to a speed of the motion of the subject H and performance of the information processing apparatus 60.
  • the frame rate of the camera 20 is set to 40 fps (frames per second).
  • The camera 20 captures the subject H at a frequency of 40 times per second and transmits the frame images, acquired at a rate of 40 frames per second, to the information processing apparatus 60.
  • the frame rate of the camera 20 is not limited to 40 fps and may be set to, for example, 25 fps, 30 fps, 50 fps, 60 fps, or the like.
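
As an illustration only, such a capture loop could be realized with OpenCV as below; the device index, grayscale conversion, and pacing logic are assumptions and not part of the patent.

```python
import time
import cv2  # assumed dependency: OpenCV, one possible way to realize the camera input

FRAME_RATE = 40  # fps value used in the embodiment; 25, 30, 50, or 60 also work

def capture_frames(device_index=0):
    """Yield grayscale frames at roughly FRAME_RATE, like the camera 20
    feeding the input unit 61 (sketch; real I/O needs error handling)."""
    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_FPS, FRAME_RATE)  # the driver may or may not honor this
    period = 1.0 / FRAME_RATE
    try:
        while True:
            start = time.monotonic()
            ok, frame = cap.read()
            if not ok:
                break
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pace the loop if frames arrive faster than FRAME_RATE.
            time.sleep(max(0.0, period - (time.monotonic() - start)))
    finally:
        cap.release()
```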
  • the light source 30 is an instrument, device, or the like that emits light for irradiating the subject H.
  • the light source 30 is arranged in the proximity of the ceiling portion 11c in the central portion in plan view, inside the acoustic space 10.
  • the light source 30 is, for example, an LED spotlight.
  • The spotlight is illumination that intensively illuminates a part of the acoustic space 10.
  • the light source 30 is installed in a state of facing downward and emits a light L directly downward. Thus, the light L is projected to a region R (see FIG. 3 ) in a part of an upper surface 41 of the table 40.
  • the light source 30 is installed integrally with the camera 20.
  • the light source 30 is not limited to the above-described configuration and may be, for example, an arc lamp, an incandescent bulb, a fluorescent lamp, sunlight, or the like, or may be one that uniformly illuminates a wide area including the periphery of the table 40.
  • The direction of the emitted light L, the installation place, the installation method, and the like of the light source 30 can be set as convenient; for example, the light source 30 may be installed so as to emit the light L upward or horizontally, or may be installed separately from the camera 20.
  • the table 40 is installed on a floor portion 11a below the light source 30.
  • A height of the table 40 is set, for example, to a height corresponding to the position of the waist of the user (the performer) U in an upright posture (see FIG. 3 ).
  • The table 40 is installed approximately in the center of the surface of the floor portion 11a and has a rectangular parallelepiped shape with a height of 90 cm and longitudinal and lateral lengths of 45 cm each.
  • the upper surface 41 of the table 40 has a planar shape and is arranged so as to include the entire region R irradiated by the emission light L of the light source 30.
  • the table 40 is not limited to the above-described configuration, and the shape, the size, the arrangement, and the like can be changed as necessary. In the acoustic space creation apparatus 100, it is optional whether or not to locate such a table 40.
  • the light L is emitted directly downward from the light source 30.
  • the light L emitted from the light source 30 irradiates, for example, the circular region R on the upper surface 41 of the table 40 when it proceeds downward.
  • an approximately conical space having the light source 30 as the apex and the upper surface 41 of the table 40 as the bottom surface becomes a state of being partially illuminated by the light L, in the acoustic space 10 set to be dark as a whole.
  • Such an approximately conical space illuminated by the light L of the light source 30 is referred to as a light irradiation space S (see FIG. 3 ).
  • Five speakers 50 are installed in the acoustic space 10. These five speakers 50 output the sound based on music data generated by the information processing apparatus 60. These five speakers 50, which include a first speaker 51, a second speaker 52, a third speaker 53, a fourth speaker 54, and a fifth speaker 55, respectively have sound emitting portions 51a to 55a that emit a sound in a predetermined direction.
  • the fifth speaker 55 is arranged in the central portion in plan view and on the upper side in the acoustic space 10.
  • the fifth speaker 55 is installed on an upper surface of the camera 20 and is installed with the emitting portion 55a facing upward.
  • the first speaker 51, the second speaker 52, the third speaker 53, and the fourth speaker 54 are arranged in the bottom portion and arranged so as to be equally spaced on an identical circumference centered on the fifth speaker 55, in the acoustic space 10.
  • The first to fourth speakers 51 to 54 are installed one at each of the four corners of the approximately square floor portion 11a.
  • The first to fourth speakers 51 to 54 are installed with the sound emitting portions 51a to 54a respectively directed toward the center side of the acoustic space 10 while being tilted upward by about 5 degrees to 25 degrees with respect to the horizontal direction so as to emit sounds slightly upward.
  • The plurality of speakers 50 included in the acoustic space creation apparatus 100 are not limited to the configuration of the above-described first to fifth speakers 51 to 55. That is, the number of installed speakers, the arrangement of each speaker, and the directions of the sound emitting portions inside the acoustic space 10 can be changed as necessary. Specifically, for example, the count of speakers included in the acoustic space creation apparatus 100 may be from two to four, or may be six or more. The speakers 51 to 55 constituting the five speakers 50 may all be arranged, for example, in the upper portion of the acoustic space 10, in the bottom portion, or so as to surround the central portion.
  • The fifth speaker 55 may be installed separately from the camera 20 in a state where the sound emitting portion 55a is directed downward so as to emit the sound downward, and the first to fourth speakers 51 to 54 may be installed so as to emit the sound in the horizontal direction. While the first to fifth speakers 51 to 55 all have an identical configuration, some or all of them may have different configurations.
  • the information processing apparatus 60 is configured by, for example, a computer.
  • the information processing apparatus 60 is communicatively connected with each of the first to fifth speakers 51 and the like and the camera 20 via wired or wireless communication.
  • the information processing apparatus 60 acquires video of the camera 20 and causes the first to fifth speakers 51 and the like to emit a predetermined sound. While the information processing apparatus 60 is installed outside the acoustic space 10, it may be installed inside the acoustic space 10.
  • FIG. 2 is a block diagram illustrating a main part of the acoustic space creation apparatus 100. As illustrated in FIG. 2 , the information processing apparatus 60 has the input unit 61, a processing unit 62, and a storage unit 63.
  • the input unit 61 acquires the video of the camera 20.
  • the input unit 61 acquires a plurality of frames imaged by the camera 20 as the image data.
  • the processing unit 62 performs predetermined processing based on the input image data, selects a musical pitch corresponding to the motion state of the subject H, and causes the speakers 50 to output each sound of the selected musical pitch.
  • the data processing and the like in the processing unit 62 will be described later.
  • the processing unit 62 is achieved by the configuration including, for example, a CPU.
  • the storage unit 63 stores the image data input from the camera 20, data generated by the processing unit 62 and the like.
  • the storage unit 63 also stores a program for executing the processing of the processing unit 62, a musical structure program and a spatial structure program, which will be described later.
  • the storage unit 63 is achieved by, for example, a memory, a hard disk, and the like.
  • FIG. 3 is a drawing illustrating one example of a use state of the acoustic space creation apparatus 100.
  • the user U stands on a back side of the table 40 and performs the motion of moving and stopping the hands H in a state of entering the hands H in the light irradiation space S.
  • the acoustic space creation apparatus 100 is used in this way.
  • the user U puts both the left hand and the right hand H or one hand H in the light irradiation space S, performs the motion of, for example, moving the hands H vertically or horizontally, rotating the hands H, moving the fingers, or expanding and closing the palms of the hands H, and temporarily stops the motion of the hands H.
  • Such a series of motions of the hands H is captured by the camera 20. Then, the sounds generated corresponding to the motions of the hands H are output from the speakers 50.
  • FIG. 4 is a flowchart illustrating one example of the operation of the acoustic space creation apparatus 100. In the following, the operation of the acoustic space creation apparatus 100 will be described according to the flowchart in FIG. 4 .
  • the moving image of the subject (the object) H is imaged by the camera 20 (Step S01).
  • the camera 20 transmits the captured image data to the information processing apparatus 60 (Step S02).
  • the camera 20 consecutively captures the hands H inside the light irradiation space S at the predetermined frame rate and inputs each frame as the image data to the input unit 61 of the information processing apparatus 60.
  • the processing unit 62 executes the predetermined processing and a predetermined operation based on the image data (Step S03). Then, musical sounds or the like are output from the speakers 50 (Step S04).
  • The sound output from the speakers 50 has a melody and a syllable composed of the sounds of a predetermined musical pitch selected by the processing unit 62.
  • The sound output from the speakers 50 includes a sine wave sound and a reverberation sound in addition to the sound of such a melody or a syllable.
  • By the operations of Step S01 to Step S04 of the acoustic space creation apparatus 100, a musical sound automatically composed corresponding to the motion of the subject H is performed from the speakers 50, and thus the acoustic space 10 is created.
  • FIG. 5 is a flowchart illustrating the processing of the processing unit 62 when the sound of a predetermined musical pitch is output.
  • the processing of the processing unit 62 is, for example, automatically executed based on the program stored in the storage unit 63.
  • the processing unit 62 acquires the frame image (Step S11).
  • the processing unit 62 compares a light amount of the image data in the frame with the light amount of the image data in a frame acquired immediately before the frame (Step S12). While, at Step S12, the light amount of the whole region in the frame image is compared, instead of this, the light amount of a part of a predetermined region in the frame image may be compared.
  • Next, it is determined whether or not the difference in the light amount is equal to or more than a threshold (Step S13).
  • the processing unit 62 determines whether the difference between the light amount of the image data in the frame and the light amount of the image data in the frame acquired immediately before the frame is equal to or more than the threshold or less than the threshold.
  • the threshold is a value of a predetermined light amount, is set preliminarily, and is stored in the storage unit 63.
  • When the difference is equal to or more than the threshold, the processing unit 62 determines that the subject H is in a "MOTION" (movement) state (Step S14). Then, when the subject H is determined to be in a motion state, the processing unit 62 generates one motion detection signal (Step S15).
  • When the difference is less than the threshold, the processing unit 62 determines that the subject H is in a "STOP" (stop) state (Step S24). When the subject H is determined to be in a stop state, the processing unit 62 may generate one rest/reverberation signal (Step S25).
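
The following sketch shows one plausible reading of Steps S11 to S15 and S24 to S25, taking the light amount as the mean pixel intensity of a frame. The threshold value is invented for illustration; the patent only states that it is preset and stored in the storage unit 63.

```python
import numpy as np

THRESHOLD = 4.0  # hypothetical value; the patent only says the threshold is preset

def light_amount(frame):
    """Light amount of a frame, here the mean pixel intensity of the
    whole image (Step S12 may instead use only a sub-region)."""
    return float(np.mean(frame))

def classify_frame(frame, prev_amount):
    """Steps S12 to S15 / S24 to S25: compare with the previous frame's
    light amount and emit a motion or rest/reverberation signal."""
    amount = light_amount(frame)
    if prev_amount is None:
        return amount, None                       # first frame: nothing to compare
    if abs(amount - prev_amount) >= THRESHOLD:
        return amount, "motion_detection_signal"  # Steps S14/S15
    return amount, "rest_reverberation_signal"    # Steps S24/S25
```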
  • the processing unit 62 selects one musical pitch of an output sound based on the preliminarily set and stored musical structure program (Step S16).
  • At Step S16, based on the musical structure program, the musical pitch of the sound to be output is selected, for example, from a file having the data indicated in FIG. 6.
  • For example, one musical pitch is selected from the data (A1 to A42) of the 42 musical pitches described in FIG. 6.
  • The musical pitch at Step S16 may instead be selected from, for example, a recording material such as an acoustic noise, rather than from the data (A1 to A42) of the 42 musical pitches described in FIG. 6.
  • At Step S16, a plurality of musical pitches among the data (A1 to A42) of the 42 musical pitches described in FIG. 6 may be selected at a time.
  • Next, the processing unit 62 allocates a speaker to the sound of the selected musical pitch by the preliminarily set and stored spatial structure program (Step S17).
  • At Step S17, based on the spatial structure program, any one of the five speakers 51 to 55 is allocated to each sound of the selected musical pitch.
  • When a plurality of musical pitches are selected, any one of the five speakers 51 to 55 is allocated to each of them.
  • FIG. 6 is a drawing illustrating the audio files of the musical pitches.
  • In FIG. 6, data regarding, for example, the 42 musical pitches are illustrated.
  • A1 to A42 are serial numbers of the 42 musical pitches. These 42 musical pitches are selectable musical pitches at Step S16.
  • The Musical Instrument Digital Interface (MIDI) number is also called a note number and is a numerical value representing a sound pitch and a sound range in MIDI.
  • a simultaneous sound number is a count of sounds that can be output at a time and is also a value corresponding to a count of layers.
  • The "appearance frequency of each musical pitch when the motion detection signal occurs 127 times" indicates how many times each musical pitch is selected per 127 occurrences of the motion detection signal.
  • The "output frequency of SP1/SP2/SP3/SP4/SP5 when the motion detection signal occurs 20 times" indicates how many times each of the first speaker 51 (SP1), the second speaker 52 (SP2), the third speaker 53 (SP3), the fourth speaker 54 (SP4), and the fifth speaker 55 (SP5) is allocated per 20 occurrences of the motion detection signal, that is, per 20 selections of the sound of the predetermined musical pitch.
  • Numerical values regarding the appearance frequency of each musical pitch and an allocation frequency to each speaker of the sound of each musical pitch are preliminarily set.
  • selection of the musical pitch and allocation to the speakers 50 are automatically performed according to the numerical values related to these appearance frequency and allocation frequency.
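
FIG. 6 is not reproduced here, but the description suggests a weighted-random scheme. The sketch below invents two table rows to show how the preset appearance and allocation frequencies could drive Steps S16 and S17; the numbers are placeholders, not the patent's values.

```python
import random

# Hypothetical rows in the style of FIG. 6: serial number, MIDI note,
# appearance frequency per 127 motion detection signals, and output
# frequencies for SP1..SP5 per 20 selections. All values are invented.
PITCH_TABLE = [
    {"id": "A1", "midi": 60, "appearance": 9, "speakers": [6, 5, 4, 3, 2]},
    {"id": "A2", "midi": 64, "appearance": 5, "speakers": [2, 2, 2, 2, 12]},
    # ... rows A3 to A42
]

def select_pitch():
    """Step S16: pick a pitch with probability proportional to its
    preset appearance frequency."""
    weights = [row["appearance"] for row in PITCH_TABLE]
    return random.choices(PITCH_TABLE, weights=weights)[0]

def allocate_speaker(row):
    """Step S17: pick one of SP1..SP5 (returned as 1..5) with probability
    proportional to the preset output frequency."""
    return random.choices(range(1, 6), weights=row["speakers"])[0]
```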
  • the processing unit 62 causes the allocated speakers 50 to output the sound of the predetermined musical pitch (Step S18).
  • the sound of the selected musical pitch is output from the allocated speakers 50.
  • The sound may be output from the speakers 50 so as to move up, down, left, and right inside the acoustic space 10, for example, by changing the speaker allocated to each sound constituting a continuous sound (for example, allocating a different speaker to each sound constituting the above-described continuous sound).
  • FIG. 7 is a drawing for describing one example of an operation of the processing unit 62.
  • the sound of the selected musical pitch is output from the speakers 50 based on the musical structure program.
  • When the subject H is determined to be in a stop state, the speakers 50 temporarily enter a rest state where they do not output the sound, and as a result, the acoustic space 10 becomes silent or emits only the reverberation. Accordingly, when the subject H continuously keeps a motion state, the sound of the predetermined musical pitch is continuously output, and thus a texture-like sound is generated.
  • the processing unit 62 causes the speakers 50 to generate a sine wave under a predetermined condition.
  • This sine wave is output in a synthesized state with the waveform of the sound of the above-described predetermined musical pitch.
  • In the following, the processing of the processing unit 62 related to the generation of such a sine wave will be described.
  • FIG. 8 is a flowchart illustrating generation processing of the sine wave in the processing unit 62.
  • the processing unit 62 executes the following processing. As illustrated in FIG. 8 , the processing unit 62 determines whether or not the motion detection signal has occurred consecutively a first predetermined number of times during a first unit time (Step S31). That is, the processing unit 62 determines whether or not a motion state has occurred consecutively the first predetermined number of times during the first unit time. When it is determined to have occurred (when "YES" at Step S31), the processing unit 62 determines whether or not the number of times of the consecutive occurrences for the first predetermined number of times at Step S31 has reached a second predetermined number of times (Step S32).
  • the processing unit 62 determines whether or not the motion detection signal has occurred after having been in a stop state during a second unit time or greater (Step S33). Then, when it is determined to be in a motion state after having been in a stop state during a second unit time or greater (when "YES” at Step S33), furthermore, when the specific speaker 51 or the like among the plurality of speakers 50 is allocated to the predetermined sound (when "YES” at Step S34), the processing unit 62 causes the specific speaker 51 or the like to generate the sine wave (Step S35).
  • the determination of the stop state may be recognized by non-occurrence of the motion detection signal or may be recognized by detecting the rest/reverberation signal.
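
Because the flowcharts describe counting conditions rather than code, the class below is only one interpretation of Steps S31 to S33, pre-set with the FIG. 9 example values described next (180 ms, 1 to 30 times, 500 ms, 10 times); the windowing rule and all names are assumptions.

```python
class BurstTrigger:
    """One reading of Steps S31-S33: a 'burst' is a run of motion
    detection signals falling inside one unit-time window."""

    def __init__(self, unit_ms=180, burst_min=1, burst_max=30,
                 bursts_needed=10, stop_ms=500):
        self.unit_ms = unit_ms
        self.burst_min, self.burst_max = burst_min, burst_max
        self.bursts_needed = bursts_needed
        self.stop_ms = stop_ms
        self.window_start = None   # start of the current unit-time window
        self.count_in_window = 0   # motion signals inside that window
        self.bursts = 0            # completed bursts (Step S32 counter)
        self.last_signal = None    # timestamp of the previous motion signal

    def on_motion(self, now_ms):
        """Feed one motion detection signal; return True when the burst
        count is reached and this signal ends a long stop."""
        came_after_stop = (self.last_signal is not None and
                           now_ms - self.last_signal >= self.stop_ms)
        # Step S31: close the window when the unit time has elapsed.
        if self.window_start is None or now_ms - self.window_start >= self.unit_ms:
            if self.burst_min <= self.count_in_window <= self.burst_max:
                self.bursts += 1   # the closed window counts as one burst
            self.window_start, self.count_in_window = now_ms, 0
        self.count_in_window += 1
        self.last_signal = now_ms
        # Steps S32 + S33: enough bursts, and a motion after a long stop.
        return self.bursts >= self.bursts_needed and came_after_stop
```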
  • FIG. 9 is a drawing for describing one example of the operation of the processing unit 62.
  • The first unit time is preliminarily set to, for example, 180 ms; the first predetermined number of times to, for example, 1 time to 30 times; the second unit time to, for example, 500 ms; and the second predetermined number of times to, for example, 10 times.
  • These setting values are stored in the storage unit 63.
  • As illustrated in FIG. 9, when the above conditions are satisfied, the processing unit 62 allocates, for example, the fifth speaker 55 and causes the fifth speaker 55 to generate a first sine wave for 60 seconds to 90 seconds.
  • the processing unit 62 causes a second sine wave to be generated, in addition to the first sine wave.
  • The second sine wave is a sine wave in which a third sine wave and a fourth sine wave, described later, are synthesized with a 10-second sound volume curve.
  • the third sine wave is a sine wave with a frequency two octaves higher than the musical pitch selected by the occurrence of the motion detection signal at Step S33.
  • the fourth sine wave is a sine wave where a frequency of 0.5 Hz to 11 Hz is added to the third sine wave.
  • the second sine wave is output from, for example, the fifth speaker 55 for 10 seconds.
  • When the numerical value of the fourth sine wave is changed, it reaches the target value by using a time interpolation of 1500 ms to 4000 ms.
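
Numerically, "two octaves higher" means four times the frequency of the selected pitch, and adding 0.5 Hz to 11 Hz to that frequency produces a slow beat when the two sines are mixed. The sketch below assumes a sample rate, a 6 Hz offset, and a half-sine volume curve; none of these specifics are stated in the patent.

```python
import numpy as np

SAMPLE_RATE = 48000  # assumed; the patent does not specify one

def second_sine_wave(pitch_hz, offset_hz=6.0, duration_s=10.0):
    """Sketch of the second sine wave: the third sine wave (two octaves
    above the selected pitch, i.e. 4x its frequency) mixed with the
    fourth sine wave (the third shifted by 0.5-11 Hz), shaped by a
    10-second volume curve."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    f3 = pitch_hz * 4.0                                # two octaves up
    third = np.sin(2 * np.pi * f3 * t)
    fourth = np.sin(2 * np.pi * (f3 + offset_hz) * t)  # adds a slow beat
    envelope = np.sin(np.pi * t / duration_s)          # assumed fade in/out curve
    return 0.5 * (third + fourth) * envelope
```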
  • the processing unit 62 causes the speakers 50 to output a first reverberation sound, under a predetermined condition.
  • the processing of the processing unit 62 related to the output of the first reverberation sound will be described.
  • FIG. 10 is a flowchart illustrating the output processing of the first reverberation sound.
  • The processing unit 62 executes the following processing. As illustrated in FIG. 10 , the processing unit 62 determines whether or not the motion detection signal has consecutively occurred the first predetermined number of times during the first unit time (Step S41). When it is determined to have occurred (when "YES" at Step S41), the processing unit 62 determines whether or not the number of times of the consecutive occurrences for the first predetermined number of times at Step S41 has reached a third predetermined number of times (Step S42).
  • the processing unit 62 determines whether or not the motion detection signal has occurred after having been in a stop state during the second unit time or greater (Step S43). Then, when it is determined to be in a motion state after having been in a stop state during the second unit time or greater (when "YES” at Step S43), furthermore, when the specific speaker among the plurality of speakers 50 is allocated to the selected sound (when "YES” at Step S44), the processing unit 62 causes the specific speaker 50 to output the first reverberation sound (Step S45).
  • FIG. 11 is a drawing for describing one example of the operation of the processing unit 62.
  • the first unit time, the first predetermined number of times, and the second unit time are preliminarily set to, for example, the above-described values.
  • the third predetermined number of times is set to, for example, 3 times to 5 times.
  • The processing unit 62 counts the cycles in which the motion detection signal consecutively occurs 1 time to 30 times during 180 ms. When this count reaches 3 times to 5 times and the processing unit 62 detects the motion detection signal after having been in a stop state for 500 ms or greater, the processing unit 62 allocates, for example, the fifth speaker 55 and causes the fifth speaker 55 to output the first reverberation sound for 60 seconds to 90 seconds.
  • the first reverberation sound is output from the fifth speaker 55 by being synthesized with the sound of the above-described musical pitch.
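
Since Steps S41 to S45 repeat the sine wave trigger pattern with the third predetermined number of times (3 to 5) in place of the second, the hypothetical BurstTrigger sketched earlier could simply be re-parameterized:

```python
# Re-using the hypothetical BurstTrigger from the sine wave sketch.
first_reverb_trigger = BurstTrigger(unit_ms=180, burst_min=1, burst_max=30,
                                    bursts_needed=3, stop_ms=500)  # 3-5 per FIG. 11

def on_motion_signal(now_ms, allocated_speaker, specific_speaker):
    """Steps S43-S45: fire only when the specific speaker holds the sound."""
    if first_reverb_trigger.on_motion(now_ms) and allocated_speaker is specific_speaker:
        # 60 to 90 seconds in the embodiment; the speaker API is hypothetical.
        specific_speaker.play_first_reverberation(duration_s=75.0)
```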
  • the processing unit 62 causes the speakers 50 to output the second reverberation sound under a predetermined condition.
  • the processing of the processing unit 62 related to the output of the second reverberation sound will be described.
  • FIG. 12 is a flowchart illustrating the output processing of the second reverberation sound.
  • The processing unit 62 executes the following processing. As illustrated in FIG. 12 , the processing unit 62 determines whether or not the motion detection signal has consecutively occurred a fourth predetermined number of times during the first unit time (Step S51). When it is determined to have occurred (when "YES" at Step S51) and, furthermore, the specific speakers among the plurality of speakers 50 are allocated to the selected sound (when "YES" at Step S52), the processing unit 62 causes the specific speakers 50 to output the second reverberation sound (Step S53).
  • FIG. 13 is a drawing for describing one example of the operation of the processing unit 62.
  • The first unit time is preliminarily set to, for example, the above-described value.
  • the fourth predetermined number of times is set to, for example, 30 times. These setting values are stored in the storage unit 63.
  • When the condition is satisfied, the processing unit 62 causes the speakers 50 to output the second reverberation sound. The second reverberation sound is synthesized, for example, with the sounds of the musical pitches A1 to A18 and A70 to A79 among the data of the 42 musical pitches described in FIG. 6, and is output for 3 seconds to 10 seconds.
  • the processing unit 62 causes the second reverberation sound to be output from, for example, the first speaker 51 and the second speaker 52 and also causes the second reverberation sound to be output from the third speaker 53 and the fourth speaker 54 after one second of acoustic movement.
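
One way to realize the described one-second acoustic movement between speaker pairs is a delayed callback; the speaker objects and the play method below are hypothetical.

```python
import threading

def play_second_reverberation(speakers, duration_s=5.0):
    """Sketch of the FIG. 13 behavior: the second reverberation sound
    (3-10 s) starts on the first and second speakers, then also sounds
    from the third and fourth speakers one second later. `speakers` is
    a hypothetical list indexed SP1..SP4 (0-based)."""
    for sp in (speakers[0], speakers[1]):        # SP1 and SP2 first
        sp.play_second_reverberation(duration_s)

    def rear_pair():
        for sp in (speakers[2], speakers[3]):    # SP3 and SP4 after 1 s
            sp.play_second_reverberation(duration_s)

    threading.Timer(1.0, rear_pair).start()      # one second of acoustic movement
```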
  • the processing unit 62 causes the speakers 50 to output the sound of the predetermined musical pitch by the processing of Step S11 to Step S18.
  • the processing unit 62 causes the speakers 50 to generate the sine wave by the processing of Step S31 to Step S35. Further, the processing unit 62 causes the speakers 50 to output the first reverberation sound by the processing of Step S41 to Step S45. Furthermore, the processing unit 62 causes the speakers 50 to output the second reverberation sound by the processing of Step S51 to Step S53.
  • The acoustic space creation apparatus 100 creates the acoustic space 10 via the musical structure program and the spatial structure program implemented in the storage unit 63. The acoustic space creation apparatus 100 determines each of a motion state and a stop state based on the motion of the subject H, selects the sound corresponding to the frequency, the cycle, the number of occurrences, and the like of each state, and outputs the selected sound as the musical sound.
  • the acoustic space creation apparatus 100 can easily perform the musical sound corresponding to the aspect of the motion of the object, and thus, can easily provide the acoustic space 10 performing such musical sound.
  • The acoustic space creation apparatus 100 can perform the musical sound and the acoustics of the new syllable generated based on the repetition of motion and stop of the object. Under the predetermined conditions, the acoustic space 10 that performs a unique musical sound, including the sound generated by synthesizing the sine waves, the first reverberation sound, the second reverberation sound, and the like, can be provided.
  • In the acoustic space creation apparatus 100, since the subject H in motion is irradiated with the light L, a user U can perform or appreciate the improvisational music while watching the motion of the subject H illuminated by the light L. This allows the acoustic space creation apparatus 100 to provide the acoustic space 10 with a new entertainment feature where the user U can enjoy the improvisational music together with a visual change.
  • Since the acoustic space creation apparatus 100 allows the user U to feel or listen to music related to the body and breathing, such as physical expression, yoga, and welfare applications, generated from the motion state and the stop state, the acoustic space creation apparatus 100 not only lets the user U simply feel or listen to the music but can also contribute to improving the spirit of the user U.
  • a modification of the acoustic space creation apparatus 100 will be described. While the above-described acoustic space creation apparatus 100 has a configuration including the camera 20 capturing the subject and the light source 30 irradiating the subject H, it may have a configuration including a sensor capable of detecting the state of the object instead of both the camera 20 and the light source 30. Thus, in the following, a configuration of an acoustic space creation apparatus according to the modification will be specifically described.
  • As the sensor capable of detecting the state of the object, for example, a distance sensor capable of detecting a distance to the object is applicable.
  • The camera 20 is also one of the sensors capable of detecting the motion of the object.
  • The operations of the input unit and the processing unit of the acoustic space creation apparatus according to the modification are similar to those of the input unit 61 and the processing unit 62 described above.
  • the input unit according to the modification consecutively acquires, for example, a signal corresponding to a distance from the sensor to the object from the sensor.
  • the processing unit of the modification compares the acquired signal with the signal acquired immediately before and determines that the object is in a motion state when the difference is equal to or more than a threshold.
  • the threshold is a value related to the signal and is preliminarily set.
  • the subsequent processing of the processing unit is similar to that of the above-described processing unit 62.
  • The modification includes the sensor capable of detecting the state of the object, the input unit acquiring the signal from the sensor, the processing unit selecting a sound to be generated corresponding to the above-described signal, and the plurality of speakers 50 outputting the sound selected by the processing unit.
  • the processing unit compares the signal with a signal immediately before the signal, determines whether the object is in a motion state or a stop state based on the predetermined threshold, which is described above, for the signal, generates the motion detection signal when it is determined to be in a motion state, selects a musical pitch to be output in accordance with the pre-stored musical structure program, allocates one of the plurality of speakers 50 to each sound of the selected musical pitch in accordance with the pre-stored spatial structure program, and causes the allocated speaker to output the sound.
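
In this modification the per-frame light amount is replaced by any scalar sensor reading, so the comparison collapses to a one-line difference test. The threshold value and units below are assumptions.

```python
SIGNAL_THRESHOLD = 5.0  # assumed units (e.g. millimetres of distance change)

def classify_signal(signal, prev_signal, threshold=SIGNAL_THRESHOLD):
    """Modification: the same comparison as the camera version, applied
    to a scalar sensor reading (e.g. a distance) instead of a frame's
    light amount. Returns 'motion', 'stop', or None for the first read."""
    if prev_signal is None:
        return None
    return "motion" if abs(signal - prev_signal) >= threshold else "stop"
```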
  • The other configurations of the acoustic space creation apparatus according to the modification are similar to those of the acoustic space creation apparatus 100 according to the above-described embodiment.
  • the processing unit of the acoustic space creation apparatus causes the plurality of speakers 50 to output the sine wave, the first reverberation sound, and the second reverberation sound corresponding to the frequency, the cycle, the number of times, and the like of each state, in addition to the sound of the above-described musical pitch.
  • the acoustic space creation apparatus can be achieved with a simpler configuration. Consequently, for example, it is also possible to constitute the acoustic space creation apparatus with a compact PC with a video camera function. As a result, it becomes easy to apply the acoustic space creation apparatus to a music teaching material for children and the like.
  • The acoustic space creation apparatus 100 can also have the following configuration. Specifically, for example, by determining a state of stepping on a grid where a sensor is mounted and a state of taking a foot off the grid, or by determining the ON/OFF of a switch of a buzzer, the processing unit may determine whether the object is in a motion state or a stop state based on a predetermined threshold for the signal.
  • For example, signals that differ between when the foot is in contact with the grid and when the foot is away from the grid are output from the sensor to the processing unit; alternatively, by using a sensor capable of detecting an ON state of the buzzer, signals that differ between when the switch of the buzzer is ON and when it is OFF (for example, 1 for ON and 0 for OFF) are output from the sensor to the processing unit.
  • the processing unit compares the signal acquired from the sensor with the immediately preceding signal and determines whether or not the difference is equal to or more than a threshold (for example, whether or not it is equal to or more than 1).
  • the processing unit determines whether the object is in a motion state (a state with change in motion) or a stop state (a state with no change in motion) (for example, when the above difference is 1, it is determined to be in motion state, and when the above difference is 0, it is determined to be a stop state). Then, when the object is determined to be in a motion state, the processing unit generates the motion detection signal, selects the musical pitch to be output in accordance with the pre-stored musical structure program, allocates one of a plurality of speakers to each sound of the selected musical pitch in accordance with the pre-stored spatial structure program, and causes the allocated speaker to output the sound.
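
With the hypothetical classify_signal sketch above, this grid or buzzer case reduces to edge detection on a 0/1 signal with a threshold of 1:

```python
# 1 for ON (foot on the grid / buzzer switch pressed), 0 for OFF, as in the text.
readings = [0, 1, 1, 0, 0, 1]
prev = None
for r in readings:
    state = classify_signal(r, prev, threshold=1)
    # 0->1 or 1->0 gives |diff| = 1 -> "motion"; a repeated value -> "stop".
    prev = r
```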
  • the processing unit causes the plurality of speakers 50 to output the sine wave, the first reverberation sound, and the second reverberation sound corresponding to the frequency, the cycle, the number of times, and the like of each state.
  • The processing unit may also determine whether the object is in a motion state or a stop state by determining the presence/absence of an attack (namely, a resolutely strong sound, a start of a sound, or a rise of a sound) based on a predetermined threshold for the signal.
  • In this case, the acoustic space creation apparatus 100 may include a microphone as a voice recognition device for picking up the voice of the object, and the processing unit may compare the voice signal acquired from the microphone as a sensor with the voice signal acquired immediately before, determine whether or not the difference (for example, the difference in sound volume) is equal to or more than a predetermined threshold, and thereby determine whether the object is in a motion state or a stop state.
  • the object the motion of which is detected by a sensor such as a camera is not limited to the hands H and, for example, may be feet or a whole body of a person, may be a thing and the like that the person holds (for example, a bar), and may be a shadow of a leaf, a wave, and the like.
  • The plurality of speakers 50 that the acoustic space creation apparatus 100 includes may be configured by a plurality of stationary speakers 51 and the like as in the embodiment, or may be headphones, earphones for both ears, or the like including a pair of speakers that directly output the sound to both ears of the user U. While the acoustic space 10 that the acoustic space creation apparatus 100 creates is an actual space, it may be a virtual space. That is, for example, when headphones or earphones for both ears are applied as the speakers 50 of the acoustic space creation apparatus 100, the acoustic space creation apparatus 100 provides a virtual acoustic space to the user U wearing the speakers 50.
  • The acoustic space creation apparatus of the present invention may generate the sound corresponding to, for example, processing such as determining a motion state (motion) and a stop state (stop) for the motion of an object at a specific fictitious place in virtual reality (VR) or in a game (on an OpenGL (registered trademark) drawing and the like).
  • the acoustic space creation apparatus 100 may include the storage unit 63 storing a plurality of musical structure programs having algorithms where respective different conditions (a threshold and the like) are set and may execute the processing at Step S16 described above by appropriately selecting from the plurality of musical structure programs or simultaneously executing the plurality of musical structure programs.
  • The plurality of musical structure programs are stored in the storage unit 63, for example, in a state of being divided into respective different layers. This ensures appropriate selection of the musical structure program corresponding to the usage environment and the number of implementation locations of the acoustic space creation apparatus 100, and facilitates improvement and addition of the musical structure programs.
  • The acoustic space creation apparatus of the present invention may be achieved by, for example, a configuration of a system where multiple speakers 51 and the like are randomly installed at branches of a tree in the natural environment; a configuration of a product where a plurality of ultra-compact speakers 51 and the like and sensors are set on a flat surface (for example, a picture, a book, or a wall portion (for example, a square wall portion with a length and a width of 10 cm each, or 5 m each)); or a configuration of an environmental system where sensors and the speakers 51 and the like are implemented in a plurality of street lights that also serve as the light source 30.
  • the acoustic space creation apparatus of the present invention may be one that constitutes a complex improvisational ensemble device/musical instrument.
  • the acoustic space creation apparatus in this case may have a configuration, for example, that performs the processing such as, for the 42 numbers (A1 to A42) to which the respective musical pitch files described in FIG. 6 are allocated, changing/allocating to other recording material, or associating the operation of the other musical structure program (layer) implemented in the storage unit 63.
  • the acoustic space creation apparatus may have a configuration that performs the processing such as, when the musical pitch number "A2" is selected, playing a noise recording material and a sound collection/output of a microphone in real time inside the acoustic space, and, after that, when the musical pitch number "A2" or other number is selected once or multiple times, turning off the sound collection/output of the microphone, and, simultaneously, causing the other musical structure program/layer to operate.
  • While the acoustic space creation apparatus 100 of the above-described embodiment includes the plurality of speakers 50, the count of the speakers 51 and the like may be one.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

Provided is an apparatus that can more easily perform an improvisational performance based on a motion state of a user. An acoustic space creation apparatus includes a camera, a light source, an input unit, a processing unit, and a plurality of speakers. The camera captures an object. The light source irradiates the object. The input unit acquires images from the camera. The processing unit selects a sound to be generated corresponding to a frame of the video. The plurality of speakers output a sound selected by the processing unit. The processing unit compares a light amount in the frame with a light amount in a frame immediately before the frame and determines whether the object is in a motion state or a stop state based on a predetermined threshold for light amount. When the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound. When the motion state consecutively occurs a first predetermined number of times during a first unit time, the number of times of the consecutive occurrences for the first predetermined number of times reaches a second predetermined number of times, the object is determined to be in a motion state after having been in a stop state during a second unit time or greater, and a specific speaker among the plurality of speakers is allocated to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program. When the motion state consecutively occurs the first predetermined number of times during the first unit time, the number of times of the consecutive occurrences for the first predetermined number of times reaches a third predetermined number of times, the object is determined to be in a motion state after having been in a stop state during the second unit time or greater, and the specific speaker is allocated to the selected sound, the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program. When the motion state consecutively occurs a fourth predetermined number of times during the first unit time, the processing unit causes the speaker to output a second reverberation sound in a predetermined order in accordance with the musical structure program.

Description

    TECHNICAL FIELD
  • The present invention relates to an acoustic space creation apparatus.
  • BACKGROUND ART
  • There is known a musical sound generating apparatus that can automatically create musical sound data according to a motion of a person when the person is moving in front of a camera, and play the musical sound data. For example, Japanese Patent No. 3643829 discloses a configuration of a musical sound generating apparatus that reacts to the motion of a person as a subject of a camera, creates musical sound data based on a position and a variation amount of the motion, and outputs it as a sound.
    According to the musical sound generating apparatus, a performer does not need to acquire the knowledge and skills to operate an apparatus that automatically plays music or to play a musical instrument, and the performer can easily give an improvisational performance simply by performing a motion in front of a camera.
  • CITATION LIST
  • PATENT LITERATURE
  • Patent Literature 1: Japanese Patent No. 3643829
  • SUMMARY OF THE INVENTION PROBLEMS TO BE SOLVED
  • As described above, there is known an apparatus that outputs a sound reacting to a motion of a subject. If an apparatus can recognize the motion of the subject from a viewpoint different from that of the prior art and play a new musical sound generated therefrom, the musical sound generated corresponding to the motion of the subject can be enjoyed even further. In view of this point, a main object of the present invention is to provide an apparatus that recognizes a state of a motion of an object from a viewpoint different from that of the prior art and plays a new musical sound generated therefrom, such that the musical sound generated corresponding to the state of the motion of the object (for example, a motion of a performer) can be enjoyed even further.
  • In the musical sound generating apparatus described in Japanese Patent No. 3643829, it is necessary both to specify a position of the motion on the subject of the camera and to calculate the variation amount of the motion of the subject. Accordingly, the musical sound generating apparatus may bear a large data-processing burden during a performance. In view of this point, it is also an object of the present invention to provide an apparatus that can more easily perform an improvisational performance based on a motion state of an object.
  • SOLUTIONS TO THE PROBLEMS
  • The present invention focuses on the simple motions of "moving" and "stopping," and is conceived for the purpose of converting temporal senses/sensations (intervals) of various kinds of motions, such as physical expression, breathing methods, "pauses," and the ON/OFF of things, into a sound or a syllable. The present invention then provides an apparatus that mainly focuses on the repetition of "MOTION (movement)" and "STOP" of an object and, based on the repetition, generates and organizes a melody and a syllable according to a specific musical scale, as well as an acoustic space.
  • Thus, the present invention provides an acoustic space creation apparatus that includes a camera, a light source, an input unit, a processing unit, and a plurality of speakers. The camera captures an object. The light source irradiates the object. The input unit acquires video from the camera. The processing unit selects a sound to be generated corresponding to a frame of the video. The plurality of speakers output a sound selected by the processing unit. The processing unit compares a light amount in the frame with a light amount in a frame immediately before the frame and determines whether the object is in a motion state or a stop state based on a predetermined threshold for light amount. When the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound. When the motion state consecutively occurs a first predetermined number of times during a first unit time, the number of times of the consecutive occurrences for the first predetermined number of times reaches a second predetermined number of times, the object is determined to be in a motion state after having been in a stop state during a second unit time or greater, and a specific speaker among the plurality of speakers is allocated to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program. When the motion state consecutively occurs the first predetermined number of times during the first unit time, the number of times of the consecutive occurrences for the first predetermined number of times reaches a third predetermined number of times, the object is determined to be in a motion state after having been in a stop state during the second unit time or greater, and the specific speaker is allocated to the selected sound, the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program. When the motion state consecutively occurs a fourth predetermined number of times during the first unit time, the processing unit causes the speaker to output a second reverberation sound in a predetermined order in accordance with the musical structure program.
  • The present invention also provides an acoustic space creation apparatus that includes a sensor, an input unit, a processing unit, and a plurality of speakers. The sensor is capable of detecting a state of an object. The input unit acquires signals from the sensor. The processing unit selects a sound to be generated corresponding to the signal. The plurality of speakers output a sound selected by the processing unit. The processing unit compares the signal with a signal immediately before the signal and determines whether the object is in a motion state or a stop state based on a predetermined threshold for signal. When the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound. When the motion state consecutively occurs a first predetermined number of times during a first unit time, the number of times of the consecutive occurrences for the first predetermined number of times reaches a second predetermined number of times, the object is determined to be in a motion state after having been in a stop state during a second unit time or greater, and a specific speaker among the plurality of speakers is allocated to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program. When the motion state consecutively occurs the first predetermined number of times during the first unit time, the number of times of the consecutive occurrences for the first predetermined number of times reaches a third predetermined number of times, the object is determined to be in a motion state after having been in a stop state during the second unit time or greater, and the specific speaker is allocated to the selected sound, the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program. When the motion state consecutively occurs a fourth predetermined number of times during the first unit time, the processing unit causes the speaker to output a second reverberation sound in a predetermined order in accordance with the musical structure program.
  • EFFECTS OF THE INVENTION
  • The acoustic space creation apparatus of the present invention determines each of a motion state and a stop state based on the motion of the object (for example, a subject), selects a sound in accordance with, for example, a frequency, a cycle, and/or the number of times of each state, and outputs the selected sound as a musical sound. Consequently, with the present invention, it is possible to easily play a musical sound corresponding to an aspect of the motion of the object and to easily provide an acoustic space in which such a musical sound is played. Furthermore, the present invention allows playing the musical sound and/or the acoustics of a new syllable generated based on the repetition of motion and stop of the object.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • FIGS. 1A and 1B are schematic views illustrating one example of an acoustic space creation apparatus according to an embodiment, FIG. 1A is a drawing viewed from a front side, and FIG. 1B is a drawing viewed from above.
    • FIG. 2 is a block diagram illustrating a main part of the acoustic space creation apparatus in FIGS. 1A and 1B.
    • FIG. 3 is a drawing illustrating one example of a use state of the acoustic space creation apparatus in FIGS. 1A and 1B.
    • FIG. 4 is a flowchart illustrating one example of an operation of the acoustic space creation apparatus in FIGS. 1A and 1B.
    • FIG. 5 is a flowchart illustrating processing of a processing unit when a sound of a predetermined musical pitch is output.
    • FIG. 6 is a drawing illustrating an audio file of musical pitches.
    • FIG. 7 is a drawing for describing one example of the operation of the processing unit.
    • FIG. 8 is a flowchart illustrating generation processing of a sine wave.
    • FIG. 9 is a drawing for describing one example of the operation of the processing unit.
    • FIG. 10 is a flowchart illustrating output processing of a first reverberation sound.
    • FIG. 11 is a drawing for describing one example of the operation of the processing unit.
    • FIG. 12 is a flowchart illustrating the output processing of a second reverberation sound.
    • FIG. 13 is a drawing for describing one example of the operation of the processing unit.
    DESCRIPTION OF THE EMBODIMENTS
  • The following describes an embodiment of the present invention with reference to the accompanying drawings. However, the present invention is not limited to this embodiment. In the drawings, to describe the embodiment, the scale is changed as appropriate, for example, by enlarging or emphasizing a portion. FIGS. 1A and 1B are drawings illustrating an outline of one example of a configuration of an acoustic space creation apparatus 100 according to the embodiment; FIG. 1A is a drawing viewed from a front side, and FIG. 1B is a drawing viewed from above. FIG. 1A illustrates a state where a wall portion 11b on the front side is shown as transparent, and FIG. 1B illustrates a state where a ceiling portion 11c is shown as transparent.
  • The acoustic space creation apparatus 100 according to the embodiment is an apparatus that generates a melody and a syllable by a specific musical scale and creates an acoustic space 10 performing the melody and the syllable. In the embodiment, the acoustic space 10 is a space for performing music.
  • In the embodiment, the acoustic space 10 is an internal space of a structure 11. The structure 11 is a container-shaped structural body and is formed to be hollow and movable. For example, as illustrated in FIGS. 1A and 1B, the structure 11 includes a floor portion 11a that is approximately square in plan view, wall portions 11b standing up from each side of the floor portion 11a, and a ceiling portion 11c arranged above the floor portion 11a so as to cover the internal space. The acoustic space 10 is a space defined by the floor portion 11a, the wall portions 11b, and the ceiling portion 11c of the structure 11. The acoustic space 10 is not limited to the internal space of a container-shaped structural body like the structure 11; any space where sound can propagate is applicable. That is, the acoustic space 10 may be, for example, an internal space or an underground space of a building such as a concert hall or an event hall. The acoustic space 10 is not limited to a closed space and may be, for example, a space on an outdoor stage or on a ground surface. In this case, a camera 20, a light source 30, a table 40, a plurality of speakers 50, and the like, which will be described later, are installed, for example, on the outdoor stage or on the ground surface.
  • The acoustic space creation apparatus 100 includes the camera 20, the light source 30, the table 40, the plurality of speakers 50, and an information processing apparatus 60 (see FIG. 2). While the camera 20, the light source 30, and the table 40 are installed inside the acoustic space 10, they may be installed outside the acoustic space 10. For example, the camera 20 and the like may be installed in a place away from the acoustic space 10.
  • The camera 20 is a device that can capture a motion of a subject (an object) H, namely, the hands of a user U (see FIG. 3). The camera 20 is installed near the ceiling portion 11c so as to face downward. The camera 20 captures the subject H from above and images a moving image of the subject H. The camera 20 is arranged in a central portion of the acoustic space 10 in plan view. The camera 20 is supported in the air, for example, by a support metal fitting (not illustrated) extending in a horizontal direction from the wall portions 11b. The camera 20 is not limited to the above-described configuration; the direction, the installation place, the installation method, and the like of the camera 20 can be set as appropriate. For example, the camera 20 may be installed to face upward or in the horizontal direction, may be fixed to the ceiling portion 11c, or may be suspended from the ceiling portion 11c.
  • The camera 20 consecutively images the subject H and transmits the captured images to the information processing apparatus 60. Specifically, the camera 20 captures the subject H at a predetermined frame rate and inputs the result to an input unit 61 (see FIG. 2) of the information processing apparatus 60 as image data for each frame. The camera 20 is connected to the information processing apparatus 60 so as to be capable of data communication, and the image data is transmitted from the camera 20 to the input unit 61 via wired communication (for example, USB or Ethernet (registered trademark)) or wireless communication (for example, various kinds of radio wave communication or the Internet). The frame rate of the camera 20 is set corresponding to a speed of the motion of the subject H and the performance of the information processing apparatus 60. In the embodiment, the frame rate of the camera 20 is set to 40 fps (frames per second). In this case, the camera 20 captures the subject H 40 times per second and transmits the frame images, acquired at a rate of 40 frames per second, to the information processing apparatus 60. The frame rate of the camera 20 is not limited to 40 fps and may be set to, for example, 25 fps, 30 fps, 50 fps, 60 fps, or the like.
  • The light source 30 is an instrument, a device, or the like that emits light for irradiating the subject H. The light source 30 is arranged near the ceiling portion 11c in the central portion of the acoustic space 10 in plan view. The light source 30 is, for example, an LED spotlight. The spotlight is illumination that intensively illuminates a part of the acoustic space 10. The light source 30 is installed facing downward and emits a light L directly downward. Thus, the light L is projected onto a region R (see FIG. 3) in a part of an upper surface 41 of the table 40. The light source 30 is installed integrally with the camera 20. The light source 30 is not limited to the above-described configuration and may be, for example, an arc lamp, an incandescent bulb, a fluorescent lamp, sunlight, or the like, or may be one that uniformly illuminates a wide area including the periphery of the table 40. The direction of the emitted light L, the installation place, the installation method, and the like of the light source 30 can be set as appropriate; for example, the light source 30 may be installed so as to emit the light L upward or horizontally, or may be installed separately from the camera 20.
  • The table 40 is installed on the floor portion 11a below the light source 30. A height of the table 40 is set, for example, to a height corresponding to the position of the waist of the user (the performer) U in an upright posture (see FIG. 3). The table 40 is installed approximately in the center of the surface of the floor portion 11a and has a rectangular parallelepiped shape with a height of 90 cm and a length and a width of 45 cm each. The upper surface 41 of the table 40 has a planar shape and is arranged so as to include the entire region R irradiated by the emitted light L of the light source 30. The table 40 is not limited to the above-described configuration; the shape, the size, the arrangement, and the like can be changed as necessary. In the acoustic space creation apparatus 100, the table 40 itself is optional.
  • As described above, the light L is emitted directly downward from the light source 30. The light L emitted from the light source 30 travels downward and irradiates, for example, the circular region R on the upper surface 41 of the table 40. At this time, in the acoustic space 10, which is kept dark as a whole, an approximately conical space having the light source 30 as its apex and the upper surface 41 of the table 40 as its bottom surface is partially illuminated by the light L. Such an approximately conical space illuminated by the light L of the light source 30 is referred to as a light irradiation space S (see FIG. 3).
  • Five speakers 50 are installed in the acoustic space 10. These five speakers 50 output the sound based on music data generated by the information processing apparatus 60. The five speakers 50, which include a first speaker 51, a second speaker 52, a third speaker 53, a fourth speaker 54, and a fifth speaker 55, respectively have sound emitting portions 51a to 55a that emit a sound in a predetermined direction.
  • The fifth speaker 55 is arranged in the central portion in plan view and on the upper side of the acoustic space 10. The fifth speaker 55 is installed on an upper surface of the camera 20 with the sound emitting portion 55a facing upward.
  • On the other hand, the first speaker 51, the second speaker 52, the third speaker 53, and the fourth speaker 54 are arranged in the bottom portion of the acoustic space 10 so as to be equally spaced on an identical circumference centered on the fifth speaker 55. The first to fourth speakers 51 to 54 are installed at the four corners of the approximately square floor portion 11a, respectively. The first to fourth speakers 51 to 54 are installed with the sound emitting portions 51a to 54a directed toward the center side of the acoustic space 10 and tilted upward by about 5 degrees to 25 degrees with respect to the horizontal direction so as to emit sounds slightly upward.
  • The plurality of speakers 50 included in the acoustic space creation apparatus 100 are not limited to the configuration of the above-described first to fifth speakers 51 to 55. That is, the installation count of the speakers 51 and the like, their respective arrangements, and the directions of the sound emitting portions 51a and the like inside the acoustic space 10 can be changed as necessary. Specifically, for example, the count of speakers included in the acoustic space creation apparatus 100 may be two to four, or six or more. All the speakers 51 to 55 constituting the five speakers 50 may be arranged, for example, in the upper portion of the acoustic space 10, in the bottom portion, or so as to surround the central portion. For example, the fifth speaker 55 may be installed separately from the camera 20 with the sound emitting portion 55a directed downward so as to emit the sound downward, and the first to fourth speakers 51 and the like may be installed so as to emit the sound in the horizontal direction. While all of the first to fifth speakers 51 and the like are speakers having an identical configuration, some or all of them may be speakers having different configurations.
  • The information processing apparatus 60 is configured by, for example, a computer. The information processing apparatus 60 is communicatively connected with each of the first to fifth speakers 51 and the like and the camera 20 via wired or wireless communication. The information processing apparatus 60 acquires video of the camera 20 and causes the first to fifth speakers 51 and the like to emit a predetermined sound. While the information processing apparatus 60 is installed outside the acoustic space 10, it may be installed inside the acoustic space 10. FIG. 2 is a block diagram illustrating a main part of the acoustic space creation apparatus 100. As illustrated in FIG. 2, the information processing apparatus 60 has the input unit 61, a processing unit 62, and a storage unit 63.
  • The input unit 61 acquires the video of the camera 20. The input unit 61 acquires a plurality of frames imaged by the camera 20 as the image data. The processing unit 62 performs predetermined processing based on the input image data, selects a musical pitch corresponding to the motion state of the subject H, and causes the speakers 50 to output each sound of the selected musical pitch. The data processing and the like in the processing unit 62 will be described later. The processing unit 62 is achieved by a configuration including, for example, a CPU. The storage unit 63 stores the image data input from the camera 20, data generated by the processing unit 62, and the like. The storage unit 63 also stores a program for executing the processing of the processing unit 62, and a musical structure program and a spatial structure program, which will be described later. The storage unit 63 is achieved by, for example, a memory, a hard disk, and the like.
  • Next, a method of using the acoustic space creation apparatus 100 will be described. FIG. 3 is a drawing illustrating one example of a use state of the acoustic space creation apparatus 100. As illustrated in FIG. 3, the user U stands on the back side of the table 40 and, with the hands H placed in the light irradiation space S, performs the motion of moving and stopping the hands H. The acoustic space creation apparatus 100 is used in this way. At this time, the user U puts both the left and right hands H or one hand H in the light irradiation space S, performs a motion such as moving the hands H vertically or horizontally, rotating the hands H, moving the fingers, or opening and closing the palms, and temporarily stops the motion of the hands H. Such a series of motions of the hands H is captured by the camera 20. Then, the sounds generated corresponding to the motions of the hands H are output from the speakers 50.
  • Subsequently, an operation of the acoustic space creation apparatus 100 will be described. FIG. 4 is a flowchart illustrating one example of the operation of the acoustic space creation apparatus 100. In the following, the operation of the acoustic space creation apparatus 100 will be described according to the flowchart in FIG. 4.
  • First, the moving image of the subject (the object) H is imaged by the camera 20 (Step S01). The camera 20 transmits the captured image data to the information processing apparatus 60 (Step S02). For example, in the use state indicated in FIG. 3, the camera 20 consecutively captures the hands H inside the light irradiation space S at the predetermined frame rate and inputs each frame as the image data to the input unit 61 of the information processing apparatus 60.
  • When the image data for each frame (frame image) is input into the input unit 61, the processing unit 62 executes predetermined processing and a predetermined operation based on the image data (Step S03). Then, musical sounds or the like are output from the speakers 50 (Step S04). The sound output from the speakers 50 has a melody and a syllable composed of the sounds of a predetermined musical pitch selected by the processing unit 62.
  • When a predetermined condition is satisfied, the sound output from the speakers 50 includes a sine wave sound and a reverberation sound in addition to the sound of such a melody or a syllable. The conditions under which the sine wave sound and the reverberation sound are output from the speakers 50 will be described later.
  • By the operations of Step S01 to Step S04 of the acoustic space creation apparatus 100, a musical sound automatically composed corresponding to the motion of the subject H is played from the speakers 50, and thus the acoustic space 10 is created.
  • Subsequently, the contents of the processing of the processing unit 62 at Step S03 will be specifically described. FIG. 5 is a flowchart illustrating the processing of the processing unit 62 when the sound of a predetermined musical pitch is output. The processing of the processing unit 62 is, for example, automatically executed based on the program stored in the storage unit 63.
  • As illustrated in FIG. 5, first, the processing unit 62 acquires the frame image (Step S11). Next, the processing unit 62 compares the light amount of the image data in the frame with the light amount of the image data in the frame acquired immediately before (Step S12). While the light amount of the whole region of the frame image is compared at Step S12, the light amount of a predetermined partial region of the frame image may be compared instead.
  • Subsequently, it is determined whether or not the difference in the light amount is equal to or more than a threshold (Step S13). At Step S13, the processing unit 62 determines whether the difference between the light amount of the image data in the frame and the light amount of the image data in the frame acquired immediately before the frame is equal to or more than the threshold or less than the threshold. The threshold is a value of a predetermined light amount, is set preliminarily, and is stored in the storage unit 63.
  • When the above-described difference of the light amount is equal to or more than the threshold (when "YES" at Step S13), the processing unit 62 determines that the subject H is in a state of "motion" (movement) (Step S14). Then, when the subject H is determined to be in a "motion" state, the processing unit 62 generates one motion detection signal (Step S15).
  • On the other hand, when the above-described difference of the light amount is determined to be less than the threshold (when "NO" at Step S13), the processing unit 62 determines that the subject H is in a "STOP" (stop) state (Step S24). When the subject H is determined to be in a stop state, the processing unit 62 may generate one rest/reverberation signal (Step S25).
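For illustration, the determination of Steps S12 to S14 may be sketched as follows in Python; the representation of a frame as a grayscale array and the definition of the light amount as a pixel-value sum are assumptions of this sketch, not details taken from the embodiment:

```python
import numpy as np

def classify_frame(frame, prev_frame, threshold):
    # frame, prev_frame: grayscale frame images as 2-D numpy arrays;
    # threshold: the preliminarily set light-amount difference (Step S13).
    diff = abs(float(frame.sum()) - float(prev_frame.sum()))
    if diff >= threshold:
        return "motion"   # Step S14: one motion detection signal is generated
    return "stop"         # Step S24: one rest/reverberation signal may be generated
```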
  • When one motion detection signal is generated at Step S15, the processing unit 62 selects one musical pitch of an output sound based on the preliminarily set and stored musical structure program (Step S16). At Step S16, based on the musical structure program, for example, the musical pitch of the sound to be output is selected from a file having data indicated in FIG. 6. In this case, one musical pitch is selected from data (A1 to A42) of the 42 musical pitches described in FIG. 6. The musical pitch selected at Step S16 may be selected from, for example, a recording material such as an acoustic noise, instead of the data (A1 to A42) of the 42 musical pitches described in FIG. 6. At Step S16, a plurality of musical pitches among the data (A1 to A42) of the 42 musical pitches described in FIG. 6 may be selected at a time.
  • Following Step S16, the processing unit 62 allocates a speaker to the sound of the selected musical pitch by the preliminarily set and stored spatial structure program (Step S17). At Step S17, based on the spatial structure program, any one of the five speakers 51 to 55 is allocated to each sound of the selected musical pitch. When a plurality of musical pitches are selected at a time at Step S16, at Step S17, any one of the five speakers 51 to 55 is allocated to each of the selected plurality of musical pitches.
  • FIG. 6 is a drawing illustrating the audio file of the musical pitches. In FIG. 6, data regarding, for example, the 42 musical pitches are illustrated. In FIG. 6, A1 to A42 are serial numbers of the 42 musical pitches. These 42 musical pitches are the musical pitches selectable at Step S16. The Musical Instrument Digital Interface (MIDI) number, also called a note number, is a numerical value representing a sound pitch and a sound range in MIDI. A simultaneous sound number is a count of sounds that can be output at a time and is also a value corresponding to a count of layers. An appearance frequency of each musical pitch when the motion detection signal occurs 127 times indicates how many times each pitch is selected when the motion detection signal occurs 127 times. An output frequency of SP1/SP2/SP3/SP4/SP5 when the motion detection signal occurs 20 times indicates how many times each of the first speaker 51 (SP1), the second speaker 52 (SP2), the third speaker 53 (SP3), the fourth speaker 54 (SP4), and the fifth speaker 55 (SP5) is allocated when the motion detection signal occurs 20 times and the sound of the predetermined musical pitch is selected 20 times.
  • Numerical values regarding the appearance frequency of each musical pitch and an allocation frequency to each speaker of the sound of each musical pitch are preliminarily set. At Step S16 and Step S17, selection of the musical pitch and allocation to the speakers 50 are automatically performed according to the numerical values related to these appearance frequency and allocation frequency.
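Steps S16 and S17 amount to two weighted random choices driven by the preset frequencies; the sketch below uses made-up weights in the style of FIG. 6, since the actual values of the audio file are not reproduced here:

```python
import random

# Hypothetical excerpt of the FIG. 6 table: appearance weight per pitch
# and allocation weights per speaker (all numbers invented for the sketch).
PITCH_TABLE = {
    "A1": {"appearance": 9, "speakers": {"SP1": 6, "SP2": 5, "SP3": 4, "SP4": 3, "SP5": 2}},
    "A2": {"appearance": 4, "speakers": {"SP1": 2, "SP2": 6, "SP3": 5, "SP4": 4, "SP5": 3}},
    # ... entries up to A42
}

def select_pitch_and_speaker(table=PITCH_TABLE):
    pitches = list(table)
    # Step S16: select one pitch according to its appearance frequency.
    pitch = random.choices(pitches, weights=[table[p]["appearance"] for p in pitches])[0]
    # Step S17: allocate one of the five speakers according to the
    # allocation frequency stored for that pitch.
    spk = table[pitch]["speakers"]
    speaker = random.choices(list(spk), weights=list(spk.values()))[0]
    return pitch, speaker
```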
  • Referring again to FIG. 5, the processing unit 62 causes the allocated speakers 50 to output the sound of the predetermined musical pitch (Step S18). At Step S18, the sound of the selected musical pitch is output from the allocated speakers 50. When the speakers 50 output a continuous sound, the sound may be output from the speakers 50 so as to move up, down, left, and right inside the acoustic space 10, for example, by changing the speaker 51 and the like allocated to each sound constituting the continuous sound (for example, by allocating a different speaker to each sound constituting the continuous sound), as in the sketch below.
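The "moving" continuous sound reduces to rotating the speaker allocation across successive sounds; a minimal sketch, with play() as an assumed playback callback:

```python
def play_moving_sound(sounds, speakers, play):
    # Cycling the allocation over the speakers makes the continuous
    # sound appear to travel through the acoustic space.
    for i, sound in enumerate(sounds):
        play(speakers[i % len(speakers)], sound)
```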
  • FIG. 7 is a drawing for describing one example of an operation of the processing unit 62. As also illustrated in FIG. 7, by the above-described processing of the processing unit 62, when the subject H is in a motion state, the sound of the selected musical pitch is output from the speakers 50 based on the musical structure program. On the other hand, when the subject H is in a stop state, the speakers 50 temporarily enter a rest state where they do not output the sound, and as a result, the acoustic space 10 becomes silent or emits reverberation. Accordingly, when the subject H continuously keeps a motion state, the sound of the predetermined musical pitch is continuously output, and thus a sound as a texture is generated. On the other hand, when the subject H appropriately inserts a stop state between motion states, a pause between motions is expressed by the sound, and the reverberation of the sound can be heard. As a result, the acoustic space 10 in which the melody and the syllable are played is created.
  • In the processing at Step S03 described above, the processing unit 62 causes the speakers 50 to generate a sine wave under a predetermined condition. This sine wave is output in a state of being synthesized with the waveform of the sound of the above-described predetermined musical pitch. Thus, the processing of the processing unit 62 related to the generation of such a sine wave will be described.
  • FIG. 8 is a flowchart illustrating generation processing of the sine wave in the processing unit 62. The processing unit 62 executes the following processing. As illustrated in FIG. 8, the processing unit 62 determines whether or not the motion detection signal has occurred consecutively a first predetermined number of times during a first unit time (Step S31). That is, the processing unit 62 determines whether or not a motion state has occurred consecutively the first predetermined number of times during the first unit time. When it is determined to have occurred (when "YES" at Step S31), the processing unit 62 determines whether or not the number of times of the consecutive occurrences for the first predetermined number of times at Step S31 has reached a second predetermined number of times (Step S32). When it is determined to have reached (when "YES" at Step S32), the processing unit 62 determines whether or not the motion detection signal has occurred after having been in a stop state during a second unit time or greater (Step S33). Then, when it is determined to be in a motion state after having been in a stop state during a second unit time or greater (when "YES" at Step S33), furthermore, when the specific speaker 51 or the like among the plurality of speakers 50 is allocated to the predetermined sound (when "YES" at Step S34), the processing unit 62 causes the specific speaker 51 or the like to generate the sine wave (Step S35). In the above-described generation processing of the sine wave, the determination of the stop state may be recognized by non-occurrence of the motion detection signal or may be recognized by detecting the rest/reverberation signal.
  • Specifically, for example, it is as follows. FIG. 9 is a drawing for describing one example of the operation of the processing unit 62. The first unit time is preliminarily set to, for example, 180 ms, the first predetermined number of times to, for example, 1 time to 30 times, the second unit time to, for example, 500 ms, and the second predetermined number of times to, for example, 10 times. These setting values are stored in the storage unit 63. As illustrated in FIG. 9, when the motion detection signal has consecutively occurred 1 to 30 times within 180 ms, the total count of such cycles has reached 10, the stop state has then continued for 500 ms or more, and the motion detection signal is subsequently detected, the processing unit 62 allocates, for example, the fifth speaker 55 and causes the fifth speaker 55 to generate a first sine wave for 60 seconds to 90 seconds. When the sound of the musical pitch selected by the occurrence of the motion detection signal at Step S33 is the sound of any of, for example, A19 to A25 among the data of the 42 musical pitches in FIG. 6, the processing unit 62 causes a second sine wave to be generated in addition to the first sine wave. The second sine wave is a sine wave where a third sine wave and a fourth sine wave, described below, are synthesized by a sound volume curve of 10 seconds. Here, the third sine wave is a sine wave with a frequency two octaves higher than the musical pitch selected by the occurrence of the motion detection signal at Step S33. The fourth sine wave is a sine wave where a frequency of 0.5 Hz to 11 Hz is added to the third sine wave. The second sine wave is output from, for example, the fifth speaker 55 for 10 seconds. When the numerical value of the fourth sine wave is changed, it reaches the target value using a time interpolation of 1500 ms to 4000 ms.
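The burst-counting condition of Steps S31 to S33 is specified only loosely by the flowchart, so the following Python sketch is one possible reading, not the embodiment's actual program; every name and the reset-after-trigger behavior are assumptions. A "cycle" here is a run of motion detection signals whose gaps stay within the first unit time.

```python
class BurstDetector:
    """One reading of Steps S31-S33: count completed cycles (runs of
    motion detection signals with gaps within the first unit time) and
    trigger when the cycle count has reached the target and a signal
    arrives right after a stop of the second unit time or greater."""

    def __init__(self, first_unit_ms=180.0, second_unit_ms=500.0, target_cycles=10):
        self.first_unit_ms = first_unit_ms      # first unit time
        self.second_unit_ms = second_unit_ms    # second unit time
        self.target_cycles = target_cycles      # second predetermined number of times
        self.cycles = 0
        self.last_ms = None

    def on_motion_signal(self, now_ms):
        triggered = False
        if self.last_ms is not None and now_ms - self.last_ms > self.first_unit_ms:
            self.cycles += 1  # the previous run of consecutive signals has ended
            if (self.cycles >= self.target_cycles
                    and now_ms - self.last_ms >= self.second_unit_ms):
                triggered = True   # condition of Step S33 satisfied
                self.cycles = 0    # assumed: counting restarts after a trigger
        self.last_ms = now_ms
        return triggered
```

The second sine wave can likewise be illustrated numerically, assuming that "two octaves higher" means four times the frequency, that the synthesis of the third and fourth sine waves is a plain sum, and that the 10-second sound volume curve is a linear fade-out; none of these choices is specified by the embodiment:

```python
import numpy as np

def second_sine_wave(pitch_hz, offset_hz=5.0, seconds=10.0, sr=44100):
    # third wave: two octaves (x4 in frequency) above the selected pitch;
    # fourth wave: the third wave with 0.5-11 Hz added to its frequency,
    # which beats slowly against the third wave when the two are summed.
    t = np.arange(int(seconds * sr)) / sr
    third = np.sin(2 * np.pi * (4 * pitch_hz) * t)
    fourth = np.sin(2 * np.pi * (4 * pitch_hz + offset_hz) * t)
    envelope = np.linspace(1.0, 0.0, t.size)   # assumed 10-second volume curve
    return 0.5 * (third + fourth) * envelope
```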
  • Furthermore, in the processing at Step S03 described above, the processing unit 62 causes the speakers 50 to output a first reverberation sound, under a predetermined condition. Thus, subsequently, the processing of the processing unit 62 related to the output of the first reverberation sound will be described.
  • FIG. 10 is a flowchart illustrating the output processing of the first reverberation sound. The processing unit 62 executes the following processing. As illustrated in FIG. 10, the processing unit 62 determines whether or not the motion detection signal has consecutively occurred the first predetermined number of times during the first unit time (Step S41). When it is determined to have occurred ("YES" at Step S41), the processing unit 62 determines whether or not the number of times of the consecutive occurrences for the first predetermined number of times at Step S41 has reached a third predetermined number of times (Step S42). When it is determined to have reached it ("YES" at Step S42), the processing unit 62 subsequently determines whether or not the motion detection signal has occurred after having been in a stop state during the second unit time or greater (Step S43). Then, when the object is determined to be in a motion state after having been in a stop state during the second unit time or greater ("YES" at Step S43) and, furthermore, the specific speaker among the plurality of speakers 50 is allocated to the selected sound ("YES" at Step S44), the processing unit 62 causes the specific speaker to output the first reverberation sound (Step S45).
  • A specific description is given, for example, as follows. FIG. 11 is a drawing for describing one example of the operation of the processing unit 62. The first unit time, the first predetermined number of times, and the second unit time are preliminarily set to, for example, the above-described values. The third predetermined number of times is set to, for example, 3 times to 5 times. These setting values are stored in the storage unit 63. As illustrated in FIG. 11, the processing unit 62 counts the cycles where the motion detection signal consecutively occurs 1 time to 30 times within 180 ms. When the count has reached the third predetermined number of times and the processing unit 62 detects the motion detection signal after having been in a stop state for 500 ms or greater, the processing unit 62 allocates, for example, the fifth speaker 55 and causes the fifth speaker 55 to output the first reverberation sound for 60 seconds to 90 seconds. At this time, when the speaker allocated to the sound of the musical pitch selected based on the motion detection signal after the stop state of 500 ms or greater is the fifth speaker 55, the first reverberation sound is output from the fifth speaker 55 in a state of being synthesized with the sound of the above-described musical pitch.
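Under the same assumptions as the BurstDetector sketch above, the first-reverberation condition differs from the sine-wave condition only in the target cycle count; a hypothetical usage, with play() again as an assumed playback callback:

```python
import random

sine_trigger = BurstDetector(target_cycles=10)    # second predetermined number of times
reverb1_trigger = BurstDetector(target_cycles=4)  # third predetermined number, e.g. 3 to 5

def handle_motion_signal(now_ms, allocated_speaker, play):
    # play(speaker_id, sound_name, seconds) is an assumed callback
    if reverb1_trigger.on_motion_signal(now_ms) and allocated_speaker == "SP5":
        play("SP5", "first_reverberation", random.uniform(60, 90))
```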
  • Further, in the processing at Step S03 described above, the processing unit 62 causes the speakers 50 to output the second reverberation sound under a predetermined condition. Thus, subsequently, the processing of the processing unit 62 related to the output of the second reverberation sound will be described.
  • FIG. 12 is a flowchart illustrating the output processing of the second reverberation sound. The processing unit 62 executes the following processing. As illustrated in FIG. 12, the processing unit 62 determines whether or not the motion detection signal has consecutively occurred a fourth predetermined number of times during the first unit time (Step S51). When it is determined to have occurred (when "YES" at Step S51), and, furthermore, the specific speakers among the plurality of speakers 50 are allocated with respect to the selected sound (when "YES" at Step S52), the processing unit 62 causes the specific speakers 50 to output the second reverberation sound (Step S53).
  • A specific description is given, for example, as follows. FIG. 13 is a drawing for describing one example of the operation of the processing unit 62. The first unit time is preliminarily set to, for example, the above-described value. The fourth predetermined number of times is set to, for example, 30 times. These setting values are stored in the storage unit 63. As illustrated in FIG. 13, when the motion detection signal consecutively occurs 30 times within 180 ms, the processing unit 62 causes the speakers 50 to output the second reverberation sound. After synthesizing the second reverberation sound with, for example, the sounds of the musical pitches A1 to A18 and A70 to A79 among the data of the 42 musical pitches described in FIG. 6, the processing unit 62 causes the second reverberation sound to be output for 3 seconds to 10 seconds. The processing unit 62 causes the second reverberation sound to be output from, for example, the first speaker 51 and the second speaker 52, and also causes the second reverberation sound to be output from the third speaker 53 and the fourth speaker 54 after one second, as an acoustic movement.
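The staggered output from the speaker pairs can be sketched as follows, again with play() as an assumed callback; the one-second offset reproduces the acoustic movement described above:

```python
import threading

def output_second_reverberation(play):
    # Emit from the first and second speakers at once, then from the
    # third and fourth speakers one second later.
    for sp in ("SP1", "SP2"):
        play(sp, "second_reverberation")
    threading.Timer(
        1.0, lambda: [play(sp, "second_reverberation") for sp in ("SP3", "SP4")]
    ).start()
```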
  • As described above, the processing unit 62 causes the speakers 50 to output the sound of the predetermined musical pitch by the processing of Step S11 to Step S18. The processing unit 62 causes the speakers 50 to generate the sine wave by the processing of Step S31 to Step S35. Further, the processing unit 62 causes the speakers 50 to output the first reverberation sound by the processing of Step S41 to Step S45. Furthermore, the processing unit 62 causes the speakers 50 to output the second reverberation sound by the processing of Step S51 to Step S53.
  • As described above, based on the motion detection signal generated by the processing unit 62, which determines the repetition of "MOTION/STOP" (variations of motion/stop time) through the input unit 61 (the sensor input), the acoustic space creation apparatus 100 creates the acoustic space 10 via the musical structure program and the spatial structure program implemented in the storage unit 63. The acoustic space creation apparatus 100 determines each of a motion state and a stop state based on the motion of the subject H, selects the sound corresponding to the frequency, the cycle, the number of times, and the like of each state, and outputs the selected sound as the musical sound. Consequently, the acoustic space creation apparatus 100 can easily play the musical sound corresponding to the aspect of the motion of the object, and thus can easily provide the acoustic space 10 in which such a musical sound is played. The acoustic space creation apparatus 100 can play the musical sound and the acoustics of a new syllable generated based on the repetition of motion and stop of the object. Under the predetermined conditions, the acoustic space 10 that plays the unique musical sound, including the sound generated by synthesizing the sine waves, the first reverberation sound, the second reverberation sound, and the like, can be provided.
  • In the acoustic space creation apparatus 100, since the subject H in motion is irradiated with the light L, the user U can perform or appreciate the improvisational music while watching the motion of the subject H illuminated by the light L. This allows the acoustic space creation apparatus 100 to provide the acoustic space 10 with a new entertainment feature where the user U can enjoy the improvisational music together with a visual change. Furthermore, since the acoustic space creation apparatus 100 allows the user U to feel or listen to music related to bodily breathing, such as physical expression, yoga, and welfare applications, generated from the motion state and the stop state, the acoustic space creation apparatus 100 not only allows the user U to simply feel or listen to the music but can also contribute to improving the spirit of the user U.
  • Subsequently, a modification of the acoustic space creation apparatus 100 according to the above-described embodiment will be described. While the above-described acoustic space creation apparatus 100 has a configuration including the camera 20 capturing the subject and the light source 30 irradiating the subject H, it may have a configuration including a sensor capable of detecting the state of the object instead of both the camera 20 and the light source 30. Thus, in the following, a configuration of an acoustic space creation apparatus according to the modification will be specifically described.
  • In the configuration of the acoustic space creation apparatus according to the above-described modification, as the sensor capable of detecting the state of the object, for example, a distance sensor capable of detecting a distance to the object, or the like, is applicable. The camera 20 is also one of the sensors capable of detecting the motion of the object.
  • Operations of an input unit and a processing unit of the acoustic space creation apparatus according to such a modification are similar to the operations of the input unit 61 and the processing unit 62 described above. However, the input unit according to the modification consecutively acquires from the sensor, for example, a signal corresponding to the distance from the sensor to the object. The processing unit of the modification compares the acquired signal with the signal acquired immediately before and determines that the object is in a motion state when the difference is equal to or more than a threshold. The threshold is a value related to the signal and is preliminarily set. The subsequent processing of the processing unit is similar to that of the above-described processing unit 62. That is, the modification includes the sensor capable of detecting the state of the object, the input unit acquiring the signal from the sensor, the processing unit selecting a sound to be generated corresponding to the above-described signal, and the plurality of speakers 50 outputting the sound selected by the processing unit. The processing unit compares the signal with a signal immediately before the signal, determines whether the object is in a motion state or a stop state based on the above-described predetermined threshold for the signal, generates the motion detection signal when the object is determined to be in a motion state, selects a musical pitch to be output in accordance with the pre-stored musical structure program, allocates one of the plurality of speakers 50 to each sound of the selected musical pitch in accordance with the pre-stored spatial structure program, and causes the allocated speaker to output the sound. The other configurations of the acoustic space creation apparatus according to the modification are similar to those of the acoustic space creation apparatus 100 according to the above-described embodiment. The processing unit of the acoustic space creation apparatus according to the modification causes the plurality of speakers 50 to output the sine wave, the first reverberation sound, and the second reverberation sound corresponding to the frequency, the cycle, the number of times, and the like of each state, in addition to the sound of the above-described musical pitch.
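Only the front end of the decision changes in this modification; a minimal sketch of the signal comparison, assuming a scalar sensor reading such as a distance value:

```python
def classify_signal(value, prev_value, threshold):
    # Same motion/stop decision as the camera case, applied to an
    # arbitrary scalar sensor signal; the threshold is preliminarily set.
    return "motion" if abs(value - prev_value) >= threshold else "stop"
```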
  • According to such a modification, the acoustic space creation apparatus can be achieved with a simpler configuration. Consequently, for example, it is also possible to constitute the acoustic space creation apparatus by a compact PC with a video camera function. As a result, it becomes easy to apply the acoustic space creation apparatus to a music teaching material for children and the like.
  • In addition to the above-described modification, the acoustic space creation apparatus 100 can also have the following configuration. Specifically, for example, by determining a state of stepping on a grid where a sensor is mounted and a state of taking the foot off the grid, or by determining the ON/OFF of a switch of a buzzer, the processing unit may determine whether the object is in a motion state or a stop state based on a predetermined threshold for the signal. That is, for example, by using a sensor capable of detecting contact with the foot, signals that differ between when the foot is in contact with the grid and when the foot is away from the grid (for example, 1 for being in contact with the grid and 0 for being away from it) are output from the sensor to the processing unit; or, by using a sensor capable of detecting an ON state of the buzzer, signals that differ between when the switch of the buzzer is ON and when it is OFF (for example, 1 for ON and 0 for OFF) are output from the sensor to the processing unit. Then, the processing unit compares the signal acquired from the sensor with the immediately preceding signal and determines whether or not the difference is equal to or more than a threshold (for example, whether or not it is equal to or more than 1). Thus, the processing unit determines whether the object is in a motion state (a state with a change in motion) or a stop state (a state with no change in motion) (for example, when the above difference is 1, the object is determined to be in a motion state, and when the above difference is 0, the object is determined to be in a stop state). Then, when the object is determined to be in a motion state, the processing unit generates the motion detection signal, selects the musical pitch to be output in accordance with the pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with the pre-stored spatial structure program, and causes the allocated speaker to output the sound. In addition to the sound of the above-described musical pitch, the processing unit causes the plurality of speakers 50 to output the sine wave, the first reverberation sound, and the second reverberation sound corresponding to the frequency, the cycle, the number of times, and the like of each state.
  • In the acoustic space creation apparatus 100, for input voice information (for example, voice input into a microphone), the processing unit may determine whether the object is in a motion state or a stop state by determining the presence/absence of an attack (namely, a distinctly strong sound, the start of a sound, or the rise of a sound) based on a predetermined threshold for the signal. That is, for example, the acoustic space creation apparatus 100 may include a microphone as a voice recognition device for picking up the voice of the object, and the processing unit may compare the voice signal acquired from the microphone, serving as the sensor, with the voice signal acquired immediately before, determine whether or not the difference (for example, the difference in sound volume) is equal to or more than a predetermined threshold, and thereby determine whether the object is in a motion state or a stop state.
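As an illustration of the attack-based decision, consecutive audio blocks can be compared by RMS level; the block representation and the 6 dB threshold are assumptions of this sketch:

```python
import numpy as np

def detect_attack(block, prev_block, threshold_db=6.0):
    # block, prev_block: numpy arrays of audio samples. A level rise of
    # threshold_db or more between consecutive blocks is treated as an
    # attack, i.e. the object is determined to be in a motion state.
    eps = 1e-12
    def level_db(x):
        return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + eps)
    return level_db(block) - level_db(prev_block) >= threshold_db
```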
  • While the embodiment and the modification of the present invention have been described above, the technical scope of the present invention is not limited to the above-described embodiment. For example, the object whose motion is detected by a sensor such as a camera is not limited to the hands H and may be, for example, the feet or the whole body of a person, a thing that the person holds (for example, a bar), or a shadow of a leaf, a wave, and the like. The plurality of speakers 50 that the acoustic space creation apparatus 100 includes may be configured by a plurality of stationary speakers 51 and the like as in the embodiment, or may be a headphone, earphones for both ears, or the like including a pair of speakers that output the sound directly to both ears of the user U. While the acoustic space 10 that the acoustic space creation apparatus 100 creates is an actual space, it may be a virtual space. That is, for example, when the headphone or the earphones for both ears are applied as the speakers 50 of the acoustic space creation apparatus 100, the acoustic space creation apparatus 100 provides a virtual acoustic space to the user U wearing the speakers 50. In the embodiment and the modification described above, the sound is generated corresponding to the result of processing such as determining a motion state (motion) or a stop state (stop) of the object, performed by using the sensor capable of detecting the motion of the object (the subject); however, the motion of the object (the subject) is not limited to the motion of a real object. That is, the acoustic space creation apparatus of the present invention may generate the sound corresponding to, for example, processing such as determining a motion state (motion) and a stop state (stop) for the motion of an object at a specific fictitious place in virtual reality (VR) or in a game (on an OpenGL (registered trademark) drawing and the like). The acoustic space creation apparatus 100 may include the storage unit 63 storing a plurality of musical structure programs having algorithms where respective different conditions (a threshold and the like) are set, and may execute the processing at Step S16 described above by appropriately selecting from the plurality of musical structure programs or by simultaneously executing the plurality of musical structure programs. In this case, the plurality of musical structure programs are stored in the storage unit 63, for example, in a state of being divided into respective different layers. This ensures appropriate selection of the musical structure program corresponding to the usage environment and the count of implementation locations of the acoustic space creation apparatus 100, and facilitates improvement and addition of the musical structure program.
  • The acoustic space creation apparatus of the present invention may be achieved by, for example, a configuration of a system where the multiple speakers 51 and the like are randomly installed at branches of a tree in a natural environment, a configuration of a product where a plurality of ultra-compact speakers 51 and the like and sensors are set on a flat surface (for example, a picture, a book, or a wall portion (for example, a square wall portion with a length and a width of 10 cm each or 5 m each)), or a configuration of an environmental system where the sensors and the speakers 51 and the like are implemented in a plurality of street lights that also serve as the light source 30. The acoustic space creation apparatus of the present invention may be one that constitutes a complex improvisational ensemble device/musical instrument. The acoustic space creation apparatus in this case may have a configuration that, for the 42 numbers (A1 to A42) to which the respective musical pitch files described in FIG. 6 are allocated, performs processing such as changing/allocating them to other recording material or associating them with the operation of another musical structure program (layer) implemented in the storage unit 63. That is, for example, at Step S16 described above, the acoustic space creation apparatus may have a configuration that performs processing such as, when the musical pitch number "A2" is selected, playing a noise recording material and the sound collection/output of a microphone in real time inside the acoustic space, and thereafter, when the musical pitch number "A2" or another number is selected once or multiple times, turning off the sound collection/output of the microphone and simultaneously causing the other musical structure program/layer to operate. While the acoustic space creation apparatus 100 of the above-described embodiment includes the plurality of speakers 50, the count of the speakers 51 and the like may be one.

Claims (2)

  1. An acoustic space creation apparatus comprising:
    a camera that captures an object;
    a light source that irradiates the object;
    an input unit that acquires video from the camera;
    a processing unit that selects a sound to be generated corresponding to a frame of the video;
    a plurality of speakers that output a sound selected by the processing unit, wherein
    the processing unit compares a light amount in the frame with a light amount in a frame immediately before the frame and determines whether the object is in a motion state or a stop state based on a predetermined threshold for light amount,
    when the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound,
    when the motion state consecutively occurs a first predetermined number of times during a first unit time, the number of such consecutive occurrences of the first predetermined number of times reaches a second predetermined number of times, the object is determined to be in the motion state after having been in the stop state for a second unit time or longer, and a specific speaker among the plurality of speakers is allocated to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program,
    when the motion state consecutively occurs the first predetermined number of times during the first unit time, the number of such consecutive occurrences reaches a third predetermined number of times, the object is determined to be in the motion state after having been in the stop state for the second unit time or longer, and the specific speaker is allocated to the selected sound, the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program, and
    when the motion state consecutively occurs a fourth predetermined number of times during the first unit time, the processing unit causes the speakers to output a second reverberation sound in a predetermined order in accordance with the musical structure program.
  2. An acoustic space creation apparatus comprising:
    a sensor capable of detecting a state of an object;
    an input unit that acquires signals from the sensor;
    a processing unit that selects a sound to be generated corresponding to the signal;
    a plurality of speakers that output a sound selected by the processing unit, wherein
    the processing unit compares the signal with the signal immediately preceding it and determines, based on a predetermined threshold for the signal, whether the object is in a motion state or a stop state,
    when the object is determined to be in a motion state, the processing unit generates a motion detection signal, selects a musical pitch to be output in accordance with a pre-stored musical structure program, allocates one of the plurality of speakers to each sound of the selected musical pitch in accordance with a pre-stored spatial structure program, and causes the allocated speaker to output the sound,
    when the motion state consecutively occurs a first predetermined number of times during a first unit time, the number of such consecutive occurrences of the first predetermined number of times reaches a second predetermined number of times, the object is determined to be in the motion state after having been in the stop state for a second unit time or longer, and a specific speaker among the plurality of speakers is allocated to a predetermined sound, the processing unit causes the specific speaker to generate a sine wave in accordance with the musical structure program,
    when the motion state consecutively occurs the first predetermined number of times during the first unit time, the number of such consecutive occurrences reaches a third predetermined number of times, the object is determined to be in the motion state after having been in the stop state for the second unit time or longer, and the specific speaker is allocated to the selected sound, the processing unit causes the specific speaker to output a first reverberation sound in accordance with the musical structure program, and
    when the motion state consecutively occurs a fourth predetermined number of times during the first unit time, the processing unit causes the speakers to output a second reverberation sound in a predetermined order in accordance with the musical structure program.
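
The branching recited in claims 1 and 2 reduces to a threshold comparison followed by event counting. The Python sketch below is a hedged illustration only: the threshold, the unit times, the four predetermined numbers of times (N1 to N4), and every helper name are assumptions chosen to make the control flow concrete, not values fixed by the claims.

    import time

    LIGHT_THRESHOLD = 10.0        # predetermined threshold for the light amount/signal
    FIRST_UNIT_TIME = 1.0         # "first unit time", in seconds
    SECOND_UNIT_TIME = 5.0        # minimum stop duration ("second unit time"), in seconds
    N1, N2, N3, N4 = 3, 2, 4, 5   # first to fourth predetermined numbers of times

    class ProcessingUnit:
        def __init__(self, specific_speaker_allocated=True):
            self.prev = None              # value of the immediately preceding frame/signal
            self.window = []              # motion timestamps inside the first unit time
            self.runs = 0                 # completed runs of N1 consecutive motion states
            self.last_motion = None       # timestamp of the last detected motion state
            self.after_long_stop = False  # did the current burst follow a long stop?
            self.specific_speaker_allocated = specific_speaker_allocated

        def output(self, sound):
            print(sound)                  # placeholder for the allocated speaker output

        def classify(self, value):
            # Motion state if and only if the value changed by more than the threshold.
            moving = self.prev is not None and abs(value - self.prev) > LIGHT_THRESHOLD
            self.prev = value
            return moving

        def on_sample(self, value, now=None):
            now = time.monotonic() if now is None else now
            if not self.classify(value):
                return                    # stop state: nothing to output
            gap = float("inf") if self.last_motion is None else now - self.last_motion
            self.last_motion = now
            self.window = [t for t in self.window if now - t <= FIRST_UNIT_TIME]
            if not self.window:
                # a new burst of motion begins; note whether it follows a long stop
                self.after_long_stop = gap >= SECOND_UNIT_TIME
            self.window.append(now)
            if len(self.window) == N4:
                # fourth predetermined number of motion states within the first unit time
                self.output("second reverberation sound, speakers in a predetermined order")
            if len(self.window) == N1:
                self.runs += 1            # one more run of N1 consecutive motion states
                if self.after_long_stop and self.specific_speaker_allocated:
                    if self.runs == N2:
                        self.output("sine wave on the specific speaker")
                    elif self.runs == N3:
                        self.output("first reverberation sound on the specific speaker")

Claim 2 differs only in the input: the same on_sample() loop can be fed per-frame light amounts from the camera of claim 1 or raw signal values from the sensor of claim 2.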

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/034122 (WO2021038833A1) 2019-08-30 2019-08-30 Acoustic space creation apparatus

Publications (2)

Publication Number Publication Date
EP4024391A1 (en) 2022-07-06
EP4024391A4 (en) 2023-05-03

Family

ID=71079288

Family Applications (1)

Application Number Priority Date Filing Date Title
EP19942900.2A 2019-08-30 2019-08-30 Acoustic space creation apparatus (pending; published as EP4024391A4)

Country Status (3)

EP (1) EP4024391A4 (en)
JP (1) JP6710428B1 (en)
WO (1) WO2021038833A1 (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57170875A (en) 1981-04-09 1982-10-21 Inoue Japax Res Determined form carbide formation
JP3705000B2 * 1999-03-23 2005-10-12 Yamaha Corporation Music generation method
JP3637802B2 * 1999-03-23 2005-04-13 Yamaha Corporation Music control device
US7038122B2 * 2001-05-08 2006-05-02 Yamaha Corporation Musical tone generation control system, musical tone generation control method, musical tone generation control apparatus, operating terminal, musical tone generation control program and storage medium storing musical tone generation control program
JP3643829B2 * 2002-12-25 2005-04-27 Shunsuke Nakamura Musical sound generating apparatus, musical sound generating program, and musical sound generating method
JP2005227628A * 2004-02-13 2005-08-25 Matsushita Electric Ind Co Ltd Control system using rhythm pattern, method and program
JP2005316300A * 2004-04-30 2005-11-10 Kyushu Institute Of Technology Semiconductor device having musical tone generation function, and mobile type electronic equipment, mobile phone, spectacles appliance and spectacles appliance set using the same
DE102005049485B4 * 2005-10-13 2007-10-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Control playback of audio information
JP6833313B2 * 2014-09-19 2021-02-24 Jun Ishii Electronic musical instrument that makes full use of light receiving elements
KR20160109819A * 2015-03-13 2016-09-21 Samsung Electronics Co., Ltd. Electronic device, sensing method of playing string instrument and feedback method of playing string instrument
US9966051B2 (en) * 2016-03-11 2018-05-08 Yamaha Corporation Sound production control apparatus, sound production control method, and storage medium

Also Published As

Publication number Publication date
JP6710428B1 (en) 2020-06-17
EP4024391A4 (en) 2023-05-03
JPWO2021038833A1 (en) 2021-09-27
WO2021038833A1 (en) 2021-03-04

Legal Events

STAA Information on the status of an ep patent application or granted ep patent
     Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase
     Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent
     Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P  Request for examination filed
     Effective date: 20220319

AK   Designated contracting states
     Kind code of ref document: A1
     Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV  Request for validation of the european patent (deleted)

DAX  Request for extension of the european patent (deleted)

REG  Reference to a national code
     Ref country code: DE
     Ref legal event code: R079
     Free format text: PREVIOUS MAIN CLASS: G10G0001000000
     Ipc: G10H0001000000

A4   Supplementary search report drawn up and despatched
     Effective date: 20230403

RIC1 Information provided on ipc code assigned before grant
     Ipc: G10H 1/00 20060101AFI20230328BHEP

GRAP Despatch of communication of intention to grant a patent
     Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent
     Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced
     Effective date: 20240118

GRAS Grant fee paid
     Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant
     Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent
     Free format text: STATUS: THE PATENT HAS BEEN GRANTED