WO2021192072A1 - Indoor sound environment generation device, sound source device, indoor sound environment generation method, and sound source device control method - Google Patents

Indoor sound environment generation device, sound source device, indoor sound environment generation method, and sound source device control method

Info

Publication number
WO2021192072A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
user
sound source
control
indoor
Prior art date
Application number
PCT/JP2020/013201
Other languages
English (en)
Japanese (ja)
Inventor
尚志 永野
Original Assignee
Yamaha Corporation (ヤマハ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yamaha Corporation (ヤマハ株式会社)
Priority to PCT/JP2020/013201
Publication of WO2021192072A1

Links

Images

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00 Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems

Definitions

  • One embodiment of the present invention relates to an indoor sound environment generation device, a sound source device, an indoor sound environment generation method, and a control method of the sound source device.
  • Patent Documents 1 and 2 each disclose a device that outputs a sleep-inducing sound to a user who wants to go to bed.
  • Patent Document 1 discloses an acoustic processing device that generates an audio signal for a speaker so that a sound image is perceived three-dimensionally.
  • Patent Document 2 discloses a sound source device that outputs different sounds depending on the state of the user.
  • however, the device of Patent Document 1 does not output sound according to the state of the user. For example, a user lying in a sleeping posture may want either to fall asleep or to wake up.
  • accordingly, one object of the embodiments of the present invention is to provide an indoor sound environment generation device, a sound source device, an indoor sound environment generation method, and a control method of a sound source device that output a sound capable of guiding the user to a desired state according to the user's state.
  • the indoor sound environment generation device includes a sound signal acquisition unit that acquires a sound signal, a sound signal output unit that outputs the sound signal, a biometric information acquisition unit that acquires biometric information of the user, a position information acquisition unit that acquires position information of the user, and a control unit that controls sound image localization of the sound signal based on the biometric information and the position information.
  • the sound source device includes a biometric information acquisition unit that acquires biometric information of the user, a plurality of sound source units, a control unit that determines a control table for controlling the plurality of sound source units based on the biometric information and controls the plurality of sound source units based on the determined control table, and a reading unit that reads a sound source from the plurality of sound source units according to the control of the control unit.
  • FIG. 1 is a schematic view showing a configuration of a sound reproduction system 1 including a sound source device 20 according to the first embodiment.
  • the sound reproduction system 1 includes a sensor 11, a sensor 12, a sensor 13, a sound source device 20, a speaker 51, and a speaker 52.
  • the sound reproduction system 1 causes the user E, who is lying on his back on the bed 5, to hear the sound emitted from the speaker 51 and the speaker 52.
  • the sounds emitted by the speaker 51 and the speaker 52 include, for example, a sound for introducing sleep, a sound for awakening, and the like.
  • the speaker 51 and the speaker 52 are arranged at predetermined positions away from the bed 5. In the example of FIG. 1, they emit sound toward the user E from the direction of the user E's feet.
  • the speaker 51 amplifies the stereo left (L) channel sound signal output from the sound source device 20 with its built-in amplifier and emits the sound.
  • the speaker 52 amplifies the stereo right (R) channel sound signal output from the sound source device 20 with its built-in amplifier and emits the sound.
  • the sensor 11 is attached to the wrist of the user E.
  • the sensor 12 is laid under the pillow.
  • the sensor 13 is laid on the bed.
  • the sensor 11, the sensor 12, and the sensor 13 detect biological information such as the pulse, respiration, body movement, brain wave, and blood pressure of the user E.
  • the sensor 11, the sensor 12, and the sensor 13 have a wireless communication function.
  • the sensor 11, the sensor 12, and the sensor 13 transmit the detected biological information to the sound source device 20.
  • the number of sensors that detect biological information is not limited to three as in the present embodiment.
  • the number of sensors may be one, two, or four or more.
  • the biological information is not limited to the example shown in this embodiment. Further, at least one biological information may be detected.
  • the wireless communication function is not essential.
  • the sensor 11, the sensor 12, and the sensor 13 may be connected to the sound source device 20 by wire.
  • the sensor may be in any form as long as it can acquire biometric information.
  • FIG. 2 is a block diagram showing the configuration of the sound source device 20.
  • the sound source device 20 includes a communication unit 21, a processor 22, a RAM 23, a flash memory 24, a display 25, a user I / F26, and an audio I / F27.
  • the sound source device 20 is, for example, a personal computer, a smartphone, or a tablet computer.
  • An audio device such as an audio receiver is also an example of a sound source device.
  • the communication unit 21 receives biometric information from the sensor 11, the sensor 12, and the sensor 13 via the wireless communication function.
  • the communication unit 21 may have a wired communication function such as USB or LAN.
  • the display 25 is made of an LCD or the like.
  • User I / F26 is an example of an operation unit.
  • the user I / F 26 includes a mouse, a keyboard, a touch panel, and the like.
  • the user I / F 26 accepts the user's operation.
  • the touch panel may be stacked on the display 25.
  • the user E inputs information such as a bedtime or a wake-up time via the user I/F 26. Further, the user E selects the type of sound to be output from the speaker via the user I/F 26. For example, the user selects one of the following types: "relax", "sleep onset", "good sleep", "wake up", or "MUTE".
  • the audio I / F27 is composed of an analog audio terminal, a digital audio terminal, or the like.
  • the audio I / F 27 is connected to the speaker 51 and the speaker 52.
  • the audio I / F 27 outputs a sound signal to the speaker 51 and the speaker 52.
  • the sound source device 20 may transmit a sound signal to the speaker 51 and the speaker 52 by the wireless communication function.
  • the processor 22 is composed of a CPU, a DSP, a SoC (System on a Chip), or the like.
  • the processor 22 performs various operations by reading a program from the flash memory 24, which is a storage medium, and temporarily storing the program in the RAM 23. The program does not need to be stored in the flash memory 24.
  • the processor 22 may temporarily download the program from the server or the like as needed.
  • FIG. 3 is a block diagram showing a functional configuration of the processor 22.
  • FIG. 4 is a flowchart showing the operation of the processor 22.
  • the processor 22 constitutes a biological information acquisition unit 30, a sound source unit 40, a control unit 140, and a reading unit 145 by a program read from the flash memory 24.
  • the sound source unit 40 includes a plurality of sound source units 410, 420, 430, 440 (four in the example of FIG. 3).
  • the biometric information acquisition unit 30 acquires biometric information via the communication unit 21 (S11).
  • the control unit 140 determines the contents of the control table 70 according to the biometric information acquired by the biometric information acquisition unit 30 (S12).
  • the reading unit 145 reads a sound source from the sound source unit 40 based on the control table 70 (S13).
  • FIG. 5 is a diagram showing an example of the control table 70.
  • the control table 70 defines five control modes as an example.
  • the five control modes correspond to "relax", "sleep onset", "good sleep", "wake up", and "MUTE", respectively.
  • control mode 1 corresponds to "relax”
  • control mode 2 corresponds to "sleep onset”
  • control mode 3 corresponds to "good sleep”
  • control mode 4 corresponds to "wake up”
  • control mode 5 corresponds to "MUTE”.
  • the control unit 140 estimates the physical and mental state of the user based on the biometric information acquired by the biometric information acquisition unit 30, and may automatically select one of "relax", "sleep onset", "good sleep", "wake up", or "MUTE". For example, when biological information such as the pulse, respiration, body movement, brain waves, or blood pressure satisfies a predetermined condition (for example, a high value at or above a predetermined threshold), the control unit 140 determines that the user is in an excited state and selects "relax".
  • when the control unit 140 determines that the user is sleeping, it selects "good sleep". Alternatively, the control unit 140 may select "MUTE" when it determines that the user is sleeping. Further, the control unit 140 selects "sleep onset" when the current time reaches the bedtime input by the user, and selects "wake up" when the current time reaches the wake-up time input by the user.
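As a concrete illustration of the selection logic above, the following minimal sketch chooses a control mode from the estimated state and the user's schedule. The threshold value, the `Biometrics` fields, and the function name are assumptions for illustration; the patent leaves the exact conditions unspecified.

```python
from dataclasses import dataclass
from datetime import time

# Illustrative threshold; the text only says "a predetermined threshold".
PULSE_HIGH = 90  # beats/min

@dataclass
class Biometrics:
    pulse: int       # beats/min
    sleeping: bool   # e.g. derived from EEG or body-movement sensing

def select_mode(bio: Biometrics, now: time, bedtime: time, wake_time: time) -> str:
    """Pick one of the five control modes from biometrics and schedule."""
    if now == wake_time:
        return "wake up"          # current time reached the set wake-up time
    if now == bedtime:
        return "sleep onset"      # current time reached the set bedtime
    if bio.sleeping:
        return "good sleep"       # or "MUTE", per the patent's alternative
    if bio.pulse >= PULSE_HIGH:
        return "relax"            # user judged to be in an excited state
    return "MUTE"
```

A real implementation would combine several biometric signals (respiration, body movement, brain waves, blood pressure) rather than the single pulse value and sleep flag used here.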
  • the control table 70 includes parameters of four sound sources of hypersonic, binaural beat, natural sound, and music as an example.
  • the sound source unit 410 corresponds to a hypersonic sound source
  • the sound source unit 420 corresponds to a binaural beat sound source
  • the sound source unit 430 corresponds to a natural sound source
  • the sound source unit 440 corresponds to a music sound source.
  • hypersonic sound is, for example, inaudible sound of about 20 kHz to 100 kHz. Hypersonic sound creates a relaxing effect by applying inaudible vibrations to the surface of the human body.
  • the binaural beat is a sound having a frequency difference between the L channel and the R channel.
  • for example, the binaural beat includes a 100 Hz sound on the L channel and a 110 Hz sound on the R channel. By making the user perceive such a small frequency difference of about 10 Hz, binaural beats guide the brain waves to a low-frequency state, which gives binaural beats a relaxing effect. Natural sounds include, for example, the sound of wind, the sound of waves, the chirping of birds, or the babbling of a river.
  • the natural sound is a random combination of these multiple types of sounds.
  • the natural sound is reproduced for random lengths while these multiple types of sounds repeatedly fade in and fade out. This allows natural sounds to induce sleep and to maintain comfortable sleep.
  • the music is, for example, a chord of synthesizer timbres. Lower-frequency synthesizer chords produce a relaxing effect, while higher-frequency synthesizer chords produce an arousal effect.
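The binaural beat described above is straightforward to synthesize: two sine carriers whose frequencies differ by the desired beat rate, one per stereo channel. The sketch below uses the 100 Hz / 110 Hz example from the text; the function name and sample rate are illustrative assumptions.

```python
import numpy as np

def binaural_beat(f_left=100.0, f_right=110.0, seconds=2.0, rate=44100):
    """Return a stereo buffer whose L/R carriers differ by
    (f_right - f_left) Hz; the listener perceives the 10 Hz
    difference as a slow beat, which the text associates with
    guiding brain waves to a low-frequency state."""
    t = np.arange(int(seconds * rate)) / rate
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

buf = binaural_beat()
```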
  • user E may select arbitrary sounds in advance.
  • the reading unit 145 reads sound sources from the four sound source units 410, 420, 430, and 440 according to the control of the control unit 140, mixes them, and outputs the result to the audio I/F 27. As a result, the speaker 51 and the speaker 52 output the mixed sound.
  • the mixed sound is classified as either a relaxing sound, a sleep-inducing sound, a good sleep sound, or an awakening sound.
  • the sounds mixed in control mode 1 include hypersonic, binaural beats, and low frequency synthesizer chord music. The sound obtained by mixing these sounds becomes a relaxing sound that produces an action of relaxing the user E.
  • Sounds mixed in control mode 2 include hypersonic, binaural beats, natural sounds, and low frequency synthesizer chord music. The sound obtained by mixing these sounds becomes a sleep-introducing sound that gives the user E the effect of introducing sleep.
  • the sounds mixed in control mode 3 include hypersonic and natural sounds. The sound of mixing these sounds is a good sleep sound that does not interfere with sleep and also has a relaxing effect.
  • the sounds mixed in control mode 4 include high-frequency synthesizer chord music. In addition, the music has a high tempo and a loud volume in duple meter. Such music becomes an awakening sound that gives an awakening effect to the user E.
  • note that "-" among the parameters shown in the control table 70 means that the corresponding natural sound or music is not used.
  • the tempo of the natural sounds and the music is expressed as a difference from the heart rate or respiratory rate (per minute) of the user E.
  • for example, when the tempo in the control table 70 is "-3", the natural sounds or the music are played at a tempo 3 below the heart rate of the user E. By playing natural sounds or music at the same tempo as, or a lower tempo than, the heart rate of the user E, the sound source device 20 can relax the user E so that he or she falls asleep easily.
  • when the tempo is "2", the natural sounds or the music are played at a tempo higher than the heart rate of the user E, which can shift the user to an awake state.
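Because the tempo parameter is defined as an offset from the user's own rate, converting a table entry into an actual playback tempo is a single addition. The helper below is hypothetical, written only to make the "-3" / "2" semantics concrete:

```python
def playback_tempo(heart_rate_bpm: int, tempo_offset: int) -> int:
    """Tempo (beats/min) of the natural sound or music, per the
    control table's definition: the parameter is a difference from
    the user's heart or respiratory rate. Negative offsets play
    slower than the heartbeat (relaxing); positive offsets play
    faster (promoting waking)."""
    return heart_rate_bpm + tempo_offset

assert playback_tempo(60, -3) == 57  # "sleep onset" row: 3 below the heart rate
assert playback_tempo(60, 2) == 62   # "wake up" row: 2 above the heart rate
```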
  • "1/f" means that the amplitude, the tempo, the frequency, and the like are given 1/f fluctuation.
  • by adding such fluctuation, the sound source device 20 can further relax the user E.
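A 1/f fluctuation curve of the kind the table entry refers to can be generated by shaping white noise in the frequency domain so that its power falls off as 1/f, then using the curve to modulate amplitude, tempo, or frequency. This is a generic sketch, not the patent's method; all names are assumptions.

```python
import numpy as np

def one_over_f_fluctuation(n=4096, seed=0):
    """Generate a 1/f ("pink") fluctuation curve normalized to [-1, 1].

    White noise is shaped in the frequency domain so that its power
    spectral density falls off as 1/f, then transformed back."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]              # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)       # power ~ 1/f  (amplitude ~ 1/sqrt(f))
    pink = np.fft.irfft(spectrum, n)
    return pink / np.abs(pink).max()

curve = one_over_f_fluctuation()
# e.g. modulate the volume gently: volume = base_volume * (1 + 0.1 * curve)
```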
  • the control unit 140 determines the control table 70 according to the biological information.
  • the sound source device 20 of the present embodiment may repeat the operation of the flowchart shown in FIG. 4 periodically (for example, every few seconds).
  • the contents of the control table 70 change based on the change in the biological information.
  • in the example of FIG. 6, the tempo of the natural sound in control mode 2 is set to an even lower "-4".
  • the control table 70 of FIG. 6 is thus an example in which the sleep-induction effect is enhanced compared with the example of FIG. 5.
  • the control unit 140 changes the control table 70 to the contents shown in FIG. 6 when, for example, the pulse, respiration, and body movement decrease.
  • alternatively, the control unit 140 may change the control table 70 to the contents shown in FIG. 6 when, for example, there is no change in pulse, respiration, and body movement.
  • control unit 140 may change any of the other "relaxation", "good sleep", or “wake up” contents based on the biometric information acquired by the biometric information acquisition unit 30.
  • in the example of FIG. 7, the tempo of the music in control mode 4 is set to "3", and the volume is set to "8".
  • the control unit 140 changes the control table 70 to the contents shown in FIG. 7 when, for example, the pulse, respiration, and body movement increase.
  • the control table 70 of FIG. 7 is an example in which the awakening effect is enhanced compared with the example of FIG. 5.
  • alternatively, the control unit 140 may change the control table 70 to the contents shown in FIG. 7 when, for example, there is no change in pulse, respiration, and body movement.
  • the control unit 140 may record in the flash memory 24 the time from the output of the sleep-introduction sound or the awakening sound until the transition to the sleep state or the awake state.
  • the control unit 140 records this time for each sound source and each parameter. The control unit 140 may then learn, using a predetermined algorithm, sound sources and parameters with a high sleep-induction effect.
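The patent does not specify the learning algorithm. As one minimal stand-in, the recorded times can be averaged per (sound source, parameter) setting and the setting with the shortest average time to sleep selected. All class and method names below are hypothetical.

```python
from collections import defaultdict

class SleepOnsetLog:
    """Record, per (sound source, parameter) setting, how long the
    user took to fall asleep, and report the most effective setting.
    A simple per-setting average stands in for the patent's
    unspecified "predetermined algorithm"."""
    def __init__(self):
        self._times = defaultdict(list)  # setting -> list of minutes

    def record(self, setting: tuple, minutes_to_sleep: float) -> None:
        self._times[setting].append(minutes_to_sleep)

    def best_setting(self) -> tuple:
        # setting with the lowest average time to fall asleep
        return min(self._times,
                   key=lambda s: sum(self._times[s]) / len(self._times[s]))

log = SleepOnsetLog()
log.record(("natural", "tempo=-3"), 20)
log.record(("natural", "tempo=-4"), 12)
log.record(("music", "tempo=-3"), 25)
```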
  • in the above example, the control unit 140 changed the contents of the control table 70 itself.
  • the control unit 140 may select one control table from a plurality of control tables based on biological information.
  • the flash memory 24 stores a plurality of control tables corresponding to biometric information.
  • a plurality of control tables corresponding to biometric information may be stored in the server.
  • the control unit 140 transmits the biometric information to the server and acquires the corresponding control table.
  • control unit 140 may determine the control table 70 based on information such as the age, gender, nationality, etc. of the user in addition to the biometric information.
  • the contents of the control table 70 with respect to information such as the age, gender, and nationality of the user are recorded in, for example, a server (not shown).
  • the server records the contents of the control table 70 for information such as the age, gender, or nationality of the user from a large number of devices.
  • the server may learn these and learn the optimum control table 70 for information such as the age, gender, or nationality of the user.
  • the control unit 140 transmits information such as the age, gender, or nationality of the user to the server via the communication unit 21, and receives the corresponding control table 70.
  • the control unit 140 may also identify the user and determine the control table 70 based on the identification result in addition to the biometric information.
  • the user E edits the contents of the control table 70 via the user I / F26.
  • User E changes various parameters such as the type of natural sound, the type of music, the tempo, or the volume.
  • the control unit 140 records the edited contents of the user E in the flash memory 24.
  • the control unit 140 may learn the user's favorite parameters by using a predetermined algorithm according to the edited contents of the control table 70 of the user E. As a result, the control unit 140 determines the control table 70 according to the user's preference.
  • the sound source device 20 may output a relaxing sound to the user E in the living room or the like when the user E is detected to be in an excited state.
  • the sound source device 20 may output an awakening sound to the user E in the office when the user E is detected to be in a sleep state.
  • FIG. 8 is a schematic view showing the configuration of the sound reproduction system 1A including the sound source device 20A according to the second embodiment.
  • the same configurations as those shown in FIG. 1 are designated by the same reference numerals, and the description thereof will be omitted.
  • the sound reproduction system 1A includes an array speaker 50. Similar to the speaker 51 and the speaker 52 of FIG. 1, the array speaker 50 outputs various sounds including a relaxing sound, a sleep introducing sound, a good sleep sound, an awakening sound, and the like.
  • the array speaker 50 includes a plurality of speakers.
  • the array speaker 50 can control the directivity by controlling the volume and the sound emission timing of the sound signals supplied to the plurality of arranged speakers.
  • the array speaker 50 is arranged at a predetermined position away from the bed 5.
  • the array speaker 50 arranges a plurality of speakers in a direction parallel to the minor axis direction of the bed 5.
  • the sound source device 20A and the array speaker 50 are connected by a wireless communication function or a wired communication function.
  • the array speaker 50 amplifies the stereo left (L) channel sound signal and the stereo right (R) channel sound signal output from the sound source device 20A with its built-in amplifiers and emits the sound.
  • FIG. 9 is a block diagram showing the configuration of the sound source device 20A.
  • the sound source device 20A includes the same hardware as the sound source device 20. Therefore, each hardware configuration is designated by the same reference numeral, and the description thereof will be omitted.
  • the sound source device 20A is an example of an indoor sound environment generation device.
  • the flash memory 24 of the sound source device 20A further stores a program for configuring the position information acquisition unit 75 and the estimation unit 80.
  • the processor 22 of the sound source device 20A further constitutes the position information acquisition unit 75 and the estimation unit 80 by the program read from the flash memory 24.
  • the position information acquisition unit 75 acquires information regarding the position of the user (for example, coordinates in the room).
  • the estimation unit 80 estimates the physical and mental state of the user based on the biological information. For example, the estimation unit 80 determines that the user is in an excited state when the biological information such as pulse, respiration, body movement, brain wave, or blood pressure is a high value (greater than or equal to a predetermined threshold value).
  • the estimation unit 80 determines that the user is falling asleep when biological information such as the pulse, respiration, body movement, brain waves, or blood pressure is low (below a predetermined threshold) and these values further decrease with the passage of time. Further, the estimation unit 80 determines that the user is sleeping when it detects brain waves corresponding to REM sleep or non-REM sleep. Further, the estimation unit 80 determines that the user is waking up when such biological information is low (below a predetermined threshold) and these values increase with the passage of time.
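The estimation rules above combine an absolute level with a trend over time. They can be sketched with a single signal as follows; the 90 beats/min threshold, the use of pulse alone, and the state labels are illustrative assumptions:

```python
def estimate_state(pulse_now: float, pulse_before: float,
                   high: float = 90.0, rem_detected: bool = False) -> str:
    """Estimate the user's physical/mental state from one biometric
    signal (pulse) and its trend, following the rules in the text."""
    if rem_detected:
        return "sleeping"          # EEG shows REM or non-REM sleep
    if pulse_now >= high:
        return "excited"           # high value, at or above the threshold
    if pulse_now < pulse_before:
        return "falling asleep"    # low and still decreasing over time
    if pulse_now > pulse_before:
        return "waking up"         # low but increasing over time
    return "steady"
```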
  • FIG. 10 is a flowchart showing the operation of the sound source device 20A.
  • the biometric information acquisition unit 30 acquires biometric information via the communication unit 21, and the position information acquisition unit 75 acquires the position information of the user E (S21).
  • the position information acquisition unit 75 acquires position information via, for example, the sensor 12 or the sensor 13.
  • the sensor 12 is laid under the pillow and the sensor 13 is laid on the bed. Therefore, when the position information acquisition unit 75 acquires the biological information from the sensor 12 or the sensor 13, the position information acquisition unit 75 determines that the user E is in the bed 5 and the head position exists at the position of the pillow.
  • the position information is represented by, for example, the coordinates when the room is viewed in a plane.
  • the user E inputs the coordinates of the sensor 12, the sensor 13, and the array speaker 50 in advance via the user I/F 26.
  • the position information acquisition unit 75 acquires the coordinates of the sensor 12, the sensor 13, and the array speaker 50 in advance.
  • the position information acquisition unit 75 may acquire the position information via the sensor 11 worn by the user.
  • the sensor 11 transmits, for example, a Bluetooth® beacon signal.
  • the position information acquisition unit 75 measures the distance to the sensor 11 based on the received radio wave intensity of the beacon signal. Since the radio wave intensity is inversely proportional to the square of the distance, it can be converted into information regarding the distance between the sensor 11 and the sound source device 20A.
  • the position information acquisition unit 75 can uniquely identify the position of the sensor 11 by acquiring three or more pieces of information on the distance.
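The two steps described here, converting received signal strength into a distance and then fixing a unique position from three or more distances, can be sketched as follows. The log-distance path-loss model (with exponent 2, matching the inverse-square relation stated in the text) and the 1 m reference power are assumptions:

```python
import math

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                     path_loss_exp: float = 2.0) -> float:
    """Convert received signal strength into a distance (m) with the
    log-distance path-loss model; tx_power_dbm is the RSSI at 1 m.
    Exponent 2 corresponds to free-space inverse-square decay."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a beacon from three anchor points and their distances by
    subtracting pairs of circle equations, which leaves a 2x2 linear
    system in the unknown (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```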
  • the array speaker 50 may receive the beacon signal of the sensor 11, and the position information acquisition unit 75 may receive information regarding the received radio wave intensity of the beacon signal from the array speaker 50.
  • the user E may set a plurality of terminals for receiving the beacon signal in the room. In this case, the position information acquisition unit 75 receives information on the received radio field strength of the beacon signal from a plurality of terminals.
  • the position information acquisition unit 75 may also identify the position of the user E using a temperature sensor. For example, when the position information acquisition unit 75 detects an object of about 36 degrees Celsius via the temperature sensor, it determines that the object is the user E and acquires the coordinates of the object as the position information.
  • the sound source device 20A acquires a sound signal (S22).
  • the processor 22 acquires a sound signal by the biological information acquisition unit 30, the sound source unit 40, the control unit 140, and the reading unit 145, as in the embodiment shown in the functional block diagram of FIG.
  • the sound source device 20A according to the second embodiment does not need to acquire the sound signal in the mode shown in the first embodiment.
  • the sound source device 20A may acquire a sound signal by reading out a specific content stored in the flash memory 24.
  • the sound source device 20A may acquire a sound signal by receiving a specific content from an information processing terminal such as a smartphone owned by the user or another device such as a server.
  • the control unit 140 performs sound image localization processing based on the biological information and the position information (S23).
  • the sound image localization process is, for example, a process of controlling the directivity by controlling the volume and the sound emission timing of the sound signals supplied to the plurality of speakers of the array speaker 50.
  • based on the position information, the control unit 140 directs the directivity of the sound output from the array speaker 50 toward the position of the user E, as shown in FIG. 11. Further, the control unit 140 controls the directivity based on the biological information. For example, even when the user E is at the position of the bed 5, the control unit 140 outputs the sound of content such as music to the entire room when the estimation unit 80 determines that the user E is awake. Alternatively, the control unit 140 may output the sound of content such as music to the entire room when the user E is at a place other than the bed 5. Further, the control unit 140 may output an awakening sound when the estimation unit 80 determines that the user E is sleeping even when the user E is at a place other than the bed 5.
  • the control unit 140 performs the sound image localization process that controls the directivity, and then outputs the sound signal to the array speaker 50 via the audio I/F 27 (S24).
  • the control unit 140 may output the sound signal after itself adjusting the volume and the sound emission timing for the array speaker 50, or it may output the sound signal together with information indicating the volume and the sound emission timing of each channel (sound image localization information) to the array speaker 50. In the latter case, the array speaker 50 adjusts the volume and the sound emission timing.
  • the control unit 140 may output a sound signal and information for controlling the directivity (for example, coordinates indicating the direction of the sound) to the array speaker 50. In this case, the array speaker 50 calculates the volume adjustment amount and the sound emission timing adjustment amount.
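Controlling directivity by adjusting the sound emission timing of each array element is classic delay-and-sum beamforming: each speaker is delayed so that all wavefronts arrive at the listener's position simultaneously. A minimal sketch, with hypothetical names:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def steering_delays(speakers, target, rate=44100):
    """Per-speaker delays (in samples) that steer an array's main lobe
    toward a target point.

    speakers: list of (x, y) positions of the array elements (m).
    target:   (x, y) position of the listener (m).
    """
    dists = [math.dist(s, target) for s in speakers]
    far = max(dists)
    # the farthest speaker plays first (zero delay); nearer speakers
    # wait so that all wavefronts reach the target at the same time
    return [round((far - d) / SPEED_OF_SOUND * rate) for d in dists]
```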
  • FIG. 12 is a flowchart showing the operation of the sound source device 20A according to the modified example of the second embodiment.
  • the same reference numerals are given to the operations common to those in FIG. 10, and the description thereof will be omitted.
  • the control unit 140 of the sound source device 20A further identifies the user (S200).
  • that is, the control unit 140 functions as an identification unit that identifies the user.
  • the control unit 140 performs the sound image localization process based on the identification result in addition to the biological information and the position information. For example, the control unit 140 controls the directivity so that various sounds reach only the identified user.
  • the control unit 140 directs the directivity of the sound output by the array speaker 50 to the position where the specific user E2 is. As a result, it is difficult for the user E1 to hear the sound output by the array speaker 50. For example, when the user E1 sets the wake-up time at 8:00 am and the user E2 sets the wake-up time at 9:00 am, the control unit 140 outputs an awakening sound to the user E2 at 8:00 am. In this case, only the user E2 can hear the awakening sound without disturbing the user E1 to go to bed.
  • the sound source device 20A may perform a process of acquiring the cancel sound and localizing the cancel sound to a person other than a specific user.
  • the control unit 140 outputs the sound beam B1 related to the awakening sound to the specific user E2.
  • the control unit 140 outputs the sound beam B2 related to the cancel sound to the other user E1.
  • the cancel sound is a sound whose phase is opposite to that of the sound beam B1 of the awakening sound.
  • the sound beam B2 of the cancel sound can therefore cancel the awakening sound that leaks outside the sound beam B1. The awakening sound is thus heard only by the user E2, without further disturbing the sleep of the user E1.
  • an example of controlling the directivity of the array speaker is shown as an example of sound image localization processing.
  • the sound image localization process can be performed by physically changing the sound emission direction of the directional speaker by a motor or the like.
  • the sound image localization process can also be performed by arranging a plurality of speakers in the room and outputting a sound such as an awakening sound only to the speaker at the position closest to the specific user.
  • the sound image localization process may be, for example, a process of convolving a head-related transfer function into a sound signal.
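Sound image localization by head-related transfer function amounts, in the time domain, to convolving the mono signal with a measured head-related impulse response (HRIR) pair. The toy impulse responses below merely stand in for measured data:

```python
import numpy as np

def localize_with_hrir(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally by convolving it with a
    head-related impulse response (HRIR) pair, the time-domain
    equivalent of applying a head-related transfer function.
    Both HRIRs must have the same length so the channels align."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

# toy HRIR pair standing in for measured data: a source on the left
# reaches the right ear later and attenuated
mono = np.random.default_rng(0).standard_normal(1000)
out = localize_with_hrir(mono,
                         np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 0.6]))
```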
  • E, E1, E2: user; 1, 1A: sound reproduction system; 5: bed; 11, 12, 13: sensor; 20, 20A: sound source device; 21: communication unit; 22: processor; 23: RAM; 24: flash memory; 25: display; 26: user I/F; 27: audio I/F; 30: biological information acquisition unit; 40: sound source unit; 50: array speaker; 51, 52: speaker; 70: control table; 75: position information acquisition unit; 80: estimation unit; 140: control unit; 145: reading unit; 410, 420, 430, 440: sound source unit

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention provides an indoor sound environment generation apparatus comprising: a sound signal acquisition unit that acquires a sound signal; a sound signal output unit that outputs the sound signal; a biological information acquisition unit that acquires biological information of a user; a position information acquisition unit that acquires position information of the user; and a control unit that controls sound image localization of the sound signal on the basis of the biological information and the position information.
PCT/JP2020/013201 2020-03-25 2020-03-25 Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method WO2021192072A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/013201 WO2021192072A1 (fr) 2020-03-25 2020-03-25 Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/013201 WO2021192072A1 (fr) 2020-03-25 2020-03-25 Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method

Publications (1)

Publication Number Publication Date
WO2021192072A1 true WO2021192072A1 (fr) 2021-09-30

Family

ID=77891254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/013201 WO2021192072A1 (fr) Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method

Country Status (1)

Country Link
WO (1) WO2021192072A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024009677A1 (fr) * 2022-07-04 2024-01-11 Yamaha Corporation Sound processing method, sound processing device, and program
WO2024053123A1 (fr) * 2022-09-05 2024-03-14 Panasonic IP Management Co., Ltd. Playback system, playback method, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009510534A (ja) * 2005-10-03 2009-03-12 Mysound ApS System for providing a reduction in the perception of audible noise to a human user
WO2018074224A1 (fr) * 2016-10-21 2018-04-26 Daisy Co., Ltd. Atmosphere generation system, method, and program, and atmosphere estimation system
WO2018079846A1 (fr) * 2016-10-31 2018-05-03 Yamaha Corporation Signal processing device, signal processing method, and program

Similar Documents

Publication Publication Date Title
AU2004202501B2 (en) Control apparatus and control method
US9978358B2 (en) Sound generator device and sound generation method
US6369312B1 (en) Method for expressing vibratory music and apparatus therefor
JP4739762B2 (ja) Audio reproduction device, audio feedback system and method
WO2021192072A1 (fr) Indoor sound environment generation apparatus, sound source apparatus, indoor sound environment generation method, and sound source apparatus control method
WO2016136450A1 (fr) Sound source control apparatus, sound source control method, and computer-readable storage medium
US20170182284A1 (en) Device and Method for Generating Sound Signal
US10831437B2 (en) Sound signal controlling apparatus, sound signal controlling method, and recording medium
JP6477300B2 (ja) Sound source device
US20210046276A1 (en) Mood and mind balancing audio systems and methods
JP2011130099A (ja) Sound environment generation device for falling asleep and waking up
JP2017070571A (ja) Sound source device
WO2018235629A1 (fr) Signal waveform generation device for biological stimulation
WO2017002703A1 (fr) Audio signal generation device, audio signal generation method, and computer-readable recording medium
JP3868326B2 (ja) Sleep induction device and psychophysiological effect imparting device
KR102220738B1 (ko) Brainwave stimulation device for treating depression and dementia
JP2018068962A (ja) Sound sleep device
JPH0678998A (ja) Acoustic signal control device
KR101988423B1 (ko) Vibration system using sound
KR101611362B1 (ko) Audio device for healthcare
JP2011130100A (ja) Sound environment generation device for falling asleep and waking up
JP2017070342A (ja) Content reproduction device and program thereof
JPH03128066A (ja) Sound field effect processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926833

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926833

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP