WO2019155650A1 - Loudspeaker device and control method - Google Patents


Info

Publication number
WO2019155650A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
signal
sound signal
exciter
soundboard
Prior art date
Application number
PCT/JP2018/023270
Other languages
French (fr)
Japanese (ja)
Inventor
孝輔 佐々木
郁哉 井川
森島 守人
泰央 新田
Original Assignee
Yamaha Corporation (ヤマハ株式会社)
Priority date
Filing date
Publication date
Application filed by Yamaha Corporation
Publication of WO2019155650A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00 Details of transducers, loudspeakers or microphones
    • H04R 1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 7/00 Diaphragms for electromechanical transducers; Cones
    • H04R 7/02 Diaphragms for electromechanical transducers; Cones characterised by the construction
    • H04R 7/04 Plane diaphragms
    • H04S STEREOPHONIC SYSTEMS
    • H04S 1/00 Two-channel systems

Definitions

  • The present invention relates to a speaker device and a control method.
  • This application claims priority based on Japanese Patent Application No. 2018-020393, filed in Japan on February 7, 2018, the contents of which are incorporated herein.
  • Patent Document 1 discloses a technique for generating sound by resonating a flat soundboard.
  • Patent Document 2 discloses a technique for transmitting sound to a listener by bringing a bone conduction speaker built in a pillow into contact with the head.
  • The present invention has been made in view of such a situation, and an object thereof is to provide a speaker device and a control method capable of controlling the output sound according to the surrounding situation.
  • One aspect of the present invention is a speaker device including a situation signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard.
  • The situation signal acquisition unit acquires a situation signal indicating the surrounding situation.
  • The sound signal acquisition unit acquires a sound signal input to the speaker device.
  • The signal processing unit controls the sound signal based on the situation signal.
  • The exciter vibrates according to the sound signal controlled by the signal processing unit.
  • The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • One embodiment of the present invention is a speaker device that includes a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard.
  • The sound signal acquisition unit acquires a sound signal input to the speaker device.
  • The signal processing unit controls the sound signal based on time.
  • The exciter vibrates according to the sound signal controlled by the signal processing unit.
  • The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • One embodiment of the present invention is a speaker device including a situation signal acquisition unit, a sound signal acquisition unit, a soundboard, a right exciter, a left exciter, and a signal processing unit.
  • The situation signal acquisition unit acquires a situation signal indicating the surrounding situation.
  • The sound signal acquisition unit acquires a sound signal input to the speaker device.
  • The soundboard outputs sound by vibrating in the plate thickness direction.
  • The right exciter is connected to the right side of the soundboard and vibrates according to a stereo right sound.
  • The left exciter is connected to the left side of the soundboard and vibrates according to a stereo left sound.
  • The signal processing unit controls the sound signal based on the situation signal, outputs the stereo right sound of the sound signal to the right exciter, outputs the stereo left sound of the sound signal to the left exciter, and reduces at least one of the sound corresponding to the vibration of the left exciter output from the right side of the soundboard and the sound corresponding to the vibration of the right exciter output from the left side of the soundboard.
  • One embodiment of the present invention is a speaker device including a situation signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard.
  • The situation signal acquisition unit acquires a situation signal indicating the surrounding situation.
  • The sound signal acquisition unit acquires a sound signal input to the speaker device.
  • The signal processing unit controls the sound signal when the situation signal includes sound.
  • The exciter vibrates according to the sound signal controlled by the signal processing unit.
  • The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • The situation signal acquisition unit acquires a situation signal indicating the surrounding situation.
  • The sound signal acquisition unit acquires a sound signal input to the speaker device.
  • The signal processing unit controls the sound signal based on the situation signal.
  • The exciter vibrates according to the sound signal controlled by the signal processing unit.
  • The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • FIG. 1 is a block diagram illustrating a configuration example of the speaker device 1 according to the first embodiment.
  • A sensor signal is input to the speaker device 1.
  • The sensor signal is an example of the “situation signal”.
  • The sensor signal is a signal detected by a sensor that detects the surrounding situation.
  • The surrounding situation is the situation around the place where the speaker device 1 is provided.
  • The sensor signal is, for example, a signal indicating the presence or absence of an object that changes the sound output by the speaker device 1, or a signal indicating the state of the space in which the speaker device 1 outputs sound (for example, the illuminance or the background noise level of the room in which the speaker device 1 is provided).
  • The object that changes the sound output from the speaker device 1 is an object that changes or obstructs the propagation of the sound from the speaker device 1.
  • When the speaker device 1 is provided in bedding (for example, a mattress 400 or a pillow 401; see FIG. 2), such an object is a pillow or a human head that can be placed on the plate surface of the soundboard 13 (see FIG. 2).
  • An input sound signal is input to the speaker device 1.
  • The input sound signal is an example of the “sound signal”.
  • The speaker device 1 outputs a sound obtained by changing the input sound signal according to the surrounding situation.
  • The speaker device 1 includes, for example, a signal processing unit 10, an amplifier unit 11, an exciter 12, and a soundboard 13.
  • The signal processing unit 10 is, for example, a signal processing circuit such as a DSP (Digital Signal Processor); it controls the input sound signal by performing signal processing on it based on the sensor signal, and outputs the controlled input sound signal to the amplifier unit 11.
  • The signal processing unit 10 may be realized by a processor such as a CPU (Central Processing Unit) executing a program stored in a storage unit (not shown). All or part of the signal processing unit 10 may also be realized by dedicated hardware such as an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
  • The amplifier unit 11 amplifies the sound signal output from the signal processing unit 10 and outputs the amplified sound signal to the exciter 12.
  • The exciter 12 is, for example, a dynamic or piezoelectric actuator, and vibrates according to the sound signal from the amplifier unit 11.
  • The soundboard 13 is a plate-like member that resonates sound, made, for example, of a base material such as carbon or hard plastic.
  • The soundboard 13 is physically connected (adhered) to the exciter 12 and vibrates in the plate thickness direction in accordance with the vibration of the exciter 12, thereby outputting sound in the direction perpendicular to the plate surface.
  • FIG. 2 is a perspective view illustrating an example of the appearance of the speaker device 1 according to the first embodiment.
  • The speaker device 1 may include a main body 200, a housing that contains electronic components such as the signal processing unit 10, the amplifier unit 11, and the exciter 12.
  • Since the main body 200 is a hard casing, a space in which the exciter 12 contained in the main body 200 can vibrate in the vertical direction (that is, the plate thickness direction of the soundboard 13) can be secured. This prevents the exciter 12 from being pressed against the bedding and the like, which would weaken or restrict its vibration.
  • The hard casing also protects the electronic components from external impact.
  • If the main body 200 is a sealed container, enclosing the electronic components in it prevents moisture and dust from entering their connection parts.
  • A plurality of exciters 12-1 and 12-2 may be connected to the soundboard 13.
  • The plurality of exciters 12 are controlled by the signal processing unit 10 so as to vibrate in the same phase.
  • This allows the entire soundboard 13 to vibrate in the same phase.
  • When connecting the plurality of exciters 12 to the soundboard 13, it is preferable to arrange them at balanced positions.
  • For example, one exciter 12 is arranged in the central part of the connection region between the main body 200 and the soundboard 13, and one each between the central part and each corner.
  • A sheet-like sensor 300 may be provided on the soundboard 13.
  • The sensor 300 is, for example, a weight sensor (strain gauge) that detects the weight of an object placed on the plate surface of the soundboard 13.
  • A signal indicating the weight of the object detected by the sensor 300 is input to the speaker device 1 wirelessly or by wire.
  • The signal detected by the sensor 300 is not limited to a signal indicating the weight of the object; it may be a signal indicating the displacement, pressure, or acceleration of an object placed on the plate surface of the soundboard 13.
  • The soundboard 13 may be provided between the mattress 400 and the pillow 401.
  • In this case, the soundboard 13 outputs sound from below the pillow placed on its surface.
  • The sound output from the surface of the soundboard 13 behaves as a surface sound source and travels straight in the direction perpendicular to the plate surface (the z-axis direction). For this reason, it reaches the ears of the listener in the z-axis direction of the soundboard 13 but hardly reaches others in the x-axis and y-axis directions, so sound leakage is unlikely to be heard by others.
  • FIG. 3 is a block diagram illustrating a configuration example of the signal processing unit 10 according to the first embodiment.
  • The signal processing unit 10 includes, for example, a sensor signal acquisition unit 100, an input sound signal acquisition unit 101, a sound signal correction unit 102, a content changing unit 103, and a correction information storage unit 104.
  • The sensor signal acquisition unit 100 is an example of the “situation signal acquisition unit”.
  • The sound signal correction unit 102 is an example of a “control unit”.
  • The content changing unit 103 is also an example of a “control unit”.
  • The sensor signal acquisition unit 100 acquires the sensor signal input from the external sensor 300 to the speaker device 1 and outputs the acquired sensor signal to the sound signal correction unit 102.
  • The input sound signal acquisition unit 101 acquires the input sound signal input to the speaker device 1 from the outside and outputs the acquired sound signal to the sound signal correction unit 102.
  • The sound signal correction unit 102 corrects the input sound signal based on the sensor signal acquired from the sensor signal acquisition unit 100.
  • For example, the sound signal correction unit 102 corrects the acoustic characteristics of the input sound signal based on the sensor signal.
  • The acoustic characteristics here are the characteristics of the sound output from the soundboard 13, for example its frequency characteristics. In general, even when the exciter 12 is vibrated in the same way, the acoustic characteristics of the output sound differ depending on the surroundings of the soundboard 13 and on the material of, and individual differences among, soundboards 13.
  • The sound signal correction unit 102 therefore corrects the acoustic characteristics of the input sound signal in accordance with the surroundings of the soundboard 13 so that the acoustic characteristics remain substantially constant even when the surrounding conditions change.
  • The correction data indicates, for example, the inverse of the frequency characteristics measured by outputting sound from the soundboard 13 at the same signal level in each frequency band and acquiring the sound at a predetermined position with a microphone or the like.
  • The sound signal correction unit 102 refers to the correction information storage unit 104, which stores correction data for the acoustic characteristics associated with situations around the soundboard 13, and corrects the input sound signal based on the referenced correction data.
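The inverse-characteristic idea can be sketched in a few lines of Python. This is purely illustrative: the band names, measured levels, and flat target are assumptions, not values from the application.

```python
def inverse_correction(measured_db, target_db=0.0):
    """Return per-band gains that flatten a measured frequency response.

    measured_db maps a frequency band to the level (in dB) picked up by the
    measurement microphone; the returned gains form the inverse characteristic.
    Band layout and the flat target are illustrative assumptions.
    """
    return {band: target_db - level for band, level in measured_db.items()}

# Example: a response that sags 3 dB in the low band and peaks 2 dB in the mid
gains = inverse_correction({"low": -3.0, "mid": 2.0, "high": 0.0})
```

Applying these gains to the input sound signal, band by band, would restore a flat response at the measurement position.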
  • The correction information storage unit 104 stores, for example, correction data corresponding to each of the following situations around the soundboard 13: no object is placed on the plate surface of the soundboard 13 (case A), a pillow is placed (case B), and a pillow and a head are placed (case C).
  • The sound signal correction unit 102 determines which of cases A to C applies based on the sensor signal, selects correction data according to the determination result, and corrects the input sound signal using the selected correction data.
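The case determination and correction selection might look like the following sketch. The weight thresholds and the contents of the correction table are invented for illustration and do not appear in the application.

```python
# Assumed correction table: one entry per mounting situation (cases A to C).
CORRECTION_DATA = {
    "A": {"bass_gain_db": 0.0, "treble_gain_db": 0.0},  # nothing on the board
    "B": {"bass_gain_db": 3.0, "treble_gain_db": 1.5},  # pillow only
    "C": {"bass_gain_db": 6.0, "treble_gain_db": 4.0},  # pillow and head
}

def classify_case(weight_kg):
    """Map the weight reported by the sensor 300 to case A, B, or C.

    The thresholds are assumptions: a pillow alone typically weighs well
    under 2 kg, while a pillow plus a head weighs considerably more.
    """
    if weight_kg < 0.2:
        return "A"
    if weight_kg < 2.0:
        return "B"
    return "C"

def select_correction(weight_kg):
    """Pick the correction data for the current mounting situation."""
    return CORRECTION_DATA[classify_case(weight_kg)]
```

The selected entry would then drive the acoustic-characteristic correction described above.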
  • The sound signal correction unit 102 also corrects the head-related transfer function of the input sound signal based on the sensor signal, for example.
  • By correcting the head-related transfer function, the sound signal correction unit 102 manipulates the position of the sound source and the direction of the sound as perceived by the listener. This reduces the uncomfortable sensation that the sound source is inside the head when the listener hears the sound from positions close to both ears.
  • The sound signal correction unit 102 refers to the correction information storage unit 104, in which correction data for the head-related transfer function is stored, and corrects the input sound signal based on the referenced correction data. For example, when the sound signal correction unit 102 determines, based on the sensor signal, which of cases A to C applies and concludes that a head is placed on the soundboard 13, it corrects the input sound signal using the correction data of the head-related transfer function.
  • The sound signal correcting unit 102 may also change the volume of the input sound signal based on the sensor signal. For example, the sound signal correcting unit 102 determines which of cases A to C applies based on the sensor signal, and changes the volume of the input sound signal when the state changes from one in which the head is placed on the soundboard 13 (via a pillow) to one in which it is not, that is, when the listener changes from lying down to sitting up. This makes it possible to mute the sound when the ear moves away from the speaker, or to raise the volume so that the sound can still be heard with the ear away from the speaker.
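One possible policy for the volume change on such a transition, sketched in Python. The mute-on-rise behaviour is one of the two options the text mentions, and the restore level is an assumed value.

```python
def volume_on_transition(prev_case, case, current_volume):
    """Mute when the head leaves the soundboard (case C -> A or B);
    restore an assumed default level when it returns."""
    if prev_case == "C" and case in ("A", "B"):
        return 0.0   # ear has moved away from the speaker: mute
    if prev_case in ("A", "B") and case == "C":
        return 0.7   # head is back on the pillow: assumed default level
    return current_volume
```

The alternative policy (raising the volume when the ear moves away) would simply swap the branches.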
  • The sound signal correction unit 102 may also reduce the vocals of the input sound signal based on the sensor signal.
  • By reducing vocals, which are more easily recognized by humans than other sounds, the sound signal correction unit 102 suppresses the sound output from the speaker device 1 from being noticed by persons other than the listener.
  • For example, the sound signal correction unit 102 reduces the vocals of the input sound signal when a sensor signal indicating illuminance shows that the illuminance is less than a predetermined threshold.
  • Similarly, the sound signal correction unit 102 reduces the vocals of the input sound signal when a sensor signal indicating the background noise level shows, for example, that the background noise level is less than a predetermined threshold.
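The application does not specify how vocals are reduced; one common technique that fits the description is mid/side processing, which attenuates the centre-panned component where vocals usually sit. The `amount` parameter below is an assumption.

```python
def reduce_vocals(left, right, amount=0.8):
    """Attenuate the mid (centre) component of a stereo signal.

    Vocals are typically mixed to the centre, so shrinking the mid while
    keeping the side leaves the accompaniment mostly intact. 'amount' is
    an assumed attenuation factor (1.0 removes the centre entirely).
    """
    out_left, out_right = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0 * (1.0 - amount)
        side = (l - r) / 2.0
        out_left.append(mid + side)
        out_right.append(mid - side)
    return out_left, out_right
```

With `amount=1.0`, a perfectly centred signal cancels completely, while side-only content passes through unchanged.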
  • The sound signal correction unit 102 may also correct the auditory characteristics of the input sound signal based on the sensor signal.
  • For example, the sound signal correction unit 102 increases the signal level of high-frequency components (a relatively high band of the audible range) or low-frequency components (a relatively low band of the audible range) that are difficult for human hearing to perceive, and reduces the signal level in bands that human hearing perceives easily (for example, the intermediate band of the audible range excluding the relatively high and low bands). This suppresses the sound output from the speaker device 1 from being heard by persons other than the listener.
  • In this way, the sound signal correction unit 102 corrects the audibility characteristics of the input sound signal.
  • The sound signal correction unit 102 refers to the correction information storage unit 104, in which correction data for the auditory characteristics is stored, and corrects the auditory characteristics of the input sound signal based on the referenced correction data.
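A minimal sketch of the band weighting just described. The three-band split and the gain values in dB are illustrative assumptions.

```python
# Shift energy toward bands that human hearing perceives poorly and away
# from the easily perceived mid band; the gains are invented for illustration.
BAND_GAIN_DB = {"low": 4.0, "mid": -6.0, "high": 4.0}

def correct_auditory(band_levels_db):
    """Apply the per-band auditory-characteristic gains to signal levels."""
    return {band: level + BAND_GAIN_DB[band]
            for band, level in band_levels_db.items()}
```

In a real implementation the gain table would be one of the age-dependent entries read from the correction information storage unit 104.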
  • The sound signal correction unit 102 may correct the auditory characteristics according to the age of the listener or of a person other than the listener.
  • In this case, the sound signal correction unit 102 acquires input information indicating the age of the listener or of another person from an input information acquisition unit (not shown) that acquires information entered by the listener through an input operation or the like, and corrects the auditory characteristics based on the acquired input information.
  • The correction information storage unit 104 stores, for example, a plurality of auditory characteristic correction data, each associated with an age.
  • The sound signal correction unit 102 selects auditory characteristic correction data based on the age indicated in the acquired input information and corrects the auditory characteristics of the input sound signal using the selected correction data.
  • The sound signal correction unit 102 may also change the maximum volume of the input sound signal based on the sensor signal. For example, the sound signal correction unit 102 reduces the maximum volume of the input sound signal when the sensor signal indicating illuminance indicates illuminance less than a predetermined threshold.
  • The content changing unit 103 changes the content of the input sound signal based on the sensor signal.
  • The content here is what the sound of the input sound signal contains, for example a song.
  • For example, the content changing unit 103 changes the content when a sensor signal indicating illuminance shows that the illuminance is less than a predetermined threshold.
  • In this case, the content changing unit 103 changes the content to an environmental sound so that the listener can relax at night or when the lights are turned off.
  • An environmental sound is a sound that occurs naturally in an ordinary environment, such as birdsong or the sound of a waterfall. This makes it possible to output a relaxing environmental sound when the listener is sleeping or about to sleep, and to output sound based on the input sound signal when the listener is awake during the day; that is, sound can be output according to the listener's activity state.
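The illuminance-triggered content change reduces to a simple threshold test. In this sketch the threshold value and the content names are assumptions.

```python
ILLUMINANCE_THRESHOLD_LUX = 50.0  # assumed lights-off threshold

def choose_content(illuminance_lux, requested_content):
    """Below the threshold (night, or lights turned off), switch to an
    environmental sound; otherwise keep the input sound signal's content."""
    if illuminance_lux < ILLUMINANCE_THRESHOLD_LUX:
        return "environmental: birdsong"
    return requested_content
```

The same test could be driven by the background noise level instead of, or together with, the illuminance, as the flowchart description below notes.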
  • The content changing unit 103 may change the content by selecting a sound signal from a content acquisition unit (not shown) that acquires the sound signal to change to, or from a content storage unit (not shown) that stores such sound signals.
  • The correction information storage unit 104 stores correction data corresponding to the acoustic characteristics, correction data corresponding to the head-related transfer function, and correction data corresponding to the auditory characteristics.
  • As correction data corresponding to the acoustic characteristics, the correction information storage unit 104 may store a plurality of correction data, one for each acoustic characteristic corresponding to an object placed on the plate surface of the soundboard 13 (for example, cases A to C described above).
  • As correction data corresponding to the audibility characteristics, the correction information storage unit 104 may store a plurality of correction data, one for each audibility characteristic corresponding to a person's age.
  • FIGS. 4 to 6 are flowcharts showing an operation example of the signal processing unit 10 according to the first embodiment.
  • FIG. 4 is a flowchart showing the operation of the process in which the signal processing unit 10 corrects the acoustic characteristics.
  • FIG. 5 is a flowchart showing the operation of the process in which the signal processing unit 10 changes the content.
  • FIG. 6 is a flowchart showing the operation of the process in which the signal processing unit 10 corrects the maximum volume.
  • The sensor signal acquisition unit 100 of the signal processing unit 10 acquires a sensor signal indicating the weight of the object placed on the plate surface of the soundboard 13 (step S10).
  • The sound signal correction unit 102 determines the mounting situation on the soundboard 13 (for example, cases A to C described above) (step S11).
  • The sound signal correction unit 102 acquires correction data corresponding to the determined mounting situation (step S12).
  • For example, the sound signal correction unit 102 refers to the correction information storage unit 104 and acquires the correction data corresponding to the mounting situation.
  • The sound signal correction unit 102 corrects the acoustic characteristics of the input sound signal using the acquired correction data (step S13). Note that the sound signal correcting unit 102 may correct at least one of the head-related transfer function and the auditory characteristics together with, or instead of, the acoustic characteristics.
  • The sensor signal acquisition unit 100 acquires a sensor signal indicating the ambient illuminance (step S20).
  • The content changing unit 103 determines whether the illuminance is less than a predetermined threshold (step S21), and changes the content if it is (step S22).
  • The content changing unit 103 may acquire a sensor signal indicating the background noise level together with, or instead of, the illuminance, and change the content based on at least one of the illuminance and the background noise level.
  • The sensor signal acquisition unit 100 acquires a sensor signal indicating the surrounding background noise level (step S30).
  • The sound signal correcting unit 102 determines whether the background noise level is less than a predetermined threshold (step S31).
  • The sound signal correction unit 102 reduces the maximum volume when the background noise level is less than the predetermined threshold (step S32).
  • The sound signal correction unit 102 may acquire a sensor signal indicating illuminance together with, or instead of, the background noise level, and reduce the maximum volume based on at least one of the illuminance and the background noise level.
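The flow of steps S30 to S32, extended with the optional illuminance input, can be sketched as follows. The thresholds and volume levels are assumptions for illustration.

```python
def maximum_volume(noise_db, illuminance_lux=None,
                   noise_threshold_db=35.0, lux_threshold=50.0,
                   normal=1.0, reduced=0.4):
    """Lower the volume ceiling when the room is quiet or, if an
    illuminance reading is available, dark (thresholds are assumed)."""
    if noise_db < noise_threshold_db:
        return reduced
    if illuminance_lux is not None and illuminance_lux < lux_threshold:
        return reduced
    return normal

def clamp(signal, ceiling):
    """Limit sample amplitudes to the current maximum volume."""
    return [max(-ceiling, min(ceiling, s)) for s in signal]
```

In a quiet room the ceiling drops, so even a loud input signal is held to the reduced level.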
  • As described above, in the present embodiment the signal processing unit 10 controls the input sound signal based on the sensor signal indicating the surrounding situation. The sound signal can therefore be controlled according to the surrounding situation, and sound quality can be restored even when a change in the situation would degrade it, so high-quality sound can be output.
  • The speaker device 1 according to the present embodiment outputs a sound that travels straight by vibrating the soundboard 13, making the sound difficult for other people nearby to hear, so sound leakage can be suppressed.
  • In the speaker device 1 of the present embodiment, the soundboard 13 is provided between the pillow 401 and the mattress 400, the sensor signal acquisition unit 100 acquires a signal indicating the weight of the object placed on the plate surface of the soundboard 13, and the sound signal correcting unit 102 controls the input sound signal based on that signal.
  • The speaker device 1 according to the present embodiment can thus control the input sound signal depending on the situation, such as whether nothing, only a pillow, or a pillow and a head is placed on the plate surface of the soundboard 13.
  • Even though the sound quality differs between the cases where nothing, only a pillow, or a pillow and a head is placed on the plate surface of the soundboard 13, the speaker device 1 of the present embodiment can output sound of substantially the same quality regardless of the situation.
  • Because the sound signal correction unit 102 corrects at least one of the acoustic characteristics, the head-related transfer function, and the auditory characteristics of the input sound signal, the acoustic characteristics can be corrected and sounds with substantially the same characteristics can be output even when the surrounding situations differ.
  • FIG. 7 is a perspective view illustrating an example of the appearance of the speaker device 1A according to the first modification of the first embodiment.
  • In this modification, the soundboard 13 is composed of a right soundboard 13R and a left soundboard 13L, and the speaker device 1A outputs stereo sound.
  • Components common to the first embodiment are denoted by the same reference symbols, and their description is omitted.
  • The signal processing unit 10 controls the input sound signal based on the sensor signal and converts the controlled input sound signal into a stereo sound signal.
  • The signal processing unit 10 outputs the stereo right sound of the stereo sound signal to the right exciter 12-1 and the stereo left sound to the left exciter 12-2.
  • The signal processing unit 10 corrects the stereo sound so that its crosstalk is reduced.
  • Correction data for reducing crosstalk is stored in the correction information storage unit 104, and the signal processing unit 10 reduces the crosstalk of the stereo sound by correcting it with this correction data.
  • The correction data is generated by acquiring the sounds output from the right soundboard 13R and the left soundboard 13L with a microphone or the like.
  • Specifically, the stereo left sound component mixed into the sound acquired on the right soundboard 13R side and the stereo right sound component mixed into the sound acquired on the left soundboard 13L side are extracted.
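A first-order crosstalk canceller of the kind implied here subtracts an attenuated copy of the opposite channel before driving each exciter. The leakage ratio would come from the microphone measurement just described; the value 0.2 below is an assumption.

```python
def precompensate(right, left, leak=0.2):
    """Subtract the measured leakage of the opposite channel from each side.

    'leak' models the fraction of one side's sound that reaches the other
    side of the soundboard; here it is an assumed, measured constant.
    """
    out_right = [r - leak * l for r, l in zip(right, left)]
    out_left = [l - leak * r for r, l in zip(right, left)]
    return out_right, out_left

# With a right-only input, the sound heard on the left side
# (driven left + leak * driven right) cancels to first order.
out_r, out_l = precompensate([1.0], [0.0])
heard_left = out_l[0] + 0.2 * out_r[0]
```

A production implementation would use frequency-dependent (filter) leakage terms rather than a single constant, but the cancellation principle is the same.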
  • Modification 2 of the first embodiment differs from the above embodiment in that the sound signal correction unit 102 controls the input sound signal based on time instead of the illuminance or the background noise level.
  • The sound signal correction unit 102 acquires time information from a timer (not shown) of the speaker device 1 (1A) and controls the input sound signal when the acquired time reaches a predetermined time. For example, when the acquired time reaches the lights-out time, the sound signal correcting unit 102 reduces the vocals of the input sound signal. Vocals can thus be reduced at night or at bedtime, preventing sound leakage particularly when the sound output from the speaker device 1 is easily heard by others.
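The time-based control reduces to comparing the current time against a configured lights-out window; a sketch, with both boundary times assumed:

```python
from datetime import time

LIGHTS_OUT = time(22, 0)  # assumed lights-out time
MORNING = time(6, 0)      # assumed end of the quiet period

def vocal_reduction_active(now):
    """True during the night window, when vocals should be reduced."""
    return now >= LIGHTS_OUT or now < MORNING
```

When this returns true, the signal path would apply the vocal reduction (and, optionally, the volume or content changes mentioned below).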
  • The sound signal correction unit 102 may change the volume or maximum volume of the input sound signal, or the content, together with, or instead of, the control for reducing vocals.
  • In the above embodiment, when the sound signal correction unit 102 determines that a head is placed on the soundboard 13, it corrects the input sound signal using the correction data of the head-related transfer function.
  • The sound signal correction unit 102 may instead correct the head-related transfer function only when the head is placed at the center of the soundboard 13, or may determine whether to correct the head-related transfer function according to the orientation of the head placed on the soundboard 13 (the sleeping posture, that is, lying face up, lying on one's side, and so on).
  • The sound signal correction unit 102 determines the position and orientation of the head based on, for example, a plurality of sensors provided symmetrically with respect to the central portion of the soundboard 13, or by analyzing imaging data acquired from a camera (not shown) that images the listener's posture.
  • the speaker device 1 ⁇ / b> B in the present embodiment is applied, for example, to support the hearing (hearing) of the user's ear when the user in bed is in a situation where it is difficult to hear the sound.
  • the speaker device 1B corrects the voice of a conversation to a sound that is easy for the user to hear when the user has a conversation with a related person (for example, a nurse, a caregiver, a doctor, or a family member).
  • FIG. 8 is a block diagram illustrating a configuration example of the speaker device 1B according to the second embodiment.
  • the ambient sound signal collected by the sound collection device 300B is input to the speaker device 1B.
  • the sound collection device 300B is a device that collects ambient sounds at the place where the speaker device 1B is provided, and is, for example, a microphone.
  • the ambient sound signal includes ambient environmental sounds, voices of people talking to the listener, voices spoken by the listener, and the like.
  • the ambient sound signal is a signal indicating the state of the surrounding sound, and is an example of a “situation signal”.
  • a call sound signal from the communication device 500 is input to the speaker device 1B.
  • the communication device 500 is a communication device used by a listener for a call, and is, for example, a mobile phone or a landline phone.
  • the communication device 500 is a so-called nurse call device that performs a call between a medical worker and a patient in a hospital room.
  • the call sound signal is an example of a “sound signal”.
  • the speaker device 1B includes, for example, a signal processing unit 10B, an amplifier unit 11, an exciter 12, and a soundboard 13.
  • the signal processing unit 10B acquires the ambient sound signal, the input sound signal, and the call sound signal, selects any one of the acquired signals, corrects the selected signal, and outputs it.
  • FIG. 9 is a block diagram illustrating a configuration example of the signal processing unit 10B according to the second embodiment.
  • the signal processing unit 10B includes, for example, an ambient sound signal acquisition unit 105, an input sound signal acquisition unit 101, a sound signal correction unit 102B, a content change unit 103, a correction information storage unit 104, and a call sound signal acquisition unit 106. And an output signal selection unit 107.
  • the ambient sound signal acquisition unit 105 is an example of a “situation signal acquisition unit”.
  • the call sound signal acquisition unit 106 is an example of a “sound signal acquisition unit”.
  • the output signal selection unit 107 is an example of a “signal processing unit”.
  • the ambient sound signal acquisition unit 105 acquires the ambient sound signal and outputs the acquired ambient sound signal to the output signal selection unit 107.
  • the input sound signal acquisition unit 101 acquires an input sound signal and outputs the acquired input sound signal to the output signal selection unit 107.
  • the call sound signal acquisition unit 106 acquires a call sound signal and outputs the acquired call sound signal to the output signal selection unit 107.
  • the output signal selection unit 107 selects one of the ambient sound signal from the ambient sound signal acquisition unit 105, the input sound signal from the input sound signal acquisition unit 101, and the call sound signal from the call sound signal acquisition unit 106, and outputs the selected signal to the sound signal correction unit 102B together with information (type information) indicating the type of the selected signal.
  • the type information here is information for distinguishing the ambient sound signal, the input sound signal, and the call sound signal.
  • the output signal selection unit 107 selects the call sound signal when the call sound signal contains call voice. For example, when the volume of the call sound signal is equal to or higher than a predetermined volume threshold, the output signal selection unit 107 determines that the call sound signal contains call voice.
  • the output signal selection unit 107 selects the ambient sound signal when the call sound signal does not contain call voice but the ambient sound signal contains voice. For example, the output signal selection unit 107 performs frequency analysis on the ambient sound signal, and determines that the ambient sound signal contains voice if the signal intensity of the band containing voice is equal to or greater than a predetermined intensity threshold. Alternatively, two sound collection devices 300B may be arranged symmetrically with respect to the position of the user, and the output signal selection unit 107 may extract voice based on the directivity of the collected sound, using the principle of a stereo sound source. In this case, the output signal selection unit 107 determines that the ambient sound signal contains voice when voice at or above a predetermined volume threshold is extracted from the ambient sound signal.
  • the output signal selection unit 107 selects the input sound signal when neither the call sound signal nor the ambient sound signal includes sound.
  • the case where the voice is not included in the call sound signal is, for example, a case where the volume of the call sound signal is less than a predetermined volume threshold.
  • the case where voice is not included in the ambient sound signal is, for example, a case where the signal intensity of the band containing voice in the ambient sound signal is less than a predetermined intensity threshold.
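The selection rules in the preceding paragraphs amount to a fixed priority: call voice first, then voice in the ambient signal, then the input sound signal. A sketch under assumed threshold values and function names (the text does not give concrete numbers, and the band-energy check is simplified here to an RMS measure):

```python
import math

VOLUME_THRESHOLD = 0.1      # assumed RMS threshold for call voice
INTENSITY_THRESHOLD = 0.1   # assumed band-energy threshold for ambient voice

def rms(signal):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def voice_band_energy(signal):
    # Placeholder: a real implementation would measure energy in the
    # speech band (roughly 300 Hz to 3.4 kHz) via frequency analysis.
    return rms(signal)

def select_output(call_sig, ambient_sig, input_sig):
    """Return (type_info, signal) following the priority in the text."""
    if rms(call_sig) >= VOLUME_THRESHOLD:
        return ("call", call_sig)
    if voice_band_energy(ambient_sig) >= INTENSITY_THRESHOLD:
        return ("ambient", ambient_sig)
    return ("input", input_sig)
```

The returned type string plays the role of the "type information" passed to the sound signal correction unit 102B.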
  • the sound signal correction unit 102B acquires the sound signal selected by the output signal selection unit 107 and corrects the acquired sound signal.
  • the sound signal correction unit 102B corrects the call sound when the output signal selection unit 107 selects the call sound signal.
  • the sound signal correction unit 102B corrects the volume of the call sound according to the user's hearing level.
  • the sound signal correction unit 102B corrects, for example, the call sound to acoustic characteristics that are easy for the user to listen to.
  • the sound signal correction unit 102B may reproduce (output) the call sound at a speed at which the user can easily listen. Thereby, even when the surroundings are noisy or when the user is hard of hearing, it becomes easier for the user to understand the content of the call, and missed notifications or confirmations from the medical staff can be reduced.
  • information about the user's hearing such as the user's hearing level, the acoustic characteristics that the user can easily listen to, and the speed at which the user can easily listen may be stored in advance in the correction information storage unit 104 or a storage unit (not shown).
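The per-user correction described above (volume, acoustic characteristics, playback speed) could be driven by a stored hearing profile. A crude sketch with hypothetical profile fields (`gain`, `slowdown`); repeating samples is a stand-in for proper time-stretching, which a real device would do with a pitch-preserving algorithm:

```python
def apply_hearing_profile(samples, profile):
    """Scale volume and stretch playback according to a stored profile.
    `gain` boosts level; `slowdown` >= 1 repeats each sample to play
    slower (a crude stand-in for proper time-stretching)."""
    gain = profile.get("gain", 1.0)
    slowdown = max(1, int(profile.get("slowdown", 1)))
    out = []
    for s in samples:
        out.extend([s * gain] * slowdown)
    return out
```

The profile dictionary here models what the correction information storage unit 104 would hold for the user.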
  • the sound signal correcting unit 102B corrects the sound included in the ambient sound signal when the output signal selecting unit 107 selects the ambient sound signal.
  • the sound signal correction unit 102B extracts and outputs the sound from the ambient sound signal using, for example, a filter that passes only a signal in a band including the sound.
  • the sound signal correction unit 102B may correct and output the volume, acoustic characteristics, and speed of the extracted voice according to the user's hearing so that the user can easily hear it.
  • the sound signal correction unit 102B may reduce the volume when the volume of the extracted sound is equal to or higher than a predetermined volume threshold. Thereby, the volume of the sound output from the speaker device 1B is reduced, and howling that occurs when a part of the output from the speaker device 1B is fed back to the microphone can be suppressed.
  • the sound signal correction unit 102B may determine whether the extracted voice is the user's own voice or a voice different from the user's, and may perform processing according to the result. In this case, for example, the sound signal correction unit 102B stores in advance the frequency characteristics of the user's voice for various utterances. When the frequency characteristics of the extracted voice are similar to the stored frequency characteristics, the sound signal correction unit 102B determines that the extracted voice is the user's own voice. Otherwise, the sound signal correction unit 102B determines that the extracted voice is not the user's own voice.
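The own-voice check above compares the frequency characteristics of the extracted voice with stored characteristics of the user's voice. A toy sketch using cosine similarity over per-band energies (the band feature vector and the 0.9 similarity threshold are illustrative assumptions, not values from the text):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def is_users_own_voice(extracted_bands, stored_bands, threshold=0.9):
    """True when the extracted spectrum is similar to the stored one."""
    return cosine_similarity(extracted_bands, stored_bands) >= threshold
```

When this returns True, the correction unit would suppress output to the amplifier, as the next bullet describes.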
  • the sound signal correction unit 102B does not output the extracted voice to the amplifier unit 11 when the extracted voice is the user's own voice. Accordingly, the user does not hear his or her own voice from the speaker device 1B, which prevents the unpleasant effect of hearing one's own voice delayed from the timing of the utterance and sounding reverberant and harsh.
  • when the extracted voice is not the user's own voice, the sound signal correction unit 102B corrects it to a volume, acoustic characteristics, and speed that are easy for the user to hear. This makes it easier for the user to hear voices spoken to him or her. Even when the surroundings are noisy or when the user is hard of hearing, the other party does not need to lean close to the user's ear and speak loudly, and the conversation can proceed smoothly.
  • the sound signal correcting unit 102B corrects the input sound when the output signal selecting unit 107 selects the input sound signal. For example, the sound signal correction unit 102B corrects and outputs the volume, acoustic characteristics, and speed of the input sound signal so as to be easily heard by the user according to the user's hearing.
  • the sound signal correction unit 102B may correct the input sound signal according to the signal level of the ambient sound signal (that is, the background noise level). In this case, the sound signal correction unit 102B changes the upper limit of the maximum volume of the input signal, for example, as in the first embodiment.
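The background-noise-dependent limit mentioned above can be sketched as a simple clamp (the mapping from noise level to volume cap is an assumed example; the text defers the actual rule to the first embodiment):

```python
def max_volume_for_noise(noise_level: float) -> float:
    """Assumed mapping: allow louder output as background noise rises,
    but never exceed an absolute cap."""
    base_cap, absolute_cap = 0.5, 1.0
    return min(absolute_cap, base_cap + noise_level)

def limit_input_signal(samples, noise_level):
    """Clamp each sample to the noise-dependent maximum volume."""
    cap = max_volume_for_noise(noise_level)
    return [max(-cap, min(cap, s)) for s in samples]
```

Hard clipping is used here only for brevity; a real limiter would apply smooth gain reduction to avoid distortion.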
  • FIG. 10 is a flowchart illustrating an operation example of the signal processing unit 10B according to the second embodiment.
  • the ambient sound signal acquisition unit 105 of the signal processing unit 10B acquires the ambient sound signal (step S110), the input sound signal acquisition unit 101 acquires the input sound signal (step S111), and the call sound signal acquisition unit 106 acquires the call sound signal (step S112).
  • the output signal selection unit 107 determines whether or not there is call voice in the call sound signal (step S113). If there is call voice in the call sound signal, the output signal selection unit 107 outputs the call sound signal together with its type information to the sound signal correction unit 102B.
  • the sound signal correction unit 102B acquires the call sound signal from the output signal selection unit 107 and corrects the acquired call sound signal (step S114). When there is no call voice in the call sound signal, the output signal selection unit 107 determines whether the ambient sound signal contains voice (step S115). When the ambient sound signal contains voice, the output signal selection unit 107 outputs the ambient sound signal together with the type information to the sound signal correction unit 102B. The sound signal correction unit 102B acquires the ambient sound signal from the output signal selection unit 107 and corrects the voice contained in the acquired ambient sound signal (step S116). When the ambient sound signal does not contain voice, the output signal selection unit 107 outputs the input sound signal together with the type information to the sound signal correction unit 102B.
  • the sound signal correction unit 102B acquires the input sound signal from the output signal selection unit 107, and corrects the acquired input sound signal (step S117).
  • the method by which the sound signal correction unit 102B corrects the input sound signal is, for example, correction based on the background noise level described in the first embodiment.
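Steps S110 to S117 above can be summarized as a dispatch over the selected signal. A self-contained sketch (the detection and correction functions are simplified stand-ins for the volume/band-intensity checks and the user-hearing-dependent correction described in the text):

```python
def has_voice(signal, threshold=0.1):
    # Simplified stand-in for the volume / band-intensity checks.
    return any(abs(x) >= threshold for x in signal)

def correct(signal, gain=1.5):
    # Stand-in for the user-hearing-dependent correction.
    return [x * gain for x in signal]

def run_signal_processing(ambient, input_sig, call):
    # S110-S112: acquire the three signals (passed in as arguments here).
    if has_voice(call):                       # S113
        return ("call", correct(call))        # S114
    if has_voice(ambient):                    # S115
        return ("ambient", correct(ambient))  # S116
    return ("input", correct(input_sig))      # S117
```

The returned tuple mirrors the type information plus corrected signal that the flowchart hands to the amplifier stage.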
  • the speaker device 1B according to the second embodiment corrects the sound when the sound is included in the ambient sound signal.
  • the speaker device 1B according to the second embodiment can support the user so that the user can easily hear the content of a conversation even when the ambient noise is loud or the user is hard of hearing.
  • the sound to be output can be controlled according to the surrounding situation.
  • a program for realizing all or part of the functions of the speaker device 1 (1A, 1B) and the signal processing unit 10 (10B) according to the present invention may be recorded on a computer-readable recording medium, and the processing may be performed by causing a computer system to read and execute the program recorded on this recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a WWW system having a homepage providing environment (or display environment).
  • the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
  • the “computer-readable recording medium” also includes a medium that holds a program for a certain period of time, such as a volatile memory (RAM) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • the program may be one for realizing a part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the above-described functions in combination with a program already recorded in the computer system.

Abstract

The loudspeaker device according to an embodiment of the present invention has a state signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard. The state signal acquisition unit acquires a state signal indicating the surrounding state. The sound signal acquisition unit acquires a sound signal to be input to the loudspeaker device. The signal processing unit controls the sound signal on the basis of the state signal. The exciter vibrates in accordance with the sound signal controlled by the signal processing unit. The soundboard is connected to the exciter and outputs a sound by vibrating in the board thickness direction in accordance with the vibration of the exciter.

Description

Speaker device and control method
The present invention relates to a speaker device and a control method.
This application claims priority based on Japanese Patent Application No. 2018-020393, filed in Japan on February 7, 2018, the contents of which are incorporated herein by reference.
Conventionally, there are systems that allow a user to listen to music or television audio while lying down by incorporating a speaker into the bedside area of a bed. With such a system, it is not necessary to wear headphones or earphones to listen to music and the like, and since there is little sense of physical restraint, a comfortable sleep environment can be realized. Patent Document 1 discloses a technique for generating sound by resonating a flat soundboard. Patent Document 2 discloses a technique for transmitting sound to a listener by bringing a bone conduction speaker built into a pillow into contact with the head.
Patent Document 1: JP-A-5-300591
Patent Document 2: JP 2006-197257 A
However, since the above speakers output sound under the same conditions regardless of the surrounding situation, when the surrounding situation changes, such as when a pillow or blanket is additionally placed on the speaker, the sound quality may deteriorate: the sound may become difficult to hear or may sound muffled. In addition, when such a speaker is applied to each bed in a space where a plurality of beds are arranged in the same room (for example, a hospital), if sound is output during sleeping hours at the same volume as during the daytime, the sound is heard by others in adjacent beds (sound leakage occurs), making the speaker difficult for the listener to use.
The present invention has been made in view of such circumstances, and an object thereof is to provide a speaker device and a control method capable of controlling the sound to be output according to the surrounding situation.
In order to solve the above-described problem, one aspect of the present invention is a speaker device including a situation signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard. The situation signal acquisition unit acquires a situation signal indicating a surrounding situation. The sound signal acquisition unit acquires a sound signal input to the speaker device. The signal processing unit controls the sound signal based on the situation signal. The exciter vibrates according to the sound signal controlled by the signal processing unit. The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
One aspect of the present invention is also a speaker device including a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard. The sound signal acquisition unit acquires a sound signal input to the speaker device. The signal processing unit controls the sound signal based on time. The exciter vibrates according to the sound signal controlled by the signal processing unit. The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
One aspect of the present invention is also a speaker device including a situation signal acquisition unit, a sound signal acquisition unit, a soundboard, a right exciter, a left exciter, and a signal processing unit. The situation signal acquisition unit acquires a situation signal indicating a surrounding situation. The sound signal acquisition unit acquires a sound signal input to the speaker device. The soundboard outputs sound by vibrating in the plate thickness direction. The right exciter is connected to the right side of the soundboard and vibrates according to a stereo right sound. The left exciter is connected to the left side of the soundboard and vibrates according to a stereo left sound. The signal processing unit controls the sound signal based on the situation signal, outputs the stereo right sound in the sound signal to the right exciter, outputs the stereo left sound in the sound signal to the left exciter, and reduces at least one of the sound corresponding to the vibration of the left exciter output from the right side of the soundboard and the sound corresponding to the vibration of the right exciter output from the left side of the soundboard.
One aspect of the present invention is also a speaker device including a situation signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard. The situation signal acquisition unit acquires a situation signal indicating a surrounding situation. The sound signal acquisition unit acquires a sound signal input to the speaker device. The signal processing unit controls the voice signal of the sound signal when the situation signal includes voice. The exciter vibrates according to the sound signal controlled by the signal processing unit. The soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
One aspect of the present invention is also a control method for a speaker device, in which a situation signal acquisition unit acquires a situation signal indicating a surrounding situation, a sound signal acquisition unit acquires a sound signal input to the speaker device, a signal processing unit controls the sound signal based on the situation signal, an exciter vibrates according to the sound signal controlled by the signal processing unit, and a soundboard connected to the exciter outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
According to the present invention, it is possible to control the sound to be output according to the surrounding situation.
FIG. 1 is a block diagram showing a configuration example of the speaker device 1 according to the first embodiment.
FIG. 2 is a perspective view showing an example of the appearance of the speaker device 1 according to the first embodiment.
FIG. 3 is a block diagram showing a configuration example of the signal processing unit 10 according to the first embodiment.
FIG. 4 is a flowchart showing an operation example of the signal processing unit 10 according to the first embodiment.
FIG. 5 is a flowchart showing an operation example of the signal processing unit 10 according to the first embodiment.
FIG. 6 is a flowchart showing an operation example of the signal processing unit 10 according to the first embodiment.
FIG. 7 is a perspective view showing an example of the appearance of the speaker device 1A according to Modification 1 of the first embodiment.
FIG. 8 is a block diagram showing a configuration example of the speaker device 1B according to the second embodiment.
FIG. 9 is a block diagram showing a configuration example of the signal processing unit 10B according to the second embodiment.
FIG. 10 is a flowchart showing an operation example of the signal processing unit 10B according to the second embodiment.
Hereinafter, the speaker device of the embodiments will be described with reference to the drawings.
(First Embodiment)
First, the first embodiment will be described.
FIG. 1 is a block diagram showing a configuration example of the speaker device 1 according to the first embodiment.
As shown in FIG. 1, a sensor signal is input to the speaker device 1. Here, the sensor signal is an example of a "situation signal". The sensor signal is a signal detected by a sensor that detects the surrounding situation. The surrounding situation is the situation around the place where the speaker device 1 is provided; it is indicated by, for example, the presence or absence of an object that changes the sound output by the speaker device 1, or the illuminance or background noise level of the space in which the speaker device 1 outputs sound (for example, the room in which the speaker device 1 is placed). An object that changes the sound output by the speaker device 1 is an object that changes or obstructs the propagation of sound from the speaker device 1; for example, when the speaker device 1 is provided in bedding (for example, between a mattress 400 (see FIG. 2) and a pillow 401 (see FIG. 2)), it is a pillow or a person's head that can be placed on the plate surface of the soundboard 13 (see FIG. 2).
An input sound signal is also input to the speaker device 1. Here, the input sound signal is an example of a "sound signal". The speaker device 1 outputs sound obtained by changing the input sound signal according to the surrounding situation.
The speaker device 1 includes, for example, a signal processing unit 10, an amplifier unit 11, an exciter 12, and a soundboard 13.
The signal processing unit 10 is, for example, a signal processing circuit such as a DSP (Digital Signal Processor); it controls the input sound signal by performing signal processing on it based on the sensor signal, and outputs the controlled input sound signal to the amplifier unit 11. The signal processing unit 10 may be realized by a processor such as a CPU (Central Processing Unit) executing a program stored in a storage unit (not shown). All or part of the signal processing unit 10 may also be realized by dedicated hardware such as an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
The amplifier unit 11 amplifies the sound signal output from the signal processing unit 10 and outputs the amplified sound signal to the exciter 12.
The exciter 12 is, for example, a dynamic or piezoelectric actuator, and vibrates according to the sound signal from the amplifier unit 11.
The soundboard 13 is, for example, a plate-like member that resonates sound, made of a base material such as carbon or hard plastic. The soundboard 13 is physically connected (bonded) to the exciter 12 and vibrates in the plate thickness direction according to the vibration of the exciter 12, thereby outputting sound in a direction perpendicular to the plate surface.
FIG. 2 is a perspective view showing an example of the appearance of the speaker device 1 according to the first embodiment.
As shown in FIG. 2, the speaker device 1 may include a main body 200, which is a housing that contains electronic components such as the signal processing unit 10, the amplifier unit 11, and the exciter 12. When the main body 200 is a hard housing, a space in which the exciter 12 contained in the main body 200 can vibrate in the vertical direction (that is, the plate thickness direction of the soundboard 13) can be secured. This prevents the exciter 12 from being pressed against the bedding and its vibration from being weakened or restricted, and protects the electronic components from external impact. Further, when the main body 200 is a sealed container, enclosing the electronic components in it prevents moisture and dust from entering their connection portions.
A plurality of exciters 12-1 and 12-2 may be connected to the soundboard 13. In this case, the plurality of exciters 12 are controlled by the signal processing unit 10 so as to vibrate in the same phase. By vibrating the soundboard 13 in the same phase with the plurality of exciters 12, the entire soundboard 13 can be vibrated in the same phase. When a plurality of exciters 12 are connected to the soundboard 13, they are preferably arranged symmetrically with respect to the plate surface of the soundboard 13. For example, when two exciters 12 are connected, each exciter 12 is provided at a position equidistant to the left and right in the x-axis direction from the center of the connection region between the main body 200 and the soundboard 13, as shown in the example of FIG. 2. When three exciters 12 are connected, for example, one exciter 12 is arranged at the center of the connection region between the main body 200 and the soundboard 13, and one each is arranged between the center and each corner.
In the speaker device 1, a sheet-like sensor 300 may be provided on the soundboard 13. In this case, the sensor 300 is, for example, a weight sensor (strain gauge) that detects the weight of an object placed on the plate surface of the soundboard 13. A signal indicating the weight of the object detected by the sensor 300 is input to the speaker device 1 wirelessly or by wire. The signal detected by the sensor 300 is not limited to a signal indicating the weight of an object, and may be a signal indicating the displacement, pressure, or acceleration of an object placed on the plate surface of the soundboard 13.
 In the speaker device 1, the soundboard 13 may be provided between a mattress 400 and a pillow 401. In this case, the soundboard 13 outputs sound from beneath the pillow placed on its plate surface. The sound output from the plate surface of the soundboard 13 is emitted by a planar sound source and therefore travels with high directivity in the direction perpendicular to the plate surface (the z-axis direction). As a result, the sound reaches the ears of a listener located in the z-axis direction of the soundboard 13 but hardly reaches other people located in the x-axis or y-axis direction, so sound leakage audible to others is unlikely to occur.
 FIG. 3 is a block diagram illustrating a configuration example of the signal processing unit 10 according to the first embodiment. The signal processing unit 10 includes, for example, a sensor signal acquisition unit 100, an input sound signal acquisition unit 101, a sound signal correction unit 102, a content change unit 103, and a correction information storage unit 104. Here, the sensor signal acquisition unit 100 is an example of a "situation signal acquisition unit". The sound signal correction unit 102 and the content change unit 103 are each an example of a "control unit".
 The sensor signal acquisition unit 100 acquires a sensor signal input to the speaker device 1 from the external sensor 300 and outputs the acquired sensor signal to the sound signal correction unit 102.
 The input sound signal acquisition unit 101 acquires an input sound signal supplied to the speaker device 1 from the outside and outputs the acquired sound signal to the sound signal correction unit 102.
 The sound signal correction unit 102 corrects the input sound signal based on the sensor signal acquired from the sensor signal acquisition unit 100.
 The sound signal correction unit 102 corrects, for example, the acoustic characteristics of the input sound signal based on the sensor signal. The acoustic characteristics here are characteristics of the sound output from the soundboard 13, for example its frequency characteristics. In general, even when the exciter 12 is driven identically, the acoustic characteristics of the output sound differ depending on the conditions around the soundboard 13 and on the material of, and individual differences among, soundboards 13. The sound signal correction unit 102 corrects the acoustic characteristics of the input sound signal according to the conditions around the soundboard 13 so that the sound characteristics remain substantially constant even when those conditions change. The correction data indicate, for example, the inverse of the frequency characteristics obtained when sound is output from the soundboard 13 at the same signal level in each frequency band and captured with a microphone or the like at a predetermined position.
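The inverse-characteristic correction just described can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the band names, measured levels, and the flat reference level are invented for the example, and a real device would apply per-band filter gains to an audio stream rather than operate on dictionaries of dB values.

```python
# Hypothetical sketch of inverse-characteristic correction data.
# All band names and levels are illustrative assumptions.

REFERENCE_DB = 0.0  # assumed flat target level per band


def inverse_correction(measured_db):
    """Given per-band output levels (dB) measured with a microphone for an
    equal-level test signal, return the per-band gain (dB) whose application
    flattens the response (the 'inverse characteristic' in the text)."""
    return {band: REFERENCE_DB - level for band, level in measured_db.items()}


def apply_correction(signal_db, correction_db):
    """Apply per-band correction gains (dB) to per-band signal levels (dB)."""
    return {band: level + correction_db.get(band, 0.0)
            for band, level in signal_db.items()}


# Example: a soundboard measured 3 dB weak in the lows, 2 dB hot in the mids.
measured = {"low": -3.0, "mid": 2.0, "high": -5.0}
corr = inverse_correction(measured)
```

Applying `corr` to the measured response yields a flat (all-zero dB) result, which is the intent of storing the inverse characteristic as correction data.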
 The sound signal correction unit 102 refers, for example, to the correction information storage unit 104, which stores information associating acoustic-characteristic correction data with conditions around the soundboard 13, and corrects the input sound signal based on the referenced correction data. Here, the correction information storage unit 104 stores correction data corresponding to each of the following conditions around the soundboard 13: no object placed on the plate surface of the soundboard 13 (case A), a pillow placed on it (case B), and a pillow and a head placed on it (case C). Based on the sensor signal, the sound signal correction unit 102 determines which of cases A to C applies, selects the correction data corresponding to the result, and corrects the input sound signal using the selected correction data.
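The case A to C decision can be sketched as a simple threshold classifier on the strain-gauge reading. The weight thresholds and correction-data identifiers below are illustrative assumptions; the document specifies no concrete values.

```python
# Hypothetical sketch of the case A-C selection. Thresholds are assumptions.

PILLOW_MIN_KG = 0.5  # assumed minimum weight once a pillow is placed
HEAD_MIN_KG = 3.0    # assumed additional weight once a head rests on it


def classify_load(weight_kg):
    """Map a weight-sensor reading to placement case A, B, or C."""
    if weight_kg < PILLOW_MIN_KG:
        return "A"  # nothing on the soundboard
    if weight_kg < PILLOW_MIN_KG + HEAD_MIN_KG:
        return "B"  # pillow only
    return "C"      # pillow and head


# Placeholder correction-data identifiers, one per case.
CORRECTION_DATA = {"A": "eq_case_a", "B": "eq_case_b", "C": "eq_case_c"}


def select_correction(weight_kg):
    """Pick the stored correction data matching the detected placement case."""
    return CORRECTION_DATA[classify_load(weight_kg)]
```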
 The sound signal correction unit 102 also corrects, for example, the head-related transfer function of the input sound signal based on the sensor signal. By correcting the head-related transfer function, the sound signal correction unit 102 manipulates the position of the sound source and the direction of the sound as perceived by the listener. This reduces the discomfort of feeling that the sound source is inside one's head, which can occur when the listener hears sound from positions close to both ears.
 The sound signal correction unit 102 refers, for example, to the correction information storage unit 104, which stores information associated with head-related transfer function correction data, and corrects the input sound signal based on the referenced correction data. For example, the sound signal correction unit 102 determines, based on the sensor signal, which of cases A to C applies, and corrects the input sound signal using the head-related transfer function correction data when it determines that a head is placed on the soundboard 13.
 The sound signal correction unit 102 may also change the volume of the input sound signal based on the sensor signal. For example, the sound signal correction unit 102 determines, based on the sensor signal, which of cases A to C applies, and changes the volume of the input sound signal when the state changes from one in which a head was placed on the soundboard 13 (via a pillow) to one in which no head is placed, that is, when the listener changes from lying down to sitting up. This makes it possible to mute the sound when the ears move away from the speaker or, conversely, to increase the volume so that the sound can still be heard when the ears move away from the speaker.
 The sound signal correction unit 102 may also reduce, for example, the vocals of the input sound signal based on the sensor signal. By reducing vocals, which are more easily recognized by people than other sounds, the sound signal correction unit 102 prevents the sound output from the speaker device 1 from being heard by people other than the listener. For example, based on a sensor signal indicating illuminance, the sound signal correction unit 102 reduces the vocals of the input sound signal when the illuminance is below a predetermined threshold. Alternatively, based on a sensor signal indicating the background noise level, the sound signal correction unit 102 reduces the vocals of the input sound signal when the background noise level is below a predetermined threshold. This makes it possible to reduce vocals when, for example, the lights are off at night or at bedtime, or when the surroundings are quiet, and thus to prevent sound leakage precisely in situations where the sound output from the speaker device 1 is easily heard by others.
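One plausible way to realize this vocal reduction is mid/side processing, since vocals are typically center-panned in stereo material. The document does not name a specific technique, so this sketch, including its illuminance threshold and attenuation factor, is an assumption.

```python
# Hypothetical sketch: illuminance-gated vocal reduction via mid/side
# attenuation. Threshold and attenuation values are illustrative assumptions.

ILLUMINANCE_THRESHOLD_LX = 10.0  # assumed "lights off" illuminance
VOCAL_ATTENUATION = 0.2          # assumed gain applied to the mid channel


def reduce_vocals(left, right, illuminance_lx):
    """Attenuate the center (mid) component of a stereo sample pair when the
    room is dark; return the samples unchanged otherwise."""
    if illuminance_lx >= ILLUMINANCE_THRESHOLD_LX:
        return left, right
    mid = (left + right) / 2.0   # center-panned content (vocals, typically)
    side = (left - right) / 2.0  # stereo difference (left untouched)
    mid *= VOCAL_ATTENUATION
    return mid + side, mid - side
```

A sample that appears only in the side channel (fully panned) passes through unchanged even in the dark, which is why this approach targets vocals rather than the whole mix.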
 The sound signal correction unit 102 may also correct, for example, the auditory characteristics of the input sound signal based on the sensor signal. The sound signal correction unit 102 increases the signal level of sound in high bands (relatively high frequencies within the audible band) and low bands (relatively low frequencies within the audible band), which are difficult for human hearing to perceive, and reduces the signal level of sound in bands that are easy for human hearing to perceive (for example, the intermediate band of the audible range excluding the relatively high and low frequencies). This prevents the sound output from the speaker device 1 from being heard by people other than the listener. For example, the sound signal correction unit 102 corrects the auditory characteristics of the input sound signal when a sensor signal indicating illuminance shows an illuminance below a predetermined threshold, or when a sensor signal indicating the background noise level shows a level below a predetermined threshold. This makes it possible to correct the auditory characteristics of the input sound signal when, for example, the lights are off at night or at bedtime, or when the surroundings are quiet, and thus to prevent sound leakage precisely in situations where the sound output from the speaker device 1 is easily heard by others. In this case, the sound signal correction unit 102 refers to the correction information storage unit 104, which stores information associated with auditory-characteristic correction data, and corrects the auditory characteristics of the input sound signal based on the referenced correction data.
 The sound signal correction unit 102 may also correct the auditory characteristics according to the age of the listener or of a person other than the listener. For example, the sound signal correction unit 102 acquires input information indicating the age of the listener or of another person from an input information acquisition unit (not shown) that receives information entered through an input operation by the listener or the like, and corrects the auditory characteristics based on the acquired input information. In this case, the correction information storage unit 104 stores a plurality of sets of auditory-characteristic correction data, each associated with a person's age. The sound signal correction unit 102 selects auditory-characteristic correction data based on the age indicated in the acquired input information and corrects the auditory characteristics of the input sound signal using the selected correction data.
 The sound signal correction unit 102 may also change the maximum volume of the input sound signal based on the sensor signal. For example, the sound signal correction unit 102 reduces the maximum volume of the input sound signal when a sensor signal indicating illuminance shows an illuminance below a predetermined threshold, or when a sensor signal indicating the background noise level shows a level below a predetermined threshold. This prevents sound leakage even if the listener raises the volume in situations where the sound output from the speaker device 1 is easily heard by others, such as when the lights are off at night or at bedtime.
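The maximum-volume behavior reduces to a clamp whose cap depends on the measured background noise. The threshold and cap values below are illustrative assumptions; the document gives none.

```python
# Hypothetical sketch of the situation-dependent maximum-volume limit.
# Threshold and cap values are illustrative assumptions.

NOISE_THRESHOLD_DB = 30.0  # assumed background-noise threshold
QUIET_MAX_VOLUME = 0.3     # assumed cap when the surroundings are quiet
NORMAL_MAX_VOLUME = 1.0    # full scale otherwise


def max_volume(background_noise_db):
    """Return the maximum permitted volume for the current noise level."""
    if background_noise_db < NOISE_THRESHOLD_DB:
        return QUIET_MAX_VOLUME
    return NORMAL_MAX_VOLUME


def clamp_volume(requested, background_noise_db):
    """Clamp the listener-requested volume to the situation-dependent cap,
    so that raising the volume in a quiet room cannot cause sound leakage."""
    return min(requested, max_volume(background_noise_db))
```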
 The content change unit 103 changes the content of the input sound signal based on the sensor signal. The content here is what the input sound signal conveys, for example, a musical piece. The content change unit 103 changes the content, for example, when a sensor signal indicating illuminance shows an illuminance below a predetermined threshold. For example, at night or when the lights are off, the content change unit 103 switches to environmental sounds that help the listener relax. Environmental sounds are sounds that occur naturally in an ordinary environment, such as birdsong or the water sound of a flowing waterfall. This makes it possible to output sound suited to the listener's activity state: relaxing environmental sounds when the listener is asleep or about to fall asleep, and sound based on the input sound signal when the listener is awake during the day.
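The illuminance-gated content change can be sketched as a simple selector; the threshold and the content identifiers are illustrative assumptions, not names from the document.

```python
# Hypothetical sketch of the illuminance-gated content switch.
# Threshold and content identifiers are illustrative assumptions.

ILLUMINANCE_THRESHOLD_LX = 10.0  # assumed "lights off" illuminance
AMBIENT_CONTENT = "ambient/birdsong"  # placeholder relaxing-content ID


def select_content(requested_track, illuminance_lx):
    """Swap the requested track for a relaxing ambient sound when the room
    is dark (listener asleep or about to sleep); otherwise keep it."""
    if illuminance_lx < ILLUMINANCE_THRESHOLD_LX:
        return AMBIENT_CONTENT
    return requested_track
```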
 The content change unit 103 may change the content by selecting a sound signal from a content acquisition unit (not shown) that acquires the sound signals used for changing the content, or by selecting a sound signal from a content storage unit (not shown) that stores such sound signals.
 The correction information storage unit 104 stores correction data corresponding to acoustic characteristics, correction data corresponding to head-related transfer functions, and correction data corresponding to auditory characteristics. As correction data corresponding to acoustic characteristics, the correction information storage unit 104 may store a plurality of sets of correction data, one for each acoustic characteristic associated with the objects placed on the plate surface of the soundboard 13 (for example, cases A to C described above). As correction data corresponding to auditory characteristics, the correction information storage unit 104 may store a plurality of sets of correction data, one for each auditory characteristic associated with a person's age or age group.
 Here, the flow of operation of the signal processing unit 10 will be described with reference to FIGS. 4 to 6.
 FIGS. 4 to 6 are flowcharts showing an operation example of the signal processing unit 10 according to the first embodiment. FIG. 4 is a flowchart of the process in which the signal processing unit 10 corrects the acoustic characteristics. FIG. 5 is a flowchart of the process in which the signal processing unit 10 changes the content. FIG. 6 is a flowchart of the process in which the signal processing unit 10 corrects the maximum volume.
 As shown in FIG. 4, the sensor signal acquisition unit 100 of the signal processing unit 10 acquires a sensor signal indicating the weight of an object placed on the plate surface of the soundboard 13 (step S10). The sound signal correction unit 102 determines the placement condition on the soundboard 13 (for example, cases A to C described above) (step S11). The sound signal correction unit 102 acquires correction data corresponding to the determined placement condition (step S12), for example by referring to the correction information storage unit 104. The sound signal correction unit 102 then corrects the acoustic characteristics of the input sound signal using the acquired correction data (step S13). Note that the sound signal correction unit 102 may correct at least one of the head-related transfer function and the auditory characteristics together with, or instead of, the acoustic characteristics.
 As shown in FIG. 5, the sensor signal acquisition unit 100 acquires a sensor signal indicating the ambient illuminance (step S20). The content change unit 103 determines whether the illuminance is below a predetermined threshold (step S21) and, if so, changes the content (step S22). Note that the content change unit 103 may acquire a sensor signal indicating the background noise level together with, or instead of, the illuminance, and change the content based on at least one of the illuminance and the background noise level.
 As shown in FIG. 6, the sensor signal acquisition unit 100 acquires a sensor signal indicating the ambient background noise level (step S30). The sound signal correction unit 102 determines whether the background noise level is below a predetermined threshold (step S31) and, if so, reduces the maximum volume (step S32). Note that the sound signal correction unit 102 may acquire a sensor signal indicating illuminance together with, or instead of, the background noise level, and reduce the maximum volume based on at least one of the illuminance and the background noise level.
 As described above, in the speaker device 1 of the first embodiment, the signal processing unit 10 controls the input sound signal based on a sensor signal indicating the surrounding conditions. The sound signal can therefore be controlled according to those conditions, and the sound quality can be restored even when it deteriorates as conditions change, so high-quality sound can be output. In addition, by vibrating the soundboard 13, the speaker device 1 of this embodiment can output highly directional sound that is difficult for nearby people to hear, so sound leakage can be suppressed.
 In the speaker device 1 of the first embodiment, the soundboard 13 is provided between the pillow 401 and the mattress 400, the sensor signal acquisition unit 100 acquires a signal indicating the weight of an object placed on the plate surface of the soundboard 13, and the sound signal correction unit 102 controls the input sound signal based on that signal. The speaker device 1 of this embodiment can therefore control the input sound signal according to whether nothing, only a pillow, or a pillow and a head is placed on the plate surface of the soundboard 13. Even though the sound quality would otherwise differ among these three conditions, the speaker device 1 of this embodiment can output sound of substantially the same quality regardless of the condition.
 Further, in the speaker device 1 of the first embodiment, the sound signal correction unit 102 corrects at least one of the acoustic characteristics, the head-related transfer function, and the auditory characteristics of the input sound signal. Even when the acoustic characteristics and the like differ from situation to situation, the characteristics can therefore be corrected so that sound with substantially the same characteristics is output.
(Modification 1 of the First Embodiment)
 Next, Modification 1 of the first embodiment will be described.
 FIG. 7 is a perspective view illustrating an example of the appearance of a speaker device 1A according to Modification 1 of the first embodiment.
 As shown in FIG. 7, Modification 1 of the first embodiment differs from the above embodiment in that the soundboard 13 consists of a right soundboard 13R and a left soundboard 13L, and the speaker device 1A outputs stereo sound. The following description covers only the differences from the above embodiment; components having the same or similar functions as in the above embodiment are given the same reference numerals, and their description is omitted.
 The signal processing unit 10 controls the input sound signal based on the sensor signal and converts the controlled input sound signal into a stereo sound signal. The signal processing unit 10 outputs the right channel of the stereo sound to the right exciter 12-1 and the left channel to the left exciter 12-2. In this case, the signal processing unit 10 corrects the stereo sound so that its crosstalk is reduced. For example, correction data for reducing crosstalk are stored in the correction information storage unit 104, and the signal processing unit 10 reduces the crosstalk of the stereo sound by correcting it with these correction data. The correction data are generated, for example, by outputting stereo sound from the soundboard 13 and capturing the sound output from each of the right soundboard 13R and the left soundboard 13L with a microphone or the like. In this case, the correction data indicate the inverse of the frequency characteristics of the extracted components, namely the left-channel component mixed into the sound signal captured on the right soundboard 13R side and the right-channel component mixed into the sound signal captured on the left soundboard 13L side.
 Thus, in Modification 1 of the first embodiment, reducing the crosstalk when outputting stereo sound makes it possible to output sound with a stronger stereo impression.
 Although FIG. 7 shows an example in which the soundboard 13 is separated into the right soundboard 13R and the left soundboard 13L, the right soundboard 13R and the left soundboard 13L need not be separated.
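The crosstalk reduction can be illustrated with a single frequency-independent leakage coefficient. This is a simplifying assumption: the document describes frequency-dependent correction data measured with a microphone, whereas the scalar model below captures only the principle that pre-filtering the drive signals with the inverse of the leakage matrix makes the acoustic sum at each ear equal the intended channel.

```python
# Hypothetical sketch of crosstalk cancellation with a scalar leakage model.
# The leakage coefficient is an illustrative assumption.

LEAKAGE = 0.25  # assumed fraction of each channel that leaks to the far side


def cancel_crosstalk(left_in, right_in):
    """Pre-distort the drive signals by the inverse of the 2x2 leakage
    matrix [[1, k], [k, 1]], so the leaked sound cancels acoustically."""
    denom = 1.0 - LEAKAGE ** 2
    left_out = (left_in - LEAKAGE * right_in) / denom
    right_out = (right_in - LEAKAGE * left_in) / denom
    return left_out, right_out


def acoustic_mix(left_drive, right_drive):
    """Model of what reaches the ears: each ear hears its own soundboard
    half plus leakage from the other half."""
    return (left_drive + LEAKAGE * right_drive,
            right_drive + LEAKAGE * left_drive)
```

Driving the boards with `cancel_crosstalk(L, R)` and passing the result through `acoustic_mix` recovers `(L, R)` exactly in this scalar model, which is the sense in which the correction data are an "inverse characteristic" of the measured leakage.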
(Modification 2 of the First Embodiment)
 Next, Modification 2 of the first embodiment will be described. Modification 2 differs from the above embodiment in that the sound signal correction unit 102 controls the input sound signal based on time instead of on the illuminance and the background noise level.
 The sound signal correction unit 102 acquires time information from a timer (not shown) of the speaker device 1 (1A) and controls the input sound signal when the acquired time reaches a predetermined time. For example, when the acquired time reaches the lights-out time, the sound signal correction unit 102 reduces the vocals of the input sound signal. This makes it possible to reduce vocals at night or at bedtime, for example, and thus to prevent sound leakage precisely in situations where the sound output from the speaker device 1 is easily heard by others. Note that the sound signal correction unit 102 may change the volume or maximum volume of the input sound signal, or the content, together with, or instead of, the control that reduces the vocals.
 Thus, in Modification 2 of the first embodiment, when the lights-out time has passed or is approaching, sound leakage can be prevented in preparation for lights-out, for example by reducing vocals, even if the lights have not actually been turned off yet.
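The timer-based control of Modification 2 reduces to a time-of-day comparison. The lights-out time below is an illustrative assumption, and this sketch checks only the evening side; a real implementation would also cover the interval after midnight.

```python
# Hypothetical sketch of the timer-based trigger in Modification 2.
# The lights-out time is an illustrative assumption.
from datetime import time

LIGHTS_OUT = time(22, 0)  # assumed facility lights-out time


def should_reduce_vocals(now):
    """Return True once the current time of day has reached lights-out.
    (Wrap-around past midnight is deliberately omitted in this sketch.)"""
    return now >= LIGHTS_OUT
```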
 In the above embodiment, the case where the sound signal correction unit 102 corrects the input sound signal using the head-related transfer function correction data when it determines that a head is placed on the soundboard 13 has been described as an example, but the present invention is not limited to this. For example, the sound signal correction unit 102 may correct the head-related transfer function only when a head is placed at the center of the soundboard 13. The sound signal correction unit 102 may also determine whether to correct the head-related transfer function according to the orientation of the head placed on the soundboard 13 (the sleeping posture, that is, whether the listener is lying on their side or on their back). In general, it is known that when sound is output from positions close to both of a listener's ears, the listener feels as if the sound source were inside their head and finds this unpleasant; when no head is placed at the center of the soundboard 13, or when the listener is lying on their side, it may be unnecessary to correct the transfer function. As a method of determining whether a head is placed at the center of the soundboard 13, the sound signal correction unit 102 determines, for example, that a head is placed at the center of the soundboard 13 when approximately equal weights are detected by a plurality of strain gauges provided at positions symmetric about the center of the soundboard 13. As a method of determining whether the listener is lying on their side, the sound signal correction unit 102 determines this, for example, by analyzing image data acquired from a camera (not shown) that captures the listener's posture.
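The symmetric strain-gauge check can be sketched as a balance test over gauge pairs placed mirror-symmetrically about the soundboard's center. The balance tolerance is an illustrative assumption; the document says only "approximately equal weights".

```python
# Hypothetical sketch of the symmetric strain-gauge centering check.
# The tolerance is an illustrative assumption.

TOLERANCE = 0.1  # assumed allowed relative imbalance per gauge pair


def head_centered(gauge_pairs):
    """gauge_pairs: list of (left_kg, right_kg) readings from strain gauges
    placed symmetrically about the soundboard's center. Return True when
    every pair is approximately balanced, i.e. the load sits at the center."""
    for left, right in gauge_pairs:
        total = left + right
        if total == 0:
            return False  # nothing resting on the soundboard at all
        if abs(left - right) / total > TOLERANCE:
            return False  # load is off-center toward one side
    return True
```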
(Second Embodiment)
 Next, a second embodiment will be described. The speaker device 1B of this embodiment is applied, for example, to assist the hearing of a user in bed who has difficulty hearing sounds. Specifically, when the user converses with a related person (for example, a nurse, caregiver, doctor, or family member), the speaker device 1B corrects the speech of the conversation into sound that is easy for the user to hear.
 The present embodiment differs from the above embodiment in that a sound signal of ambient sound (hereinafter referred to as an ambient sound signal) is input to the speaker device 1B. It also differs from the above embodiment in that a sound signal of call sound from an external communication device (hereinafter referred to as a call sound signal) is input to the speaker device 1B.
 FIG. 8 is a block diagram showing a configuration example of the speaker device 1B according to the second embodiment.
 As shown in FIG. 8, an ambient sound signal collected by a sound collection device 300B is input to the speaker device 1B. The sound collection device 300B is a device, for example a microphone, that collects the ambient sound at the place where the speaker device 1B is installed. The ambient sound signal includes ambient environmental sounds, the voices of people speaking to the listener, the listener's own voice, and the like. Here, the ambient sound signal is a signal indicating the state of the surrounding sound and is an example of a "situation signal".
 A call sound signal from a communication device 500 is also input to the speaker device 1B. The communication device 500 is a communication device that the listener uses for calls, for example a mobile phone or a landline phone. In particular, when the speaker device 1B is used at a medical site such as a hospital room, the communication device 500 is a so-called nurse call device that carries calls between medical staff and a patient in the hospital room. Here, the call sound signal is an example of a "sound signal".
 The speaker device 1B includes, for example, a signal processing unit 10B, an amplifier unit 11, an exciter 12, and a soundboard 13.
 The signal processing unit 10B acquires the ambient sound signal, the input sound signal, and the call sound signal, selects one of the acquired signals, corrects the selected signal, and outputs it.
 FIG. 9 is a block diagram showing a configuration example of the signal processing unit 10B according to the second embodiment. The signal processing unit 10B includes, for example, an ambient sound signal acquisition unit 105, an input sound signal acquisition unit 101, a sound signal correction unit 102B, a content change unit 103, a correction information storage unit 104, a call sound signal acquisition unit 106, and an output signal selection unit 107. Here, the ambient sound signal acquisition unit 105 is an example of a "situation signal acquisition unit". The call sound signal acquisition unit 106 is an example of a "sound signal acquisition unit". The output signal selection unit 107 is an example of a "signal processing unit".
 The ambient sound signal acquisition unit 105 acquires the ambient sound signal and outputs the acquired ambient sound signal to the output signal selection unit 107.
 The input sound signal acquisition unit 101 acquires the input sound signal and outputs the acquired input sound signal to the output signal selection unit 107.
 The call sound signal acquisition unit 106 acquires the call sound signal and outputs the acquired call sound signal to the output signal selection unit 107.
 The output signal selection unit 107 selects one of the ambient sound signal from the ambient sound signal acquisition unit 105, the input sound signal from the input sound signal acquisition unit 101, and the call sound signal from the call sound signal acquisition unit 106, and outputs the selected signal to the sound signal correction unit 102B together with information (type information) indicating the type of the selected signal. The type information here is information that distinguishes the ambient sound signal, the input sound signal, and the call sound signal from one another.
 The output signal selection unit 107 selects the call sound signal when the call sound signal contains call voice. The output signal selection unit 107 determines that the call sound signal contains call voice when, for example, the volume of the call sound signal is equal to or higher than a predetermined volume threshold.
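The volume test above can be sketched as follows; this is an assumed illustration in which "volume" is taken as the RMS amplitude of the sampled signal, and the threshold value and function name are chosen for the example, not taken from the patent:

```python
def contains_call_voice(samples, volume_threshold=0.05):
    """Decide whether a call sound signal contains call voice using the
    volume test described above: the signal is deemed to contain voice
    when its volume (here, RMS amplitude) reaches a threshold."""
    if not samples:
        return False
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return rms >= volume_threshold
```

A short burst with amplitude 0.2 passes the default threshold, while silence does not.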
 When the call sound signal contains no call voice but the ambient sound signal contains voice, the output signal selection unit 107 selects the ambient sound signal. For example, the output signal selection unit 107 performs a frequency analysis of the ambient sound signal and determines that the ambient sound signal contains voice when the signal strength in the band containing voice is equal to or higher than a predetermined strength threshold. Alternatively, the output signal selection unit 107 may extract voice on the principle of a stereo sound source by arranging two sound collection devices 300B at positions symmetric with respect to the user's position and using the directivity of the sounds collected by each. In this case, the output signal selection unit 107 determines that the ambient sound signal contains voice when it extracts, from the ambient sound signal, voice whose volume is equal to or higher than a predetermined volume threshold.
 When neither the call sound signal nor the ambient sound signal contains voice, the output signal selection unit 107 selects the input sound signal. Here, the case where the call sound signal contains no voice is, for example, the case where the volume of the call sound signal is less than the predetermined volume threshold. Likewise, the case where the ambient sound signal contains no voice is, for example, the case where the signal strength in the voice band of the ambient sound signal is less than the predetermined strength threshold.
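The three-way priority described above (call sound first, then ambient voice, then the input sound signal) can be sketched as follows. The function and return labels are assumptions for illustration; the returned strings stand in for the type information passed to the sound signal correction unit 102B:

```python
def select_output_signal(call_has_voice, ambient_has_voice):
    """Mirror the selection priority of the output signal selection unit
    107: the call sound signal wins when it contains call voice, the
    ambient sound signal is chosen when it alone contains voice, and the
    input sound signal is the fallback."""
    if call_has_voice:
        return "call"      # call voice present -> select call sound signal
    if ambient_has_voice:
        return "ambient"   # someone is speaking nearby -> select ambient
    return "input"         # otherwise play the input sound signal
```

Note that the call sound takes priority even when ambient voice is also present, matching the order of the checks in the description.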
 The sound signal correction unit 102B acquires the sound signal selected by the output signal selection unit 107 and corrects the acquired sound signal.
 When the output signal selection unit 107 selects the call sound signal, the sound signal correction unit 102B corrects the call sound. For example, the sound signal correction unit 102B corrects the volume of the call sound according to the user's hearing level. The sound signal correction unit 102B also corrects the call sound, for example, to acoustic characteristics that are easy for the user to hear. The sound signal correction unit 102B may reproduce (output) the call sound at a speed at which the user can easily listen. As a result, even when the surroundings are noisy or the user is hard of hearing, it becomes easier for the user to catch the content of the call, and missed communications or confirmations from the medical staff can be reduced.
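The volume correction according to the user's hearing level could be sketched as a simple gain stage. This is an assumed, minimal reading of the description: representing the stored hearing level as a loss in dB and converting it to a linear amplitude gain is this example's choice, not something the patent specifies:

```python
def correct_call_sound(samples, hearing_loss_db):
    """Assumed sketch of volume correction per the user's hearing level:
    amplify the call sound by a gain derived from the hearing loss (in
    dB) stored in advance, per the embodiment."""
    gain = 10 ** (hearing_loss_db / 20.0)  # dB -> linear amplitude gain
    return [s * gain for s in samples]
```

A 20 dB hearing loss maps to a 10x amplitude gain, while 0 dB leaves the signal unchanged; a real implementation would also apply per-band shaping and the playback-speed adjustment mentioned in the text.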
 Note that information on the user's hearing, such as the user's hearing level, the acoustic characteristics the user finds easy to hear, and the speed at which the user finds it easy to listen, may be stored in advance in the correction information storage unit 104 or in a storage unit (not shown).
 When the output signal selection unit 107 selects the ambient sound signal, the sound signal correction unit 102B corrects the voice contained in the ambient sound signal.
 The sound signal correction unit 102B extracts the voice from the ambient sound signal and outputs it, using, for example, a filter that passes only signals in the band containing voice. In this case, the sound signal correction unit 102B may correct the volume, acoustic characteristics, and speed of the extracted voice according to the user's hearing so that the user can hear it easily, and then output it.
 The sound signal correction unit 102B may reduce the volume when the volume of the extracted voice is equal to or higher than a predetermined volume threshold. This lowers the volume of the voice output from the speaker device 1B and suppresses the howling that occurs when part of the output from the speaker device 1B is fed back to the microphone.
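The howling countermeasure above, reducing the volume once the extracted voice reaches a threshold, can be sketched as follows. The threshold, the fixed attenuation factor, and the use of peak amplitude as "volume" are assumptions for this example:

```python
def limit_playback_volume(samples, volume_threshold=0.5, reduced_gain=0.4):
    """Sketch of the howling countermeasure described above: when the
    extracted voice is at or above a volume threshold, scale it down so
    that less of the loudspeaker output is fed back to the microphone."""
    if not samples:
        return []
    peak = max(abs(s) for s in samples)
    if peak >= volume_threshold:
        return [s * reduced_gain for s in samples]
    return list(samples)
```

A loud burst is attenuated, while a quiet one passes through untouched; a production system would ramp the gain smoothly rather than switch it to avoid audible pumping.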
 The sound signal correction unit 102B may also determine whether the extracted voice is the user's own voice or a voice different from the user's, and perform processing according to the result of that determination. In this case, the sound signal correction unit 102B stores in advance, for example, the frequency characteristics of the user's voice over a variety of utterances. When the sound signal correction unit 102B determines that the frequency characteristics of the extracted voice resemble the stored frequency characteristics, it determines that the extracted voice is the user's own voice. Conversely, when it determines that the frequency characteristics of the extracted voice do not resemble the stored frequency characteristics, it determines that the extracted voice is not the user's own voice.
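One way the "resembles the stored frequency characteristics" test could be realized is a cosine similarity between per-band energy profiles. The patent does not name a similarity measure, so the metric, threshold, and profile representation here are all assumptions:

```python
def is_own_voice(extracted_profile, stored_profile, similarity_threshold=0.9):
    """Compare the frequency characteristics (here, per-band energy
    profiles) of the extracted voice against the user's stored profile
    with cosine similarity; a high similarity is taken to mean the
    extracted voice is the user's own."""
    dot = sum(a * b for a, b in zip(extracted_profile, stored_profile))
    na = sum(a * a for a in extracted_profile) ** 0.5
    nb = sum(b * b for b in stored_profile) ** 0.5
    if na == 0 or nb == 0:
        return False  # an empty/silent profile matches nothing
    return dot / (na * nb) >= similarity_threshold
```

Profiles that differ only in overall level still match (cosine similarity ignores scale), which suits a speaker whose voice is sometimes louder and sometimes quieter.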
 For example, when the extracted voice is the user's own voice, the sound signal correction unit 102B does not output the extracted voice to the amplifier unit 11. Since the user then does not hear his or her own voice from the speaker device 1B, this mitigates the annoyance of hearing one's own voice reverberate with a delay after the moment of utterance.
 On the other hand, when the extracted voice is not the user's own voice, the sound signal correction unit 102B corrects the extracted voice to a volume, acoustic characteristics, and speed that are easy for the user to hear. This makes it easier for the user to catch the voice of someone speaking to him or her. Even when the surroundings are noisy or the user is hard of hearing, the related person does not need to bend down to the user's ear and speak loudly, and the conversation can proceed smoothly.
 When the output signal selection unit 107 selects the input sound signal, the sound signal correction unit 102B corrects the input sound. For example, the sound signal correction unit 102B corrects and outputs the volume, acoustic characteristics, and speed of the input sound signal so that the user can hear it easily according to the user's hearing. The sound signal correction unit 102B may also correct the input sound signal according to the signal level of the ambient sound signal (that is, the background noise level). In this case, the sound signal correction unit 102B changes the upper limit of the maximum volume of the input signal, for example, as in the first embodiment.
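Changing the maximum-volume upper limit with the background noise level might look like the following. Every number here (the noise threshold and both ceilings) is a placeholder assumption; the patent only states that the upper limit changes with the background noise level, not how:

```python
def max_volume_limit(background_noise_db, quiet_limit_db=60.0,
                     noisy_limit_db=75.0, noise_threshold_db=45.0):
    """Assumed sketch of varying the upper limit of the input signal's
    maximum volume with the background noise level: allow a higher
    ceiling when the room is noisy, a lower one when it is quiet."""
    if background_noise_db >= noise_threshold_db:
        return noisy_limit_db
    return quiet_limit_db
```

A smoother design would interpolate the ceiling continuously between the two limits instead of switching at a single threshold.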
 FIG. 10 is a flowchart showing an operation example of the signal processing unit 10B according to the second embodiment.
 The ambient sound signal acquisition unit 105 of the signal processing unit 10B acquires the ambient sound signal (step S110), the input sound signal acquisition unit 101 acquires the input sound signal (step S111), and the call sound signal acquisition unit 106 acquires the call sound signal (step S112).
 The output signal selection unit 107 determines whether the call sound signal contains call voice (step S113), and when it does, outputs the call sound signal together with its type information to the sound signal correction unit 102B. The sound signal correction unit 102B acquires the call sound signal from the output signal selection unit 107 and corrects the acquired call sound signal (step S114).
 When the call sound signal contains no call voice, the output signal selection unit 107 determines whether the ambient sound signal contains voice (step S115). When the ambient sound signal contains voice, the output signal selection unit 107 outputs the ambient sound signal together with its type information to the sound signal correction unit 102B. The sound signal correction unit 102B acquires the ambient sound signal from the output signal selection unit 107 and corrects the voice contained in the acquired ambient sound signal (step S116).
 When the ambient sound signal contains no voice, the output signal selection unit 107 outputs the input sound signal together with its type information to the sound signal correction unit 102B. The sound signal correction unit 102B acquires the input sound signal from the output signal selection unit 107 and corrects the acquired input sound signal (step S117). Here, the method by which the sound signal correction unit 102B corrects the input sound signal is, for example, the correction based on the background noise level described in the first embodiment.
 As described above, the speaker device 1B according to the second embodiment corrects the voice when the ambient sound signal contains voice. As a result, even when the ambient noise is loud or the user is hard of hearing, the speaker device 1B according to the second embodiment can help the user catch the content of a conversation more easily, and can control the output sound according to the surrounding situation.
 Note that a program for realizing all or some of the functions of the speaker device 1 (1A, 1B) and the signal processing unit 10 (10B) according to the present invention may be recorded on a computer-readable recording medium, and the processing may be performed by causing a computer system to read and execute the program recorded on that recording medium. The "computer system" here includes an OS and hardware such as peripheral devices.
 The "computer system" also includes a WWW system having a homepage providing environment (or display environment). The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or to a storage device such as a hard disk built into a computer system. Furthermore, the "computer-readable recording medium" also includes media that hold the program for a fixed period of time, such as the volatile memory (RAM) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
 The program may also be transmitted from a computer system in which it is stored in a storage device or the like to another computer system via a transmission medium, or by transmission waves in a transmission medium. Here, the "transmission medium" that transmits the program refers to a medium having a function of transmitting information, such as a network (communication network) like the Internet or a communication line (communication channel) like a telephone line. The program may also be one for realizing some of the functions described above. Furthermore, it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
 Although several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These embodiments can be implemented in various other forms, and various omissions, replacements, and changes can be made without departing from the gist of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and likewise in the invention described in the claims and its equivalents.
 The embodiments exemplified above can be applied to a speaker device and a control method.
 1, 1A, 1B Speaker device
 10, 10B Signal processing unit
 11 Amplifier unit
 12 Exciter
 13 Soundboard
 100 Sensor signal acquisition unit
 101 Input sound signal acquisition unit
 102 Sound signal correction unit
 103 Content change unit
 104 Correction information storage unit
 105 Ambient sound signal acquisition unit
 106 Call sound signal acquisition unit
 107 Output signal selection unit

Claims (11)

  1.  A speaker device comprising:
     a situation signal acquisition unit that acquires a situation signal indicating a surrounding situation;
     a sound signal acquisition unit that acquires a sound signal input to the speaker device;
     a signal processing unit that controls the sound signal based on the situation signal;
     an exciter that vibrates according to the sound signal controlled by the signal processing unit; and
     a soundboard that is connected to the exciter and outputs sound by vibrating in its thickness direction according to the vibration of the exciter.
  2.  The speaker device according to claim 1, wherein
     the soundboard is provided between a pillow and a mattress,
     the situation signal acquisition unit acquires a signal indicating the weight of an object placed on the surface of the soundboard, and
     the signal processing unit controls the sound signal based on the signal indicating the weight of the object placed on the soundboard.
  3.  The speaker device according to claim 1 or claim 2, wherein the signal processing unit corrects at least one of an acoustic characteristic, a head-related transfer function, and an auditory characteristic of the sound signal.
  4.  The speaker device according to any one of claims 1 to 3, wherein the signal processing unit controls the volume of the sound signal.
  5.  The speaker device according to any one of claims 1 to 4, wherein
     the situation signal acquisition unit acquires a signal indicating at least one of the illuminance and the background noise level of the space in which the speaker device is provided, and
     the signal processing unit changes the content of the sound signal based on at least one of the illuminance and the background noise level.
  6.  The speaker device according to claim 5, wherein the signal processing unit changes the maximum volume of the sound signal based on at least one of the illuminance and the background noise level.
  7.  The speaker device according to claim 5 or claim 6, wherein the signal processing unit reduces the vocals of the sound signal based on at least one of the illuminance and the background noise level.
  8.  A speaker device comprising:
     a sound signal acquisition unit that acquires a sound signal input to the speaker device;
     a signal processing unit that controls the sound signal based on time;
     an exciter that vibrates according to the sound signal controlled by the signal processing unit; and
     a soundboard that is connected to the exciter and outputs sound by vibrating in its thickness direction according to the vibration of the exciter.
  9.  A speaker device comprising:
     a situation signal acquisition unit that acquires a situation signal indicating a surrounding situation;
     a sound signal acquisition unit that acquires a sound signal input to the speaker device;
     a soundboard that outputs sound by vibrating in its thickness direction;
     a right exciter that is connected to the right side of the soundboard and vibrates according to a stereo right sound;
     a left exciter that is connected to the left side of the soundboard and vibrates according to a stereo left sound; and
     a signal processing unit that controls the sound signal based on the situation signal, outputs the stereo right sound of the sound signal to the right exciter, outputs the stereo left sound of the sound signal to the left exciter, and reduces at least one of the sound corresponding to the vibration of the left exciter output from the right side of the soundboard and the sound corresponding to the vibration of the right exciter output from the left side of the soundboard.
  10.  A speaker device comprising:
     a situation signal acquisition unit that acquires a situation signal indicating the state of surrounding sound;
     a sound signal acquisition unit that acquires a sound signal input to the speaker device;
     a signal processing unit that controls the voice signal of the sound signal when the situation signal contains voice;
     an exciter that vibrates according to the sound signal controlled by the signal processing unit; and
     a soundboard that is connected to the exciter and outputs sound by vibrating in its thickness direction according to the vibration of the exciter.
  11.  A control method in which:
     a situation signal acquisition unit acquires a situation signal indicating a surrounding situation;
     a sound signal acquisition unit acquires a sound signal input to a speaker device;
     a signal processing unit controls the sound signal based on the situation signal;
     an exciter vibrates according to the sound signal controlled by the signal processing unit; and
     a soundboard, connected to the exciter, outputs sound by vibrating in its thickness direction according to the vibration of the exciter.
PCT/JP2018/023270 2018-02-07 2018-06-19 Loudspeaker device and control method WO2019155650A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018020393 2018-02-07
JP2018-020393 2018-02-07

Publications (1)

Publication Number Publication Date
WO2019155650A1 true WO2019155650A1 (en) 2019-08-15

Family

ID=67548231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/023270 WO2019155650A1 (en) 2018-02-07 2018-06-19 Loudspeaker device and control method

Country Status (1)

Country Link
WO (1) WO2019155650A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3102326A1 (en) * 2019-10-18 2021-04-23 Parrot Faurecia Automotive Process for processing a signal from an acoustic emission system of a vehicle and vehicle comprising this acoustic emission system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH052498U (en) * 1991-06-25 1993-01-14 株式会社日立ホームテツク Surface heating tool
JP2006041843A (en) * 2004-07-26 2006-02-09 Toshiba Corp Bone conduction speaker system
JP2013070291A (en) * 2011-09-24 2013-04-18 Aisin Seiki Co Ltd Sound contour enhancement device for vehicle



Similar Documents

Publication Publication Date Title
US11676568B2 (en) Apparatus, method and computer program for adjustable noise cancellation
US11240588B2 (en) Sound reproducing apparatus
US9949048B2 (en) Controlling own-voice experience of talker with occluded ear
WO2014175466A1 (en) Acoustic reproduction apparatus and sound-collecting acoustic reproduction apparatus
US20020039427A1 (en) Audio apparatus
KR20130133790A (en) Personal communication device with hearing support and method for providing the same
EP3095252A2 (en) Hearing assistance system
JP5774635B2 (en) Audio equipment and method of using the same
JP6513839B2 (en) Listening device using bone conduction
WO2019155650A1 (en) Loudspeaker device and control method
JP2006174432A (en) Bone conduction speaker, headphone, headrest, and pillow using the same
KR200426390Y1 (en) Earphone having microphone
WO2002030151A2 (en) Audio apparatus
JP7222352B2 (en) bone conduction sound transmitter
JP5502166B2 (en) Measuring apparatus and measuring method
JP6250774B2 (en) measuring device
KR102357809B1 (en) Vibration like chair system using vibrator speaker mechanism
US10587963B2 (en) Apparatus and method to compensate for asymmetrical hearing loss
KR101319983B1 (en) Sound collector having plural microphones
JP2019140645A (en) Speaker device and control method
Belinky et al. Sound through bone conduction in public interfaces
CN117480786A (en) Sound equipment
CN114466272A (en) Sound-leakage-proof earphone and implementation method
JP6161555B2 (en) measuring device
KR20230032421A (en) Bone conduction earphone using active noise control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP