WO2019155650A1 - Speaker device and control method - Google Patents


Info

Publication number
WO2019155650A1
WO2019155650A1 (application PCT/JP2018/023270, JP 2018023270 W)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
signal
sound signal
exciter
soundboard
Prior art date
Application number
PCT/JP2018/023270
Other languages
English (en)
Japanese (ja)
Inventor
孝輔 佐々木
郁哉 井川
森島 守人
泰央 新田
Original Assignee
Yamaha Corporation (ヤマハ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yamaha Corporation
Publication of WO2019155650A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R7/00 Diaphragms for electromechanical transducers; Cones
    • H04R7/02 Diaphragms for electromechanical transducers; Cones characterised by the construction
    • H04R7/04 Plane diaphragms
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems

Definitions

  • the present invention relates to a speaker device and a control method.
  • This application claims priority based on Japanese Patent Application No. 2018-020393, filed in Japan on February 7, 2018, the contents of which are incorporated herein.
  • Patent Document 1 discloses a technique for generating sound by resonating a flat soundboard.
  • Patent Document 2 discloses a technique for transmitting sound to a listener by bringing a bone conduction speaker built in a pillow into contact with the head.
  • The present invention has been made in view of such circumstances, and an object thereof is to provide a speaker device and a control method capable of controlling the output sound according to the surrounding situation.
  • one aspect of the present invention is a speaker device including a status signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard.
  • the situation signal acquisition unit acquires a situation signal indicating a surrounding situation.
  • the sound signal acquisition unit acquires a sound signal input to the speaker device.
  • the signal processing unit controls the sound signal based on the status signal.
  • the exciter vibrates according to the sound signal controlled by the signal processing unit.
  • the soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • one embodiment of the present invention is a speaker device that includes a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard.
  • the sound signal acquisition unit acquires a sound signal input to the speaker device.
  • the signal processing unit controls the sound signal based on time.
  • the exciter vibrates according to the sound signal controlled by the signal processing unit.
  • the soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • one embodiment of the present invention is a speaker device including a status signal acquisition unit, a sound signal acquisition unit, a soundboard, a right exciter, a left exciter, and a signal processing unit.
  • the situation signal acquisition unit acquires a situation signal indicating a surrounding situation.
  • the sound signal acquisition unit acquires a sound signal input to the speaker device.
  • the soundboard outputs sound by vibrating in the thickness direction.
  • the right exciter is connected to the right side of the soundboard and vibrates according to a stereo right sound.
  • the left exciter is connected to the left side of the soundboard and vibrates in response to a stereo left sound.
  • The signal processing unit controls the sound signal based on the status signal, outputs the stereo right sound in the sound signal to the right exciter, outputs the stereo left sound in the sound signal to the left exciter, and reduces at least one of the sound corresponding to the vibration of the left exciter output from the right side of the soundboard and the sound corresponding to the vibration of the right exciter output from the left side of the soundboard.
  • one embodiment of the present invention is a speaker device including a status signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard.
  • the situation signal acquisition unit acquires a situation signal indicating a surrounding situation.
  • the sound signal acquisition unit acquires a sound signal input to the speaker device.
  • The signal processing unit controls the sound signal when the situation signal includes a sound.
  • the exciter vibrates according to the sound signal controlled by the signal processing unit.
  • the soundboard is connected to the exciter and outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • Another aspect of the present invention is a control method in which: a situation signal indicating a surrounding situation is acquired; a sound signal input to the speaker device is acquired; the sound signal is controlled based on the situation signal; an exciter vibrates according to the controlled sound signal; and a soundboard connected to the exciter outputs sound by vibrating in the plate thickness direction according to the vibration of the exciter.
  • FIG. 1 is a block diagram illustrating a configuration example of the speaker device 1 according to the first embodiment.
  • a sensor signal is input to the speaker device 1.
  • the sensor signal is an example of a “status signal”.
  • the sensor signal is a signal detected by a sensor that detects the surrounding situation.
  • a surrounding situation is a surrounding situation where the speaker device 1 is provided.
  • The sensor signal indicates, for example, the presence or absence of an object that changes the sound output by the speaker device 1, or the state of the space in which the speaker device 1 outputs sound (for example, the illuminance and background noise level of the room in which the speaker device 1 is installed).
  • the object that changes the sound output from the speaker device 1 is an object that changes or prevents the propagation of sound by the speaker device 1.
  • When the speaker device 1 is provided in bedding (for example, a mattress 400 or a pillow 401; see FIG. 2), such an object is, for example, a pillow or a human head that can be placed on the plate surface of the soundboard 13 (see FIG. 2).
  • an input sound signal is input to the speaker device 1.
  • the input sound signal is an example of a “sound signal”.
  • The speaker device 1 outputs sound obtained by changing the input sound signal according to the surrounding situation.
  • the speaker device 1 includes, for example, a signal processing unit 10, an amplifier unit 11, an exciter 12, and a soundboard 13.
  • The signal processing unit 10 is, for example, a signal processing circuit such as a DSP (Digital Signal Processor); it controls the input sound signal by performing signal processing on it based on the sensor signal, and outputs the controlled input sound signal to the amplifier unit 11.
  • the signal processing unit 10 may be configured such that a processor such as a CPU (Central Processing Unit) executes a program stored in a storage unit (not shown). Further, all or part of the signal processing unit 10 may be realized by dedicated hardware such as an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
  • the amplifier unit 11 amplifies the sound signal output from the signal processing unit 10 and outputs the amplified sound signal to the exciter 12.
  • the exciter 12 is, for example, a dynamic or piezoelectric actuator, and vibrates according to a sound signal from the amplifier unit 11.
  • the soundboard 13 is, for example, a plate-like member that resonates sound, and is, for example, a plate-like member made of a base material such as carbon or hard plastic.
  • The soundboard 13 is physically connected (adhered) to the exciter 12 and vibrates in the thickness direction in accordance with the vibration of the exciter 12, thereby outputting sound in the direction perpendicular to the plate surface.
  • FIG. 2 is a perspective view illustrating an example of the appearance of the speaker device 1 according to the first embodiment.
  • the speaker device 1 may include a main body 200 that is a housing that includes electronic components such as the signal processing unit 10, the amplifier unit 11, and the exciter 12.
  • Since the main body 200 is a hard casing, a space in which the exciter 12 included in the main body 200 vibrates in the vertical direction (that is, the thickness direction of the soundboard 13) can be secured. This prevents the exciter 12 from being pressed against the bedding and its vibration from being weakened or restricted.
  • the electronic component can be prevented from receiving an impact from the outside.
  • If the main body 200 is a sealed container, enclosing the electronic components in the sealed container prevents moisture and dust from entering the connection parts of the electronic components.
  • a plurality of exciters 12-1 and 12-2 may be connected to the soundboard 13.
  • the plurality of exciters 12 are controlled by the signal processing unit 10 so as to vibrate at the same phase.
  • the entire soundboard 13 can be vibrated with the same phase.
  • When connecting the plurality of exciters 12 to the soundboard 13, it is preferable to arrange them so that the entire soundboard 13 vibrates evenly; for example, one exciter 12 is arranged at the central part of the connection region between the main body 200 and the soundboard 13, and one at each position between the central part and each corner.
  • a sheet-like sensor 300 may be provided on the soundboard 13.
  • the sensor 300 is, for example, a weight sensor (strain gauge) that detects the weight of an object placed on the plate surface of the soundboard 13.
  • a signal indicating the weight of the object detected by the sensor 300 is input to the speaker device 1 wirelessly or by wire.
  • The signal detected by the sensor 300 is not limited to a signal indicating the weight of the object; it may be a signal indicating the displacement, pressure, or acceleration of the object placed on the plate surface of the soundboard 13.
  • the soundboard 13 may be provided between the mattress 400 and the pillow 401.
  • the soundboard 13 outputs sound from the lower side of the pillow placed on the surface of the soundboard 13.
  • The sound output from the surface of the soundboard 13 behaves as a surface sound source and is highly directional, traveling in the direction perpendicular to the plate surface (z-axis direction). It therefore reaches the ears of a listener in the z-axis direction of the soundboard 13, but is difficult for others in the x-axis and y-axis directions to hear, so sound leakage is hard for others to notice.
  • FIG. 3 is a block diagram illustrating a configuration example of the signal processing unit 10 according to the first embodiment.
  • the signal processing unit 10 includes, for example, a sensor signal acquisition unit 100, an input sound signal acquisition unit 101, a sound signal correction unit 102, a content change unit 103, and a correction information storage unit 104.
  • the sensor signal acquisition unit 100 is an example of a “situation signal acquisition unit”.
  • the sound signal correction unit 102 is an example of a “control unit”.
  • the content changing unit 103 is an example of a “control unit”.
  • the sensor signal acquisition unit 100 acquires a sensor signal input from the external sensor 300 to the speaker device 1.
  • the sensor signal acquisition unit 100 outputs the acquired sensor signal to the sound signal correction unit 102.
  • the input sound signal acquisition unit 101 acquires an input sound signal input to the speaker device 1 from the outside.
  • the input sound signal acquisition unit 101 outputs the acquired sound signal to the sound signal correction unit 102.
  • the sound signal correction unit 102 corrects the input sound signal based on the sensor signal acquired from the sensor signal acquisition unit 100.
  • the sound signal correction unit 102 corrects the acoustic characteristics of the input sound signal based on the sensor signal.
  • the acoustic characteristics here are the characteristics of the sound output from the soundboard 13, for example, the frequency characteristics of the sound output from the soundboard 13. In general, even when the exciter 12 is vibrated in the same manner, the acoustic characteristics of the output sound differ depending on the surroundings of the soundboard 13 and the material and individual differences of the soundboard 13.
  • the sound signal correction unit 102 corrects the acoustic characteristics of the input sound signal in accordance with the surrounding conditions of the soundboard 13 so that the sound characteristics are substantially constant even when the surrounding conditions change.
  • The correction data indicates, for example, the inverse of the frequency characteristics obtained when sound is output from the soundboard 13 at the same signal level for each frequency band and acquired at a predetermined position with a microphone or the like.
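The inverse-characteristic idea can be sketched numerically. This is an illustrative sketch only, working on per-band levels in dB; the band names and the measured values are hypothetical, not taken from the patent.

```python
# Sketch: derive per-band correction gains as the inverse of a measured
# frequency response, so the corrected output is flat across bands.

def inverse_correction(measured_db):
    """Return per-band correction gains (dB): the negation of each band's
    measured deviation from flat."""
    return {band: -db for band, db in measured_db.items()}

def apply_correction(levels_db, correction_db):
    """Apply per-band dB corrections to per-band signal levels."""
    return {band: levels_db[band] + correction_db[band] for band in levels_db}

# Hypothetical response measured at a microphone: a pillow damps the highs.
measured = {"low": 2.0, "mid": 0.0, "high": -6.0}
correction = inverse_correction(measured)

# Playing through the same path with the correction applied flattens it.
corrected = apply_correction(measured, correction)
print(corrected["high"])  # 0.0 -- the damped high band is restored to flat
```

Real correction data would be a frequency-dependent filter rather than three scalar gains, but the lookup-and-apply structure is the same.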
  • The sound signal correction unit 102 refers to the correction information storage unit 104, which stores information associating acoustic characteristic correction data with the situation around the soundboard 13, and corrects the input sound signal based on the referenced correction data.
  • As the situation around the soundboard 13, the correction information storage unit 104 stores correction data corresponding to each of the case where no object is placed on the plate surface of the soundboard 13 (case A), the case where a pillow is placed (case B), and the case where a pillow and a head are placed (case C).
  • The sound signal correction unit 102 determines which of cases A to C applies based on the sensor signal, selects correction data according to the result, and corrects the input sound signal using the selected correction data.
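The case determination and correction lookup described above might look like the following sketch; the weight thresholds and correction-data names are invented for illustration.

```python
# Sketch of the case A/B/C decision from the weight sensor. Thresholds
# and correction identifiers are hypothetical, not from the patent.

CORRECTION_DATA = {
    "A": "flat",            # nothing on the soundboard
    "B": "pillow_eq",       # pillow only
    "C": "pillow_head_eq",  # pillow and head
}

def classify_load(weight_kg):
    """Map the sensed weight to case A (empty), B (pillow), or C (pillow+head)."""
    if weight_kg < 0.2:      # below a pillow's weight -> nothing placed
        return "A"
    if weight_kg < 2.0:      # roughly a pillow alone
        return "B"
    return "C"               # pillow plus a resting head

def select_correction(weight_kg):
    return CORRECTION_DATA[classify_load(weight_kg)]

print(select_correction(0.0))   # flat
print(select_correction(1.0))   # pillow_eq
print(select_correction(6.5))   # pillow_head_eq
```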
  • the sound signal correction unit 102 corrects the head-related transfer function of the input sound signal based on the sensor signal, for example.
  • The sound signal correction unit 102 manipulates the position of the sound source and the direction of the sound perceived by the listener by correcting the head-related transfer function. As a result, when the listener hears sound from a position close to both ears, the uncomfortable sensation that the sound source is inside the head can be reduced.
  • The sound signal correction unit 102 refers to the correction information storage unit 104, in which correction data for the head-related transfer function is stored, and corrects the input sound signal based on the referenced correction data. For example, when the sound signal correction unit 102 determines, based on the sensor signal, which of cases A to C applies and finds that the head is placed on the soundboard 13, it corrects the input sound signal using the head-related transfer function correction data.
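One way to picture a head-related transfer function correction is as convolution with a correction impulse response. The sketch below uses a toy 3-tap kernel, not a measured HRTF, and plain-Python convolution.

```python
# Sketch: apply an HRTF-style correction by FIR convolution, truncated to
# the input length. The 3-tap kernel is a placeholder, not a real HRTF.

def apply_hrtf(signal, ir):
    """Convolve the input sound signal with a correction impulse response."""
    out = [0.0] * len(signal)
    for n in range(len(signal)):
        for k, h in enumerate(ir):
            if n - k >= 0:
                out[n] += h * signal[n - k]
    return out

impulse = [1.0, 0.0, 0.0, 0.0]      # unit impulse input
toy_ir = [0.8, 0.15, 0.05]          # hypothetical impulse response
print(apply_hrtf(impulse, toy_ir))  # [0.8, 0.15, 0.05, 0.0]
```

A unit impulse comes out as the filter's own impulse response, which is a quick sanity check that the convolution is wired correctly.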
  • The sound signal correcting unit 102 may change the volume of the input sound signal based on the sensor signal. For example, it determines which of cases A to C applies based on the sensor signal, and changes the volume when the state changes from the head being placed on the soundboard 13 (via a pillow) to the head not being placed, that is, when the listener changes from lying down to getting up. This makes it possible to mute the sound when the ear moves away from the speaker, or to increase the volume so that the sound can still be heard at a distance.
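The volume behaviour on a C-to-A/B transition could be sketched as follows; the "mute" and "boost" policies and the numeric volumes are illustrative assumptions, not values from the patent.

```python
# Sketch of the state-transition volume control: when the load state
# changes from "head on soundboard" (case C) to "no head" (case A or B),
# either mute the sound or raise the volume.

def next_volume(prev_case, new_case, volume, policy="mute"):
    """Return the new volume when the listener gets up (C -> A/B)."""
    if prev_case == "C" and new_case in ("A", "B"):
        if policy == "mute":
            return 0                      # silence once the ear leaves
        if policy == "boost":
            return min(volume * 2, 100)   # stay audible from farther away
    return volume                         # no state change of interest

print(next_volume("C", "B", 40, policy="mute"))   # 0
print(next_volume("C", "A", 40, policy="boost"))  # 80
print(next_volume("B", "B", 40))                  # 40
```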
  • the sound signal correction unit 102 may reduce the vocal of the input sound signal based on the sensor signal, for example.
  • By reducing vocals, which humans recognize more easily than other sounds, the sound signal correction unit 102 suppresses the sound output from the speaker device 1 from being heard by people other than the listener.
  • For example, the sound signal correction unit 102 reduces the vocals of the input sound signal when a sensor signal indicating illuminance shows an illuminance less than a predetermined threshold.
  • Similarly, the sound signal correction unit 102 reduces the vocals of the input sound signal when a sensor signal indicating the background noise level shows a level less than a predetermined threshold.
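The patent does not specify how vocals are reduced; one common technique for stereo material is attenuating the mid (centre) component, where vocals usually sit, while preserving the side component. A minimal sketch under that assumption:

```python
# Sketch: mid/side vocal reduction. Vocals are typically centre-panned,
# so attenuating the mid component reduces them. This is one common
# technique, assumed here for illustration, not the patent's method.

def reduce_vocals(left, right, mid_gain=0.3):
    """Attenuate the centre (mid) component while keeping the side intact."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2 * mid_gain   # vocals: attenuated
        side = (l - r) / 2             # stereo ambience: kept
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# A centre-panned sample (identical in both channels) is removed at
# mid_gain=0, while a side-only sample passes through unchanged.
l, r = reduce_vocals([1.0, 0.5], [1.0, -0.5], mid_gain=0.0)
print(l, r)  # [0.0, 0.5] [0.0, -0.5]
```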
  • the sound signal correction unit 102 may correct the auditory characteristics of the input sound signal based on the sensor signal, for example.
  • The sound signal correction unit 102 increases the signal level of high frequencies (a relatively high band of the audible range) or low frequencies (a relatively low band of the audible range) that are difficult for human hearing to perceive, and reduces the signal level in bands that human hearing perceives easily (for example, the middle band excluding the relatively high and low bands of the audible range). This suppresses the sound output from the speaker device 1 from being heard by people other than the listener.
  • the sound signal correction unit 102 corrects the audibility characteristics of the input sound signal.
  • The sound signal correction unit 102 refers to the correction information storage unit 104, in which correction data for the auditory characteristics is stored, and corrects the auditory characteristics of the input sound signal based on the referenced correction data.
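The band-wise loudness shaping described above can be sketched with per-band gains; the band edges (250 Hz and 8 kHz) and the gain values are hypothetical illustration numbers.

```python
# Sketch of the loudness shaping: raise bands that human hearing
# perceives poorly (low and high) and lower the easily perceived
# middle band. Band edges and gains are illustrative.

def band_of(freq_hz):
    if freq_hz < 250:
        return "low"
    if freq_hz > 8000:
        return "high"
    return "mid"

GAIN_DB = {"low": +4.0, "mid": -3.0, "high": +4.0}

def shape_level(freq_hz, level_db):
    """Apply this frequency's band gain to a signal level in dB."""
    return level_db + GAIN_DB[band_of(freq_hz)]

print(shape_level(100, 0.0))    # 4.0  (low band boosted)
print(shape_level(1000, 0.0))   # -3.0 (mid band reduced)
print(shape_level(12000, 0.0))  # 4.0  (high band boosted)
```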
  • the sound signal correction unit 102 may correct the auditory characteristics according to the age of the listener or another person different from the listener.
  • the sound signal correction unit 102 acquires input information indicating the age of the listener or another person different from the listener, for example, from an input information acquisition unit (not shown) that acquires input information input by an input operation or the like by the listener.
  • the auditory characteristics are corrected based on the acquired input information.
  • the correction information storage unit 104 stores, for example, a plurality of auditory characteristic correction data associated with each person's age.
  • the sound signal correction unit 102 selects auditory characteristic correction data based on the age indicated in the acquired input information, and corrects the auditory characteristic of the input sound signal using the selected correction data.
  • The sound signal correction unit 102 may change the maximum volume of the input sound signal based on the sensor signal; for example, it reduces the maximum volume of the input sound signal when the sensor signal indicating illuminance indicates an illuminance less than a predetermined threshold.
  • the content changing unit 103 changes the content of the input sound signal based on the sensor signal.
  • the content here is the content of the sound of the input sound signal, for example, a song.
  • the content changing unit 103 changes content when a sensor signal indicating illuminance indicates illuminance less than a predetermined threshold.
  • For example, the content changing unit 103 changes the content to an environmental sound so that the listener can relax at night or when the lights are turned off.
  • An environmental sound is a sound that occurs naturally in an ordinary environment, such as birdsong or the sound of a waterfall. This makes it possible to output a relaxing environmental sound when the listener is sleeping or about to sleep, and to output sound based on the input sound signal when the listener is awake during the day; that is, sound can be output according to the listener's activity state.
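The illuminance-driven content switch might be sketched as below; the lux threshold and content identifiers are invented for illustration.

```python
# Sketch of the content switch: at night (low illuminance), play a
# relaxing environmental sound instead of the requested content.
# The threshold and track names are hypothetical.

LIGHTS_OFF_LUX = 10.0

def choose_content(illuminance_lux, requested_track):
    """Select the content to play based on ambient illuminance."""
    if illuminance_lux < LIGHTS_OFF_LUX:
        return "environmental:birdsong"
    return requested_track

print(choose_content(3.0, "music:track01"))    # environmental:birdsong
print(choose_content(350.0, "music:track01"))  # music:track01
```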
  • The content changing unit 103 may change the content by selecting a sound signal from a content acquisition unit (not shown) that acquires the change-target sound signal, or from a content storage unit (not shown) that stores the change-target sound signal.
  • the correction information storage unit 104 stores correction data corresponding to acoustic characteristics, correction data corresponding to head-related transfer functions, and correction data corresponding to auditory characteristics.
  • As correction data corresponding to acoustic characteristics, a plurality of correction data may be stored, one for each of the acoustic characteristics corresponding to the object placed on the plate surface of the soundboard 13 (for example, cases A to C described above).
  • The correction information storage unit 104 may store, as correction data corresponding to the auditory characteristics, a plurality of correction data corresponding to each auditory characteristic according to the person's age.
  • FIGS. 4 to 6 are flowcharts showing an operation example of the signal processing unit 10 according to the first embodiment.
  • FIG. 4 is a flowchart showing the operation of the process in which the signal processing unit 10 corrects the acoustic characteristics.
  • FIG. 5 is a flowchart showing the operation of the process in which the signal processing unit 10 changes the content.
  • FIG. 6 is a flowchart showing the operation of the process in which the signal processing unit 10 corrects the maximum volume.
  • The sensor signal acquisition unit 100 of the signal processing unit 10 acquires a sensor signal indicating the weight of the object placed on the plate surface of the soundboard 13 (step S10).
  • the sound signal correction unit 102 determines the mounting status (for example, the above-described cases A to C) on the soundboard 13 (step S11).
  • the sound signal correction unit 102 acquires correction data corresponding to the determined mounting situation (step S12).
  • the sound signal correction unit 102 refers to the correction information storage unit 104, for example, and acquires correction data corresponding to the mounting status.
  • the sound signal correction unit 102 corrects the acoustic characteristics of the input sound signal using the acquired correction data (step S13). Note that the sound signal correcting unit 102 may correct at least one of the head-related transfer function and the auditory sensation characteristic together with the acoustic characteristic or instead of the acoustic characteristic.
  • The sensor signal acquisition unit 100 acquires a sensor signal indicating ambient illuminance (step S20).
  • the content changing unit 103 determines whether or not the illuminance is less than a predetermined threshold (step S21). If the illuminance is less than the predetermined threshold, the content changing unit 103 changes the content (step S22).
  • the content changing unit 103 may acquire a sensor signal indicating a background noise level together with or instead of the illuminance, and change the content based on at least one of the illuminance and the background noise level.
  • the sensor signal acquisition unit 100 acquires a sensor signal indicating the surrounding background noise level (step S30).
  • the sound signal correcting unit 102 determines whether or not the background noise level is less than a predetermined threshold (step S31).
  • the sound signal correction unit 102 reduces the maximum volume when the background noise level is less than a predetermined threshold (step S32).
  • The sound signal correction unit 102 may acquire a sensor signal indicating illuminance together with or instead of the background noise level, and reduce the maximum volume based on at least one of the illuminance and the background noise level.
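The Fig. 6 flow (steps S30 to S32) amounts to lowering the volume ceiling when the surroundings are quiet. A sketch with hypothetical thresholds and caps:

```python
# Sketch of steps S30-S32: read the background noise level and reduce
# the maximum volume when it is below a threshold. All numbers are
# illustrative, not from the patent.

QUIET_THRESHOLD_DB = 35.0

def max_volume(background_noise_db, normal_cap=100, quiet_cap=60):
    """Return the volume ceiling for the current background noise level."""
    if background_noise_db < QUIET_THRESHOLD_DB:
        return quiet_cap      # quiet room: lower the ceiling
    return normal_cap

def clamp_volume(requested, background_noise_db):
    """Limit a requested volume to the current ceiling."""
    return min(requested, max_volume(background_noise_db))

print(clamp_volume(90, background_noise_db=25.0))  # 60
print(clamp_volume(90, background_noise_db=50.0))  # 90
```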
  • As described above, the signal processing unit 10 controls the input sound signal based on the sensor signal indicating the surrounding situation, so the sound signal can be controlled according to that situation. Because the sound quality can be corrected even when a change in the situation would otherwise degrade it, high-quality sound can be output.
  • The speaker device 1 according to the present embodiment outputs highly directional sound by vibrating the soundboard 13, making the sound difficult to transmit to other people nearby, so that sound leakage can be suppressed.
  • In the present embodiment, the soundboard 13 is provided between the pillow 401 and the mattress 400, the sensor signal acquisition unit 100 acquires a signal indicating the weight of an object placed on the plate surface of the soundboard 13, and the sound signal correcting unit 102 controls the input sound signal based on that signal.
  • The speaker device 1 according to the present embodiment can therefore control the input sound signal depending on whether nothing, only a pillow, or a pillow and a head is placed on the plate surface of the soundboard 13.
  • Even though the sound quality differs between the cases where nothing, only a pillow, or a pillow and a head is placed on the plate surface of the soundboard 13, the speaker device 1 of the present embodiment can output sound of substantially the same quality regardless of the situation.
  • The sound signal correction unit 102 corrects at least one of the acoustic characteristics, the head-related transfer function, and the auditory characteristics of the input sound signal, so even when the acoustic characteristics differ between situations, it is possible to correct them and output sounds having substantially the same characteristics.
  • FIG. 7 is a perspective view illustrating an example of the appearance of the speaker device 1A according to the first modification of the first embodiment.
  • the soundboard 13 is composed of a right soundboard 13R and a left soundboard 13L, and the speaker device 1A outputs stereo sound.
  • The same reference symbols are attached to the same components as in the above embodiment, and their description is omitted.
  • the signal processing unit 10 controls the input sound signal based on the sensor signal, and converts the controlled input sound signal into a stereo sound signal.
  • The signal processing unit 10 outputs the stereo right sound of the stereo sound to the right exciter 12-1 and the stereo left sound to the left exciter 12-2.
  • the signal processing unit 10 corrects the stereo sound so that the crosstalk of the stereo sound is reduced.
  • correction data for reducing crosstalk is stored in the correction information storage unit 104, and the signal processing unit 10 corrects the stereo sound using the correction data, thereby reducing the crosstalk of the stereo sound.
  • the correction data is generated by acquiring sounds output from the right soundboard 13R and the left soundboard 13L with a microphone or the like.
  • From these acquired sounds, the stereo left sound component mixed into the signal acquired on the right soundboard 13R side and the stereo right sound component mixed into the signal acquired on the left soundboard 13L side are extracted.
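As a rough model of the crosstalk correction, the leakage between sides can be treated as a constant gain and pre-cancelled by inverting the 2x2 mixing matrix. Real correction data would be a measured, frequency-dependent filter; the leakage value here is hypothetical.

```python
# Sketch: pre-cancel crosstalk between the right and left soundboards by
# inverting the mixing matrix [[1, g], [g, 1]]. The leakage gain g is a
# hypothetical illustration value.

LEAK = 0.2  # assumed fraction of each channel leaking to the other side

def cancel_crosstalk(right, left, leak=LEAK):
    """Pre-subtract expected leakage so the acoustic mix comes out clean."""
    det = 1 - leak * leak
    out_r = [(r - leak * l) / det for r, l in zip(right, left)]
    out_l = [(l - leak * r) / det for r, l in zip(right, left)]
    return out_r, out_l

def acoustic_mix(drive_r, drive_l, leak=LEAK):
    """What each side actually radiates: its own drive plus leakage."""
    ear_r = [r + leak * l for r, l in zip(drive_r, drive_l)]
    ear_l = [l + leak * r for r, l in zip(drive_r, drive_l)]
    return ear_r, ear_l

drive_r, drive_l = cancel_crosstalk([1.0], [0.0])
ear_r, ear_l = acoustic_mix(drive_r, drive_l)
print(round(ear_r[0], 6), round(ear_l[0], 6))  # right-only input survives; left stays silent
```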
  • Modification 2 of the embodiment differs from the above embodiment in that the sound signal correction unit 102 controls the input sound signal based on time instead of the illuminance and the background noise level.
  • The sound signal correction unit 102 acquires time information indicating the time from a timer (not shown) of the speaker device 1 (1A), and controls the input sound signal when the acquired time reaches a predetermined time. For example, when the acquired time reaches the lights-out time, the sound signal correcting unit 102 reduces the vocals of the input sound signal. Thereby, vocals can be reduced at night or at bedtime, preventing sound leakage particularly when the sound output from the speaker device 1 is easily heard by others.
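The time-based control of Modification 2 could be sketched as follows; the lights-out window and the gain value are hypothetical settings.

```python
# Sketch: control the vocal level by time of day instead of sensors.
# The night window and gain are illustrative assumptions.

LIGHTS_OUT_START = 22  # 22:00
LIGHTS_OUT_END = 6     # 06:00

def is_lights_out(hour):
    """True during the night window, which wraps past midnight."""
    return hour >= LIGHTS_OUT_START or hour < LIGHTS_OUT_END

def vocal_gain(hour):
    """Reduce vocals at night to limit intelligible sound leakage."""
    return 0.2 if is_lights_out(hour) else 1.0

print(vocal_gain(23))  # 0.2
print(vocal_gain(14))  # 1.0
```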
  • the sound signal correction unit 102 may change the volume or maximum volume of the input sound signal or the content together with the control for reducing the vocal or instead of the control for reducing the vocal.
  • In the above embodiment, when the sound signal correction unit 102 determines that the head is placed on the soundboard 13, it corrects the input sound signal using the correction data of the head-related transfer function.
  • The sound signal correction unit 102 may correct the head-related transfer function only when the head is placed at the center of the soundboard 13. Further, the sound signal correction unit 102 may determine whether to correct the head-related transfer function according to the orientation of the head placed on the soundboard 13 (the sleeping posture, for example lying on the back or on one side).
  • The position and orientation of the head may be determined, for example, using a plurality of sensors provided symmetrically with respect to the central portion of the soundboard 13, or the sound signal correction unit 102 may determine them by analyzing imaging data acquired from a camera (not shown) that images the listener's posture.
  • The speaker device 1B in the present embodiment is applied, for example, to support the hearing of a user in bed who is in a situation where it is difficult to hear sound.
  • the speaker device 1B corrects the voice of the conversation to a sound that is easy for the user to hear when the user has a conversation with a related person (for example, a nurse, a caregiver, a doctor, a family, or the like).
  • FIG. 8 is a block diagram illustrating a configuration example of the speaker device 1B according to the second embodiment.
  • the ambient sound signal collected by the sound collection device 300B is input to the speaker device 1B.
  • the sound collection device 300B is a device that collects ambient sounds at the place where the speaker device 1B is provided, and is, for example, a microphone.
  • the ambient sound signal includes ambient environmental sounds, voices of people talking to the listener, voices spoken by the listener, and the like.
  • the ambient sound signal is a signal indicating the state of the surrounding sound, and is an example of a “situation signal”.
  • a call sound signal from the communication device 500 is input to the speaker device 1B.
  • the communication device 500 is a communication device used by a listener for a call, and is, for example, a mobile phone or a landline phone.
  • the communication device 500 is a so-called nurse call device that performs a call between a medical worker and a patient in a hospital room.
  • the call sound signal is an example of a “sound signal”.
  • the speaker device 1B includes, for example, a signal processing unit 10B, an amplifier unit 11, an exciter 12, and a soundboard 13.
  • the signal processing unit 10B acquires the ambient sound signal, the input sound signal, and the call sound signal, selects any one of the acquired signals, corrects the selected signal, and outputs it.
  • FIG. 9 is a block diagram illustrating a configuration example of the signal processing unit 10B according to the second embodiment.
  • the signal processing unit 10B includes, for example, an ambient sound signal acquisition unit 105, an input sound signal acquisition unit 101, a sound signal correction unit 102B, a content change unit 103, a correction information storage unit 104, and a call sound signal acquisition unit 106. And an output signal selection unit 107.
  • the ambient sound signal acquisition unit 105 is an example of a “situation signal acquisition unit”.
  • the call sound signal acquisition unit 106 is an example of a “sound signal acquisition unit”.
  • the output signal selection unit 107 is an example of a “signal processing unit”.
  • the ambient sound signal acquisition unit 105 acquires the ambient sound signal and outputs the acquired ambient sound signal to the output signal selection unit 107.
  • the input sound signal acquisition unit 101 acquires an input sound signal and outputs the acquired input sound signal to the output signal selection unit 107.
  • the call sound signal acquisition unit 106 acquires a call sound signal and outputs the acquired call sound signal to the output signal selection unit 107.
  • the output signal selection unit 107 selects one of the ambient sound signal from the ambient sound signal acquisition unit 105, the input sound signal from the input sound signal acquisition unit 101, and the call sound signal from the call sound signal acquisition unit 106, and outputs the selected signal to the sound signal correction unit 102B together with information (type information) indicating the type of the selected signal.
  • the type information here is information for distinguishing the ambient sound signal, the input sound signal, and the call sound signal.
  • the output signal selection unit 107 selects the call sound signal when the call sound signal contains call voice. For example, when the volume of the call sound signal is equal to or higher than a predetermined volume threshold, the output signal selection unit 107 determines that the call sound signal contains call voice.
  • the output signal selection unit 107 selects the ambient sound signal when the call sound signal does not contain call voice but the ambient sound signal contains voice. For example, the output signal selection unit 107 performs frequency analysis on the ambient sound signal, and determines that the ambient sound signal contains voice if the signal intensity in the band containing voice is equal to or greater than a predetermined intensity threshold. Alternatively, with two sound collection devices 300B arranged symmetrically with respect to the user's position, the output signal selection unit 107 may extract voice based on the directivity of the collected sound, on the principle of a stereo sound source. In this case, the output signal selection unit 107 determines that the ambient sound signal contains voice when voice at or above a predetermined volume threshold is extracted from the ambient sound signal.
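The stereo-directivity idea in the bullet above can be sketched as follows: a talker in front of the user arrives roughly in phase at two symmetrically placed collectors, so the common (mid) component approximates the frontal voice. The threshold and all names are illustrative assumptions:

```python
import numpy as np

def voice_present(mic_a, mic_b, volume_threshold=0.1):
    """Detect frontal voice from two symmetrically placed collectors by
    measuring the level of their in-phase (mid) component."""
    frontal = 0.5 * (mic_a + mic_b)        # in-phase (frontal) component
    rms = np.sqrt(np.mean(frontal ** 2))   # its level
    return bool(rms >= volume_threshold)
```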
  • the output signal selection unit 107 selects the input sound signal when neither the call sound signal nor the ambient sound signal includes sound.
  • the case where the voice is not included in the call sound signal is, for example, a case where the volume of the call sound signal is less than a predetermined volume threshold.
  • the case where voice is not included in the ambient sound signal is, for example, a case where the signal intensity in the band of the ambient sound signal that contains voice is less than a predetermined intensity threshold.
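The selection priority described in the bullets above (call voice first, then voice in the ambient signal, then the ordinary input sound signal) reduces to a short decision function; the threshold values are placeholders, not figures from the specification:

```python
def select_output(call_voice_level, ambient_voice_level,
                  volume_threshold=0.1, intensity_threshold=0.1):
    """Return which signal the output signal selection unit 107 would
    pick, using the priority: call > ambient voice > input."""
    if call_voice_level >= volume_threshold:
        return "call"
    if ambient_voice_level >= intensity_threshold:
        return "ambient"
    return "input"
```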
  • the sound signal correction unit 102B acquires the sound signal selected by the output signal selection unit 107 and corrects the acquired sound signal.
  • the sound signal correction unit 102B corrects the call voice when the output signal selection unit 107 selects the call sound signal.
  • the sound signal correction unit 102B corrects the volume of the call voice according to the user's hearing level.
  • the sound signal correction unit 102B also corrects, for example, the call voice to acoustic characteristics that are easy for the user to listen to.
  • the sound signal correction unit 102B may reproduce (output) the call voice at a speed at which the user can easily listen. Thereby, even when the surroundings are noisy or when the user is hard of hearing, it becomes easier for the user to hear the content of the call, and missed messages and confirmation items from the medical staff can be suppressed.
  • information about the user's hearing such as the user's hearing level, the acoustic characteristics that the user can easily listen to, and the speed at which the user can easily listen may be stored in advance in the correction information storage unit 104 or a storage unit (not shown).
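The volume and speed corrections driven by the stored hearing information could look like the following sketch. The gain and slow-down values would come from the correction information storage unit 104; the parameter names and defaults here are assumptions:

```python
import numpy as np

def correct_for_hearing(signal, gain_db=6.0, slow_factor=1.25):
    """Apply a per-user gain and slow playback down by resampling with
    linear interpolation (more output samples -> slower playback)."""
    gained = signal * 10 ** (gain_db / 20.0)
    n_out = int(len(signal) * slow_factor)
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, gained)
```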
  • the sound signal correction unit 102B corrects the voice included in the ambient sound signal when the output signal selection unit 107 selects the ambient sound signal.
  • the sound signal correction unit 102B extracts and outputs the voice from the ambient sound signal using, for example, a filter that passes only the band containing voice.
  • the sound signal correction unit 102B may correct the volume, acoustic characteristics, and speed of the extracted voice according to the user's hearing so that the user can easily hear it.
  • the sound signal correction unit 102B may reduce the volume when the volume of the extracted voice is equal to or higher than a predetermined volume threshold. Thereby, the volume of the sound output from the speaker device 1B is reduced, and howling that occurs when part of the output from the speaker device 1B is fed back to the microphone can be suppressed.
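The howling countermeasure in the bullet above amounts to a simple level limiter on the re-emitted voice; the threshold and target level are illustrative assumptions:

```python
import numpy as np

def limit_feedback(voice, volume_threshold=0.5, target=0.5):
    """Scale the extracted voice down when its peak reaches the
    threshold, reducing the energy fed back from the soundboard into
    the microphone."""
    peak = np.max(np.abs(voice))
    if peak >= volume_threshold:
        return voice * (target / peak)
    return voice
```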
  • the sound signal correction unit 102B may determine whether the extracted voice is the user's own voice or a voice different from the user's, and may perform processing according to the result. In this case, for example, the sound signal correction unit 102B stores in advance the frequency characteristics of the user's voice for various utterances. When the frequency characteristics of the extracted voice are similar to the stored frequency characteristics, the sound signal correction unit 102B determines that the extracted voice is the user's own voice. Otherwise, it determines that the extracted voice is not the user's own voice.
  • the sound signal correction unit 102B does not output the extracted voice to the amplifier unit 11 when the extracted voice is the user's own voice. Accordingly, since the user does not hear his or her own voice from the speaker device 1B, it is possible to prevent the user's own voice from being heard with a delay after the utterance and from sounding reverberant and harsh.
  • when the extracted voice is not the user's own voice, the sound signal correction unit 102B corrects it to a volume, acoustic characteristics, and speed that are easy for the user to hear. This makes it easier for the user to hear the voice spoken to him or her. Even when the surroundings are noisy or when the user is hard of hearing, the other party does not need to bend over the user's ear and speak loudly, and the conversation can proceed smoothly.
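One way to realize the own-voice check described above is to compare the magnitude spectrum of the extracted voice against a pre-stored spectral profile of the user's voice. The cosine-similarity threshold is an assumption, and a real system would need far more robust speaker identification:

```python
import numpy as np

def is_own_voice(voice, user_profile, threshold=0.9):
    """Return True when the extracted voice's magnitude spectrum is
    similar (by cosine similarity) to the user's stored profile."""
    spec = np.abs(np.fft.rfft(voice))
    spec = spec / (np.linalg.norm(spec) + 1e-12)
    prof = user_profile / (np.linalg.norm(user_profile) + 1e-12)
    return bool(float(spec @ prof) >= threshold)
```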
  • the sound signal correction unit 102B corrects the input sound when the output signal selection unit 107 selects the input sound signal. For example, the sound signal correction unit 102B corrects and outputs the volume, acoustic characteristics, and speed of the input sound signal according to the user's hearing so that it is easy for the user to hear.
  • the sound signal correction unit 102B may correct the input sound signal according to the signal level of the ambient sound signal (that is, the background noise level). In this case, the sound signal correction unit 102B changes the upper limit of the maximum volume of the input sound signal, for example, as in the first embodiment.
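The background-noise-dependent ceiling mentioned here can be sketched as a piecewise-linear mapping from noise level to the maximum-volume limit; all dB values are illustrative assumptions, not figures from the specification:

```python
def max_volume_limit(noise_db, quiet_limit_db=55.0, loud_limit_db=75.0,
                     quiet_db=30.0, loud_db=60.0):
    """Interpolate the maximum-volume ceiling between a quiet-room limit
    and a noisy-room limit according to the background noise level."""
    if noise_db <= quiet_db:
        return quiet_limit_db
    if noise_db >= loud_db:
        return loud_limit_db
    frac = (noise_db - quiet_db) / (loud_db - quiet_db)
    return quiet_limit_db + frac * (loud_limit_db - quiet_limit_db)
```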
  • FIG. 10 is a flowchart illustrating an operation example of the signal processing unit 10B according to the second embodiment.
  • the ambient sound signal acquisition unit 105 of the signal processing unit 10B acquires the ambient sound signal (step S110), the input sound signal acquisition unit 101 acquires the input sound signal (step S111), and the call sound signal acquisition unit 106 acquires the call sound signal (step S112).
  • the output signal selection unit 107 determines whether or not the call sound signal contains call voice (step S113). If the call sound signal contains call voice, the output signal selection unit 107 outputs the call sound signal together with its type information to the sound signal correction unit 102B.
  • the sound signal correction unit 102B acquires the call sound signal from the output signal selection unit 107, and corrects the acquired call sound signal (step S114). When the call sound signal contains no call voice, the output signal selection unit 107 determines whether or not the ambient sound signal contains voice (step S115). When the ambient sound signal contains voice, the output signal selection unit 107 outputs the ambient sound signal together with its type information to the sound signal correction unit 102B. The sound signal correction unit 102B acquires the ambient sound signal from the output signal selection unit 107, and corrects the voice included in the acquired ambient sound signal (step S116). When the ambient sound signal contains no voice, the output signal selection unit 107 outputs the input sound signal together with its type information to the sound signal correction unit 102B.
  • the sound signal correction unit 102B acquires the input sound signal from the output signal selection unit 107, and corrects the acquired input sound signal (step S117).
  • the method by which the sound signal correction unit 102B corrects the input sound signal is, for example, correction based on the background noise level described in the first embodiment.
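One pass of the Fig. 10 flow (steps S110-S117) can be condensed into a single function; the predicates and the corrector are injected as parameters so the sketch stays independent of any particular detection method:

```python
def process_cycle(ambient, input_sig, call_sig,
                  has_call_voice, has_ambient_voice, correct):
    """S113: call voice present -> S114 correct the call signal.
    S115: ambient voice present -> S116 correct the ambient signal.
    Otherwise -> S117 correct the input signal."""
    if has_call_voice(call_sig):
        return correct(call_sig, "call")
    if has_ambient_voice(ambient):
        return correct(ambient, "ambient")
    return correct(input_sig, "input")
```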
  • the speaker device 1B according to the second embodiment corrects the voice when voice is included in the ambient sound signal.
  • the speaker device 1B according to the second embodiment can support the user so that the content of a conversation is easy to hear even when the ambient noise is large or the user is hard of hearing.
  • the sound to be output can be controlled according to the surrounding situation.
  • a program for realizing all or part of the functions of the speaker device 1 (1A, 1B) and the signal processing unit 10 (10B) according to the present invention may be recorded on a computer-readable recording medium, and the processing may be performed by causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a WWW system having a homepage providing environment (or display environment).
  • the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
  • the “computer-readable recording medium” also includes a medium that holds the program for a certain period of time, such as a volatile memory (RAM) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • the program may realize only a part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the above-described functions in combination with a program already recorded in the computer system.

Abstract

According to one embodiment, a speaker device includes a situation signal acquisition unit, a sound signal acquisition unit, a signal processing unit, an exciter, and a soundboard. The situation signal acquisition unit acquires a situation signal indicating the surrounding situation. The sound signal acquisition unit acquires a sound signal to be input to the speaker device. The signal processing unit controls the sound signal based on the situation signal. The exciter vibrates in accordance with the sound signal controlled by the signal processing unit. The soundboard is connected to the exciter and outputs sound by vibrating in its thickness direction in accordance with the vibration of the exciter.
PCT/JP2018/023270 2018-02-07 2018-06-19 Speaker device and control method WO2019155650A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018020393 2018-02-07
JP2018-020393 2018-02-07

Publications (1)

Publication Number Publication Date
WO2019155650A1 true WO2019155650A1 (fr) 2019-08-15

Family

ID=67548231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/023270 WO2019155650A1 (fr) 2018-02-07 2018-06-19 Speaker device and control method

Country Status (1)

Country Link
WO (1) WO2019155650A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3102326A1 * 2019-10-18 2021-04-23 Parrot Faurecia Automotive Method for processing a signal of an acoustic emission system of a vehicle, and vehicle comprising this acoustic emission system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH052498U (ja) * 1991-06-25 1993-01-14 株式会社日立ホームテツク Planar heating device
JP2006041843A (ja) * 2004-07-26 2006-02-09 Toshiba Corp Bone conduction speaker device
JP2013070291A (ja) * 2011-09-24 2013-04-18 Aisin Seiki Co Ltd Vehicle sound contour emphasizing device



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP