WO2019069743A1 - Audio control device, ultrasonic speaker, and audio system - Google Patents


Info

Publication number
WO2019069743A1
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasonic
speaker
audio
audio controller
sound
Prior art date
Application number
PCT/JP2018/035366
Other languages
English (en)
Japanese (ja)
Inventor
Yoichi Ochiai (落合 陽一)
Taiichiro Murakami (村上 泰一郎)
Original Assignee
Pixie Dust Technologies, Inc. (ピクシーダストテクノロジーズ株式会社)
Priority date
Filing date
Publication date
Priority claimed from JP2017193330A (JP6329679B1)
Priority claimed from JP2017193792A (JP7095857B2)
Priority claimed from JP2017193373A (JP6330098B1)
Priority claimed from JP2018081202A (JP7095863B2)
Priority claimed from JP2018082010A (JP2019068396A)
Application filed by Pixie Dust Technologies, Inc. (ピクシーダストテクノロジーズ株式会社)
Publication of WO2019069743A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04R — LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 — Circuits for transducers, loudspeakers or microphones
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04S — STEREOPHONIC SYSTEMS
    • H04S7/00 — Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • the present invention relates to an audio controller, an ultrasonic speaker, and an audio system.
  • in a typical surround audio system, each speaker is placed around the listener.
  • Such an audio system can reproduce realistic sound by assigning a plurality of channels corresponding to an audio signal input from a sound source to each speaker.
  • Japanese Patent Application Laid-Open No. 2006-270522 discloses a technique for setting the mixing coefficient of each speaker according to the position and direction of the listener.
  • An object of the present invention is to remove restrictions on the use environment of the audio system.
  • An audio controller connectable to at least one ultrasonic speaker comprising a plurality of ultrasonic transducers and to a sound source, the audio controller comprising: means for inputting an audio signal from the sound source; and control means for generating, based on the audio signal, a control signal that individually controls each ultrasonic transducer so that the ultrasonic transducers emit ultrasonic waves having phase differences that focus at at least one focal position, and for outputting the control signal to each ultrasonic transducer.
  • According to the present invention, restrictions on the use environment of the audio system can be removed.
  • FIG. 15 is a schematic view of the sound pressure level information and the first surround pan parameter referred to in the process of FIG. 14.
  • FIG. 16 is a schematic view of the sound pressure level information divided into the first to third frequency bands in the process of FIG. 14.
  • FIG. 17 is a schematic view of the second surround pan parameter generated in the process of FIG. 14.
  • FIG. 18 is an explanatory diagram of the outline of the second embodiment.
  • FIG. 1 is a system configuration diagram of the audio system of the first embodiment.
  • FIG. 2 is a block diagram showing the configuration of the audio system of FIG.
  • the audio system 1 is installed in the use environment SP.
  • the audio system 1 is located in front of the listener L.
  • the audio system 1 includes an audio controller 10, an ultrasonic speaker 21, a loudspeaker 22, a sound source 23, a camera 24, a position detection unit 25, a woofer 26, and a display 27.
  • the audio controller 10 is an example of an information processing apparatus that controls a speaker set (the ultrasonic speaker 21, the loudspeaker 22, and the woofer 26).
  • the audio controller 10 includes a storage device 11, a processor 12, an input / output interface 13, and a communication interface 14.
  • the storage device 11 is configured to store programs and data.
  • the storage device 11 is, for example, a combination of a read only memory (ROM), a random access memory (RAM), and a storage (for example, a flash memory or a hard disk).
  • the programs include, for example, the following programs.
  • An operating system (OS) program
  • An application program that executes information processing (for example, a control application for controlling the audio system 1)
  • the data includes, for example, the following data.
  • A database referenced in the information processing
  • Data obtained by executing the information processing (that is, the execution results of the information processing)
  • the processor 12 is configured to realize the functions of the audio controller 10 by executing a program stored in the storage device 11.
  • the processor 12 is an example of a computer.
  • the input / output interface 13 is configured to receive an input signal from input devices connected to the audio controller 10 (for example, the sound source 23, the camera 24, the position detection unit 25, and the display 27), and to output an output signal to output devices connected to the audio controller 10 (for example, the ultrasonic speaker 21 and the loudspeaker 22).
  • Communication interface 14 is configured to control communication between audio controller 10 and a server (not shown).
  • the ultrasonic speaker 21 is configured to emit an ultrasonic wave under the control of the audio controller 10.
  • the loudspeakers 22 and the woofer 26 are configured to generate an audible sound under the control of the audio controller 10.
  • the sound source 23 is configured to provide an audio signal to the audio controller 10.
  • the sound source 23 includes, for example, the following:
  • TV
  • Audio media player (cassette player, CD (Compact Disc) player, DVD (Digital Versatile Disc) player, Blu-ray Disc player)
  • Digital audio player
  • the camera 24 is configured to obtain image information of the usage environment SP.
  • the camera 24 is, for example, a CMOS (Complementary MOS) camera.
  • the position detection unit 25 is configured to detect the position of the listener L.
  • the position detection unit 25 is, for example, an infrared sensor.
  • the infrared sensor emits infrared light and, upon receiving reflected light of infrared light, generates an electrical signal according to the reflected light.
  • the position information indicates the position of the listener L (for example, the relative position with respect to the infrared sensor).
  • the processor 12 specifies the position of the listener L by generating three-dimensional coordinates indicating the relative position of the listener L with respect to the ultrasonic speaker 21 based on the position information acquired by the position detection unit 25.
  • the display 27 has a function of receiving a user's instruction to the audio controller 10 and a function of displaying information related to the audio system 1.
  • the display 27 is, for example, any of the following: a liquid crystal display, an organic EL (Electro Luminescence) display, or a touch panel display.
  • FIG. 3 is a schematic block diagram of the ultrasonic speaker of FIG.
  • a cover 21 a (FIG. 3A) is disposed on the radiation surface of the ultrasonic speaker 21.
  • the radiation surface on the housing 21b is exposed (FIG. 3B).
  • a phased array FA composed of a plurality of ultrasonic transducers 21c is disposed on the radiation surface.
  • the plurality of ultrasonic transducers 21c are arranged in the XZ plane (hereinafter referred to as "array plane").
  • the ultrasonic speaker 21 includes a drive unit (not shown) for driving each ultrasonic transducer 21 c.
  • the drive unit drives the plurality of ultrasonic transducers 21 c individually.
  • Each ultrasonic transducer 21 c vibrates by the drive of the drive unit.
  • An ultrasonic wave is generated by the vibration of each ultrasonic transducer 21c.
  • the ultrasonic waves emitted from the plurality of ultrasonic transducers 21c propagate in space and focus at a focal point in space. The focused ultrasound waves form an audible sound source.
  • FIG. 4 is an explanatory view of the outline of the first embodiment.
  • the sound source 23 provides an audio signal to the audio controller 10.
  • the audio controller 10 receives an audio signal from the sound source 23.
  • the audio signal includes sound pressure level frequency characteristics and a first surround pan parameter.
  • the audio controller 10 controls the first sound pressure level of the ultrasonic speaker 21, the second sound pressure level of the loudspeaker 22, and the third sound pressure of the woofer 26 based on the frequency characteristic of the audio signal and the first surround pan parameter. Determine the level.
  • the audio controller 10 generates a first speaker control signal based on the first sound pressure level, and outputs the first speaker control signal to the ultrasonic speaker 21.
  • the audio controller 10 generates a second speaker control signal based on the second sound pressure level and outputs it to the loudspeaker 22.
  • the audio controller 10 generates a third speaker control signal based on the third sound pressure level, and outputs the third speaker control signal to the woofer 26.
  • the ultrasonic speaker 21 emits an ultrasonic wave based on the first speaker control signal.
  • the loudspeakers 22 emit sound waves based on the second loudspeaker control signal.
  • the woofer 26 emits an acoustic wave based on the third speaker control signal. Thereby, a surround environment corresponding to the audio signal is constructed.
  • the ultrasonic speaker 21 emits an ultrasonic wave modulated by a predetermined modulation method.
  • the modulation scheme is, for example, any of the following: AM (Amplitude Modulation), FM (Frequency Modulation), or PM (Phase Modulation).
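To illustrate how an audible signal might ride on an ultrasonic carrier via AM modulation, here is a minimal Python sketch. The 40 kHz carrier frequency, sample rate, modulation depth, and function name are assumptions for illustration only and are not taken from the patent.

```python
import math

CARRIER_HZ = 40_000.0   # assumed ultrasonic carrier frequency (Hz)
SAMPLE_RATE = 400_000   # assumed sample rate; must exceed 2x the carrier

def am_modulate(audio, depth=0.8, fc=CARRIER_HZ, fs=SAMPLE_RATE):
    """AM-modulate audio samples a(n) in [-1, 1] onto a carrier:
    s(n) = (1 + depth * a(n)) * sin(2*pi*fc*n/fs).

    Nonlinear propagation in air demodulates the envelope, so an
    audible reproduction of a(n) appears along the ultrasound beam
    or at its focus.
    """
    return [(1.0 + depth * a) * math.sin(2.0 * math.pi * fc * n / fs)
            for n, a in enumerate(audio)]
```

With silent input the output is the bare carrier (peak amplitude 1); a full-scale input raises the envelope to at most 1 + depth.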
  • the ultrasonic speaker 21 gives a phase difference to the ultrasonic waves emitted from the respective ultrasonic transducers 21c by individually controlling the drive timings of the plurality of ultrasonic transducers 21c.
  • the focal position and the focal number depend on this phase difference. That is, the ultrasonic speaker 21 can change the focal position and the focal number by controlling the phase difference.
  • FIG. 5 is an explanatory diagram of a method of determining the drive timing of the ultrasonic speaker of FIG.
  • the storage device 11 stores coordinates (x (n), y (n), z (n)) indicating the position of each ultrasonic transducer 21c (n) with respect to a reference point.
  • n is an identifier (a positive integer) of the ultrasonic transducer 21c.
  • the processor 12 determines focal point coordinates (xfp, yfp, zfp) indicating the relative position of the focal point FP with respect to the reference point, as shown in FIG.
  • the processor 12 calculates the distance r (n) between the ultrasonic transducer 21c (n) and the focal point FP based on the coordinates (x (n), y (n), z (n)) of the ultrasonic transducer 21c (n) stored in the storage device 11 and the focal coordinates (xfp, yfp, zfp).
  • the processor 12 calculates the drive time difference ΔT (n + 1), that is, the time difference between the drive timing of the (n + 1)th ultrasonic transducer 21c (n + 1) and the drive timing of the nth ultrasonic transducer 21c (n), using Equation 1.
  • ΔT (n + 1) = (r (n) − r (n + 1)) / c … (Equation 1)
  • c: speed of sound
  • the processor 12 calculates the drive time difference ΔT (n + 1) of each ultrasonic transducer 21c (n + 1) using the focal point coordinates (xfp, yfp, zfp) and the coordinates (x (n + 1), y (n + 1), z (n + 1)) stored in the storage device 11.
  • the processor 12 supplies a drive signal to each ultrasonic transducer 21c according to the drive time difference ΔT (n + 1).
  • Each ultrasonic transducer 21 c is driven according to this drive signal.
  • the ultrasonic waves emitted from the ultrasonic transducers 21c have a phase difference corresponding to the driving time difference ⁇ T (n + 1), and thus are focused at the focal point FP.
  • the ultrasound focused at the focal point FP forms a sound source.
  • An audible sound is generated from this sound source. That is, the ultrasonic speaker 21 can generate an audible sound at any position.
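The delay-and-sum focusing described above (Equation 1) can be sketched in Python. This is a minimal illustration, not the patent's implementation; the function name `drive_delays`, the example array geometry, and the assumed speed of sound (343 m/s) are not from the source.

```python
import math

SOUND_SPEED = 343.0  # speed of sound c in air (m/s), assumed room temperature

def drive_delays(transducer_xyz, focal_xyz, c=SOUND_SPEED):
    """Per-transducer emission delays (seconds) so that the ultrasound
    from every transducer arrives at the focal point simultaneously.

    The transducer farthest from the focus (largest r(n)) fires first
    (delay 0); every other transducer is delayed by its path-length
    advantage divided by c. Successive delays then differ by
    (r(n) - r(n+1)) / c, matching Equation 1.
    """
    dists = [math.dist(p, focal_xyz) for p in transducer_xyz]
    r_max = max(dists)
    return [(r_max - r) / c for r in dists]
```

For a symmetric linear array focused on its axis, the center element (closest to the focus) receives the largest delay, which matches the "from both ends toward the center" drive order of operation example 1.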
  • the ultrasonic speaker 21 can change the traveling range in which the sound wave of the audible sound travels by changing the focal position.
  • the distribution of the audible range in which the listener L can hear the audible sound forms a substantially rotationally symmetrical shape about the focal point FP.
  • the audible range is defined by the combination of the direction or angle at which the audible sound travels with the ultrasound beam and the distance between the focal point FP and the listener L.
  • the audible range is determined by the magnitude relationship between the environmental sound of the use environment of the ultrasonic speaker 21 and the volume of the audible sound.
  • the volume of the audible sound is determined by the amplitude or modulation of the ultrasonic wave emitted from the ultrasonic transducer 21c.
  • the processor 12 can change the audible range by adjusting the amplitude or modulation of the ultrasound.
  • FIG. 6 is an explanatory diagram of an operation example 1 of the ultrasonic speaker of the first embodiment.
  • FIG. 7 is a diagram showing a sound source formed in the operation example 1 of FIG. In the operation example 1, an ultrasonic wave is focused on one focal point.
  • the ultrasonic transducers 21ca to 21ci vibrate with a time difference in order from both ends toward the center.
  • the ultrasonic speaker 21 emits an ultrasonic wave USW1 having a phase difference according to the time difference of vibration.
  • the ultrasound USW1 is focused at a focal point FP1 separated by a focal distance d1 from the center of the phased array FA.
  • the ultrasonic speaker 21 forms a point sound source SS1 at the focal point FP1.
  • the focal point FP1 is located at the ear of the listener L
  • the point sound source SS1 is formed at the ear of the listener L.
  • the listener L can hear the audible sound from the point sound source SS1 at his ear.
  • FIG. 8 is an explanatory diagram of an operation example 2 of the ultrasonic speaker of the first embodiment.
  • FIG. 9 is a diagram showing a sound source formed in the operation example 2 of FIG. In the operation example 2, ultrasonic waves are focused on a plurality of focal points.
  • the ultrasonic transducers 21ca to 21ci are divided into two groups G1 and G2.
  • the group G1 is composed of ultrasonic transducers 21ca to 21ce.
  • the group G2 is composed of ultrasonic transducers 21cf to 21ci.
  • the group G1 (ultrasonic transducers 21ca to 21ce) vibrates with a time difference in order from both ends toward the center.
  • the ultrasonic speaker 21 emits an ultrasonic wave USW2a having a phase difference according to the time difference of vibration.
  • the ultrasound USW2a is focused at a focal point FP2a separated by a focal distance d2a from the center of the phased array FA.
  • the group G2 (ultrasonic transducers 21cf to 21ci) vibrates with a time difference in order from both ends toward the center.
  • the ultrasonic speaker 21 emits an ultrasonic wave USW2b having a phase difference according to the time difference of vibration.
  • the ultrasound USW2b is focused at a focal point FP2b which is separated from the center of the phased array FA by a focal distance d2b.
  • the ultrasonic speaker 21 forms point sound sources SS2a and SS2b at the focal points FP2a and FP2b, respectively.
  • the focal point FP2a is located at the ear of the listener L1
  • the point sound source SS2a is formed at the ear of the listener L1.
  • the listener L1 can hear the audible sound from the point sound source SS2a at the ear.
  • the focal point FP2b is located at the ear of the listener L2
  • the point sound source SS2b is formed at the ear of the listener L2.
  • the listener L2 can hear an audible sound from the point sound source SS2b at the ear.
  • the ultrasonic speaker 21 can also form point sound sources at three or more focal points.
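The grouping into G1 and G2 above suggests a simple multi-focus scheme: split the array into one contiguous group per focal point and focus each group independently. The sketch below illustrates this under assumptions not stated in the patent (equal contiguous groups, the function name, and the speed of sound).

```python
import math

def group_focus_delays(transducer_xyz, focal_points, c=343.0):
    """Split the phased array into one contiguous group per focal
    point (as in operation example 2, groups G1 and G2) and compute
    delay-and-sum focusing delays within each group.

    Returns a list of (group_index, delay_seconds), one entry per
    transducer, in array order.
    """
    n, k = len(transducer_xyz), len(focal_points)
    size = -(-n // k)  # ceiling division: transducers per group
    result = []
    for g, fp in enumerate(focal_points):
        members = transducer_xyz[g * size:(g + 1) * size]
        dists = [math.dist(p, fp) for p in members]
        r_max = max(dists)
        result.extend((g, (r_max - r) / c) for r in dists)
    return result
```

With nine transducers and two focal points, the split is 5 + 4, mirroring G1 (21ca to 21ce) and G2 (21cf to 21ci).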
  • FIG. 10 is an explanatory diagram of an operation example 3 of the ultrasonic speaker of the first embodiment.
  • FIG. 11 is a diagram showing a sound source formed in the operation example 3 of FIG.
  • in the operation example 3, a focal distance d3a is set that is sufficiently longer than the focal distance d1 of the operation example 1 and the focal distances d2a and d2b of the operation example 2.
  • the ultrasonic transducers 21ca to 21ci vibrate at substantially the same time, whereby the ultrasonic speaker 21 emits ultrasonic waves USW3 having substantially no phase difference.
  • the ultrasonic wave USW3 forms an ultrasonic beam USB3 having high directivity in the focal point FP3 direction.
  • the ultrasound beam forms a beam-like sound source. That is, a beam of audible sound exists along the ultrasound beam.
  • the ultrasonic beam USB3 forms a beam-like sound source SS3. Therefore, the listener L hears the audible sound from the beam-like sound source SS3 (that is, the sound source formed along the ultrasonic beam USB3) as approaching from the direction of the ultrasonic speaker 21.
  • FIG. 12 is an explanatory diagram of an operation example 4 of the ultrasonic speaker of the first embodiment.
  • FIG. 13 is a diagram showing a sound source formed in the operation example 4 of FIG.
  • the ultrasonic transducers 21ca to 21ci vibrate with a time difference in order from one end to the other, whereby the ultrasonic speaker 21 radiates the ultrasonic wave USW4 obliquely with respect to the direction (Y + direction) orthogonal to the array plane.
  • the ultrasonic wave USW4 forms an ultrasonic beam USB4 having high directivity in the direction of the focal point FP4, which is located obliquely with respect to the direction (Y + direction) orthogonal to the array plane.
  • the ultrasonic beam USB4a emitted by the ultrasonic speaker 21 forms a beam-like sound source SS4a and is reflected by the reflection member RM.
  • the reflected beam USW4b reflected by the reflecting member RM forms a beam-like sound source SS4b.
  • the sound source SS4b approaches the listener L, who is located to the side of the reflection member RM, from a direction different from that of the ultrasonic speaker 21. To the listener L, the audible sound therefore seems to come from the wall at the side.
  • when the reflecting member RM is a wall having a surface that specularly reflects the ultrasonic wave USW4a, the sound source SS4b approaches the listener L, who is located in the reflecting direction of the ultrasonic beam USB4a with respect to the reflecting member RM, from the wall behind the listener. To the listener L, the audible sound from the sound source SS4b therefore seems to be emitted from the wall behind.
  • when the reflecting member RM has a surface that diffuses the ultrasonic beam USB4a, the reflected beam USW4b is diffused from the reflecting member RM over a wide angle. Regardless of the position of the listener L, the audible sound from the sound source SS4b therefore seems to be emitted over a wide range.
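The oblique radiation of operation example 4 corresponds to a linearly progressive firing delay across the array. A minimal sketch follows, assuming a uniform element pitch and a steering angle measured from the array normal; neither the pitch, the angle convention, nor the function name comes from the source.

```python
import math

def steering_delays(pitch_m, n_elements, angle_deg, c=343.0):
    """Delays that tilt the emitted beam by angle_deg away from the
    array normal (the Y+ direction): each element fires
    pitch * sin(angle) / c seconds after its neighbor, so the common
    wavefront leaves the array obliquely, as in operation example 4.
    """
    step = pitch_m * math.sin(math.radians(angle_deg)) / c
    return [i * step for i in range(n_elements)]
```

A zero angle gives zero delays (broadside emission, as in operation example 3); a larger angle gives a larger per-element step.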
  • FIG. 14 is a flowchart of processing of control of the audio system according to the first embodiment.
  • FIG. 15 is a schematic view of sound pressure level information and first surround pan parameters referred to in the process of FIG.
  • FIG. 16 is a schematic view of sound pressure level information divided into the first to third frequency bands in the process of FIG.
  • FIG. 17 is a schematic view of a second surround pan parameter generated in the process of FIG.
  • the sound source 23 outputs an audio signal (S200). Specifically, the sound source 23 encodes an audio signal and outputs the encoded audio signal to the audio controller 10.
  • the audio signal includes sound pressure level information (FIG. 15A) of the sound to be reproduced and a first surround pan parameter (FIG. 15B).
  • FIG. 15A is an example of sound pressure level information.
  • the horizontal axis is frequency (Hz) and the vertical axis is sound pressure level (dB).
  • FIG. 15B is an example of a first surround pan parameter of 5.1 ch surround mode.
  • the first surround pan parameter indicates the sound pressure level balance (that is, the panning) of the center speaker (C), the right front speaker (R), the left front speaker (L), the right surround speaker (RS), the left surround speaker (LS), and the woofer (LFE).
  • the audio controller 10 executes acquisition of usage environment information (S100).
  • the processor 12 generates layout information indicating the layout of the use environment SP.
  • the layout information includes information indicating the three-dimensional size of the use environment SP and information indicating the three-dimensional shape.
  • the camera 24 captures image information of the usage environment SP.
  • the processor 12 applies three-dimensional modeling to the image information captured by the camera 24 to generate layout information indicating the layout of the use environment SP, and stores the layout information in the storage device 11.
  • the processor 12 stores layout information (for example, three-dimensional CAD data) of the use environment SP in the storage device 11 via the input / output interface 13 or the communication interface 14.
  • the position detection unit 25 detects the position of the listener L by emitting infrared light and receiving reflected light of infrared light.
  • the processor 12 specifies the relative position by generating three-dimensional coordinates indicating the relative position of the listener L with respect to the ultrasonic speaker 21 based on the electrical signal generated by the position detection unit 25.
  • the audio controller 10 executes input (S101) of an audio signal. Specifically, the processor 12 inputs the audio signal output from the sound source 23 in step S200.
  • the audio controller 10 decodes the audio signal (S102). Specifically, the processor 12 extracts sound pressure level information (FIG. 15A) and a first surround pan parameter (FIG. 15B) from the audio signal by decoding the audio signal. The processor 12 stores the sound pressure level information and the first surround pan parameter in the storage device 11.
  • after step S102, the audio controller 10 executes determination of the focal position (S103).
  • the processor 12 specifies the relative position of the listener L with respect to the ultrasonic speaker 21 based on the detection result of the position detection unit 25 in step S100, and determines the position of the focal point FP1 of FIG. 7 based on the identified relative position.
  • similarly, the processor 12 specifies the relative position of each listener with respect to the ultrasonic speaker 21 based on the detection result of the position detection unit 25 in step S100, and determines the positions of the plurality of focal points FP2a and FP2b of FIG. 9 based on the identified relative positions.
  • the processor 12 specifies the relative position of the reflective member RM with respect to the ultrasonic speaker 21 based on the layout information generated in step S100, and determines the position of the focal point FP3 of FIG. 11 based on the identified relative position.
  • when generating an audible sound in the space behind the listener L, the processor 12 specifies the relative position of the reflective member RM based on the layout information generated in step S100, and determines the position of the focal point FP4a of FIG. 13 based on the identified relative position.
  • the audio controller 10 executes generation of a second surround pan parameter (S104). Specifically, the processor 12 divides the frequency characteristics of the sound pressure level information stored in the storage device 11 in step S102 into a first frequency band B1 to a third frequency band B3 (FIG. 16).
  • the first frequency band B1 is a frequency band equal to or higher than the first frequency threshold TH1.
  • the processor 12 determines the first frequency band B1 based on the output characteristic of the ultrasonic speaker 21.
  • for the frequency components constituting the first frequency band B1, the processor 12 determines the sound pressure level of the ultrasonic speaker 21, the sound pressure level of the loudspeaker 22, and the sound pressure level of the woofer 26 so that the sound pressure level of the ultrasonic speaker 21 is the highest and the sound pressure level of the woofer 26 is the lowest.
  • the second frequency band B2 is a frequency band between the second frequency threshold TH2 and the first frequency threshold TH1.
  • the processor 12 determines a second frequency band B2 based on the output characteristic of the loudspeaker 22.
  • for the frequency components constituting the second frequency band B2, the processor 12 determines the sound pressure level of the ultrasonic speaker 21, the sound pressure level of the loudspeaker 22, and the sound pressure level of the woofer 26 so that the sound pressure level of the loudspeaker 22 is the highest and the sound pressure level of the woofer 26 is the lowest.
  • the third frequency band B3 is a frequency band equal to or lower than the second frequency threshold TH2.
  • the processor 12 determines the third frequency band B3 based on the output characteristics of the woofer 26.
  • for the frequency components constituting the third frequency band B3, the processor 12 determines the sound pressure level of the ultrasonic speaker 21, the sound pressure level of the loudspeaker 22, and the sound pressure level of the woofer 26 so that the sound pressure level of the woofer 26 is the highest and the sound pressure level of the ultrasonic speaker 21 is the lowest.
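The three band rules of step S104 can be condensed into a small routing function. The threshold values TH1 and TH2 below are placeholder assumptions; the patent derives them from the output characteristics of the ultrasonic speaker, the loudspeaker, and the woofer, and the function name is illustrative.

```python
# Assumed example thresholds (Hz); the patent derives TH1 and TH2 from
# the speakers' output characteristics, not from fixed values.
TH1 = 2_000.0
TH2 = 200.0

def loudest_speaker(freq_hz, th1=TH1, th2=TH2):
    """Return which speaker carries the highest sound pressure level
    for a given frequency component, per the band rules of step S104:
    B1 (>= TH1) -> ultrasonic speaker, B2 (between TH2 and TH1) ->
    loudspeaker, B3 (<= TH2) -> woofer.
    """
    if freq_hz >= th1:
        return "ultrasonic"
    if freq_hz > th2:
        return "loudspeaker"
    return "woofer"
```

The full step S104 additionally sets the lowest-level speaker per band and blends these levels into the second surround pan parameter; this sketch shows only the band-to-speaker assignment.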
  • the processor 12 determines the second surround pan parameter (FIG. 17) based on the determined sound pressure level (the sound pressure level of the ultrasonic speaker 21, the sound pressure level of the loudspeaker 22, and the sound pressure level of the woofer 26).
  • the second surround pan parameter of FIG. 17 indicates the sound pressure level balance of a speaker set including the ultrasonic speaker 21: the ultrasonic speaker (US), the right front speaker (R), the left front speaker (L), the right surround speaker (RS), the left surround speaker (LS), and the woofer (LFE).
  • FIG. 17A shows an example of a second surround pan parameter when the sound pressure level of the first frequency band B1 is higher than the sound pressure level of the second frequency band B2.
  • in FIG. 17A, the sound pressure level of the ultrasonic speaker (US) is higher than the sound pressure levels of the loudspeakers 22 (the right front speaker (R), the left front speaker (L), the right surround speaker (RS), and the left surround speaker (LS)). That is, when the sound pressure level of the first frequency band B1 is higher than the sound pressure level of the second frequency band B2, the audio controller 10 emphasizes the sound of the ultrasonic speaker 21 more than the sound of the loudspeaker 22.
  • FIG. 17B shows an example of a second surround pan parameter when the sound pressure level in the first frequency band B1 is lower than the sound pressure level in the second frequency band B2.
  • in FIG. 17B, the sound pressure level of the ultrasonic speaker (US) is lower than the sound pressure levels of the loudspeakers 22 (the right front speaker (R), the left front speaker (L), the right surround speaker (RS), and the left surround speaker (LS)). That is, when the sound pressure level of the first frequency band B1 is lower than the sound pressure level of the second frequency band B2, the audio controller 10 emphasizes the sound of the loudspeaker 22 more than the sound of the ultrasonic speaker 21.
  • the audio controller 10 executes determination of a modulation parameter (S105). Specifically, the processor 12 determines the modulation parameter based on the sound pressure level of the ultrasonic speaker (US) among the second surround pan parameters determined in step S104 and on the focal position determined in step S103.
  • the modulation parameter is a parameter related to the level of AM modulation of the ultrasonic wave emitted from the ultrasonic speaker 21. The amplitude of the ultrasound depends on the modulation parameter.
  • after step S105, the audio controller 10 generates speaker control signals (S106).
  • the processor 12 generates a first speaker control signal for controlling the ultrasonic speaker 21 based on the sound pressure level of the ultrasonic speaker (US) among the second surround pan parameters, the focal position determined in step S103, and the modulation parameter determined in step S105.
  • the processor 12 generates a second speaker control signal for controlling the loudspeaker 22 based on the sound pressure levels of the right front speaker (R), the left front speaker (L), the right surround speaker (RS), and the left surround speaker (LS) among the second surround pan parameters determined in step S104.
  • the processor 12 generates a third speaker control signal for controlling the woofer 26 based on the sound pressure level of the woofer (LFE) among the second surround panning parameters.
  • the processor 12 outputs the first to third speaker control signals to the ultrasonic speaker 21, the loudspeaker 22 and the woofer 26, respectively.
  • the ultrasonic speaker 21 emits an ultrasonic wave based on the first speaker control signal.
  • the ultrasonic waves emitted from the ultrasonic speaker 21 are focused at the focal point FP determined in step S103.
  • the focused ultrasound forms an audible sound source at the focal point FP. That is, the sound source formed at the focal point FP generates an audible sound.
  • the loudspeaker 22 and the woofer 26 generate audible sounds, with themselves as sound sources, based on the second speaker control signal and the third speaker control signal, respectively.
  • the sound source 23 repeatedly executes the process of step S200 until the reproduction is completed (S201-NO).
  • the audio controller 10 repeatedly executes the processes of steps S100 to S106 until the reproduction is completed (S107-NO).
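The loop of steps S100 to S107 (FIG. 14) can be sketched as follows. Every callable below is a hypothetical stand-in for a processing stage described in the text, not an API defined by the patent; the sketch only shows the control flow.

```python
def run_audio_controller(get_environment, next_audio_frame, decode,
                         choose_focus, make_pan2, make_signals, emit):
    """Sketch of the control loop of FIG. 14 (steps S100-S107).

    The loop repeats until the sound source stops supplying frames
    (reproduction completed). Returns the number of frames processed.
    """
    frames = 0
    while True:
        env = get_environment()              # S100: layout + listener position
        frame = next_audio_frame()           # S101: input the audio signal
        if frame is None:                    # S107: reproduction completed
            break
        levels, pan1 = decode(frame)         # S102: decode the audio signal
        focus = choose_focus(env)            # S103: determine the focal position
        pan2 = make_pan2(levels, pan1)       # S104: second surround pan parameter
        signals = make_signals(pan2, focus)  # S105-S106: modulation + control signals
        emit(signals)                        # output to the three speaker types
        frames += 1
    return frames
```

Re-evaluating the environment on every iteration is what lets the system track a moving listener, as the first embodiment describes.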
  • the audio system 1 can construct a surround environment according to the use environment SP, the position of the listener L, and the audio signal output from the sound source 23.
  • since the focal position and the number of focal points of the ultrasonic speaker 21 are variable, the listener L can hear a greater variety of sounds without being subject to the restrictions of the use environment SP (for example, the layout of the use environment SP, obstacles existing in the use environment SP, and the position of the listener L).
  • FIG. 18 is an explanatory diagram of an outline of the second embodiment.
  • the sound source 23 provides the audio controller 10 with an audio signal that is the source of an audible sound to be reproduced by the ultrasonic speaker 21.
  • the audio controller 10 inputs an audio signal and position information.
  • the audio controller 10 determines a focus position for focusing the ultrasonic wave emitted from the ultrasonic speaker 21 based on the position information.
  • the audio controller 10 generates a drive signal based on the determined focus position and outputs the drive signal to the ultrasonic speaker 21.
  • the ultrasonic speaker 21 emits an ultrasonic wave based on the drive signal generated by the audio controller 10.
  • the ultrasonic waves emitted from the ultrasonic speaker 21 are focused at the focal position determined by the audio controller 10.
  • the focused ultrasound forms an audible sound source at the focal position.
  • the audio controller 10 can form the focal point of the ultrasonic wave emitted from the ultrasonic speaker 21 at an arbitrary focal position. That is, the audio controller 10 can form an audible sound source at an arbitrary position.
• in a conventional ultrasonic speaker, the focal position is fixed in advance; therefore, the applications of such an ultrasonic speaker are limited.
  • the ultrasonic speaker 21 of the second embodiment can arbitrarily determine the focal position. Thereby, the ultrasonic speaker 21 can be used for a wider range of applications.
  • the audio controller 10 can move the sound source while reproducing the audible sound by changing the focus position while the ultrasonic speaker 21 is driven.
  • the sound source 23 executes step S200 as in the first embodiment.
  • the audio controller 10 executes steps S101 to S103 as in the first embodiment.
• the audio controller 10 executes step S105. Specifically, the processor 12 determines the modulation parameter based on the sound pressure level of the first frequency characteristic FS1 stored in the storage device 11.
• the audio controller 10 executes step S106. Specifically, as shown in FIG. 5, the processor 12 calculates the distance r(n) between the ultrasonic transducer 21c(n) and the focal point FP based on the coordinates (x(n), y(n), z(n)) of the ultrasonic transducer 21c(n) stored in the storage device 11 and the focal point coordinates (xfp, yfp, zfp). Using Equation 1, the processor 12 then calculates the drive timing difference ΔT(n+1) between the drive timing of the (n+1)-th ultrasonic transducer 21c(n+1) and that of the n-th ultrasonic transducer 21c(n).
• the processor 12 outputs a drive signal to each ultrasonic transducer 21c(n+1) in accordance with the drive time difference ΔT(n+1). Each ultrasonic transducer 21c is driven according to this drive signal.
• the ultrasonic waves emitted from the ultrasonic transducers 21c have phase differences corresponding to the drive time differences ΔT(n+1), and are thus focused at the focal point FP.
  • the focused ultrasound forms an audible sound source at the focal point FP.
  • the sound source formed at the focal point FP generates an audible sound.
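The per-transducer timing computation of step S106 can be sketched as follows. The exact form of Equation 1 is not reproduced in this excerpt; the path-length-difference-over-speed-of-sound form, the speed of sound value, and the example geometry below are assumptions for this sketch.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def drive_time_differences(transducers, focal_point):
    """Drive timing difference dT(n+1) between transducer 21c(n+1) and 21c(n).

    One plausible form of Equation 1: the difference of their distances r(n)
    to the focal point FP divided by the speed of sound, so that all emitted
    waves arrive at the focal point in phase."""
    r = [distance(t, focal_point) for t in transducers]
    return [(r[n] - r[n + 1]) / SPEED_OF_SOUND for n in range(len(r) - 1)]

# Example: three transducers on a line, focal point 0.5 m in front of the center.
array = [(-0.05, 0.0, 0.0), (0.0, 0.0, 0.0), (0.05, 0.0, 0.0)]
fp = (0.0, 0.0, 0.5)
deltas = drive_time_differences(array, fp)
```

With a symmetric array, the two timing differences are equal in magnitude and opposite in sign, since the center transducer is nearer to the focal point than either edge transducer.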
  • the sound source 23 repeatedly executes the process of step S200 until the reproduction is completed (S201-NO).
  • the audio controller 10 repeatedly executes the processes of steps S101 to S103 and S105 to S106 until the reproduction is completed (S107-NO).
• In step S103, the focal position is determined based on the position information acquired by the position detection unit 25 (that is, the position of the listener L who is to hear the audible sound).
• In step S105, drive signals are generated so that the ultrasonic transducers 21c are driven with the drive time differences ΔT, causing the ultrasonic waves to be focused at the focal position.
• Because steps S101 to S103 and S105 to S106 are repeated until the end of reproduction (S107), the audible sound can be reproduced while the focal position changes (that is, while the sound source of the audible sound moves).
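Because each loop iteration can use a new focal position, the audible sound source can be made to travel along a path during playback. A minimal sketch of generating such a path (the circular trajectory and step count are purely illustrative assumptions):

```python
import math

def focus_positions_circle(radius, steps):
    """Successive focal positions on a circle; supplying one per iteration
    of steps S101-S106 moves the audible sound source while reproduction
    continues. The circular path is only an illustrative assumption."""
    return [
        (radius * math.cos(2.0 * math.pi * k / steps),
         radius * math.sin(2.0 * math.pi * k / steps),
         0.0)
        for k in range(steps)
    ]

path = focus_positions_circle(radius=1.0, steps=8)
```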
  • the first example is an example in which the audio system 1 is disposed inside a vehicle (for example, a car, a railway, or an aircraft).
• the audio system 1 is disposed in a car, for example, in at least one place among the dashboard, a door, the vicinity of a car loudspeaker, and the ceiling.
  • the position detection unit 25 acquires position information indicating the position of the listener L who is a passenger in the vehicle (for example, at least one of the driver and the passenger).
• the position detection unit 25 is, for example, any of the following:
• Camera. In this case, the processor 12 specifies the position of the listener L by performing image analysis (for example, feature value analysis) on image data generated by the camera.
• Distance sensor disposed outside the vehicle. In this case, the processor 12 specifies the position of the listener L based on a signal generated by the distance sensor.
• Space recognition sensor. In this case, the processor 12 specifies the position of the listener L based on a signal generated by the space recognition sensor.
• Laser light sensor. In this case, the processor 12 specifies the position of the listener L based on a signal generated by the laser light sensor.
• Infotainment system. In this case, the processor 12 specifies the position of the listener L based on data generated by the infotainment system.
• On-vehicle AV (Audio Visual) device. In this case, the processor 12 specifies the position of the listener L based on data generated by the on-vehicle AV device.
• the sound source 23 generates an audio signal for reproducing any of the following sounds:
• Warning messages (for example, a voice message notifying that the traveling speed of the car exceeds the speed limit)
• Guidance messages (for example, a voice message for guiding a route to a destination)
• Messages related to traveling (for example, a voice message notifying the traveling speed or the remaining amount of fuel or battery)
• Voice and music of video content
• In step S103, the processor 12 determines the position of the listener L in the vehicle as the focal position. As a result, the listener L in the car can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the second example is an example in which the audio system 1 is disposed in a flight vehicle (for example, a drone).
  • the audio system 1 is disposed at the bottom of the drone.
• the position detection unit 25 acquires position information indicating the position of a listener L below the drone in flight (that is, on the ground of the drone's flight area), for example, a victim, a rescuer, or a suspicious person in a stricken area, or indicating a specific area on the ground of the drone's flight area. The position detection unit 25 is, for example, any of the following: a camera, a distance sensor, a space recognition sensor, a laser light sensor, a temperature sensor, the storage device 11 storing position information indicating a predetermined range, or the communication interface 14.
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Messages to support evacuation activities (for example, voice messages notifying victims of evacuation areas or dangerous areas)
• Messages to support rescue activities (for example, voice messages notifying rescuers of instructions on rescue activities)
• Warning sounds
• Operation sounds according to the operation of the listener L
• In step S103, the processor 12 determines the position of the listener L on the ground of the drone's flight area as the focal position. As a result, the listener L on the ground can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the third example is an example in which the audio system 1 is disposed indoors.
  • the audio system 1 is disposed in a room of a house.
  • the position detection unit 25 acquires position information indicating the position of the listener L in the room or a specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Microphone (for example, a directional microphone)
• Storage device 11 in which position information indicating a predetermined range is stored
• Communication interface 14
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Operation sounds according to the operation of the listener L
• Response messages to voice instructions of the listener L (as an example, a message generated by artificial intelligence)
• In step S103, the processor 12 determines, as the focal position, the position of the listener L in the room of the house or within the predetermined range from the audio system 1. As a result, the listener L in the room or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the fourth example is an example in which the audio system 1 is disposed on a monitor.
  • the position detection unit 25 acquires position information indicating the position of the listener L around the monitor or a specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Microphone (for example, a directional microphone)
• Communication interface 14
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Operation sounds according to the operation of the listener L
• Response messages to voice instructions of the listener L (as an example, a message generated by artificial intelligence)
• In step S103, the processor 12 determines, as the focal position, the position of the listener L located in the vicinity of the monitor or within the predetermined range from the audio system 1. As a result, the listener L around the monitor or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the fifth example is an example in which the audio system 1 is disposed in an infrastructure (for example, a traffic signal, a public toilet, a platform of a railway station, or a public road).
  • the position detection unit 25 acquires position information indicating the position of the listener L around the infrastructure or a specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Temperature sensor
• Storage device 11 in which position information indicating a predetermined range is stored
  • the listener L is a human or an animal (for example, a guide dog).
• the sound source 23 generates an audio signal for reproducing any of the following sounds:
• Warning messages to a human
• Messages to guide human action (for example, a route to a destination or an advertisement)
• Warning sounds to an animal
• In step S103, the processor 12 determines, as the focal position, the position of the listener L located around the infrastructure or within the predetermined range from the audio system 1. As a result, the listener L around the infrastructure or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the sixth example is an example in which the audio system 1 is arranged in an entertainment facility (for example, a facility in a theme park, a stage facility in an event hall, a speaker used in an event).
  • the position detection unit 25 acquires position information indicating a position of a listener L (a performer or an audience of entertainment content, or a suspicious person) who is in the vicinity of the entertainment facility, or a specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Temperature sensor
• Receiver of a signal transmitted from a wireless tag worn by a performer of the entertainment content
• Storage device 11 in which position information indicating a predetermined range is stored
• Communication interface 14
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Music and voice of video content
• Guidance messages (for example, a voice message for guiding a route to a destination)
• Guide sounds
• Warning sounds
• Instruction messages for the performer
• Sound sources played by a disc jockey
• In step S103, the processor 12 determines, as the focal position, the position of the listener L located in the vicinity of the entertainment facility or within the predetermined range from the audio system 1. As a result, the listener L around the entertainment facility or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the seventh example is an example in which the audio system 1 is disposed at a financial institution.
• the audio system 1 is disposed at a store of a financial institution (as an example, on a pillar), or on an apparatus installed in the store (an automated teller machine, a cash dispenser, or an automatic loan examination apparatus).
  • the position detection unit 25 acquires position information indicating the position of the listener L (for example, a customer or a suspicious person) present in the store, or a specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Distance sensor
• Storage device 11 in which position information indicating a predetermined range is stored
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Guidance messages of the devices installed in the store (for example, a voice message for preventing a customer from forgetting a cash card)
• Warning sounds
• Operation sounds according to the operation of the listener L
• In step S103, the processor 12 determines, as the focal position, the position of the listener L in the store of the financial institution or within the predetermined range from the audio system 1. As a result, the listener L in the store or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the eighth example is an example in which the audio system 1 is disposed in a care facility (for example, a nursing home).
• the audio system 1 is disposed in the care facility (for example, on a pillar or in the interior).
  • the position detection unit 25 acquires position information indicating the position of the listener L around the audio system 1 or a specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Communication interface 14
• Storage device 11 in which position information indicating a predetermined range is stored
• the sound source 23 generates an audio signal for reproducing any of the following sounds:
• Warning messages to a person in the care facility
• Warning sounds
• In step S103, the processor 12 determines, as the focal position, the position of the listener L located around the audio system 1 or within the predetermined range from the audio system 1. As a result, the listener L in the care facility or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • the ninth example is an example in which the audio system 1 is disposed in a restaurant (for example, a coffee shop).
  • the position detection unit 25 acquires position information indicating the position of the listener L in the restaurant or the specific range based on the audio system 1.
• the position detection unit 25 is, for example, any of the following:
• Camera
• Tag possessed by the customer
• Storage device 11 in which position information indicating a predetermined range is stored
• Communication interface 14
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Informational messages to the customer (for example, a message indicating that cooking of an ordered item has been completed)
• In step S103, the processor 12 determines, as the focal position, the position of the listener L in the restaurant or within the predetermined range from the audio system 1. As a result, the listener L in the restaurant or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
• the tenth example is an example in which the audio system 1 is disposed in a facility or device that provides VR (Virtual Reality) content (as an example, a head mounted display, a content reproduction apparatus, or a video game machine).
  • the position detection unit 25 acquires position information indicating the position of the listener L present in the facility providing the VR content or a specific range based on the audio system 1.
  • the position detection unit 25 is, for example, any of the following.
• Motion sensor
• Camera
• Storage device 11 in which position information indicating a predetermined range is stored
• Storage device 11 in which position information indicating a three-dimensional layout of the VR facility is stored
• Communication interface 14
  • the sound source 23 generates an audio signal for reproducing any of the following sounds.
• Voice of the VR content and guidance messages (for example, a voice message indicating that the end time of the VR content is near)
• In step S103, the processor 12 determines, as the focal position, the position of the listener L in the facility providing the VR content or within the predetermined range from the audio system 1. As a result, the listener L in the facility providing the VR content or within the predetermined range from the audio system 1 can hear an audible sound corresponding to the audio signal generated by the sound source 23.
  • FIG. 19 is an explanatory diagram of an outline of the third embodiment.
  • the sound source 23 provides an audio signal to the audio controller 10.
  • the audio controller 10 receives an audio signal from the sound source 23.
• the audio signal has a first frequency characteristic of the sound pressure level and includes a first surround panning parameter.
• the audio controller 10 performs equalization on the audio signal, converting the first frequency characteristic into a second frequency characteristic, and determines, based on the first frequency characteristic and the first surround panning parameter, the first sound pressure level of the ultrasonic speaker 21, the second sound pressure level of the loudspeaker 22, and the third sound pressure level of the woofer 26.
• the audio controller 10 generates a first speaker control signal (a drive signal) based on the first sound pressure level and the second frequency characteristic, and outputs it to the ultrasonic speaker 21.
  • the audio controller 10 generates a second speaker control signal based on the second sound pressure level and outputs it to the loudspeaker 22.
  • the audio controller 10 generates a third speaker control signal based on the third sound pressure level, and outputs the third speaker control signal to the woofer 26.
  • the ultrasonic speaker 21 emits an ultrasonic wave based on the first speaker control signal.
• the loudspeaker 22 emits a sound wave based on the second speaker control signal.
  • the woofer 26 emits an acoustic wave based on the third speaker control signal. Thereby, a surround environment corresponding to the audio signal is constructed.
  • the ultrasonic wave is emitted based on the drive signal generated by the audio controller 10.
  • the ultrasonic waves emitted from the ultrasonic speaker 21 are focused at a focal point in space.
  • the focused ultrasound waves form an audible sound source.
  • This audible sound has a second frequency characteristic.
  • the listener L1 located near the focal point can hear audible sound from the sound source formed at the focal point.
• the listener L2 located away from the focal point cannot hear this audible sound. That is, the audible sound from the sound source formed by the ultrasound does not leak beyond the vicinity of the focal point.
• conventionally, in order to prevent sound from being heard by the listener L2, the listener L1 needs to wear an audio output device (for example, an earphone or headphones) having a structure for preventing sound leakage.
  • the act of wearing such an audio output device can give the user an unnatural feeling.
  • the act of wearing such an audio output device for a long time gives the user a feeling of fatigue.
• in contrast, the listener L1 can hear an audible sound that is not heard by the listener L2 without wearing such an audio output device.
  • the listener L1 can listen to the sound of the sound source 23 in a natural state without wearing an audio output device.
  • the listener L1 can avoid fatigue due to the attachment of the audio output device.
  • FIG. 20 is a flowchart of control processing of the audio system according to the third embodiment.
  • FIG. 21 is a diagram showing an example of a screen displayed in the process of FIG.
• FIG. 22 is an explanatory diagram of a first example of the equalizing (S111) in FIG. 20.
• FIG. 23 is an explanatory diagram of a second example and a third example of the equalizing (S111) in FIG. 20.
• FIG. 24 is an explanatory diagram of a fourth example of the equalizing (S111) in FIG. 20.
  • the sound source 23 executes step S200 as in the first embodiment.
  • the audio controller 10 executes steps S101 to S102 as in the first embodiment.
• after step S102, the audio controller 10 executes determination of the focal position (S110). Specifically, the processor 12 causes the display 27 to display the screen P110 (FIG. 21).
  • the screen P110 includes a display object A110 and an operation object B110.
• the display object A110 displays the image objects IMG110a and IMG110b.
• the image object IMG110a is a symbol image of the ultrasonic speaker 21.
• the image object IMG110b is a symbol image of the focal point FP.
  • the operation object B110 is an object that receives a user's instruction to move the position of the focal point FP.
• the image object IMG110b moves within the display object A110 in response to the user's operation on the operation object B110.
  • the audio controller 10 determines the position of the focal point FP based on the relative position of the image object IMG110b with respect to the image object IMG110a.
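The mapping from the on-screen offset of IMG110b relative to IMG110a to a physical focal position can be sketched as follows. The linear pixel-to-metre scaling and the 2-D coordinate convention are assumptions; the patent does not specify this mapping.

```python
def focal_point_from_screen(speaker_px, focus_px, metres_per_px):
    """Convert the on-screen offset of the focus symbol IMG110b relative to
    the speaker symbol IMG110a into a focal position in metres in front of
    the ultrasonic speaker 21. The linear scaling is an assumption."""
    dx = (focus_px[0] - speaker_px[0]) * metres_per_px
    dy = (focus_px[1] - speaker_px[1]) * metres_per_px
    return (dx, dy)

fp = focal_point_from_screen(speaker_px=(100, 100), focus_px=(150, 80),
                             metres_per_px=0.01)
```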
• next, the audio controller 10 performs equalization (S111). Specifically, the processor 12 converts the first frequency characteristic stored in the storage device 11 in step S102 into a second frequency characteristic.
• A first example of step S111 will be described.
  • the storage device 11 stores an output frequency characteristic (FIG. 22A) related to the output level of the ultrasonic speaker 21.
  • the processor 12 refers to the output frequency characteristic stored in the storage device 11 and specifies a frequency (hereinafter referred to as “frequency threshold”) fth1 corresponding to an inflection point at which the tendency of the output level changes.
• the processor 12 obtains the second frequency characteristic FS2 (FIG. 22B) by applying an equalizing coefficient to the first frequency characteristic FS1 stored in the storage device 11. More specifically, for frequency components at or below the frequency threshold fth1 (hereinafter referred to as "low frequency components"), the processor 12 determines an equalizing coefficient such that the sound pressure level becomes higher than that of the audio signal and is substantially even. Thereby, the second frequency characteristic FS2a for the low frequency components is obtained.
• for frequency components higher than the frequency threshold fth1 (hereinafter referred to as "high frequency components"), the processor 12 determines an equalizing coefficient such that the sound pressure level is lower than that of the audio signal and lower than the second frequency characteristic FS2a of the low frequency components. Thereby, the second frequency characteristic FS2b for the high frequency components is obtained.
• as a result of step S111, an audible sound in which low frequency components are emphasized is reproduced.
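The low-band boost and high-band attenuation of this first example can be sketched as follows. The discrete frequency bins, target levels, and attenuation amount are illustrative assumptions; the actual equalizing coefficients depend on the measured output frequency characteristic of the ultrasonic speaker 21.

```python
def equalize_low_boost(fs1, fth1, low_target, high_drop):
    """First example of step S111: raise the low band to a substantially
    even target level (FS2a) and attenuate the high band below both the
    original signal and the low-band target (FS2b).

    fs1: dict mapping frequency (Hz) -> sound pressure level (dB).
    low_target and high_drop are illustrative assumptions."""
    fs2 = {}
    for f, level in fs1.items():
        if f <= fth1:
            fs2[f] = max(level, low_target)                    # FS2a: boosted, flattened
        else:
            fs2[f] = min(level - high_drop, low_target - 1.0)  # FS2b: attenuated
    return fs2

fs1 = {100: 40.0, 300: 45.0, 1000: 60.0, 4000: 62.0}
fs2 = equalize_low_boost(fs1, fth1=500, low_target=55.0, high_drop=6.0)
```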
• A second example of step S111 will be described.
• the second example is an example in which the frequency characteristic of the audio signal shows a characteristic that the sound pressure level increases as the frequency increases (that is, an audio signal in which treble is strong).
  • the first frequency characteristic FS1 of the second example of step S111 has a tendency that the sound pressure level of the low frequency component is low and the sound pressure level of the high frequency component is high.
  • the processor 12 determines a median of the lower limit Fmin and the upper limit Fmax of the first frequency characteristic FS1 (that is, the median of the frequency range of the audio signal) as the frequency threshold fth2.
• for frequency components at or below the frequency threshold fth2 (hereinafter referred to as "low frequency components"), the processor 12 determines an equalizing coefficient such that the sound pressure level becomes higher than that of the audio signal, and, for frequency components above the frequency threshold fth2 (hereinafter referred to as "high frequency components"), such that the sound pressure level becomes lower than that of the audio signal, so that the sound pressure level is substantially equal throughout the frequency range of the audio signal.
  • a third example of step S111 will be described.
• the third example is an example in which the frequency characteristic of the audio signal shows a characteristic that the sound pressure level becomes lower as the frequency becomes higher (that is, an audio signal in which bass is strong).
• the first frequency characteristic FS1 of the third example of step S111 shows a characteristic in which the sound pressure level of the low frequency components is high (for example, equal to or higher than a predetermined sound pressure level threshold V) and the sound pressure level of the high frequency components is low.
• after determining the frequency threshold fth2 in the same manner as in the second example, the processor 12 determines an equalizing coefficient such that the sound pressure level of the low frequency components is maintained (that is, remains equal to or higher than the sound pressure level threshold V) and the sound pressure level of the high frequency components becomes higher than that of the audio signal. Thereby, the second frequency characteristic FS2 of FIG. 23B is obtained.
• A fourth example of step S111 will be described.
  • the fourth example is an example in which the equalizing coefficient is determined with reference to the peak value of the first frequency characteristic.
  • the processor 12 specifies the peak value SLp1 of the first frequency characteristic FS1.
  • the processor 12 determines the peak threshold Pth by subtracting a predetermined constant value (for example, 6 dB) from the peak value SLp1.
  • the processor 12 shifts the entire band of the frequency range of the first frequency characteristic FS1 in the direction in which the sound pressure level decreases so that the peak value of the second frequency characteristic FS2 becomes equal to the peak threshold Pth. Thereby, the second frequency characteristic FS2 of FIG. 24 is obtained.
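This fourth example amounts to a uniform downward shift of the whole band. A sketch follows; the 6 dB constant is taken from the text, while the example spectrum is an assumption.

```python
def equalize_peak_shift(fs1, margin_db=6.0):
    """Fourth example of step S111: derive the peak threshold Pth by
    subtracting a constant (6 dB per the text) from the peak of FS1, then
    shift the entire band down so the peak of FS2 equals Pth."""
    peak = max(fs1.values())
    pth = peak - margin_db
    shift = peak - pth  # the whole band moves down by this amount
    return {f: level - shift for f, level in fs1.items()}

fs1 = {100: 50.0, 1000: 62.0, 4000: 58.0}
fs2 = equalize_peak_shift(fs1)
```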
• next, the audio controller 10 executes determination of the modulation parameter (S112). Specifically, the processor 12 compares the sound pressure level of the first frequency characteristic FS1 stored in the storage device 11 with the sound pressure level of the second frequency characteristic FS2 obtained in step S111, and determines the modulation parameter.
• as an example, the processor 12 determines the modulation parameter so that the sound pressure level of the second frequency characteristic FS2 satisfies a predetermined condition (for example, that it is equal to or higher than a predetermined sound pressure threshold).
• after step S112, the audio controller 10 executes step S106 as in the first embodiment.
  • the sound source 23 repeatedly executes the process of step S200 until the reproduction is completed (S201-NO).
  • the audio controller 10 repeatedly executes the processes of steps S101 to S102, S110 to S112, and S106 until the reproduction is completed (S107-NO).
• in step S111, equalization is performed in consideration of the output frequency characteristic of the ultrasonic speaker 21, converting the first frequency characteristic of the audio signal output from the sound source 23 into the second frequency characteristic.
• as a result, in step S106 the ultrasonic speaker 21 emits an ultrasonic wave USW according to a drive signal based on the second frequency characteristic.
  • the emitted ultrasound USW is focused at the focal point FP.
  • the focused ultrasound USW forms an audible sound source. Thereby, distortion of the audible sound reproduced by the ultrasonic speaker 21 can be eliminated.
• as a result, a sound can be reproduced so that the listener L1 of FIG. 19 hears it while the listener L2 does not, without the listener L1 wearing an audio output device.
• (Modification 1) A first modification will be described. Modification 1 is an example in which the phased array FA has a curved surface shape.
  • the phased array FA of the first modification is formed on a curved array surface having a variable curvature.
• an actuator (for example, a variable arm) is connected to the array surface.
• the actuator is configured to change the curvature (that is, the curved shape) of the array surface.
• when the curvature of the array surface changes, the phase difference of the ultrasonic waves emitted from the phased array FA also changes.
  • the first speaker control signal generated in step S106 includes a drive signal for driving the actuator.
  • the actuator changes the curvature of the array surface based on the drive signal.
  • the same effect as the above embodiment can be obtained.
• in addition, because the radiation directions of the plurality of ultrasonic transducers 21c are directed toward the focal point, the sound pressure level of the ultrasound focused at the focal point can be raised.
  • Modification 2 is an example of specifying the reflectance of the reflection member RM using the ultrasonic speaker 21.
  • the second modification includes an ultrasonic sensor that detects a reflected wave of the ultrasonic wave emitted by the ultrasonic transducer 21c.
  • the processor 12 further specifies the position of the reflective member RM based on the layout information.
  • the processor 12 emits ultrasonic waves toward the specified position (that is, the reflection member RM) by driving the ultrasonic transducer 21c.
  • the ultrasonic waves are reflected by the reflection member RM.
  • the ultrasonic sensor detects a reflected wave from the reflecting member RM.
  • the processor 12 estimates the reflection direction and the reflectance of the ultrasonic wave by the reflective member RM based on the time from the emission of the ultrasonic wave to the detection of the reflected wave by the ultrasonic sensor.
  • step S103 the processor 12 determines the focal position based on the reflection direction of the reflecting member RM estimated in step S100 and the relative position of the listener L.
  • step S105 the processor 12 determines a modulation parameter (that is, the amplitude of the ultrasonic wave) based on the reflectance of the reflecting member RM estimated in step S100.
  • In this manner, the ultrasonic speaker 21 is also used as a sonar.
  • a surround environment suitable for the use environment SP can be constructed according to the combination of the position of the reflective member RM, the reflection direction, and the reflectance.
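The time-of-flight estimation of Modification 2 can be illustrated with a rough sketch: the echo delay gives the distance to the reflecting member RM, and the echo amplitude gives a crude reflectance estimate. The speed-of-sound constant, the 1/d spherical-spreading model, and the clamping to [0, 1] are assumptions made for illustration, not details taken from the specification:

```python
# Illustrative time-of-flight sketch for Modification 2. The speed of
# sound and the amplitude model below are assumptions, not values from
# the specification.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def distance_from_echo(round_trip_s):
    # The ultrasound travels to the reflector and back, so halve the path.
    return SPEED_OF_SOUND * round_trip_s / 2.0

def estimate_reflectance(emitted_amp, received_amp, distance_m):
    # Compensate spherical spreading (amplitude falls roughly as 1/d per
    # leg of the trip) before comparing amplitudes; clamp to [0, 1].
    expected_at_sensor = emitted_amp / (2.0 * distance_m) ** 2
    return min(1.0, received_amp / expected_at_sensor)

d = distance_from_echo(0.01)  # a 10 ms round trip puts RM about 1.7 m away
r = estimate_reflectance(1.0, 0.05, d)
```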
  • (Modification 3) A third modification will be described. Modification 3 is an example of constructing a surround environment using a plurality of ultrasonic speakers 21.
  • the audio system 1 of the modification 3 includes a plurality of ultrasonic speakers 21.
  • the processor 12 determines the focal positions and the sound pressure levels of the plurality of ultrasonic speakers 21 individually based on the relative positions of the ultrasonic speakers 21 and the listener L in steps S103 and S104.
  • Since the number of ultrasonic waves focused at the focal point increases compared to the first embodiment, the sound pressure of the audible sound from the sound source formed by the focused ultrasonic waves can be increased. As a result, more diverse surround environments can be constructed.
  • (Modification 4) A fourth modification will be described.
  • the fourth modification is an example in which the audible range is dynamically changed.
  • For example, when the processor 12 receives an instruction (for example, an operation for changing the volume) from the operator (for example, the listener L) of the audio system 1, the processor 12 changes, in response to the instruction, the amplitude or the degree of modulation of the ultrasonic wave emitted from the ultrasonic transducer 21c. In this case, the operator can arbitrarily change the audible range of the sound from the ultrasonic speaker 21.
  • the processor 12 changes the amplitude or the modulation degree of the ultrasonic wave emitted from the ultrasonic transducer 21 c in accordance with the position of the listener L detected by the position detection unit 25. For example, the processor 12 determines the amplitude or the degree of modulation such that the position of some of the plurality of listeners L is excluded from the audible range. In this case, only the specific listener L can hear the sound from the ultrasonic speaker 21.
  • the audio controller 10 includes a sensor (not shown) that detects the volume of the environmental sound.
  • the processor 12 determines the amplitude or the degree of modulation so that the audible range is uniformly maintained in accordance with the volume detected by the sensor. In this case, even if the environmental sound changes, the audible range can be maintained.
  • Also, the audio controller 10 determines the amplitude or the degree of modulation in accordance with the audio signal supplied from the sound source 23. For example, if the first surround pan parameter included in the audio signal indicates surround panning suitable for a wide audible range, the processor 12 determines the amplitude or the degree of modulation such that the audible range is extended. If the first surround pan parameter indicates surround panning suitable for a narrow audible range, the processor 12 determines the amplitude or the degree of modulation such that the audible range is narrowed. In this case, the audible range can be changed according to the sound to be reproduced.
  • In the fourth modification, the audible range can be dynamically changed according to factors external to the ultrasonic speaker 21.
  • The first aspect of the present embodiment is an audio controller 10 connectable to at least one ultrasonic speaker 21 and a sound source 23, comprising: a unit for inputting an audio signal from the sound source 23 (for example, the processor 12 that executes step S101); and control means (for example, the processor 12 that executes step S104) for controlling the focal position of the ultrasonic wave emitted by the ultrasonic speaker 21 based on the audio signal.
  • According to the first aspect, restrictions on the use environment of the audio system 1 can be removed.
  • Since the focal position can be determined arbitrarily, the listener L can hear the sound from the ultrasonic speaker 21 regardless of the position of the listener L.
  • In addition, a point sound source SS1 formed at at least one point and the beam-like sound sources SS3 and SSb (FIG. 11) can be selectively switched using one ultrasonic speaker 21.
  • In the audio controller 10 of the second aspect of the present embodiment, the audio signal includes a first surround pan parameter, and the control means generates a second surround pan parameter including the panning of the ultrasonic speaker 21 based on the first surround pan parameter and the frequency characteristic, and controls the focal position and the sound pressure level based on the second surround pan parameter.
  • The third aspect of the present embodiment is an audio controller 10 connectable to at least one ultrasonic speaker 21 having a plurality of ultrasonic transducers and to a sound source 23, comprising: a unit for inputting an audio signal from the sound source 23 (for example, the processor 12 that executes step S101); means for converting the first frequency characteristic, which relates to the relationship between the sound pressure level and the frequency of the audio signal, into the second frequency characteristic by reducing the sound pressure level of the frequency components higher than a predetermined frequency threshold (for example, the processor 12 executing step S111); and control means (for example, the processor 12 performing step S106) for generating, based on the second frequency characteristic, at least one drive signal for individually controlling each ultrasonic transducer, and for outputting a control signal to each ultrasonic transducer so that each ultrasonic transducer emits an ultrasonic wave having a phase difference focused at at least one focal position.
  • the ultrasonic waves emitted from the ultrasonic speaker 21 are focused at the spatial focal point FP.
  • the ultrasound focused at the focal point FP forms an audible sound source.
  • This audible sound has a second frequency characteristic.
  • the listener L1 located near the focal point can hear audible sound from the sound source formed at the focal point.
  • On the other hand, the listener L2 located away from the focal point cannot hear this audible sound. That is, the audible sound from the sound source formed by the ultrasound does not leak beyond the vicinity of the focal point.
  • the listener L1 can listen to the audible sound so as not to be heard by the listener L2 without wearing the audio output device.
  • the control means of the fourth aspect of the present embodiment controls at least one of the focal position and the focal number using the phase difference of the ultrasonic waves emitted by the ultrasonic speaker.
  • the focal position and the focal number can be dynamically changed.
  • a new user experience can be provided to the listener L.
  • the control means of the fifth aspect of the present embodiment determines the drive timings of the plurality of ultrasonic transducers 21 c of the ultrasonic speaker 21 individually, and outputs control signals to the respective ultrasonic transducers in accordance with the drive timings.
  • the focal position can be controlled at higher speed.
  • the focal position can be easily and quickly changed without physically driving the ultrasonic speaker 21.
  • the control means of the sixth aspect of the present embodiment determines the driving time difference of each ultrasonic transducer based on the focal point coordinates of the focal position and the coordinate indicating the position of each ultrasonic transducer.
  • the focal position can be easily and quickly changed without physically driving the ultrasonic speaker 21.
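The drive-time-difference computation of the sixth aspect can be sketched as follows: each transducer's distance to the focal point determines its firing delay, with the farthest transducer firing first so that all wavefronts arrive at the focal point in phase. The array geometry and the speed-of-sound constant below are illustrative assumptions, not values from the specification:

```python
# Hedged sketch of per-transducer drive delays for focusing a phased
# array: transducers closer to the focal point are delayed so that all
# emissions arrive at the focal point simultaneously (in phase).
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def drive_delays(transducer_positions, focal_point):
    """Per-transducer delays (s): the transducer farthest from the focal
    point fires at t = 0; nearer ones are delayed so arrivals coincide."""
    dists = [math.dist(p, focal_point) for p in transducer_positions]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# Hypothetical 4-element line array with 10 mm pitch, focused 0.3 m away
array = [(x * 0.01, 0.0, 0.0) for x in range(4)]
delays = drive_delays(array, focal_point=(0.015, 0.0, 0.3))
```

Changing only `focal_point` moves the focus without physically moving the speaker, which is the point of the sixth aspect.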
  • The seventh aspect of the present embodiment is an audio controller comprising means for determining modulation parameters for AM modulation or PM modulation of the ultrasound based on the audio signal and the focal position (for example, the processor 12 performing step S105).
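AM modulation of the kind the seventh aspect refers to can be sketched as below: the audio signal shapes the amplitude envelope of an ultrasonic carrier. The 40 kHz carrier frequency, the 0.8 modulation depth, and the 192 kHz sample rate are illustrative assumptions, not values taken from the specification:

```python
# Hedged sketch of AM modulation: an ultrasonic carrier is amplitude-
# modulated by the audio signal. Carrier frequency, depth, and sample
# rate are illustrative assumptions.
import math

def am_modulate(audio_samples, sample_rate, carrier_hz=40000.0, depth=0.8):
    """Amplitude-modulate an ultrasonic carrier with an audio signal
    (audio samples assumed normalized to [-1, 1])."""
    out = []
    for n, sample in enumerate(audio_samples):
        t = n / sample_rate
        carrier = math.sin(2.0 * math.pi * carrier_hz * t)
        out.append((1.0 + depth * sample) * carrier)
    return out

# A 1 kHz tone sampled at 192 kHz, modulated onto the carrier
fs = 192000
tone = [math.sin(2.0 * math.pi * 1000.0 * n / fs) for n in range(64)]
modulated = am_modulate(tone, fs)
```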
  • the control means of the eighth aspect of the present embodiment controls the focal position based on the relative position of the listener L to the ultrasonic speaker 21.
  • According to the eighth aspect, it is possible to construct a surround environment according to the position of the listener L.
  • The surround environment can continue to be constructed even if the listener L moves, so that the restriction on the position of the listener L during playback can be removed.
  • the sound source can be moved.
  • the audio controller 10 includes means (e.g., the processor 12 that executes step S100) for acquiring use environment information on the use environment SP of the ultrasonic speaker 21.
  • the control means controls the focus position with further reference to the use environment information.
  • According to the ninth aspect, it is possible to construct a surround environment according to the use environment SP of the ultrasonic speaker 21.
  • the use environment information of the tenth aspect of the present embodiment includes at least one of layout information indicating a layout of the use environment SP of the ultrasonic speaker 21 and image information of the use environment SP.
  • The audio controller 10 of the eleventh aspect of the present embodiment comprises means (for example, the processor 12) for estimating the reflectance of the reflecting member RM present in the use environment SP of the ultrasonic speaker 21 based on the reflected wave of the ultrasonic wave emitted by the ultrasonic speaker 21, and the control means controls the focal position with further reference to the reflectance.
  • According to the eleventh aspect, it is possible to construct a surround environment according to the position, the shape, and the reflectance of the reflection member RM in the use environment SP of the ultrasonic speaker 21.
  • The audio controller 10 of the twelfth aspect of the present embodiment is further connectable to at least one speaker (for example, at least one of the loudspeaker 22 and the woofer 26), and comprises means for determining a first sound pressure level of the ultrasonic speaker 21 and a second sound pressure level of the speaker based on the audio signal (for example, the processor 12 executing step S104), wherein the control means generates a first speaker control signal based on the first sound pressure level and a second speaker control signal based on the second sound pressure level, and means are provided for outputting the first speaker control signal to the ultrasonic speaker 21 and for outputting the second speaker control signal to the speaker.
  • The determination means of the thirteenth aspect of the present embodiment determines the sound pressure levels such that, the higher the frequency band in the frequency characteristic, the higher the first sound pressure level is relative to the second sound pressure level.
  • According to the thirteenth aspect, it is possible to construct a surround environment according to the combination of the output characteristics of the ultrasonic speaker 21 and of the speakers other than the ultrasonic speaker 21 (for example, the loudspeaker 22 and the woofer 26).
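The band split of the thirteenth aspect can be illustrated with a toy crossover: for each frequency band, the higher the band, the more of the level is assigned to the ultrasonic speaker rather than to the other speakers. The 2 kHz crossover frequency and the fixed 12 dB offset are hypothetical choices for illustration, not values from the specification:

```python
# Toy crossover sketch: route high bands to the ultrasonic speaker 21
# and low bands to the loudspeaker 22 / woofer 26. The crossover
# frequency and the 12 dB offset are illustrative assumptions.

def split_levels(band_center_hz, total_level_db, crossover_hz=2000.0):
    """Return (ultrasonic_level_db, speaker_level_db) for one band:
    above the crossover the ultrasonic speaker carries the band."""
    if band_center_hz >= crossover_hz:
        return total_level_db, total_level_db - 12.0
    return total_level_db - 12.0, total_level_db

levels = {band: split_levels(band, 80.0)
          for band in (250.0, 1000.0, 4000.0, 8000.0)}
```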
  • The control means of the fourteenth aspect of the present embodiment operates in either a first operation mode for forming a point sound source at at least one focal position or a second operation mode for forming a beam-like sound source in the radiation direction of the ultrasonic waves.
  • In the fifteenth aspect of the present embodiment, the control means focuses the ultrasound, in the first operation mode, at a first focal point located at a first distance from the ultrasonic transducer, and, in the second operation mode, at a second focal point located at a second distance from the ultrasonic transducer that is longer than the first distance.
  • the control means of the sixteenth aspect of the present embodiment determines the focal position while the ultrasonic speaker 21 is driven.
  • the position of the sound source formed on the space can be changed during the reproduction of the sound.
  • The seventeenth aspect of the present embodiment is an ultrasonic speaker 21 connectable to the audio controller 10, comprising a plurality of ultrasonic transducers 21c and a drive unit for individually driving the plurality of ultrasonic transducers 21c according to the control of the audio controller 10.
  • The eighteenth aspect of the present embodiment is an audio system 1 including the above audio controller 10 and an ultrasonic speaker 21 including a plurality of ultrasonic transducers 21c, wherein the ultrasonic speaker 21 includes a drive unit that individually drives the plurality of ultrasonic transducers 21c according to the control of the audio controller 10.
  • the storage device 11 may be connected to the audio controller 10 via the network NW.
  • the speaker component (the combination of the ultrasonic speaker 21, the loudspeaker 22, and the woofer 26) in FIG. 2 is an example. This embodiment is also applicable to the following speaker components.
  • The ultrasonic speaker 21 alone (that is, a speaker component not including speakers other than the ultrasonic speaker 21, such as the loudspeaker 22 and the woofer 26 in FIG. 2).
  • the camera 24 may detect the relative position of the listener L instead of the position detection unit 25. For example, the camera 24 acquires image information of the listener L.
  • the processor 12 applies feature amount analysis based on human feature amounts to the image information acquired by the camera 24. Thereby, the position (the position in the image space) of the listener L in the image information is specified.
  • the processor 12 specifies the relative position by generating three-dimensional coordinates indicating the relative position of the listener L with respect to the ultrasonic speaker 21 based on the specified position in the image space.
  • the ultrasonic speaker 21 may detect the relative position of the listener L instead of the position detection unit 25.
  • an ultrasonic sensor that detects the reflected wave of the ultrasonic wave emitted by the ultrasonic transducer 21c is provided.
  • the processor 12 emits ultrasonic waves by driving the ultrasonic transducers 21c.
  • the emitted ultrasonic waves are reflected by the listener L.
  • the ultrasonic sensor detects a reflected wave from the listener L.
  • the processor 12 estimates the relative position of the listener L based on the time from when the ultrasonic wave is emitted to when the reflected wave is detected by the ultrasonic sensor.
  • 1: audio system, 10: audio controller, 11: storage device, 12: processor, 13: input/output interface, 14: communication interface, 21: ultrasonic speaker, 22: loudspeaker, 23: sound source, 24: camera, 25: position detector, 26: woofer, 27: display

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to an audio controller connectable to an ultrasonic speaker (at least one ultrasonic speaker comprising a plurality of ultrasonic transducers) and to a sound source. The audio controller comprises: means for inputting an audio signal from the sound source; and control means that generates, based on the audio signal, a control signal for controlling each of the ultrasonic transducers individually, and supplies the control signal to each of the ultrasonic transducers such that each ultrasonic transducer radiates an ultrasonic wave having a phase difference so that the ultrasonic waves converge on at least one focal position.
PCT/JP2018/035366 2017-10-03 2018-09-25 Dispositif de commande audio, haut-parleur à ultrasons et système audio WO2019069743A1 (fr)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
JP2017193330A JP6329679B1 (ja) 2017-10-03 2017-10-03 Audio controller, ultrasonic speaker, audio system, and program
JP2017193792A JP7095857B2 (ja) 2017-10-03 2017-10-03 Acoustic system, acoustic processing method, and program
JP2017-193792 2017-10-03
JP2017-193373 2017-10-03
JP2017193373A JP6330098B1 (ja) 2017-10-03 2017-10-03 Audio controller, program, ultrasonic speaker, sound source device
JP2017-193330 2017-10-03
JP2018-081202 2018-04-20
JP2018081202A JP7095863B2 (ja) 2018-04-20 2018-04-20 Acoustic system, acoustic processing method, and program
JP2018082010A JP2019068396A (ja) 2018-04-23 2018-04-23 Audio controller, program, ultrasonic speaker, sound source device
JP2018-082010 2018-04-23

Publications (1)

Publication Number Publication Date
WO2019069743A1 true WO2019069743A1 (fr) 2019-04-11

Family

ID=65994667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/035366 WO2019069743A1 (fr) 2017-10-03 2018-09-25 Dispositif de commande audio, haut-parleur à ultrasons et système audio

Country Status (1)

Country Link
WO (1) WO2019069743A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004112211A (ja) * 2002-09-17 2004-04-08 Mitsubishi Electric Engineering Co Ltd 超指向性スピーカー
JP2012029096A (ja) * 2010-07-23 2012-02-09 Nec Casio Mobile Communications Ltd 音声出力装置
JP2015056905A (ja) * 2013-09-13 2015-03-23 ソニー株式会社 音声の到達性
WO2017135194A1 (fr) * 2016-02-05 2017-08-10 株式会社ソニー・インタラクティブエンタテインメント Dispositif de traitement d'informations, système de traitement d'informations, procédé de commande et programme


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GAN, WOON-SENG ET AL.: "A Digital Beamsteerer for Difference Frequency in a Parametric Array", IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, vol. 14, no. 3, May 2006 (2006-05-01), pages 1018 - 1025, XP055589682, ISSN: 1558-7916, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1621214> [retrieved on 20181218], DOI: 10.1109/TSA.2005.857786 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020246136A1 (fr) * 2019-06-05 2020-12-10 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations, et programme
WO2021002162A1 (fr) * 2019-07-01 2021-01-07 ピクシーダストテクノロジーズ株式会社 Dispositif de commande audio, programme, haut-parleur directionnel et procédé de commande de haut-parleur directionnel
WO2021024692A1 (fr) * 2019-08-05 2021-02-11 ピクシーダストテクノロジーズ株式会社 Dispositif de commande audio, système audio, programme, et procédé de commande audio
JPWO2021038613A1 (fr) * 2019-08-23 2021-03-04
WO2021038613A1 (fr) * 2019-08-23 2021-03-04 三菱電機株式会社 Système de dispositif électrique, appareil de diffusion sonore, dispositif électrique, procédé de diffusion sonore et programme
JP7278390B2 (ja) 2019-08-23 2023-05-19 三菱電機株式会社 電気機器システム、音出力装置、電気機器、音出力方法およびプログラム

Similar Documents

Publication Publication Date Title
US11617050B2 (en) Systems and methods for sound source virtualization
JP6904963B2 (ja) 拡張現実システムにおいてオーディオを指向させるための技法
US5764777A (en) Four dimensional acoustical audio system
WO2019069743A1 (fr) Dispositif de commande audio, haut-parleur à ultrasons et système audio
JP6101989B2 (ja) 拡張現実環境における信号増強ビーム形成
Shi et al. Development of parametric loudspeaker
US9913054B2 (en) System and method for mapping and displaying audio source locations
JP7271695B2 (ja) ハイブリッドスピーカ及びコンバータ
EP2737727B1 (fr) Procédé et appareil conçus pour le traitement d&#39;un signal audio
CN107360494A (zh) 一种3d音效处理方法、装置、系统及音响系统
US10299064B2 (en) Surround sound techniques for highly-directional speakers
WO2021067183A1 (fr) Systèmes et procédés de visualisation de source sonore
JP6329679B1 (ja) オーディオコントローラ、超音波スピーカ、オーディオシステム、及びプログラム
EP3474576B1 (fr) Contrôle acoustique actif pour les sons de champ proche et lointain
TW202215419A (zh) 在開放現場中主動噪聲消除的系統和方法
JP7095863B2 (ja) 音響システム、音響処理方法、及びプログラム
CN116405840A (zh) 用于任意声音方向呈现的扩音器系统
CN1188586A (zh) 产生三维声象的声频系统
JP7095857B2 (ja) 音響システム、音響処理方法、及びプログラム
JP6330098B1 (ja) オーディオコントローラ、プログラム、超音波スピーカ、音源装置
WO2020004460A1 (fr) Dispositif de commande à ultrasons, haut-parleur à ultrasons, et programme
WO2021024692A1 (fr) Dispositif de commande audio, système audio, programme, et procédé de commande audio
CN115604647B (zh) 一种超声波感知全景的方法及装置
JP2019068396A (ja) オーディオコントローラ、プログラム、超音波スピーカ、音源装置
JP2021180470A (ja) 指向性スピーカ、音響システム、及び、プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18865093

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/07/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18865093

Country of ref document: EP

Kind code of ref document: A1