GB2589720A - Information processing apparatus, program, and information processing system - Google Patents


Info

Publication number
GB2589720A
GB2589720A (Application GB2015093.4A)
Authority
GB
United Kingdom
Prior art keywords
speaker
target person
sound
distance
volume
Prior art date
Legal status
Withdrawn
Application number
GB2015093.4A
Other versions
GB202015093D0 (en)
GB2589720A8 (en)
Inventor
Matsushima Koji
Okamoto Yoshihiro
Watanabe Yuki
Shimokawa Hajime
Current Assignee
Fujitsu Client Computing Ltd
Original Assignee
Fujitsu Client Computing Ltd
Priority date
Filing date
Publication date
Application filed by Fujitsu Client Computing Ltd
Publication of GB202015093D0
Publication of GB2589720A
Publication of GB2589720A8
Legal status: Withdrawn


Classifications

    • H03G3/00 Gain control in amplifiers or frequency changers
    • H03G3/3005 Automatic control in amplifiers having semiconductor devices in amplifiers suitable for low-frequencies, e.g. audio amplifiers
    • H03G3/32 Automatic control in amplifiers having semiconductor devices, the control being dependent upon ambient noise level or sound level
    • H04R1/345 Arrangements for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means, for loudspeakers
    • H04R27/00 Public address systems
    • H04R29/007 Monitoring arrangements; Testing arrangements for public address systems
    • H03G2201/70 Gain control characterized by the gain control parameter
    • H04R2217/03 Parametric transducers where sound is generated or captured by the acoustic demodulation of amplitude modulated ultrasonic waves
    • H04R2227/001 Adaptation of signal processing in PA systems in dependence of presence of noise
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention provides a system that allows sound to be supplied at a controlled level to selected people. The system calculates the distance between a loudspeaker and a target person at whom the speaker is pointing, based on a position of the loudspeaker and a position of the target person. The system specifies a sound volume level corresponding to the calculated distance based on information indicating correspondence between distances and volumes. The system then causes the loudspeaker to produce sound at the specified volume. The position of a target person may be detected from images and mapped into a 3D space. The loudspeaker may be a directional, parametric loudspeaker that emits ultrasound. It may be rotated to point at a position above the feet of a target person. The sound pressure level at the target person may also be controlled to be a predetermined value higher than an ambient noise level. The noise level may be determined based on a signal from a microphone within a predetermined range of the target person.

Description

INFORMATION PROCESSING APPARATUS, PROGRAM, AND INFORMATION PROCESSING SYSTEM
FIELD
The embodiments discussed herein are related to an information processing apparatus, a program, and an information processing system.
BACKGROUND
There are situations where it is desirable to provide sound to only specified people out of a plurality of people at a facility. This is sometimes achieved by outputting sound using a parametric speaker pointed at the intended listener. A parametric speaker is capable of outputting sound with narrow directionality using ultrasound, for example.
As one example of a technology relating to parametric speakers, an audio communication apparatus that controls the frequency of ultrasound outputted from a parametric speaker based on the angle of the ear position of the user has been proposed.
See, for example, Japanese Laid-open Patent Publication No. 2011-55076.
A directional speaker such as a parametric speaker is capable of providing sound to people far away. To provide sound to positions far away, the volume of the speaker may be set at a high value (as one example, at maximum volume). However, since the sound pressure level becomes higher closer to the sound source, when a speaker outputs sound at maximum volume, the sound will feel very loud to anyone near the speaker.
SUMMARY
Accordingly, it is desirable to appropriately set the volume of a speaker.
According to an embodiment of an aspect, there is provided an information processing apparatus including: a memory; and a processor coupled to the memory and configured to execute a process including: calculating a distance between a speaker and a target person who the speaker is pointing at, based on a position of the speaker and a position of the target person, specifying a volume corresponding to the calculated distance based on correspondence information indicating correspondence between distances and volumes, and instructing the speaker to output sound at the specified volume.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 depicts one example of an information processing system according to a first embodiment; FIG. 2 depicts an example system configuration of a second embodiment; FIG. 3 depicts example hardware of a host computer; FIG. 4 depicts example hardware configurations of a controller and a rotation control device; FIG. 5 depicts example relationships between a listening position relative to a speaker and sound pressure level; FIG. 6 is a block diagram depicting example functions of the host computer and the rotation control device; FIG. 7 depicts an example of correspondence information; FIG. 8 depicts one example method of mapping onto a 3D space; FIG. 9 depicts an example method for controlling volume; FIG. 10 is a flowchart depicting an example procedure of sound reproduction; FIG. 11 depicts one example system configuration of a third embodiment; FIG. 12 depicts another example configuration of hardware of a rotation control device; FIG. 13 is a block diagram depicting other example functions of a host computer and the rotation control device; FIG. 14 depicts one example of selection information; FIG. 15 depicts another example of a method for controlling volume; and FIG. 16 is a flowchart depicting one example of a procedure for sound reproduction.
DESCRIPTION OF EMBODIMENTS
Several embodiments will be described below, by way of example only, with reference to the accompanying drawings. Note that when feasible, it is possible to implement a plurality of the following embodiments in combination.
First Embodiment
A first embodiment will be described first. FIG. 1 depicts one example of an information processing system according to the first embodiment. In the example in FIG. 1, an information processing apparatus 10 sets a volume of a speaker 1 based on the distance between the speaker 1 and the target person 2. The information processing apparatus 10 executes a volume setting process by executing a program in which a processing procedure for setting the volume is written.
The information processing apparatus 10 is connected to the speaker 1. The speaker 1 is a parametric speaker that emits sound using ultrasound, for example. The target person 2 is the person to whom the sound is delivered by the speaker 1. The speaker 1 is pointed at the target person 2.
The information processing apparatus 10 includes a storage unit 11 and a processing unit 12. As one example, the storage unit 11 is a memory or storage device of the information processing apparatus 10. Likewise, as one example, the processing unit 12 is a processor or computational circuit of the information processing apparatus 10.
The storage unit 11 stores correspondence information 11a indicating correspondence between distances and volumes. The correspondence information 11a is associated with a predetermined noise level. The predetermined noise level is an average noise level for a location where the speaker 1 has been installed, for example. The correspondence information 11a indicates correspondence between distances and volumes at which the sound pressure level of sound outputted by the speaker 1 at a point the corresponding distance away is a predetermined value (for example, 5 decibels (dB)) higher than the predetermined noise level.
As one example, in the correspondence information 11a, a volume of 80 percent is associated with a distance of 1 meter (m). As other examples of the correspondence information 11a, a volume of 94 percent is associated with a distance of 5 meters and a volume of 100 percent is associated with a distance of 7 meters.
The processing unit 12 acquires a speaker position 3 indicating the position of the speaker 1 and a target person position 4 indicating the position of the target person 2. As one example, the speaker position 3 and the target person position 4 are coordinate positions in a three-dimensional (3D) coordinate system. The processing unit 12 calculates the distance between the speaker 1 and the target person 2 based on the speaker position 3 and the target person position 4. As one example, the distance between the speaker 1 and the target person 2 is the distance between the speaker position 3 and the target person position 4 in the 3D coordinate system. It is assumed here that as one example, the distance between the speaker position 3 and the target person position 4 is 5.1 meters.
The processing unit 12 specifies the volume corresponding to the calculated distance based on the correspondence information 11a. As one example, the processing unit 12 specifies the distance "5 meters" that is closest to the calculated distance of 5.1 meters out of the distances registered in the correspondence information 11a. The processing unit 12 then specifies the volume of 94 percent corresponding to the distance of 5 meters in the correspondence information 11a.
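The lookup described above amounts to a nearest-distance search in the correspondence information 11a. The following sketch (in Python, using only the example values quoted above) shows one way such a search might look; the names and structure are assumptions, not code from the patent.

    # Illustrative sketch of the volume lookup described above. The table values
    # (1 m -> 80 %, 5 m -> 94 %, 7 m -> 100 %) are the examples from the text;
    # everything else is assumed for illustration.
    CORRESPONDENCE_11A = {1.0: 80, 5.0: 94, 7.0: 100}  # distance (m) -> volume (%)

    def specify_volume(distance_m, table):
        """Return the volume registered for the distance closest to distance_m."""
        closest = min(table, key=lambda d: abs(d - distance_m))
        return table[closest]

    # A calculated distance of 5.1 m is closest to the registered 5 m entry,
    # so the specified volume is 94 percent.
    print(specify_volume(5.1, CORRESPONDENCE_11A))  # -> 94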
The processing unit 12 instructs the speaker 1 to output sound at the specified volume. As one example, the processing unit 12 sets the volume of the speaker 1 at 94 percent and instructs the speaker 1 to reproduce sound. In response, the speaker 1 outputs sound to the target person 2 at a volume of 94 percent.
When the speaker 1 outputs sound at the volume set in this way, the sound pressure level of the outputted sound at the position of the target person 2 will have a value that exceeds the predetermined noise level by a predetermined value. A sound pressure level that is easy for humans to hear is a slightly higher value than the ambient noise level (as one example, a value that is around 5 decibels higher). This means that by setting the ambient noise level as the "predetermined noise level" and 5 decibels as the "predetermined value", it is possible to set the volume of the speaker 1 at a volume that is easy to hear for the target person 2.
According to the information processing apparatus 10 described above, the distance between the speaker 1 and the target person 2 is calculated based on the position of the speaker 1 and the position of the target person 2 the speaker 1 is pointing at. The volume corresponding to the distance between the speaker 1 and the target person 2 is then specified based on the correspondence information 11a, and the speaker 1 is instructed to output sound at the specified volume. By doing so, the volume of the speaker 1 is set appropriately.
The correspondence information 11a indicates the correspondence between distances and volumes at which the sound pressure level of sound outputted by the speaker 1 at a position separated by the corresponding distance is the predetermined value higher than the predetermined noise level. This means that the information processing apparatus 10 is able to cause the speaker 1 to output sound that is easily heard by the target person 2.
Although the correspondence information 11a is used in the example described above, it is also possible to select which correspondence information is to be used out of a plurality of sets of correspondence information that correspond to different noise levels. As one example, the processing unit 12 acquires a noise level that has been determined based on sound acquired from a microphone within a predetermined range which includes the target person position 4. The processing unit 12 may then specify the volume corresponding to the distance between the speaker 1 and the target person 2 based on the correspondence information that corresponds to the acquired noise level, out of the plurality of sets of correspondence information. By doing so, the information processing apparatus 10 sets the volume of the speaker 1 according to the ambient noise level.
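As a rough illustration of this selection step, the sketch below keeps one correspondence table per noise level, picks the table whose noise level is closest to the measured one, and then performs the same nearest-distance lookup. The 65 dB table reuses the example values above; the 40 dB table and all names are assumptions.

    # Hypothetical selection among several correspondence tables by noise level.
    TABLES_BY_NOISE_DB = {
        40: {1.0: 60, 5.0: 74, 7.0: 80},   # placeholder volumes for a quieter room
        65: {1.0: 80, 5.0: 94, 7.0: 100},  # example values from the text
    }

    def volume_for(distance_m, noise_db):
        # Select the correspondence information associated with the closest noise level.
        noise_key = min(TABLES_BY_NOISE_DB, key=lambda n: abs(n - noise_db))
        table = TABLES_BY_NOISE_DB[noise_key]
        # Then specify the volume for the closest registered distance.
        closest = min(table, key=lambda d: abs(d - distance_m))
        return table[closest]

    print(volume_for(5.1, 63))  # uses the 65 dB table -> 94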
The processing unit 12 may calculate, as the distance between the speaker 1 and the target person 2, a distance between the position of the speaker 1 and a position a predetermined distance higher than the position of the foot of the target person 2. By doing so, the information processing apparatus 10 calculates the distance between the speaker 1 and the ear position of the target person 2.
The speaker 1 may be a parametric speaker. By using a parametric speaker, it is possible for the information processing apparatus 10 to deliver sound to the target person 2 with an appropriate volume via the speaker 1 even when the distance between the speaker 1 and the target person 2 is large.
Second Embodiment
Next, a second embodiment will be described. In the second embodiment, the volume of a speaker is controlled according to the distance between a target person and the speaker.
FIG. 2 depicts an example system configuration of the second embodiment. The information processing system depicted in FIG. 2 is installed at a facility such as a store. As one example, a target person 21 at the facility where the information processing system is installed is a person (or "suspicious person") who is exhibiting suspicious behavior in the facility. A camera 32 and a speaker 41 are also installed at the facility where the information processing system is installed. The information processing system depicted in FIG. 2 detects the target person 21 with the camera 32, and outputs sound toward the target person 21 using the speaker 41.
The host computer 100 detects the target person 21 and controls the speaker 41. The host computer 100 acquires images (which may be video) captured by the camera 32 and detects the target person 21. The host computer 100 also controls the sound outputted from the speaker 41 via a controller 200. As one example, the host computer 100 decides the volume and content (audio file) for the sound outputted from the speaker 41.
The host computer 100 also controls the orientation of the speaker 41 via the controller 200 and a rotation control device 300. The host computer 100 calculates the numbers of rotations of motors so that the speaker 41 is pointed in the direction of the target person 21. The host computer 100 informs the rotation control device 300 of a rotation speed and/or a number of rotations for pointing the speaker 41 in the direction of the target person 21.
The host computer 100 is connected to the rotation control device 300 via the controller 200. The rotation control device 300 rotationally drives a motor or motors for determining the orientation of the speaker 41 according to an instruction from the host computer 100. As one example, the rotation control device 300 causes the motor(s) to rotate at a rotation speed indicated by the host computer 100 for the number of rotations also indicated by the host computer 100.
FIG. 3 depicts example hardware of a host computer. The host computer 100 as a whole is controlled by a processor 101. A memory 102 and a plurality of peripherals are connected to the processor 101 via a bus 111. The processor 101 may be a multiprocessor. As examples, the processor 101 is a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP (Digital Signal Processor). At least some of the functions realized by the processor 101 executing a program may be realized by electronic circuits, such as an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or the like.
The memory 102 is used as a main storage device of the host computer 100. The memory 102 temporarily stores at least part of an OS (Operating System) program and/or an application program to be executed by the processor 101. The memory 102 also stores various data used for processing by the processor 101. As one example, a volatile semiconductor storage device such as RAM (Random Access Memory) is used as the memory 102.
The peripherals connected to the bus 111 include a storage device 103, a graphics processing device 104, an appliance connecting interface 105, an input interface 106, an optical drive device 107, an appliance connecting interface 108, a connection interface 109, and a network interface 110.
The storage device 103 electrically or magnetically writes or reads data to or from a built-in recording medium. The storage device 103 is used as an auxiliary storage device of a computer. The storage device 103 stores an OS program, application programs, and various data. Note that as examples of the storage device 103, it is possible to use an HDD (Hard Disk Drive) or an SSD (Solid State Drive). A monitor 31 is connected to the graphics processing device 104. The graphics processing device 104 displays images on the screen of the monitor 31 according to instructions from the processor 101. A display device that uses organic EL (Electro Luminescence), a liquid crystal display device, or the like may be used as the monitor 31.
The camera 32 is connected to the appliance connecting interface 105. According to instructions from the processor 101, the camera 32 generates data of still images or moving images of the scene in front of the lens of the camera 32 and stores the data in the memory 102.
A keyboard 33 and a mouse 34 are connected to the input interface 106. The input interface 106 transmits signals sent from the keyboard 33 and the mouse 34 to the processor 101. Note that the mouse 34 is one example of a pointing device, and it is possible to use another pointing device. Other pointing devices include a touch panel, a tablet, a touch pad, and a trackball.
The optical drive device 107 uses laser light or the like to read the data recorded on an optical disc 35. The optical disc 35 is a portable recording medium on which data is recorded so that it is read using reflected light. As examples, the optical disc 35 may be a DVD (Digital Versatile Disc), a DVD-RAM, a CD-ROM (Compact Disc Read Only Memory), or a CD-R (Recordable)/RW (ReWritable).
The appliance connecting interface 108 is a communication interface for connecting peripherals to the host computer 100. As examples, the appliance connecting interface 108 may be connected to a memory device 36 and a memory reader/writer 37. The memory device 36 is a recording medium equipped with a function for communicating with the appliance connecting interface 108. The memory reader/writer 37 is a device that writes data in a memory card 38 or reads data from a memory card 38. The memory card 38 is a card-type recording medium.
The connection interface 109 is an interface for connecting to the controller 200. The connection interface 109 is connected to a hub inside the controller 200 according to a standard such as USB (Universal Serial Bus). The network interface 110 is connected to a network 20. The network interface 110 transmits and receives data via the network 20 to and from another computer or a communication appliance.
The host computer 100 realizes the processing functions of the second embodiment with the hardware configuration described above. Note that the information processing apparatus 10 described in the first embodiment may also be realized by the same hardware as the host computer 100 depicted in FIG. 3. The processor 101 is one example of the processing unit 12 described in the first embodiment. The memory 102 or the storage device 103 is one example of the storage unit 11 described in the first embodiment.
As one example, the host computer 100 realizes the processing functions of the second embodiment by executing a program recorded on a computer-readable recording medium. The program in which the processing content to be executed by the host computer 100 is written may be recorded in advance on various recording media. As one example, the program to be executed by the host computer may be stored in the storage device 103. The processor 101 loads at least part of the program in the storage device 103 into the memory 102 and executes the program. The program to be executed by the host computer 100 may be recorded on a portable recording medium, such as the optical disc 35, the memory device 36, or the memory card 38. As one example, the program stored on the portable recording medium becomes executable after being installed in the storage device 103 under control of the processor 101. It is also possible for the processor 101 to directly read and execute the program from a portable recording medium.
FIG. 4 depicts example hardware configurations of a controller and a rotation control device. The controller 200 includes a hub 201 for connecting the host computer 100 and peripherals to each other. The hub 201 is connected to the host computer 100 according to a standard such as USB. The peripherals connected to the hub 201 include a serial bus 202, a DAC (Digital Analog Converter) 204, and a connection interface 206.
The serial bus 202 is a bus for connecting the hub 201 and a DPOT (Digital POTentiometer) 203. As one example, the standard used by the serial bus 202 is I2C (registered trademark). The DPOT 203 is a variable resistor whose resistance value changes in accordance with a signal. The resistance value of the DPOT 203 controls the current flowing through an amplifier (AMP) 205.
The DAC 204 converts a digital signal from the host computer 100 to an analog signal and outputs the analog signal to the amplifier 205. The amplifier 205 amplifies the analog signal inputted from the DAC 204 using a current and outputs the amplified analog signal to the speaker 41. The speaker 41 outputs sound according to the signal inputted from the amplifier 205. As one example, the speaker 41 is a parametric speaker and outputs sound using ultrasound.
The connection interface 206 is an interface for connecting to the rotation control device 300. As one example, the standard used by the connection interface 206 is RS-485.
The rotation control device 300 as a whole is controlled by a processor 301. A memory 302 and a plurality of peripherals are connected to the processor 301 via a bus 306. The processor 301 may be a multiprocessor. As examples, the processor 301 is a CPU, an MPU, or a DSP. At least some of the functions realized by the processor 301 executing the program may be realized by an electronic circuit, such as an ASIC or PLD.
The memory 302 is used as the main storage device of the rotation control device 300. The memory 302 temporarily stores at least part of an OS program and application programs to be executed by the processor 301. The memory 302 also stores various data used for processing by the processor 301. As one example, a volatile semiconductor storage device such as RAM is used as the memory 302.
The peripherals connected to the bus 306 include a storage device 303, a motor driver 304, and a connection interface 305.
The storage device 303 electrically or magnetically writes and reads data to and from a built-in recording medium. The storage device 303 is used as an auxiliary storage device of a computer. The storage device 303 stores an OS program, application programs, and various data. As one example, flash memory may be used as the storage device 303.
Motors 42a and 42b are connected to the motor driver 304. The motor driver 304 receives pulse signals from the processor 301 and rotationally drives the shafts of the motors 42a and 42b. As one example, the motors 42a and 42b are stepping motors whose shafts rotate by amounts that are proportional to the number of pulses.
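Because the shaft rotation is proportional to the pulse count, the number of pulses for a desired change in angle reduces to a single division, as in the Python sketch below. The step angle is an assumed value; the patent does not state the motors' actual step angle or any gearing.

    STEP_ANGLE_DEG = 1.8  # assumed full-step angle, for illustration only

    def pulses_for_angle(delta_angle_deg):
        """Pulses needed to rotate a shaft by delta_angle_deg (assumed, no gearing)."""
        return round(abs(delta_angle_deg) / STEP_ANGLE_DEG)

    print(pulses_for_angle(45.0))  # -> 25 pulses at 1.8 degrees per step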
The connection interface 305 is an interface for connecting to the controller 200. The connection interface 305 uses the same standard as the connection interface 206. The rotation control device 300 realizes the processing functions of the second embodiment using the hardware configuration described above.
As one example, the rotation control device 300 realizes the processing functions of the second embodiment by executing a program recorded on a computer-readable recording medium. The program in which the processing content to be executed by the rotation control device 300 is written may be recorded on various recording media. As one example, a program to be executed by the rotation control device 300 may be stored in advance in the storage device 303. The processor 301 loads at least part of the program in the storage device 303 into the memory 302 and executes the program. The program executed by the rotation control device 300 may be recorded on a portable recording medium, such as an optical disc, a memory device, or a memory card. As one example, a program stored in a portable recording medium becomes executable after being installed in the storage device 303 under the control of the processor 301.
Next, the properties of the speaker 41 will be described.
FIG. 5 depicts example relationships between a listening position relative to the speaker and sound pressure level. The difference between a speaker 30, which is non-directional, and the speaker 41, which is a parametric speaker, will now be described. Note that in FIG. 5, the magnitude of the sound pressure level of sound emitted from the speakers 30 and 41 is represented by the density of dots, with dark areas indicating high sound pressure levels and light areas indicating low sound pressure levels.
The sound emitted by the speaker 30 travels in all directions and attenuates as the distance increases. For this reason, the sound pressure level is high close to the speaker 30, and falls as the distance from the speaker 30 increases. Here, even when the speaker 30 is pointing at the target person 21, when the distance between the speaker 30 and the target person 21 is large, the target person 21 may have difficulty hearing or be unable to hear the sound produced by the speaker 30.
On the other hand, the sound emitted from the speaker 41 is carried by ultrasound. Since the ultrasound hardly spreads, there is little attenuation in the sound as the distance increases. This means that the sound pressure level is high on a straight line in the direction in which the speaker 41 is pointing and the sound pressure level is low in directions aside from the direction in which the speaker 41 is pointing. As a result, when the speaker 41 is pointed at the target person 21, the target person 21 is able to hear the sound emitted by the speaker 41 even when the distance between the speaker 41 and the target person 21 is large. With this configuration, the speaker 41 provides sound to only a specified person (the target person 21).
In this way, the speaker 41 is capable of delivering sound to the target person 21 who is far away. Here, as one example, when the distance between the speaker 41 and the target person 21 is large, the volume of the speaker 41 is set at a large value. When the target person 21 subsequently approaches the speaker 41, the sound pressure level of the sound outputted from the speaker 41 will be too high, so that the sound outputted from the speaker 41 will feel very loud to the target person 21. For this reason, in the second embodiment, the volume of the speaker 41 is set based on the distance between the speaker 41 and the target person 21.
The functions of the host computer 100 and the rotation control device 300 are described in detail below.
FIG. 6 is a block diagram depicting example functions of the host computer and the rotation control device. The host computer 100 includes a storage unit 120, a rotation instruction unit 130, a target person detection unit 140, a distance calculation unit 150, a volume setting unit 160, and a sound reproduction unit 170.
The storage unit 120 stores the correspondence information 121. The correspondence information 121 indicates the correspondence between the distance between the speaker 41 and the target person 21 and the volume of the speaker 41. The correspondence information 121 also associates volumes of the speaker 41 and resistance setting values of the DPOT 203.
The rotation instruction unit 130 specifies the rotation speed and/or the number of rotations, and causes the rotation control device 300 to rotate the motors 42a and 42b. As one example, the rotation instruction unit 130 refers to the present angles of rotational shafts that determine the orientation of the speaker 41, and calculates differences in angle from the target angles. The rotation instruction unit 130 calculates the number of rotations of the motors 42a and 42b for rotating the rotational shafts by the calculated differences in angle. The rotation instruction unit 130 transmits a command to the rotation control device 300 to cause the rotation control device 300 to rotate the motors 42a and 42b by the calculated number of rotations.
The target person detection unit 140 detects the position of the target person 21 based on the images captured by the camera 32. As one example, the target person detection unit 140 detects the target person 21 by inputting images that have been captured by the camera 32 into a learning model that determines whether a person appearing in the input images is suspicious. The target person detection unit 140 then maps the detected position of the target person 21 into a 3D space in which the positions of the camera 32 and the speaker 41 have been mapped.
The distance calculation unit 150 calculates the distance between the speaker 41 and the target person 21. As one example, the distance calculation unit 150 calculates the distance between the speaker 41 and the target person 21 in the 3D space in which the position of the speaker 41 and the position of the target person 21 have been mapped.
The volume setting unit 160 sets the volume of the speaker 41 based on the distance between the speaker 41 and the target person 21. As one example, the volume setting unit 160 uses the correspondence information 121 to specify the volume that corresponds to the distance between the speaker 41 and the target person 21 calculated by the distance calculation unit 150. The volume setting unit 160 then specifies a resistance setting value corresponding to the specified volume using the correspondence information 121, and sets the resistance value of the DPOT 203 at the specified resistance setting value.
The sound reproduction unit 170 transmits a command to the controller 200 to cause the speaker 41 to reproduce sound.
The rotation control device 300 includes a motor control unit 310. The motor control unit 310 rotationally drives the motors 42a and 42b according to instructions from the host computer 100.
Note that the lines connecting the respective elements depicted in FIG. 6 illustrate only some communication paths, and it is possible to set communication paths aside from those that have been illustrated. In addition, the functions of the respective elements depicted in FIG. 6 may be realized by causing a computer to execute a program module corresponding to the elements, for example.
Next, the correspondence information 121 stored in the storage unit 120 will be described in detail.
FIG. 7 depicts an example of the correspondence information. The correspondence information 121 indicates the correspondence between the distance between the speaker 41 and the target person 21, the volume of the speaker 41, and the resistance setting value of the DPOT 203. The correspondence information 121 is also associated with a predetermined noise level. As one example, the predetermined noise level is an average noise level of the facility where the information processing system has been installed. In the following description, as one example, the correspondence information 121 is associated with a noise level of 65 decibels. In the correspondence information 121, volumes of the speaker 41 are set such that sound with a sound pressure level that is 5 decibels higher than the associated noise level, which is to say, a sound pressure level of 70 decibels, is delivered to the target person 21.
The correspondence information 121 includes "Distance (m)", "Volume (%)", and "Resistance Setting Value (h)" columns. In the "Distance (m)" column, the distance between the speaker 41 and the target person 21 is set in meter units. In the "Volume (%)" column, the volume of the speaker 41 is set as a percentage. In the "Resistance Setting Value (h)" column, the resistance value of the DPOT 203 is set. In FIG. 7, the resistance value is expressed in hexadecimal.
As one example, the correspondence information 121 includes a record where the "Distance (m)" field is "1", the "Volume (%)" field is "80", and the "Resistance Setting Value (h)" field is "50". This record indicates that the volume for outputting sound with a sound pressure level of 70 decibels at a point where the distance from the speaker 41 is 1 meter is 80 percent. This record also indicates that the resistance value of the DPOT 203 for setting the volume of the speaker 41 at 80 percent is "50" in hexadecimal.
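One way to hold the correspondence information 121 in memory is as a list of (distance, volume, resistance setting) records, as sketched below. Only the 1 m row (80 percent, 50h) is stated explicitly here; the other volumes are the ones quoted in the FIG. 9 examples later, and their resistance settings are unknown placeholders.

    from typing import NamedTuple, Optional

    class Row(NamedTuple):
        distance_m: float
        volume_pct: int
        resistance_setting: Optional[int]  # DPOT setting, hexadecimal in FIG. 7

    CORRESPONDENCE_121 = [
        Row(1.0, 80, 0x50),   # the record described above
        Row(2.0, 85, None),   # volumes quoted in the FIG. 9 examples; settings unknown
        Row(4.0, 90, None),
        Row(7.0, 100, None),
    ]

    def row_for_distance(distance_m):
        """Pick the record whose distance is closest to the calculated one."""
        return min(CORRESPONDENCE_121, key=lambda r: abs(r.distance_m - distance_m))

    print(row_for_distance(2.0).volume_pct)  # -> 85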
Next, a method of mapping the target person 21, the camera 32, and the speaker 41 in the 3D space will be described.
FIG. 8 depicts one example method of mapping onto a 3D space. The 3D space 40 is a coordinate system that represents the positional relationships between the target person 21, the camera 32, and the speaker 41. The z axis in the 3D space 40 indicates the vertical direction in a real space, and the z=0 plane in the 3D space 40 indicates the floor or the ground in the real space.
In the 3D space 40, a camera position 32a indicating the position of the camera 32 and a speaker position 41a indicating the position of the speaker 41 are mapped in advance. The camera position 32a is indicated by the coordinates (xc, yc, zc), and the speaker position 41a is indicated by the coordinates (xs, ys, zs).
The target person detection unit 140 maps the position of the target person 21 into the 3D space 40. As one example, the target person detection unit 140 calculates the position of the foot of the target person 21 relative to the camera 32 based on images of the target person 21 captured by the camera 32. The target person detection unit 140 calculates a foot position 21a indicating the position of the foot of the target person 21 in the 3D space 40 based on the position of the foot of the target person 21 relative to the camera 32. This foot position 21a is indicated by the coordinates (xp, yp, 0). The target person detection unit 140 then sets a position produced by changing the z coordinate of the foot position 21a to 1.5 (that is, a position 1.5 meters above the ground in the real space) as a head position 21b of the target person 21. The head position 21b is therefore indicated by the coordinates (xp, yp, 1.5). By using these coordinates in the 3D space 40, the distance between the speaker 41 and the target person 21 is calculated. The distance between the speaker 41 and the target person 21 is the distance d between the speaker position 41a and the head position 21b calculated by the following equation.
d = √((xs − xp)² + (ys − yp)² + (zs − 1.5)²)   (1)
Note that the speaker 41 is controlled so as to be pointed at the head of the target person 21. The distance between the installed position of the speaker 41 and the position of the head of the target person 21 is calculated by Equation (1). By doing so, the distance between the speaker 41, which is the sound source, and the ears of the target person 21 who hears the sound is calculated.
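Equation (1) is an ordinary Euclidean distance in the 3D space 40. A minimal Python sketch, with the 1.5-meter head height kept as a parameter:

    import math

    def distance_speaker_to_head(speaker_pos, foot_pos, head_height_m=1.5):
        """Equation (1): distance from the speaker position (xs, ys, zs) to the
        point head_height_m above the foot position (xp, yp, 0)."""
        xs, ys, zs = speaker_pos
        xp, yp, _ = foot_pos
        return math.sqrt((xs - xp) ** 2 + (ys - yp) ** 2 + (zs - head_height_m) ** 2)

    # FIG. 9 example: speaker on the ceiling at 3.0 m, feet 1.3 m away laterally
    # -> roughly 2.0 m to the head position.
    print(distance_speaker_to_head((0.0, 0.0, 3.0), (1.3, 0.0, 0.0)))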
Next, a method of controlling the volume according to the distance between the speaker 41 and the target person 21 will be described.
FIG. 9 depicts an example method for controlling the volume. The volume setting unit 160 controls the volume according to the distance between the speaker 41 and the target person 21. The speaker 41 is installed on the ceiling at a height of 3.0 meters from the floor. As one example, the speaker position 41a is a center position of the mount of the speaker 41 that is in contact with the ceiling. For ease of explanation, the floor and the ceiling are represented one-dimensionally in FIG. 9. Further, the direction perpendicular to the direction from the floor to the ceiling may be referred to as the "lateral direction". It is also assumed that the noise level is 65 decibels at any point in the space depicted in FIG. 9.
As one example, assume here that the target person detection unit 140 has specified a point 1.3 meters away in the lateral direction from a point on the floor vertically below the speaker position 41a as the foot position 51a, that is, the position of the feet of the target person 21. In this case, assume also that the target person detection unit 140 sets a position that is 1.5 meters vertically above the foot position 51a as the head position 51b indicating the position of the head of the target person 21. The distance calculation unit 150 calculates the distance between the speaker position 41a and the head position 51b as 2.0 meters using Equation (1).
When volume control is not performed according to the distance, the sound is outputted from the speaker 41 at a constant volume (as one example, a volume of 100 percent). When this happens, the sound pressure level of the outputted sound at the head position 51b is 81 decibels. On the other hand, when volume control is performed according to the distance, the volume setting unit 160 refers to the correspondence information 121 and specifies a volume of 85 percent corresponding to the calculated distance of 2.0 meters. When sound is outputted from the speaker 41 at a volume of 85 percent, the sound pressure level of the outputted sound at the head position 51b is 70 decibels.
As another example, assume that the target person detection unit 140 has specified a point that is 2.4 meters away from the foot position 51a in the lateral direction as a foot position 52a. In this case, the target person detection unit 140 sets the position which is 1.5 meters vertically higher than the foot position 52a as a head position 52b. The distance calculation unit 150 calculates the distance between the speaker position 41a and the head position 52b as 4.0 meters using Equation (1).
When volume control is not performed according to the distance and the sound is outputted from the speaker 41 at a volume of 100 percent, the sound pressure level of the outputted sound at the head position 52b will be 75 decibels. On the other hand, when volume control is performed according to the distance, the volume setting unit 160 refers to the correspondence information 121 and specifies a volume of 90 percent corresponding to the calculated distance of 4.0 meters. When sound is outputted from the speaker 41 at a volume of 90 percent, the sound pressure level of the outputted sound at the head position 52b is 70 decibels.
As yet another example, assume that the target person detection unit 140 specifies a point that is 3.1 meters away from the foot position 52a in the lateral direction as a foot position 53a. In this case, the target person detection unit 140 sets a position 1.5 meters vertically above the foot position 53a as a head position 53b. The distance calculation unit 150 calculates the distance between the speaker position 41a and the head position 53b as 7.0 meters using Equation (1).
When volume control is not performed according to the distance and the sound is outputted from the speaker 41 at a volume of 100 percent, the sound pressure level of the outputted sound at the head position 53b will be 70 decibels. On the other hand, when volume control according to the distance is performed, the volume setting unit 160 refers to the correspondence information 121 and specifies a volume of 100 percent corresponding to the calculated distance of 7.0 meters. When sound is outputted from the speaker 41 at a volume of 100 percent, the sound pressure level of the outputted sound at the head position 53b is 70 decibels.
As described above, when volume control is not performed according to the distance, the sound pressure level of the sound outputted from the speaker 41 is high at locations near the speaker 41 (as examples, the head positions 51b and 52b). This means that the sound outputted from the speaker 41 feels loud to the target person 21. On the other hand, when the volume is controlled according to the distance, the sound pressure level of the sound outputted from the speaker 41 is 70 decibels at all of the head positions 51b, 52b, and 53b. This sound pressure level is only 5 decibels higher than the ambient noise level, which stops the sound outputted from the speaker 41 from sounding loud.
The procedure of sound reproduction by the information processing system will now be described in detail.
FIG. 10 is a flowchart depicting an example procedure of sound reproduction. The processing depicted in FIG. 10 is described below in order of the step numbers. [Step S101] The motor control unit 310 of the rotation control device 300 performs calibration. As one example, the motor control unit 310 calculates the angles of the rotational shafts that determine the orientation of the speaker 41 based on the position of markers captured by a camera installed in the speaker 41, and calibrates the present angles of the rotational shafts that are stored in the host computer 100.
[Step S102] The target person detection unit 140 of the host computer 100 detects the position of the foot of the target person 21 based on images captured by the camera 32. As one example, the target person detection unit 140 detects the target person 21 by inputting the images captured by the camera 32 into a learning model that determines whether a person appearing in the input images is suspicious. The target person detection unit 140 calculates the position of the foot of the detected target person 21 relative to the camera 32 based on the images captured by the camera 32. The target person detection unit 140 calculates a foot position 21a indicating the position of the foot of the target person 21 in the 3D space 40 based on the position of the foot of the target person 21 relative to the camera 32.
[Step S103] The target person detection unit 140 calculates the position of the head of the target person 21. As one example, the target person detection unit 140 sets a position produced by changing the z coordinate of the foot position 21a in the 3D space 40 to 1.5 as the head position 21b of the target person 21.
[Step S104] The distance calculation unit 150 of the host computer 100 calculates the distance between the speaker 41 and the target person 21. As one example, the distance calculation unit 150 calculates the distance d between the speaker position 41a set in advance in the 3D space 40 and the head position 21b calculated in step S103 using Equation (1).
[Step S105] The rotation instruction unit 130 of the host computer 100 converts an instruction for rotationally driving the motors 42a and 42b by designated numbers of rotations into motor control commands (for example, RS-485 commands). As one example, the rotation instruction unit 130 calculates the angle (target angle) of the rotational shaft for pointing the speaker 41 at the head of the target person 21 based on the speaker position 41a and the head position 21b in the 3D space 40. The rotation instruction unit 130 refers to the present angle of the speaker 41 and calculates the difference in angle from the target angle. The rotation instruction unit 130 calculates the numbers of rotations of the motors 42a and 42b for rotating the rotational shafts by the calculated differences in angle. The rotation instruction unit 130 then converts the instruction for rotationally driving the motors 42a and 42b by the calculated numbers of rotations into motor control commands.
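The patent does not give the formulas for the target angles themselves. Under the assumption of a simple pan/tilt mount, they can be derived from the speaker position 41a and the head position 21b as below; the geometry and names are illustrative only.

    import math

    def target_angles(speaker_pos, head_pos):
        """Hypothetical pan/tilt angles (degrees) for pointing the speaker at the head.
        Assumes pan is measured in the x-y plane and tilt downward from horizontal."""
        dx = head_pos[0] - speaker_pos[0]
        dy = head_pos[1] - speaker_pos[1]
        dz = head_pos[2] - speaker_pos[2]  # negative when the head is below the speaker
        pan = math.degrees(math.atan2(dy, dx))
        tilt = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))
        return pan, tilt

    # The differences between these target angles and the present shaft angles are
    # what get converted into numbers of motor rotations.
    print(target_angles((0.0, 0.0, 3.0), (1.3, 0.0, 1.5)))  # pan 0, tilt about 49 degrees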
[Step S106] The rotation instruction unit 130 transmits the commands produced by the conversion in step S105 to the rotation control device 300.
[Step S107] The motor control unit 310 rotationally drives the motors 42a and 42b (horizontal and vertical motors) by the numbers of rotations corresponding to the commands transmitted in step S106.
[Step S108] The motor control unit 310 transmits a rotation completion notification to the host computer 100. As one example, the motor control unit 310 transmits the rotation completion notification by way of a status transmission using RS-485.
[Step S109] The volume setting unit 160 of the host computer 100 specifies the volume of the speaker 41 corresponding to the distance between the speaker 41 and the target person 21. As one example, the volume setting unit 160 sets a volume corresponding to a distance that is closest to the distance d calculated in step S104 out of the distances registered in the correspondence information 121 as the volume of the speaker 41 corresponding to the distance between the speaker 41 and the target person 21.
[Step S110] The volume setting unit 160 refers to the correspondence information 121 and specifies the resistance setting value corresponding to the volume specified in step S109.
[Step S111] The volume setting unit 160 transmits a command (as one example, an I2C command), which has been obtained by converting an instruction for changing the resistance value of the DPOT 203 to the resistance setting value specified in step S110, to the controller 200.
[Step S112] The controller 200 changes the resistance value of the DPOT 203 to the resistance setting value set by the command transmitted in step S111.
[Step S113] The sound reproduction unit 170 of the host computer 100 transmits a command for causing the speaker 41 to reproduce sound to the controller 200.
[Step S114] The controller 200 reproduces sound from the speaker 41.
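Taken together, the host-side part of FIG. 10 (steps S102 to S114) can be summarized as the self-contained sketch below. The table values beyond the 1 m record, the speaker coordinates, and all function names are assumptions for illustration; the print calls stand in for the commands sent to the controller 200 and the rotation control device 300.

    import math

    # distance (m) -> (volume %, DPOT setting); only the 1 m row is from FIG. 7,
    # the rest are placeholders consistent with the FIG. 9 volumes.
    CORRESPONDENCE_121 = {1.0: (80, 0x50), 2.0: (85, 0x48), 4.0: (90, 0x40), 7.0: (100, 0x30)}

    def reproduce_sound_at_target(foot_pos, speaker_pos=(0.0, 0.0, 3.0)):
        # S102-S103: head position is 1.5 m above the detected foot position.
        head = (foot_pos[0], foot_pos[1], 1.5)

        # S104: Equation (1).
        d = math.dist(speaker_pos, head)

        # S105-S108: point the speaker at the head (assumed pan/tilt geometry).
        dx, dy, dz = (head[i] - speaker_pos[i] for i in range(3))
        pan = math.degrees(math.atan2(dy, dx))
        tilt = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))
        print(f"rotate speaker: pan {pan:.1f} deg, tilt {tilt:.1f} deg")

        # S109-S112: nearest-distance lookup, then change the DPOT resistance.
        nearest = min(CORRESPONDENCE_121, key=lambda k: abs(k - d))
        volume_pct, setting = CORRESPONDENCE_121[nearest]
        print(f"distance {d:.1f} m -> volume {volume_pct} %, DPOT setting {setting:#04x}")

        # S113-S114: reproduce sound from the speaker via the controller.
        print("reproduce sound")

    reproduce_sound_at_target((1.3, 0.0, 0.0))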
In this way, the speaker 41 reproduces sound at the volume set according to the distance between the speaker 41 and the target person 21. When doing so, the speaker 41 is pointed at the position of the head of the target person 21. Here, the distance between the installed position of the speaker 41 and the head position of the target person 21 is calculated as the distance between the speaker 41 and the target person 21. By doing so, the distance between the sound source and the position where the sound is received is accurately calculated.
The volume of the speaker 41 is then set at a volume corresponding to the calculated distance. The volume of the speaker 41 is set at a volume that makes the sound pressure level of the sound outputted by the speaker 41 5 decibels higher than the average ambient noise level at a point the calculated distance away from the speaker 41. By doing so, the sound outputted from the speaker 41 is easily heard by the target person 21, which means that the volume of the speaker 41 is set appropriately.
With the host computer 100 according to the second embodiment, the distance between the speaker 41 and the target person 21 is calculated based on the position of the speaker 41 and the position of the target person 21 who the speaker 41 is pointing at. A volume corresponding to the distance between the speaker 41 and the target person 21 is then specified based on the correspondence information 121, and the speaker 41 is instructed to output sound at the specified volume. By doing so, it is possible to set the volume of the speaker 41 appropriately.
Further, the correspondence information 121 indicates the correspondence between distance and a volume at which the sound pressure level of sound outputted from the speaker 41 at a point the corresponding distance away is higher than a predetermined noise level by a predetermined value. By doing so, it becomes easy for the target person 21 to hear the sound emitted from the speaker 41.
The distance between the speaker position 41a and the head position 21b that is a predetermined height above the foot position 21a is calculated as the distance between the speaker 41 and the target person 21. By doing so, the distance between the speaker 41 and the position of the ears of the target person 21 is calculated.
The speaker 41 is a parametric speaker. By using a parametric speaker, even when the distance between the speaker 41 and the target person 21 is large, it is still possible for the speaker 41 to provide sound to the target person 21 at an appropriate volume.
Third Embodiment
Next, a third embodiment will be described. Although the volume of the speaker is controlled according to the distance between the speaker and the target person in the second embodiment, in this third embodiment, the volume of the speaker is controlled according to the distance between the speaker and the target person and also the ambient noise level. The following description will focus on the differences with the second embodiment, and description of the same content as the second embodiment may be omitted.
FIG. 11 depicts one example system configuration of the third embodiment. In this third embodiment, a host computer 100a is used in place of the host computer 100 in the second embodiment, and a rotation control device 300a is used in place of the rotation control device 300.
The host computer 100a detects the target person 21 and controls the speaker 41. Here, the host computer 100a controls the volume according to the distance between the speaker 41 and the target person 21 and an ambient noise level acquired from the rotation control device 300a. The rotation control device 300a acquires ambient sound from a microphone 43 installed in the rotation control device 300a and determines the ambient noise level using the acquired sound. The rotation control device 300a transmits the determined noise level to the host computer 100a. Note that it is assumed that in the facility where the information processing system is installed, the noise level is constant regardless of the location. The facility where the information processing system is installed is one example of the predetermined range including the target person position 4 described in the first embodiment.
In the same way as the host computer 100 of the second embodiment, the host computer 100a is realized by the hardware configuration depicted in FIG. 3. In the following description, the same reference numerals as the hardware of the host computer 100 are used for the hardware of the host computer 100a.
FIG. 12 depicts another example configuration of the hardware of the rotation control device. In addition to the hardware of the rotation control device 300, a sound input unit 307 is connected to a bus 306 of the rotation control device 300a.
The microphone 43 is connected to the sound input unit 307. The sound input unit 307 converts a sound signal inputted from the microphone 43 to a digital signal and transmits the digital signal to the processor 301.
Next, the functions of the host computer 100a and the rotation control device 300a will be described in detail.
FIG. 13 is a block diagram depicting examples of the functions of the host computer and the rotation control device. The host computer 100a includes a storage unit 120a in place of the storage unit 120 of the host computer 100, and includes a volume setting unit 160a in place of the volume setting unit 160. In addition to the functions of the host computer 100, the host computer 100a includes a noise level acquisition instruction unit 180.
The storage unit 120a includes correspondence information 121a, 121b, 121c, ... and selection information 122. In the same way as the correspondence information 121, the correspondence information 121a, 121b, 121c, ...
indicate the correspondence between the distance between the speaker 41 and the target person 21, the volume, and the resistance value of the DPOT 203. However, the respective correspondence information 121a, 121b, 121c, ... are associated with different noise levels. The correspondence information 121a, 121b, 121c, ... set the volume of the speaker 41 for providing the target person 21 with sound at a sound pressure level that is 5 decibels higher than the associated noise level. The selection information 122 indicates the correspondence between respective IDs (identifiers) of the correspondence information 121a, 121b, 121c, ... and noise levels.
The volume setting unit 160a sets the volume of the speaker 41 based on the distance between the speaker 41 and the target person 21 and the noise level. As one example, the volume setting unit 160a uses the selection information 122 to select the correspondence information (as one example, the correspondence information 121a) associated with the noise level acquired by the noise level acquisition instruction unit 180 from the rotation control device 300a, out of the correspondence information 121a, 121b, 121c, .... The volume setting unit 160a then uses the correspondence information 121a to perform the same processing as the volume setting process performed by the volume setting unit 160 using the correspondence information 121.
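Building on the tables sketched above, the selection and lookup performed by the volume setting unit 160a might look like the following. The closest-match selection mirrors the flowchart description later in this embodiment; the nearest-distance lookup inside a table is an assumption made for illustration, since the embodiment does not state how intermediate distances are handled.

```python
def select_correspondence_info(noise_level_db: float) -> dict[float, int]:
    """Select the correspondence information whose associated noise level is
    closest to the measured ambient noise level (via the selection information)."""
    best_id = min(SELECTION_INFO, key=lambda i: abs(SELECTION_INFO[i] - noise_level_db))
    return CORRESPONDENCE_INFO[best_id]


def volume_for_distance(table: dict[float, int], distance_m: float) -> int:
    """Return the volume (in percent) stored for the distance closest to the
    calculated speaker-to-target distance (nearest-entry lookup is assumed)."""
    nearest = min(table, key=lambda d: abs(d - distance_m))
    return table[nearest]
```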
The noise level acquisition instruction unit 180 transmits a command to the rotation control device 300a to cause the rotation control device 300a to acquire the ambient noise.
The rotation control device 300a includes the motor control unit 310 and a noise level acquisition unit 320. The noise level acquisition unit 320 acquires ambient sound from the microphone 43 and determines the ambient noise level.
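The embodiment does not describe how the noise level acquisition unit 320 converts the microphone signal into a decibel value, so the following is only a sketch of one common approach: compute the root-mean-square level of a block of samples and convert it to decibels relative to a calibration reference. The function name, the reference value, and the use of normalized samples are all assumptions.

```python
import math


def estimate_noise_level_db(samples: list[float], reference_rms: float = 1e-4) -> float:
    """Estimate an ambient noise level in decibels from one block of microphone
    samples normalized to the range -1.0 to 1.0. The reference_rms constant,
    which maps a digital RMS value to 0 dB, is purely illustrative and would
    need calibration against the actual microphone 43 and sound input unit 307."""
    if not samples:
        raise ValueError("empty sample block")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    rms = max(rms, 1e-12)  # guard against log(0) for a silent block
    return 20.0 * math.log10(rms / reference_rms)
```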
FIG. 14 depicts one example of selection information. The selection information 122 has "ID" and "Noise level (dB)" columns. The respective IDs of the correspondence information 121a, 121b, 121c, ... are set in the "ID" column. Values of the noise level are set in decibel units in the "Noise level (dB)" column.
As one example, a record in which the "ID" field is "1" and the "Noise level (dB)" field is "40" is set in the selection information 122. This record indicates that the correspondence information with the ID "1" out of the correspondence information 121a, 121b, 121c, ... is associated with a noise level of 40 decibels. That is, the volume of the speaker 41 for providing sound with a sound pressure level that is 5 decibels higher than 40 decibels (which is to say, 45 decibels) is set in the correspondence information with the ID "1" out of the correspondence information 121a, 121b, 121c, ....
Next, a method of controlling the volume according to the distance between the speaker 41 and the target person 21 will be described.
FIG. 15 depicts another example of a method for controlling the volume. The volume setting unit 160a controls the volume according to the distance between the speaker 41 and the target person 21 and the noise level.
Note that the speaker position 41a and the height from the floor to the ceiling in the example depicted in FIG. 15 are the same as in the example depicted in FIG. 9.
As one example, assume that the target person detection unit 140 has specified a point that is 1.3 meters away from a point on the floor vertically below the speaker position 41a in the lateral direction as a foot position 61a, and has set a position that is 1.5 meters vertically above the foot position 61a as a head position 61b. The distance calculation unit 150 calculates the distance between the speaker position 41a and the head position 61b as 2.0 meters according to Equation (1). It is also assumed that the ambient noise level determined by the noise level acquisition unit 320 is 50 decibels.
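Equation (1) is not reproduced in this excerpt; assuming it is the straight-line (Pythagorean) distance between the speaker position 41a and the head position, the calculation above can be sketched as follows. The 3.0 meter ceiling height is an assumption, chosen because it is consistent with the rounded distances of 2.0 meters and 7.0 meters used in this embodiment.

```python
import math

CEILING_HEIGHT_M = 3.0  # assumed mounting height of the speaker 41 (not stated here)
HEAD_HEIGHT_M = 1.5     # the head position is set 1.5 m above the foot position


def speaker_to_head_distance(lateral_offset_m: float) -> float:
    """Straight-line distance from the speaker position 41a to the head position,
    assuming Equation (1) is the Pythagorean distance."""
    vertical = CEILING_HEIGHT_M - HEAD_HEIGHT_M
    return math.hypot(lateral_offset_m, vertical)


print(round(speaker_to_head_distance(1.3), 1))        # ~2.0 m (this example)
print(round(speaker_to_head_distance(1.3 + 5.5), 1))  # ~7.0 m (the example described below)
```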
When control is performed here according to the distance alone, sound is outputted from the speaker 41 at a volume that produces a constant sound pressure level regardless of the distance. As one example, the constant sound pressure level is a value that is 5 decibels higher than the average noise level in the facility where the information processing system is installed (here, an average of 65 decibels). When control is performed in this way, sound is outputted from the speaker 41 at a volume of 85 percent, and the sound pressure level of the outputted sound at the head position 61b is 70 decibels.
On the other hand, when the volume is controlled according to the distance and the noise level, the volume setting unit 160a refers to the selection information 122 and selects the correspondence information corresponding to a noise level of 50 decibels out of the correspondence information 121a, 121b, 121c, .... The volume setting unit 160a then refers to the selected correspondence information and specifies the volume of 70 percent corresponding to the calculated distance of 2.0 meters. When sound is outputted from the speaker 41 at a volume of 70 percent, the sound pressure level of the outputted sound at the head position 61b is 55 decibels.
As another example, assume that the target person detection unit 140 has specified a point on the floor that is 5.5 meters away from the foot position 61a in the lateral direction as a foot position 62a, and has set a position 1.5 meters vertically above the foot position 62a as a head position 62b. The distance calculation unit 150 calculates the distance between the speaker position 41a and the head position 62b as 7.0 meters according to Equation (1). It is also assumed that the ambient noise level determined by the noise level acquisition unit 320 is 60 decibels. When control is performed according to the distance, sound is outputted from the speaker 41 at a volume of 100 percent and the sound pressure level of the outputted sound at the head position 62b is 70 decibels.
On the other hand, when the volume is controlled according to the distance and the noise level, the volume setting unit 160a refers to the selection information 122 and specifies the correspondence information corresponding to a noise level of 60 decibels out of the correspondence information 121a, 121b, 121c, .... The volume setting unit 160a then refers to the specified correspondence information and specifies a volume of 90 percent corresponding to the calculated distance of 7.0 meters. When sound is outputted from the speaker 41 at a volume of 90 percent, the sound pressure level of the outputted sound at the head position 62b is 65 decibels.
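Using the sketches given earlier, the two worked examples can be reproduced as follows; only the 50 decibel / 2.0 meter / 70 percent and 60 decibel / 7.0 meter / 90 percent entries come from the description, the rest of the table contents being placeholders.

```python
# Worked example 1: ambient noise 50 dB, calculated distance 2.0 m -> 70 percent.
table = select_correspondence_info(50)
print(volume_for_distance(table, 2.0))  # 70

# Worked example 2: ambient noise 60 dB, calculated distance 7.0 m -> 90 percent.
table = select_correspondence_info(60)
print(volume_for_distance(table, 7.0))  # 90
```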
As described above, when volume control is performed according to the distance alone and the noise level is lower than average, the sound pressure level of the sound outputted from the speaker 41 will be, for example, 10 decibels or more higher than the noise level. This means that the sound outputted from the speaker 41 will feel loud to the target person 21. On the other hand, when the volume is controlled according to the distance and the noise level, the sound pressure level of the sound outputted from the speaker 41 will be 5 decibels higher than the ambient noise level. Accordingly, the sound outputted from the speaker 41 will not feel loud to the target person 21.
The procedure of sound reproduction by the information processing system will now be described in detail.
FIG. 16 is a flowchart depicting one example of the procedure for sound reproduction. During sound reproduction in the third embodiment, processing in steps S103a to S103d is executed between the steps S103 and S104 of the processing depicted in FIG. 10. This processing in steps S103a to S103d will now be described in order of the step numbers.
[Step S103a] The noise level acquisition instruction unit 180 of the host computer 100a transmits a command (for example, an RS-485 command) to the rotation control device 300a to cause the rotation control device 300a to acquire the ambient noise.
[Step S103b] The noise level acquisition unit 320 of the rotation control device 300a acquires the ambient noise level. As one example, the noise level acquisition unit 320 acquires ambient sound from the microphone 43 and determines the noise level of the acquired sound.
[Step S103c] The noise level acquisition unit 320 transmits the noise level determined in step S103b to the host computer 100a. The noise level acquisition unit 320 transmits this noise level as one example by status transmission according to RS-485.
[Step S103d] The volume setting unit 160a of the host computer 100a selects correspondence information that is associated with the ambient noise level out of the correspondence information 121a, 121b, 121c, .... As one example, the volume setting unit 160a specifies a noise level that is closest to the noise level transmitted from the rotation control device 300a in step S103c out of the noise levels registered in the selection information 122. The volume setting unit 160a specifies the ID associated with the specified noise level. The volume setting unit 160a then selects the correspondence information that corresponds to the specified ID (for example, the correspondence information 121a) out of the correspondence information 121a, 121b, 121c, .... The processing then proceeds to step S104. In the processing from step S104 onward, the correspondence information 121a is used in place of the correspondence information 121.
Although the processing in steps S103a to S103d is executed between steps S103 and S104 in the example described above, this processing may be executed at any timing up to step S109. The processing in steps S103a to S103d may be executed at regular time intervals, for example, and does not need to be executed every time the sound reproduction processing is executed.
In this way, the speaker 41 reproduces sound at a volume set according to the distance between the speaker 41 and the target person 21 and the ambient noise level. The volume of the speaker 41 is set so that the sound pressure level of the sound outputted from the speaker 41 is 5 decibels higher than a predetermined noise level at a point separated from the speaker 41 by the distance between the speaker 41 and the target person 21. Here, the predetermined noise level is selected according to the ambient noise level. By doing so, the sound outputted from the speaker 41 is easily heard by the target person 21 even when the ambient noise level fluctuates. This means that the volume of the speaker 41 is set appropriately.
According to the third embodiment, in the same way as in the second embodiment, the volume of the speaker 41 is appropriately set. In addition, in the third embodiment, a noise level determined based on sound acquired from the microphone 43 within a predetermined range including the position of the target person 21 is acquired. The volume corresponding to the distance between the speaker 41 and the target person 21 is then specified based on the correspondence information corresponding to the acquired noise level out of the correspondence information 121a, 121b, 121c, .... By doing so, the volume of the speaker 41 is set according to the ambient noise level.
According to the present embodiments, it is possible to appropriately set the volume of a speaker.

Claims (7)

  1. An information processing apparatus comprising: processing means configured to: calculate a distance between a speaker and a target person who the speaker is pointing at, based on a position of the speaker and a position of the target person; specify a volume corresponding to the calculated distance based on correspondence information indicating correspondence between distances and volumes; and instruct the speaker to output sound at the specified volume.
  2. The information processing apparatus according to claim 1, wherein the correspondence information indicates the correspondence between distances and volumes which produce a sound pressure level of sound outputted by the speaker such that the sound pressure level at a point the corresponding distance away from the speaker is a predetermined value higher than a predetermined noise level.
  3. The information processing apparatus according to claim 1 or 2, wherein the processing means is configured to: acquire a noise level which is determined based on sound acquired from a microphone in a predetermined range including the position of the target person; and specify a volume corresponding to the calculated distance based on correspondence information which corresponds to the acquired noise level, out of correspondence information provided in plurality that correspond to respectively different noise levels.
  4. The information processing apparatus according to any of claims 1 to 3, wherein the processing means is configured to: calculate, as the distance between the speaker and the target person, a distance between the position of the speaker and a position of a predetermined height above a position of a foot of the target person.
  5. The information processing apparatus according to any of claims 1 to 4, wherein the speaker is a parametric speaker.
  6. A computer program that causes a computer to execute a process comprising: calculating a distance between a speaker and a target person who the speaker is pointing at, based on a position of the speaker and a position of the target person; specifying a volume corresponding to the calculated distance based on correspondence information indicating correspondence between distances and volumes; and instructing the speaker to output sound at the specified volume.
  7. An information processing system comprising: a speaker; and an information processing apparatus configured to: calculate a distance between the speaker and a target person who the speaker is pointing at, based on a position of the speaker and a position of the target person; specify a volume corresponding to the calculated distance based on correspondence information indicating correspondence between distances and volumes; and instruct the speaker to output sound at the specified volume.
GB2015093.4A 2019-10-30 2020-09-24 Information processing apparatus, program, and information processing system Withdrawn GB2589720A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2019196909 2019-10-30

Publications (3)

Publication Number Publication Date
GB202015093D0 GB202015093D0 (en) 2020-11-11
GB2589720A true GB2589720A (en) 2021-06-09
GB2589720A8 GB2589720A8 (en) 2021-08-04

Family

ID=73197397

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2015093.4A Withdrawn GB2589720A (en) 2019-10-30 2020-09-24 Information processing apparatus, program, and information processing system

Country Status (1)

Country Link
GB (1) GB2589720A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063964A (en) * 2021-10-29 2022-02-18 歌尔科技有限公司 Volume compensation optimization method and device, electronic equipment and readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115499738A (en) * 2022-09-21 2022-12-20 电子科技大学 Programmable parametric array speaker with safety device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080292115A1 (en) * 2007-05-22 2008-11-27 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. Apparatus and method for adjusting sound volume
JP2012029096A (en) * 2010-07-23 2012-02-09 Nec Casio Mobile Communications Ltd Sound output device
US20130094656A1 (en) * 2011-10-16 2013-04-18 Hei Tao Fung Intelligent Audio Volume Control for Robot
US20130294618A1 (en) * 2012-05-06 2013-11-07 Mikhail LYUBACHEV Sound reproducing intellectual system and method of control thereof
US20130342669A1 (en) * 2012-06-22 2013-12-26 Wistron Corp. Method for auto-adjusting audio output volume and electronic apparatus using the same
US20190173446A1 (en) * 2017-12-04 2019-06-06 Lutron Electronics Co., Inc. Audio Device with Dynamically Responsive Volume
WO2020241845A1 (en) * 2019-05-29 2020-12-03 Asahi Kasei Kabushiki Kaisha Sound reproducing apparatus having multiple directional speakers and sound reproducing method

Also Published As

Publication number Publication date
GB202015093D0 (en) 2020-11-11
GB2589720A8 (en) 2021-08-04

Similar Documents

Publication Publication Date Title
WO2018149275A1 (en) Method and apparatus for adjusting audio output by speaker
JP6455686B2 (en) Distributed wireless speaker system
US9854371B2 (en) Information processing system, apparatus and method for measuring a head-related transfer function
US9357308B2 (en) System and method for dynamic control of audio playback based on the position of a listener
CN102577433B (en) Volume adjustment based on listener position
US9402145B2 (en) Wireless speaker system with distributed low (bass) frequency
US10798518B2 (en) Apparatus and associated methods
GB2589720A (en) Information processing apparatus, program, and information processing system
EP3399398B1 (en) An apparatus and associated methods for presentation of spatial audio
CN102547533A (en) Acoustic control apparatus and acoustic control method
JP2017500989A (en) Variable audio parameter setting
EP3473019A1 (en) Distributed audio capture and mixing controlling
US11320894B2 (en) Dynamic control of hovering drone
US10567871B1 (en) Automatically movable speaker to track listener or optimize sound performance
JP6697174B1 (en) Information processing device, program, and information processing system
US11221821B2 (en) Audio scene processing
US10616684B2 (en) Environmental sensing for a unique portable speaker listening experience
US11889288B2 (en) Using entertainment system remote commander for audio system calibration
JP2022533755A (en) Apparatus and associated methods for capturing spatial audio
US12035123B2 (en) Impulse response generation system and method
US20180115852A1 (en) Signal processing apparatus, signal processing method, and storage medium
US11217220B1 (en) Controlling devices to mask sound in areas proximate to the devices
US11599329B2 (en) Capacitive environmental sensing for a unique portable speaker listening experience
US20210120361A1 (en) Audio adjusting method and audio adjusting device
EP4037340A1 (en) Processing of audio data

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)