WO2024084811A1 - Acoustic characteristic analysis system and acoustic characteristic analysis method - Google Patents

Info

Publication number
WO2024084811A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
acoustic
space
sound source
signal
Prior art date
Application number
PCT/JP2023/030763
Other languages
French (fr)
Japanese (ja)
Inventor
Genta Yamauchi (源太 山内)
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Priority date
Filing date
Publication date
Application filed by Hitachi, Ltd. (株式会社日立製作所)
Publication of WO2024084811A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/04 Sound-producing devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall

Definitions

  • the present invention relates to an acoustic characteristic analysis system and an acoustic characteristic analysis method.
  • Patent Document 1 describes "a technique for evaluating the reduction of input sound by the arrangement of sound absorbing means."
  • Patent Document 2 describes that "the noise level at a sound receiving position is predicted from the incident energy from each element to the sound receiving position, taking into account reflection, multiple diffraction, and sound transmission."
  • Patent Documents 1 and 2 evaluate the sound in an acoustic space based on indices such as sound pressure level, but do not describe any technology that allows a user to simulate the experience of the sound of a sound source heard in a specified acoustic space.
  • the present invention aims to provide an acoustic characteristic analysis system that allows a user to virtually experience the sound of a sound source heard in a specified acoustic space.
  • the acoustic characteristic analysis system includes a control unit that outputs, as an audible signal, a received sound signal computed from the characteristics of an acoustic space, which may be a real space or a virtual space, so that a user can simulate the experience of hearing the sound of a specific sound source in that acoustic space.
  • the present invention provides an acoustic characteristic analysis system that allows a user to simulate the experience of hearing the sound of a sound source in a specified acoustic space.
  • FIG. 1 is an explanatory diagram of an acoustic space that is a target of an acoustic characteristic analysis system according to a first embodiment.
  • FIG. 2 is a functional block diagram of the acoustic characteristic analysis system according to the first embodiment.
  • FIG. 3 is a front view of a smartphone equipped with a display as an example of a display device of the acoustic characteristic analysis system according to the first embodiment.
  • FIG. 4 is a flowchart showing a process executed by a control unit of the acoustic characteristic analysis system according to the first embodiment.
  • FIG. 5 is a functional block diagram of an acoustic characteristic analysis system according to a second embodiment.
  • FIG. 13 is an explanatory diagram showing a situation in which a specific character is using a vacuum cleaner, which is a sound source, in the acoustic characteristic analysis system according to the third embodiment.
  • FIG. 13 is an explanatory diagram of an acoustic space that is a target of an acoustic characteristic analysis system according to a sixth embodiment.
  • FIG. 23 is an explanatory diagram of an acoustic space that is a target of an acoustic characteristic analysis system according to a seventh embodiment.
  • FIG. 1 is an explanatory diagram of an acoustic space 1 that is a target of the acoustic characteristic analysis system according to the first embodiment.
  • the acoustic characteristic analysis system 100 (see FIG. 2) is a system for allowing a user to simulate an experience of a sound heard from a sound source 2 in a specific acoustic space 1.
  • the acoustic space 1 is a real space (a space that actually exists).
  • An example of such an acoustic space 1 is a room in a building such as a house or an office building, but the acoustic space 1 is not limited to this.
  • the acoustic space 1 may be a space in a moving body such as a car or a railroad vehicle, a ship or an airplane, or a space in an elevator car.
  • the acoustic space 1 may be a large space such as a sales floor of a department store or a concert hall, or may be a specific space that exists underground.
  • acoustic space 1 is made up of side surfaces 1a and 1b, as well as a ceiling surface 1c and a floor surface 1d.
  • certain objects 3 such as desks, chairs, and shelves are placed.
  • acoustic space 1 often has windows and doors installed.
  • the sound source 2 shown in FIG. 1 is a device whose operating sound (product sound) is the subject of a simulated experience by the user.
  • Examples of such sound sources 2 include home appliances such as vacuum cleaners, washing machines, and air conditioners, as well as microwave ovens and televisions, but other devices are also possible.
  • the user can use the acoustic characteristics analysis system 100 (see FIG. 2) to simulate an experience of how the operating sound of the device sounds in an acoustic space 1 (such as the living room of the user's home).
  • the sound source 2 shown in FIG. 1 is illustrated for convenience to indicate where a specific device that the user is considering purchasing would be placed in the acoustic space 1; because the device is merely the subject of a simulated listening experience, it does not actually need to be installed in the acoustic space 1.
  • an acoustic characteristic analysis system 100 (see FIG. 2) is used to allow the user to simulate an experience of the operating sound of the equipment (sound source 2 in FIG. 1) in a specified acoustic space 1. This can be used as a basis for the user's decision when considering purchasing the equipment.
  • the receiver 4 shown in FIG. 1 is shown as a sound receiving position when listening to the sound of the sound source 2 in the acoustic space 1.
  • the sound collection device 20 is a device that picks up sound in the acoustic space 1.
  • a microphone, for example, is used as such a sound collection device 20.
  • the x-axis, y-axis, and z-axis shown in FIG. 1 will be described later.
  • FIG. 2 is a functional block diagram of the acoustic characteristic analysis system 100.
  • the acoustic characteristic analysis system 100 includes an acoustic characteristic analysis device 10, a sound collection device 20, a terminal 30, a display device 40, and a sound generation device 50.
  • the acoustic characteristic analysis device 10 is a device that analyzes a sound of a sound source 2 (see FIG. 1) heard by a sound receiver 4 (see FIG. 1) in a predetermined acoustic space 1 (see FIG. 1).
  • Such an acoustic characteristic analysis device 10 may be configured as one computer, or may be configured as multiple computers (servers, etc.).
  • the acoustic characteristic analysis device 10 includes a memory unit 11 and a control unit 12.
  • the memory unit 11 includes non-volatile memory such as a ROM (Read Only Memory) or a HDD (Hard Disk Drive), and volatile memory such as a RAM (Random Access Memory) or a register.
  • the memory unit 11 stores predetermined data in addition to programs executed by the control unit 12.
  • the control unit 12 is, for example, a CPU (Central Processing Unit), which reads out a program stored in the memory unit 11 and executes a predetermined process. That is, the control unit 12 outputs, as an audible signal, a received sound signal when the user simulates the experience of the sound of a predetermined sound source 2 heard in the acoustic space 1 (see FIG. 1) based on the "characteristics" of the acoustic space 1.
  • the "characteristics" of the acoustic space 1 are the reverberation time or average sound absorption coefficient in the acoustic space 1, as will be described in detail later.
  • the control unit 12 includes an acoustic space information setting unit 12a, a sound source information setting unit 12b, a received sound information setting unit 12c, a measurement unit 12d, and an acoustic characteristic calculation unit 12e.
  • the control unit 12 also includes an acoustic characteristic output unit 12f, a received sound signal calculation unit 12g, and a received sound signal output unit 12h.
  • the acoustic space information setting unit 12a sets acoustic space information.
  • acoustic space information includes information indicating the dimensions of acoustic space 1 (see FIG. 1). If acoustic space 1 is considered to be a rectangular parallelepiped space and a specific corner is set as the origin O (0,0,0) as shown in FIG. 1, the dimensions of acoustic space 1 are set as follows. That is, if the horizontal dimension of acoustic space 1 is xa, the depth dimension is ya, and the height dimension is za, the dimensions of acoustic space 1 are expressed as (xa, ya, za). The values of the dimensions of acoustic space 1 are input by the user operating terminal 30.
  • the user's terminal 30 may be, for example, a computer, or a portable terminal such as a mobile phone, smartphone, tablet, or wearable device.
  • the terminal 30 may also assume one or more of the functions of the sound collection device 20, display device 40, and sound generation device 50 shown in FIG. 2.
  • the dimensions of the acoustic space 1 may also be measured using a distance measurement sensor such as LiDAR (Light Detection and Ranging).
  • the sound source information setting unit 12b sets sound source information.
  • the sound source information is information about a specific sound source 2 (see FIG. 1) that is expected to be installed in the acoustic space 1 (see FIG. 1).
  • Such sound source information includes the position of the sound source 2 in the acoustic space 1, the sound source signal, the sound pressure level of the sound source signal, as well as the distance between the measurement point of the sound pressure level and the sound source 2, the sound field at the time of measuring the sound pressure level, and the directivity coefficient of the sound source 2.
  • the position of sound source 2 included in the sound source information is the position of sound source 2 when it is assumed that the user places sound source 2 (a specific device) in acoustic space 1 (for example, the living room of the user's home).
  • the position of sound source 2 in acoustic space 1 is expressed as (xs, ys, zs) with origin O shown in Figure 1 as the reference point.
  • Note that 0 ≤ xs ≤ xa, 0 ≤ ys ≤ ya, and 0 ≤ zs ≤ za. Such a position of sound source 2 is input based on the user's operation of terminal 30.
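The dimension and position conventions above can be sketched as a small validity check. The function and variable names below are illustrative, not taken from the patent.

```python
# Hypothetical sketch: checking that a sound-source position (xs, ys, zs)
# lies inside an acoustic space of dimensions (xa, ya, za), with the corner
# origin O at (0, 0, 0) as in FIG. 1.
def is_inside_space(pos, dims):
    """Return True if every coordinate satisfies 0 <= p <= d."""
    return all(0 <= p <= d for p, d in zip(pos, dims))

room = (4.0, 5.0, 2.4)    # (xa, ya, za) in metres
source = (1.0, 2.0, 0.5)  # (xs, ys, zs)
assert is_inside_space(source, room)
```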
  • the sound source signal included in the sound source information is an audible signal from the sound source 2.
  • a sound source signal is provided in advance by the manufacturer of the device (i.e., the sound source 2) based on the measurement results of the sound from the sound source 2 placed in an anechoic chamber (not shown).
  • the sound source signal is stored in the memory unit 11 as a predetermined audio signal format (audio file format) that can be played by a sound generating device 50 such as a speaker or headphones.
  • audio signal formats include, but are not limited to, WAV, MP3, AIFF, AU, and RAW.
  • the sound pressure level of the sound source signal is the sound pressure level emitted from sound source 2 placed in an anechoic chamber or the like where the effects of reflections from the walls are minimized.
  • Such sound pressure levels are extracted from the sound source signal described above and are provided in advance by the manufacturer of the device (i.e., sound source 2), etc.
  • the “distance” between the sound pressure level measurement point and the sound source 2 is the distance between the measurement point and the sound source 2 when the sound pressure level of the sound source 2 is measured in an anechoic chamber or the like, and is provided in advance by the manufacturer of the device (i.e., the sound source 2). Considering the stability when measuring the sound pressure level, it is desirable that the distance between the sound pressure level measurement point and the sound source 2 is 1 m or more. Also, the number of sound pressure level measurement points needs to be one or more, but it is also possible to measure the sound pressure level at multiple measurement points and include the average value of the sound pressure levels in the sound source information.
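When the sound pressure level is measured at multiple points, the text above says the average value is included in the sound source information. The sketch below assumes the energy-based average that is conventional in acoustics; the patent does not specify which average is meant, so this is an assumption.

```python
import math

def average_spl(levels_db):
    """Energy-based average of sound pressure levels in dB.

    The text only says the 'average value' is used; this sketch assumes the
    energy (power) average, the usual convention in acoustics.
    """
    mean_power = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)
```

For equal levels the result equals the common value; for unequal levels it is pulled toward the loudest measurement point.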
  • the "sound field" at the time of measuring the sound pressure level included in the sound source information is information indicating whether the acoustic space 1 (see Figure 1) is a free field or a semi-free field.
  • a free field is a sound field in which the effects of boundaries can be ignored in an isotropic and homogeneous medium.
  • a semi-free field is a sound field in which the upper half of the space above a reflective boundary surface can be considered a free field.
  • Information specifying the sound field is input based on the user's operation of the terminal 30.
  • the "directivity coefficient" of the sound source 2 (see Figure 1) included in the sound source information is a value indicating the direction in which sound from the sound source 2 propagates, and is related to the position of the sound source 2 in the acoustic space 1 (see Figure 1). For example, if the sound source 2 is more than 1 m away from each wall surface, such as the side surfaces 1a and 1b, the ceiling surface 1c, and the floor surface 1d of the acoustic space 1, the directivity coefficient is set to '1'. Also, if the sound source 2 is close to one of the walls and is more than 1 m away from a wall surface other than the adjacent wall surface, the directivity coefficient is set to '2'. If the sound source 2 is close to two adjacent walls, the directivity coefficient is set to '4'. If the sound source 2 is close to three adjacent walls, the directivity coefficient is set to '8'.
  • the value of such a directivity coefficient may be input by the user operating the terminal 30, or may be calculated by the control unit 12 based on the position of the sound source 2 (see FIG. 1) in the acoustic space 1 (see FIG. 1).
  • the sound source information set by the sound source information setting unit 12b is stored in the memory unit 11 and is also output to the received sound signal calculation unit 12g.
  • the sound receiving information setting unit 12c sets the sound receiving information.
  • Such sound receiving information includes information indicating the position (sound receiving position) of the sound receiver 4 (see FIG. 1) in the acoustic space 1 (see FIG. 1).
  • the sound receiving position in the acoustic space 1 is expressed as (xr, yr, zr) with the origin O shown in FIG. 1 as the reference. Note that the sound receiving position exists inside the acoustic space 1 and is different from the position (xs, ys, zs) of the sound source 2, so xr ≤ xa, yr ≤ ya, zr ≤ za, and xr ≠ xs, yr ≠ ys, zr ≠ zs.
  • the sound receiving position is set at the center of the head or near the ears of the sound receiver 4 (see FIG. 1). This will make the sound heard when the user simulates an experience of the sound from the sound source 2 closer to the actual sound.
  • Such sound receiving information including the position of the sound receiver 4 is input based on the user's operation of the terminal 30.
  • the sound receiving information is stored in the memory unit 11 and is also output to the sound receiving signal calculation unit 12g.
  • the sound collection device 20 shown in FIG. 2 is a device that collects sound in the acoustic space 1 (see FIG. 1) as described above. Specifically, the sound collection device 20 collects a specific sound used to measure the reverberation time of the acoustic space 1. Note that "reverberation time" is a value that indicates how well a sound reverberates in the acoustic space 1.
  • the sound used to measure the reverberation time may be a highly impulsive sound such as a hand clap, the bang of a paper pistol, or the popping of a balloon, or an intermittent sound (or continuous sound) such as pink noise.
  • the sound used to measure the reverberation time is referred to as the "acoustic signal for measuring reverberation time".
  • the sound source that emits the acoustic signal for measuring reverberation time is referred to as the "sound source for measuring reverberation time".
  • if the acoustic signal for measuring reverberation time is a sound generated by human movement, such as clapping hands or popping a balloon, that human movement serves as the sound source for measuring reverberation time.
  • if an intermittent sound such as pink noise is used, a specific sound generating device such as a speaker serves as the sound source for measuring reverberation time.
  • In practice, background noise is mixed in, containing various sounds such as sounds generated by devices (not shown) in the acoustic space 1 and environmental sounds propagating from outdoors through walls and the like into the acoustic space 1. Even when such background noise is present, it does not interfere with the measurement of reverberation time as long as the acoustic signal for measuring reverberation time is approximately 20 dB or more louder than the background noise.
  • It is desirable that the sound source for measuring reverberation time is at least 1 m away from the sound collection device 20, and that the side surfaces 1a and 1b, the ceiling surface 1c, and the floor surface 1d of the acoustic space 1 (see FIG. 1), as well as the object 3, are also at least 1 m away from the sound collection device 20. This improves stability when measuring the acoustic signal for measuring reverberation time.
  • Data on the sound collected by the sound collection device 20 is output to the measurement unit 12d, which will be described next.
  • the measuring unit 12d measures the reverberation time of the acoustic space 1 based on a predetermined sound (i.e., the acoustic signal for measuring reverberation time) picked up in the acoustic space 1 by the sound collection device 20. More specifically, the measuring unit 12d converts the acoustic signal for measuring reverberation time into a predetermined decay curve of acoustic energy (not shown). If the acoustic signal for measuring reverberation time is a highly impulsive sound (e.g., clapping), the decay curve of the acoustic energy is generated by backward integration of the squared signal (the reverse-squared integration, or Schroeder, method).
  • if the acoustic signal for measuring reverberation time is an intermittent sound such as pink noise, the measuring unit 12d sequentially records the acoustic energy after the sound is stopped, thereby generating the decay curve of the acoustic energy. If the decay rate of acoustic energy per second is D [dB/sec], the reverberation time T [sec], defined as the time required for a 60 dB decay, is expressed by the following formula (1): T = 60 / D … (1)
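The decay-curve generation and formula (1) can be sketched as follows. The backward (Schroeder) integration of the squared signal and the fitting range of −5 dB to −25 dB are assumptions made for illustration; the patent text does not fix an evaluation range.

```python
import numpy as np

def decay_curve_db(impulse, eps=1e-12):
    """Backward (Schroeder) integration of a squared impulse response,
    returned as a decay curve in dB relative to the initial energy."""
    energy = np.cumsum(impulse[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0] + eps)

def reverberation_time(curve_db, fs, lo=-5.0, hi=-25.0):
    """Fit a line to the decay curve between `lo` and `hi` dB, then apply
    formula (1), T = 60 / D, with D the decay rate in dB per second."""
    idx = np.where((curve_db <= lo) & (curve_db >= hi))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, curve_db[idx], 1)  # slope = -D [dB/sec]
    return 60.0 / -slope
```

A synthetic exponential decay with a known decay rate of 120 dB/s should yield T close to 0.5 s.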
  • when the reverberation time is measured multiple times, the measurement unit 12d sets the average value of the multiple reverberation times as the reverberation time of the acoustic space 1.
  • the reverberation time value measured (calculated) by the measurement unit 12d is stored in the memory unit 11 and is also output to the acoustic characteristic calculation unit 12e.
  • the acoustic characteristic calculation unit 12e calculates the average sound absorption coefficient α of the acoustic space 1 based on the dimensions and reverberation time of the acoustic space 1 (see FIG. 1).
  • "average sound absorption coefficient” is a value that indicates the degree of sound absorption in the acoustic space 1.
  • the value of the average sound absorption coefficient α is in the range of zero or more and 1 or less. For example, in an acoustic space 1 where no sound absorption occurs at all, the value of the average sound absorption coefficient α is zero. Also, when all sound is absorbed in the acoustic space 1, the value of the average sound absorption coefficient α is 1.
  • Such an average sound absorption coefficient α is expressed by the following formula (2), based on the reverberation time T in the acoustic space 1 as well as the volume V, surface area S, and sound speed c of the acoustic space 1.
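Formula (2) itself appears only in the patent drawings and is not reproduced in this text. Assuming it takes the standard Sabine form, α = 24·ln(10)·V / (c·S·T), which uses exactly the quantities listed above, the calculation can be sketched as:

```python
import math

def average_absorption(T, V, S, c=340.0):
    """Average sound absorption coefficient from reverberation time.

    Assumes the Sabine relation T = 24*ln(10)*V / (c*S*alpha); the patent's
    formula (2) is not reproduced in this text, so this is a sketch of the
    standard form using the same quantities (T, V, S, c).
    """
    alpha = 24 * math.log(10) * V / (c * S * T)
    return min(alpha, 1.0)  # alpha lies in [0, 1]
```

For a 4 m x 5 m x 2.4 m room (V = 48 m³, S = 83.2 m²) with T = 0.5 s, this gives α of roughly 0.19, a plausible value for a lightly furnished living room.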
  • the average sound absorption coefficient α calculated by the acoustic characteristic calculation unit 12e is stored in the memory unit 11 and is also output to the acoustic characteristic output unit 12f and the received sound signal calculation unit 12g.
  • the acoustic characteristic output unit 12f causes the display device 40 to display the average sound absorption coefficient α calculated by the acoustic characteristic calculation unit 12e.
  • FIG. 3 is a front view of a smartphone 41 including a display 41a as an example of a display device.
  • the smartphone 41 shown in Fig. 3 is a mobile terminal of a user, and includes a display 41a as the display device 40 (see Fig. 2).
  • the average sound absorption coefficient for each sound frequency is displayed on the display 41a in a predetermined manner, so that the user can visually grasp the average sound absorption coefficient of the acoustic space 1 (see Fig. 1).
  • the average sound absorption coefficient for each sound frequency may be displayed on the display 41a in a predetermined table format or graph.
  • the reverberation time for each sound frequency may be displayed on the display 41a as information indicating the characteristics of the acoustic space 1 (see FIG. 1).
  • information such as whether the average sound absorption coefficient α is "high", "medium", or "low" may be displayed on the display 41a.
  • the smartphone 41 may also have the input function of the terminal 30 (see FIG. 2).
  • the received sound signal calculation unit 12g shown in FIG. 2 calculates a received sound signal based on the average sound absorption coefficient α calculated by the acoustic characteristic calculation unit 12e, as well as the acoustic space information, sound source information, and sound receiving information described above.
  • the sound reception signal is a signal that indicates the sound heard at the position of the sound receiver 4 (see FIG. 1) when a sound source 2 (see FIG. 1) is placed in the acoustic space 1 (see FIG. 1).
  • Such a received sound signal L1 is expressed by the following formula (3) using a sound source signal L0 emitted from the sound source 2.
  • Y included in formula (3) is the area of a sphere (in the case of a free sound field) or a hemisphere (in the case of a semi-free sound field) whose radius is the distance between the measurement point of the sound pressure level and the sound source 2, and is calculated based on the sound source information.
  • Y0 included in formula (3) is a predetermined reference area
  • Q is a directivity coefficient.
  • r included in formula (3) is the distance between the position of the sound source 2 and the position of the sound receiver 4.
  • the distance r between the position (xs, ys, zs) of the sound source 2 and the position (xr, yr, zr) of the sound receiver 4 is expressed by the following formula (4): r = √((xs − xr)² + (ys − yr)² + (zs − zr)²) … (4)
  • the received sound signal L1 is a signal whose sound pressure level has changed by a difference value ΔL with respect to the sound source signal L0 emitted from the sound source 2.
  • This difference value ΔL is a value determined by the average sound absorption coefficient α of the acoustic space 1 as well as the distance r between the sound source 2 and the sound receiver 4.
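Formula (3) itself appears only in the patent drawings. The quantities it is said to use (Y, Y0, Q, r, and α) match the standard diffuse-field room equation, so the sketch below combines them in that standard form; this is an assumption about the formula's shape, not a reproduction of the patent's exact expression.

```python
import math

def source_receiver_distance(source, receiver):
    """Formula (4): Euclidean distance between (xs, ys, zs) and (xr, yr, zr)."""
    return math.dist(source, receiver)

def received_level(L0, d, r, Q, S, alpha, semi_free=False, Y0=1.0):
    """Sketch of the received sound level L1 at distance r from the source.

    Assumed standard room-acoustics form (not necessarily the patent's
    formula (3)):
      Lw = L0 + 10*log10(Y / Y0), Y = area of the measurement sphere,
      L1 = Lw + 10*log10(Q / (4*pi*r**2) + 4 / R), R = S*alpha / (1 - alpha).
    `d` is the measurement distance from the sound source information.
    """
    Y = (2 if semi_free else 4) * math.pi * d ** 2  # hemisphere vs sphere
    Lw = L0 + 10 * math.log10(Y / Y0)               # sound power level
    R = S * alpha / (1 - alpha)                     # room constant
    return Lw + 10 * math.log10(Q / (4 * math.pi * r ** 2) + 4 / R)
```

Moving the receiver farther from the source lowers the level until the reverberant term 4/R dominates, which matches the role ΔL plays above.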
  • an equalizer (not shown) that changes the frequency characteristics of a signal is used for the process of changing the sound source signal L0 by the predetermined difference value ΔL.
  • the received sound signal calculation unit 12g may include such an equalizer.
  • the difference value ΔL between the received sound signal L1 and the sound source signal L0 is stored in the memory unit 11.
  • the received sound signal L1 calculated by the received sound signal calculation unit 12g is output to the received sound signal output unit 12h.
  • the received sound signal output unit 12h outputs the received sound signal L1 calculated by the received sound signal calculation unit 12g as a predetermined audible signal.
  • the audible signal is a signal that can be heard by humans as sound.
  • the audible signal is stored in the memory unit 11 as an acoustic signal format (audio file format) that can be reproduced by the sound generation device 50.
  • the audible signal is output to the sound generation device 50 such as a speaker, whereby it is converted into air vibrations.
  • the sound generation device 50 is, for example, a speaker, earphones, or headphones, and is connected to the received sound signal output unit 12h by wire or wirelessly.
  • a predetermined audible sound is generated from the sound generation device 50 based on the audible signal output from the received sound signal output unit 12h.
  • the received sound signal L1 is output from the received sound signal output unit 12h as a predetermined audible signal, so that the user can have a simulated experience of the sound of the sound source 2 (see FIG. 1) in the acoustic space 1 (see FIG. 1).
  • FIG. 4 is a flowchart showing the process executed by the control unit of the acoustic characteristic analysis system (also see FIG. 1 and FIG. 2 as appropriate). The order of the processes in steps S101 to S104 in FIG. 4 can be changed as appropriate.
  • In step S101, the control unit 12 sets acoustic space information using the acoustic space information setting unit 12a.
  • the acoustic space information is information including the dimensions of the acoustic space 1, and is set based on an input operation of the terminal 30.
  • In step S102, the control unit 12 sets sound source information using the sound source information setting unit 12b. That is, the sound source information setting unit 12b sets sound source information including the position of the sound source 2, the sound pressure level of the sound source 2, and a sound source signal based on an input operation of the terminal 30.
  • In step S103, the control unit 12 sets the sound receiving information using the sound receiving information setting unit 12c. That is, the sound receiving information setting unit 12c sets the sound receiving information including the position of the sound receiver 4 based on an input operation of the terminal 30.
  • In step S104, the control unit 12 acquires an acoustic signal for measuring reverberation time using the sound collection device 20.
  • the acoustic signal for measuring reverberation time may be acquired in a state where a user has started a predetermined application using the terminal 30.
  • the acoustic signal for measuring reverberation time may be a highly impulsive sound such as the sound of a user clapping his/her hands, or an intermittent sound such as pink noise.
  • In step S105, the control unit 12 causes the measurement unit 12d to measure the reverberation time of the acoustic space 1. That is, the control unit 12 measures the reverberation time of the acoustic space 1 based on the acoustic signal for measuring reverberation time.
  • In step S106, the control unit 12 causes the acoustic characteristic calculation unit 12e to calculate the average sound absorption coefficient of the acoustic space 1. That is, the control unit 12 calculates the average sound absorption coefficient of the acoustic space 1 based on the dimensions and reverberation time of the acoustic space 1.
  • In step S107, the control unit 12 causes the acoustic characteristic output unit 12f to display the acoustic characteristic data on the display device 40. That is, the control unit 12 causes at least one of the reverberation time and the average sound absorption coefficient of the acoustic space 1 to be displayed on the display device 40. This allows the user to grasp the reverberation time and the average sound absorption coefficient of the acoustic space 1.
  • In step S108, the control unit 12 calculates the received sound signal using the received sound signal calculation unit 12g. That is, the control unit 12 calculates the received sound signal based on the acoustic space information, the sound source information, the sound receiving information, and the average sound absorption coefficient.
  • In step S109, the control unit 12 outputs the received sound signal to the sound generation device 50 via the received sound signal output unit 12h. This allows the user to virtually experience the sound that would be heard if the sound source 2 were placed in the acoustic space 1.
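Steps S101 through S109 can be sketched end to end as follows. The function name, argument shapes, and fixed constants (Q = 1, free-field measurement at 1 m, c = 340 m/s, the Sabine relation) are illustrative assumptions, not the patent's API.

```python
import math

def analyze(space_dims, source_pos, receiver_pos, source_level_db, reverb_time):
    """Hypothetical sketch of steps S101-S109 in one pass."""
    xa, ya, za = space_dims
    V = xa * ya * za                       # volume of the acoustic space
    S = 2 * (xa * ya + xa * za + ya * za)  # total surface area
    # S106: average sound absorption coefficient (assumed Sabine relation).
    alpha = min(24 * math.log(10) * V / (340.0 * S * reverb_time), 1.0)
    # Formula (4): source-receiver distance.
    r = math.dist(source_pos, receiver_pos)
    # S108: received level via the standard room equation (an assumption),
    # with Q = 1 and the source level measured at 1 m in a free field.
    Lw = source_level_db + 10 * math.log10(4 * math.pi)
    R = S * alpha / (1 - alpha)            # room constant
    L1 = Lw + 10 * math.log10(1 / (4 * math.pi * r ** 2) + 4 / R)
    return {"alpha": alpha, "distance": r, "received_level_db": L1}
```

The returned α and reverberation time correspond to the values displayed in step S107, and the received level to the signal output in step S109.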
  • a user can hear the operation sound of a specific device when it is used in a usage environment (i.e., acoustic space 1) without going to a home appliance retailer or the like.
  • the received sound signal reflects the sound source information and the sound receiving information in addition to the dimensions of the acoustic space 1. Therefore, the user can experience a simulated operation sound close to the operation sound heard when the device is actually used in the acoustic space 1.
  • the system can also be used in the development stage, where a designer can simulate the operation sound of a device in a specific usage environment and then reflect the result in the design.
  • the second embodiment differs from the first embodiment in that there is no particular need to provide a sound collection device 20 (see FIG. 2) or a measurement unit 12d (see FIG. 2).
  • the second embodiment also differs from the first embodiment in that the value of the average sound absorption coefficient is input from the terminal 30 (see FIG. 5) to the acoustic characteristic calculation unit 12e (see FIG. 5).
  • the rest of the second embodiment is the same as the first embodiment. Therefore, only the parts that differ from the first embodiment will be described, and the description of the overlapping parts will be omitted.
  • FIG. 5 is a functional block diagram of an acoustic characteristic analysis system 100A according to the second embodiment.
  • the acoustic characteristic analysis system 100A includes an acoustic characteristic analysis device 10A, a terminal 30, a display device 40, and a sound generation device 50.
  • the control unit 12A of the acoustic characteristic analysis device 10A includes an acoustic space information setting unit 12Aa, a sound source information setting unit 12b, and a received sound information setting unit 12c, as well as an acoustic characteristic calculation unit 12Ae, an acoustic characteristic output unit 12f, a received sound signal calculation unit 12g, and a received sound signal output unit 12h.
  • the acoustic space information setting unit 12Aa has a function of setting the average sound absorption coefficient of the acoustic space 1 (see FIG. 1) in addition to the dimensions of the acoustic space 1. It is assumed that the average sound absorption coefficient of the acoustic space 1 is known based on past measurements, etc. Such an average sound absorption coefficient value may be input by operating the terminal 30, or may be a value of the average sound absorption coefficient already stored in the memory unit 11.
• The average sound absorption coefficient set by the acoustic space information setting unit 12Aa is output to the received sound signal calculation unit 12g via the acoustic characteristic calculation unit 12Ae. Since the average sound absorption coefficient of the acoustic space 1 is already given, the acoustic characteristic calculation unit 12Ae does not need to calculate it.
• The remaining components of the control unit 12A shown in FIG. 5 are the same as those in the first embodiment, and therefore will not be described.
• As a variation, the acoustic space information setting unit 12Aa may set the reverberation time of the acoustic space 1 and output the value of this reverberation time to the acoustic characteristic calculation unit 12Ae.
• In this case, the acoustic characteristic calculation unit 12Ae calculates the average sound absorption coefficient of the acoustic space 1 based on the reverberation time.
• The average sound absorption coefficient calculated by the acoustic characteristic calculation unit 12Ae is output to the received sound signal calculation unit 12g.
• The received sound signal calculation unit 12g (i.e., the control unit 12A) uses this value when calculating the received sound signal. With this configuration, the same effect as in the second embodiment can be achieved.
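A minimal sketch of this configuration: the average sound absorption coefficient is either supplied directly (second embodiment) or derived from a given reverberation time (the variation above). Sabine's relation is assumed for the derivation; the patent's own formula (2) is not reproduced in this excerpt, and the function name is hypothetical.

```python
def resolve_absorption(volume_m3, surface_m2, alpha=None, rt60_s=None):
    # Return the average sound absorption coefficient: either the value
    # given directly (second embodiment), or one derived from a given
    # reverberation time via Sabine's relation (the variation above).
    if alpha is not None:
        return alpha
    if rt60_s is not None:
        return 0.161 * volume_m3 / (surface_m2 * rt60_s)
    raise ValueError("either alpha or rt60_s must be given")
```

Either way, the downstream received sound signal calculation is unchanged, which is why the two configurations yield the same effect.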
  • the third embodiment differs from the second embodiment in that an acoustic space 1B (see FIG. 6) is constructed as a virtual space, and when a predetermined sound source (e.g., the vacuum cleaner 60 in FIG. 6) is placed in the acoustic space 1B, the user experiences a simulated sound heard by a character 5 (see FIG. 6) in the virtual space.
• Other aspects (such as the configuration of the acoustic characteristic analysis system 100A; see FIG. 5) are the same as those of the second embodiment.
  • FIG. 6 is an explanatory diagram showing a situation in which a predetermined character 5 is using a vacuum cleaner 60, which is a sound source, in the acoustic characteristic analysis system according to the third embodiment.
  • the acoustic space 1B shown in Fig. 6 is a three-dimensional virtual space (e.g., metaverse) constructed by a computer such as a server (not shown). Such a virtual space may be used for virtual live shows, online games, and EC (Electronic Commerce) which is a virtual shop.
  • the image of the acoustic space 1B may be displayed on a mobile terminal such as a smartphone or tablet, or may be displayed on a head-mounted display, smart glasses, or smart contact lenses.
  • FIG. 6 shows an example in which a specific character 5 is using a vacuum cleaner 60 in a living room, which is an acoustic space 1B.
  • the character 5 is, for example, an avatar that represents a user, but may also be another specific character.
  • control unit 12A causes the display device 40 (see FIG. 5) to display an image corresponding to the sound source (vacuum cleaner 60 in the example of FIG. 6), and also causes a predetermined character 5 to be displayed on the display device 40.
  • the control unit 12A also sets the position of the sound receiver hearing the sound of the sound source (vacuum cleaner 60 in the example of FIG. 6) in the acoustic space 1B as the position of the character 5. This allows the user to hear the sound of the sound source heard at the position of the character 5 through the sound generation device 50 (see FIG. 5).
  • the vacuum cleaner 60 shown in FIG. 6 is a sound source selected by the user through operation of the terminal 30 (see FIG. 5).
  • the type of device corresponding to the sound source is not limited to the vacuum cleaner 60, but may be other home appliances such as an air conditioner or a washing machine, or may be a specified device other than a home appliance.
  • the character 5 can move in the acoustic space 1B, and the type and position of the sound source (vacuum cleaner 60 in the example of FIG. 6) can be changed.
• The processing performed by the control unit 12A (see FIG. 5) is the same as in the second embodiment. That is, the control unit 12A outputs, as an audible signal, a received sound signal for when the user simulates the experience of the sound of a specific sound source (the vacuum cleaner 60 in the example of FIG. 6) heard in the acoustic space, based on the characteristics of the acoustic space 1B, which is a virtual space. Specifically, the control unit 12A outputs a predetermined received sound signal based on the average sound absorption coefficient or reverberation time value indicating the characteristics of the acoustic space 1B.
  • the average sound absorption coefficient or reverberation time of acoustic space 1B may be set based on the operation of terminal 30 (see FIG. 5), or a value previously stored in memory unit 11 (see FIG. 5) may be used. For example, when acoustic space 1B having the same dimensions as a specific acoustic space that actually exists is set in a virtual space, and the average sound absorption coefficient or reverberation time of the real acoustic space is known, these values may be used as the average sound absorption coefficient or reverberation time of acoustic space 1B in the virtual space.
• The control unit 12A changes the received sound signal in response to a change in the position of the character 5 in the acoustic space 1B or a change in the position of the sound source (the vacuum cleaner 60 in the example of FIG. 6). This makes the operating sound of the vacuum cleaner 60 that the user hears through the sound generation device 50 (see FIG. 5) more realistic.
• <Effects> When the acoustic space 1B is set as a virtual space, the user can hear the sound coming from the sound source at the position of a predetermined character 5. This can enhance the sense of reality when the user experiences the sound of the sound source in the virtual space in a simulated manner.
  • the fourth embodiment differs from the first embodiment in that a plurality of conditions, such as the dimensions of the acoustic space 1 (see FIG. 1), the type and position of the sound source 2 (see FIG. 1), and the position of the sound receiver 4 (see FIG. 1), can be set and compared.
• Other aspects (such as the configuration of the acoustic characteristic analysis system 100; see FIG. 2) are the same as those of the first embodiment. Therefore, only the parts that differ from the first embodiment will be described, and the explanation of the overlapping parts will be omitted.
  • the acoustic space information setting unit 12a shown in Fig. 2 sets acoustic space information including dimensions of the acoustic space 1.
  • Such acoustic space information may be information of a specific acoustic space 1, or may be information of multiple acoustic spaces 1 with different dimensions or acoustic characteristics (average sound absorption coefficient, etc.).
• For example, the operating sound of a vacuum cleaner, which is the sound source 2, may be heard differently in acoustic spaces 1 with different dimensions or acoustic characteristics.
• Therefore, when simulating the experience in multiple acoustic spaces 1, the user inputs the acoustic space information of each of the multiple acoustic spaces 1.
  • the sound source information setting unit 12b shown in FIG. 2 sets sound source information related to the sound source 2.
  • sound source information may be information related to one specific type of sound source 2, or information related to multiple types of sound sources 2.
  • multiple pieces of sound source information may be set for a specific sound source 2 when its position is different.
• For example, the sound source 2, which is a vacuum cleaner, may sound different at a specific sound receiving position when the vacuum cleaner is operated in the center of the living room than when it is operated near a corner of the living room. Therefore, when simulating the experience of the sound of a sound source 2 of a different type or at a different position, the user inputs sound source information related to that sound source 2.
  • the sound receiving information setting unit 12c shown in FIG. 2 sets sound receiving information including the position of the sound receiver 4.
  • the position information of the sound receiver 4 included in the sound receiving information may be a single predetermined sound receiving position in the acoustic space 1, or may be multiple sound receiving positions.
  • the way the sound of the sound source 2 is heard may differ between when the sound receiver 4 is in the center of the living room and when the sound receiver 4 is near a corner of the living room. Therefore, when simulating the experience of hearing at multiple sound receiving positions, the user inputs sound receiving information including each sound receiving position.
  • the acoustic characteristic calculation unit 12e shown in FIG. 2 calculates the average sound absorption coefficient of the acoustic space 1 based on the reverberation time of the acoustic space 1.
• In the fourth embodiment, multiple reverberation times are measured, corresponding to the reverberation time measurement acoustic signals collected in each of the acoustic spaces 1.
• Multiple average sound absorption coefficients are then calculated, one corresponding to each reverberation time.
  • the multiple average sound absorption coefficients calculated by the acoustic characteristic calculation unit 12e are stored in the memory unit 11 in correspondence with the multiple acoustic spaces 1, and are also output to the acoustic characteristic output unit 12f and the sound reception signal calculation unit 12g.
  • the received sound signal calculation unit 12g shown in FIG. 2 calculates the received sound signal based on the average sound absorption coefficient, etc.
  • the received sound signal calculation unit 12g calculates the distance r between the sound source 2 and the sound receiver 4 in correspondence with the combination of the position of the sound source 2 and the position of the sound receiver 4. As in the first embodiment, the value of this distance r is used to calculate the received sound signal.
  • the received sound signal output unit 12h shown in FIG. 2 outputs the received sound signal calculated by the received sound signal calculation unit 12g to the sound generation device 50 as an audible signal.
• When the acoustic space information, the sound source information, or the sound receiving information is changed, the control unit 12 changes the received sound signal based on the changed information. This allows the user to compare how the sound is heard when using a vacuum cleaner in the living room and when using it in the bedroom, for example. The user can also compare how the sound is heard when the position of the vacuum cleaner or the position of the sound receiver 4 is changed.
• The user can thus perceive the difference in how the sound is heard when the positions of the sound source 2 and the sound receiver 4 are changed, in addition to differences in the dimensions and acoustic characteristics of the acoustic space 1, and can use this information as a basis for deciding whether to purchase the device that is the sound source 2.
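The comparison of multiple conditions might be organized as below. This is a sketch under stated assumptions: the diffuse-field level formula and the example positions and values are not taken from the patent; only the use of the source-receiver distance r and the average sound absorption coefficient follows the text.

```python
import math

def distance(p, q):
    # Euclidean distance r between a sound source and a receiver position.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def received_level(lw_db, q_dir, src, rcv, surface_m2, alpha_avg):
    # Direct field Q/(4*pi*r^2) plus reverberant field 4/R,
    # with room constant R = S * alpha / (1 - alpha).
    r = distance(src, rcv)
    room_const = surface_m2 * alpha_avg / (1.0 - alpha_avg)
    return lw_db + 10.0 * math.log10(q_dir / (4.0 * math.pi * r * r) + 4.0 / room_const)

# Same vacuum cleaner, two source/receiver layouts in the same room
cases = {
    "source and receiver close": received_level(70, 2, (2.0, 2.5, 0.0), (2.0, 2.5, 1.5), 85, 0.19),
    "source and receiver far":   received_level(70, 2, (0.5, 0.5, 0.0), (3.5, 4.5, 1.5), 85, 0.19),
}
```

Listening to the signals scaled by these levels side by side is what lets the user compare the conditions before purchase.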
  • the fifth embodiment differs from the first embodiment in that a received sound signal is generated based on the head-related transfer function of the sound receiver 4 (see FIG. 1).
  • Other aspects (such as the configuration of the acoustic characteristic analysis system 100: see FIG. 2) are the same as those of the first embodiment. Therefore, only the parts that are different from the first embodiment will be described, and the description of the overlapping parts will be omitted.
• The sound receiving information setting unit 12c shown in Fig. 2 sets the position of the sound receiver 4 (see Fig. 1) as well as the head-related transfer function of the sound receiver 4.
• The head-related transfer function is a predetermined function that indicates the characteristics of the sound reflected or diffracted by the head, body, and pinna of the sound receiver 4 (see Fig. 1) until it reaches the ear (the entrance of the ear canal) of the sound receiver 4.
• The head-related transfer function often has a roughly flat characteristic at low frequencies below 1.6 kHz. In this case, the incident sound wave that reaches the ear of the sound receiver 4 rarely becomes stronger or weaker than the actual sound.
• At high frequencies, on the other hand, the characteristics of the head-related transfer function are often not flat due to the effects of sound reflection and diffraction at the head, body, and pinna of the sound receiver 4.
• In this case, the incident sound waves that reach the ears of the sound receiver 4 are often stronger or weaker than the actual sound. Therefore, when a sound with a high sound pressure level at high frequencies is set as the sound source 2 (see Figure 1), the sound heard by the user is more likely to be affected by the head-related transfer function.
  • the user can simulate an experience of sound that reflects the effects of reflection and diffraction at the head, body, and pinna of the receiver 4.
  • a predetermined value indicating a head-related transfer function may be input based on an operation of the terminal 30.
  • a standard predetermined head-related transfer function may be used as appropriate.
  • the received sound information including the head-related transfer function is output from the received sound information setting unit 12c to the received sound signal calculation unit 12g.
• The control unit 12 calculates the received sound signal L 1 based on the acoustic space information, sound source information, sound reception information, average sound absorption coefficient, and the head-related transfer function of the sound receiver 4.
• It is preferable to use a sound generation device 50, such as headphones or earphones, that emits sound near the user's ear (specifically, the entrance of the ear canal). This eliminates the need to consider the head-related transfer function from the sound generation device 50 to the user's ear.
• <Effects> By including the head-related transfer function in the sound reception information, it is possible to enhance the sense of reality when the user virtually experiences the sound of the sound source 2 in the acoustic space 1.
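In the time domain, applying a head-related transfer function amounts to convolving the received sound signal with the corresponding head-related impulse response (HRIR). The sketch below is a pure-Python illustration with a made-up 3-tap HRIR; a real implementation would use measured or standard HRIR data, as the text suggests.

```python
def apply_hrir(signal, hrir):
    # Time-domain convolution of the received sound signal with a
    # head-related impulse response (the HRTF's time-domain counterpart).
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrir):
            out[i + j] += s * h
    return out

# Hypothetical 3-tap HRIR (a placeholder, not a measured transfer function)
at_ear = apply_hrir([0.0, 1.0, 0.0, -1.0], [0.9, 0.3, 0.1])
```

Since the HRIR mainly shapes the spectrum above about 1.6 kHz, the effect is most audible for sound sources with strong high-frequency content, consistent with the text above.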
• The sixth embodiment differs from the first embodiment in that a sound source 2 (see FIG. 7) is provided in one of two spaces adjacent to each other via a partition wall 6 (see FIG. 7), and a sound receiver 4 (see FIG. 7) is present in the other space.
  • the configuration of the acoustic characteristic analysis system 100 (see FIG. 2) is the same as that of the first embodiment. Therefore, only the parts different from the first embodiment will be described, and the explanation of the overlapping parts will be omitted.
  • FIG. 7 is an explanatory diagram of an acoustic space that is a target of the acoustic characteristic analysis system according to the sixth embodiment.
  • the sound source space 1C shown in FIG. 7 is an acoustic space in which a sound source 2 is provided.
  • the sound receiving space 1D is an acoustic space in which a sound receiver 4 exists. As shown in FIG. 7, the sound source space 1C and the sound receiving space 1D are adjacent to each other via a partition wall 6.
• In the sound source space 1C, a certain object 31 such as a desk, a chair, or a shelf is placed, and a window and a door are provided, although not shown. The same is true for the sound receiving space 1D.
  • the partition wall 6 is a wall that separates the sound source space 1C and the sound receiving space 1D. As shown in FIG. 7, the partition wall 6 forms a part of the side surface of the sound source space 1C and also forms a part of the side surface of the sound receiving space 1D.
  • the processing of the acoustic characteristic analysis system 100 will be explained using FIG. 2 (also see FIG. 7 as appropriate).
  • the acoustic space information setting unit 12a shown in FIG. 2 sets acoustic space information including the dimensions of the sound source space 1C and the sound receiving space 1D and the sound transmission loss of the partition wall 6 based on the operation of the terminal 30.
  • the sound transmission loss is a value indicating the level of the sound insulation performance of the partition wall 6. The higher the sound insulation performance of the partition wall 6, the larger the value of the sound transmission loss.
  • the sound source information setting unit 12b shown in FIG. 2 sets predetermined sound source information.
  • Such sound source information includes information to set one of two acoustic spaces adjacent to each other via a partition wall 6 as sound source space 1C, as well as the sound source signal of sound source 2, the sound pressure level of the sound source signal, the distance between the sound pressure level measurement point and sound source 2, and the directivity coefficient of sound source 2.
  • Such sound receiving information includes information to set one of two acoustic spaces adjacent to each other via the partition wall 6 as the sound receiving space 1D.
• The sound collecting device 21 shown in Fig. 7 collects an acoustic signal for measuring reverberation time (e.g., the sound of a user clapping hands or pink noise) in the sound source space 1C.
• The sound collecting device 22 shown in Fig. 7 collects an acoustic signal for measuring reverberation time in the sound receiving space 1D.
• The measurement unit 12d shown in FIG. 2 measures the reverberation times of the sound source space 1C and the sound receiving space 1D based on the reverberation time measurement acoustic signals collected in each of the sound source space 1C and the sound receiving space 1D. The acoustic characteristic calculation unit 12e shown in FIG. 2 then calculates the average sound absorption coefficients of the sound source space 1C and the sound receiving space 1D based on these reverberation times. The average sound absorption coefficients are calculated using the formula (2) described in the first embodiment.
• The received sound signal calculation unit 12g shown in FIG. 2 calculates the received sound signal of the average sound heard in the sound receiving space 1D when the sound emitted from the sound source 2 passes through the partition wall 6, based on the average sound absorption coefficients of the sound source space 1C and the sound receiving space 1D, as well as the acoustic space information, the sound source information, and the sound receiving information described above.
  • the average sound pressure level L B of the sound receiving space 1D is expressed by the following formula (6).
  • L A included in formula (6) is the average sound pressure level of the sound source space 1C.
  • TL included in formula (6) is the sound transmission loss of the partition wall 6, and S is the surface area of the partition wall 6.
  • S B included in formula (6) is the surface area of the sound receiving space 1D
• ᾱ B is the average sound absorption coefficient of the sound receiving space 1D.
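Formula (6) itself is not reproduced in this excerpt, but from the terms defined above it presumably has the standard form L_B = L_A − TL + 10·log10(S / (S_B·ᾱ_B)). A sketch under that assumption (the function name and example values are hypothetical):

```python
import math

def receiving_room_level(la_db, tl_db, s_partition, s_receiving, alpha_b):
    # Assumed form of formula (6):
    #   L_B = L_A - TL + 10 * log10(S / (S_B * alpha_B))
    # where S_B * alpha_B is the absorption area of the receiving space.
    return la_db - tl_db + 10.0 * math.log10(s_partition / (s_receiving * alpha_b))

lb = receiving_room_level(la_db=80.0, tl_db=30.0, s_partition=10.0,
                          s_receiving=100.0, alpha_b=0.2)
```

Under this form, a larger TL or a more absorptive receiving room (larger ᾱ_B) both lower the level heard through the partition, which matches the effects described for this embodiment.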
  • the average sound pressure level L A of the sound source space 1C when the acoustic power level of the sound source 2 is L W is expressed by the following formula (7).
• L 0 included in formula (7) is the sound pressure level of the sound source signal emitted from the sound source 2
  • Y is the area of a sphere (in the case of a free sound field) or a hemisphere (in the case of a semi-free sound field) whose radius is the distance between the measurement point of the sound pressure level and the sound source 2.
  • Y 0 included in formula (7) is a predetermined reference area
  • a A is the transmission and absorption area of the sound source space 1C.
  • the average sound pressure level L B of the sound receiving space 1D is expressed by the following formula (8):
  • S A is the surface area of the sound source space 1C
• ᾱ A is the average sound absorption coefficient of the sound source space 1C.
  • the sound transmission loss of the partition 6 (also called the total sound transmission loss) is expressed by the following formula (10).
  • Such total sound transmission loss is particularly effective when the partition 6 contains parts made of different materials, for example, when part of the partition 6 is made of glass and other parts are made of resin.
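Formula (10) is likewise not reproduced in this excerpt; the usual area-weighted composite transmission loss for a partition made of parts with different sound transmittances τ_i is assumed below (the function name and the glass/resin example values are hypothetical):

```python
import math

def total_transmission_loss(parts):
    # parts: list of (area_m2, tl_db) tuples. Assumed form of formula (10):
    #   tau_i = 10 ** (-TL_i / 10)
    #   TL_total = -10 * log10(sum(S_i * tau_i) / sum(S_i))
    s_total = sum(s for s, _ in parts)
    tau_avg = sum(s * 10.0 ** (-tl / 10.0) for s, tl in parts) / s_total
    return -10.0 * math.log10(tau_avg)

# e.g. a glass pane (2 m^2, TL 25 dB) set in a resin wall (8 m^2, TL 35 dB)
tl = total_transmission_loss([(2.0, 25.0), (8.0, 35.0)])
```

Note how the weaker glass part dominates: the composite TL lies much closer to 25 dB than an area-weighted average of the decibel values would suggest, which is why treating mixed-material partitions this way matters.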
• The difference value ΔL between the sound pressure level L 0 of the sound source signal and the average sound pressure level L B of the sound receiving space 1D is expressed by the following formula (12).
  • the received sound signal L B calculated by the received sound signal calculation unit 12g is output to the received sound signal output unit 12h (see FIG. 2).
  • a predetermined received sound signal is then output as an audible signal from the received sound signal output unit 12h to the sound generation device 50 (see FIG. 2).
  • the control unit 12 calculates the received sound signal based on information including the sound transmission loss of the partition 6.
• The control unit 12 may change the received sound signal in accordance with a change in the sound transmission loss. This allows the user to compare the difference in the sound heard by the sound receiver 4 when, for example, a window (not shown) in the partition wall 6 is closed and when it is open.
• When a sound source space 1C in which a sound source 2 is provided and a sound receiving space 1D in which a sound receiver 4 exists are adjacent to each other via a partition wall 6, the user can simulate the experience of the sound heard by the sound receiver 4. For example, when a user uses a vacuum cleaner in the living room of a house, the user can confirm how the sound is heard by a person in a bedroom next to the living room. Also, when a user uses a vacuum cleaner in a room, the user can confirm how the sound is heard by a person living in the next room.
• The seventh embodiment differs from the first embodiment in that a sound source 2 (see FIG. 8) is provided in an open space such as outdoors, and a sound receiver 4 (see FIG. 8) is present in a sound receiving space 1F separated from the open space by a partition wall 7 (see FIG. 8).
• The configuration of the acoustic characteristic analysis system 100 (see FIG. 2) is the same as that of the first embodiment. Therefore, only the parts different from the first embodiment will be described, and the explanation of the overlapping parts will be omitted.
  • FIG. 8 is an explanatory diagram of an acoustic space that is a target of the acoustic characteristic analysis system according to the seventh embodiment.
  • the sound source space 1E shown in Fig. 8 is an open space such as outdoors where the sound source 2 is installed.
  • An example of such an open space is an urban area, but is not limited to this.
  • the sound receiving space 1F is an acoustic space such as a room where a sound receiver 4 exists, and is separated from the sound source space 1E such as outdoors by a partition wall 7.
  • the surface of the partition wall 7 forms a part of the side surface of the sound receiving space 1F.
  • the processing of the acoustic characteristic analysis system 100 will be described with reference to FIG. 2 (also see FIG. 8 as appropriate).
  • the acoustic space information setting unit 12a shown in Fig. 2 sets acoustic space information based on the operation of the terminal 30.
  • Such acoustic space information includes information indicating that the sound source space 1E is an open space, as well as the dimensions of the sound receiving space 1F and the sound transmission loss of the partition wall 7.
• The sound transmittance τ of the partition wall 7 does not need to be uniform, and the partition wall 7 may be composed of n types of parts with different sound transmittances τ.
  • the sound transmission loss of the partition wall 7 (total sound transmission loss) is expressed by equation (10) described in the sixth embodiment.
  • the sound source information setting unit 12b shown in FIG. 2 sets the sound source information.
  • such sound source information includes the sound pressure level of the sound source signal, the distance between the sound pressure level measurement point and the sound source 2, and the directivity coefficient of the sound source 2.
  • the position of the sound source 2 is expressed as (xp,yp,zp).
• Regarding the directivity coefficient of the sound source 2: if the sound source 2 is 1 m or more away from the ground 8 in an open space such as outdoors, the directivity coefficient of the sound source 2 is set to '1'. If the sound source 2 is close to the ground 8 and is 1 m or more away from other reflecting surfaces, the directivity coefficient of the sound source 2 is set to '2'.
  • the sound receiving information setting unit 12c shown in FIG. 2 sets the sound receiving information.
  • Such sound receiving information includes information to set one of the two acoustic spaces as the sound receiving space 1F, as well as the position of the partition wall 7 and the value of the sound transmission loss of the partition wall 7.
  • the position of the partition wall 7 is represented by (xq, yq, zq).
  • the sound collection device 20 shown in FIG. 8 collects an acoustic signal for measuring reverberation time in the sound receiving space 1F (e.g., the sound of a user clapping hands or pink noise).
  • the measurement unit 12d shown in FIG. 2 measures the reverberation time of the sound receiving space 1F based on the acoustic signal for measuring reverberation time collected by the sound collection device 20. Note that the equation (1) described in the first embodiment is used to calculate the reverberation time.
  • the value of the reverberation time measured by the measurement unit 12d is output to the acoustic characteristic calculation unit 12e.
  • the acoustic characteristic calculation unit 12e shown in FIG. 2 calculates the average sound absorption coefficient of the sound receiving space 1F based on the reverberation time.
  • the average sound absorption coefficient is calculated using the formula (2) described in the first embodiment.
  • the value of the average sound absorption coefficient calculated by the acoustic characteristic calculation unit 12e is output to the acoustic characteristic output unit 12f and the received sound signal calculation unit 12g.
  • the sound receiving signal calculation unit 12g calculates an average sound receiving signal of the sound emitted from the sound source 2 and heard in the sound receiving space 1F based on the average sound absorption coefficient of the sound receiving space 1F, as well as the acoustic space information, the sound source information, and the sound receiving information.
  • the average sound pressure level L B of the sound receiving space 1F is expressed by the following formula (13).
  • L C included in formula (13) is the sound pressure level of the sound incident on the partition wall 7 from the sound source space 1E.
  • TL included in formula (13) is the sound transmission loss in the partition wall 7, and S is the surface area of the partition wall 7.
  • S B included in formula (13) is the surface area of the sound receiving space 1F
• ᾱ B is the average sound absorption coefficient of the sound receiving space 1F.
• r 1 included in equation (14) is the distance between the sound source 2 and the partition wall 7, and is expressed by the following equation (15).
  • the central position of the partition wall 7 may be set as the representative position of the partition wall 7.
  • the average sound pressure level L B in the sound receiving space 1F is expressed by the following formula (16).
• The difference value ΔL 1 between the sound pressure level L 0 of the sound source signal and the average sound pressure level L B of the sound receiving space 1F is expressed by the following equation (17).
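Equations (13) through (17) are not reproduced in this excerpt. Assuming the standard forms — free-field propagation from the outdoor source to the partition over the distance r 1, followed by the same level-difference relation as in the sixth embodiment — the seventh embodiment's calculation might look like this (function names and example values are hypothetical):

```python
import math

def incident_level(lw_db, q_dir, r1_m):
    # Free-field level at the partition, at distance r1 from the source:
    #   L_C = L_W + 10 * log10(Q / (4 * pi * r1^2))
    # Q = 1 for a source well above the ground, Q = 2 for one near it.
    return lw_db + 10.0 * math.log10(q_dir / (4.0 * math.pi * r1_m ** 2))

def indoor_level(lc_db, tl_db, s_partition, s_room, alpha_b):
    # Same level-difference form assumed for formula (13):
    #   L_B = L_C - TL + 10 * log10(S / (S_B * alpha_B))
    return lc_db - tl_db + 10.0 * math.log10(s_partition / (s_room * alpha_b))

lc = incident_level(lw_db=100.0, q_dir=2.0, r1_m=20.0)  # e.g. excavator 20 m away
lb = indoor_level(lc, tl_db=30.0, s_partition=6.0, s_room=90.0, alpha_b=0.2)
```

Doubling r 1 lowers the incident level by about 6 dB under this free-field assumption, so both the outdoor distance and the window's transmission loss shape the signal the user hears.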
  • the received sound signal L B calculated by the received sound signal calculation unit 12g is output to the received sound signal output unit 12h (see FIG. 2).
  • a predetermined received sound signal is then output as an audible signal from the received sound signal output unit 12h to the sound generation device 50 (see FIG. 2).
  • the control unit 12 calculates the received sound signal based on information including the sound transmission loss of the partition wall 7 when it is assumed that the receiver 4 is present in the sound receiving space 1F (acoustic space) separated by the partition wall 7 from the sound source space 1E (open space) in which the sound source 2 is provided.
• The control unit 12 may change the received sound signal in accordance with a change in the sound transmission loss. This allows the user to confirm, for example, the difference in how the sound is heard when a window (not shown) in the partition wall 7 is opened or closed.
• When the sound receiver 4 is in a sound receiving space 1F separated by a partition wall 7 from an open space, such as outdoors, in which the sound source 2 is provided, the user can simulate the experience of the sound heard by the sound receiver 4.
• For example, when an airplane, helicopter, drone, or the like is flying, or when a construction machine such as an excavator is operated, the user can simulate the experience of how the sound is heard in the sound receiving space 1F.
  • the designer can simulate the experience of the operating sound of the equipment in a specified usage environment, and then the result can be reflected in the design.
  • the control unit 12 of the acoustic characteristic analysis device 10 (see FIG. 2) is described as having each component such as the acoustic space information setting unit 12a, but this is not limited to the above. That is, the terminal 30 may take on some or all of the functions of the acoustic space information setting unit 12a, etc., provided in the control unit 12. The same can be said about the second to seventh embodiments.
  • the sound collection device 20 (see FIG. 2) is separate from the acoustic characteristic analysis device 10 (see FIG. 2), but this is not limited to the above. In other words, the sound collection device 20 may be included in the acoustic characteristic analysis device 10. The same can be said about the fourth to seventh embodiments.
  • the average sound absorption coefficient and reverberation time of the acoustic space 1 are displayed on the display device 40, but other data may be displayed.
  • the waveform of the sound received at the position of the sound receiver 4 (such as changes in sound pressure level from moment to moment) may be displayed on the display device 40.
  • the waveform of the sound source signal may also be displayed on one screen in synchronization with the received sound signal.
  • the acoustic characteristic calculation unit 12e may calculate the voice clarity based on the sound collected in the acoustic space 1 by the sound collection device 20, the acoustic space information, the sound source information, and the sound reception information.
  • Voice clarity is a value indicating how easily speech can be heard.
  • the sound source information and the sound reception information are also input to the acoustic characteristic calculation unit 12e.
  • the voice clarity may be displayed on the display device 40 together with information indicating the characteristics of the acoustic space 1 (average sound absorption coefficient and reverberation time). This allows the user to understand the degree of ease of conversation in the acoustic space 1. Note that information such as "high”, “medium”, or "low” voice clarity may be displayed on the display device 40.
  • the partition 6 (see FIG. 7) constitutes part of the side of the sound source space 1C and also constitutes part of the side of the sound receiving space 1D, but this is not limited to the above.
  • the sixth embodiment can also be applied to a case where a specific partition (not shown) constitutes the ceiling surface (or floor surface) of the sound source space and also constitutes the floor surface (or ceiling surface) of the sound receiving space. The same can be said about the seventh embodiment.
  • In each embodiment, the sound source 2 is described as a specific device, but this is not limiting.
  • each embodiment can be applied to a case where the sound source 2 is a human voice or an animal cry.
  • each embodiment can also be applied to the analysis of noise caused by the operation of equipment (i.e., the sound source 2).
  • For example, based on the seventh embodiment, it is also possible to analyze noise generated when a construction machine or the like (that is, the sound source 2) is operated outdoors.
  • each embodiment can be appropriately combined.
  • the control unit 12 may calculate a sound receiving signal based on an average sound absorption coefficient or the like input by operating the terminal 30 (second embodiment).
  • the third embodiment and the seventh embodiment may be combined, and in a virtual space (third embodiment), the control unit 12 may calculate a sound receiving signal indicating how sound is heard in a sound receiving space 1F separated from an open space by a partition 7 (seventh embodiment).
  • the third and sixth embodiments may also be combined to form the following configuration. That is, in a virtual space, when a specific character (sound receiver) exists in a sound source space in which a sound source 2 is provided, and when a character (sound receiver) exists in a sound receiving space adjacent to the sound source space via a partition wall 6, and when switching from one to the other by operating the terminal 30, the control unit 12 may change the position of the character on the display screen and change the sound receiving signal of the sound heard at the character's position. With this configuration, the sound receiving signal changes in accordance with the change in the character's position, thereby enhancing the sense of realism when the user has a specific simulated experience in the virtual space.
  • the third, sixth, and seventh embodiments may be combined to form the following configuration. That is, in a virtual space, when a predetermined character (sound receiver) exists in a sound receiving space adjacent to the sound source space in which the sound source 2 is provided via a partition wall 6, and when a character (sound receiver) exists in a sound receiving space separated by a partition wall 7 from an open space in which the sound source 2 is provided (outdoors, etc.), and when switching from one to the other by operating the terminal 30, the control unit 12 may change the position of the character on the display screen and change the sound receiving signal.
  • the sense of realism when the user has a predetermined simulated experience in the virtual space can be enhanced.
  • the third and seventh embodiments may also be combined to form the following configuration. That is, when a specific character (sound receiver) exists in a sound source space in which the sound source 2 is provided in a virtual space, and when a character (sound receiver) exists in a sound receiving space separated by a partition wall 7 from an open space in which the sound source 2 is provided (outdoors, etc.), and when switching is performed from one to the other by operating the terminal 30, the control unit 12 may change the position of the character on the display screen and change the sound receiving signal. With such a configuration, the sense of realism when the user has a specific simulated experience in the virtual space can also be enhanced.
  • At least a part of the calculation results of each component such as the acoustic space information setting section 12a of the control section 12 (see FIG. 2) may be stored in an information recording medium such as a hard disk drive or a removable disk.
  • the processes (such as the acoustic characteristic analysis method) executed by the acoustic characteristic analysis system 100 may be executed as a predetermined program of a computer.
  • the program may be provided via a communication line, or may be written onto a recording medium such as a CD-ROM and distributed.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • Stereophonic System (AREA)

Abstract

Provided are an acoustic characteristic analysis system and the like which enable a user to experience, in a simulated manner, a sound from a sound source audible in a prescribed acoustic space. The acoustic characteristic analysis system comprises a control unit that outputs, as an audible signal and on the basis of characteristics of an acoustic space (1) which is a real space or a virtual space, a reception sound signal to be received when a user experiences, in a simulated manner, a sound from a prescribed sound source (2) audible in the acoustic space (1). The control unit measures reverberation time on the basis of a prescribed sound collected in the acoustic space (1) by a sound collection device (20), and calculates an average sound absorption rate on the basis of the dimensions of the acoustic space (1) and the reverberation time.

Description

Acoustic characteristic analysis system and acoustic characteristic analysis method
The present invention relates to an acoustic characteristic analysis system and the like.
As techniques for evaluating how sound resonates in a predetermined acoustic space, the techniques described in Patent Documents 1 and 2 are known. Patent Document 1 describes "a technique for evaluating the reduction of input sound by the arrangement of sound-absorbing means."
Patent Document 2 describes that "the noise level at a sound receiving position is predicted from the incident energy from each element to the sound receiving position, taking into account reflection, multiple diffraction, and sound transmission."
Patent Document 1: JP 2012-68384 A
Patent Document 2: JP 2003-75244 A
The techniques described in Patent Documents 1 and 2 evaluate the sound in an acoustic space based on indices such as the sound pressure level, but they do not describe a technique that allows a user to simulate the experience of the sound of a sound source heard in a predetermined acoustic space.
The present invention therefore aims to provide an acoustic characteristic analysis system and the like that allow a user to simulate the experience of the sound of a sound source heard in a predetermined acoustic space.
To solve the above problem, the acoustic characteristic analysis system according to the present invention includes a control unit that outputs, as an audible signal, a received sound signal for a user to simulate the experience of the sound of a predetermined sound source heard in an acoustic space that is a real space or a virtual space, based on the characteristics of the acoustic space.
According to the present invention, an acoustic characteristic analysis system and the like that allow a user to simulate the experience of the sound of a sound source heard in a predetermined acoustic space can be provided.
FIG. 1 is an explanatory diagram of an acoustic space that is the target of the acoustic characteristic analysis system according to the first embodiment.
FIG. 2 is a functional block diagram of the acoustic characteristic analysis system according to the first embodiment.
FIG. 3 is a front view of a smartphone having a display as an example of the display device of the acoustic characteristic analysis system according to the first embodiment.
FIG. 4 is a flowchart showing the processing executed by the control unit of the acoustic characteristic analysis system according to the first embodiment.
FIG. 5 is a functional block diagram of an acoustic characteristic analysis system according to a second embodiment.
FIG. 6 is an explanatory diagram showing, in an acoustic characteristic analysis system according to a third embodiment, a situation in which a predetermined character is using a vacuum cleaner that serves as a sound source.
FIG. 7 is an explanatory diagram of an acoustic space that is the target of an acoustic characteristic analysis system according to a sixth embodiment.
FIG. 8 is an explanatory diagram of an acoustic space that is the target of an acoustic characteristic analysis system according to a seventh embodiment.
≪First Embodiment≫
<Target of the acoustic characteristic analysis system>
FIG. 1 is an explanatory diagram of an acoustic space 1 that is the target of the acoustic characteristic analysis system according to the first embodiment.
The acoustic characteristic analysis system 100 (see FIG. 2) is a system that allows a user to simulate the experience of the sound heard from a sound source 2 in a predetermined acoustic space 1. In the first embodiment, a case is described in which the acoustic space 1 is a real space (a space that actually exists). An example of such an acoustic space 1 is a room in a building such as a house or an office building, but the acoustic space 1 is not limited to this. That is, the acoustic space 1 may be a space inside a moving body such as an automobile, a railroad vehicle, a ship, or an aircraft, or a space inside an elevator car. The acoustic space 1 may also be a large space such as a department-store sales floor or a concert hall, or a predetermined space existing underground.
As shown in FIG. 1, the acoustic space 1 is defined by its side surfaces 1a and 1b, a ceiling surface 1c, and a floor surface 1d. Predetermined objects 3 such as desks, chairs, and shelves are placed in the acoustic space 1. Although not shown in FIG. 1, windows and doors are often provided in the acoustic space 1.
The sound source 2 shown in FIG. 1 is a device whose operating sound (product sound) the user experiences in a simulated manner. Examples of such a sound source 2 include home appliances such as vacuum cleaners, washing machines, air conditioners, microwave ovens, and televisions, but other devices are also possible. For example, when a user is considering purchasing a device found through an Internet search, the user can use the acoustic characteristic analysis system 100 (see FIG. 2) to simulate how the operating sound of the device would be heard in the acoustic space 1 (such as the living room of the user's home).
The sound source 2 in FIG. 1 is illustrated for convenience to show the assumed position of the device (that is, the sound source 2) that the user is considering purchasing, if it were placed in the acoustic space 1; the device does not actually have to be placed in the acoustic space 1. The device is merely the subject of the user's simulated experience of its operating sound, so it need not actually be installed in the acoustic space 1.
The noise level of a device's operating sound (or the A-weighted sound pressure level, which is close to human hearing) may be stated in the device's specifications, but the noise level alone does not let the user experience what the operating sound actually sounds like. In addition, noise levels are generally evaluated in an anechoic chamber (not shown) that suppresses reflected sound as much as possible, whereas in a room where the device is actually used, sound is reflected by the walls, so the sound resonates differently than in an anechoic chamber. In the first embodiment, therefore, the user can simulate the experience of the operating sound of a device (the sound source 2 in FIG. 1) in a predetermined acoustic space 1 using the acoustic characteristic analysis system 100 (see FIG. 2). This can serve as a basis for the user's purchase decision.
The sound receiver 4 shown in FIG. 1 indicates the sound receiving position when the sound of the sound source 2 is heard in the acoustic space 1. The sound collection device 20 is a device that collects sound in the acoustic space 1; for example, a microphone is used. The x-, y-, and z-axes shown in FIG. 1 are described later.
<Configuration of the acoustic characteristic analysis system>
FIG. 2 is a functional block diagram of the acoustic characteristic analysis system 100.
As shown in FIG. 2, the acoustic characteristic analysis system 100 includes an acoustic characteristic analysis device 10, a sound collection device 20, a terminal 30, a display device 40, and a sound generation device 50. The acoustic characteristic analysis device 10 is a device that analyzes the sound of a sound source 2 (see FIG. 1) heard by a sound receiver 4 (see FIG. 1) in a predetermined acoustic space 1 (see FIG. 1). Such an acoustic characteristic analysis device 10 may be configured as one computer or as multiple computers (servers, etc.).
As shown in FIG. 2, the acoustic characteristic analysis device 10 includes a memory unit 11 and a control unit 12. The memory unit 11 includes non-volatile memory such as a ROM (Read Only Memory) and an HDD (Hard Disk Drive), and volatile memory such as a RAM (Random Access Memory) and registers. The memory unit 11 stores predetermined data in addition to the programs executed by the control unit 12.
The control unit 12 is, for example, a CPU (Central Processing Unit); it reads a program stored in the memory unit 11 and executes predetermined processing. That is, based on the "characteristics" of the acoustic space 1 (see FIG. 1), the control unit 12 outputs, as an audible signal, a received sound signal for the user to simulate the experience of the sound of a predetermined sound source 2 heard in the acoustic space 1. As described in detail later, the "characteristics" of the acoustic space 1 are the reverberation time or the average sound absorption coefficient of the acoustic space 1.
As shown in FIG. 2, the control unit 12 includes an acoustic space information setting unit 12a, a sound source information setting unit 12b, a received sound information setting unit 12c, a measurement unit 12d, and an acoustic characteristic calculation unit 12e. In addition to these, the control unit 12 includes an acoustic characteristic output unit 12f, a received sound signal calculation unit 12g, and a received sound signal output unit 12h.
The acoustic space information setting unit 12a sets acoustic space information. Such acoustic space information includes information indicating the dimensions of the acoustic space 1 (see FIG. 1). If the acoustic space 1 is regarded as a rectangular parallelepiped and a predetermined corner is set as the origin O (0, 0, 0) as shown in FIG. 1, the dimensions of the acoustic space 1 are set as follows: with the horizontal dimension xa, the depth dimension ya, and the height dimension za, the dimensions of the acoustic space 1 are expressed as (xa, ya, za). These dimension values are input by the user operating the terminal 30.
The user's terminal 30 may be, for example, a computer, or a portable terminal such as a mobile phone, a smartphone, a tablet, or a wearable device. The terminal 30 may also assume one or more of the functions of the sound collection device 20, the display device 40, and the sound generation device 50 shown in FIG. 2. The dimensions of the acoustic space 1 may also be measured using a distance-measuring sensor such as LiDAR (Light Detection and Ranging). Such acoustic space information is stored in the memory unit 11 and is also output to the acoustic characteristic calculation unit 12e and the received sound signal calculation unit 12g.
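Because the acoustic space 1 is treated as a rectangular parallelepiped (xa, ya, za), its volume and total inner surface area follow directly from the input dimensions; these quantities are used by absorption calculations later in the description. The sketch below is illustrative only (the function name is not from the patent):

```python
def room_volume_and_surface(xa: float, ya: float, za: float):
    """Volume [m^3] and total inner surface area [m^2] of a
    rectangular-parallelepiped room with width xa, depth ya,
    and height za (all in meters)."""
    volume = xa * ya * za
    surface = 2.0 * (xa * ya + ya * za + za * xa)
    return volume, surface

# Example: a 4 m x 3 m x 2.5 m living room
# volume = 30.0 m^3, surface = 59.0 m^2
```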
The sound source information setting unit 12b sets sound source information, that is, information about a predetermined sound source 2 (see FIG. 1) that is expected to be installed in the acoustic space 1 (see FIG. 1). Such sound source information includes the position of the sound source 2 in the acoustic space 1, the sound source signal, the sound pressure level of the sound source signal, the distance between the sound pressure level measurement point and the sound source 2, the sound field at the time of the sound pressure level measurement, and the directivity coefficient of the sound source 2.
The position of the sound source 2 included in the sound source information is the position of the sound source 2 when it is assumed that the user places the sound source 2 (a predetermined device) in the acoustic space 1 (for example, the living room of the user's home). The position of the sound source 2 in the acoustic space 1 is expressed as (xs, ys, zs) with the origin O shown in FIG. 1 as the reference. In the example of FIG. 1, the sound source 2 exists inside the acoustic space 1, so xs < xa, ys < ya, and zs < za. The position of the sound source 2 is input based on the user's operation of the terminal 30.
The sound source signal included in the sound source information is an audible signal from the sound source 2. Such a sound source signal is provided in advance by the manufacturer of the device (that is, the sound source 2) or the like, based on measurements of the sound from the sound source 2 placed in an anechoic chamber (not shown). The sound source signal is stored in the memory unit 11 in a predetermined audio signal format (audio file format) that can be played by the sound generation device 50, such as a speaker or headphones. Representative examples of such formats include, but are not limited to, WAV, MP3, AIFF, AU, and RAW.
The sound pressure level of the sound source signal is the sound pressure level emitted from the sound source 2 placed in an anechoic chamber or the like, where the influence of wall reflections is suppressed as much as possible. Such a sound pressure level is extracted from the sound source signal described above and is provided in advance by the manufacturer of the device (that is, the sound source 2) or the like.
The "distance" between the sound pressure level measurement point and the sound source 2 is the distance between the measurement point and the sound source 2 when the sound pressure level of the sound source 2 is measured in an anechoic chamber or the like; it is provided in advance by the manufacturer of the device (that is, the sound source 2). Considering the stability of the sound pressure level measurement, this distance is preferably 1 m or more. One or more measurement points suffice, but the sound pressure level may be measured at multiple points and the average value included in the sound source information.
The "sound field" at the time of the sound pressure level measurement, included in the sound source information, is information indicating whether the acoustic space 1 (see FIG. 1) corresponds to a free field or a semi-free field. A free field is a sound field in an isotropic, homogeneous medium in which the influence of boundaries can be ignored. A semi-free field is a sound field in which the half-space above a reflective boundary surface can be regarded as a free field. The information specifying the sound field is input based on the user's operation of the terminal 30.
The "directivity coefficient" of the sound source 2 (see FIG. 1) included in the sound source information is a value indicating in which directions the sound from the sound source 2 propagates, and it is related to the position of the sound source 2 in the acoustic space 1 (see FIG. 1). For example, if the sound source 2 is 1 m or more away from every surface of the acoustic space 1 (the side surfaces 1a and 1b, the ceiling surface 1c, and the floor surface 1d), the directivity coefficient is set to 1. If the sound source 2 is close to one of these surfaces and 1 m or more away from the others, the directivity coefficient is set to 2. If the sound source 2 is close to two adjacent surfaces, the directivity coefficient is set to 4. If the sound source 2 is close to three adjacent surfaces, the directivity coefficient is set to 8.
The value of the directivity coefficient may be input by the user operating the terminal 30, or may be calculated by the control unit 12 based on the position of the sound source 2 (see FIG. 1) in the acoustic space 1 (see FIG. 1). The sound source information set by the sound source information setting unit 12b is stored in the memory unit 11 and is also output to the received sound signal calculation unit 12g.
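The wall-proximity rule above (a directivity coefficient of 1, 2, 4, or 8 for a source near zero to three mutually adjacent room surfaces) can be sketched as a small helper function. This is an illustrative sketch, not the patent's implementation; the function name and the integer "number of adjacent surfaces" input are assumptions made for the example:

```python
def directivity_coefficient(num_adjacent_surfaces: int) -> int:
    """Directivity coefficient Q for a source near 0 to 3 mutually
    adjacent room surfaces (walls, floor, ceiling), following the
    rule described above: Q doubles with each adjacent surface."""
    if not 0 <= num_adjacent_surfaces <= 3:
        raise ValueError("a source can adjoin at most 3 mutually adjacent surfaces")
    return 2 ** num_adjacent_surfaces

# A source 1 m or more from every surface:  Q = 1
# A source close to one surface only:       Q = 2
# A source close to two adjacent surfaces:  Q = 4
# A source in a corner (three surfaces):    Q = 8
```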
The received sound information setting unit 12c sets received sound information. Such received sound information includes information indicating the position (sound receiving position) of the sound receiver 4 (see FIG. 1) in the acoustic space 1 (see FIG. 1). The sound receiving position in the acoustic space 1 is expressed as (xr, yr, zr) with the origin O shown in FIG. 1 as the reference. Since the sound receiving position is inside the acoustic space 1 and differs from the position (xs, ys, zs) of the sound source 2, xr < xa, xr ≠ xs, yr < ya, yr ≠ ys, zr < za, and zr ≠ zs.
The sound receiving position is preferably set at the center of the head or near the ears of the sound receiver 4 (see FIG. 1). This makes the sound heard when the user simulates the experience of the sound of the sound source 2 closer to the actual sound. The received sound information including the position of the sound receiver 4 is input based on the user's operation of the terminal 30. The received sound information is stored in the memory unit 11 and is also output to the received sound signal calculation unit 12g.
As described above, the sound collection device 20 shown in FIG. 2 is a device that collects sound in the acoustic space 1 (see FIG. 1). Specifically, the sound collection device 20 collects a predetermined sound used to measure the reverberation time of the acoustic space 1. The "reverberation time" is a value indicating how sound reverberates in the acoustic space 1. The sound used to measure the reverberation time may be a highly impulsive sound such as a hand clap, a paper popper, or a bursting balloon, or an intermittent (or continuous) sound such as pink noise. Hereinafter, the sound used to measure the reverberation time is referred to as the "reverberation time measurement acoustic signal", and the sound source that emits it as the "reverberation time measurement sound source".
For example, when a sound produced by a human action, such as a hand clap or a bursting balloon, is used as the reverberation time measurement acoustic signal, the human action serves as the reverberation time measurement sound source. When pink noise is used, a predetermined sound-producing device (not shown) such as a speaker serves as the reverberation time measurement sound source. When the sound collection device 20 collects the reverberation time measurement acoustic signal, background noise is also present, including sounds emitted by devices (not shown) in the acoustic space 1 and environmental sounds propagating into the acoustic space 1 from outdoors through walls and the like. Even with such background noise, the reverberation time measurement is not significantly affected as long as the level of the reverberation time measurement acoustic signal is roughly 20 dB or more above the background noise level.
When the sound collection device 20 collects the reverberation time measurement acoustic signal, the reverberation time measurement sound source is preferably 1 m or more away from the sound collection device 20. The side surfaces 1a and 1b, the ceiling surface 1c, and the floor surface 1d of the acoustic space 1 (see FIG. 1), as well as the objects 3, are also preferably 1 m or more away from the sound collection device 20. This improves the stability of the measurement. The sound data collected by the sound collection device 20 is output to the measurement unit 12d, described next.
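The roughly-20 dB criterion above lends itself to a simple numerical check before a measurement is accepted. The sketch below is illustrative only (the function names are not from the patent): it compares the RMS level of a block containing the measurement signal against a block of background noise alone:

```python
import math

def level_db(samples):
    """RMS level of a block of samples, in dB re an arbitrary reference."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

def measurement_usable(signal_block, noise_block, margin_db=20.0):
    """True if the reverberation-time measurement signal exceeds the
    background noise by at least margin_db (about 20 dB, per the text)."""
    return level_db(signal_block) - level_db(noise_block) >= margin_db
```

For instance, a signal block 40 dB above the noise floor passes the check, while one only about 6 dB above it does not.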
 The measurement unit 12d measures the reverberation time of the acoustic space 1 based on a predetermined sound (that is, the acoustic signal for measuring reverberation time) picked up in the acoustic space 1 by the sound collection device 20. Specifically, the measurement unit 12d converts the acoustic signal for measuring reverberation time into a predetermined decay curve of acoustic energy (not shown). When the acoustic signal for measuring reverberation time is a highly impulsive sound (for example, a hand clap), the decay curve of the acoustic energy is generated by backward integration of the squared signal (Schroeder's method). When the acoustic signal for measuring reverberation time is an intermittent sound such as pink noise, the measurement unit 12d sequentially records the acoustic energy after the sound is stopped, thereby generating the decay curve of the acoustic energy. Letting D [dB/sec] be the decay rate of the acoustic energy per second, the reverberation time T [sec] is expressed by the following formula (1).
    T = 60 / D    … (1)
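As a sketch of how the measurement unit 12d could derive the decay curve and apply formula (1); the backward-integration step and the function names are assumptions for illustration, not the patent's implementation:

```python
import math

def schroeder_decay_db(impulse_response):
    """Backward-integrate the squared impulse response (Schroeder's method)
    and return the energy decay curve in dB, normalized to 0 dB at t = 0."""
    energy = [s * s for s in impulse_response]
    remaining = []
    total = 0.0
    for e in reversed(energy):  # cumulative energy from each sample to the end
        total += e
        remaining.append(total)
    remaining.reverse()
    return [10 * math.log10(t / remaining[0]) for t in remaining]

def reverberation_time(decay_rate):
    """Formula (1): reverberation time T [sec] for a decay rate D [dB/sec],
    i.e. the time needed for a 60 dB decay."""
    return 60.0 / decay_rate

curve = schroeder_decay_db([0.5 ** i for i in range(16)])
print(reverberation_time(120.0))  # 0.5 s for a 120 dB/s decay
```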
 Considering the influence of room eigenmodes caused by standing waves in the acoustic space 1 and the variation attributable to the position of the sound collection device 20, it is desirable to pick up the sound at a plurality of positions using the sound collection device 20. In this case, the measurement unit 12d sets the average of the plurality of reverberation times as the reverberation time of the acoustic space 1. The reverberation time value measured (calculated) by the measurement unit 12d is stored in the storage unit 11 and is also output to the acoustic characteristic calculation unit 12e.
 The acoustic characteristic calculation unit 12e calculates the average sound absorption coefficient α of the acoustic space 1 based on the dimensions and the reverberation time of the acoustic space 1 (see FIG. 1). Here, the "average sound absorption coefficient" is a value indicating the degree of sound absorption in the acoustic space 1, and lies in the range from zero to 1 inclusive. For example, in an acoustic space 1 in which no sound absorption occurs at all, the average sound absorption coefficient α is zero; when all sound is absorbed in the acoustic space 1, the average sound absorption coefficient α is 1. The average sound absorption coefficient α is expressed by the following formula (2) based on the reverberation time T of the acoustic space 1 as well as the volume V, surface area S, and sound speed c of the acoustic space 1.
    α = 55.26 V / (c S T)    … (2)
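Formula (2) appears only as an image in the source; assuming the classical Sabine relation T = 55.26·V/(c·S·α), which matches the variables named in the text, α can be computed as follows (a sketch; the clamping to [0, 1] is added for safety and the room dimensions are illustrative):

```python
def average_absorption(volume, surface, reverb_time, sound_speed=340.0):
    """Average sound absorption coefficient alpha from the Sabine relation
    T = 55.26 * V / (c * S * alpha), solved for alpha and clamped to [0, 1]."""
    alpha = 55.26 * volume / (sound_speed * surface * reverb_time)
    return min(max(alpha, 0.0), 1.0)

# Example: a 4 m x 5 m x 2.5 m room (V = 50 m^3, S = 85 m^2) with T = 0.5 s.
print(round(average_absorption(50.0, 85.0, 0.5), 3))  # 0.191
```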
 The average sound absorption coefficient α calculated by the acoustic characteristic calculation unit 12e is stored in the storage unit 11 and is also output to the acoustic characteristic output unit 12f and the received sound signal calculation unit 12g.
 The acoustic characteristic output unit 12f causes the display device 40 to display the average sound absorption coefficient α calculated by the acoustic characteristic calculation unit 12e.
 FIG. 3 is a front view of a smartphone 41 including a display 41a as an example of the display device.
 The smartphone 41 shown in FIG. 3 is a user's mobile terminal and includes the display 41a as the display device 40 (see FIG. 2). The average sound absorption coefficient for each sound frequency is displayed on the display 41a in a predetermined manner, so that the user can visually grasp the magnitude of the average sound absorption coefficient of the acoustic space 1 (see FIG. 1).
 The average sound absorption coefficient for each sound frequency may be displayed on the display 41a in a predetermined tabular format or as a graph. In addition to the average sound absorption coefficient, the reverberation time for each sound frequency may be displayed on the display 41a as information indicating the characteristics of the acoustic space 1 (see FIG. 1). Alternatively, information classifying the average sound absorption coefficient α as, for example, "high," "medium," or "low" may be displayed on the display 41a. In addition to the display function, the smartphone 41 may also take on the input function of the terminal 30 (see FIG. 2) described above.
 The received sound signal calculation unit 12g shown in FIG. 2 calculates a received sound signal based on the average sound absorption coefficient α calculated by the acoustic characteristic calculation unit 12e, as well as the acoustic space information, sound source information, and sound reception information described above. The received sound signal is a signal indicating the sound heard at the position of the sound receiver 4 (see FIG. 1) when the sound source 2 (see FIG. 1) is placed in the acoustic space 1 (see FIG. 1).
 Such a received sound signal L1 is expressed by the following formula (3) using the sound source signal L0 emitted from the sound source 2. In formula (3), Y is the area of a sphere (in the case of a free sound field) or a hemisphere (in the case of a semi-free sound field) whose radius is the distance between the sound pressure level measurement point and the sound source 2, and is calculated based on the sound source information. Y0 is a predetermined reference area, and Q is a directivity factor. Further, r in formula (3) is the distance between the position of the sound source 2 and the position of the sound receiver 4.
    L1 = L0 + 10 log10(Y / Y0) + 10 log10{Q / (4πr²) + 4(1 − α) / (Sα)}    … (3)
 The distance r between the position of the sound source 2 and the position of the sound receiver 4 is expressed by the following formula (4).
    r = √{(x1 − x0)² + (y1 − y0)² + (z1 − z0)²}    … (4)
where (x0, y0, z0) and (x1, y1, z1) denote the coordinates of the position of the sound source 2 and the position of the sound receiver 4, respectively.
 When the sound source 2 is placed in the acoustic space 1, the difference value ΔL between the received sound signal L1 at the position of the sound receiver 4 and the sound source signal L0 emitted from the sound source 2 is expressed by the following formula (5).
    ΔL = L1 − L0 = 10 log10(Y / Y0) + 10 log10{Q / (4πr²) + 4(1 − α) / (Sα)}    … (5)
 Therefore, when the sound source 2 is placed in the acoustic space 1, the received sound signal L1 is a signal whose sound pressure level differs from that of the sound source signal L0 emitted from the sound source 2 by the difference value ΔL. This difference value ΔL is determined by the average sound absorption coefficient α of the acoustic space 1 and by the distance r between the sound source 2 and the sound receiver 4. For the process of shifting the sound source signal L0 by the predetermined difference value ΔL, an equalizer (not shown) that changes the frequency characteristics of the signal may be used, for example. The received sound signal calculation unit 12g may include such an equalizer.
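The difference value ΔL can be sketched numerically. Since formulas (3)–(5) are given only as images in the source, the sketch below uses the classical diffuse-field relation (a direct term Q/(4πr²) plus a reverberant term 4(1 − α)/(Sα), referenced to the measurement area via Y/Y0) as a stand-in, so the exact form is an assumption:

```python
import math

def level_difference(alpha, surface, r, Q=2.0, Y=1.0, Y0=1.0):
    """Difference Delta-L between received level L1 and source level L0
    (classical diffuse-field stand-in for formula (5))."""
    direct = Q / (4.0 * math.pi * r * r)                    # direct sound
    reverberant = 4.0 * (1.0 - alpha) / (surface * alpha)   # reverberant field
    return 10 * math.log10(Y / Y0) + 10 * math.log10(direct + reverberant)

def received_level(L0, alpha, surface, r, **kw):
    """Received sound signal level L1 = L0 + Delta-L."""
    return L0 + level_difference(alpha, surface, r, **kw)

# A more absorptive room (larger alpha) yields a quieter received signal.
print(received_level(70.0, 0.4, 85.0, 2.0) < received_level(70.0, 0.1, 85.0, 2.0))  # True
```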
 The received sound signal L1 calculated by the received sound signal calculation unit 12g and the difference value ΔL between the received sound signal L1 and the sound source signal L0 are stored in the storage unit 11. The received sound signal L1 calculated by the received sound signal calculation unit 12g is also output to the received sound signal output unit 12h.
 The received sound signal output unit 12h outputs the received sound signal L1 calculated by the received sound signal calculation unit 12g as a predetermined audible signal. Here, an audible signal is a signal that a person can hear as sound. The audible signal is stored in the storage unit 11 in an acoustic signal format (audio file format) reproducible by the sound generation device 50, and is converted into air vibrations by being output to the sound generation device 50 such as a loudspeaker.
 The sound generation device 50 shown in FIG. 2 is, for example, a loudspeaker, earphones, or headphones, and is connected to the received sound signal output unit 12h by wire or wirelessly. A predetermined audible sound is emitted from the sound generation device 50 based on the received sound signal output from the received sound signal output unit 12h. In this way, since the received sound signal L1 is output from the received sound signal output unit 12h as a predetermined audible signal, the user can have a simulated experience of the sound of the sound source 2 (see FIG. 1) in the acoustic space 1 (see FIG. 1).
 FIG. 4 is a flowchart showing the processing executed by the control unit of the acoustic characteristic analysis system (see also FIGS. 1 and 2 as appropriate).
 The order of the processing in steps S101 to S104 in FIG. 4 can be changed as appropriate.
 In step S101, the control unit 12 sets the acoustic space information using the acoustic space information setting unit 12a. As described above, the acoustic space information is information including the dimensions of the acoustic space 1 and is set based on an input operation on the terminal 30.
 In step S102, the control unit 12 sets the sound source information using the sound source information setting unit 12b. That is, the sound source information setting unit 12b sets sound source information including the position of the sound source 2, the sound pressure level of the sound source 2, and the sound source signal based on an input operation on the terminal 30.
 In step S103, the control unit 12 sets the sound reception information using the sound reception information setting unit 12c. That is, the sound reception information setting unit 12c sets sound reception information including the position of the sound receiver 4 based on an input operation on the terminal 30.
 In step S104, the control unit 12 acquires the acoustic signal for measuring reverberation time using the sound collection device 20. For example, the acoustic signal for measuring reverberation time may be acquired in a state in which the user has started a predetermined application using the terminal 30. As described above, the acoustic signal for measuring reverberation time may be a highly impulsive sound such as the sound of the user clapping their hands, or an intermittent sound such as pink noise.
 In step S105, the control unit 12 measures the reverberation time of the acoustic space 1 using the measurement unit 12d. That is, the control unit 12 measures the reverberation time of the acoustic space 1 based on the acoustic signal for measuring reverberation time.
 In step S106, the control unit 12 calculates the average sound absorption coefficient of the acoustic space 1 using the acoustic characteristic calculation unit 12e. That is, the control unit 12 calculates the average sound absorption coefficient of the acoustic space 1 based on the dimensions and the reverberation time of the acoustic space 1.
 In step S107, the control unit 12 causes the display device 40 to display the acoustic characteristic data using the acoustic characteristic output unit 12f. That is, the control unit 12 causes the display device 40 to display at least one of the reverberation time and the average sound absorption coefficient of the acoustic space 1. This allows the user to grasp the reverberation time and the average sound absorption coefficient of the acoustic space 1.
 In step S108, the control unit 12 calculates the received sound signal using the received sound signal calculation unit 12g. That is, the control unit 12 calculates the received sound signal based on the acoustic space information, the sound source information, the sound reception information, and the average sound absorption coefficient.
 In step S109, the control unit 12 outputs the received sound signal to the sound generation device 50 using the received sound signal output unit 12h. This allows the user to have a simulated experience of the sound that would be heard if the sound source 2 were placed in the acoustic space 1.
<Effects>
 According to the first embodiment, the user can hear the operating sound of a predetermined device as it would sound in the usage environment (that is, the acoustic space 1) without going to a home appliance retailer or the like. In addition to the dimensions of the acoustic space 1, the received sound signal reflects the sound source information and the sound reception information, so the user can have a simulated experience of a sound close to the operating sound of the device actually used in the acoustic space 1. The system can also be utilized in the development stage of a device: a designer can have a simulated experience of the operating sound of the device in a predetermined usage environment and reflect the result in the design.
Second Embodiment
 The second embodiment differs from the first embodiment in that there is no particular need to provide the sound collection device 20 (see FIG. 2) or the measurement unit 12d (see FIG. 2), and in that the value of the average sound absorption coefficient is input from the terminal 30 (see FIG. 5) to the acoustic characteristic calculation unit 12e (see FIG. 5). The rest is the same as in the first embodiment. Therefore, only the parts that differ from the first embodiment are described, and description of the overlapping parts is omitted.
 FIG. 5 is a functional block diagram of an acoustic characteristic analysis system 100A according to the second embodiment.
 As shown in FIG. 5, the acoustic characteristic analysis system 100A includes an acoustic characteristic analysis device 10A, the terminal 30, the display device 40, and the sound generation device 50. The control unit 12A of the acoustic characteristic analysis device 10A includes an acoustic space information setting unit 12Aa, the sound source information setting unit 12b, and the sound reception information setting unit 12c, as well as an acoustic characteristic calculation unit 12Ae, the acoustic characteristic output unit 12f, the received sound signal calculation unit 12g, and the received sound signal output unit 12h.
 The acoustic space information setting unit 12Aa has a function of setting the average sound absorption coefficient of the acoustic space 1 (see FIG. 1) in addition to the dimensions of the acoustic space 1. Here, the average sound absorption coefficient of the acoustic space 1 is assumed to be known from past measurements or the like. The value of the average sound absorption coefficient may be input by operating the terminal 30, or may be a value already stored in the storage unit 11.
 The average sound absorption coefficient set by the acoustic space information setting unit 12Aa is output to the received sound signal calculation unit 12g via the acoustic characteristic calculation unit 12Ae. Since the average sound absorption coefficient of the acoustic space 1 is already given, there is no particular need for the acoustic characteristic calculation unit 12Ae to calculate it. The remaining components of the control unit 12A shown in FIG. 5 are the same as those in the first embodiment, and their description is therefore omitted.
<Effects>
 According to the second embodiment, when the average sound absorption coefficient has already been acquired, there is no need to pick up an acoustic signal for measuring reverberation time (for example, the sound of the user clapping their hands) again in the acoustic space 1 (see FIG. 1), which improves convenience for the user.
<Modification of the Second Embodiment>
 In the second embodiment, a case in which the value of the average sound absorption coefficient is input by the user operating the terminal 30 (see FIG. 5) has been described, but the reverberation time of the acoustic space 1 (see FIG. 1) may be input instead of the average sound absorption coefficient. Here, the reverberation time of the acoustic space 1 is assumed to be known from past measurements or the like. In this case, the acoustic space information setting unit 12Aa (see FIG. 5) sets the reverberation time of the acoustic space 1 and outputs the value of this reverberation time to the acoustic characteristic calculation unit 12Ae.
 The acoustic characteristic calculation unit 12Ae calculates the average sound absorption coefficient of the acoustic space 1 based on the reverberation time, and the calculated average sound absorption coefficient is output to the received sound signal calculation unit 12g. When a value of the reverberation time (or of the average sound absorption coefficient) is input by operating the terminal 30, the received sound signal calculation unit 12g (that is, the control unit 12A) uses that value when calculating the received sound signal. This configuration also achieves the same effects as the second embodiment.
Third Embodiment
 The third embodiment differs from the second embodiment in that an acoustic space 1B (see FIG. 6) is constructed as a virtual space, and the user has a simulated experience of the sound heard by a character 5 (see FIG. 6) in the virtual space when a predetermined sound source (for example, the vacuum cleaner 60 in FIG. 6) is placed in the acoustic space 1B. The rest (the configuration of the acoustic characteristic analysis system 100A and the like: see FIG. 5) is the same as in the second embodiment. Therefore, only the parts that differ from the second embodiment are described, and description of the overlapping parts is omitted.
 FIG. 6 is an explanatory diagram showing a situation in which a predetermined character 5 is using a vacuum cleaner 60, which is a sound source, in the acoustic characteristic analysis system according to the third embodiment.
 The acoustic space 1B shown in FIG. 6 is a three-dimensional virtual space (for example, a metaverse) constructed by a computer such as a server (not shown). Such a virtual space may be used for virtual live performances and online games, as well as for electronic commerce (EC) in the form of a virtual shop. The image of the acoustic space 1B may be displayed on a mobile terminal such as a smartphone or tablet, or on a head-mounted display, smart glasses, or smart contact lenses.
 FIG. 6 shows an example in which the predetermined character 5 is using the vacuum cleaner 60 in a living room serving as the acoustic space 1B. The character 5 is, for example, an avatar corresponding to the user, but may be another predetermined character.
 The control unit 12A (see FIG. 5) causes the display device 40 (see FIG. 5) to display an image corresponding to the sound source (the vacuum cleaner 60 in the example of FIG. 6) together with the predetermined character 5. The control unit 12A also sets the position of the character 5 as the position of the sound receiver who hears the sound of the sound source (the vacuum cleaner 60 in the example of FIG. 6) in the acoustic space 1B. This allows the user to hear, through the sound generation device 50 (see FIG. 5), the sound of the sound source as heard at the position of the character 5.
 The vacuum cleaner 60 shown in FIG. 6 is a sound source selected by the user through an operation on the terminal 30 (see FIG. 5). The type of device corresponding to the sound source is not limited to the vacuum cleaner 60; it may be another home appliance such as an air conditioner or a washing machine, or a predetermined device other than a home appliance. Based on the user's operation on the terminal 30 (see FIG. 5), the character 5 can be moved within the acoustic space 1B, and the type and position of the sound source (the vacuum cleaner 60 in the example of FIG. 6) can be changed.
 The processing performed by the control unit 12A (see FIG. 5) is the same as in the second embodiment. That is, based on the characteristics of the acoustic space 1B, which is a virtual space, the control unit 12A outputs, as an audible signal, the received sound signal for the user's simulated experience of the sound of the predetermined sound source (the vacuum cleaner 60 in the example of FIG. 6) heard in the acoustic space. Specifically, the control unit 12A outputs the predetermined received sound signal based on the value of the average sound absorption coefficient or the reverberation time indicating the characteristics of the acoustic space 1B.
 The average sound absorption coefficient or the reverberation time of the acoustic space 1B may be set based on an operation on the terminal 30 (see FIG. 5), or a value stored in advance in the storage unit 11 (see FIG. 5) may be used. For example, when an acoustic space 1B having the same dimensions as a predetermined acoustic space that actually exists is set up in the virtual space, and the average sound absorption coefficient or the reverberation time of the real acoustic space is known, those values may be used as the average sound absorption coefficient or the reverberation time of the acoustic space 1B in the virtual space.
 It is also preferable that the control unit 12A (see FIG. 5) change the received sound signal in response to a change in the position of the character 5 in the acoustic space 1B or a change in the position of the sound source (the vacuum cleaner 60 in the example of FIG. 6). This makes the operating sound of the vacuum cleaner 60 that the user hears through the sound generation device 50 (see FIG. 5) more realistic.
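The dependence on the character's position can be illustrated with a diffuse-field stand-in: as the distance r to the vacuum cleaner grows, the level falls off and then flattens at the distance-independent reverberant level. The numbers below, including the room constant and source level, are illustrative assumptions, not values from the embodiment:

```python
import math

def level_at(r, Lw=75.0, Q=2.0, room_constant=30.0):
    """Level heard at distance r from the source in a room with room constant
    R = S * alpha / (1 - alpha); all parameter values are illustrative."""
    return Lw + 10 * math.log10(Q / (4 * math.pi * r * r) + 4.0 / room_constant)

# Recompute the received level each time the character 5 moves.
for r in (0.5, 1.0, 2.0, 4.0):
    print(round(level_at(r), 1))  # monotonically decreasing levels
```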
<Effects>
 According to the third embodiment, when the acoustic space 1B is set up as a virtual space, the user can hear the sound of the sound source as heard at the position of the predetermined character 5. This enhances the sense of reality when the user has a simulated experience of the sound of the sound source in the virtual space.
Fourth Embodiment
 The fourth embodiment differs from the first embodiment in that a plurality of conditions, such as the dimensions of the acoustic space 1 (see FIG. 1), the type and position of the sound source 2 (see FIG. 1), and the position of the sound receiver 4 (see FIG. 1), can be set and compared. The rest (the configuration of the acoustic characteristic analysis system 100 and the like: see FIG. 2) is the same as in the first embodiment. Therefore, only the parts that differ from the first embodiment are described, and description of the overlapping parts is omitted.
 The fourth embodiment will be described with reference to FIGS. 1 and 2.
 The acoustic space information setting unit 12a shown in FIG. 2 sets acoustic space information including the dimensions of the acoustic space 1. Such acoustic space information may be information on one predetermined acoustic space 1, or information on a plurality of acoustic spaces 1 that differ in dimensions or acoustic characteristics (average sound absorption coefficient and the like). For example, the operating sound of a vacuum cleaner, which is the sound source 2, often sounds different when the user uses it in the living room than when the user uses it in the bedroom. Therefore, to simulate the sound of the sound source 2 in a plurality of acoustic spaces 1 having different dimensions and acoustic characteristics, the user inputs the acoustic space information of each of the plurality of acoustic spaces 1.
 The sound source information setting unit 12b shown in FIG. 2 sets sound source information on the sound source 2. Such sound source information may be information on one predetermined type of sound source 2 or on a plurality of types of sound sources 2. A plurality of pieces of sound source information corresponding to different positions of a given sound source 2 may also be set. For example, the vacuum cleaner serving as the sound source 2 may sound different at a predetermined sound receiving position depending on whether it is operated at the center of the living room or near a corner of the living room. Therefore, to simulate the sound of a predetermined sound source 2 of a different type or at a different position, the user inputs the sound source information on that sound source 2.
 The sound reception information setting unit 12c shown in FIG. 2 sets sound reception information including the position of the sound receiver 4. The position information of the sound receiver 4 included in the sound reception information may be one predetermined sound receiving position in the acoustic space 1, or a plurality of sound receiving positions. For example, the sound of the sound source 2 may be heard differently when the sound receiver 4 is at the center of the living room than when the sound receiver 4 is near a corner of the living room. Therefore, to simulate how the sound is heard at a plurality of sound receiving positions, the user inputs sound reception information including each of those positions.
The acoustic characteristic calculation unit 12e shown in FIG. 2 calculates the average sound absorption coefficient of the acoustic space 1 based on the reverberation time of the acoustic space 1. When the dimensions and the like of multiple acoustic spaces 1 are set in the acoustic space information setting unit 12a, multiple reverberation times are measured, one for the reverberation-time measurement acoustic signal of each acoustic space 1. Furthermore, multiple average sound absorption coefficients are calculated, one for each reverberation time. The multiple average sound absorption coefficients calculated by the acoustic characteristic calculation unit 12e are stored in the storage unit 11 in association with the multiple acoustic spaces 1, and are also output to the acoustic characteristic output unit 12f and the received sound signal calculation unit 12g.
The received sound signal calculation unit 12g shown in FIG. 2 calculates the received sound signal based on the average sound absorption coefficient and other information. When multiple positions are set for at least one of the sound source 2 and the sound receiver 4, the received sound signal calculation unit 12g calculates the distance r between the sound source 2 and the sound receiver 4 for each combination of a sound source position and a sound receiver position. As in the first embodiment, the value of this distance r is used to calculate the received sound signal. The received sound signal output unit 12h shown in FIG. 2 outputs the received sound signal calculated by the received sound signal calculation unit 12g to the sound generation device 50 as an audible signal.
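The combination-wise distance calculation described above can be sketched as follows (a minimal illustration; the function name and data layout are assumptions, not part of the embodiment):

```python
import math

def source_receiver_distances(source_positions, receiver_positions):
    """Return the distance r for every combination of source and receiver position.

    Positions are (x, y, z) tuples in metres; the result maps each
    (source_index, receiver_index) pair to its Euclidean distance r,
    mirroring how the received-signal calculator pairs positions.
    """
    distances = {}
    for i, (xs, ys, zs) in enumerate(source_positions):
        for j, (xr, yr, zr) in enumerate(receiver_positions):
            distances[(i, j)] = math.sqrt(
                (xs - xr) ** 2 + (ys - yr) ** 2 + (zs - zr) ** 2)
    return distances

# Example: a vacuum cleaner at two positions, a listener at two positions.
r = source_receiver_distances([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)],
                              [(0.0, 4.0, 0.0), (3.0, 4.0, 0.0)])
```

Each resulting r can then be fed into the received-signal calculation of the first embodiment.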
If at least one piece of information among the dimensions of the acoustic space 1, the characteristics of the acoustic space 1 (average sound absorption coefficient or reverberation time), the position of the sound source 2, the type of the sound source 2, and the position of the sound receiver 4 in the acoustic space 1 is changed by operating the terminal 30, the control unit 12 preferably changes the received sound signal based on the changed information. This allows the user to compare, for example, how a vacuum cleaner sounds when used in the living room with how it sounds when used in the bedroom. The user can also compare how the sound is heard when the position of the vacuum cleaner or the position of the sound receiver 4 is changed.
<Effects>
According to the fourth embodiment, the user can experience the difference in how a sound is heard when the dimensions or acoustic characteristics of the acoustic space 1, or the positions of the sound source 2 and the sound receiver 4, are changed, and can use this as a basis for deciding whether to purchase the device serving as the sound source 2.
≪Fifth Embodiment≫
The fifth embodiment differs from the first embodiment in that the received sound signal is generated based on the head-related transfer function of the sound receiver 4 (see FIG. 1). The other aspects (such as the configuration of the acoustic characteristic analysis system 100: see FIG. 2) are the same as in the first embodiment. Therefore, only the parts that differ from the first embodiment are described, and redundant descriptions are omitted.
The fifth embodiment is described with reference to FIGS. 1 and 2.
The sound receiving information setting unit 12c shown in FIG. 2 sets the head-related transfer function of the sound receiver 4 (see FIG. 1) in addition to the position of the sound receiver 4. The head-related transfer function is a predetermined function that describes how sound reflected and diffracted by the head, body, and pinna of the sound receiver 4 (see FIG. 1) reaches the ear (the entrance of the ear canal) of the sound receiver 4. For example, when sound arrives from directly in front of the sound receiver 4, the head-related transfer function is often essentially flat at low frequencies below 1.6 kHz. In this case, the incident sound wave reaching the ear of the sound receiver 4 is rarely stronger or weaker than the actual sound.
On the other hand, at high frequencies of 1.6 kHz and above, the head-related transfer function is often not flat because of sound reflection and diffraction at the head, body, and pinna of the sound receiver 4. In this case, the incident sound wave reaching the ear of the sound receiver 4 is often stronger or weaker than the actual sound. Therefore, when a sound with a high sound pressure level at high frequencies is set as the sound source 2 (see FIG. 1), the sound heard by the user is more likely to be affected by the head-related transfer function. By including such a head-related transfer function in the sound receiving information, the user can simulate a sound that reflects the effects of reflection and diffraction at the head, body, and pinna of the sound receiver 4.
For example, a predetermined value representing the head-related transfer function may be input by operating the terminal 30. Alternatively, a standard predetermined head-related transfer function may be used as appropriate. The sound receiving information including the head-related transfer function is output from the sound receiving information setting unit 12c to the received sound signal calculation unit 12g.
When calculating the received sound signal L_1 of the sound heard at the position of the sound receiver 4, the received sound signal calculation unit 12g shown in FIG. 2 adds a predetermined head-related transfer function to the received sound signal L_1 of equation (3) described in the first embodiment. This yields a new received sound signal L_1 that reflects the effects of reflection and diffraction at the head, body, and pinna of the sound receiver 4. That is, the control unit 12 calculates the received sound signal L_1 based on the acoustic space information, the sound source information, the sound receiving information, the average sound absorption coefficient, and the head-related transfer function of the sound receiver.
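The addition of a head-related transfer function can be illustrated at the level of octave-band levels, as in the following rough sketch. The band layout and HRTF magnitudes below are invented placeholder values, not data from the embodiment, which operates on the received sound signal of equation (3):

```python
# Illustrative octave-band centre frequencies (Hz).
BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]

def apply_hrtf(band_levels_db, hrtf_db):
    """Add the HRTF magnitude (in dB) to each band level of the received signal.

    Below roughly 1.6 kHz the HRTF is close to flat (near 0 dB), so only
    the high-frequency bands are noticeably changed.
    """
    return [level + gain for level, gain in zip(band_levels_db, hrtf_db)]

flat_low = [0.0, 0.0, 0.0]        # approximately flat below 1.6 kHz
shaped_high = [3.0, 5.0, -2.0]    # reflection/diffraction effects above 1.6 kHz
hrtf = flat_low + shaped_high
received = apply_hrtf([60.0] * 6, hrtf)
```

Only the three bands at and above 2 kHz deviate from the original 60 dB level, matching the observation that sources rich in high frequencies are affected most.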
When the user simulates the sound experience, it is desirable to use a sound generation device 50 (see FIG. 2) such as headphones or earphones that emit sound near the user's ears (specifically, near the entrances of the ear canals). This eliminates the need to consider the head-related transfer function governing how the actual sound travels from the sound generation device 50 to the user's ears.
<Effects>
According to the fifth embodiment, including the head-related transfer function in the sound receiving information enhances the realism when the user simulates the experience of the sound of the sound source 2 in the acoustic space 1.
≪Sixth Embodiment≫
The sixth embodiment describes a case in which a sound source 2 (see FIG. 7) is provided in one of two spaces adjacent to each other across a partition wall 6 (see FIG. 7), and a sound receiver 4 (see FIG. 7) is present in the other. The configuration of the acoustic characteristic analysis system 100 (see FIG. 2) is the same as in the first embodiment. Therefore, only the parts that differ from the first embodiment are described, and redundant descriptions are omitted.
FIG. 7 is an explanatory diagram of the acoustic spaces targeted by the acoustic characteristic analysis system according to the sixth embodiment.
The sound source space 1C shown in FIG. 7 is an acoustic space in which the sound source 2 is provided. The sound receiving space 1D is an acoustic space in which the sound receiver 4 is present. As shown in FIG. 7, the sound source space 1C and the sound receiving space 1D are adjacent to each other across the partition wall 6. Predetermined objects 31 such as a desk, chairs, and shelves are placed in the sound source space 1C, which also has windows and a door (not shown). The same applies to the sound receiving space 1D. The partition wall 6 is a wall that separates the sound source space 1C from the sound receiving space 1D. As shown in FIG. 7, the partition wall 6 forms part of a side surface of the sound source space 1C and also part of a side surface of the sound receiving space 1D.
The processing of the acoustic characteristic analysis system 100 is described with reference to FIG. 2 (see also FIG. 7 as appropriate). The acoustic space information setting unit 12a shown in FIG. 2 sets, based on the operation of the terminal 30, acoustic space information including the dimensions of the sound source space 1C and of the sound receiving space 1D, and the sound transmission loss of the partition wall 6. The sound transmission loss is a value indicating the sound insulation performance of the partition wall 6: the higher the sound insulation performance of the partition wall 6, the larger the sound transmission loss.
The sound source information setting unit 12b shown in FIG. 2 sets predetermined sound source information. Such sound source information includes information designating one of the two acoustic spaces adjacent across the partition wall 6 as the sound source space 1C, as well as the sound source signal of the sound source 2, the sound pressure level of the sound source signal, the distance between the sound pressure level measurement point and the sound source 2, and the directivity coefficient of the sound source 2.
The sound receiving information setting unit 12c shown in FIG. 2 sets the sound receiving information. Such sound receiving information includes information designating one of the two acoustic spaces adjacent across the partition wall 6 as the sound receiving space 1D.
The sound collection device 21 shown in FIG. 7 measures the reverberation-time measurement acoustic signal (such as the sound of the user clapping or pink noise) in the sound source space 1C. The sound collection device 22 shown in FIG. 7 measures the reverberation-time measurement acoustic signal in the sound receiving space 1D.
The measurement unit 12d shown in FIG. 2 measures the reverberation time of the sound source space 1C and the reverberation time of the sound receiving space 1D based on the reverberation-time measurement acoustic signals measured in each space.
The acoustic characteristic calculation unit 12e shown in FIG. 2 calculates the average sound absorption coefficient of the sound source space 1C and that of the sound receiving space 1D based on their respective reverberation times. The average sound absorption coefficients are calculated using equation (2) described in the first embodiment.
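Equation (2) itself is not reproduced in this excerpt; assuming it is the Sabine relation T = 0.161·V/(S·α), the per-space average absorption coefficients could be computed as in the following sketch (the room dimensions are placeholder values):

```python
def average_absorption(volume_m3, surface_m2, reverb_time_s):
    """Average sound absorption coefficient from a measured reverberation time.

    Assumes equation (2) of the first embodiment is the Sabine formula
    T = 0.161 * V / (S * alpha); this is a sketch, not the patent's own text.
    """
    return 0.161 * volume_m3 / (surface_m2 * reverb_time_s)

# Source space 1C and receiving space 1D, each with its own measured time.
alpha_A = average_absorption(volume_m3=40.0, surface_m2=76.0, reverb_time_s=0.5)
alpha_B = average_absorption(volume_m3=30.0, surface_m2=62.0, reverb_time_s=0.4)
```

A shorter measured reverberation time yields a larger average absorption coefficient for the same room geometry.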
The received sound signal calculation unit 12g shown in FIG. 2 calculates the received sound signal of the average sound heard in the sound receiving space 1D when the sound emitted from the sound source 2 passes through the partition wall 6, based on the average sound absorption coefficients of the sound source space 1C and the sound receiving space 1D as well as the acoustic space information, sound source information, and sound receiving information described above. The average sound pressure level L_B of the sound receiving space 1D is expressed by the following equation (6), where L_A is the average sound pressure level of the sound source space 1C, TL is the sound transmission loss of the partition wall 6, S is the surface area of the partition wall 6, S_B is the surface area of the sound receiving space 1D, and α_B is the average sound absorption coefficient of the sound receiving space 1D.
$$L_B = L_A - TL + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (6)$$
The average sound pressure level L_A of the sound source space 1C when the acoustic power level of the sound source 2 is L_W is expressed by the following equation (7), where L_0 is the sound pressure level of the sound source signal emitted from the sound source 2, Y is the area of a sphere (in a free field) or a hemisphere (in a semi-free field) whose radius is the distance between the sound pressure level measurement point and the sound source 2, Y_0 is a predetermined reference area, and A_A is the transmission-absorption area of the sound source space 1C.
$$L_A = L_0 + 10\log_{10}\frac{Y}{Y_0} + 10\log_{10}\frac{4}{A_A} \qquad (7)$$
Substituting equation (7) into equation (6), the average sound pressure level L_B of the sound receiving space 1D is expressed by the following equation (8), where S_A is the surface area of the sound source space 1C and α_A is the average sound absorption coefficient of the sound source space 1C.
$$L_B = L_0 + 10\log_{10}\frac{Y}{Y_0} + 10\log_{10}\frac{4}{S_A\,\alpha_A} - TL + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (8)$$
Therefore, the difference (L_B - L_0) between the sound pressure level L_0 of the sound source signal and the average sound pressure level L_B of the sound receiving space 1D is expressed by the following equation (9).
$$L_B - L_0 = 10\log_{10}\frac{Y}{Y_0} + 10\log_{10}\frac{4}{S_A\,\alpha_A} - TL + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (9)$$
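Assuming the standard diffuse-field room-to-room form for the difference value, equation (9) can be evaluated numerically as in this sketch (the function name and all numeric inputs are placeholders, not values from the embodiment):

```python
import math

def level_difference_eq9(Y, Y0, S_A, alpha_A, TL, S, S_B, alpha_B):
    """Difference L_B - L_0 (dB) between the receiving-room average level
    and the source-signal level, in the assumed form of equation (9):
    10log10(Y/Y0) + 10log10(4/(S_A*alpha_A)) - TL + 10log10(S/(S_B*alpha_B)).
    """
    return (10 * math.log10(Y / Y0)
            + 10 * math.log10(4.0 / (S_A * alpha_A))
            - TL
            + 10 * math.log10(S / (S_B * alpha_B)))

# Measurement on a 1 m sphere, a 30 dB partition, and placeholder room data.
delta = level_difference_eq9(Y=4 * math.pi, Y0=1.0, S_A=76.0, alpha_A=0.2,
                             TL=30.0, S=8.0, S_B=62.0, alpha_B=0.25)
```

With these inputs the sound in the receiving room is well below the source-signal level, as expected for a partition with substantial transmission loss.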
When the partition wall 6 (see FIG. 7) is composed of n kinds of parts with different transmittances τ, the sound transmission loss of the partition wall 6 (also called the total sound transmission loss) is expressed by the following equation (10). Such a total sound transmission loss is particularly useful when parts of different materials coexist in the partition wall 6, for example, when part of the partition wall 6 is made of glass and the rest is made of resin.
$$TL = 10\log_{10}\frac{\sum_{i=1}^{n} S_i}{\sum_{i=1}^{n} S_i\,\tau_i} \qquad (10)$$
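The total (composite) transmission loss can be sketched as follows, assuming the usual area-weighted form for equation (10); the areas and transmittances are illustrative values only:

```python
import math

def composite_transmission_loss(parts):
    """Total sound transmission loss (dB) of a partition built from n parts.

    `parts` is a list of (area_m2, transmittance tau) pairs. The composite
    transmittance is the area-weighted mean of tau, and the loss is
    TL = 10*log10(sum(S_i) / sum(S_i * tau_i)), the form assumed for
    equation (10).
    """
    total_area = sum(area for area, _ in parts)
    weighted_tau = sum(area * tau for area, tau in parts)
    return 10 * math.log10(total_area / weighted_tau)

# A partition that is mostly resin with one glass pane (placeholder values).
tl = composite_transmission_loss([(6.0, 1e-4), (2.0, 1e-3)])
```

Note that the weaker (more transmissive) glass pane dominates: the composite loss ends up well below the 40 dB of the resin portion alone.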
In this case, equation (6) above is replaced by the following equation (11).
$$L_B = L_A - 10\log_{10}\frac{\sum_{i=1}^{n} S_i}{\sum_{i=1}^{n} S_i\,\tau_i} + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (11)$$
Therefore, the difference value ΔL between the sound pressure level L_0 of the sound source signal and the average sound pressure level L_B of the sound receiving space 1D is expressed by the following equation (12).
$$\Delta L = L_B - L_0 = 10\log_{10}\frac{Y}{Y_0} + 10\log_{10}\frac{4}{S_A\,\alpha_A} - 10\log_{10}\frac{\sum_{i=1}^{n} S_i}{\sum_{i=1}^{n} S_i\,\tau_i} + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (12)$$
Equation (12) shows the following: when the sound emitted from the sound source 2 in the sound source space 1C (see FIG. 7) propagates through the partition wall 6 into the sound receiving space 1D, the average received sound signal L_B heard in the sound receiving space 1D is the sound source signal L_0 shifted by the difference value ΔL. The process of shifting the sound source signal L_0 by the difference value ΔL can use, for example, an equalizer (not shown) capable of changing the frequency characteristics of a signal. The received sound signal calculation unit 12g (see FIG. 2) may include such an equalizer.
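The equalizer-based level shift can be illustrated in greatly simplified broadband form, applying a single gain instead of per-band filtering; this is purely a sketch of the idea, not the embodiment's equalizer:

```python
def apply_level_difference(samples, delta_l_db):
    """Scale a time-domain source signal by the difference value dL (in dB).

    A real implementation would apply dL per frequency band with an
    equalizer, as the embodiment suggests; this broadband version only
    demonstrates the dB-to-amplitude conversion.
    """
    gain = 10 ** (delta_l_db / 20.0)  # dB -> amplitude ratio
    return [s * gain for s in samples]

# A -20 dB difference value reduces the amplitude to one tenth.
quiet = apply_level_difference([0.5, -0.5, 0.25], delta_l_db=-20.0)
```

The attenuated samples represent the average sound as heard in the receiving space 1D.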
The received sound signal L_B calculated by the received sound signal calculation unit 12g is output to the received sound signal output unit 12h (see FIG. 2). The received sound signal output unit 12h then outputs the predetermined received sound signal as an audible signal to the sound generation device 50 (see FIG. 2). In this way, when the sound source 2 is assumed to be in one of the two acoustic spaces adjacent across the partition wall 6 (the sound source space 1C and the sound receiving space 1D) and the sound receiver 4 in the other, the control unit 12 (see FIG. 2) calculates the received sound signal based on information including the sound transmission loss of the partition wall 6.
If the sound transmission loss of the partition wall 6 is changed by operating the terminal 30, the control unit 12 may change the received sound signal in accordance with the change in sound transmission loss. This allows the user to compare, for example, the sound heard by the sound receiver 4 when a window (not shown) in the partition wall 6 is closed with the sound heard when the window is open.
<Effects>
According to the sixth embodiment, when the sound source space 1C in which the sound source 2 is provided and the sound receiving space 1D in which the sound receiver 4 is present are adjacent across the partition wall 6, the user can simulate the sound heard by the sound receiver 4. For example, when the user runs a vacuum cleaner in the living room of a house, the user can check how it sounds to a person in the bedroom next to the living room. Likewise, when the user runs a vacuum cleaner in a room, the user can check how it sounds to a person living in the next room.
≪Seventh Embodiment≫
The seventh embodiment describes a case in which a sound source 2 (see FIG. 8) is provided in an open space such as outdoors, and a sound receiver 4 (see FIG. 8) is present in a sound receiving space 1F separated from the open space by a partition wall 7 (see FIG. 8). The configuration of the acoustic characteristic analysis system 100 (see FIG. 2) is the same as in the first embodiment. Therefore, only the parts that differ from the first embodiment are described, and redundant descriptions are omitted.
FIG. 8 is an explanatory diagram of the acoustic space targeted by the acoustic characteristic analysis system according to the seventh embodiment.
The sound source space 1E shown in FIG. 8 is an open space such as outdoors in which the sound source 2 is provided. An example of such an open space is an urban area, but the open space is not limited to this. The sound receiving space 1F is an acoustic space, such as a room, in which the sound receiver 4 is present, and is separated from the outdoor sound source space 1E by the partition wall 7. The surface of the partition wall 7 forms part of a side surface of the sound receiving space 1F.
The processing of the acoustic characteristic analysis system 100 is described with reference to FIG. 2 (see also FIG. 8 as appropriate).
The acoustic space information setting unit 12a shown in FIG. 2 sets the acoustic space information based on the operation of the terminal 30. Such acoustic space information includes information indicating that the sound source space 1E is an open space, as well as the dimensions of the sound receiving space 1F and the sound transmission loss of the partition wall 7. The sound transmittance τ of the partition wall 7 need not be uniform; the partition wall 7 may be composed of n kinds of parts with different transmittances τ. In this case, the sound transmission loss of the partition wall 7 (the total sound transmission loss) is expressed by equation (10) described in the sixth embodiment.
The sound source information setting unit 12b shown in FIG. 2 sets the sound source information. Such sound source information includes the position of the sound source 2 and the sound source signal, as well as the sound pressure level of the sound source signal, the distance between the sound pressure level measurement point and the sound source 2, and the directivity coefficient of the sound source 2. As shown in FIG. 8, with a predetermined position on the ground 8 in the sound source space 1E taken as the origin O(0, 0, 0), the position of the sound source 2 is expressed as (xp, yp, zp). As for the directivity coefficient of the sound source 2, it is set to 1 when the sound source 2 is at least 1 m above the ground 8 in an open space such as outdoors, and set to 2 when the sound source 2 is close to the ground 8 and at least 1 m away from other reflecting surfaces.
The sound receiving information setting unit 12c shown in FIG. 2 sets the sound receiving information. Such sound receiving information includes information designating one of the two spaces as the sound receiving space 1F, as well as the position of the partition wall 7 and the value of its sound transmission loss. As shown in FIG. 8, the position of the partition wall 7 is expressed as (xq, yq, zq), where xp ≠ xq, yp ≠ yq, and zp ≠ zq.
The sound collection device 20 shown in FIG. 8 collects the reverberation-time measurement acoustic signal (for example, the sound of the user clapping or pink noise) in the sound receiving space 1F. The measurement unit 12d shown in FIG. 2 measures the reverberation time of the sound receiving space 1F based on the collected signal. The reverberation time is calculated using equation (1) described in the first embodiment. The reverberation time measured by the measurement unit 12d is output to the acoustic characteristic calculation unit 12e.
The acoustic characteristic calculation unit 12e shown in FIG. 2 calculates the average sound absorption coefficient of the sound receiving space 1F based on the reverberation time. The average sound absorption coefficient is calculated using equation (2) described in the first embodiment. The average sound absorption coefficient calculated by the acoustic characteristic calculation unit 12e is output to the acoustic characteristic output unit 12f and the received sound signal calculation unit 12g.
The received sound signal calculation unit 12g calculates the average received sound signal of the sound emitted from the sound source 2 as heard in the sound receiving space 1F, based on the average sound absorption coefficient of the sound receiving space 1F as well as the acoustic space information, sound source information, and sound receiving information. Specifically, the average sound pressure level L_B of the sound receiving space 1F is expressed by the following equation (13), where L_C is the sound pressure level of the sound incident on the partition wall 7 from the sound source space 1E, TL is the sound transmission loss of the partition wall 7, S is the surface area of the partition wall 7, S_B is the surface area of the sound receiving space 1F, and α_B is the average sound absorption coefficient of the sound receiving space 1F.
$$L_B = L_C - TL + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (13)$$
When the acoustic power level of the sound source 2 is L_W, the sound pressure level L_C of the sound incident on the partition wall 7 from the sound source space 1E is expressed by the following equation (14), where Q is the directivity coefficient of the sound source 2.
$$L_C = L_W + 10\log_{10}\frac{Q}{4\pi r_1^2} \qquad (14)$$
Here, r_1 in equation (14) is the distance between the sound source 2 and the partition wall 7, expressed by the following equation (15).
$$r_1 = \sqrt{(x_p - x_q)^2 + (y_p - y_q)^2 + (z_p - z_q)^2} \qquad (15)$$
The center of the partition wall 7 may be used as its representative position. In this case, the average sound pressure level L_B in the sound receiving space 1F is expressed by the following equation (16).
$$L_B = L_W + 10\log_{10}\frac{Q}{4\pi r_1^2} - TL + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (16)$$
Therefore, the difference value ΔL_1 between the sound pressure level L_0 of the sound source signal and the average sound pressure level L_B of the sound receiving space 1F is expressed by the following equation (17).
$$\Delta L_1 = L_B - L_0 = 10\log_{10}\frac{Y}{Y_0} + 10\log_{10}\frac{Q}{4\pi r_1^2} - TL + 10\log_{10}\frac{S}{S_B\,\alpha_B} \qquad (17)$$
Equation (17) shows the following: when the sound emitted from the sound source 2 propagates through the partition wall 7 into the sound receiving space 1F, the average received sound signal L_B of the sound heard in the sound receiving space 1F is the sound source signal L_0 shifted by the difference value ΔL_1. The process of shifting the sound source signal L_0 by the difference value ΔL_1 can use, for example, an equalizer (not shown) capable of changing the frequency characteristics of a signal. The received sound signal calculation unit 12g (see FIG. 2) may include such an equalizer.
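Assuming the standard free-field spreading term and the relation L_W = L_0 + 10·log10(Y/Y0) between the power level and the measured source level, the difference value of equation (17) can be evaluated as in this sketch (the function name and every numeric input are illustrative, not values from the embodiment):

```python
import math

def delta_l1_eq17(Y, Y0, Q, source_pos, wall_pos, TL, S, S_B, alpha_B):
    """Difference dL1 = L_B - L_0 (dB) for an outdoor source heard through a
    partition, sketched from the assumed forms of equations (14)-(17).

    r1 is the source-to-wall distance of equation (15); the free-field
    spreading term 10log10(Q/(4*pi*r1^2)) and L_W = L_0 + 10log10(Y/Y0)
    are standard forms assumed here, since the published equations are
    not reproduced in this excerpt.
    """
    r1 = math.sqrt(sum((p - q) ** 2 for p, q in zip(source_pos, wall_pos)))
    return (10 * math.log10(Y / Y0)
            + 10 * math.log10(Q / (4 * math.pi * r1 ** 2))
            - TL
            + 10 * math.log10(S / (S_B * alpha_B)))

# A drone-like source 10 m from the wall, directivity Q=2 (illustrative).
d = delta_l1_eq17(Y=2 * math.pi, Y0=1.0, Q=2, source_pos=(0.0, 0.0, 10.0),
                  wall_pos=(0.0, 0.0, 0.0), TL=25.0, S=10.0, S_B=60.0,
                  alpha_B=0.3)
```

Feeding such a ΔL_1 into an equalizer-style gain stage yields the indoor version of the outdoor source signal.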
The received sound signal L_B calculated by the received sound signal calculation unit 12g is output to the received sound signal output unit 12h (see FIG. 2). The received sound signal output unit 12h then outputs the predetermined received sound signal as an audible signal to the sound generation device 50 (see FIG. 2). In this way, when the sound receiver 4 is assumed to be in the sound receiving space 1F (an acoustic space) separated by the partition wall 7 from the sound source space 1E (an open space) in which the sound source 2 is provided, the control unit 12 calculates the received sound signal based on information including the sound transmission loss of the partition wall 7.
If the sound transmission loss of the partition wall 7 is changed by operating the terminal 30, the control unit 12 may change the received sound signal in accordance with the change in sound transmission loss. This allows the user to check, for example, the difference in how the sound is heard when a window (not shown) in the partition wall 7 is opened or closed.
<Effects>
 According to the seventh embodiment, when the sound receiver 4 is present in the sound receiving space 1F separated by the partition wall 7 from an open space, such as the outdoors, in which the sound source 2 is located, the user can simulate the experience of the sound heard by the sound receiver 4. The user can likewise simulate how sound is heard in the sound receiving space 1F when an aircraft, helicopter, drone, or the like is flown, or when a construction machine such as an excavator is operated. Furthermore, during the development stage of a device, a designer can simulate the operating sound of the device in an assumed usage environment and reflect the result in the design.
<<Variations>>
 The acoustic characteristic analysis system 100 according to the present disclosure has been described through the embodiments above, but the present disclosure is not limited to these descriptions, and various modifications can be made.
 For example, in the first embodiment, the control unit 12 of the acoustic characteristic analysis device 10 (see FIG. 2) includes components such as the acoustic space information setting unit 12a, but this is not limiting. That is, the terminal 30 may take on some or all of the functions of the acoustic space information setting unit 12a and the other components of the control unit 12. The same applies to the second to seventh embodiments.
 In the first embodiment, the sound collection device 20 (see FIG. 2) is separate from the acoustic characteristic analysis device 10 (see FIG. 2), but this is not limiting; the sound collection device 20 may be included in the acoustic characteristic analysis device 10. The same applies to the fourth to seventh embodiments.
 In each embodiment, the average sound absorption coefficient and the reverberation time of the acoustic space 1 are displayed on the display device 40, but other data may be displayed as well. For example, the waveform of the received sound signal at the position of the sound receiver 4 (such as the moment-to-moment change in sound pressure level) may be displayed on the display device 40. In that case, the waveform of the sound source signal may also be displayed on the same screen, synchronized with the received sound signal.
 Furthermore, the acoustic characteristic calculation unit 12e (that is, the control unit 12) may calculate speech intelligibility based on the sound collected in the acoustic space 1 by the sound collection device 20, the acoustic space information, the sound source information, and the sound reception information. Speech intelligibility is a value indicating how easily speech can be heard. In this case, the sound source information and the sound reception information are input to the acoustic characteristic calculation unit 12e in addition to the acoustic space information. The speech intelligibility may be displayed on the display device 40 together with information indicating the characteristics of the acoustic space 1 (the average sound absorption coefficient and the reverberation time). This allows the user to grasp how easy it is to hold a conversation in the acoustic space 1. The display device 40 may instead show the speech intelligibility as a label such as "high", "medium", or "low".
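A minimal sketch of the "high" / "medium" / "low" labeling mentioned above might look as follows. The function name and the threshold values are hypothetical illustration choices; this disclosure does not specify how the intelligibility score is scaled or where the label boundaries lie.

```python
def intelligibility_label(score, high_threshold=0.75, medium_threshold=0.45):
    """Map a speech-intelligibility score in the range [0, 1] to the
    label shown on the display device.  The thresholds are illustrative
    assumptions, not values taken from this disclosure."""
    if score >= high_threshold:
        return "high"
    if score >= medium_threshold:
        return "medium"
    return "low"
```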
 In the sixth embodiment, the partition wall 6 (see FIG. 7) forms part of a side surface of the sound source space 1C and part of a side surface of the sound receiving space 1D, but this is not limiting. For example, the sixth embodiment can also be applied when a partition (not shown) forms the ceiling surface (or floor surface) of the sound source space and the floor surface (or ceiling surface) of the sound receiving space. The same applies to the seventh embodiment.
 In each embodiment, the sound source 2 is a predetermined device, but this is not limiting. For example, each embodiment can also be applied when the sound source 2 is a human voice or an animal call.
 Each embodiment can also be applied to the analysis of noise caused by the operation of equipment (that is, the sound source 2). For example, based on the seventh embodiment, it is possible to analyze the noise generated when a construction machine or the like is operated outdoors.
 The embodiments can also be combined as appropriate. For example, the second and sixth embodiments may be combined: when the sound source space 1C and the sound receiving space 1D are adjacent to each other across the partition wall 6 (sixth embodiment), the control unit 12 may calculate the received sound signal based on the average sound absorption coefficient or the like input by operating the terminal 30 (second embodiment).
 As another example, the third and seventh embodiments may be combined so that, in a virtual space (third embodiment), the control unit 12 calculates a received sound signal representing how sound is heard in the sound receiving space 1F separated from an open space by the partition wall 7 (seventh embodiment).
 The third and sixth embodiments may also be combined as follows. In a virtual space, when the terminal 30 is operated to switch between a state in which a predetermined character (sound receiver) is in the sound source space in which the sound source 2 is provided and a state in which the character is in the sound receiving space adjacent to the sound source space across the partition wall 6, the control unit 12 may change the position of the character on the display screen and change the received sound signal of the sound heard at the character's position. With this configuration, the received sound signal changes as the character's position changes, enhancing the sense of realism when the user has a simulated experience in the virtual space.
 The third, sixth, and seventh embodiments may also be combined as follows. In a virtual space, when the terminal 30 is operated to switch between a state in which a predetermined character (sound receiver) is in the sound receiving space adjacent, across the partition wall 6, to the sound source space in which the sound source 2 is provided and a state in which the character is in the sound receiving space separated by the partition wall 7 from an open space (such as the outdoors) in which the sound source 2 is provided, the control unit 12 may change the position of the character on the display screen and change the received sound signal. This configuration also enhances the sense of realism when the user has a simulated experience in the virtual space.
 Similarly, the third and seventh embodiments may be combined as follows. In a virtual space, when the terminal 30 is operated to switch between a state in which a predetermined character (sound receiver) is in the sound source space in which the sound source 2 is provided and a state in which the character is in the sound receiving space separated by the partition wall 7 from an open space (such as the outdoors) in which the sound source 2 is provided, the control unit 12 may change the position of the character on the display screen and change the received sound signal. This configuration likewise enhances the sense of realism when the user has a simulated experience in the virtual space.
 At least some of the calculation results of the components of the control unit 12 (see FIG. 2), such as the acoustic space information setting unit 12a, may be stored in an information recording medium such as a hard disk drive or a removable disk.
 The processes executed by the acoustic characteristic analysis system 100 (such as the acoustic characteristic analysis method) may be executed as a computer program. The program can be provided via a communication line, or written onto a recording medium such as a CD-ROM and distributed.
 Each embodiment has been described in detail to explain the present invention clearly, and the invention is not necessarily limited to one having all of the described configurations. Part of the configuration of an embodiment can be added to, deleted from, or replaced with another configuration. The mechanisms and configurations described above are those considered necessary for the explanation, and do not necessarily represent all of the mechanisms and configurations of the product.
 1 Acoustic space (real space)
 1B Acoustic space (virtual space)
 1C Sound source space (acoustic space)
 1D Sound receiving space (acoustic space)
 1E Sound source space (acoustic space, open space)
 1F Sound receiving space (acoustic space)
 2 Sound source
 3 Object
 4 Sound receiver
 5 Character
 6, 7 Partition wall
 10, 10A Acoustic characteristic analysis device
 11 Storage unit
 12, 12A Control unit
 12a, 12Aa Acoustic space information setting unit
 12b Sound source information setting unit
 12c Sound reception information setting unit
 12d Measurement unit
 12e, 12Ae Acoustic characteristic calculation unit
 12f Acoustic characteristic output unit
 12g Received sound signal calculation unit
 12h Received sound signal output unit
 20, 21, 22 Sound collection device
 30 Terminal
 40 Display device
 50 Sound generation device
 60 Vacuum cleaner (sound source)
 41 Smartphone
 41a Display (display device)
 100, 100A Acoustic characteristic analysis system

Claims (15)

  1.  An acoustic characteristic analysis system comprising a control unit that outputs, as an audible signal, a received sound signal for a user to simulate the experience of hearing the sound of a predetermined sound source in an acoustic space, which is a real space or a virtual space, based on a characteristic of the acoustic space.
  2.  The acoustic characteristic analysis system according to claim 1, wherein the characteristic is a reverberation time or an average sound absorption coefficient in the acoustic space.
  3.  The acoustic characteristic analysis system according to claim 2, wherein the control unit measures the reverberation time based on a predetermined sound picked up in the acoustic space by a sound collection device, and calculates the average sound absorption coefficient based on the dimensions of the acoustic space and the reverberation time.
  4.  The acoustic characteristic analysis system according to claim 2, wherein the control unit causes a display device to display at least one of the reverberation time and the average sound absorption coefficient.
  5.  The acoustic characteristic analysis system according to claim 2, wherein the control unit includes:
     an acoustic space information setting unit that sets acoustic space information including dimensions of the acoustic space;
     a sound source information setting unit that sets sound source information including a position of the sound source, a sound pressure level of the sound source, and a sound source signal;
     a sound reception information setting unit that sets sound reception information including a position of a sound receiver in the acoustic space; and
     a received sound signal calculation unit that calculates the received sound signal based on the acoustic space information, the sound source information, the sound reception information, and the average sound absorption coefficient.
  6.  The acoustic characteristic analysis system according to claim 5, wherein, when a value of the average sound absorption coefficient or the reverberation time is input by operating a terminal, the control unit uses the input value when calculating the received sound signal.
  7.  The acoustic characteristic analysis system according to claim 1, wherein, when at least one of the dimensions of the acoustic space, the characteristic of the acoustic space, the position of the sound source, the type of the sound source, and the position of a sound receiver in the acoustic space is changed by operating a terminal, the control unit changes the received sound signal based on the changed information.
  8.  The acoustic characteristic analysis system according to claim 5, wherein the control unit calculates the received sound signal based on the acoustic space information, the sound source information, the sound reception information, the average sound absorption coefficient, and a head-related transfer function of the sound receiver.
  9.  The acoustic characteristic analysis system according to claim 5, wherein the control unit causes a display device to display a speech intelligibility based on the sound collected in the acoustic space by a sound collection device, the acoustic space information, the sound source information, and the sound reception information.
  10.  The acoustic characteristic analysis system according to claim 1, wherein, when it is assumed that the sound source is present in one of two acoustic spaces adjacent to each other across a partition wall and a sound receiver is present in the other, the control unit calculates the received sound signal based on information including the sound transmission loss of the partition wall.
  11.  The acoustic characteristic analysis system according to claim 1, wherein, when it is assumed that a sound receiver is present in the acoustic space separated by a partition wall from an open space in which the sound source is provided, the control unit calculates the received sound signal based on information including the sound transmission loss of the partition wall.
  12.  The acoustic characteristic analysis system according to claim 1, wherein the acoustic space is the virtual space, and the characteristic is a reverberation time or an average sound absorption coefficient in the acoustic space.
  13.  The acoustic characteristic analysis system according to claim 12, wherein the control unit displays an image corresponding to the sound source and a predetermined character on a display device, and sets the position of a sound receiver who hears the sound of the sound source in the acoustic space as the position of the character.
  14.  The acoustic characteristic analysis system according to claim 12, wherein the control unit displays an image corresponding to the sound source and a predetermined character on a display device, and changes the received sound signal in response to a change in the position of the character or a change in the position of the sound source.
  15.  An acoustic characteristic analysis method comprising a process of outputting, as an audible signal, a received sound signal for a user to simulate the experience of hearing the sound of a predetermined sound source in an acoustic space, which is a real space or a virtual space, based on a characteristic of the acoustic space.
PCT/JP2023/030763 2022-10-20 2023-08-25 Acoustic characteristic analysis system and acoustic characteristic analysis method WO2024084811A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-168381 2022-10-20
JP2022168381A JP2024060836A (en) 2022-10-20 2022-10-20 Acoustic characteristic analysis system and acoustic characteristic analysis method

Publications (1)

Publication Number Publication Date
WO2024084811A1 true WO2024084811A1 (en) 2024-04-25

Family

ID=90737628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/030763 WO2024084811A1 (en) 2022-10-20 2023-08-25 Acoustic characteristic analysis system and acoustic characteristic analysis method

Country Status (2)

Country Link
JP (1) JP2024060836A (en)
WO (1) WO2024084811A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06289882A (en) * 1993-03-31 1994-10-18 Victor Co Of Japan Ltd Sound field simulation system
JPH07168587A (en) * 1992-10-13 1995-07-04 Matsushita Electric Ind Co Ltd Virtually experiencing device for sound environment and analyzing method for sound environment
JP2014026240A (en) * 2012-07-30 2014-02-06 Yamaha Corp Acoustic characteristic evaluation device
JP2020166052A (en) * 2019-03-28 2020-10-08 株式会社デンソーテン Acoustic device and acoustic control method
JP2021076768A (en) * 2019-11-12 2021-05-20 株式会社カプコン Audio reproduction program and audio reproduction device


Also Published As

Publication number Publication date
JP2024060836A (en) 2024-05-07

Similar Documents

Publication Publication Date Title
US11617050B2 (en) Systems and methods for sound source virtualization
US9107021B2 (en) Audio spatialization using reflective room model
CN1685762A (en) Sound reproduction system, program and data carrier
CN104240695A (en) Optimized virtual sound synthesis method based on headphone replay
Barron Auditorium acoustic modelling now
WO2024084811A1 (en) Acoustic characteristic analysis system and acoustic characteristic analysis method
WO2023246327A1 (en) Audio signal processing method and apparatus, and computer device
WO2021067183A1 (en) Systems and methods for sound source virtualization
TWI640983B (en) Method for simulating room acoustics effect
Amengual Garí et al. Evaluation of real-time sound propagation engines in a virtual reality framework
Kurniawan et al. Design and user evaluation of a spatial audio system for blind users
Vorländer Auralization of spaces
JPH11224273A (en) Noise simulation device and method
Celestinos et al. Low-Frequency Loudspeaker–Room Simulation Using Finite Differences in the Time Domain—Part 1: Analysis
Kim et al. Inexpensive indoor acoustic material estimation for realistic sound propagation
US20200133621A1 (en) Capacitive environmental sensing for a unique portable speaker listening experience
JP7152931B2 (en) Sound sensation method and sound sensation system
KR20200095778A (en) Space sound analyser based on material style method thereof
WO2024099324A1 (en) Noise reduction method, projection device, and storage medium
JP7403436B2 (en) Acoustic signal synthesis device, program, and method for synthesizing multiple recorded acoustic signals of different sound fields
Grimm et al. Implementation and perceptual evaluation of a simulation method for coupled rooms in higher order ambisonics
JP7032837B1 (en) Sonic propagation simulation system
Castro et al. Walk-through auralization framework for virtual reality environ-ments powered by game engine architectures, Part II
CN113936633A (en) Noise reduction method, noise reduction device and household appliance
JPH11327568A (en) Device and method for experiencing sound field in room