WO2020184423A1 - Sound wave generator, broadcasting system, method for generating sound wave, and program - Google Patents

Sound wave generator, broadcasting system, method for generating sound wave, and program

Info

Publication number
WO2020184423A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound wave
sound
data
information
original
Prior art date
Application number
PCT/JP2020/009631
Other languages
French (fr)
Inventor
Tsutomu Kawase
Hisaharu SUZUKI
Atsushi Takigawa
Jun Hosokawa
Original Assignee
Ricoh Company, Ltd.
Evixar Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2020034341A external-priority patent/JP2020150538A/en
Application filed by Ricoh Company, Ltd., Evixar Inc. filed Critical Ricoh Company, Ltd.
Publication of WO2020184423A1 publication Critical patent/WO2020184423A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 27/00 Public address systems
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 3/00 Audible signalling systems; Audible personal calling systems
    • G08B 3/10 Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/018 Audio watermarking, i.e. embedding inaudible data in the audio signal
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 7/00 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B 7/06 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
    • G08B 7/066 Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources guiding along a path, e.g. evacuation path lighting strip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H04R 3/12 Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers

Definitions

  • the present invention relates to a sound wave generator, a broadcasting system, a method for generating a sound wave, and a program.
  • a technology is known where a sound wave that includes predetermined identification information is output; an information terminal receives the sound wave and acquires the identification information; and the information terminal is provided with information corresponding to a location where the information terminal acquires the identification information.
  • An information providing system is known that uses, for superimposition of an audio ID, a sound in a frequency range reproducible by a speaker, more particularly a sound in a frequency range higher than the frequency range used for television programs or the like (for example, a frequency range higher than or equal to 16 kHz) (for example, see PTL 1).
  • identification information is sent with the use of a sound wave in a frequency range higher than or equal to 16 kHz, which humans cannot hear (hereinafter, referred to as a non-audible sound), and thus, identification information can be sent to an information terminal while adverse effects on an audible sound are reduced.
  • broadcasting equipment at the facility may be switched to act as emergency broadcasting equipment that outputs a warning sound, whereby it may be impossible to execute a normal voice broadcast.
  • One embodiment of the present invention has been devised in view of the above-described problems and allows information to be sent to an information terminal by a broadcasting system having a limited range of available frequencies, such as an emergency broadcasting system, while adverse effects on an audible sound are reduced.
  • a sound wave generator for generating a sound wave in a broadcasting system for broadcasting a sound wave that is within a predetermined frequency range, includes a storage unit configured to store sound wave data that represents information using a predetermined frequency component that is within the predetermined frequency range; an extracting unit configured to analyze original-sound data of the broadcasting to extract a section including a frequency component that is similar to the predetermined frequency component; and a sound wave generating unit configured to, in the extracted section, replace the frequency component of the original-sound data that is similar to the predetermined frequency component with the sound wave data, to generate the sound wave where the information is embedded.
  • FIG. 1 depicts an example of a system configuration of a broadcasting system according to any one of first through third embodiments
  • FIG. 2 depicts an example of a hardware configuration of a second broadcast apparatus according to any one of the first through third embodiments
  • FIG. 3 depicts an example of a functional configuration of the second broadcast apparatus according to the first embodiment
  • FIG. 4A depicts an example of an original sound according to the first embodiment
  • FIG. 4B depicts an example of a sound wave representing a sound wave ID according to the first embodiment
  • FIG. 4C depicts an example of an original sound and a sound wave representing a sound wave ID according to the first embodiment
  • FIG. 5A depicts an example of a sound wave generating process according to the first embodiment
  • FIG. 5B depicts the example of a sound wave generating process according to the first embodiment
  • FIG. 5C depicts the example of a sound wave generating process according to the first embodiment
  • FIG. 5D depicts the example of a sound wave generating process according to the first embodiment
  • FIG. 6 is a flowchart depicting an example of the sound wave generating process according to the first embodiment
  • FIG. 7 depicts an example of a functional configuration of the second broadcast apparatus according to the second embodiment
  • FIG. 8 is a flowchart depicting an example of a sound wave generating process according to the second embodiment
  • FIG. 9 depicts an image of an example of a sound wave representing a sound wave ID according to the third embodiment
  • FIG. 10 depicts an example of a functional configuration of the second broadcast apparatus according to the third embodiment
  • FIG. 11A is a flowchart depicting an example of a process for determining a sound wave representing a sound wave ID according to the third embodiment
  • FIG. 11B is a flowchart depicting the example of a process for determining a sound wave representing a sound wave ID according to the third embodiment
  • FIG. 12 depicts an example of a hardware configuration of an information terminal according to any one of the first through third embodiments
  • FIG. 13 depicts an example of a functional configuration of the information terminal according to any one of the first through third embodiments
  • FIG. 14A depicts an example of evacuation information according to any one of the first through third embodiments
  • FIG. 14B depicts another example of evacuation information according to any one of the first through third embodiments
  • FIG. 15 is a flowchart depicting an example of a process of the information terminal according to any one of the first through third embodiments
  • FIG. 16 depicts another example of the system configuration of the broadcasting system according to any one of the first through third embodiments.
  • FIG. 1 depicts an example of a system configuration of a broadcasting system according to the embodiments.
  • the broadcasting system 100 includes a first broadcast apparatus 110, a plurality of speakers 111a, 111b, 111c, ..., a second broadcast apparatus 120, and a plurality of speakers 121a, 121b, 121c, ..., provided at a facility 102, such as a sports venue, for example.
  • any one of the plurality of speakers 111a, 111b, 111c, ... may be referred to by the expression "speaker 111".
  • Any one of the plurality of speakers 121a, 121b, 121c, ... may be referred to by the expression "speaker 121".
  • the facility 102 is not limited to a sports venue, and may be, for example, another facility or venue such as an indoor facility, an underground facility, or an event venue.
  • the first broadcast apparatus 110 is connected to the plurality of speakers 111a, 111b, 111c, ... to act as broadcasting equipment for a normal situation for broadcasting sounds, such as voice guidance, music, and so forth, at the facility 102.
  • the first broadcast apparatus 110 can send to an information terminal 104 identification information or the like using a sound wave in the frequency range higher than or equal to 16 kHz, which humans cannot hear, included in the voice frequency band reproducible by a speaker 111, for example, as in the technology disclosed in PTL 1.
  • the information terminal 104 extracts predetermined identification information from a sound wave output by a speaker 111 by executing an application program prepared for the broadcasting system 100 (hereinafter, referred to as an "application"), for example.
  • the information terminal 104, for example, provides any item of various contents (for example, information concerning sporting competitions, guidance information for available seats and shops, information concerning cheering, and so forth) to a user 103 in accordance with the extracted identification information and so forth.
  • the first broadcast apparatus 110 and the speakers 111a, 111b, 111c, ... may be configured at will, and thus, the detailed description will be omitted.
  • the second broadcast apparatus 120 is connected to the plurality of speakers 121a, 121b, 121c, ... to act as emergency broadcasting equipment that outputs emergency alarm sounds when a disaster such as fire or an earthquake occurs (i.e., in a time of emergency). For example, in the event of fire or an earthquake occurring at the facility 102, the broadcasting system of the facility 102 switches from using the first broadcast apparatus 110 to using the second broadcast apparatus 120, and then, only emergency alarm sounds output by the second broadcast apparatus 120 are broadcast at the facility 102.
  • the first broadcast apparatus 110 and the second broadcast apparatus 120 may be, for example, included in one broadcast apparatus 101, as depicted in FIG. 1, or may be separate broadcast apparatuses.
  • the second broadcast apparatus 120 is an example of a sound wave generator.
  • high-impedance speakers 121 and wiring 122 are used to efficiently transmit emergency alarm sounds at a facility with relatively low power consumption.
  • in such a high-impedance configuration, interference by noise from the outside would be more likely to occur and an oscillation would be likely to occur if, for example, the high frequency range higher than or equal to 10 kHz were used. Therefore, the range of available frequencies of the second broadcast apparatus 120 and the speakers 121a, 121b, 121c, ... acting as emergency broadcasting equipment is limited, and thus, the second broadcast apparatus 120 and the speakers 121a, 121b, 121c, ... cannot output sound waves in the frequency range, for example, higher than or equal to 10 kHz.
  • the second broadcast apparatus 120, unlike the first broadcast apparatus 110, cannot send to the information terminal 104 identification information and so forth using sound waves in the frequency range higher than or equal to 16 kHz (i.e., non-audible sounds).
  • it is desirable, however, that the second broadcast apparatus 120, which is emergency broadcasting equipment, send sound waves including identification information and so forth to the information terminal 104 to provide, for example, information on the disaster and information on an evacuation route to the information terminal 104.
  • therefore, the second broadcast apparatus 120 generates and sends, to the information terminal 104, sound waves in the frequency range lower than or equal to 10 kHz that include identification information and so forth, while influence on audible sounds is reduced. An actual method for generating such sound waves will be described later.
  • the information terminal 104 can extract identification information and so forth from emergency alarm sounds output by the second broadcast apparatus 120 and can provide information such as an evacuation route to the user 103.
  • FIG. 2 depicts an example of the hardware configuration of the second broadcast apparatus 120 according to the embodiments.
  • the second broadcast apparatus 120 includes, for example, a CPU 201, a memory 202, a storage device 203, a communication I/F (InterFace) 204, a sound wave processing circuit 205, an input I/F 206, one or more output I/Fs 207a, 207b, ..., an input device 208, a display device 209, and a system bus 210.
  • the CPU 201 is an arithmetic and logic unit that implements each function of the second broadcast apparatus 120 by reading and executing instructions written in programs and using data stored in the storage device 203, for example.
  • the memory 202 includes, for example, a RAM (Random Access Memory), which is a volatile memory used as a work area of the CPU 201, and a ROM (Read-Only Memory), which is a non-volatile memory where programs for booting up and so forth are stored.
  • the storage device 203 is a non-volatile mass storage device such as a HDD (hard disk drive) or a SSD (solid state drive), for example, and stores an OS (Operating System), application programs, and various data.
  • the communication I/F 204 is an interface for connecting the second broadcast apparatus 120 to communication networks and for communicating with other apparatuses.
  • the sound wave processing circuit 205 is a circuit for, for example, amplifying, analyzing, and filtering of sound waves under the control of the CPU 201 and may include, for example, an audio amplification circuit, a DAC (Digital to Analog Converter), an ADC (Analog to Digital Converter), and a DSP (Digital Signal Processor).
  • the input I/F 206 is an interface for inputting sound signals to the sound wave processing circuit 205.
  • the one or more output I/Fs 207a, 207b, ... are interfaces for outputting sound waves to the speakers 121 installed at the facility 102, for example. It is desirable to have the second broadcast apparatus 120 include a plurality of output I/Fs 207a, 207b, ... so as to output sound signals different from each other to a plurality of areas at the facility 102.
  • the input device 208 is, for example, a mouse, a keyboard, or a touch panel, and is used to receive a user's input operation on the second broadcast apparatus 120.
  • the display device 209 is, for example, a display, and is used to display the results of processing performed by the second broadcast apparatus 120.
  • the input device 208 and the display device 209 may be, for example, an integrated display and input device such as a touch panel display.
  • the system bus 210 is connected to each of the above-described elements and transmits, for example, address signals, data signals, and various control signals.
  • the hardware configuration of the second broadcast apparatus 120 depicted in FIG. 2 is an example.
  • the second broadcast apparatus 120 may be, for example, a combination of a PC (Personal Computer) having a typical computer configuration and a sound wave processing apparatus including the sound wave processing circuit 205, the input I/F 206, and the output I/F 207a, 207b, ....
  • FIG. 3 depicts an example of a functional configuration of the second broadcast apparatus 120 according to the first embodiment of the present invention.
  • the second broadcast apparatus 120 functions as, for example, a situation detecting unit 301, an input unit 302, an analysis unit 303, an extracting unit 304, a sound wave generating unit 305, an output unit 306, a display and input control unit 307, and a storage unit 308 by executing a predetermined program(s) with the CPU 201 depicted in FIG. 2.
  • At least some of the above-described functional elements may be implemented by hardware.
  • the situation detecting unit 301 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2, and detects that a predetermined disaster has occurred at the facility 102. For example, the situation detecting unit 301 detects that a predetermined disaster has occurred at the facility 102 through the communication I/F 204 as a result of disaster information being sent from an emergency system or as a result of an input operation being performed by an administrator or the like to the input device 208.
  • the second broadcast apparatus 120 starts a disaster broadcast, i.e., outputting an emergency alarm sound and so forth using the speakers 121 installed at the facility 102.
  • the input unit 302 may be implemented, for example, by a program executed in the CPU 201 depicted in FIG. 2 together with the input I/F 206, the sound wave processing circuit 205, and so forth, and receives sound waves, sound wave signals, and sound wave data.
  • the sound waves may be sound waves of various sounds such as, for example, voices, alarm sounds, sound effects, sound trademarks, and sound waves collected at the facility 102.
  • the analysis unit 303 may be implemented by, for example, a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth, and performs time frequency analysis on sound waves using STFT (Short Time Fourier Transform), FFT (Fast Fourier Transform), or the like.
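  • The following is a minimal sketch, in Python, of the kind of time frequency analysis the analysis unit 303 is described as performing, using scipy's STFT; the function name, sampling rate, and window length are illustrative assumptions rather than values taken from this description.

```python
# Minimal sketch (not the patented implementation): time frequency analysis
# of original-sound data with a short-time Fourier transform, roughly as the
# analysis unit 303 is described. Names and parameters are assumptions.
import numpy as np
from scipy.signal import stft

def analyze_original_sound(samples: np.ndarray, sample_rate: int = 44100):
    """Return frequencies, frame times, and the STFT magnitude spectrogram."""
    # 2048-sample windows (about 46 ms at 44.1 kHz) with 75 % overlap give a
    # reasonable trade-off between time and frequency resolution.
    freqs, times, spectrum = stft(samples, fs=sample_rate,
                                  nperseg=2048, noverlap=1536)
    magnitude = np.abs(spectrum)  # shape: (n_freqs, n_frames)
    return freqs, times, magnitude
```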
  • the second broadcast apparatus 120 embeds, in sound waves (original sounds) of disaster broadcasts, identification information (hereinafter, referred to as sound wave IDs) that cause the information terminal 104 to perform processes according to the broadcast contents.
  • a sound wave where a sound wave ID has not been embedded will be referred to as an "original sound" and a sound wave obtained after embedding a sound wave ID in the original sound may be referred to as a "broadcast sound wave".
  • the analysis unit 303 performs time frequency analysis on an original sound and analyzes a frequency component included at each interval of the original sound.
  • the extracting unit 304 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth.
  • the storage unit 308 of the second broadcast apparatus 120 previously stores one or more sets of sound wave data 312 that represent sound wave IDs using predetermined frequency components included in a predetermined frequency range (for example, the frequency range lower than or equal to 10 kHz) and frequency-component data 313 representing a frequency component of each set of sound wave data 312.
  • the extracting unit 304 analyzes original-sound data using the analysis unit 303 and extracts sections including frequency components that are similar to predetermined frequency components representing sound wave IDs. For example, the extracting unit 304 extracts a section from original-sound data; the section includes a predetermined frequency component to be used to represent a sound wave ID and has a length greater than or equal to a length (for example, 0.3 seconds) sufficient to embed sound wave data representing a sound wave ID.
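  • A hedged sketch of how the extracting unit 304 might locate such sections from an STFT magnitude spectrogram is shown below; the band edges and intensity threshold are illustrative assumptions, and only the 0.3-second example length comes from the text.

```python
# Hedged sketch of the extracting unit 304: group STFT frames whose energy in
# the band assumed to carry the sound wave ID stays above a threshold, and
# keep only runs at least min_length_s long.
import numpy as np

def extract_sections(freqs, times, magnitude,
                     band=(6000.0, 9000.0),
                     level_threshold=0.01,
                     min_length_s=0.3):
    """Return (start_time, end_time) pairs of candidate sections in seconds."""
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = magnitude[in_band, :].mean(axis=0)  # mean magnitude per frame
    active = band_energy >= level_threshold

    sections, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = times[i]
        elif not flag and start is not None:
            if times[i] - start >= min_length_s:
                sections.append((start, times[i]))
            start = None
    if start is not None and times[-1] - start >= min_length_s:
        sections.append((start, times[-1]))
    return sections
```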
  • the sound wave generating unit 305 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth.
  • the sound wave generating unit 305 replaces a predetermined frequency component of original-sound data with sound wave data representing a sound wave ID in a section extracted by the extracting unit 304, thus generating a broadcast sound wave where the sound wave ID is embedded.
  • FIGs. 4A-4C depict an original sound as well as a sound wave representing a sound wave ID according to the first embodiment.
  • FIG. 4A depicts an example of an original sound.
  • the original sound 401 is a sound wave that varies in amplitude over time and may be any one of various sound waves such as, for example, a voice message, an alarm sound, a sound effect, a sound trademark, and so forth.
  • the original sound 401 may be output in the frequency range lower than or equal to 10 kHz, for example, as depicted in FIG. 4C.
  • the frequency range lower than or equal to 10 kHz is an example of the predetermined frequency range; the predetermined frequency range may be any other frequency range.
  • FIG. 4B depicts an image of an example of a sound wave representing a sound wave ID.
  • a sound wave 402 representing a sound wave ID is a sound wave representing a sound wave ID (for example, ID1, or the like) at a predetermined interval T1 (for example, 0.3 seconds, or so).
  • a sound wave 402 representing a sound wave ID represents the sound wave ID using a predetermined frequency component in the frequency range lower than or equal to 10 kHz, as depicted in FIG. 4C, for example.
  • a method for representing a sound wave ID is not particularly limited.
  • if the sound wave 402 representing the sound wave ID is output at a low sound volume, the sound wave 402 representing the sound wave ID is masked by the original sound 401 and it is difficult for the information terminal 104 to acquire the sound wave ID.
  • if the sound wave 402 representing the sound wave ID is output at a sufficiently high sound volume, the information terminal 104 can acquire the sound wave ID.
  • however, because the sound wave 402 representing the sound wave ID is an audible sound in the frequency range lower than or equal to 10 kHz, there is a problem in that the sound wave 402 representing the sound wave ID may be audible to the user 103.
  • the second broadcast apparatus 120 generates and sends a sound wave including identification information and so forth to the information terminal 104 in the frequency range lower than or equal to 10 kHz, while also reducing the influence on an audible sound.
  • FIGs. 5A-5D depict a sound wave generating process according to the first embodiment.
  • FIG. 5A depicts an example of frequency components of original sounds.
  • the second broadcast apparatus 120 acquires a frequency component included in an original sound 401 at each interval, as depicted in FIG. 5A, by, for example, analyzing the original-sound data with the analysis unit 303.
  • an original sound 401 at a frequency extent 501 is output at an interval from time t1 to time t2; and an original sound 401 at a frequency extent 502 is output at an interval T2 from time t3 to time t4.
  • the frequency components of the original sound 401 depicted in FIG. 5A are an example for illustration.
  • FIG. 5B depicts an image of an example of a frequency component of a sound wave representing a sound wave ID.
  • a sound wave 402 representing a sound wave ID is output for a predetermined interval T1 from time t3 to time t5 (for example, 0.3 seconds) at a predetermined frequency extent 503, for example, as depicted in FIG. 5B.
  • the extracting unit 304 of the second broadcast apparatus 120 analyzes original-sound data using the analysis unit 303 and acquires a frequency component of an original sound 401 at each interval, for example, depicted in FIG. 5A.
  • the extracting unit 304 extracts from an original sound 401 a section including a frequency component similar to a frequency component of a sound wave 402 representing a sound wave ID.
  • the original sound 401 at a frequency extent 502 is output; the length of interval T2 (from time t3 to time t4) is longer than the predetermined interval T1 (from time t3 to time t5) at which the sound wave 402 representing the sound wave ID is output as depicted in FIG. 5B.
  • the frequency extent 502 includes the predetermined frequency extent 503 at which the sound wave 402 representing the sound wave ID is output; the frequency extent 502 is approximately the same as the frequency extent 503.
  • the interval T2 (from time t3 to time t4) of the original sound 401 is extracted by the extracting unit 304 as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID having the frequency extent 503.
  • the extracting unit 304 extracts, from the original-sound data, a section that, for example, includes the frequency extent 503 at which the sound wave 402 representing the sound wave ID is output and that is longer than the interval T1 at which the sound wave 402 is output, as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID.
  • the extracting unit 304 may extract a section, as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID, in a case where, in addition to the above-described conditions, the section has the frequency component that is the same as the predetermined frequency component representing the sound wave ID and that has the sound intensity level greater than or equal to a predetermined level for a predetermined period of time.
  • the extracting unit 304 may extract, from the original-sound data, also a section that includes the frequency extent 503 at which the sound wave 402 representing the sound wave ID is output and that is shorter than the interval T1 at which the sound wave 402 is output, as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID.
  • FIG. 5C depicts an image of an example of frequency components after undergoing filtering.
  • the sound wave generating unit 305 filters off and thus removes the predetermined frequency extent 503 of the original-sound data at the section of the interval T2 from time t3 to time t4 extracted by the extracting unit 304, as depicted in FIG. 5C, for example.
  • the gap thus created at the section of the interval T2 from time t3 to time t4 of the original-sound data as a result of the predetermined frequency extent 503 of the original-sound data being thus removed is then used to output the sound wave 402 representing the sound wave ID.
  • the sound wave generating unit 305 generates a broadcast sound wave 403, as depicted in FIG. 5D, where, at the section of the interval T2 from time t3 to time t4, the sound wave 402 representing the sound wave ID depicted in FIG. 5B is inserted in the gap of the original sound thus created from filtering off and thus removing the predetermined frequency extent 503.
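  • A minimal sketch of this replacement step, assuming a 6-9 kHz frequency extent 503 and a Butterworth band-stop filter (both assumptions made only for illustration; the description does not fix the band or the filter), could look as follows.

```python
# Minimal sketch of the replacement shown in FIGs. 5C and 5D: remove the
# assumed ID band from the extracted section of the original sound, then add
# the sound wave representing the sound wave ID into the gap that was created.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def embed_id_in_section(original: np.ndarray, id_wave: np.ndarray,
                        start_sample: int, sample_rate: int = 44100,
                        band=(6000.0, 9000.0)) -> np.ndarray:
    """Return a copy of `original` with `id_wave` embedded at `start_sample`."""
    out = original.astype(float).copy()
    section = out[start_sample:start_sample + len(id_wave)]

    # Band-stop filter: keep everything in the section except the ID band.
    sos = butter(8, band, btype='bandstop', fs=sample_rate, output='sos')
    filtered = sosfiltfilt(sos, section)

    # The removed extent is now free, so the ID sound wave can occupy it.
    out[start_sample:start_sample + len(id_wave)] = filtered + id_wave
    return out
```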
  • FIG. 5D depicts an example of the frequency components of the thus generated broadcast sound wave 403.
  • the broadcast sound wave 403 generated by the sound wave generating unit 305 includes the frequency components similar to the frequency components of the original sound 401 depicted in FIG. 5A, while the sound wave ID is embedded in the broadcast sound wave 403.
  • the information terminal 104 acquires the sound wave ID from the broadcast sound wave 403 by, for example, filtering the broadcast sound wave 403 depicted in FIG. 5D to acquire the frequency extent 503.
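  • On the terminal side, isolating the frequency extent that carries the sound wave ID could be sketched as below; the band edges are the same illustrative assumption as above, and how the ID is then decoded from the filtered signal depends on the modulation scheme, which is left open here.

```python
# Illustrative sketch of the terminal side: isolate the band assumed to carry
# the sound wave ID with a band-pass filter before handing it to an ID decoder.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def recover_id_band(broadcast_wave: np.ndarray, sample_rate: int = 44100,
                    band=(6000.0, 9000.0)) -> np.ndarray:
    """Return only the assumed ID band of the received broadcast sound wave."""
    sos = butter(8, band, btype='bandpass', fs=sample_rate, output='sos')
    return sosfiltfilt(sos, broadcast_wave)
```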
  • although the sound wave 402 representing the sound wave ID is synthesized with (i.e., is embedded in) the original sound 401 so that the broadcast sound wave 403 is generated, it is possible to minimize the influence exerted on the original sound 401 due to the synthesizing (embedding). This is because the predetermined frequency component of the original sound 401 that is replaced with the sound wave 402 representing the sound wave ID is similar to the frequency component of the sound wave 402 representing the sound wave ID.
  • the output unit 306 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, the output I/F 207a, 207b, ..., and so forth, and outputs a broadcast sound wave generated by the sound wave generating unit 305.
  • the display and input control unit 307 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2, and so forth, and performs a control to display various display screen pages on the display device 209 and a control to receive the user's operations performed on the input device 208, for example.
  • the storage unit 308 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the storage device 203, the memory 202, and so forth, and stores various information, data, and so forth, such as, for example, original-sound data 311, sound wave data 312, and frequency-component data 313.
  • Original-sound data 311 is, for example, sound wave data of any one of various original sounds (for example, a voice message and an alarm sound) broadcast as a disaster broadcast.
  • Sound wave data 312 includes one or more sets of sound wave data representing one or more sound wave IDs.
  • Frequency-component data 313 is data representing the frequency component of each set of sound wave data 312.
  • the second broadcast apparatus 120 generates broadcast sound waves obtained from embedding various sound wave IDs in various original sounds 401; the second broadcast apparatus 120 outputs the broadcast sound waves from the speakers 121a, 121b, 121c, ....
  • <Process flow>
  • FIG. 6 is a flowchart depicting an example of a sound wave generating process according to the first embodiment.
  • FIG. 6 depicts an example of a sound wave generating process where the second broadcast apparatus 120 embeds a sound wave ID in an original sound to generate a broadcast sound wave.
  • In step S601, the second broadcast apparatus 120 acquires original-sound data 311 input to the input unit 302 or original-sound data 311 stored in the storage unit 308.
  • the original-sound data 311 may be, for example, digital data obtained from encoding an original sound 401 using an audio codec according to PCM (Pulse Code Modulation) or the like.
  • In step S602, the extracting unit 304 of the second broadcast apparatus 120 analyzes the frequency components of the original sound 401 using the analysis unit 303.
  • the extracting unit 304 analyzes the original-sound data using the analysis unit 303 according to time frequency analysis to acquire a frequency component of the original sound 401 at each interval, depicted in FIG. 5A, for example, and stores the acquired frequency-component data in the storage unit 308 or another storage device.
  • In step S603, using the frequency-component data 313 stored in the storage unit 308, the extracting unit 304 searches the original sound 401 for a section to extract that includes a frequency component similar to the predetermined frequency component (of a sound wave or a set of sound wave data) representing a sound wave ID.
  • In step S604, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether a section including the frequency component that is similar to the predetermined frequency component representing the sound wave ID (hereinafter, simply referred to as a "section") has been extracted in step S603. Upon extraction of a section in step S603, the process proceeds to step S605. In response to a section not being extracted in step S603, the second broadcast apparatus 120 performs, in step S607, a predetermined process for when a broadcast sound wave cannot be generated.
  • the predetermined process for when a broadcast sound wave cannot be generated is not particularly limited.
  • the predetermined process may be, for example, the display and input control unit 307 displaying a message indicating that a broadcast sound wave cannot be generated or a message urging the user to additionally provide original-sound data.
  • the predetermined process may be such that the display and input control unit 307 displays a selection screen for the user to determine whether to appropriately modify the original-sound data or to determine whether to output the sound wave 402 representing the sound wave ID at a time when the original sound 401 is absent.
  • In step S605, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether the length of the extracted section is longer than or equal to a predetermined length.
  • the predetermined length includes an interval (for example, 0.3 seconds or longer) required to embed the sound wave 402 representing the sound wave ID in the original sound 401.
  • In response to the length of the extracted section being longer than or equal to the predetermined length (YES in step S605), the process proceeds to step S606. In response to the length of the extracted section being shorter than the predetermined length (NO in step S605), the process proceeds to step S607.
  • In step S606, in the extracted section, the sound wave generating unit 305 of the second broadcast apparatus 120 replaces the predetermined frequency extent of the original-sound data with the sound wave data representing the sound wave ID, for example, as depicted in FIGs. 5C and 5D. As a result, a broadcast sound wave obtained from embedding the sound wave ID in the original sound 401 is generated.
  • the output unit 306 of the second broadcast apparatus 120 converts the sound wave data generated by the sound wave generating unit 305 into a sound wave signal (an analog signal) and outputs the sound wave signal to the speakers 121.
  • Each of the speakers 121 converts the input sound wave signal into a sound wave and outputs the sound wave.
  • the broadcasting system 100 outputs the broadcast sound wave where the sound wave ID is embedded to the facility 102.
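  • Tying the pieces together, the FIG. 6 flow (steps S601 through S607) could be approximated by the following sketch, which reuses the illustrative helper functions from the earlier sketches and is not the patented implementation itself.

```python
# Rough sketch of the FIG. 6 flow (steps S601-S607), reusing the illustrative
# helpers analyze_original_sound, extract_sections, and embed_id_in_section
# sketched above; names and thresholds are assumptions.
def generate_broadcast_wave(original, id_wave, sample_rate=44100,
                            min_length_s=0.3):
    freqs, times, mag = analyze_original_sound(original, sample_rate)   # S602
    sections = extract_sections(freqs, times, mag,
                                min_length_s=min_length_s)              # S603
    if not sections:                                                    # S604
        raise RuntimeError("no suitable section: handle as in step S607")
    start_s, _end_s = sections[0]                                       # S605
    start_sample = int(start_s * sample_rate)
    # S606: replace the ID band of the extracted section with the ID wave.
    return embed_id_in_section(original, id_wave, start_sample, sample_rate)
```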
  • FIG. 7 depicts an example of a functional configuration of the second broadcast apparatus according to the second embodiment of the present invention.
  • the second broadcast apparatus 120 according to the second embodiment depicted in FIG. 7 includes a sound wave modifying unit 701 in addition to the functional configuration of the second broadcast apparatus 120 according to the first embodiment depicted in FIG. 3.
  • the sound wave modifying unit 701 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth.
  • the sound wave modifying unit 701 modifies original-sound data such that the length of a section extracted by the extracting unit 304 becomes greater than or equal to a predetermined length required to embed a sound wave ID.
  • an affricate, whose frequency spectrum extends widely, is suitable for embedding a sound wave ID.
  • however, if an affricate included in a voice message does not have a sufficient length (for example, longer than or equal to 0.3 seconds) to embed a sound wave ID, it is not possible to embed the sound wave ID in the affricate.
  • an affricate of a voice message is one example of a sound in which to embed a sound wave ID.
  • a suitable sound for embedding a sound wave ID may be another sound (for example, a sound effect, a sound trademark, a striking sound, music, or the like) whose frequency spectrum extends widely, like an affricate.
  • the sound wave modifying unit 701 elongates an affricate or the like included in an original sound in such a manner that the affricate or the like comes to have a predetermined length required to embed a sound wave ID (i.e., modifies the original-sound data) for a case where the length of the affricate or the like included in the original sound is shorter than the predetermined length required to embed the sound wave ID.
  • as a result, a sound wave ID can be embedded in the affricate or the like.
  • for example, in a case where an affricate or the like of approximately 0.25 seconds is elongated to have a length of approximately 0.3 seconds, possible adverse effects on the original sound are very small.
  • a method for elongating an affricate or the like is not particularly limited.
  • a conventional technology using a phase vocoder technology can be used to elongate an affricate or the like.
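  • A minimal sketch of such elongation, assuming librosa's phase-vocoder-based time stretch is available (the text only says that a phase vocoder technology can be used), is shown below.

```python
# Hedged sketch of the sound wave modifying unit 701: elongate a too-short
# section (e.g. an affricate of about 0.25 s) to the roughly 0.3 s needed to
# embed a sound wave ID. librosa is an assumed dependency.
import librosa

def elongate_section(section, current_len_s: float, target_len_s: float = 0.3):
    """Time-stretch `section` so that it lasts about `target_len_s` seconds."""
    # A rate below 1 slows the audio down, i.e. makes it longer.
    rate = current_len_s / target_len_s  # e.g. 0.25 / 0.3 ~= 0.83
    return librosa.effects.time_stretch(section, rate=rate)
```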
  • FIG. 8 is a flowchart depicting an example of a sound wave generating process according to the second embodiment.
  • FIG. 8 depicts an example of a process where a plurality of sound wave IDs are embedded in original-sound data. Because steps S601 to S603 and S607 in FIG. 8 are basically the same as steps S601 to S603 and S607 of the first embodiment depicted in FIG. 6, the differences between the second embodiment and the first embodiment will be mainly described here.
  • In step S801, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether a section including a frequency component that is similar to the predetermined frequency component representing a sound wave ID (hereinafter, simply referred to as a "section") can be extracted in step S603. In response to a section being extracted in step S603, the process proceeds to step S802. In response to a section not being extracted in step S603, the process proceeds to step S810.
  • In step S802, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether the length of the extracted section is greater than or equal to a predetermined length.
  • the predetermined length includes an interval (for example, 0.3 seconds or longer) required to embed a sound wave 402 representing a sound wave ID in an original sound 401.
  • In response to the length of the extracted section being greater than or equal to the predetermined length (YES in step S802), the process proceeds to step S806. In response to the length of the extracted section being smaller than the predetermined length (NO in step S802), the process proceeds to step S803.
  • In step S803, the sound wave modifying unit 701 of the second broadcast apparatus 120 determines whether the length of the extracted section is greater than or equal to 80% of the predetermined length. In response to the length of the extracted section being greater than or equal to 80% of the predetermined length (YES in step S803), the process proceeds to step S804. In response to the length of the extracted section being smaller than 80% of the predetermined length (NO in step S803), the process proceeds to step S808. In this example, "80%" is used as the ratio of the predetermined length in the determination of step S803; a ratio different from 80% may be used instead.
  • In step S804, the sound wave modifying unit 701 of the second broadcast apparatus 120 modifies the extracted section of the original-sound data to cause the section to have the predetermined length or a longer length. For example, the sound wave modifying unit 701 modifies (elongates) a section that is shorter than the predetermined length and longer than or equal to 80% of the predetermined length to cause the section to have the predetermined length required to embed a sound wave ID or a longer length, using, for example, a phase vocoder technology.
  • In step S805, the sound wave modifying unit 701 stores the start time and the length (or the end time) of the modified (elongated) section in the storage unit 308 or another storage device, and the process proceeds to step S807.
  • In step S806, the extracting unit 304 (or the sound wave generating unit 305) stores the start time and the length (or the end time) of the extracted section in the storage unit 308 or another storage device, and the process proceeds to step S807.
  • In step S807, the sound wave generating unit 305 replaces, on the basis of the start time and the length (or the end time) of the section stored in the storage unit 308 or another storage device, the predetermined frequency component of the stored section of the original-sound data with sound wave data representing a sound wave ID.
  • more specifically, the sound wave generating unit 305 replaces, in a section from among the sections stored in the storage unit 308, the portion corresponding to the interval T1 at which the sound wave 402 representing the sound wave ID is output, with the sound wave data representing the sound wave ID.
  • In step S808, the sound wave generating unit 305 determines whether the remaining length of the original-sound data is longer than or equal to a threshold.
  • the threshold is, for example, a length previously set for determining whether the remaining length of original-sound data is sufficient for extracting a section (for example, a length in a range from 0.8 to 5 times the predetermined length).
  • In response to the remaining length being longer than or equal to the threshold (YES in step S808), the process returns to step S602 so that the process starting from step S602 will be performed on the remaining portion of the original-sound data.
  • In response to the remaining length being shorter than the threshold (NO in step S808), the process proceeds to step S809.
  • In step S809, the output unit 306 of the second broadcast apparatus 120 outputs the thus generated broadcast sound wave using the speakers 121a, 121b, 121c, ....
  • the output unit 306 converts the sound wave data generated by the sound wave generating unit 305 into a sound wave signal and outputs the sound wave signal to the speakers 121a, 121b, and 121c, ....
  • in a case where the original-sound data has a sufficient length to embed the sound wave IDs, the second broadcast apparatus 120 thus outputs, at the facility 102, the broadcast sound wave where the plurality of sound wave IDs are embedded.
  • In step S810, the output unit 306 of the second broadcast apparatus 120 determines whether there is a section for which replacement of sound wave data has been performed. In a case where there is a section for which replacement of sound wave data has been performed, the process proceeds to step S809. In a case where there is no section for which replacement of sound wave data has been performed, the process proceeds to step S607.
  • In this manner, the second broadcast apparatus 120 can embed a plurality of sound wave IDs in a case where the original sound 401 has a sufficient length.
  • further, the second broadcast apparatus 120 can modify original-sound data to cause a section extracted from the original-sound data to have a length longer than or equal to a predetermined length required to embed a sound wave ID, even in a case where the section extracted from the original-sound data is shorter than the predetermined length.
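  • The decision made in steps S802 through S804 can be summarized by the following sketch; the helper name is illustrative, and only the 0.3-second length and the 80% ratio come from the text above.

```python
# Condensed sketch of the decision in steps S802-S804: embed directly when a
# section is long enough, elongate it first when it is at least 80 % of the
# required length, and otherwise skip it.
def decide_section(section_len_s: float, required_len_s: float = 0.3) -> str:
    if section_len_s >= required_len_s:
        return "embed"                  # S802 -> S806, S807
    if section_len_s >= 0.8 * required_len_s:
        return "elongate_then_embed"    # S803 -> S804, S805, S807
    return "skip"                       # S803 -> S808
```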
  • FIG. 9 depicts an example of a sound wave representing a sound wave ID according to the third embodiment.
  • an original sound 401 is output in, for example, the frequency range lower than or equal to 10 kHz, similarly to the first embodiment.
  • the second broadcast apparatus 120 according to the third embodiment is capable of outputting any one of a plurality of sound waves 402a, 402b, and 402c representing the same information, using corresponding sets of sound wave data.
  • the sound wave IDs that the sound waves 402a, 402b, and 402c respectively represent may be the same as each other. However, as long as the sound waves 402a, 402b, and 402c substantively represent the same information, the sound wave IDs may be different from each other partially or completely.
  • the number of sound waves 402a, 402b, and 402c is not limited to the three as in the present example and may be any number greater than or equal to 2.
  • in a case where there is a loud noise (for example, a voice of a person, a footstep, a siren sound of a fire engine, or the like) in the frequency range of the sound wave 402a, it is difficult for the second broadcast apparatus 120 to send the sound wave ID to the information terminal 104 using the sound wave 402a.
  • the second broadcast apparatus 120 determines a set of sound wave data to be used to generate a sound wave from among plural sets of sound wave data 312 depending on original-sound data or sound waves input from the outside (for example, sound waves collectable at the facility 102).
  • <Functional configuration>
  • FIG. 10 depicts an example of a functional configuration of the second broadcast apparatus according to the third embodiment.
  • the second broadcast apparatus 120 according to the third embodiment includes a determining unit 1001 in addition to the functional configuration of the second broadcast apparatus 120 according to the first embodiment or the second embodiment described above.
  • the determining unit 1001 may be implemented by a program executed by the CPU 201 depicted in FIG. 2, and so forth, and determines a set of sound wave data to be used to generate a sound wave from among plural sets of sound wave data 312 stored in the storage unit 308 depending on, for example, original-sound data or sound waves input from the outside.
  • <Process flow>
  • FIGs. 11A and 11B are flowcharts depicting examples of a sound wave determining process to determine a sound wave representing a sound wave ID according to the third embodiment.
  • (Example of sound wave determining process)
  • FIG. 11A depicts an example of a sound wave determining process to determine a sound wave representing a sound wave ID.
  • FIG. 11A depicts an example of a process where the determining unit 1001 determines a set of sound wave data for the sound wave generating unit 305 to generate a sound wave representing a sound wave ID, on the basis of sound waves collected at the facility 102 using, for example, an external microphone. Sound waves collected at the facility 102 are examples of a sound wave input from the outside.
  • In step S1101, the determining unit 1001 of the second broadcast apparatus 120 acquires sound waves (digital data or analog audio signals) collected at the facility 102 using the input unit 302.
  • the input unit 302 converts the acquired sound waves into digital data.
  • In step S1102, the determining unit 1001 analyzes frequency components of the acquired sound waves using the analysis unit 303.
  • In step S1103, using the result of the analysis performed in step S1102, the determining unit 1001 selects a set of sound wave data in a frequency range where the sound wave environment is satisfactory from among the plural sets of sound wave data 312 representing a sound wave ID stored in the storage unit 308. For example, as described above with reference to FIG. 9, in a case where it is determined from the analysis result of step S1102 that there is a loud noise (for example, a voice of a person, a footstep, a siren sound of a fire engine, or the like) in the frequency range of the sound wave 402a, the determining unit 1001 selects the set of sound wave data corresponding to the sound wave 402b or the sound wave 402c. Alternatively, the determining unit 1001 may select the set of sound wave data corresponding to the sound wave of the frequency range with the lowest noise level from among the plurality of sound waves 402a, 402b, and 402c.
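  • A hedged sketch of this selection is shown below: the level of the sound collected at the facility is measured in each candidate band and the quietest band is chosen. The three bands stand in for the bands of the sound waves 402a, 402b, and 402c and are illustrative assumptions.

```python
# Hedged sketch of the selection in step S1103: pick the sound-wave-data
# variant whose frequency band has the lowest ambient noise level.
import numpy as np

CANDIDATE_BANDS = {          # assumed bands for sound waves 402a, 402b, 402c
    "402a": (3000.0, 4500.0),
    "402b": (5000.0, 6500.0),
    "402c": (7000.0, 8500.0),
}

def select_quietest_variant(collected: np.ndarray, sample_rate: int = 44100) -> str:
    spectrum = np.abs(np.fft.rfft(collected))
    freqs = np.fft.rfftfreq(len(collected), d=1.0 / sample_rate)

    def band_level(band):
        lo, hi = band
        mask = (freqs >= lo) & (freqs <= hi)
        return spectrum[mask].mean()

    # The variant with the lowest average noise level in its band wins.
    return min(CANDIDATE_BANDS, key=lambda k: band_level(CANDIDATE_BANDS[k]))
```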
  • In step S1104, the second broadcast apparatus 120 performs a sound wave generating process according to the first embodiment depicted in FIG. 6 or a sound wave generating process according to the second embodiment depicted in FIG. 8 to output the sound wave using the set of sound wave data 312 thus selected and determined by the determining unit 1001.
  • (Another example of sound wave determining process)
  • FIG. 11B depicts another example of a sound wave determining process to determine a sound wave representing a sound wave ID.
  • FIG. 11B depicts an example of a process where the determining unit 1001 determines a set of sound wave data for the sound wave generating unit 305 to generate a sound wave representing a sound wave ID on the basis of original-sound data.
  • In step S1111, the determining unit 1001 of the second broadcast apparatus 120 acquires original-sound data 311 input to the input unit 302 or original-sound data 311 stored in the storage unit 308.
  • In step S1112, the determining unit 1001 analyzes frequency components of the thus acquired original sound using the analysis unit 303.
  • In step S1113, the determining unit 1001 selects, from among the plural sets of sound wave data 312 representing the sound wave ID stored in the storage unit 308, the set of sound wave data of a frequency range that is most included in the original sound, for example.
  • In step S1114, the second broadcast apparatus 120 performs a sound wave generating process according to the first embodiment depicted in FIG. 6 or a sound wave generating process according to the second embodiment depicted in FIG. 8 to output the sound wave using the set of sound wave data 312 thus selected and determined by the determining unit 1001.
  • the processes depicted in FIGs. 11A and 11B are examples of a sound wave determining process performed by the determining unit 1001.
  • the determining unit 1001 may combine the process depicted in FIG. 11A and the process depicted in FIG. 11B to determine a set of sound wave data to be used from among the plural sets of sound wave data 312 stored in the storage unit 308 depending on both original-sound data and sound waves input from the outside.
  • the determining unit 1001 may determine plural sets of sound wave data representing the same sound wave ID and the second broadcast apparatus 120 may output the corresponding plurality of sound waves (for example, the sound waves 402b and 402c) representing the sound wave ID.
  • FIG. 12 depicts an example of the hardware configuration of the information terminal 104.
  • the information terminal 104 is an information processing apparatus having a configuration of a computer such as a smartphone, a tablet terminal, or the like.
  • the information terminal 104 includes a CPU 1201, a memory 1202, a storage device 1203, a communication I/F 1204, a display and input device 1205, a sound input and output I/F 1206, a microphone 1207, a speaker 1208, a system bus 1209, and so forth as depicted in FIG. 12.
  • the CPU 1201 is an arithmetic and logic unit which implements various functions of the information terminal 104 by reading programs and data from the storage device 1203 into the memory 1202 to execute corresponding processes.
  • the memory 1202 includes, for example, a RAM used as a work area of the CPU 1201 and so forth and a ROM storing programs for booting and so forth.
  • the storage device 1203 is, for example, a non-volatile mass storage device such as a HDD or a SSD and stores an OS, application programs, various data, and so forth.
  • the communication I/F 1204 is an interface for connecting the information terminal 104 to a communication network and communicating with another apparatus.
  • the display and input device 1205 is, for example, a device such as a touch panel display obtained from integrating an input device such as a touch panel and a display device such as a display.
  • the display and input device 1205 may be separated into a display device and an input device.
  • the sound input and output I/F 1206 includes an input amplifier for amplifying a sound signal acquired by the microphone 1207, an ADC for converting an amplified sound signal into digital data, a DAC for converting digital data into a sound signal, and an output amplifier for amplifying a sound signal and outputting the sound signal to the speaker 1208.
  • the microphone 1207 converts an acquired sound wave into a sound signal and outputs the sound signal to the sound input and output I/F 1206.
  • the speaker 1208 is a speaker, a receiver, or the like, and converts a sound wave signal output by the sound input and output I/F 1206 into a sound wave and outputs the sound wave.
  • the system bus 1209 connects each of the above-mentioned elements and transmits, for example, address signals, data signals, and various control signals.
  • FIG. 13 depicts an example of a functional configuration of the information terminal 104.
  • the information terminal 104 functions as a sound wave acquiring unit 1301, a sound wave ID extracting unit 1302, a function selecting unit 1303, an emergency information providing unit 1304, an information providing unit 1305, a communication unit 1306, and a storage unit 1307.
  • the sound wave acquiring unit 1301 acquires a sound wave around the information terminal 104 using, for example, the microphone 1207, the sound input/output I/F 1206, and so forth depicted in FIG. 12.
  • the sound wave ID extracting unit 1302 extracts a sound wave ID from a sound wave acquired by the sound wave acquiring unit 1301.
  • the function selecting unit 1303 selects a function of an application to be executed by the information terminal 104 in accordance with a sound wave ID extracted by the sound wave ID extracting unit 1302.
  • in response to the sound wave ID extracting unit 1302 acquiring a sound wave ID for emergency output by the second broadcast apparatus 120, the function selecting unit 1303 enters an emergency mode and, for example, causes the emergency information providing unit 1304 to display evacuation information.
  • in response to the sound wave ID extracting unit 1302 acquiring a sound wave ID for a normal situation output by the first broadcast apparatus 110, the function selecting unit 1303 enters a normal mode and, for example, causes the information providing unit 1305 to display certain information from among various information according to the sound wave ID.
  • the emergency information providing unit 1304 displays evacuation information on, for example, the display and input device 1205 depicted in FIG. 12 on the basis of the emergency information 1311 stored in the storage unit 1307 and the sound wave ID extracted by the sound wave ID extracting unit 1302.
  • FIGs. 14A and 14B depict examples of evacuation information.
  • FIG. 14A depicts an example of emergency information 1311.
  • the emergency information 1311 includes information of "sound wave ID", "type", and "emergency information".
  • the "sound wave ID” is information for causing the information terminal 104 to perform a process corresponding to the sound wave ID and is included in a sound wave output by the broadcasting system 100.
  • the "type” is information indicating the type (a normal situation or emergency) of the sound wave ID.
  • the "emergency information” includes evacuation information corresponding to the sound wave ID, such as image data for displaying an evacuation route and link information for acquiring the image data.
  • in response to the type of a sound wave ID extracted by the sound wave ID extracting unit 1302 being "emergency", the function selecting unit 1303 enters an emergency mode, and the emergency information providing unit 1304 displays emergency information corresponding to the sound wave ID on, for example, the display and input device 1205 of the information terminal 104.
  • the second broadcast apparatus 120 may output a sound wave representing a different sound wave ID for each area of the facility 102.
  • the broadcasting system 100 causes the information terminal 104 to display an evacuation route suitable for a particular area.
  • the information terminal 104 may store plural sets of evacuation route information indicating a plurality of different evacuation routes for the same area (for example, a general seat area A), such as the emergency information 1311 depicted in FIG. 14A. This allows the broadcasting system 100 to direct, by selecting a sound wave ID, the movements of users so as to prevent too many users from rushing to a single exit.
  • FIG. 14B depicts another example of the emergency information 1311.
  • information of "language setting" is added to the emergency information 1311 in addition to the information depicted in FIG. 14A.
  • the "language setting" is information indicating the language of "emergency information”.
  • plural sets of "emergency information" of different languages are stored corresponding to a single "sound wave ID".
  • the information terminal 104 selectively displays, on, for example, the display and input device 1205, emergency information according to language information set by the user 103 or the language setting of the information terminal 104.
  • the broadcasting system 100 can thus provide emergency information, such as an evacuation route, in an appropriate language to, for example, a foreign visitor having come from abroad.
  • the information providing unit 1305 provides any item of various contents (for example, information concerning a sporting competition, guidance information for an available seat or a shop, information concerning cheering, and so forth) to the user 103 according to a sound wave ID extracted by the sound wave ID extracting unit 1302.
  • the information providing unit 1305 may display an item of provided information corresponding to a sound wave ID on, for example, the display and input device 1205 on the basis of the provided information 1312 stored in the storage unit 1307 in the same manner as the emergency information providing unit 1304 does.
  • the information providing unit 1305 may acquire contents corresponding to a sound wave ID extracted by the sound wave ID extracting unit 1302 from, for example, the information providing server 1320 and display the contents on, for example, the display and input device 1205.
  • the communication unit 1306 may be implemented, for example, by a program executed by the CPU 1201 depicted in FIG. 12, the communication I/F 1204, and so forth.
  • the communication unit 1306 connects the information terminal 104 to a communication network 1330 and communicates with, for example, the information providing server 1320.
  • the storage unit 1307 may be implemented, for example, by a program executed by the CPU 1201 depicted in FIG. 12, the storage device 1203, the memory 1202, and so forth, and stores various information such as the emergency information 1311 and the provided information 1312.
<Process flow>
  • FIG. 15 is a flowchart depicting an example of a process of the information terminal 104.
  • FIG. 15 depicts an example of a process performed by the information terminal 104 that executes an application prepared for the broadcasting system 100.
  • the information terminal 104 executes the application prepared for the broadcasting system 100, and the application initially runs in the normal mode.
  • in step S1501, the sound wave acquiring unit 1301 acquires sound waves around the information terminal 104.
  • in step S1502, the sound wave ID extracting unit 1302 searches the sound waves acquired by the sound wave acquiring unit 1301 for a sound wave ID to extract.
  • in step S1503, the information terminal 104 determines whether the sound wave ID extracting unit 1302 can acquire a sound wave ID. In a case where a sound wave ID cannot be acquired, the information terminal 104 proceeds to step S1501 again. In a case where a sound wave ID can be acquired, the information terminal 104 proceeds to step S1504.
  • in step S1504, the function selecting unit 1303 determines whether the acquired sound wave ID includes an emergency sound wave ID. In response to an emergency sound wave ID being included, the function selecting unit 1303 proceeds to step S1505. In response to no emergency sound wave ID being included, the function selecting unit 1303 proceeds to step S1507.
  • in step S1505, the function selecting unit 1303 enters an emergency mode to cause the emergency information providing unit 1304 to provide evacuation information.
  • in step S1506, the emergency information providing unit 1304 displays the evacuation information according to the sound wave ID on, for example, the display and input device 1205 using the emergency information 1311 depicted in FIG. 14A.
  • the emergency information providing unit 1304 may display evacuation information according to the sound wave ID and the language setting on, for example, the display and input device 1205 using the emergency information 1311 depicted in FIG. 14B.
  • in step S1507, the function selecting unit 1303 enters a normal mode (or remains in the normal mode) to cause the information providing unit 1305 to provide the provided information.
  • in step S1508, the function selecting unit 1303 determines whether the acquired sound wave ID includes a sound wave ID for a normal situation. In response to a sound wave ID for a normal situation being included, the function selecting unit 1303 proceeds to step S1509. In response to no sound wave ID for a normal situation being included, the process ends.
  • in step S1509, the information providing unit 1305 displays contents according to the sound wave ID (for example, information concerning a sporting competition, guidance information for an available seat or a shop, or information concerning cheering) on, for example, the display and input device 1205.
  • the information providing unit 1305 may display contents according to the sound wave ID and the language setting on, for example, the display and input device 1205.
  • thus, the user 103 of the information terminal 104 can be provided with appropriate evacuation information, for example (a minimal illustrative sketch of this terminal-side flow is given after the list of reference numerals below).
  • in the above description, the case where the range of available frequencies of the second broadcast apparatus 120, the speakers 121a, 121b, 121c, ..., and so forth is limited is assumed, as depicted in FIG. 1.
  • the embodiments are not limited to the case where the broadcast apparatuses and the speakers are separately provided as in the above-described system.
  • FIG. 16 depicts another example of the system configuration of the broadcasting system according to any one of the above-described embodiments.
  • the embodiments can also be applied to a broadcasting system 100 depicted in FIG. 16 where an amplifier 1601 for outputting a sound wave signal, the speakers 121a, 121b, 121c, ..., and so forth are shared by the first broadcast apparatus 110 and the second broadcast apparatus 120.
  • the speakers 121 may be shared for different guidance broadcasts; in the event of a disaster or the like, for example, a switch 1602 or the like may be used to disconnect the first broadcast apparatus 110, and the range of available frequencies may be limited through the amplifier 1601.
  • the embodiments can be applied to a broadcasting system 100 including a single broadcast apparatus 101 and a plurality of speakers 121a, 121b, 121c, ....
  • the broadcast apparatus 101 limits the range of available frequencies of sound wave signals that are output when a disaster or the like occurs.
  • thus, also in such a configuration, information can be sent to an information terminal 104 while adverse effects on audible sound are reduced.
  • 100 Broadcasting system
  • 101 Broadcast apparatus
  • 110 First broadcast apparatus
  • 120 Second broadcast apparatus (sound wave generator)
  • 304 Extracting unit
  • 305 Sound wave generating unit
  • 308 Storage unit
  • 701 Sound wave modifying unit
  • 1001 Determining unit
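For illustration only (not part of the application), the following is a minimal sketch of the terminal-side flow of FIGs. 13 through 15 described above. The table contents, IDs, and function names are assumptions, and the actual decoding of a sound wave ID from the acquired sound is left as a placeholder.

```python
# Hypothetical sketch of the information terminal 104 (FIGs. 13-15); all names and IDs
# below are assumptions made for illustration, not values taken from the application.

# Emergency information 1311 (FIG. 14B): one sound wave ID, plural language variants.
EMERGENCY_INFO = {
    "ID_E01": {"en": "Evacuate general seat area A via the east gate",
               "ja": "一般席Aエリアは東ゲートから避難してください"},
}

# Provided information 1312 used in the normal mode.
PROVIDED_INFO = {
    "ID_N01": {"en": "Seats are available near block 12"},
}

def extract_sound_wave_id(samples):
    """Placeholder for the sound wave ID extracting unit 1302 (steps S1502/S1503);
    the decoding scheme is not specified in this sketch."""
    return None

def handle_acquired_sound(samples, language="en"):
    """Steps S1504 through S1509: branch to the emergency mode or the normal mode."""
    sound_wave_id = extract_sound_wave_id(samples)
    if sound_wave_id is None:
        return None                                  # no ID found; keep listening (S1501)
    if sound_wave_id in EMERGENCY_INFO:              # S1504: emergency sound wave ID
        entry = EMERGENCY_INFO[sound_wave_id]
        return "emergency", entry.get(language, entry["en"])   # S1505/S1506
    if sound_wave_id in PROVIDED_INFO:               # S1508: sound wave ID for a normal situation
        entry = PROVIDED_INFO[sound_wave_id]
        return "normal", entry.get(language, entry["en"])      # S1507/S1509
    return None
```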

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Alarm Systems (AREA)

Abstract

[Technical Problem] In a broadcasting system having a limited range of available frequencies, such as emergency broadcasting equipment, information can be sent to an information terminal while adverse effects on audible sound can be reduced. [Solution to Problem] A sound wave generator for generating a sound wave in a broadcasting system for broadcasting a sound wave that is within a predetermined frequency range includes a storage unit configured to store sound wave data that represents information using a predetermined frequency component that is within the predetermined frequency range; an extracting unit configured to analyze original-sound data of the broadcasting to extract a section including a frequency component similar to the predetermined frequency component; and a sound wave generating unit configured to, in the extracted section, replace the frequency component of the original-sound data that is similar to the predetermined frequency component with the sound wave data, to generate the sound wave where the information is embedded.

Description

SOUND WAVE GENERATOR, BROADCASTING SYSTEM, METHOD FOR GENERATING SOUND WAVE, AND PROGRAM
The present invention relates to a sound wave generator, a broadcasting system, a method for generating a sound wave, and a program.
A technology is known where a sound wave that includes predetermined identification information is output; an information terminal receives the sound wave and acquires the identification information; and the information terminal is provided with information corresponding to a location where the information terminal acquires the identification information.
An information providing system is known that uses a sound in a frequency range reproduced by a speaker, more particularly, a sound in a frequency range higher than a frequency range used for television programs or the like (for example, a frequency range higher than or equal to 16 kHz), for superimposition of an audio ID (for example, see PTL 1).
According to the technology disclosed in PTL 1, identification information is sent with the use of a sound wave in a frequency range higher than or equal to 16 kHz, which humans cannot hear (hereinafter, referred to as a non-audible sound), and thus, identification information can be sent to an information terminal while adverse effects on an audible sound are reduced.
In the event of a disaster at a facility such as a sports venue where many people gather, there is a desire to send to an information terminal a sound wave that includes identification information using broadcasting equipment installed at the facility to provide the information terminal with information on the disaster and an evacuation route, for example.
However, at a facility that accommodates many people, such as a sports venue, for example, in the event of a disaster such as fire, broadcasting equipment at the facility may be switched to act as emergency broadcasting equipment that outputs a warning sound, whereby it may be impossible to execute a normal voice broadcast.
In addition, in emergency broadcasting equipment having a limited range of available frequencies, it may be impossible to output a sound wave at a frequency of, for example, 10 kHz or higher. Therefore, it may be impossible to send identification information to an information terminal using a non-audible sound as in the technology disclosed in PTL 1.
One embodiment of the present invention has been devised in view of the above-described problems and allows information to be sent to an information terminal by a broadcasting system having the limited range of available frequencies, such as an emergency broadcasting system, while it is possible to reduce adverse effects on an audible sound.
In order to solve the problems, according to the embodiment of the present invention, a sound wave generator, for generating a sound wave in a broadcasting system for broadcasting a sound wave that is within a predetermined frequency range, includes a storage unit configured to store sound wave data that represents information using a predetermined frequency component that is within the predetermined frequency range; an extracting unit configured to analyze original-sound data of the broadcasting to extract a section including a frequency component that is similar to the predetermined frequency component; and a sound wave generating unit configured to, in the extracted section, replace the frequency component of the original-sound data that is similar to the predetermined frequency component with the sound wave data, to generate the sound wave where the information is embedded.
With the embodiment of the present invention, it is possible for information to be sent to an information terminal by the broadcasting system having the limited range of available frequencies, such as an emergency broadcasting system, while adverse effects on an audible sound are reduced.
Other features and advantages of the disclosure will become apparent in the following detailed description of the embodiments with reference to the accompanying drawings.

FIG. 1 depicts an example of a system configuration of a broadcasting system according to any one of first through third embodiments; FIG. 2 depicts an example of a hardware configuration of a second broadcast apparatus according to any one of the first through third embodiments; FIG. 3 depicts an example of a functional configuration of the second broadcast apparatus according to the first embodiment; FIG. 4A depicts an example of an original sound according to the first embodiment; FIG. 4B depicts an example of a sound wave representing a sound wave ID according to the first embodiment; FIG. 4C depicts an example of an original sound and a sound wave representing a sound wave ID according to the first embodiment; FIG. 5A depicts an example of a sound wave generating process according to the first embodiment; FIG. 5B depicts the example of a sound wave generating process according to the first embodiment; FIG. 5C depicts the example of a sound wave generating process according to the first embodiment; FIG. 5D depicts the example of a sound wave generating process according to the first embodiment; FIG. 6 is a flowchart depicting an example of the sound wave generating process according to the first embodiment; FIG. 7 depicts an example of a functional configuration of the second broadcast apparatus according to the second embodiment; FIG. 8 is a flowchart depicting an example of a sound wave generating process according to the second embodiment; FIG. 9 depicts an image of an example of a sound wave representing a sound wave ID according to the third embodiment; FIG. 10 depicts an example of a functional configuration of the second broadcast apparatus according to the third embodiment; FIG. 11A is a flowchart depicting an example of a process for determining a sound wave representing a sound wave ID according to the third embodiment; FIG. 11B is a flowchart depicting the example of a process for determining a sound wave representing a sound wave ID according to the third embodiment; FIG. 12 depicts an example of a hardware configuration of an information terminal according to any one of the first through third embodiments; FIG. 13 depicts an example of a functional configuration of the information terminal according to any one of the first through third embodiments; FIG. 14A depicts an example of evacuation information according to any one of the first through third embodiments; FIG. 14B depicts another example of evacuation information according to any one of the first through third embodiments; FIG. 15 is a flowchart depicting an example of a process of the information terminal according to any one of the first through third embodiments; and FIG. 16 depicts another example of the system configuration of the broadcasting system according to any one of the first through third embodiments.
Hereinafter, the embodiments of the present invention will be described with reference to the accompanying drawings.
<System configuration>
First, an outline of a broadcasting system 100 according to the embodiments will be described.
FIG. 1 depicts an example of a system configuration of a broadcasting system according to the embodiments. The broadcasting system 100 includes a first broadcast apparatus 110, a plurality of speakers 111a, 111b, 111c, ..., a second broadcast apparatus 120, and a plurality of speakers 121a, 121b, 121c, ..., provided at a facility 102, such as a sports venue, for example. In the following description, any one of the plurality of speakers 111a, 111b, 111c, ... may be referred to by the expression "speaker 111". Any one of the plurality of speakers 121a, 121b, 121c, ... may be referred to by the expression "speaker 121".
The facility 102 is not limited to a sports venue, and may be, for example, another facility or venue such as an indoor facility, an underground facility, or an event venue.
The first broadcast apparatus 110 is connected to the plurality of speakers 111a, 111b, 111c, ... to act as broadcasting equipment for a normal situation for broadcasting sounds, such as voice guidance, music, and so forth, at the facility 102.
It is desirable to have the first broadcast apparatus 110 send identification information or the like to an information terminal 104 using a sound wave in the frequency range higher than or equal to 16 kHz, which humans cannot hear and which is included in the frequency band reproducible by a speaker 111, for example, as in the technology disclosed in PTL 1.
The information terminal 104 extracts predetermined identification information from a sound wave output by a speaker 111 by executing an application program prepared for the broadcasting system 100 (hereinafter, referred to as an "application"), for example. The information terminal 104, for example, provides any item of various contents (for example, information concerning sporting competitions, guidance information for available seats, shops, information concerning cheering, and so forth) to a user 103 in accordance with the extracted identification information and so forth.
In the embodiments, the first broadcast apparatus 110 and the speakers 111a, 111b, 111c, ... may be configured at will, and thus, the detailed description will be omitted.
The second broadcast apparatus 120 is connected to the plurality of speakers 121a, 121b, 121c, ... to act as emergency broadcasting equipment that outputs emergency alarm sounds when a disaster such as fire or an earthquake occurs (i.e., in a time of emergency). For example, in the event of fire or an earthquake occurring at the facility 102, the broadcasting system of the facility 102 switches from using the first broadcast apparatus 110 to using the second broadcast apparatus 120, and then, only emergency alarm sounds output by the second broadcast apparatus 120 are broadcast at the facility 102.
The first broadcast apparatus 110 and the second broadcast apparatus 120 may be, for example, included in one broadcast apparatus 101, as depicted in FIG. 1, or may be separate broadcast apparatuses. The second broadcast apparatus 120 is an example of a sound wave generator.
In emergency broadcasting equipment, high-impedance speakers 121 and wiring 122 are used to efficiently transmit emergency alarm sounds at a facility with relatively low power consumption. When broadcasting through the high-impedance speakers 121 and wiring 122, interference from outside noise and oscillation would be likely to occur if, for example, the high frequency range higher than or equal to 10 kHz were used. Therefore, the range of available frequencies of the second broadcast apparatus 120 and the speakers 121a, 121b, 121c, ... acting as emergency broadcasting equipment is limited, and thus, the second broadcast apparatus 120 and the speakers 121a, 121b, 121c, ... cannot output sound waves in the frequency range, for example, higher than or equal to 10 kHz.
Therefore, the second broadcast apparatus 120, unlike the first broadcast apparatus 110, cannot send identification information and so forth to the information terminal 104 using sound waves in the frequency range higher than or equal to 16 kHz (i.e., non-audible sounds).
However, it is desirable, even in the event of a disaster, to have the second broadcast apparatus 120, which is emergency broadcasting equipment, send sound waves including identification information and so forth to the information terminal 104 to provide, for example, information on the disaster and information on an evacuation route to the information terminal 104.
Therefore, the second broadcast apparatus 120 according to the embodiments generates and sends sound waves in the frequency range lower than or equal to 10 kHz including identification information and so forth to the information terminal 104 while influence on audible sounds is reduced. An actual method for generating such sound waves will be described later.
Thus, according to the embodiments, even when a disaster occurs and the broadcasting equipment at the facility 102 has been switched to act as the emergency broadcasting equipment, the information terminal 104 can extract identification information and so forth from emergency alarm sounds output by the second broadcast apparatus 120 and can provide information such as an evacuation route to the user 103.
<Hardware configuration>
FIG. 2 depicts an example of the hardware configuration of the second broadcast apparatus 120 according to the embodiments. The second broadcast apparatus 120 includes, for example, a CPU 201, a memory 202, a storage device 203, a communication I/F (InterFace) 204, a sound wave processing circuit 205, an input I/F 206, one or more output I/Fs 207a, 207b, ..., an input device 208, a display device 209, and a system bus 210.
The CPU 201 is an arithmetic and logic unit that implements each function of the second broadcast apparatus 120 by reading and executing instructions written in programs and using data stored in the storage device 203, for example. The memory 202 includes, for example, a RAM (Random Access Memory), which is a volatile memory used as a work area of the CPU 201, and a ROM (Read-Only Memory), which is a non-volatile memory where programs for booting up and so forth are stored.
The storage device 203 is a non-volatile mass storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), for example, and stores an OS (Operating System), application programs, and various data. The communication I/F 204 is an interface for connecting the second broadcast apparatus 120 to communication networks and for communicating with other apparatuses.
The sound wave processing circuit 205 is a circuit for, for example, amplifying, analyzing, and filtering sound waves under the control of the CPU 201 and may include, for example, an audio amplification circuit, a DAC (Digital to Analog Converter), an ADC (Analog to Digital Converter), and a DSP (Digital Signal Processor). The input I/F 206 is an interface for inputting sound signals to the sound wave processing circuit 205. The one or more output I/Fs 207a, 207b, ... are interfaces for outputting sound waves to the speakers 121 installed at the facility 102, for example. It is desirable to have the second broadcast apparatus 120 include a plurality of output I/Fs 207a, 207b, ... so as to output sound signals different from each other to a plurality of areas at the facility 102.
The input device 208 is, for example, a mouse, a keyboard, or a touch panel, and is used to receive the user's input operation to the second broadcast apparatus 120. The display device 209 is, for example, a display, and is used to display the results of processing performed by the second broadcast apparatus 120. The input device 208 and the display device 209 may be, for example, an integrated display and input device such as a touch panel display.
The system bus 210 is connected to each of the above-described elements and transmits, for example, address signals, data signals, and various control signals.
The hardware configuration of the second broadcast apparatus 120 depicted in FIG. 2 is an example. The second broadcast apparatus 120 may be, for example, a combination of a PC (Personal Computer) having a typical computer configuration and a sound wave processing apparatus including the sound wave processing circuit 205, the input I/F 206, and the output I/Fs 207a, 207b, ....
Below, various embodiments of the second broadcast apparatus 120, which is an example of a sound wave generator according to the present invention, will be described.
(First Embodiment)
<Functional configuration>
FIG. 3 depicts an example of a functional configuration of the second broadcast apparatus 120 according to the first embodiment of the present invention. The second broadcast apparatus 120 functions as, for example, a situation detecting unit 301, an input unit 302, an analysis unit 303, an extracting unit 304, a sound wave generating unit 305, an output unit 306, a display and input control unit 307, and a storage unit 308 by executing a predetermined program(s) with the CPU 201 depicted in FIG. 2. At least some of the above-described functional elements may be implemented by hardware.
The situation detecting unit 301 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2, and detects that a predetermined disaster has occurred at the facility 102. For example, the situation detecting unit 301 detects that a predetermined disaster has occurred at the facility 102 through the communication I/F 204 as a result of disaster information being sent from an emergency system or as a result of an input operation being performed by an administrator or the like to the input device 208.
In the present embodiment, when the situation detecting unit 301 detects that a predetermined disaster has occurred at the facility 102, the second broadcast apparatus 120 starts a disaster broadcast, i.e., outputting an emergency alarm sound and so forth using the speakers 121 installed at the facility 102.
The input unit 302 may be implemented, for example, by a program executed in the CPU 201 depicted in FIG. 2 together with the input I/F 206, the sound wave processing circuit 205, and so forth, and receives sound waves, sound wave signals, and sound wave data. The sound waves may be sound waves of various sounds such as, for example, voices, alarm sounds, sound effects, sound trademarks, and sound waves collected at the facility 102.
The analysis unit 303 may be implemented by, for example, a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth, and performs time frequency analysis on sound waves using STFT (Short Time Fourier Transform), FFT (Fast Fourier Transform), or the like.
The second broadcast apparatus 120 embeds, in sound waves (original sounds) of disaster broadcasts, identification information (hereinafter, referred to as sound wave IDs) that cause the information terminal 104 to perform processes according to the broadcast contents. In the following description, a sound wave where a sound wave ID has not been embedded will be referred to as an "original sound" and a sound wave obtained after embedding a sound wave ID in the original sound may be referred to as a "broadcast sound wave".
For example, the analysis unit 303 performs time frequency analysis on an original sound and analyzes a frequency component included at each interval of the original sound.
The extracting unit 304 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth.
The storage unit 308 of the second broadcast apparatus 120 previously stores one or more sets of sound wave data 312 that represent sound wave IDs using predetermined frequency components included in a predetermined frequency range (for example, the frequency range lower than or equal to 10 kHz) and frequency-component data 313 representing a frequency component of each set of sound wave data 312.
The extracting unit 304 analyzes original-sound data using the analysis unit 303 and extracts sections including frequency components that are similar to predetermined frequency components representing sound wave IDs. For example, the extracting unit 304 extracts a section from original-sound data; the section includes a predetermined frequency component to be used to represent a sound wave ID and has a length greater than or equal to a length (for example, 0.3 seconds) sufficient to embed sound wave data representing a sound wave ID.
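As a rough illustration of this extraction (an editor's sketch only, assuming scipy is used for the time-frequency analysis; the band limits, threshold, and STFT parameters are assumptions, not values from the application):

```python
# Sketch of the extracting unit 304: find sections of the original sound whose energy in
# the band used for the sound wave ID stays above a level for at least 0.3 seconds.
import numpy as np
from scipy.signal import stft

def find_embeddable_sections(original, fs, band=(6000.0, 8000.0),
                             min_len_s=0.3, level_db=-40.0):
    """Return (start_s, end_s) pairs where the original sound contains a frequency
    component similar to the predetermined frequency component of the sound wave ID."""
    f, t, Z = stft(original, fs=fs, nperseg=1024)
    in_band = (f >= band[0]) & (f <= band[1])
    band_level_db = 20.0 * np.log10(np.abs(Z[in_band]).mean(axis=0) + 1e-12)
    active = band_level_db > level_db
    sections, start = [], None
    for time, is_active in zip(t, active):
        if is_active and start is None:
            start = time
        elif not is_active and start is not None:
            if time - start >= min_len_s:
                sections.append((start, time))
            start = None
    if start is not None and t[-1] - start >= min_len_s:
        sections.append((start, t[-1]))
    return sections
```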
The sound wave generating unit 305 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth. The sound wave generating unit 305 replaces a predetermined frequency component of original-sound data with sound wave data representing a sound wave ID in a section extracted by the extracting unit 304, thus generating a broadcast sound wave where the sound wave ID is embedded.
FIGs. 4A-4C depict an original sound as well as a sound wave representing a sound wave ID according to the first embodiment.
FIG. 4A depicts an example of an original sound. The original sound 401 is a sound wave that varies in amplitude over time and may be any one of various sound waves such as, for example, a voice message, an alarm sound, a sound effect, a sound trademark, and so forth. The original sound 401 may be output in the frequency range lower than or equal to 10 kHz, for example, as depicted in FIG. 4C. However, the frequency range lower than or equal to 10 kHz is an example of the predetermined frequency range; the predetermined frequency range may be any other frequency range.
FIG. 4B depicts an image of an example of a sound wave representing a sound wave ID. A sound wave 402 representing a sound wave ID is a sound wave representing a sound wave ID (for example, ID1, or the like) at a predetermined interval T1 (for example, 0.3 seconds, or so). A sound wave 402 representing a sound wave ID represents the sound wave ID using a predetermined frequency component in the frequency range lower than or equal to 10 kHz, as depicted in FIG. 4C, for example.
In the present embodiment, a method for representing a sound wave ID is not particularly limited.
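Because the representation of the sound wave ID is left open here, the following is only one hypothetical encoding, given for illustration: each bit of an ID is mapped to a short tone inside the band below 10 kHz. The frequencies, amplitude, and bit count are assumptions, not values from the text.

```python
# Hypothetical example only: a simple two-tone (FSK-like) encoding of a sound wave ID
# inside the frequency range lower than 10 kHz; the actual encoding is not specified.
import numpy as np

def make_id_wave(bits, fs=44100, duration_s=0.3, f0=6500.0, f1=7500.0, amplitude=0.1):
    n_total = int(fs * duration_s)
    n_per_bit = n_total // len(bits)
    wave = np.zeros(n_total)
    t = np.arange(n_per_bit) / fs
    for i, bit in enumerate(bits):
        tone = f1 if bit else f0          # one tone per bit value
        wave[i * n_per_bit:(i + 1) * n_per_bit] = amplitude * np.sin(2 * np.pi * tone * t)
    return wave

id_wave = make_id_wave([1, 0, 1, 1, 0, 0, 1, 0])  # an 8-bit sound wave ID of about 0.3 s
```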
For example, as depicted in FIG. 4C, when an original sound 401 and a sound wave 402 representing a sound wave ID are output simultaneously, the sound wave 402 representing the sound wave ID is masked by the original sound 401 and it is difficult for the information terminal 104 to acquire the sound wave ID.
In this regard, by outputting an original sound 401 and a sound wave 402 representing a sound wave ID at different times, the information terminal 104 can acquire the sound wave ID. However, in such a case, because the sound wave 402 representing the sound wave ID is an audible sound in the frequency range lower than or equal to 10 kHz, there is a problem that the sound wave 402 representing the sound wave ID may be audible to the user 103.
Therefore, the second broadcast apparatus 120 according to the present embodiment generates and sends a sound wave including identification information and so forth to the information terminal 104 in the frequency range lower than or equal to 10 kHz, while also reducing the influence on an audible sound.
FIGs. 5A-5D depict a sound wave generating process according to the first embodiment. FIG. 5A depicts an example of frequency components of original sounds. The second broadcast apparatus 120 acquires a frequency component included in an original sound 401 at each interval, as depicted in FIG. 5A, by, for example, analyzing the original-sound data with the analysis unit 303. In the example of FIG. 5A, an original sound 401 at a frequency extent 501 is output at an interval from time t1 to time t2; and an original sound 401 at a frequency extent 502 is output at an interval T2 from time t3 to time t4. The frequency components of the original sound 401 depicted in FIG. 5A are an example for illustration.
FIG. 5B depicts an image of an example of a frequency component of a sound wave representing a sound wave ID. A sound wave 402 representing a sound wave ID is output for a predetermined interval T1 from time t3 to time t5 (for example, 0.3 seconds) at a predetermined frequency extent 503, for example, as depicted in FIG. 5B.
The extracting unit 304 of the second broadcast apparatus 120 analyzes original-sound data using the analysis unit 303 and acquires a frequency component of an original sound 401 at each interval, for example, depicted in FIG. 5A. The extracting unit 304 extracts from an original sound 401 a section including a frequency component similar to a frequency component of a sound wave 402 representing a sound wave ID.
By way of example, at the interval T2 depicted in FIG. 5A, the original sound 401 at a frequency extent 502 is output; the length of interval T2 (from time t3 to time t4) is longer than the predetermined interval T1 (from time t3 to time t5) at which the sound wave 402 representing the sound wave ID is output as depicted in FIG. 5B. At the interval T2, the frequency extent 502 includes the predetermined frequency extent 503 at which the sound wave 402 representing the sound wave ID is output; the frequency extent 502 is approximately the same as the frequency extent 503. In such a case, according to the present embodiment, the interval T2 (from time t3 to time t4) of the original sound 401 is extracted by the extracting unit 304 as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID having the frequency extent 503.
Thus, the extracting unit 304 extracts, from the original-sound data, a section that, for example, includes the frequency extent 503 at which the sound wave 402 representing the sound wave ID is output and that is longer than the interval T1 at which the sound wave 402 is output, as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID.
In this regard, the extracting unit 304 may extract a section, as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID, in a case where, in addition to the above-described conditions, the section has the frequency component that is the same as the predetermined frequency component representing the sound wave ID and that has the sound intensity level greater than or equal to a predetermined level for a predetermined period of time.
As another example, the extracting unit 304 may extract, from the original-sound data, also a section that includes the frequency extent 503 at which the sound wave 402 representing the sound wave ID is output and that is shorter than the interval T1 at which the sound wave 402 is output, as a section including a frequency component similar to the predetermined frequency component representing the sound wave ID. An example of a process for such a case where an extracted section is shorter than the interval T1 will be described later concerning a second embodiment.
FIG. 5C depicts an image of an example of frequency components after undergoing filtering. The sound wave generating unit 305 filters out the predetermined frequency extent 503 of the original-sound data in the section of the interval T2 from time t3 to time t4 extracted by the extracting unit 304, as depicted in FIG. 5C, for example. The gap thus created in that section of the original-sound data is then used to output the sound wave 402 representing the sound wave ID.
The sound wave generating unit 305 then generates a broadcast sound wave 403, as depicted in FIG. 5D, in which, in the section of the interval T2 from time t3 to time t4, the sound wave 402 representing the sound wave ID depicted in FIG. 5B is inserted into the gap created by removing the predetermined frequency extent 503.
FIG. 5D depicts an example of the frequency components of the thus generated broadcast sound wave 403. As depicted in FIG. 5D, the broadcast sound wave 403 generated by the sound wave generating unit 305 includes the frequency components similar to the frequency components of the original sound 401 depicted in FIG. 5A, while the sound wave ID is embedded in the broadcast sound wave 403. The information terminal 104 acquires the sound wave ID from the broadcast sound wave 403 by, for example, filtering the broadcast sound wave 403 depicted in FIG. 5D to acquire the frequency extent 503.
Thus, according to the present embodiment, when the sound wave 402 representing the sound wave ID is synthesized with (i.e., is embedded in) the original sound 401 so that the broadcast sound wave 403 is generated, it is possible to minimize the influence on the original sound 401 exerted due to the synthesizing (embedding). This is because the predetermined frequency component of the original sound 401 that is replaced with the sound wave 402 representing the sound wave ID is similar to the frequency component of the sound wave 402 representing the sound wave ID.
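The replacement of FIGs. 5C and 5D can be pictured with the following sketch, assuming scipy's IIR filters; the function, its band, and id_wave (for example, data built as in the encoding sketch above) are assumptions rather than the application's actual implementation.

```python
# Sketch only: remove the original sound's components in the ID band within the extracted
# section (FIG. 5C) and insert the sound wave representing the sound wave ID (FIG. 5D).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def embed_id_in_section(original, fs, id_wave, start_s, band=(6000.0, 8000.0)):
    out = np.asarray(original, dtype=np.float64).copy()
    i0 = int(start_s * fs)
    i1 = i0 + len(id_wave)            # assumes the extracted section is at least this long
    sos = butter(8, band, btype="bandstop", fs=fs, output="sos")
    out[i0:i1] = sosfiltfilt(sos, out[i0:i1])   # filter off the frequency extent 503
    out[i0:i1] += id_wave                       # insert the sound wave 402 into the gap
    return out
```

On the terminal side, the sound wave ID would correspondingly be recovered by band-pass filtering the same band, as described above for FIG. 5D.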
Referring back to FIG. 3, the description of the functional configuration of the second broadcast apparatus according to the first embodiment will now be continued.
The output unit 306 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, the output I/Fs 207a, 207b, ..., and so forth, and outputs a broadcast sound wave generated by the sound wave generating unit 305.
The display and input control unit 307 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2, and so forth, and performs a control to display various display screen pages on the display device 209 and a control to receive the user's operations performed on the input device 208, for example.
The storage unit 308 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the storage device 203, the memory 202, and so forth, and stores various information, data, and so forth, such as, for example, original-sound data 311, sound wave data 312, and frequency-component data 313.
Original-sound data 311 is, for example, sound wave data of any one of various original sounds (for example, a voice message and an alarm sound) broadcast as a disaster broadcast. Sound wave data 312 includes one or more sets of sound wave data representing one or more sound wave IDs. Frequency-component data 313 is data representing the frequency component of each set of sound wave data 312.
With the above-described configuration, the second broadcast apparatus 120 generates broadcast sound waves obtained from embedding various sound wave IDs in various original sounds 401; the second broadcast apparatus 120 outputs the broadcast sound waves from the speakers 121a, 121b, 121c, ....
<Process flow>
Next, the process flow of a method for generating sound waves according to the first embodiment will be described.
(Sound wave generating process)
FIG. 6 is a flowchart depicting an example of a sound wave generating process according to the first embodiment. FIG. 6 depicts an example of a sound wave generating process where the second broadcast apparatus 120 embeds a sound wave ID in an original sound to generate a broadcast sound wave.
In step S601, the second broadcast apparatus 120 acquires original-sound data 311 input to the input unit 302 or original-sound data 311 stored in the storage unit 308. The original-sound data 311 may be, for example, digital data obtained from encoding an original sound 401 using an audio codec according to PCM (Pulse Code Modulation) or the like.
In step S602, the extracting unit 304 of the second broadcast apparatus 120 analyzes the frequency components of the original sound 401 using the analysis unit 303. For example, the extracting unit 304 analyzes the original-sound data using the analysis unit 303 according to time frequency analysis to acquire a frequency component of the original sound 401 at each interval, depicted in FIG. 5A, for example, and stores the acquired frequency-component data in the storage unit 308 or another storage device.
In step S603, using the frequency-component data 313 stored in the storage unit 308, the extracting unit 304 searches the original sound 401 for a section to extract, i.e., a section including a frequency component that is similar to the predetermined frequency component (of a sound wave or a set of sound wave data) representing a sound wave ID.
In step S604, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether a section including the frequency component that is similar to the predetermined frequency component representing the sound wave ID (hereinafter, simply referred to as a "section") can be extracted in step S603. In response to a section being extracted in step S603, the process proceeds to step S605. In response to a section not being extracted in step S603, the second broadcast apparatus 120 performs, in step S607, a predetermined process for when a broadcast sound wave cannot be generated.
In the present embodiment, the predetermined process for when a broadcast sound wave cannot be generated is not particularly limited. The predetermined process may be, for example, the display and input control unit 307 displaying a message indicating that a broadcast sound wave cannot be generated or a message urging the user to provide additional original-sound data. Alternatively, the predetermined process may be such that the display and input control unit 307 displays a selection screen for the user to determine whether to appropriately modify the original-sound data or whether to output the sound wave 402 representing the sound wave ID at a time when the original sound 401 is absent.
In step S605, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether the length of the extracted section is longer than or equal to a predetermined length. The predetermined length includes an interval (for example, 0.3 seconds or longer) required to embed the sound wave 402 representing the sound wave ID in the original sound 401.
In response to the length of the extracted section being longer than or equal to the predetermined length, the process proceeds to step S606. In response to the length of the extracted section being shorter than the predetermined length, the process proceeds to step S607.
In step S606, in the extracted section, the sound wave generating unit 305 of the second broadcast apparatus 120 replaces the predetermined frequency extent of the original-sound data with the sound wave data representing the sound wave ID, for example, as depicted in FIGs. 5C and 5D. As a result, a broadcast sound wave obtained from embedding the sound wave ID in the original sound 401 is generated.
Then, for example, the output unit 306 of the second broadcast apparatus 120 converts the sound wave data generated by the sound wave generating unit 305 into a sound wave signal (an analog signal) and outputs the sound wave signal to the speakers 121. Each of the speakers 121 converts the input sound wave signal into a sound wave and outputs the sound wave. Thus, the broadcasting system 100 outputs the broadcast sound wave where the sound wave ID is embedded to the facility 102.
Thus, according to the first embodiment, in the broadcasting system 100 having the limited range of available frequencies, such as emergency broadcasting equipment, information can be sent to the information terminal 104 while the influence on audible sound is reduced.
(Second Embodiment)
<Functional configuration>
FIG. 7 depicts an example of a functional configuration of the second broadcast apparatus according to the second embodiment of the present invention. The second broadcast apparatus 120 according to the second embodiment depicted in FIG. 7 includes a sound wave modifying unit 701 in addition to the functional configuration of the second broadcast apparatus 120 according to the first embodiment depicted in FIG. 3.
The sound wave modifying unit 701 may be implemented, for example, by a program executed by the CPU 201 depicted in FIG. 2 together with the sound wave processing circuit 205, and so forth. The sound wave modifying unit 701 modifies original-sound data such that the length of a section extracted by the extracting unit 304 becomes greater than or equal to a predetermined length required to embed a sound wave ID.
For example, in a case where an original sound 401 is a voice message, an affricate, where the frequency spectrum extends widely, is suitable for embedding a sound wave ID. However, in a case where an affricate included in a voice message does not have a sufficient length (for example, 0.3 seconds or longer) for embedding a sound wave ID, it is not possible to embed the sound wave ID in the affricate.
In this regard, an affricate of a voice message is one example of a sound in which to embed a sound wave ID. A suitable sound for embedding a sound wave ID may be another sound whose frequency spectrum extends widely like that of an affricate (for example, a sound effect, a sound trademark, a striking sound, music, or the like).
In a case where the length of an affricate or the like included in an original sound is shorter than the predetermined length required to embed a sound wave ID, the sound wave modifying unit 701 elongates the affricate or the like (i.e., modifies the original-sound data) so that it comes to have the predetermined length. Thus, for example, even in a case where an affricate or the like included in a voice message is originally short, a sound wave ID can be embedded in the affricate or the like. In this regard, even when an affricate or the like of approximately 0.25 seconds is elongated to a length of approximately 0.3 seconds, the possible adverse effects on the original sound are very small.
A method for elongating an affricate or the like (i.e., a time stretch) is not particularly limited. For example, a conventional time-stretch technique based on a phase vocoder can be used to elongate an affricate or the like.
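The text does not name a specific implementation; as one possibility, librosa's phase-vocoder-based time stretch could be used to lengthen a too-short section. The function below and its parameters are illustrative assumptions.

```python
# Illustrative only: lengthen a section (e.g. an affricate) from about 0.25 s to 0.3 s.
import librosa

def stretch_to_required_length(section, current_len_s, required_len_s=0.3):
    # librosa.effects.time_stretch uses a phase vocoder; a rate below 1 lengthens the
    # audio, so stretching 0.25 s to 0.3 s uses rate = 0.25 / 0.3.
    rate = current_len_s / required_len_s
    return librosa.effects.time_stretch(section, rate=rate)
```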
  <Process flow>
FIG. 8 is a flowchart depicting an example of a sound wave generating process according to the second embodiment. FIG. 8 depicts an example of a process where a plurality of sound wave IDs are embedded in original-sound data. Because steps S601 to S603 and S607 in FIG. 8 are basically the same as steps S601 to S603 and S607 of the first embodiment depicted in FIG. 6, the differences between the second embodiment and the first embodiment will be mainly described here.
In step S801, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether a section including a frequency component that is similar to the predetermined frequency component representing a sound wave ID (hereinafter, simply referred to as a "section") can be extracted in step S603. In response to a section being extracted in step S603, the process proceeds to step S802. In response to a section not being extracted in step S603, the process proceeds to step S810.
In step S802, the extracting unit 304 (or the sound wave generating unit 305) of the second broadcast apparatus 120 determines whether the length of the extracted section is greater than or equal to a predetermined length. The predetermined length includes an interval (for example, 0.3 seconds or longer) required to embed a sound wave 402 representing a sound wave ID in an original sound 401.
In response to the length of the extracted section being greater than or equal to the predetermined length (YES in step S802), the process proceeds to step S806. In response to the length of the extracted section being smaller than the predetermined length (NO in step S802), the process proceeds to step S803.
In step S803, the sound wave modifying unit 701 of the second broadcast apparatus 120 determines whether the length of the extracted section is greater than or equal to 80% of the predetermined length. In response to the length of the extracted section being greater than or equal to 80% of the predetermined length (YES in step S803), the process proceeds to step S804. In response to the length of the extracted section being smaller than 80% of the predetermined length (NO in step S803), the process proceeds to step S808. In this example, "80%" is used as the ratio of the predetermined length in the determination of step S803 as an example; and a ratio different from 80% may be used instead.
In step S804, the sound wave modifying unit 701 of the second broadcast apparatus 120 modifies the extracted section of the original-sound data to cause the section to have the predetermined length or a longer length. For example, the sound wave modifying unit 701 modifies (elongates) a section that is shorter than the predetermined length and longer than or equal to 80% of the predetermined length so that the section comes to have the predetermined length required to embed a sound wave ID or a longer length, using a time-stretch technique such as a phase vocoder.
In step S805, the sound wave modifying unit 701 stores the start time and the length (or the end time) of the modified (elongated) section in the storage unit 308 or another storage device, and the process proceeds to step S807.
In step S806, the extracting unit 304 (or the sound wave generating unit 305) stores the start time and the length (or the end time) of the extracted section in the storage unit 308 or another storage device, and proceeds to step S807.
In step S807, the sound wave generating unit 305 replaces, on the basis of the start time and the length (or the end time) of the section stored in the storage unit 308 or another storage device, the predetermined frequency component of the stored section of the original-sound data with sound wave data representing a sound wave ID. For example, the sound wave generating unit 305 replaces a portion of a section stored in the storage unit 308, the portion corresponding to the interval T1 at which the sound wave 402 representing the sound wave ID is output, with the sound wave data representing the sound wave ID.
In step S808, the sound wave generating unit 305 determines whether the remaining length of the original-sound data is longer than or equal to a threshold. The threshold is, for example, a length (for example, a length from 0.8 to 5 times the predetermined length) previously set for determining whether the remaining length of the original-sound data is sufficient for extracting a section. In a case where the remaining length of the original-sound data is longer than or equal to the threshold, the process returns to step S602 so that the process starting from step S602 is performed on the remaining portion of the original-sound data. In a case where the remaining length of the original-sound data is shorter than the threshold, the process proceeds to step S809.
In step S809, the output unit 306 of the second broadcast apparatus 120 outputs the thus generated broadcast sound wave using the speakers 121a, 121b, 121c, .... For example, the output unit 306 converts the sound wave data generated by the sound wave generating unit 305 into a sound wave signal and outputs the sound wave signal to the speakers 121a, 121b, and 121c, .... Thus, the second broadcast apparatus 120 outputs the broadcast sound wave where the plurality of sound wave IDs are embedded at the facility 102 in a case where the original-sound data has a sufficient length to embed the sound wave IDs.
In step S810, the output unit 306 of the second broadcast apparatus 120 determines whether there is a section for which replacement of sound wave data has been performed. In a case where there is a section for which replacement of sound wave data has been performed, the process proceeds to step S809. In a case where there is no section for which replacement of sound wave data has been performed, the process proceeds to step S607.
As described above, according to the second embodiment, the second broadcast apparatus 120 can embed a plurality of sound wave IDs in a case where an original sound 401 has a sufficient length. In addition, even in a case where a section extracted from the original-sound data is shorter than the predetermined length required to embed a sound wave ID, the second broadcast apparatus 120 can modify the original-sound data to cause the section to have a length longer than or equal to the predetermined length (a minimal sketch of this per-section decision follows).
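As a minimal sketch of the per-section decision in steps S802 through S804 and S808: the 0.3-second length and the 80% ratio follow the description above, while the constant and function names are illustrative assumptions.

```python
# Illustrative constants and helper; not part of the application.
REQUIRED_LEN_S = 0.3   # length needed to embed one sound wave ID
MIN_RATIO = 0.8        # sections at least 80% of the required length may be elongated

def plan_for_section(section_len_s):
    """Decide how to handle one extracted section."""
    if section_len_s >= REQUIRED_LEN_S:
        return "embed"                 # S806/S807: embed the sound wave ID as-is
    if section_len_s >= MIN_RATIO * REQUIRED_LEN_S:
        return "stretch_then_embed"    # S803-S805: elongate the section, then embed
    return "skip"                      # too short; search the remaining data (S808)
```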
(Third Embodiment)
Concerning the third embodiment of the present invention, an example where the second broadcast apparatus 120 stores plural sets of sound wave data 312 that represent the same information using different frequency components in the storage unit 308 will be described.
FIG. 9 depicts an example of a sound wave representing a sound wave ID according to the third embodiment. In FIG. 9, an original sound 401 is output in, for example, the frequency range lower than or equal to 10 kHz, similarly to the first embodiment. The second broadcast apparatus 120 according to the third embodiment is capable of outputting any one of a plurality of sound waves 402a, 402b, and 402c representing the same information, using corresponding sets of sound wave data. In this regard, the sound wave IDs that the sound waves 402a, 402b, and 402c respectively represent may be the same as each other. However, as long as the sound waves 402a, 402b, and 402c substantively represent the same information, the sound wave IDs may be different from each other partially or completely. The number of sound waves 402a, 402b, and 402c is not limited to the three as in the present example and may be any number greater than or equal to 2.
As an example, referring to FIG. 9, in a case where there is a loud noise (for example, a voice of a person, a footstep, a siren sound of a fire engine, or the like) in the frequency range of the sound wave 402a, it is difficult for the second broadcast apparatus 120 to send the sound wave ID to the information terminal 104 using the sound wave 402a.
As another example, referring to FIG. 9, in a case where the original sound 401 does not include a frequency component similar to a frequency component of the sound wave 402c, it is difficult for the second broadcast apparatus 120 to send to the information terminal 104 the sound wave ID using the sound wave 402c.
Therefore, according to the third embodiment, the second broadcast apparatus 120 determines a set of sound wave data to be used to generate a sound wave from among plural sets of sound wave data 312 depending on original-sound data or sound waves input from the outside (for example, sound waves collectable at the facility 102).
<Functional configuration>
FIG. 10 depicts an example of a functional configuration of the second broadcast apparatus according to the third embodiment. As depicted in FIG. 10, the second broadcast apparatus 120 according to the third embodiment includes a determining unit 1001 in addition to the functional configuration of the second broadcast apparatus 120 according to the first embodiment or the second embodiment described above.
For example, the determining unit 1001 may be implemented by a program executed by the CPU 201 depicted in FIG. 2, and so forth, and determines a set of sound wave data to be used to generate a sound wave from among plural sets of sound wave data 312 stored in the storage unit 308 depending on, for example, original-sound data or sound waves input from the outside.
<Process flow>
FIGs. 11A and 11B are flowcharts depicting examples of a sound wave determining process to determine a sound wave representing a sound wave ID according to the third embodiment.
(Example of sound wave determining process)
FIG. 11A depicts an example of a sound wave determining process to determine a sound wave representing a sound wave ID. More specifically, FIG. 11A depicts an example of a process where the determining unit 1001 determines a set of sound wave data for the sound wave generating unit 305 to generate a sound wave representing a sound wave ID, on the basis of sound waves collected at the facility 102 using, for example, an external microphone. Sound waves collected at the facility 102 are an example of sound waves input from the outside.
In step S1101, the determining unit 1001 of the second broadcast apparatus 120 acquires sound waves (digital data or analog audio signals) collected at the facility 102 using the input unit 302. In a case where the acquired sound waves are analog audio signals, the input unit 302 converts the acquired sound waves into digital data.
In step S1102, the determining unit 1001 analyzes frequency components of the acquired sound waves using the analysis unit 303.
In step S1103, using the result of the analysis performed in step S1102, the determining unit 1001 selects, from among the plural sets of sound wave data 312 representing a sound wave ID stored in the storage unit 308, a set of sound wave data in a frequency range where the sound wave environment is satisfactory. For example, as described above with reference to FIG. 9, in a case where it is determined from the analysis result of step S1102 that there is a loud noise (for example, a voice of a person, a footstep, a siren sound of a fire engine, or the like) in the frequency range of the sound wave 402a, the determining unit 1001 selects the set of sound wave data corresponding to the sound wave 402b or the sound wave 402c. Alternatively, the determining unit 1001 may select the set of sound wave data corresponding to the sound wave in the frequency range having the lowest noise level from among the plurality of sound waves 402a, 402b, and 402c.
In step S1104, the second broadcast apparatus 120 performs a sound wave generating process according to the first embodiment depicted in FIG. 6 or a sound wave generating process according to the second embodiment depicted in FIG. 8 to output the sound wave using the set of sound wave data 312 thus selected and determined by the determining unit 1001.
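For illustration only, and not as part of the embodiments, the following Python sketch outlines the selection of steps S1101 through S1103. It assumes that each candidate set of sound wave data (corresponding to the sound waves 402a, 402b, and 402c) is associated with a known frequency band within the predetermined frequency range, and that the sound collected at the facility 102 is available as a sampled signal; the band values, the sampling rate, and the names band_power and select_band_by_noise are hypothetical and do not appear in the embodiments.

import numpy as np

# Hypothetical frequency bands (Hz) assigned to the sound waves 402a, 402b, and 402c.
CANDIDATE_BANDS = {
    "402a": (6_000.0, 7_000.0),
    "402b": (7_000.0, 8_000.0),
    "402c": (8_000.0, 9_000.0),
}

def band_power(signal: np.ndarray, fs: float, band: tuple) -> float:
    """Mean spectral power of the signal inside the given (low, high) band in Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(spectrum[mask].mean()) if mask.any() else 0.0

def select_band_by_noise(collected: np.ndarray, fs: float) -> str:
    """Steps S1102 and S1103: choose the candidate band with the lowest ambient noise."""
    noise = {name: band_power(collected, fs, band) for name, band in CANDIDATE_BANDS.items()}
    return min(noise, key=noise.get)

For example, select_band_by_noise(collected_samples, 48_000) would return "402b" or "402c" when a loud siren masks the band of the sound wave 402a, which corresponds to the selection described for step S1103.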
(Another Example of sound wave determining process)
FIG. 11B depicts another example of a sound wave determining process to determine a sound wave representing a sound wave ID. More specifically, FIG. 11B depicts an example of a process where the determining unit 1001 determines a set of sound wave data for the sound wave generating unit 305 to generate a sound wave representing a sound wave ID on the basis of original-sound data.
In step S1111, the determining unit 1001 of the second broadcast apparatus 120 acquires original-sound data 311 input to the input unit 302 or original-sound data 311 stored in the storage unit 308.
In step S1112, the determining unit 1001 analyzes frequency components of the thus acquired original-sound data using the analysis unit 303.
In step S1113, the determining unit 1001 selects, from among the plural sets of sound wave data 312 representing the sound wave ID stored in the storage unit 308, the set of sound wave data whose frequency range is most strongly included in the original sound, for example.
In step S1114, the second broadcast apparatus 120 performs a sound wave generating process according to the first embodiment depicted in FIG. 6 or a sound wave generating process according to the second embodiment depicted in FIG. 8 to output the sound wave using the set of sound wave data 312 thus selected and determined by the determining unit 1001.
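The selection of step S1113 can be sketched in the same hypothetical terms, reusing CANDIDATE_BANDS and band_power from the previous illustrative sketch: among the candidate bands, the one in which the original-sound data carries the most energy is chosen, so that the frequency component to be replaced is actually present in the original sound.

def select_band_by_original_sound(original: np.ndarray, fs: float) -> str:
    """Step S1113: choose the candidate band most strongly contained in the original sound."""
    power = {name: band_power(original, fs, band) for name, band in CANDIDATE_BANDS.items()}
    return max(power, key=power.get)

A combined criterion, for instance ranking the candidate bands by the ratio of original-sound power to ambient-noise power, would correspond to combining the two processes as discussed below.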
The processes depicted in FIGs. 11A and 11B are examples of a sound wave determining process performed by the determining unit 1001. For example, the determining unit 1001 may combine the process depicted in FIG. 11A and the process depicted in FIG. 11B to determine a set of sound wave data to be used from among the plural sets of sound wave data 312 stored in the storage unit 308 depending on both original-sound data and sound waves input from the outside.
Furthermore, the determining unit 1001 may determine plural sets of sound wave data representing the same sound wave ID and the second broadcast apparatus 120 may output the corresponding plurality of sound waves (for example, the sound waves 402b and 402c) representing the sound wave ID.
(Information Terminal)
Next, the information terminal 104 used in the broadcasting system 100 according to any one of the first through third embodiments will be briefly described.
<Hardware configuration>
FIG. 12 depicts an example of the hardware configuration of the information terminal 104. The information terminal 104 is an information processing apparatus having a configuration of a computer such as a smartphone, a tablet terminal, or the like. The information terminal 104 includes a CPU 1201, a memory 1202, a storage device 1203, a communication I/F 1204, a display and input device 1205, a sound input and output I/F 1206, a microphone 1207, a speaker 1208, a system bus 1209, and so forth as depicted in FIG. 12.
The CPU 1201 is an arithmetic and logic unit which implements various functions of the information terminal 104 by reading programs and data from the storage device 1203 into the memory 1202 to execute corresponding processes. The memory 1202 includes, for example, a RAM used as a work area of the CPU 1201 and so forth and a ROM storing programs for booting and so forth.
The storage device 1203 is, for example, a non-volatile mass storage device such as an HDD or an SSD and stores an OS, application programs, various data, and so forth. The communication I/F 1204 is an interface for connecting the information terminal 104 to a communication network and communicating with another apparatus.
The display and input device 1205 is, for example, a device such as a touch panel display obtained by integrating an input device such as a touch panel and a display device such as a display. The display and input device 1205 may be separated into a display device and an input device.
The sound input and output I/F 1206 includes an input amplifier for amplifying a sound signal acquired by the microphone 1207, an ADC for converting an amplified sound signal into digital data, a DAC for converting digital data into a sound signal, and an output amplifier for amplifying a sound signal and outputting the sound signal to the speaker 1208.
The microphone 1207 converts an acquired sound wave into a sound signal and outputs the sound signal to the sound input and output I/F 1206. The speaker 1208 is a speaker, a receiver, or the like, and converts a sound wave signal output by the sound input and output I/F 1206 into a sound wave and outputs the sound wave.
The system bus 1209 connects each of the above-mentioned elements and transmits, for example, address signals, data signals, and various control signals.
<Functional configuration>
FIG. 13 depicts an example of a functional configuration of the information terminal 104. For example, by executing a predetermined application(s) with the CPU 1201 depicted in FIG. 12, the information terminal 104 functions as a sound wave acquiring unit 1301, a sound wave ID extracting unit 1302, a function selecting unit 1303, an emergency information providing unit 1304, an information providing unit 1305, a communication unit 1306, and a storage unit 1307.
The sound wave acquiring unit 1301 acquires a sound wave around the information terminal 104 using, for example, the microphone 1207, the sound input and output I/F 1206, and so forth depicted in FIG. 12.
The sound wave ID extracting unit 1302 extracts a sound wave ID from a sound wave acquired by the sound wave acquiring unit 1301.
The function selecting unit 1303 selects a function of an application to be executed by the information terminal 104 in accordance with a sound wave ID extracted by the sound wave ID extracting unit 1302.
For example, in response to the sound wave ID extracting unit 1302 acquiring a sound wave ID for an emergency output by the second broadcast apparatus 120, the function selecting unit 1303 enters an emergency mode and, for example, causes the emergency information providing unit 1304 to display evacuation information.
In response to the sound wave ID extracting unit 1302 acquiring a sound wave ID for a normal situation output by the first broadcast apparatus 110, the function selecting unit 1303 enters a normal mode and, for example, causes the information providing unit 1305 to display certain information from among various information according to the sound wave ID.
In response to entering the emergency mode, the emergency information providing unit 1304 displays evacuation information on, for example, the display and input device 1205 depicted in FIG. 12 on the basis of the emergency information 1311 stored in the storage unit 1307 and the sound wave ID extracted by the sound wave ID extracting unit 1302.
FIGs. 14A and 14B depict examples of evacuation information. FIG. 14A depicts an example of emergency information 1311. In the example of FIG. 14A, the emergency information 1311 includes information of "sound wave ID", "type", and "emergency information".
The "sound wave ID" is information for causing the information terminal 104 to perform a process corresponding to the sound wave ID and is included in a sound wave output by the broadcasting system 100. The "type" is information indicating the type (a normal situation or emergency) of the sound wave ID. The "emergency information" includes evacuation information corresponding to the sound wave ID, such as image data for displaying an evacuation route and link information for acquiring the image data.
In response to the type of a sound wave ID extracted by the sound wave ID extracting unit 1302 being "emergency", the function selecting unit 1303 enters an emergency mode, and the emergency information providing unit 1304 displays emergency information corresponding to the sound wave ID on, for example, the display and input device 1205 of the information terminal 104.
It is desirable to have the second broadcast apparatus 120 output a sound wave representing a different sound wave ID for each area of the facility 102. As a result, in the event of a disaster, the broadcasting system 100 can cause the information terminal 104 to display an evacuation route suitable for the particular area.
In addition, it is desirable to have the information terminal 104 store plural sets of evacuation route information indicating a plurality of different evacuation routes for the same area (for example, a general seat area A), such as the emergency information 1311 depicted in FIG. 14A. This allows the broadcasting system 100 to direct, by selecting a sound wave ID, the movements of users so as to prevent too many users from rushing to a single exit.
FIG. 14B depicts another example of emergency information 1311. In the example of FIG. 14B, information of "language setting" is added to the emergency information 1311 in addition to the information depicted in FIG. 14A.
The "language setting" is information indicating the language of "emergency information". In the example of FIG. 14B, plural sets of "emergency information" of different languages are stored corresponding to a single "sound wave ID". At a time of displaying emergency information corresponding to a sound wave ID, the information terminal 104 selectively displays, on, for example, the display and input device 1205, emergency information according to language information set by the user 103 or the language setting of the information terminal 104. Thus, the broadcasting system 100 can provide, for, for example, a foreigner having come from abroad, emergency information such as an evacuation route in an appropriate language.
Returning now to FIG. 13, the description of the functional configuration of the information terminal 104 will be continued.
In response to entering the normal mode, the information providing unit 1305 provides any of various contents (for example, information concerning a sporting competition, guidance information for an available seat or a shop, information concerning cheering, and so forth) to the user 103 according to a sound wave ID extracted by the sound wave ID extracting unit 1302.
For example, the information providing unit 1305 may display an item of provided information corresponding to a sound wave ID on, for example, the display and input device 1205 on the basis of the provided information 1312 stored in the storage unit 1307 in the same manner as the emergency information providing unit 1304 does.
As another example, the information providing unit 1305 may acquire contents corresponding to a sound wave ID extracted by the sound wave ID extracting unit 1302 from, for example, the information providing server 1320 and display the contents on, for example, the display and input device 1205.
The communication unit 1306 may be implemented, for example, by a program executed by the CPU 1201 depicted in FIG. 12, the communication I/F 1204, and so forth. The communication unit 1306 connects the information terminal 104 to a communication network 1330 and communicates with, for example, the information providing server 1320.
The storage unit 1307 may be implemented, for example, by a program executed by the CPU 1201 depicted in FIG. 12, the storage device 1203, the memory 1202, and so forth, and stores various information such as the emergency information 1311 and the provided information 1312.
<Process flow>
FIG. 15 is a flowchart depicting an example of a process of the information terminal 104. FIG. 15 depicts an example of a process performed by the information terminal 104 that executes an application prepared for the broadcasting system 100. At the start of the process depicted in FIG. 15, the information terminal 104 is executing the application prepared for the broadcasting system 100, and the application is in the normal mode.
In step S1501, the sound wave acquiring unit 1301 acquires sound waves around the information terminal 104.
In step S1502, the sound wave ID extracting unit 1302 searches the sound waves acquired by the sound wave acquiring unit 1301 for a sound wave ID to extract.
In step S1503, the information terminal 104 determines whether the sound wave ID extracting unit 1302 has acquired a sound wave ID. In a case where a sound wave ID cannot be acquired, the information terminal 104 proceeds to step S1501 again. In a case where a sound wave ID can be acquired, the information terminal 104 proceeds to step S1504.
In step S1504, the function selecting unit 1303 determines whether the acquired sound wave ID includes an emergency sound wave ID. In response to an emergency sound wave ID being included, the function selecting unit 1303 proceeds to step S1505. In response to an emergency sound wave ID not being included, the function selecting unit 1303 proceeds to step S1507.
In step S1505, the function selecting unit 1303 enters an emergency mode to cause the emergency information providing unit 1304 to provide evacuation information.
In step S1506, the emergency information providing unit 1304 then displays the evacuation information according to the sound wave ID on, for example, the display and input device 1205, using the emergency information 1311 depicted in FIG. 14A. Alternatively, the emergency information providing unit 1304 may display evacuation information according to the sound wave ID and the language setting on, for example, the display and input device 1205 using the emergency information 1311 depicted in FIG. 14B.
In step S1507, the function selecting unit 1303 enters a normal mode (or remains in the normal mode) to cause the information providing unit 1305 to provide provided information.
Then, in step S1508, the function selecting unit 1303 determines whether the acquired sound wave ID includes a sound wave ID for a normal situation. In response to a sound wave ID for a normal situation being included, the function selecting unit 1303 proceeds to step S1509. In response to a sound wave ID for a normal situation not being included, the process ends.
In step S1509, the information providing unit 1305 displays contents according to the sound wave ID (for example, information concerning a sporting competition, guidance information for an available seat or a shop, or information concerning cheering) on, for example, the display and input device 1205. Alternatively, the information providing unit 1305 may display contents according to the sound wave ID and the language setting on, for example, the display and input device 1205.
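As a rough sketch of the branching of FIG. 15 (steps S1503 through S1509), the following Python fragment reuses the hypothetical table and helper from the previous sketch; the show callable stands in for the display and input device 1205, and the handling of the provided information is reduced to a placeholder. It is an illustration under those assumptions, not the implementation of the embodiment.

from typing import Callable, Optional

def handle_sound_wave_id(sound_wave_id: Optional[str],
                         show: Callable[[str], None] = print,
                         language: str = "en") -> str:
    """One pass of FIG. 15 after step S1502: decide the mode for an extracted sound wave ID."""
    if sound_wave_id is None:                                    # S1503: no ID extracted, keep listening
        return "listening"
    entry = EMERGENCY_INFORMATION_1311.get(sound_wave_id)
    if entry is not None and entry["type"] == "emergency":       # S1504
        info = evacuation_information(sound_wave_id, language)   # S1505, S1506
        if info is not None:
            show(info)
        return "emergency mode"
    show("provided information for " + sound_wave_id)            # S1507 to S1509 (placeholder)
    return "normal mode"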
As described above, according to the broadcasting system 100 in accordance with the above-described embodiments, even when a disaster occurs at the facility 102 and the broadcasting equipment at the facility is switched to act as emergency broadcasting equipment, the user 103 of the information terminal 104 can be provided with appropriate evacuation information, for example.
In the above-described embodiments, it is assumed that the range of available frequencies of the second broadcast apparatus 120, the speakers 121a, 121b, 121c, and so forth is limited, as depicted in FIG. 1. However, the embodiments are not limited to the case where the broadcast apparatuses and the speakers are separately provided as in the above-described system.
FIG. 16 depicts another example of the system configuration of the broadcasting system according to any one of the above-described embodiments. The embodiments can also be applied to a broadcasting system 100 depicted in FIG. 16 where an amplifier 1601 for outputting a sound wave signal and the speakers 121a, 121b, 121c, and so forth are shared by the first broadcast apparatus 110 and the second broadcast apparatus 120. Thus, the speakers 121 may be shared among different guidance broadcasts; in the event of a disaster or the like, for example, a switch 1602 or the like may be used to disconnect the first broadcast apparatus 110, and the range of available frequencies may be limited through the amplifier 1601.
Further, the embodiments can be applied to a broadcasting system 100 including a single broadcast apparatus 101 and a plurality of speakers 121a, 121b, 121c, .... In this case, the broadcast apparatus 101 limits the range of available frequencies of sound wave signals that are output when a disaster or the like occurs.
According to the embodiments of the present invention, in a broadcasting system 100 having a limited range of available frequencies, such as emergency broadcasting equipment, information can be sent to an information terminal 104 while adverse effects on audible sound are reduced.
Thus, the sound wave generators, the broadcasting systems, the methods for generating sound waves, and the programs have been described with reference to the preferred embodiments. However, the present invention is not limited to the specific embodiments, and various modifications, substitutions, and so forth, may be made without departing from the scope of the claimed invention.
The present application is based on and claims priority to Japanese patent application No. 2019-043193, filed on March 8, 2019 and Japanese patent application No. 2020-034341, filed on February 28, 2020. The entire contents of Japanese patent application No. 2019-043193 and Japanese patent application No. 2020-034341 are hereby incorporated herein by reference.

100 Broadcasting system
101 Broadcast apparatus
110 First broadcast apparatus
120 Second broadcast apparatus (sound wave generator)
304 Extracting unit
305 Sound wave generating unit
308 Storage unit
701 Sound wave modifying unit
1001 Determining unit

[PTL 1]  Japanese Laid-Open Patent Application No. 2012-227909

Claims (11)

  1.     A sound wave generator for generating a sound wave in a broadcasting system for broadcasting a sound wave that is within a predetermined frequency range, the sound wave generator comprising:
    a storage unit configured to store sound wave data that represents information using a predetermined frequency component that is within the predetermined frequency range;
    an extracting unit configured to analyze original-sound data of the broadcasting to extract a section including a frequency component that is similar to the predetermined frequency component; and
    a sound wave generating unit configured to, in the extracted section, replace the frequency component of the original-sound data that is similar to the predetermined frequency component with the sound wave data, to generate the sound wave where the information is embedded.
  2.     The sound wave generator according to claim 1, wherein
    the extracting unit is configured to extract, from the original-sound data, the section including a frequency extent at which the sound wave representing the information is output.
  3.     The sound wave generator according to claim 1 or 2, further comprising
    a sound modifying unit configured to, in response to a length of the section of the original-sound data being shorter than a predetermined length required to embed the information, modify the original-sound data so that the length of the section becomes greater than or equal to the predetermined length.
  4.     The sound wave generator according to any one of claims 1-3, wherein
    the sound wave data stored by the storage unit includes plural sets of sound wave data that represent the information using different frequency components, and
    the sound wave generator further comprises a determining unit configured to determine, according to the original-sound data or a sound wave input from the outside, a set of sound wave data to be used from among the plural sets of sound wave data.
  5.     The sound wave generator according to any one of claims 1-4, wherein
    the broadcasting system includes emergency broadcasting equipment for, in an emergency, outputting a predetermined alarm sound with the sound wave within the predetermined frequency range, and
    the sound wave generator is configured to generate the predetermined alarm sound where the information is embedded.
  6.     The sound wave generator according to any one of claims 1-5, wherein
    the information embedded in the sound wave includes identification information to cause an information terminal that receives the sound wave to execute a predetermined process.
  7.     The sound wave generator according to claim 6, wherein
    the predetermined process includes a process of displaying an evacuation route corresponding to the identification information.
  8.     The sound wave generator according to any one of claims 1-7, wherein
    the sound wave generating unit is further configured to, in the section of the original-sound data, synthesize the original-sound data from which the predetermined frequency component has been removed and the sound wave data, to generate the sound wave where the information is embedded.
  9.     A broadcasting system including the sound wave generator according to any one of claims 1-8.
  10.     A method for generating, within a predetermined frequency range, a sound wave where information is embedded in an original sound, the method comprising:
    storing, by a sound wave generator configured to generate the sound wave, sound wave data that represents information using a predetermined frequency component that is within the predetermined frequency range;
    analyzing, by the sound wave generator, original-sound data of the broadcasting to extract a section including a frequency component that is similar to the predetermined frequency component; and
    in the extracted section, replacing, by the sound wave generator, the frequency component of the original-sound data that is similar to the predetermined frequency component with the sound wave data, to generate the sound wave where the information is embedded.
  11.     A program causing a sound wave generator configured to generate, within a predetermined frequency range, a sound wave where information is embedded in an original sound to execute
    storing sound wave data that represents information using a predetermined frequency component that is within the predetermined frequency range;
    analyzing original-sound data to extract a section including a frequency component that is similar to the predetermined frequency component; and
    in the extracted section, replacing the frequency component of the original-sound data that is similar to the predetermined frequency component with the sound wave data, to generate the sound wave where the information is embedded.
PCT/JP2020/009631 2019-03-08 2020-03-06 Sound wave generator, broadcasting system, method for generating sound wave, and program WO2020184423A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019043193 2019-03-08
JP2019-043193 2019-03-08
JP2020-034341 2020-02-28
JP2020034341A JP2020150538A (en) 2019-03-08 2020-02-28 Sound wave generation device, broadcast system, sound wave generation method and program

Publications (1)

Publication Number Publication Date
WO2020184423A1

Family

ID=69904139

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/009631 WO2020184423A1 (en) 2019-03-08 2020-03-06 Sound wave generator, broadcasting system, method for generating sound wave, and program

Country Status (1)

Country Link
WO (1) WO2020184423A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100114344A1 (en) * 2008-10-31 2010-05-06 France Telecom Communication system incorporating ambient sound pattern detection and method of operation thereof
JP2012227909A (en) 2011-04-05 2012-11-15 Yamaha Corp Information providing system, portable terminal device, identification information resolution server, distribution server, and program
US9454343B1 (en) * 2015-07-20 2016-09-27 Tls Corp. Creating spectral wells for inserting watermarks in audio signals
US20170153117A1 (en) * 2015-11-30 2017-06-01 Ricoh Company, Ltd. Information providing system, mounted apparatus, and information processing apparatus
JP2019043193A (en) 2017-08-30 2019-03-22 マツダ株式会社 Vehicle control device
JP2020034341A (en) 2018-08-28 2020-03-05 東京瓦斯株式会社 Air quality providing system and program

Similar Documents

Publication Publication Date Title
JP5103974B2 (en) Masking sound generation apparatus, masking sound generation method and program
KR101902426B1 (en) System, method and recordable medium for providing related contents at low power
JP2008233671A (en) Sound masking system, masking sound generation method, and program
US10536786B1 (en) Augmented environmental awareness system
JP7140542B2 (en) SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
US20160277834A1 (en) Sound Masking Apparatus and Sound Masking Method
JP4660275B2 (en) Information embedding apparatus and method for acoustic signal
US8793128B2 (en) Speech signal processing system, speech signal processing method and speech signal processing method program using noise environment and volume of an input speech signal at a time point
US20140358528A1 (en) Electronic Apparatus, Method for Outputting Data, and Computer Program Product
JP2001148670A (en) Method and device for transmitting acoustic signal
JP2007256498A (en) Voice situation data producing device, voice situation visualizing device, voice situation data editing apparatus, voice data reproducing device, and voice communication system
WO2020184423A1 (en) Sound wave generator, broadcasting system, method for generating sound wave, and program
JP2016005268A (en) Information transmission system, information transmission method, and program
JP4175376B2 (en) Audio signal processing apparatus, audio signal processing method, and audio signal processing program
KR102262634B1 (en) Method for determining audio preprocessing method based on surrounding environments and apparatus thereof
WO2017061278A1 (en) Signal processing device, signal processing method, and computer program
JP2020150538A (en) Sound wave generation device, broadcast system, sound wave generation method and program
KR101592518B1 (en) The method for online conference based on synchronization of voice signal and the voice signal synchronization process device for online conference and the recoding medium for performing the method
JP2006195061A (en) Information embedding device for acoustic signal, information extracting device from acoustic signal and acoustic signal reproducing device
JP2011199698A (en) Av equipment
WO2019234952A1 (en) Speech processing device and translation device
JP2006050045A (en) Moving picture data edit apparatus and moving picture edit method
Haywood et al. Effects of inducer continuity on auditory stream segregation: Comparison of physical and perceived continuity in different contexts
JP6291119B1 (en) Delivery data creation device, advertisement data creation device, and data creation system
JP7259435B2 (en) Terminal device, information processing system, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20713110

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20713110

Country of ref document: EP

Kind code of ref document: A1