WO2017212981A1 - Information processing device, information service method, and program - Google Patents

Information processing device, information service method, and program

Info

Publication number
WO2017212981A1
Authority
WO
WIPO (PCT)
Prior art keywords
character string
information
distribution
acquisition unit
information acquisition
Application number
PCT/JP2017/020081
Other languages
French (fr)
Japanese (ja)
Inventor
貴裕 岩田
岩瀬 裕之
優樹 瀬戸
Original Assignee
Yamaha Corporation (ヤマハ株式会社)
Application filed by Yamaha Corporation
Publication of WO2017212981A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition

Definitions

  • the present invention relates to a technique for providing information to a user.
  • Patent Literature 1 discloses a technique for distributing content corresponding to a position of a terminal device registered in advance as a distribution target.
  • an object of the present invention is to solve a problem assumed in a situation where information is provided to a user.
  • In a preferred aspect of the present invention, an information processing apparatus includes a first information acquisition unit that acquires a first character string according to an instruction from a user, a second information acquisition unit that acquires a second character string that is partially different from the first character string, a distribution control unit that causes a transmission device to transmit distribution information indicating the second character string, and a display control unit that causes a display device to display the second character string.
  • An information providing method is an information providing method in a computer system including a display device, and the information providing method acquires a first character string according to an instruction from a user. Then, a second character string that is partially different from the first character string is acquired, distribution information indicating the second character string is transmitted to the transmission device, and the second character string is displayed on the display device.
  • A program according to a preferred aspect of the present invention causes a computer system including a display device to execute a process of acquiring a first character string according to an instruction from a user, a process of acquiring a second character string that is partially different from the first character string, a process of causing a transmission device to transmit distribution information indicating the second character string, and a process of displaying the second character string on the display device.
  • FIG. 1 is a block diagram of an information providing system according to a first embodiment of the present invention. It is a block diagram of an information processing device. It is a schematic diagram of a guidance table. It is a schematic diagram of an operation screen. It is a schematic diagram of an operation screen. It is a schematic diagram of an operation screen. It is a flowchart of the operation of the information processing device.
  • FIG. 1 is a configuration diagram of an information providing system 10 according to the first embodiment of the present invention.
  • An information providing system 10 according to the first embodiment is a computer system for providing information to a user HA of a transportation facility such as a train or a bus, and includes an information processing device 12 and a sound emitting device 14.
  • the sound emitting device 14 is an acoustic system that is installed in a facility of the transportation service (for example, inside a station or a train) and emits a voice for guidance related to the transportation service (hereinafter referred to as "guidance voice") V.
  • Various voices related to transportation include voices that announce the arrival or departure of trains or buses, voices that guide transfers at a stop or to another route, voices that alert passengers when boarding or alighting, and voices that notify of lost children.
  • the transportation user HA carries the terminal device 20.
  • the terminal device 20 is a portable information terminal such as a mobile phone or a smartphone. In practice, a large number of users HA can use the service of the information providing system 10, but in the following description, attention is paid to one terminal device 20 for convenience.
  • the information processing apparatus 12 of the information providing system 10 in FIG. 1 is an information terminal possessed and used by, for example, an employee HB who manages or operates the transportation service (hereinafter referred to as "user HB of the information processing apparatus"). Specifically, various information terminals such as a smartphone or a tablet terminal can be used as the information processing apparatus 12.
  • the information processing device 12 is connected to the sound emitting device 14 by wire or wirelessly. Note that the information processing device 12 and the sound emitting device 14 may be configured integrally.
  • FIG. 2 is a configuration diagram of the information providing system 10.
  • the information processing apparatus 12 includes a control device 32, a storage device 34, a display device 36, and an operation device 38.
  • the control device 32 is a processing circuit that controls the overall operation of the information processing device 12, and includes, for example, a CPU (Central Processing Unit).
  • the display device 36 is composed of a liquid crystal display panel, for example, and displays various images under the control of the control device 32.
  • the operation device 38 is an input device that receives an instruction from the user HB of the information processing device. For example, a plurality of operators operated by the user HB or a touch panel that detects contact with the display surface of the display device 36 is preferably used as the operation device 38.
  • the storage device 34 stores a program executed by the control device 32 and various data used by the control device 32.
  • a known recording medium such as a magnetic recording medium or a semiconductor recording medium, or a combination of a plurality of types of recording media is arbitrarily employed as the storage device 34.
  • the storage device 34 of the first embodiment stores the guide table TA of FIG.
  • a plurality of registered character strings R (R1, R2, ...) corresponding to different contents of the guidance voice V are registered in the guidance table TA.
  • Each registered character string R includes one or more insertion sections. In FIG. 3, each insertion section is represented by square brackets []. By inserting various words (hereinafter referred to as "insertion phrases") into the insertion sections of a registered character string R, a character string (hereinafter referred to as "guide character string") X representing the pronunciation content of the guidance voice V is generated.
  • FIG. 3 shows, as an example, a registered character string R1 for notifying of a lost child in a transportation facility: "We have found a lost child [Age] [Outfit], [Name]. If you are looking for this child, please come to the customer service desk." By inserting an appropriate insertion phrase into each insertion section of the registered character string R1, a guide character string X such as "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." is generated.
  • the registered character string R is a fixed character string (typically a sentence) that is common to a plurality of guide character strings X having different insertion phrases.
  • the insertion phrase is a character string that is selected for each guidance voice V and inserted into the insertion section of the registered character string R.
  • a registered character string R that does not include an insertion section can also be registered in the guidance table TA.
  • In the guidance table TA, a plurality of distribution character strings Y (Y1, Y2, ...) corresponding to different registered character strings R are registered together with identification information DY (DY1, DY2, ...) of each distribution character string Y.
  • Any one distribution character string Y is a character string that is similar in content (meaning) to the corresponding guide character string X but partially different from the guide character string X. Specifically, a character string in which the insertion phrases are deleted from the guide character string X is registered in the guidance table TA as the distribution character string Y. In other words, the distribution character string Y is a character string obtained by deleting highly confidential parts related to an individual (particularly a name or the like) from the guide character string X. For example, for the guide character string X "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk.", the distribution character string Y1 "We have found a lost child. If you are looking for a child, please come to the customer service desk.", in which the insertion phrases are deleted, is stored in the storage device 34.
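As a rough illustration of this template mechanism, the following sketch fills the insertion sections of a registered character string R to obtain a guide character string X, and strips them to obtain a distribution character string Y. The function names and the regex-based punctuation cleanup are assumptions for illustration; in the patent, the distribution character string Y is registered in the guidance table TA rather than derived mechanically.

```python
import re

# Hypothetical entry of the guidance table TA: a registered character
# string R whose [bracketed] parts are the insertion sections.
REGISTERED_R1 = ("We have found a lost child [Age] [Outfit], [Name]. "
                 "If you are looking for this child, please come to the "
                 "customer service desk.")

def make_guide_string(registered, phrases):
    """Guide character string X: fill each insertion section in order."""
    parts = iter(phrases)
    return re.sub(r"\[[^\]]*\]", lambda m: next(parts), registered)

def make_distribution_string(registered):
    """Distribution character string Y: delete the insertion sections
    (and tidy the punctuation they leave behind)."""
    y = re.sub(r"\s*\[[^\]]*\]", "", registered)
    y = re.sub(r"\s*,\s*\.", ".", y)      # ",." left by a deleted [Name]
    return re.sub(r"\s{2,}", " ", y).strip()

x = make_guide_string(
    REGISTERED_R1, ["of 4 years old", "in red clothes", "named Yuuki Suzuki"])
y = make_distribution_string(REGISTERED_R1)
```

Note that the mechanically derived Y keeps the wording "this child", whereas the patent's registered Y1 says "a child"; registering Y separately, as the patent does, allows such rewording.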
  • the control device 32 in FIG. 2 executes a program stored in the storage device 34, thereby realizing a plurality of functions (a first information acquisition unit 42, a second information acquisition unit 44, a display control unit 46, a voice synthesis unit 52, and a distribution control unit 54).
  • a configuration in which a part of the function of the control device 32 is realized by a dedicated electronic circuit, or a configuration in which the function of the control device 32 is distributed to a plurality of devices may be employed.
  • the first information acquisition unit 42 acquires a guide character string X (an example of the first character string) according to an instruction from the user HB.
  • By appropriately operating the operation device 38, the user HB can arbitrarily designate one desired registered character string R among the plurality of registered character strings R registered in the guidance table TA, and the insertion phrase to be inserted into each insertion section of that registered character string R. The first information acquisition unit 42 generates the guide character string X by inserting the insertion phrases instructed by the user HB into the insertion sections of the designated registered character string R.
  • the second information acquisition unit 44 acquires a distribution character string Y (an example of the second character string) that is partially different from the guide character string X acquired by the first information acquisition unit 42. Specifically, the second information acquisition unit 44 acquires from the guidance table TA, among the plurality of distribution character strings Y registered in the guidance table TA, the distribution character string Y corresponding to the registered character string R constituting the guide character string X designated by the user HB. That is, a character string obtained by deleting the insertion phrases from the guide character string X is acquired as the distribution character string Y.
  • the second information acquisition unit 44 can be rephrased as an element that corrects or converts the guide character string X into the distribution character string Y.
  • A configuration can also be adopted in which the second information acquisition unit 44 generates the distribution character string Y by editing the guide character string X acquired by the first information acquisition unit 42 (for example, by deleting the insertion phrases).
  • the display control unit 46 displays an image on the display device 36.
  • the display control unit 46 of the first embodiment displays, for example, the operation screen GA of FIG. 4 and the operation screen GB of FIG. 5.
  • the operation screen GA and the operation screen GB are images for receiving instructions from the user HB.
  • a plurality of options 362 relating to the content of the guidance voice V are displayed on the operation screen GA.
  • options 362 are displayed on the operation screen GA for each of the plurality of registered character strings R registered in the guidance table TA.
  • the user HB can select an arbitrary option 362 by appropriately operating the operation device 38.
  • a guide character string X including a registered character string R corresponding to the option 362 selected by the user HB on the operation screen GA is acquired by the first information acquisition unit 42.
  • the operation screen GB is an image for the user HB to instruct or confirm the contents of the guide character string X and the distribution character string Y, and is displayed on the display device 36 after selection of the option 362 on the operation screen GA.
  • the operation screen GB includes an instruction area A1, a confirmation area A2, and a reproduction indicator A3.
  • the reproduction indicator A3 is an image (command button) for the user HB to instruct reproduction of the guidance voice V.
  • the instruction area A1 is an area for the user HB to indicate an insertion phrase to be inserted in each insertion section of the registered character string R.
  • an input field 364 is arranged in the instruction area A1 for each insertion section of the registered character string R corresponding to the option 362 selected by the user HB on the operation screen GA. Since the number and content (item names) of the insertion sections differ for each registered character string R, the number and content of the input fields 364 arranged in the instruction area A1 of the operation screen GB can differ depending on the registered character string R selected on the operation screen GA.
  • the user HB can input an insertion phrase to be inserted into each insertion section of the registered character string R for each input field 364 by appropriately operating the operation device 38.
  • the user HB can input an insertion phrase into each input field 364 either by selection from a plurality of options 366 prepared in advance or by direct character input. For example, assuming the registered character string R1 "We have found a lost child [Age] [Outfit], [Name]. If you are looking for this child, please come to the customer service desk.", for the insertion sections of "Age" and "Outfit" an insertion phrase is selected from a plurality of options 366, while for the insertion section of "Name" the insertion phrase is arbitrarily designated by character input.
  • By inserting the insertion phrases instructed on the operation screen GB into the insertion sections of the registered character string R, a guide character string X representing the pronunciation content of the guidance voice V is generated. That is, in response to instructions (input or selection) on the operation screen GB, the first information acquisition unit 42 of the first embodiment acquires the guide character string X that includes, for each of a plurality of items (insertion sections), an insertion phrase selected by the user HB from a plurality of candidates.
  • “input” in the input field 364 includes selection from a plurality of options 366 in addition to character input.
  • As mentioned above, there are also registered character strings R that do not include an insertion section. When a registered character string R that does not include an insertion section is selected on the operation screen GA, a message notifying the user HB that the registered character string R does not include an insertion phrase (that is, a variable element) is displayed in the instruction area A1, as illustrated in FIG. 6. That is, no input field 364 is displayed in the instruction area A1. Alternatively, the instruction area A1 can simply be left blank (without displaying such a message).
  • In the confirmation area A2, a guide character string X and a distribution character string Y are displayed. That is, the display control unit 46 of the first embodiment causes the display device 36 to display the guide character string X acquired by the first information acquisition unit 42 and the distribution character string Y acquired by the second information acquisition unit 44. Specifically, as illustrated in FIG. 5, the guide character string X "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." and the distribution character string Y "We have found a lost child. If you are looking for a child, please come to the customer service desk." are displayed side by side on the display device 36. Therefore, the user HB can check the guide character string X instructed by the user HB and the distribution character string Y obtained by partially changing the guide character string X while comparing them with each other.
  • the voice synthesis unit 52 generates an acoustic signal SV by voice synthesis of the guide character string X. The acoustic signal SV is a time signal that represents the waveform of the voice pronouncing the guide character string X (that is, the guidance voice V).
  • a known speech synthesis technique can be arbitrarily employed to generate the acoustic signal SV. Note that it is also possible to cause the speech synthesis server with which the information processing apparatus 12 can communicate to generate the acoustic signal SV (speech synthesis).
  • Specifically, the first information acquisition unit 42 transmits the guide character string X to the speech synthesis server, receives from the speech synthesis server the acoustic signal SV generated by the speech synthesis, and supplies it to the sound emitting device 14.
  • the speech synthesizer 52 may be omitted.
  • the distribution control unit 54 generates distribution information Q indicating the distribution character string Y acquired by the second information acquisition unit 44.
  • the distribution information Q of the first embodiment is identification information DY of the distribution character string Y.
  • the distribution control unit 54 acquires the identification information DY of the distribution character string Y acquired by the second information acquisition unit 44 as the distribution information Q from the guidance table TA.
  • the distribution control unit 54 of the first embodiment generates an acoustic signal SQ that represents the distribution information Q as an acoustic component.
  • a known technique can be arbitrarily adopted to generate the acoustic signal SQ.
  • a configuration in which the acoustic signal SQ is generated by frequency-modulating a carrier wave such as a sine wave having a predetermined frequency with the distribution information Q is suitable.
  • a configuration in which the acoustic signal SQ is generated by sequentially executing spread modulation of the distribution information Q using a spread code and frequency conversion using a carrier wave of a predetermined frequency is preferable.
  • the frequency band of the acoustic signal SQ is a frequency band in which the sound emitting device 14 can emit sound and the terminal device 20 can collect sound. Specifically, the frequency band of the acoustic signal SQ is included in a frequency band (for example, 18 kHz or more and 20 kHz or less) that exceeds the frequency band of the voice (for example, the guidance voice V) or the musical sound that the user HA of the terminal device 20 listens to in a normal environment.
  • the frequency band of the acoustic signal SQ is arbitrary.
  • the acoustic signal SQ within the audible band may be generated.
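To make the modulation step concrete, here is a minimal sketch of binary FSK, one simple form of the frequency modulation described above, placing the acoustic signal SQ in the 18-20 kHz band. The sample rate, carrier frequencies, and bit duration are illustrative assumptions; a real modem would keep phase continuous, add synchronization, and could instead use the spread modulation with a spread code that the patent also mentions.

```python
import math

RATE = 44100               # sample rate in Hz (assumption)
F0, F1 = 18500.0, 19500.0  # carriers for bits 0/1, inside 18-20 kHz
BIT_SAMPLES = 441          # samples per bit (10 ms at 44.1 kHz)

def modulate(bits):
    """Encode the bits of the distribution information Q as an
    inaudible-band FSK waveform (a sketch of the acoustic signal SQ)."""
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        # phase restarts at each bit; a real modem would keep it continuous
        samples.extend(math.sin(2 * math.pi * f * n / RATE)
                       for n in range(BIT_SAMPLES))
    return samples

sq = modulate([1, 0, 1, 1])  # four bits -> 40 ms of samples
```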
  • When the playback indicator A3 on the operation screen GB in FIG. 5 is operated by the user HB, the speech synthesizer 52 generates the acoustic signal SV and the distribution control unit 54 generates the acoustic signal SQ. That is, the acoustic signal SV and the acoustic signal SQ are supplied to the sound emitting device 14 in response to the operation of the reproduction indicator A3.
  • the sound emitting device 14 emits the sound represented by the acoustic signal SV and the acoustic signal SQ. That is, the guidance voice V represented by the acoustic signal SV and the acoustic component represented by the acoustic signal SQ are emitted from the sound emitting device 14.
  • the sound emitting device 14 of the first embodiment functions as a means (transmitting device) for transmitting the distribution information Q to the terminal device 20 by acoustic communication using sound waves as air vibration as a transmission medium.
  • the distribution control unit 54 of the first embodiment causes the distribution information Q to be transmitted by acoustic communication, using the sound emitting device 14 that emits the guidance voice V as a transmitting device. Compared to a configuration in which the distribution information Q is transmitted to the terminal device 20 by a wireless communication device separate from the sound emitting device 14, there is an advantage that the configuration of the information providing system 10 can be simplified.
  • FIG. 7 is a flowchart illustrating the operation of the control device 32 of the information processing device 12.
  • the processing in FIG. 7 is started in response to an instruction from the user HB to the operation device 38.
  • the control device 32 displays the operation screen GA of FIG. 4 on the display device 36, and accepts the selection of the registered character string R by the user HB (SA1).
  • When the registered character string R is selected (SA1), the control device 32 displays an initial operation screen GB on the display device 36 (SA2).
  • the control device 32 determines whether or not an instruction for an insertion phrase for any one of the instruction fields 364 on the operation screen GB has been received from the user HB (SA3).
  • When the instruction of the insertion phrase is received (SA3: YES), the control device 32 (first information acquisition unit 42) generates a guide character string X including the instructed insertion phrase (SA4).
  • the control device 32 acquires a distribution character string Y that is partially different from the guide character string X (SA5).
  • the control device 32 (display control unit 46) displays the guide character string X and the distribution character string Y on the display device 36 (SA6).
  • the control device 32 determines whether or not an operation for the playback indicator A3 has been received from the user HB (SA7).
  • When the reproduction indicator A3 is not operated (SA7: NO), the control device 32 returns the process to step SA3 and repeats the update of the guide character string X and the distribution character string Y according to instructions from the user HB (SA3-SA6).
  • When the playback indicator A3 is operated (SA7: YES), the control device 32 (speech synthesizer 52) generates an acoustic signal SV from the guide character string X acquired by the first information acquisition unit 42 (SA8). The control device 32 (distribution control unit 54) also generates an acoustic signal SQ of the distribution information Q indicating the distribution character string Y acquired by the second information acquisition unit 44 (SA9).
  • the control device 32 supplies the sound signal SV and the sound signal SQ to the sound emitting device 14 (SA10). Therefore, the guidance voice V that pronounces the guidance character string X is reproduced and the distribution information Q is transmitted by acoustic communication.
  • Note that the order of the generation of the acoustic signal SV (SA8) and the generation of the acoustic signal SQ (SA9) may be reversed.
  • the temporal relationship between the reproduction of the guidance voice V and the transmission of the distribution information Q is arbitrary. For example, it is possible to repeatedly transmit the distribution information Q in parallel with the reproduction of the guidance voice V.
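As a sketch of this parallel reproduction, the guidance voice signal SV and a repeated copy of the data signal SQ could be mixed sample by sample before being supplied to the sound emitting device 14. The function name and gain values below are illustrative assumptions, not the patent's implementation:

```python
def mix(sv, sq, voice_gain=1.0, data_gain=0.2):
    """Mix the guidance-voice signal SV with the data signal SQ,
    repeating SQ cyclically in parallel with the voice."""
    out = []
    for i in range(max(len(sv), len(sq))):
        v = sv[i] if i < len(sv) else 0.0   # voice sample, or silence
        q = sq[i % len(sq)] if sq else 0.0  # SQ repeats while V plays
        out.append(voice_gain * v + data_gain * q)
    return out
```

Because SQ sits above 18 kHz and SV below it, the two components occupy disjoint bands and can be separated again on the receiving side.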
  • FIG. 8 is a configuration diagram of the terminal device 20.
  • the terminal device 20 includes a sound collection device 62, a control device 64, a storage device 66, and a playback device 68.
  • the sound collection device 62 is an acoustic device (microphone) that collects ambient sounds.
  • the sound collection device 62 of the first embodiment collects the sound wave emitted from the sound emission device 14 of the information providing system 10 and generates the acoustic signal S.
  • the acoustic signal S contains the acoustic component (acoustic signal SQ) of the distribution information Q.
  • the sound collection device 62 functions as a reception device that receives the distribution information Q by acoustic communication using sound waves that are air vibrations as a transmission medium.
  • an A / D converter that converts the acoustic signal S generated by the sound pickup device 62 from analog to digital is not shown for convenience.
  • the control device 64 is a processing circuit that controls the overall operation of the terminal device 20, and includes, for example, a CPU.
  • the storage device 66 stores a program executed by the control device 64 and various data used by the control device 64.
  • the storage device 66 of the first embodiment stores a guide table TB. As illustrated in FIG. 8, a plurality of distribution character strings Y (Y1, Y2, ...) that can be specified by the distribution information Q are registered in the guide table TB together with the identification information DY (DY1, DY2, ...) of each distribution character string Y. That is, the plurality of distribution character strings Y that may be acquired by the second information acquisition unit 44 of the information processing apparatus 12 (the distribution character strings Y (Y1, Y2, ...) registered in the guide table TA) are stored in the storage device 66.
  • the control device 64 in FIG. 8 executes a program stored in the storage device 66, thereby realizing a plurality of functions (an information extraction unit 642 and a reproduction control unit 644) for presenting the distribution character string Y indicated by the distribution information Q to the user HA.
  • a configuration in which a part of the function of the control device 64 is realized by a dedicated electronic circuit, or a configuration in which the function of the control device 64 is distributed to a plurality of devices may be employed.
  • the information extraction unit 642 extracts the distribution information Q from the acoustic signal S supplied from the sound collection device 62. Specifically, the information extraction unit 642 extracts the distribution information Q by applying to the acoustic signal S a filtering process that emphasizes the band component of the frequency band containing the acoustic component of the distribution information Q, and a demodulation process corresponding to the modulation process in the distribution control unit 54.
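One way to sketch this narrow-band filtering and demodulation is the Goertzel algorithm, which measures signal power at a single frequency. The sketch below assumes the distribution information Q was encoded as binary FSK on two hypothetical carriers in the 18-20 kHz band; the frequencies, sample rate, and bit length are illustrative, and a real receiver would also need bit synchronization:

```python
import math

def goertzel_power(samples, freq, rate):
    """Signal power of `samples` at `freq` (a one-bin narrow-band filter,
    standing in for the filtering process of the extraction unit 642)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def demodulate(signal, rate=44100, f0=18500.0, f1=19500.0, bit_samples=441):
    """Recover the bits of the distribution information Q by comparing
    the power at the two carriers in each bit-length chunk."""
    bits = []
    for i in range(0, len(signal) - bit_samples + 1, bit_samples):
        chunk = signal[i:i + bit_samples]
        bits.append(1 if goertzel_power(chunk, f1, rate) >
                         goertzel_power(chunk, f0, rate) else 0)
    return bits
```

The decoded bits would then yield the identification information DY, which the reproduction control unit 644 looks up in the guide table TB.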
  • the reproduction control unit 644 causes the reproduction device 68 to reproduce the distribution character string Y corresponding to the identification information DY indicated by the distribution information Q extracted by the information extraction unit 642.
  • the reproduction device 68 reproduces the distribution character string Y designated by the reproduction control unit 644 and presents it to the user HA.
  • the playback device 68 of the first embodiment is a display device (for example, a liquid crystal display panel) that displays the distribution character string Y.
  • As described above, in the first embodiment the guidance voice V of the guide character string X corresponding to the instruction from the user HB is emitted from the sound emitting device 14, while the distribution character string Y that is partially different from the guide character string X is reproduced by the reproduction device 68 of the terminal device 20. Specifically, when the guidance voice V "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." is reproduced from the sound emitting device 14, the distribution character string Y "We have found a lost child. If you are looking for a child, please come to the customer service desk.", in which the insertion phrases are deleted from the guide character string X, is displayed on the terminal device 20.
  • the personal information can be protected by reproducing the distribution character string Y partially different from the guide character string X on the terminal device 20.
  • However, the user HB may be concerned about whether an appropriate distribution character string Y corresponding to the guide character string X instructed by the user HB is actually reproduced on the terminal device 20.
  • In the first embodiment, the distribution character string Y reproduced on the terminal device 20 is displayed on the display device 36 of the information processing apparatus 12; therefore, there is an advantage that the user HB can confirm the distribution character string Y actually reproduced on the terminal device 20.
  • Moreover, since the guide character string X and the distribution character string Y are displayed side by side, the user HB can easily grasp the relationship (for example, a difference or a match) between the content instructed by the user HB (the guide character string X) and the content reproduced by the terminal device 20 (the distribution character string Y).
  • Second Embodiment: A second embodiment of the present invention will be described. For elements whose operations or functions are the same as those in the first embodiment, detailed description of each is omitted as appropriate.
  • FIG. 9 is a configuration diagram of the information providing system 10 in the second embodiment.
  • the information processing apparatus 12 of the second embodiment includes a sound collection device 16 in addition to the same elements as those of the first embodiment.
  • the sound collection device 16 is an acoustic device (microphone) that collects ambient sounds.
  • the user HB pronounces the guidance voice V toward the sound collection device 16. Specifically, the user HB pronounces a guidance voice V whose content includes any of the plurality of registered character strings R registered in advance in the storage device 34.
  • the sound collection device 16 collects the guidance voice V pronounced by the user HB and generates an acoustic signal SV representing the guidance voice V. As illustrated in FIG. 9, the voice synthesis unit 52 of the first embodiment is omitted in the second embodiment, and the acoustic signal SV generated by the sound collection device 16 is supplied to the sound emitting device 14.
  • the A / D converter for converting the acoustic signal SV generated by the sound collection device 16 from analog to digital is not shown for convenience.
  • the first information acquisition unit 42 in FIG. 9 specifies the guide character string X by voice recognition with respect to the acoustic signal SV.
  • the guide character string X is a character string representing the content of the guide voice V pronounced by the user HB.
  • For the voice recognition of the acoustic signal SV, a known technique such as recognition processing using an acoustic model such as an HMM (Hidden Markov Model) and a language model indicating linguistic restrictions can be arbitrarily adopted. It is also possible to cause a voice recognition server with which the information processing apparatus 12 can communicate to execute the voice recognition of the acoustic signal SV.
  • the first information acquisition unit 42 transmits the acoustic signal SV to the voice recognition server, and acquires the guide character string X specified by the voice recognition by the voice recognition server from the voice recognition server. That is, the first information acquisition unit 42 may itself generate the guide character string X, or may acquire the guide character string X generated by another device.
  • the second information acquisition unit 44 of the second embodiment acquires a distribution character string Y that is partially different from the guide character string X acquired by the first information acquisition unit 42. Specifically, the second information acquisition unit 44 searches the plurality of registered character strings R registered in the guidance table TA for a registered character string R similar to the guide character string X, and acquires from the guidance table TA the distribution character string Y corresponding to that registered character string R.
  • FIG. 10 is a flowchart of processing in which the second information acquisition unit 44 of the second embodiment acquires the distribution character string Y.
  • the second information acquisition unit 44 calculates an index of similarity (hereinafter referred to as “similarity index”) with the guide character string X for each of the plurality of registered character strings R (SB1).
  • the type of the similarity index is arbitrary, but a known index such as the edit distance (Levenshtein distance), which evaluates the similarity of two character strings, is suitable as the similarity index.
  • the second information acquisition unit 44 searches the guidance table TA for the one registered character string R whose similarity, as indicated by the similarity index, is highest among the plurality of registered character strings R (SB2).
  • the registered character string R most similar to the guide character string X is searched from the guide table TA.
  • the second information acquisition unit 44 then acquires, from the guidance table TA, the distribution character string Y corresponding to the registered character string R retrieved from the guidance table TA (SB3).
  • in this way, the distribution character string Y corresponding to the registered character string R similar to the guide character string X is specified. Therefore, there is an advantage that an appropriate distribution character string Y can be selected even when the guide character string X differs from the actually pronounced content due to misrecognition in the voice recognition processing.
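As a hedged sketch of steps SB1-SB3 above, the Python fragment below computes a Levenshtein similarity index for each registered character string R and returns the distribution character string Y paired with the closest match. The table contents, function names, and data layout are illustrative assumptions, not part of the patent.

```python
# Sketch of SB1-SB3: rank registered character strings R by edit distance to
# the recognized guide string X, then return the distribution string Y paired
# with the closest match. Table contents are purely illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lookup_distribution_string(guide_x: str, guidance_table: dict) -> str:
    """Return the distribution string Y of the registered string R most similar to X."""
    best_r = min(guidance_table, key=lambda r: edit_distance(guide_x, r))
    return guidance_table[best_r]

table = {
    "the train is delayed": "Train delay notice.",
    "we have found a lost child": "Lost child notice.",
}
# A misrecognized input still maps to the intended entry.
print(lookup_distribution_string("the train is delayd", table))  # → Train delay notice.
```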
  • the operation in which the display control unit 46 displays the guide character string X and the distribution character string Y on the display device 36 is the same as in the first embodiment.
  • the first information acquisition unit 42 in FIG. 9 supplies the sound signal SV generated by the sound collecting device 16 to the sound emitting device 14 in response to an operation on the reproduction indicator A3 on the operation screen GB. Further, the distribution control unit 54 generates an acoustic signal SQ of the distribution information Q indicating the identification information DY of the distribution character string Y, triggered by an operation on the reproduction indicator A3, and supplies it to the sound emitting device 14. Therefore, as in the first embodiment, the guidance voice V of the guide character string X is reproduced from the sound emitting device 14, and the distribution information Q is transmitted by acoustic communication using the sound emitting device 14 as a transmitting device. In the second embodiment, the same effect as in the first embodiment is realized. In the second embodiment, there is an advantage that the user HB can instruct the guide character string X by sounding (speech input) to the sound collecting device 16.
  • the result of speech recognition for the acoustic signal SV is used as the guide character string X.
  • the registered character strings R registered in the guidance table TA can also be used to obtain the guide character string X.
  • specifically, the first information acquisition unit 42 searches the guidance table TA for the registered character string R most similar to the character string X0 (the guide character string X in the above-described embodiments) specified by voice recognition of the acoustic signal SV. The first information acquisition unit 42 then generates the guide character string X by inserting the insertion phrases contained in the character string X0 into the respective insertion sections of that registered character string R.
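The slot-filling step just described can be sketched as follows. The bracket notation for insertion sections and the example template are assumptions for illustration; the patent does not fix a concrete notation.

```python
# Sketch: rebuild the guide string X from a registered template whose insertion
# sections are marked with "[...]" (an assumed notation), filling each slot in
# order with phrases extracted from the recognized string X0.
import re

def build_guide_string(template: str, phrases: list) -> str:
    """Replace each [slot] in the template with the next phrase in order."""
    it = iter(phrases)
    return re.sub(r"\[[^\]]*\]", lambda m: next(it), template)

template = "We have found a lost child named [name], wearing [clothes]."
print(build_guide_string(template, ["Yuki Suzuki", "red clothes"]))
# → We have found a lost child named Yuki Suzuki, wearing red clothes.
```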
  • the machine translation unit 56 is added to the information processing apparatus 12 having the same configuration as that of the first embodiment.
  • the machine translation unit 56 converts the guide character string X in the first language (for example, English) into the guide character string X in the second language (for example, Japanese).
  • the voice synthesizer 52 generates, by voice synthesis, an acoustic signal SV representing the guidance voice V corresponding to the first-language guide character string X and the translated guidance voice V corresponding to the second-language guide character string X.
  • the guidance voice V is emitted, for example, sequentially in both the first language and the second language.
  • the display control unit 46 displays the operation screen GB of FIG.
  • the operation screen GB illustrated in FIG. 12 includes a language selection area A4 in addition to the same elements as in FIG.
  • a plurality of language indicators 368 corresponding to different languages (Japanese, English, Chinese) are displayed in the language selection area A4.
  • each language indicator 368 is an image with which the user HB sets the corresponding language to either a valid state or an invalid state.
  • the voice synthesizer 52 generates an acoustic signal SV of the guidance voice V for the language set to the valid state by the language indicator 368 and supplies it to the sound emitting device 14.
  • the acoustic signal SV is not generated for the language set to the invalid state. Therefore, the guidance voice V in the valid language among the plurality of languages is emitted from the sound emitting device 14 together with the acoustic component of the distribution information Q.
  • the guide character string X and the distribution character string Y are displayed for each language set to the valid state by the language indicator 368.
  • FIG. 12 illustrates a case where the guide character string X and the distribution character string Y are displayed in the confirmation area A2 in both English and Japanese. According to the above configuration, the user HB can confirm the contents of the guidance voice V emitted in different languages in a plurality of languages.
  • a known technique can be arbitrarily employed for synthesizing the guidance voice V by the voice synthesis unit 52.
  • examples of speech synthesis methods include a recording/editing method that uses recorded data obtained by recording in advance a voice pronouncing a specific character string, and a rule-based synthesis method, such as a unit-concatenation method, that synthesizes speech by selectively combining speech units.
  • the speech synthesizer 52 generates a speech signal by the recording and editing method for the registered character string R whose pronunciation content is known, and generates a speech signal by the rule synthesis method for an insertion phrase whose pronunciation content is variable.
  • the voice synthesizer 52 generates the acoustic signal SV of the guidance voice V by connecting a plurality of voice signals on the time axis.
  • it is also possible to generate voice signals for the default registered character strings R by the recording/editing method using recorded data, while generating voice signals for registered character strings R and insertion phrases added afterwards by the rule-based synthesis method. Although the voice quality may differ between the default registered character strings R and the newly added registered character strings R, the synthesized voice produced by the rule-based synthesis method can be brought closer to the voice quality of the recorded data by appropriately adjusting the synthesis parameters of the rule-based method applied to the new registered character strings R (particularly the variables that contribute to voice quality).
  • the guidance voice V for notifying a lost child on the premises of the transportation facility is assumed, but the pronunciation content of the guidance voice V is arbitrary.
  • for example, it is also possible to emit, from the sound emitting device 14, a guidance voice V related to the operation status of the transportation facility.
  • regarding the operation status of the transportation facility, FIG. 13 illustrates combinations of the options 362 selectable on the operation screen GA, the registered character string R corresponding to each option 362, the instruction fields 364 corresponding to the insertion sections of each registered character string R, and the options 366 selectable in each instruction field 364.
  • since the registered character string R of "Accident prevention announcement" in FIG. 13 is a fixed phrase that does not include an insertion phrase, a message notifying that the registered character string R is a fixed phrase is displayed in the instruction area A1, as illustrated.
  • in the above examples, the distribution character string Y is obtained by deleting (simplifying) the insertion phrases of the guide character string X, but the relationship between the guide character string X and the distribution character string Y is not limited to the above illustration.
  • conversely, the second information acquisition unit 44 may acquire, as the distribution character string Y, the character string in which the insertion phrases are inserted into the registered character string R, while the first information acquisition unit 42 acquires, as the guide character string X, the character string from which the insertion phrases are deleted (for example, the registered character string R).
  • the distribution character string Y including the detailed information is provided to the terminal device 20, while the simplified guide character string X having the contents omitted from each information is reproduced as the guidance voice V. Therefore, there is an advantage that the distribution character string Y can be provided to the terminal device 20 while shortening the pronunciation time of the guidance voice V.
  • it is also possible to use, as the guide character string X, a character string in which a plurality of insertion phrases are inserted into the registered character string R, and to generate, as the distribution character string Y, a character string from which some of those insertion phrases are deleted. For example, the character string "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." is used as the guide character string X, and the character string "We have found a lost child [of 4 years old] [in red clothes]. If you are looking for this child, please come to the customer service desk.", from which the name has been deleted, is used as the distribution character string Y.
  • the distribution character string Y (example of the second character string) is comprehensively expressed as a character string partially different from the guide character string X (example of the first character string).
  • one of the guide character string X and the distribution character string Y is a character string obtained by partially deleting the other, or a character string obtained by adding a specific phrase to the other.
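One minimal way to realize this "partial deletion" relationship between the two character strings is to strip the bracketed insertion phrases from the guide character string X. The bracket markup is an illustrative assumption; the name and regex below are not from the patent.

```python
# Sketch: derive a distribution string Y from a guide string X by deleting
# every bracketed insertion phrase (together with its leading space), e.g. to
# drop personal information such as a lost child's name.
import re

def simplify_for_distribution(guide_x: str) -> str:
    """Delete every "[...]" insertion phrase from X and tidy the result."""
    return re.sub(r"\s*\[[^\]]*\]", "", guide_x).strip()

x = ("We have found a lost child [named Yuki Suzuki] [in red clothes]. "
     "If you are looking for this child, please come to the customer service desk.")
print(simplify_for_distribution(x))
# → We have found a lost child. If you are looking for this child, please come to the customer service desk.
```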
  • the reproduction control unit 644 of the terminal device 20 generates the distribution character string Y with reference to a guidance table TB that associates registered character strings R with identification information and insertion phrases with identification information. Specifically, the reproduction control unit 644 specifies, from the guidance table TB, the registered character string R and the insertion phrase corresponding to the identification information indicated by the distribution information Q, and generates the distribution character string Y by inserting the insertion phrase into the registered character string R.
  • both the guide character string X and the distribution character string Y are displayed on the display device 36.
  • the display control unit 46 may display only the distribution character string Y on the display device 36 (omitting the display of the guide character string X).
  • the display of the distribution character string Y may be omitted, and the display control unit 46 may display only the guidance character string X on the display device 36.
  • the display device that displays the distribution character string Y is exemplified as the playback device 68.
  • the method for reproducing the distribution character string Y is not limited to the above examples.
  • a sound emitting device that emits a voice pronouncing the distribution character string Y may be used as the playback device 68.
  • the playback control unit 644 generates a voice signal representing the voice of the distribution character string by voice synthesis using the distribution character string Y indicated by the distribution information Q, and supplies the voice signal to the sound emitting device of the playback device 68.
  • the display of the delivery character string Y and the sound emission of the delivery character string Y may be used in combination.
  • in the above examples, the distribution character string Y is reproduced in a single language (English) on the terminal device 20.
  • however, a configuration in which the language of the distribution character string Y reproduced on the terminal device 20 can be changed is also suitable.
  • specifically, among a plurality of distribution character strings Y corresponding to the identification information DY indicated by the distribution information Q extracted by the information extraction unit 642, the playback device 68 reproduces the distribution character string Y corresponding to the language designated by the terminal device 20 (hereinafter the "designated language").
  • the designated language is, for example, a language designated by a language setting of an OS (Operating System) of the terminal device 20, or a language arbitrarily designated by the user HA of the terminal device 20.
  • the distribution character string Y is reproduced in a designated language that is convenient for the user HA of the terminal device 20, which is convenient for foreigners who have difficulty understanding the language of the guidance voice V, for example.
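A hypothetical sketch of this designated-language lookup: the terminal resolves the identification information DY against its local table for its designated language (e.g. the OS language setting), falling back to a default when no entry exists for that language. The table contents, key layout, and fallback policy are all assumptions for illustration.

```python
# Sketch: (DY, language) -> distribution string Y, with a fallback default.
# Entries are illustrative; a real table would come from the guidance table TB.

TABLE = {
    ("D001", "en"): "The next train is delayed by ten minutes.",
    ("D001", "ja"): "次の電車は10分遅れています。",
}

def resolve_distribution_string(dy: str, designated: str, default: str = "en") -> str:
    """Pick the Y for the designated language, else fall back to the default."""
    return TABLE.get((dy, designated), TABLE[(dy, default)])

print(resolve_distribution_string("D001", "ja"))  # designated language honored
print(resolve_distribution_string("D001", "fr"))  # missing language falls back to English
```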
  • the guidance table TB used in the terminal device 20 is distributed from a specific distribution server via a communication network such as a mobile communication network or the Internet and stored in the storage device 66.
  • the time when the guidance table TB is distributed from the distribution server to the terminal device 20 is arbitrary.
  • for example, the guidance table TB can be periodically updated at a predetermined cycle.
  • it is also possible for the terminal device 20 to inquire of the distribution server whether the guidance table TB has been updated and, when there is an update, for the latest guidance table TB to be distributed from the distribution server to the terminal device 20.
  • a distribution server that communicates with the terminal device 20 via a communication network such as a mobile communication network or the Internet may hold the guidance table TB.
  • the reproduction control unit 644 of the terminal device 20 transmits a distribution request including the distribution information Q extracted by the information extraction unit 642 to the distribution server.
  • the distribution server specifies the distribution character string Y indicated by the distribution information Q in the distribution request received from the terminal device 20 from the guidance table TB and transmits it to the requesting terminal device 20.
  • the reproduction control unit 644 of the terminal device 20 causes the reproduction device 68 to reproduce the distribution character string Y received from the distribution server.
  • in this configuration, there is an advantage that the terminal device 20 need not hold the guidance table TB (the plurality of distribution character strings Y).
  • on the other hand, when the terminal device 20 holds the guidance table TB, it is not necessary to communicate with the distribution server when the distribution character string Y is reproduced, so the terminal device 20 can reproduce the distribution character string Y regardless of whether it can communicate with the distribution server.
  • the distribution information Q is transmitted to the terminal device 20 by acoustic communication using sound waves as a transmission medium.
  • however, the communication method for transmitting the distribution information Q to the terminal device 20 is not limited to the above examples.
  • the distribution information Q can be transmitted from the information providing system 10 to the terminal device 20 by wireless communication using electromagnetic waves such as radio waves or infrared rays as a transmission medium.
  • near-field communication that does not go through a communication network is suitable for transmission of the distribution information Q.
  • examples of such near-field communication include acoustic communication using sound waves as a transmission medium and wireless communication using electromagnetic waves as a transmission medium.
  • the distribution information Q may be transmitted from the information providing system 10 to the terminal device 20 via a communication network such as a mobile communication network or the Internet.
  • the identification information DY of the distribution character string Y is transmitted to the terminal device 20 as the distribution information Q.
  • the contents of the distribution information Q are not limited to the above examples.
  • the distribution information Q indicating the distribution character string Y itself may be transmitted from the information providing system 10 to the terminal device 20.
  • the scene where the information providing system 10 is used is not limited to a transportation facility such as a train or a bus.
  • the information providing system 10 exemplified in each of the above embodiments can be used for guidance in various facilities, including transport facilities such as seaports or airports, commercial facilities such as shopping malls, exhibition facilities such as museums or art galleries, sports facilities such as stadiums or gymnasiums, accommodation facilities such as hotels or inns, and tourist facilities such as temples.
  • the information processing apparatus 12 is realized by the cooperation of the control device 32 and the program as illustrated in the above-described embodiments.
  • the program according to each of the above embodiments causes a computer (for example, the control device 32) to function as a first information acquisition unit 42 that acquires a guide character string X in accordance with an instruction from the user HB and as a second information acquisition unit 44 that acquires a distribution character string Y partially different from the guide character string X.
  • the programs exemplified above can be provided in a form stored in a computer-readable recording medium and installed in the computer.
  • the recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a good example, but any known type of recording medium, such as a semiconductor recording medium or a magnetic recording medium, may be included.
  • the non-transitory recording medium includes any recording medium other than a transitory propagating signal, and does not exclude volatile recording media. It is also possible to provide the program to a computer in the form of distribution via a communication network.
  • the present invention can also be specified as an operation method (information providing method) of the information processing apparatus 12 according to each of the above-described embodiments.
  • in the information providing method according to a preferred aspect, a computer system including the display device 36 (the information providing system 10 composed of a single computer or a plurality of computers) acquires the guide character string X in accordance with an instruction from the user HB (SA4), acquires a distribution character string Y partially different from the guide character string X (SA5), causes a transmission device (for example, the sound emitting device 14) to transmit distribution information Q indicating the distribution character string Y (SA10), and displays the distribution character string Y on the display device 36 (SA6).
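The four steps SA4-SA6 and SA10 can be outlined as a single function with stub callbacks standing in for the sound emitting device and the display device. All names here are illustrative, and the simplification step is the bracket-deletion assumption used earlier.

```python
# Sketch of the claimed method flow: SA4 acquire X, SA5 derive Y, SA10 transmit
# distribution info, SA6 display Y. Callbacks stand in for real devices.

def provide_information(get_instruction, derive_distribution, transmit, display):
    """Run SA4 -> SA5 -> SA10 -> SA6 and return both character strings."""
    guide_x = get_instruction()              # SA4: first character string X
    dist_y = derive_distribution(guide_x)    # SA5: partially different string Y
    transmit(dist_y)                         # SA10: send distribution info Q
    display(dist_y)                          # SA6: show Y on the display device
    return guide_x, dist_y

sent, shown = [], []
x, y = provide_information(
    lambda: "We have found a lost child [named Yuki Suzuki].",
    lambda s: s.replace(" [named Yuki Suzuki]", ""),  # drop the personal name
    sent.append,
    shown.append,
)
print(y)  # → We have found a lost child.
```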
  • the information processing apparatus according to a preferred aspect includes: a first information acquisition unit that acquires a first character string in accordance with an instruction from a user; a second information acquisition unit that acquires a second character string partially different from the first character string; a distribution control unit that causes a transmission device to transmit distribution information indicating the second character string; a display device; and a display control unit that causes the display device to display the second character string.
  • the distribution control unit causes distribution information to be transmitted by acoustic communication using a sound emitting device that emits sound corresponding to the first character string as a transmitting device.
  • the display control unit may display the first character string and the second character string side by side on the display device. Therefore, the user can easily compare the first character string and the second character string. More preferably, for each of a plurality of languages selected by the user, the display control unit may cause the display device to display the first character string and the second character string expressed in the language. According to this aspect, there is an advantage that the user can confirm the first character string and the second character string in a plurality of languages.
  • the first information acquisition unit may acquire a first character string corresponding to a result of speech recognition for the sound collected by the sound collection device.
  • the first information acquisition unit may acquire a first character string instructed by the user on an operation screen.
  • the first information acquisition unit may acquire a first character string including a character string selected by the user from a plurality of candidates for each of the plurality of items in response to an instruction on the operation screen.
  • the second information acquisition unit specifies a second character string similar to the first character string.
  • the second information acquisition unit may specify a second character string that simplifies the contents of the first character string, or may specify a second character string in which detailed information is added to the first character string.
  • an information providing method according to a preferred aspect is an information providing method in a computer system including a display device, in which a first character string is acquired in accordance with an instruction from a user, a second character string partially different from the first character string is acquired, distribution information indicating the second character string is transmitted to a transmission device, and the second character string is displayed on the display device.
  • a program according to a preferred aspect causes a computer system including a display device to execute: a process of acquiring a first character string in accordance with an instruction from a user; a process of acquiring a second character string partially different from the first character string; a process of transmitting distribution information indicating the second character string to a transmission device; and a process of displaying the second character string on the display device.
  • the various suitable aspects described for the first aspect can likewise be applied to the method and program aspects.
  • DESCRIPTION OF SYMBOLS: 10 … information providing system, 12 … information processing device, 14 … sound emitting device, 16, 62 … sound collection device, 20 … terminal device, 32, 64 … control device, 34, 66 … storage device, 36 … display device, 38 … operation device, 42 … first information acquisition unit, 44 … second information acquisition unit, 46 … display control unit, 52 … speech synthesis unit, 54 … distribution control unit, 56 … machine translation unit, 642 … information extraction unit, 644 … playback control unit, 68 … playback device

Abstract

This information processing device is provided with: a first information acquisition unit which acquires a guidance character string in accordance with an instruction from a user; a second information acquisition unit which acquires a distributed character string that is partially different from the guidance character string; a distribution control unit which causes a transmission device to transmit distribution information indicating the distributed character string; a display device; and a display control unit which causes the display device to display the distributed character string.

Description

Information processing apparatus, information providing method, and program
The present invention relates to a technique for providing information to a user.
Various technologies for providing information such as images or sounds to terminal devices have been proposed. For example, Patent Literature 1 discloses a technique for distributing, to a terminal device registered in advance as a distribution target, content corresponding to the position of that terminal device.
JP 2002-351905 A
In transportation such as trains or buses, guidance voices that provide users with information on boarding, alighting, transfers, and so on are reproduced as needed. If information such as a character string of the pronounced content of the guidance voice or its translation could be reproduced on each user's terminal device together with the emission of the guidance voice, it would be convenient, for example, for hearing-impaired persons who have difficulty hearing the guidance voice and for foreigners who have difficulty understanding its language. However, various problems arise in real situations where such information is provided to users' terminal devices. Specifically, it may not be appropriate to provide a user's terminal device with information whose content is exactly the same as the guidance voice. For example, providing each user's terminal device with information having the same content as a guidance voice that includes personal information, such as the name of a lost child, can be problematic from the viewpoint of protecting personal information. In view of the above circumstances, an object of the present invention is to solve the problems that arise in situations where information is provided to users.
To solve the above problems, an information processing apparatus according to a preferred aspect of the present invention includes: a first information acquisition unit that acquires a first character string in accordance with an instruction from a user; a second information acquisition unit that acquires a second character string partially different from the first character string; a distribution control unit that causes a transmission device to transmit distribution information indicating the second character string; a display device; and a display control unit that causes the display device to display the second character string.
An information providing method according to another preferred aspect of the present invention is an information providing method in a computer system including a display device, and the information providing method acquires a first character string according to an instruction from a user. Then, a second character string that is partially different from the first character string is acquired, distribution information indicating the second character string is transmitted to the transmission device, and the second character string is displayed on the display device.
A program according to another preferred aspect of the present invention causes a computer system including a display device to execute: a process of acquiring a first character string in accordance with an instruction from a user; a process of acquiring a second character string partially different from the first character string; a process of transmitting distribution information indicating the second character string to a transmission device; and a process of displaying the second character string on the display device.
FIG. 1 is a block diagram of an information providing system according to a first embodiment of the present invention.
FIG. 2 is a block diagram of an information processing apparatus.
FIG. 3 is a schematic diagram of a guidance table.
FIG. 4 is a schematic diagram of an operation screen.
FIG. 5 is a schematic diagram of an operation screen.
FIG. 6 is a schematic diagram of an operation screen.
FIG. 7 is a flowchart of the operation of the information processing apparatus.
FIG. 8 is a block diagram of a terminal device.
FIG. 9 is a block diagram of an information processing apparatus in a second embodiment.
FIG. 10 is a flowchart of the operation of a second information acquisition unit in the second embodiment.
FIG. 11 is a block diagram of an information processing apparatus in a modification.
FIG. 12 is a schematic diagram of an operation screen in a modification.
FIG. 13 is an explanatory diagram illustrating the relationship between registered character strings and insertion phrases.
<First Embodiment>
FIG. 1 is a configuration diagram of an information providing system 10 according to the first embodiment of the present invention. The information providing system 10 of the first embodiment is a computer system for providing information to a user HA of a transportation facility such as a train or a bus, and includes an information processing device 12 and a sound emitting device 14. The sound emitting device 14 is an acoustic system that is installed in a transportation facility (for example, within a station or a train) and emits a voice V for guidance related to the facility (hereinafter, "guidance voice V"). Examples of such voices include voices announcing the arrival or departure of a train or bus, voices guiding users to stops or to transfers to other routes, voices announcing precautions when boarding or alighting, voices notifying of a lost child, and voices announcing the occurrence of an emergency. These various voices are emitted from the sound emitting device 14 as the guidance voice V.
The user HA of the transportation service carries a terminal device 20. The terminal device 20 is a portable information terminal such as a mobile phone or a smartphone. In practice, many users HA may use the service of the information providing system 10, but the following description focuses on a single terminal device 20 for convenience. The information processing device 12 of the information providing system 10 in FIG. 1 is an information terminal possessed and used by, for example, an employee who manages or operates the transportation service (hereinafter referred to as the "user HB" of the information processing device). Specifically, various information terminals such as a smartphone or a tablet terminal can be used as the information processing device 12. The information processing device 12 is connected to the sound emitting device 14 by wire or wirelessly. Note that the information processing device 12 and the sound emitting device 14 may also be configured as a single unit.
FIG. 2 is a configuration diagram of the information providing system 10. As illustrated in FIG. 2, the information processing device 12 of the first embodiment includes a control device 32, a storage device 34, a display device 36, and an operation device 38. The control device 32 is a processing circuit that controls the overall operation of the information processing device 12, and includes, for example, a CPU (Central Processing Unit). The display device 36 is, for example, a liquid crystal display panel, and displays various images under the control of the control device 32. The operation device 38 is an input device that receives instructions from the user HB of the information processing device. For example, a plurality of controls operated by the user HB, or a touch panel that detects contact with the display surface of the display device 36, is suitably used as the operation device 38. The storage device 34 stores a program executed by the control device 32 and various data used by the control device 32. For example, a known recording medium such as a magnetic recording medium or a semiconductor recording medium, or a combination of plural types of recording media, may be employed as the storage device 34.
The storage device 34 of the first embodiment stores the guidance table TA of FIG. 3. As illustrated in FIG. 3, a plurality of registered character strings R (R1, R2, ...) corresponding to different contents of the guidance voice V are registered in the guidance table TA. Among the plurality of registered character strings R in the guidance table TA, some registered character strings R include one or more insertion sections. In FIG. 3, the insertion sections are represented by square brackets [ ]. Various phrases (hereinafter referred to as "insertion phrases") are inserted into the insertion sections. By inserting variable insertion phrases into the insertion sections of a registered character string R, a character string X representing the pronunciation content of the guidance voice V (hereinafter referred to as a "guide character string") is generated. For example, FIG. 3 shows a registered character string R1, "We have found a lost child [Age] [Outfit], [Name]. If you are looking for this child, please come to the customer service desk." (that is, a character string for notifying of a lost child on the premises of the transportation facility). By inserting appropriate insertion phrases into each insertion section of the registered character string R1, the guide character string X "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." is generated. As understood from the above description, a registered character string R is a fixed character string (typically a sentence) that is common to a plurality of guide character strings X with different insertion phrases. An insertion phrase is a character string that is selected for each guidance voice V and inserted into an insertion section of the registered character string R. In addition to registered character strings R including one or more insertion sections as exemplified above, registered character strings R that do not include an insertion section may also be registered in the guidance table TA.
As illustrated in FIG. 3, a plurality of distribution character strings Y (Y1, Y2, ...) corresponding to the different registered character strings R are registered in the guidance table TA together with identification information DY (DY1, DY2, ...) of each distribution character string Y. Any one distribution character string Y is a character string that is similar in content (meaning) to the guide character string X but partially different from it. In the first embodiment, a character string obtained by deleting the insertion phrases from the guide character string X (where the resulting string may include necessary grammatical adjustments) is registered in the guidance table TA as the distribution character string Y. Specifically, the distribution character string Y is a character string obtained by deleting the highly confidential personal parts (in particular, names and the like) from the guide character string X. For example, for the aforementioned guide character string X "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk.", the distribution character string Y1 "We have found a lost child. If you are looking for a child, please come to the customer service desk.", obtained by deleting the highly confidential personal insertion phrases, is stored in the storage device 34.
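The relationship between registered character strings R, insertion phrases, guide character strings X, and distribution character strings Y described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the table contents, identifiers, and helper names are hypothetical, and brackets are simply dropped on insertion.

```python
# Minimal sketch of the guidance table TA described above.
# All names and table contents are illustrative assumptions.
GUIDANCE_TABLE = {
    "R1": {
        "template": ("We have found a lost child [Age] [Outfit], [Name]. "
                     "If you are looking for this child, please come to "
                     "the customer service desk."),
        # Distribution string Y: the template with personal insertion
        # phrases removed (with small grammatical adjustments).
        "distribution": ("We have found a lost child. If you are looking "
                         "for a child, please come to the customer "
                         "service desk."),
        "distribution_id": "DY1",
    },
}

def build_guide_string(registered_id, phrases):
    """Insert user-specified phrases into each insertion section of the
    registered character string R to obtain the guide string X."""
    text = GUIDANCE_TABLE[registered_id]["template"]
    for section, phrase in phrases.items():
        text = text.replace(f"[{section}]", phrase)
    return text

def get_distribution_string(registered_id):
    """Fetch the pre-registered distribution string Y and its
    identification information DY for the same registered string R."""
    entry = GUIDANCE_TABLE[registered_id]
    return entry["distribution_id"], entry["distribution"]

x = build_guide_string("R1", {"Age": "of 4 years old",
                              "Outfit": "in red clothes",
                              "Name": "named Yuuki Suzuki"})
dy, y = get_distribution_string("R1")
```

Note how the confidential phrases appear only in X, while Y and its identifier DY are fixed per registered string, which is what allows the terminal to hold Y locally and receive only DY.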
The control device 32 in FIG. 2 executes a program stored in the storage device 34 to realize a plurality of functions (a first information acquisition unit 42, a second information acquisition unit 44, a display control unit 46, a speech synthesis unit 52, and a distribution control unit 54). A configuration in which some functions of the control device 32 are realized by dedicated electronic circuits, or a configuration in which the functions of the control device 32 are distributed among a plurality of devices, may also be employed.
The first information acquisition unit 42 acquires a guide character string X (an example of a first character string) according to an instruction from the user HB. By operating the operation device 38 as appropriate, the user HB can designate any one desired registered character string R among the plurality of registered character strings R registered in the guidance table TA, together with the insertion phrases to be inserted into the insertion sections of that registered character string R. The first information acquisition unit 42 generates the guide character string X representing the pronunciation content of the guidance voice V by inserting the insertion phrases designated by the user HB into the respective insertion sections of the registered character string R designated by the user HB.
The second information acquisition unit 44 acquires a distribution character string Y (an example of a second character string) that is partially different from the guide character string X acquired by the first information acquisition unit 42. Specifically, the second information acquisition unit 44 acquires, from among the plurality of distribution character strings Y registered in the guidance table TA, the distribution character string Y corresponding to the registered character string R that constitutes the guide character string X designated by the user HB. That is, a character string obtained by deleting the insertion phrases from the guide character string X is acquired as the distribution character string Y. The second information acquisition unit 44 can also be described as an element that corrects or converts the guide character string X into the distribution character string Y. Instead of acquiring the distribution character string Y from the guidance table TA, a configuration may also be employed in which the second information acquisition unit 44 generates the distribution character string Y by editing the guide character string X acquired by the first information acquisition unit 42 (for example, by deleting the insertion phrases).
The display control unit 46 causes the display device 36 to display images. The display control unit 46 of the first embodiment causes the display device 36 to display, for example, the operation screen GA of FIG. 4 and the operation screen GB of FIG. 5. The operation screen GA and the operation screen GB are images for receiving instructions from the user HB. As illustrated in FIG. 4, a plurality of options 362 relating to the content of the guidance voice V are displayed on the operation screen GA. Specifically, an option 362 is displayed on the operation screen GA for each of the plurality of registered character strings R registered in the guidance table TA. The user HB can select any option 362 by operating the operation device 38 as appropriate. The first information acquisition unit 42 acquires the guide character string X including the registered character string R corresponding to the option 362 selected by the user HB on the operation screen GA.
The operation screen GB is an image with which the user HB specifies or confirms the contents of the guide character string X and the distribution character string Y, and is displayed on the display device 36 after an option 362 is selected on the operation screen GA. As illustrated in FIG. 5, the operation screen GB includes an instruction area A1, a confirmation area A2, and a playback indicator A3. The playback indicator A3 is an image (command button) with which the user HB instructs playback of the guidance voice V.
The instruction area A1 is an area in which the user HB specifies the insertion phrase to be inserted into each insertion section of the registered character string R. Specifically, in the instruction area A1, an input field 364 is arranged for each insertion section of the registered character string R corresponding to the option 362 selected by the user HB on the operation screen GA. Since the number and content (item names) of the insertion sections differ from one registered character string R to another, the number and content (item names) of the input fields 364 arranged in the instruction area A1 of the operation screen GB may differ depending on the registered character string R selected on the operation screen GA.
By operating the operation device 38 as appropriate, the user HB can input, for each input field 364, the insertion phrase to be inserted into the corresponding insertion section of the registered character string R. Specifically, the user HB can input the insertion phrase of each input field 364 either by selecting from a plurality of options 366 prepared in advance or by direct character input. For example, assuming the registered character string R1 "We have found a lost child [Age] [Outfit], [Name]. If you are looking for this child, please come to the customer service desk.", the insertion phrases for the "Outfit" and "Age" insertion sections are selected from a plurality of options 366, while the insertion phrase for the "Name" insertion section is designated arbitrarily by character input. The first information acquisition unit 42 in FIG. 2 generates the guide character string X representing the pronunciation content of the guidance voice V by inserting the insertion phrases entered in the input fields 364 on the operation screen GB into the respective insertion sections of the registered character string R designated on the operation screen GA. That is, the first information acquisition unit 42 of the first embodiment acquires, in response to instructions (input or selection) on the operation screen GB, a guide character string X that includes, for each of a plurality of items (insertion sections), the insertion phrase the user HB selected from a plurality of candidates. As understood from the above, "input" into an input field 364 includes selection from a plurality of options 366 as well as character input.
Among the plurality of registered character strings R, some registered character strings R do not include an insertion section. When a registered character string R that does not include an insertion section is selected on the operation screen GA, as illustrated in FIG. 6, a message is displayed in the instruction area A1 notifying the user HB that this is a fixed registered character string R containing no insertion phrase (that is, no variable element). In this case, no input field 364 is displayed in the instruction area A1. Alternatively, when a fixed registered character string R is selected, the instruction area A1 may simply be left blank (without displaying a message to that effect).
The guide character string X and the distribution character string Y are displayed in the confirmation area A2 of FIG. 5. That is, the display control unit 46 of the first embodiment causes the display device 36 to display the guide character string X acquired by the first information acquisition unit 42 and the distribution character string Y acquired by the second information acquisition unit 44. Specifically, as illustrated in FIG. 5, the guide character string X "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." and the distribution character string Y "We have found a lost child. If you are looking for a child, please come to the customer service desk.", from which the personal parts of the guide character string X have been deleted, are displayed side by side on the display device 36. The user HB can therefore confirm the guide character string X that the user HB specified and the distribution character string Y obtained by partially changing the guide character string X while comparing them with each other.
The speech synthesis unit 52 in FIG. 2 generates an acoustic signal SV by speech synthesis applied to the guide character string X acquired by the first information acquisition unit 42. The acoustic signal SV is a time-domain signal representing the waveform of the voice pronouncing the guide character string X (that is, the guidance voice V). Any known speech synthesis technique may be employed to generate the acoustic signal SV. It is also possible to have a speech synthesis server with which the information processing device 12 can communicate generate the acoustic signal SV (speech synthesis). Specifically, the first information acquisition unit 42 transmits the guide character string X to the speech synthesis server, receives from the speech synthesis server the acoustic signal SV generated by the server's speech synthesis, and supplies it to the sound emitting device 14. In this case, the speech synthesis unit 52 may be omitted.
The distribution control unit 54 generates distribution information Q indicating the distribution character string Y acquired by the second information acquisition unit 44. The distribution information Q of the first embodiment is the identification information DY of the distribution character string Y. Specifically, the distribution control unit 54 acquires, as the distribution information Q, the identification information DY of the distribution character string Y acquired by the second information acquisition unit 44 from the guidance table TA. The distribution control unit 54 of the first embodiment also generates an acoustic signal SQ that contains the distribution information Q as an acoustic component. Any known technique may be employed to generate the acoustic signal SQ. For example, a configuration is suitable in which the acoustic signal SQ is generated by frequency-modulating a carrier wave, such as a sine wave of a predetermined frequency, with the distribution information Q. Alternatively, a configuration is suitable in which the acoustic signal SQ is generated by sequentially performing spread modulation of the distribution information Q using a spreading code and frequency conversion using a carrier wave of a predetermined frequency. The frequency band of the acoustic signal SQ is a band in which the sound emitting device 14 can emit sound and the terminal device 20 can collect sound, and it lies above the frequency band of the sounds, such as voice (for example, the guidance voice V) or music, that the user HA of the terminal device 20 hears in a normal environment (for example, 18 kHz or more and 20 kHz or less). However, the frequency band of the acoustic signal SQ is arbitrary; for example, an acoustic signal SQ within the audible band may be generated.
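As one concrete, hypothetical realization of the modulation described above, the bits of the distribution information Q could be keyed onto near-ultrasonic tones inside the 18-20 kHz band. The sketch below uses simple binary frequency-shift keying; the sample rate, symbol length, and tone frequencies are illustrative assumptions, not values from the specification.

```python
import numpy as np

RATE = 48_000                 # sample rate playable by ordinary speakers
F0, F1 = 18_500.0, 19_500.0   # assumed tone frequencies in the 18-20 kHz band
SYMBOL = 960                  # samples per bit (20 ms at 48 kHz)

def encode_distribution_info(bits):
    """Binary FSK sketch: each bit of the distribution information Q
    becomes one short near-ultrasonic tone; the concatenation is the
    acoustic signal SQ mixed with the guidance voice SV."""
    t = np.arange(SYMBOL) / RATE
    tones = {0: np.sin(2 * np.pi * F0 * t),
             1: np.sin(2 * np.pi * F1 * t)}
    return np.concatenate([tones[b] for b in bits])

# e.g. a few bits of an identifier DY (hypothetical payload)
sq = encode_distribution_info([1, 0, 1, 1])
```

In practice a real system would add synchronization preambles, windowing to avoid audible clicks, and error detection; this sketch only shows how data can ride above the voice band so that the guidance voice V and the acoustic component SQ coexist in one emitted signal.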
When the playback indicator A3 on the operation screen GB of FIG. 5 is operated by the user HB, the speech synthesis unit 52 generates the acoustic signal SV and the distribution control unit 54 generates the acoustic signal SQ. That is, the acoustic signal SV and the acoustic signal SQ are supplied to the sound emitting device 14 in response to the operation of the playback indicator A3. The sound emitting device 14 emits the sounds represented by the acoustic signal SV and the acoustic signal SQ. That is, the guidance voice V represented by the acoustic signal SV and the acoustic component represented by the acoustic signal SQ are emitted from the sound emitting device 14. As understood from the above description, the sound emitting device 14 of the first embodiment functions as a means (transmitting device) for transmitting the distribution information Q to the terminal device 20 by acoustic communication using sound waves, that is, air vibrations, as the transmission medium. In other words, the distribution control unit 54 of the first embodiment causes the distribution information Q to be transmitted from the sound emitting device 14 by acoustic communication that uses, as the transmitting device, the sound emitting device 14 that emits the guidance voice V. In this configuration, since the sound emitting device 14 that emits the guidance voice V is also used to transmit the distribution information Q, the configuration of the information providing system 10 can be simplified compared with a configuration in which the distribution information Q is transmitted to the terminal device 20 by a wireless communication device separate from the sound emitting device 14.
FIG. 7 is a flowchart illustrating the operation of the control device 32 of the information processing device 12. The processing of FIG. 7 is started, for example, in response to an instruction from the user HB on the operation device 38. When the processing of FIG. 7 starts, the control device 32 (display control unit 46) causes the display device 36 to display the operation screen GA of FIG. 4 and accepts the selection of a registered character string R by the user HB (SA1). When a registered character string R is selected, the control device 32 (display control unit 46) causes the display device 36 to display the initial operation screen GB (SA2).
The control device 32 determines whether an insertion-phrase instruction for any input field 364 on the operation screen GB has been received from the user HB (SA3). When an insertion-phrase instruction has been received (SA3: YES), the control device 32 (first information acquisition unit 42) generates a guide character string X including the registered character string R selected on the operation screen GA and the insertion phrases designated by the user HB (SA4). The control device 32 (second information acquisition unit 44) also acquires a distribution character string Y that is partially different from the guide character string X (SA5). The control device 32 (display control unit 46) causes the display device 36 to display the guide character string X and the distribution character string Y (SA6). When no insertion phrase is designated (SA3: NO), the generation and display of the guide character string X and the distribution character string Y (SA4-SA6) are omitted.
The control device 32 determines whether an operation of the playback indicator A3 has been received from the user HB (SA7). When the playback indicator A3 is not operated (SA7: NO), the control device 32 returns to step SA3 and repeats the updating of the guide character string X and the distribution character string Y according to instructions from the user HB (SA3-SA6). On the other hand, when the playback indicator A3 is operated (SA7: YES), the control device 32 (speech synthesis unit 52) generates the acoustic signal SV from the guide character string X acquired by the first information acquisition unit 42 (SA8). The control device 32 (distribution control unit 54) also generates the acoustic signal SQ of the distribution information Q indicating the distribution character string Y acquired by the second information acquisition unit 44 (SA9). The control device 32 supplies the acoustic signal SV and the acoustic signal SQ to the sound emitting device 14 (SA10). The guidance voice V pronouncing the guide character string X is thereby played back, and the distribution information Q is transmitted by acoustic communication. The order of generating the acoustic signal SV (SA8) and generating the acoustic signal SQ (SA9) may be reversed. The temporal relationship between the playback of the guidance voice V and the transmission of the distribution information Q is also arbitrary; for example, the distribution information Q may be transmitted repeatedly in parallel with the playback of the guidance voice V.
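The SA1-SA10 flow just described can be condensed into the following sketch. The callback names are hypothetical stand-ins for the operation screens, the speech synthesis unit 52, the distribution control unit 54, and the sound emitting device 14; the demonstration drives the loop with trivial stubs.

```python
def run_guidance(select_r, read_phrases, play_pressed,
                 build_x, lookup_y, synthesize, encode, emit):
    """Condensed sketch of steps SA1-SA10 (names are illustrative)."""
    registered_id = select_r()            # SA1: choose R on screen GA
    x = y = None                          # SA2: screen GB shown here
    while True:
        phrases = read_phrases()          # SA3: phrase input on GB?
        if phrases is not None:
            x = build_x(registered_id, phrases)   # SA4: guide string X
            y = lookup_y(registered_id)           # SA5: distribution Y
            # SA6: X and Y would be displayed side by side here
        if play_pressed():                # SA7: playback indicator A3
            sv = synthesize(x)            # SA8: acoustic signal SV
            sq = encode(y)                # SA9: acoustic signal SQ
            emit(sv, sq)                  # SA10: supply to sound emitter
            return x, y

# Demonstration with trivial stand-ins for the UI and signal units.
emitted = []
x, y = run_guidance(
    select_r=lambda: "R1",
    read_phrases=iter([{"Name": "named Yuuki Suzuki"}, None]).__next__,
    play_pressed=iter([False, True]).__next__,
    build_x=lambda r, p: f"X({r}, {sorted(p)})",
    lookup_y=lambda r: f"Y({r})",
    synthesize=lambda text: ("SV", text),
    encode=lambda text: ("SQ", text),
    emit=lambda sv, sq: emitted.append((sv, sq)),
)
```

The point of the structure is that SA3-SA6 may repeat any number of times before playback, so the pair (X, Y) the user last confirmed is exactly the pair converted into (SV, SQ).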
FIG. 8 is a configuration diagram of the terminal device 20. As illustrated in FIG. 8, the terminal device 20 includes a sound collection device 62, a control device 64, a storage device 66, and a playback device 68. The sound collection device 62 is an acoustic device (microphone) that collects ambient sound. The sound collection device 62 of the first embodiment collects the sound waves emitted from the sound emitting device 14 of the information providing system 10 and generates an acoustic signal S. The acoustic signal S contains the acoustic component of the distribution information Q (the acoustic signal SQ). That is, the sound collection device 62 functions as a receiving device that receives the distribution information Q by acoustic communication using sound waves, that is, air vibrations, as the transmission medium. An A/D converter that converts the acoustic signal S generated by the sound collection device 62 from analog to digital is omitted from the figure for convenience.
The control device 64 is a processing circuit that controls the overall operation of the terminal device 20, and includes, for example, a CPU. The storage device 66 stores a program executed by the control device 64 and various data used by the control device 64. The storage device 66 of the first embodiment stores a guidance table TB. As illustrated in FIG. 8, a plurality of distribution character strings Y (Y1, Y2, ...) that can be designated by the distribution information Q are registered in the guidance table TB together with the identification information DY (DY1, DY2, ...) of each distribution character string Y. That is, the storage device 66 stores the plurality of distribution character strings Y that the second information acquisition unit 44 of the information processing device 12 may acquire (the distribution character strings Y (Y1, Y2, ...) registered in the guidance table TA).
The control device 64 in FIG. 8 executes a program stored in the storage device 66 to realize a plurality of functions (an information extraction unit 642 and a playback control unit 644) for presenting the distribution character string Y indicated by the distribution information Q to the user HA. A configuration in which some functions of the control device 64 are realized by dedicated electronic circuits, or a configuration in which the functions of the control device 64 are distributed among a plurality of devices, may also be employed.
The information extraction unit 642 extracts the distribution information Q from the acoustic signal S supplied from the sound collection device 62. Specifically, the information extraction unit 642 extracts the distribution information Q by applying to the acoustic signal S a filter process that emphasizes the band components of the frequency band containing the acoustic component of the distribution information Q, and a demodulation process corresponding to the modulation process performed by the distribution control unit 54. The reproduction control unit 644 causes the playback device 68 to reproduce the distribution character string Y corresponding to the identification information DY indicated by the distribution information Q extracted by the information extraction unit 642. The playback device 68 reproduces the distribution character string Y designated by the reproduction control unit 644 and presents it to the user HA. In the first embodiment, the playback device 68 is a display device (for example, a liquid crystal display panel) that displays the distribution character string Y.
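As a concrete illustration of the filter-plus-demodulation step, the following is a minimal sketch of one possible demodulator. The text does not specify the modulation scheme, carrier frequency, sampling rate, or symbol length, so all of those choices below (on-off keying, an 18 kHz carrier at 48 kHz sampling, 480-sample symbols) are hypothetical; the Goertzel algorithm here plays the role of a narrow single-bin filter emphasizing the carrier band.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    # Power of the signal at a single frequency bin (Goertzel algorithm),
    # acting as a narrow band filter around the assumed carrier.
    n = len(samples)
    k = round(freq * n / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def demodulate_ook(signal, sample_rate=48000, carrier=18000,
                   symbol_len=480, threshold=1000.0):
    # Slice the signal into fixed-length symbol windows and decide, per
    # window, whether the carrier is present (bit 1) or absent (bit 0).
    bits = []
    for i in range(0, len(signal) - symbol_len + 1, symbol_len):
        power = goertzel_power(signal[i:i + symbol_len], sample_rate, carrier)
        bits.append(1 if power > threshold else 0)
    return bits
```

Turning the recovered bit stream into the identification information DY (framing, error correction) would sit on top of this and depends on the unspecified encoding.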
As understood from the above description, the guidance voice V of the guidance character string X corresponding to the instruction from the user HB is emitted from the sound emitting device 14, while a distribution character string Y that partially differs from the guidance character string X is reproduced by the playback device 68 of the terminal device 20. Specifically, when the guidance voice V "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk." is reproduced from the sound emitting device 14, the distribution character string Y "We have found a lost child. If you are looking for a child, please come to the customer service desk.", in which the highly confidential personal parts of the guidance character string X have been deleted, is displayed on the terminal device 20.
As described above, reproducing on the terminal device 20 a distribution character string Y that partially differs from the guidance character string X makes it possible to protect personal information. On the other hand, the user HB may be concerned about whether an appropriate distribution character string Y corresponding to the instructed guidance character string X is actually being reproduced on the terminal device 20. In the first embodiment, the distribution character string Y reproduced on the terminal device 20 is displayed on the display device 36 of the information processing device 12, so there is an advantage that the user HB can confirm the distribution character string Y actually reproduced on the terminal device 20. In particular, in the first embodiment, the guidance character string X and the distribution character string Y are displayed side by side, so there is an advantage that the user HB can confirm the relationship between the two (for example, differences or matches) while comparing the content the user instructed (the guidance character string X) with the content the terminal device 20 reproduces (the distribution character string Y).
<Second Embodiment>
A second embodiment of the present invention will now be described. In each of the following examples, elements whose operations or functions are the same as in the first embodiment are denoted by the reference signs used in the description of the first embodiment, and detailed descriptions of them are omitted as appropriate.
FIG. 9 is a configuration diagram of the information providing system 10 in the second embodiment. As illustrated in FIG. 9, the information processing device 12 of the second embodiment includes a sound collection device 16 in addition to the same elements as in the first embodiment. The sound collection device 16 is an acoustic device (microphone) that picks up ambient sound. The user HB utters the guidance voice V toward the sound collection device 16. Specifically, the user HB utters a guidance voice V whose content includes any one of the plurality of registered character strings R registered in advance in the storage device 34. The sound collection device 16 picks up the guidance voice V uttered by the user HB and generates an acoustic signal SV representing that guidance voice V. As illustrated in FIG. 9, the speech synthesis unit 52 of the first embodiment is omitted in the second embodiment, and the acoustic signal SV generated by the sound collection device 16 is supplied to the sound emitting device 14. For convenience, an A/D converter that converts the acoustic signal SV generated by the sound collection device 16 from analog to digital is not shown.
The first information acquisition unit 42 of FIG. 9 identifies the guidance character string X by speech recognition of the acoustic signal SV. The guidance character string X is a character string representing the content of the guidance voice V uttered by the user HB. For speech recognition of the acoustic signal SV, any known technique can be adopted, for example recognition processing that uses an acoustic model such as an HMM (Hidden Markov Model) and a language model expressing linguistic constraints. It is also possible to have a speech recognition server with which the information processing device 12 can communicate perform the speech recognition of the acoustic signal SV. For example, the first information acquisition unit 42 transmits the acoustic signal SV to the speech recognition server and acquires from the server the guidance character string X identified by the server's speech recognition. That is, the first information acquisition unit 42 may itself generate the guidance character string X, or may acquire a guidance character string X generated by another device.
The second information acquisition unit 44 of the second embodiment acquires a distribution character string Y that partially differs from the guidance character string X acquired by the first information acquisition unit 42. Specifically, the second information acquisition unit 44 searches the plurality of registered character strings R registered in the guide table TA for a registered character string R similar to the guidance character string X, and acquires the distribution character string Y corresponding to that registered character string R from the guide table TA.
FIG. 10 is a flowchart of the process by which the second information acquisition unit 44 of the second embodiment acquires the distribution character string Y. As illustrated in FIG. 10, the second information acquisition unit 44 calculates, for each of the plurality of registered character strings R, an index of similarity to the guidance character string X (hereinafter, "similarity index") (SB1). Any type of similarity index may be used; for example, a known index such as the edit distance (Levenshtein distance) for evaluating the similarity between two character strings is suitable. When the calculation of the similarity indices is complete, the second information acquisition unit 44 searches the guide table TA for the one registered character string R whose similarity index indicates the greatest similarity among the plurality of registered character strings R (SB2). That is, the registered character string R most similar to the guidance character string X is retrieved from the guide table TA. The second information acquisition unit 44 then acquires from the guide table TA the distribution character string Y corresponding to the retrieved registered character string R (SB3).
As described above, in the second embodiment, the distribution character string Y corresponding to a registered character string R similar to the guidance character string X is identified. Therefore, there is an advantage that an appropriate distribution character string Y can be selected even when the guidance character string X differs from the actually uttered content, for example due to misrecognition in the speech recognition process. The operation by which the display control unit 46 displays the guidance character string X and the distribution character string Y on the display device 36 is the same as in the first embodiment.
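Steps SB1 to SB3 can be sketched as follows, using the edit distance named above as the similarity index. The table entries and function names are hypothetical stand-ins for the guide table TA:

```python
def levenshtein(a, b):
    # Edit distance between two strings (smaller = more similar),
    # computed row by row with dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical guide table TA: registered string R -> distribution string Y.
GUIDE_TABLE_TA = {
    "We have found a lost child.":
        "We have found a lost child. Please come to the customer service desk.",
    "Train service has been suspended.":
        "Train service has been suspended.",
}

def acquire_distribution_string(guide_x):
    # SB1: score every R against X; SB2: pick the most similar R;
    # SB3: return the distribution string Y registered for that R.
    best_r = min(GUIDE_TABLE_TA, key=lambda r: levenshtein(guide_x, r))
    return GUIDE_TABLE_TA[best_r]
```

Because the lookup keys on the nearest registered string rather than an exact match, a slightly misrecognized X still resolves to the intended Y.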
The first information acquisition unit 42 of FIG. 9 supplies the acoustic signal SV generated by the sound collection device 16 to the sound emitting device 14 when the playback indicator A3 on the operation screen GB is operated. Likewise triggered by an operation of the playback indicator A3, the distribution control unit 54 generates the acoustic signal SQ of the distribution information Q indicating the identification information DY of the distribution character string Y, and supplies it to the sound emitting device 14. Therefore, as in the first embodiment, the guidance voice V of the guidance character string X is reproduced from the sound emitting device 14, and the distribution information Q is transmitted by acoustic communication using the sound emitting device 14 as a transmitter. The second embodiment realizes the same effects as the first embodiment. In addition, the second embodiment has the advantage that the user HB can designate the guidance character string X by speaking (voice input) into the sound collection device 16.
In the above description, the result of speech recognition of the acoustic signal SV is used as the guidance character string X, but the registered character strings R registered in the guide table TA can also be used to obtain the guidance character string X. For example, the first information acquisition unit 42 searches the guide table TA for the registered character string R most similar to the character string X0 identified by speech recognition of the acoustic signal SV (the guidance character string X in the foregoing form). The first information acquisition unit 42 then generates the guidance character string X by inserting the insertion phrases contained in the character string X0 into the corresponding insertion sections of that registered character string R. It is also possible to register in advance, in the guide table TA, a plurality of insertion-phrase candidates for each insertion section of the registered character string R, and to select as the insertion phrase the candidate most similar to the portion of the character string X0 corresponding to that insertion section, by comparing that portion with each of the candidates.
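The per-insertion-section candidate comparison can be sketched like this. `difflib`'s similarity ratio is used here merely as a stand-in for whatever similarity index is actually employed, and the candidate phrases are hypothetical:

```python
import difflib

def select_insertion_phrase(recognized_segment, candidates):
    # Compare the recognized portion of X0 against each registered candidate
    # for this insertion section and pick the most similar one.
    return max(candidates,
               key=lambda c: difflib.SequenceMatcher(
                   None, recognized_segment, c).ratio())
```

For example, a misrecognized segment "of 4 years olt" would still resolve to the registered candidate "of 4 years old" rather than to an unrelated phrase.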
<Modification>
Each of the aspects exemplified above can be modified in various ways. Specific modifications are exemplified below. Two or more aspects arbitrarily selected from the following examples can be combined as appropriate to the extent that they do not contradict one another.
(1) It is also possible to emit from the sound emitting device 14 a guidance voice V that expresses the same content as the guidance character string X in another language. For example, as illustrated in FIG. 11, a machine translation unit 56 is added to an information processing device 12 configured in the same way as in the first embodiment. The machine translation unit 56 converts a guidance character string X in a first language (for example, English) into a guidance character string X in a second language (for example, Japanese). Any known technique can be adopted for machine translation of the guidance character string X. The speech synthesis unit 52 generates, by speech synthesis, an acoustic signal SV representing the guidance voice V corresponding to the guidance character string X in the first language and the guidance voice V corresponding to the translated guidance character string X in the second language. By supplying the acoustic signal SV to the sound emitting device 14, the guidance voice V is emitted in both the first and second languages, for example sequentially.
It is also possible to selectively emit guidance voices V in a plurality of languages from the sound emitting device 14. For example, the display control unit 46 displays the operation screen GB of FIG. 12 on the display device 36. The operation screen GB illustrated in FIG. 12 includes a language selection area A4 in addition to the same elements as in FIG. 5. A plurality of language indicators 368 corresponding to different languages (Japanese, English, Chinese) are arranged in the language selection area A4. The language indicator 368 corresponding to any one language is an image with which the user HB sets that language to either an enabled state or a disabled state. The speech synthesis unit 52 generates the acoustic signal SV of the guidance voice V for each language set to the enabled state by its language indicator 368 and supplies it to the sound emitting device 14. No acoustic signal SV is generated for a language set to the disabled state. Therefore, of the plurality of languages, the guidance voice V in each enabled language is emitted from the sound emitting device 14 together with the acoustic component of the distribution information Q.
In the confirmation area A2 of the operation screen GB, the guidance character string X and the distribution character string Y are displayed for each language set to the enabled state by its language indicator 368. FIG. 12 illustrates a case where the guidance character string X and the distribution character string Y are displayed in the confirmation area A2 in both English and Japanese. With this configuration, the user HB can confirm in a plurality of languages the content of the guidance voices V emitted in different languages.
(2) Any known technique can be adopted for the synthesis of the guidance voice V by the speech synthesis unit 52. It is also possible, for example, to use different speech synthesis methods for the fixed registered character string R and the variable insertion phrases within the guidance character string X. Examples of speech synthesis methods include a recording-and-editing method, which uses recorded data of a voice uttering a specific character string in advance, and rule-based synthesis methods such as the unit concatenation method, which synthesizes speech by selectively combining speech units such as phonemic segments. For example, the speech synthesis unit 52 generates a voice signal by the recording-and-editing method for the registered character string R, whose content is known in advance, and generates a voice signal by a rule-based synthesis method for the insertion phrases, whose content is variable. The speech synthesis unit 52 then generates the acoustic signal SV of the guidance voice V by concatenating the plurality of voice signals on the time axis. With this configuration, there is an advantage that the processing load is reduced by using pre-recorded sound for the registered character string R, while speech with a wide variety of content can be synthesized for the insertion phrases by the rule-based synthesis method.
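The hybrid of the two methods can be outlined as below. The waveform values are placeholders only (a real implementation would return recorded PCM for fixed segments and rule-synthesized PCM for insertion phrases), so every name and sample value here is hypothetical:

```python
# Hypothetical store of pre-recorded clips for fixed registered segments.
RECORDED_CLIPS = {
    "We have found a lost child": [0.1, -0.2, 0.15],   # placeholder samples
    ". Please come to the customer service desk.": [0.05, 0.0, -0.1],
}

def rule_synthesize(text):
    # Stand-in for rule-based synthesis of a variable insertion phrase.
    return [0.0] * len(text)

def render_guidance_voice(segments):
    # segments: list of (text, is_fixed). Concatenate per-segment waveforms
    # on the time axis, choosing the synthesis method per segment.
    signal = []
    for text, is_fixed in segments:
        signal.extend(RECORDED_CLIPS[text] if is_fixed else rule_synthesize(text))
    return signal
```

The time-axis concatenation is the only part taken directly from the text; the per-segment dispatch shows where the recording-and-editing path and the rule-based path would diverge.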
It is also possible to generate voice signals by the recording-and-editing method, using recorded data, for the predefined registered character strings R, while generating voice signals by a rule-based synthesis method for registered character strings R and insertion phrases added afterwards. Although the voice quality may differ between a predefined registered character string R and a newly added one, by appropriately adjusting the synthesis parameters of the rule-based synthesis method applied to the new registered character string R (in particular, the variables contributing to voice quality), the synthesized speech produced by the rule-based method can be brought close to the voice quality of the existing recorded data.
(3) Each of the foregoing embodiments assumes a guidance voice V announcing a lost child on the premises of a transportation facility, but the content of the guidance voice V is arbitrary. For example, as can also be understood from the illustration of FIG. 4, a guidance voice V concerning the operating status of the transportation service can be emitted from the sound emitting device 14.
As described above, the number and content of the insertion sections vary from one registered character string R to another. Therefore, the number and content (selection items) of the instruction fields 364 arranged in the instruction area A1 of the operation screen GB, or the number and content of the options 366 selectable in each instruction field 364, may differ for each registered character string R. FIG. 13 illustrates, for the operating status of the transportation service, combinations of an option 362 selectable on the operation screen GA, the registered character string R corresponding to each option 362, the instruction fields 364 corresponding to the insertion sections of that registered character string R, and the options 366 selectable in each instruction field 364. Note that the registered character string R of the "Accident prevention announcement" in FIG. 13 is a fixed phrase containing no insertion phrase, so, as illustrated in FIG. 6, a message notifying the user that this is a fixed registered character string R containing no variable elements is displayed in the instruction area A1.
For example, as understood from the illustration of FIG. 13, the guidance character string X "Train service has been suspended [due to signal failure caused by a lightning strike]. [Service is expected to resume in approximately 10 minutes.] [Please wait until service resumes.]" and the distribution character string Y "Train service has been suspended.", which excludes the insertion sections, are displayed in the confirmation area A2 of the display device 36.
(4) Each of the foregoing embodiments exemplified a distribution character string Y obtained by deleting (simplifying) the insertion phrases of the guidance character string X, but the relationship between the guidance character string X and the distribution character string Y is not limited to these examples. For example, contrary to the foregoing examples, the second information acquisition unit 44 may acquire, as the distribution character string Y, a character string obtained by inserting insertion phrases into the registered character string R, while the first information acquisition unit 42 acquires, as the guidance character string X, a character string obtained by deleting the insertion phrases from the distribution character string Y (for example, the registered character string R itself). With this configuration, the distribution character string Y containing detailed information is provided to the terminal device 20, while a simplified guidance character string X omitting those pieces of information is reproduced as the guidance voice V. Therefore, there is an advantage that the distribution character string Y can be provided to the terminal device 20 while the utterance time of the guidance voice V is shortened.
It is also possible to generate, as the guidance character string X, a character string obtained by inserting a plurality of insertion phrases into the registered character string R, and to generate, as the distribution character string Y, a character string from which some of those insertion phrases have been deleted. For example, the character string "We have found a lost child [of 4 years old] [in red clothes], [named Yuuki Suzuki]. If you are looking for this child, please come to the customer service desk.", in which insertion phrases have been inserted into all the insertion sections of the aforementioned registered character string R1, may be used as the guidance character string X, and the character string "We have found a lost child [of 4 years old] [in red clothes]. If you are looking for this child, please come to the customer service desk.", in which the [Name] insertion phrase has been deleted, may be used as the distribution character string Y. Conversely, it is also possible to generate, as the distribution character string Y, a character string obtained by inserting a plurality of insertion phrases into the registered character string R, and to generate, as the guidance character string X, a character string from which some of those insertion phrases have been deleted.
As understood from the above examples, the distribution character string Y (an example of a second character string) is comprehensively expressed as a character string that partially differs from the guidance character string X (an example of a first character string). Specifically, one of the guidance character string X and the distribution character string Y is a character string obtained by partially deleting the other, or a character string obtained by adding specific phrases to the other.
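Since the worked examples mark insertion phrases with square brackets, the delete-direction side of this relationship can be illustrated with a small helper. This is purely illustrative: in the embodiments themselves, Y is obtained by table lookup, not by string surgery, and the bracket notation is only a presentation convention:

```python
import re

def strip_insertion_phrases(text):
    # Remove every bracketed insertion phrase, then tidy the punctuation
    # left behind (a comma immediately before a period).
    text = re.sub(r"\s*\[[^\]]*\]", "", text)
    return re.sub(r"\s*,\s*\.", ".", text)
```

Applied to the lost-child guidance string above, this yields its simplified counterpart "We have found a lost child. ...".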
In a configuration in which the distribution character string Y contains insertion phrases, it is necessary to transmit from the information processing device 12 to the terminal device 20 distribution information Q containing the identification information of the registered character string R and the identification information of the insertion phrase for each insertion section. The reproduction control unit 644 of the terminal device 20 generates the distribution character string Y with reference to a guide table TB that associates registered character strings R with identification information and associates insertion phrases with identification information. Specifically, the reproduction control unit 644 identifies, from the guide table TB, the registered character string R and the insertion phrases corresponding to the identification information indicated by the distribution information Q, and generates the distribution character string Y by inserting the insertion phrases into that registered character string R.
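The terminal-side reconstruction described here can be sketched as follows; the table contents, the ID scheme, and the placeholder syntax of the template are all hypothetical:

```python
# Hypothetical guide table TB: IDs -> registered templates / insertion phrases.
REGISTERED_STRINGS = {
    "R1": "We have found a lost child {0} {1}. "
          "If you are looking for this child, "
          "please come to the customer service desk.",
}
INSERTION_PHRASES = {
    "P_AGE_4": "of 4 years old",
    "P_RED": "in red clothes",
}

def build_distribution_string(registered_id, phrase_ids):
    # Look up the template for the registered-string ID carried by Q, then
    # insert the phrase identified for each insertion section.
    template = REGISTERED_STRINGS[registered_id]
    return template.format(*(INSERTION_PHRASES[p] for p in phrase_ids))
```

Only the IDs travel over the acoustic channel; the strings themselves come from the table held on the terminal side.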
(5) In each of the foregoing embodiments, both the guidance character string X and the distribution character string Y are displayed on the display device 36, but the display control unit 46 may display only the distribution character string Y on the display device 36 (omitting the display of the guidance character string X). Alternatively, the display of the distribution character string Y may be omitted, and the display control unit 46 may display only the guidance character string X on the display device 36.
(6) In each of the foregoing embodiments, a display device that displays the distribution character string Y is exemplified as the playback device 68, but the method of reproducing the distribution character string Y is not limited to these examples. For example, a sound emitting device that emits a voice uttering the distribution character string Y may be used as the playback device 68. Specifically, the reproduction control unit 644 generates, by speech synthesis using the distribution character string Y indicated by the distribution information Q, a voice signal representing the voice of that distribution character string, and supplies it to the sound emitting device of the playback device 68. The display of the distribution character string Y and the audible emission of the distribution character string Y may also be used in combination.
(7) Each of the foregoing embodiments exemplified the case where the distribution character string Y is reproduced on the terminal device 20 in one language (English), but a configuration in which the language of the distribution character string Y reproduced on the terminal device 20 can be changed is also suitable. For example, for each of the plurality of pieces of identification information DY that can be designated by the distribution information Q, a plurality of distribution character strings Y expressing the same content in different languages are registered in the guide table TB. Of the plurality of distribution character strings Y corresponding to the identification information DY indicated by the distribution information Q extracted by the information extraction unit 642, the reproduction control unit 644 causes the playback device 68 to reproduce the distribution character string Y corresponding to the language designated on the terminal device 20 (hereinafter, the "designated language"). The designated language is, for example, the language designated in the language settings of the OS (Operating System) of the terminal device 20, or a language arbitrarily designated by the user HA of the terminal device 20. With this configuration, the distribution character string Y is reproduced in a designated language convenient for the user HA of the terminal device 20, which is helpful, for example, to foreigners who have difficulty understanding the language of the guidance voice V.
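The designated-language lookup can be sketched as below. The table entries and the fallback rule (defaulting to English when no entry exists for the designated language) are assumptions not stated in the text:

```python
# Hypothetical multilingual guide table TB: DY -> language code -> string Y.
GUIDE_TABLE_TB = {
    "DY1": {
        "en": "We have found a lost child. "
              "Please come to the customer service desk.",
        "ja": "迷子のお知らせです。サービスカウンターまでお越しください。",
    },
}

def resolve_distribution_string(dy, designated_language, fallback="en"):
    # Prefer the string in the terminal's designated language (e.g. the OS
    # language setting); fall back when no translation is registered.
    entry = GUIDE_TABLE_TB[dy]
    return entry.get(designated_language, entry[fallback])
```

The same identification information DY thus yields different on-screen text on different terminals, with no change to the acoustic channel.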
(8) The guide table TB used by the terminal device 20 is, for example, distributed from a specific distribution server via a communication network such as a mobile communication network or the Internet and stored in the storage device 66. The timing at which the guide table TB is distributed from the distribution server to the terminal device 20 is arbitrary. For example, the guide table TB may be updated periodically at a predetermined interval. Alternatively, each time the program for reproducing the distribution character string Y is launched, the terminal device 20 may query the distribution server about whether the guide table TB has been updated, and, if an update exists, the latest guide table TB may be distributed from the server to the terminal device 20.
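The launch-time update check of modification (8) can be sketched with a version comparison against the distribution server. The server interface below is a hypothetical stand-in for whatever network API the system would actually expose.

```python
class StubServer:
    """Hypothetical distribution server (stand-in for a real network API)."""
    def __init__(self, version, table):
        self._version, self._table = version, table

    def latest_version(self):
        return self._version

    def fetch_table(self):
        return self._table

def maybe_update_table(local_version, server):
    """On program launch, ask the distribution server whether the guide
    table TB has changed; download the latest table only when the
    server's version is newer than the locally stored one."""
    remote_version = server.latest_version()
    if remote_version > local_version:
        return remote_version, server.fetch_table()
    return local_version, None  # no update needed
```

A periodic update at a predetermined interval would simply call the same check from a timer instead of at launch.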
(9) In the embodiments above, the guide table TB is stored in the storage device 66 of the terminal device 20, but a distribution server that communicates with the terminal device 20 via a communication network such as a mobile communication network or the Internet may hold the guide table TB instead. In that case, the playback control unit 644 of the terminal device 20 transmits a distribution request containing the distribution information Q extracted by the information extraction unit 642 to the distribution server. The distribution server identifies, in the guide table TB, the distribution character string Y indicated by the distribution information Q in the request received from the terminal device 20, and sends it back to the requesting terminal device 20. The playback control unit 644 of the terminal device 20 then causes the playback device 68 to reproduce the distribution character string Y received from the distribution server. This configuration has the advantage that the terminal device 20 need not hold the guide table TB (the plural distribution character strings Y). On the other hand, the configurations of the embodiments in which the terminal device 20 holds the guide table TB require no communication with a distribution server when reproducing the distribution character string Y, and therefore have the advantage that the terminal device 20 can reproduce the distribution character string Y regardless of whether it is able to communicate with the distribution server.
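The server-side lookup of modification (9) can be sketched as a request/response pair: the terminal sends the extracted distribution information Q, and the server resolves the distribution character string Y from its copy of the guide table. The table entry, message fields, and function names are illustrative assumptions.

```python
# Guide table TB as held by the distribution server (illustrative entry).
SERVER_TABLE = {"DY002": "Platform changed to No. 3."}

def handle_distribution_request(request):
    """Server side: resolve the distribution character string Y named by
    the distribution information Q in the terminal's request."""
    dy = request["Q"]
    return {"Y": SERVER_TABLE[dy]}

def fetch_and_play(q):
    """Terminal side: send a distribution request containing Q, receive Y,
    and hand it to the playback device (here simply returned)."""
    response = handle_distribution_request({"Q": q})  # stands in for a network call
    return response["Y"]
```

In a real deployment the call between the two functions would traverse the mobile network or the Internet, which is exactly the dependency the on-terminal table of the main embodiments avoids.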
(10) In the embodiments above, the distribution information Q is transmitted to the terminal device 20 by acoustic communication using sound waves as the transmission medium, but the communication scheme for transmitting the distribution information Q to the terminal device 20 is not limited to this example. For instance, the distribution information Q may be transmitted from the information providing system 10 to the terminal device 20 by wireless communication using electromagnetic waves, such as radio waves or infrared rays, as the transmission medium. As these examples show, short-range wireless communication with no intervening communication network is well suited to transmitting the distribution information Q; such short-range communication is, for example, acoustic communication using sound waves as the transmission medium or wireless communication using electromagnetic waves as the transmission medium. That said, the distribution information Q may also be transmitted from the information providing system 10 to the terminal device 20 via a communication network such as a mobile communication network or the Internet.
(11) In the embodiments above, the identification information DY of the distribution character string Y is transmitted to the terminal device 20 as the distribution information Q, but the content of the distribution information Q is not limited to this example. For instance, distribution information Q representing the distribution character string Y itself may be transmitted from the information providing system 10 to the terminal device 20.
(12) In the embodiments above, the information providing system 10 is used for transportation guidance, but the settings in which the information providing system 10 may be used are not limited to transportation such as trains or buses. For example, the information providing system 10 illustrated in the embodiments can be used for guidance concerning a wide variety of facilities, including transport facilities such as seaports and airports, commercial facilities such as shopping malls, exhibition facilities such as art galleries and museums, sports facilities such as stadiums and gymnasiums, lodging facilities such as hotels and inns, and tourist facilities such as temples.
(13) The information processing apparatus 12 according to each aspect described above is realized, as illustrated in the embodiments, by cooperation between the control device 32 and a program. The program according to each embodiment causes a computer (for example, the control device 32) to function as: a first information acquisition unit 42 that acquires a guide character string X according to an instruction from the user HB; a second information acquisition unit 44 that acquires a distribution character string Y partially differing from the guide character string X; a distribution control unit 54 that causes a transmission device (for example, the sound emitting device 14) to transmit distribution information Q indicating the distribution character string Y; and a display control unit 46 that causes the display device 36 to display the distribution character string Y. The program exemplified above may be provided in a form stored on a computer-readable recording medium and installed on a computer. The recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a good example, but any known form of recording medium, such as a semiconductor recording medium or a magnetic recording medium, may be included. A non-transitory recording medium here means any recording medium other than a transitory, propagating signal, and does not exclude volatile recording media. The program may also be provided to a computer in the form of distribution via a communication network.
(14) The present invention may also be specified as a method of operating the information processing apparatus 12 according to each embodiment (an information provision method). In an information processing method according to a preferred aspect, a computer system equipped with the display device 36 (the information providing system 10, consisting of one or more computers) acquires a guide character string X according to an instruction from the user HB (SA4), acquires a distribution character string Y partially differing from the guide character string X (SA5), causes a transmission device (for example, the sound emitting device 14) to transmit distribution information Q indicating the distribution character string Y (SA10), and causes the display device 36 to display the distribution character string Y (SA6).
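The four steps of the method in aspect (14) can be sketched as a single pipeline, with callables standing in for the patent's functional units. This is a schematic illustration under that assumption; the derivation rule in the usage below is made up.

```python
def information_provision(instruction, derive, transmit, display):
    """Minimal sketch of the method of aspect (14): acquire the guide
    character string X from the user's instruction (SA4), acquire the
    partially differing distribution character string Y (SA5), transmit
    distribution information Q indicating Y (SA10), and display Y (SA6)."""
    x = instruction()   # SA4: first information acquisition unit
    y = derive(x)       # SA5: second information acquisition unit
    transmit(y)         # SA10: distribution control unit
    display(y)          # SA6: display control unit
    return x, y
```

For example, with a derivation rule that drops a trailing reason clause, the transmitted and displayed string Y is a shortened variant of the guide string X, matching the "partially differing" relation the method requires.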
The following aspects can be derived from the embodiments described above. An information processing apparatus according to a first aspect of the present invention comprises: a first information acquisition unit that acquires a first character string according to an instruction from a user; a second information acquisition unit that acquires a second character string partially differing from the first character string; a distribution control unit that causes a transmission device to transmit distribution information indicating the second character string; a display device; and a display control unit that causes the display device to display the second character string. In a specific aspect, the distribution control unit causes the distribution information to be transmitted by acoustic communication, using as the transmission device a sound emitting device that emits speech corresponding to the first character string.
In the first aspect, the display control unit may preferably display the first character string and the second character string side by side on the display device. The user can then easily compare the first character string with the second character string.
More preferably, for each of a plurality of languages selected by the user, the display control unit may cause the display device to display the first character string and the second character string expressed in that language. This aspect has the advantage that the user can check the first character string and the second character string in multiple languages.
In the first aspect, the first information acquisition unit may preferably acquire a first character string according to the result of speech recognition performed on sound collected by a sound collecting device. Alternatively, the first information acquisition unit may acquire a first character string specified on an operation screen of the display device. In this case, the first information acquisition unit may acquire a first character string that includes, for each of a plurality of items, a character string the user selected from a plurality of candidates through instructions on the operation screen.
In the first aspect, the second information acquisition unit preferably specifies a second character string similar to the first character string. Alternatively, the second information acquisition unit may specify a second character string that simplifies the content of the first character string, or a second character string in which detailed information is added to the first character string.
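The three variants of the second information acquisition unit can be sketched as alternative derivations of the second character string from the first. The transformation rules below are placeholders chosen for illustration; the patent does not specify how similarity, simplification, or detail-addition is computed.

```python
def derive_second_string(first, mode, extra=""):
    """Sketch of the three variants of the second information acquisition
    unit: 'similar' (near-identical wording), 'simplified' (content
    abbreviated), and 'detailed' (information added). The rules here are
    illustrative placeholders only."""
    if mode == "similar":
        # minor wording change, same content
        return first.replace("approximately ", "about ")
    if mode == "simplified":
        # keep only the leading clause
        return first.split(",")[0] + "."
    if mode == "detailed":
        # append supplementary information
        return f"{first} {extra}".strip()
    raise ValueError(f"unknown mode: {mode}")
```

Each branch yields a string that is partially different from the first character string, which is the common requirement across all three variants.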
An information provision method according to a second aspect of the present invention is an information provision method in a computer system equipped with a display device, in which the method acquires a first character string according to an instruction from a user, acquires a second character string partially differing from the first character string, causes a transmission device to transmit distribution information indicating the second character string, and causes the display device to display the second character string.
A program according to a third aspect of the present invention causes a computer system equipped with a display device to execute: a process of acquiring a first character string according to an instruction from a user; a process of acquiring a second character string partially differing from the first character string; a process of causing a transmission device to transmit distribution information indicating the second character string; and a process of causing the display device to display the second character string. The various preferred aspects of the first aspect may be applied to both the second and third aspects.
DESCRIPTION OF REFERENCE SIGNS: 10 ... information providing system; 12 ... information processing apparatus; 14 ... sound emitting device; 16, 62 ... sound collecting device; 20 ... terminal device; 32, 64 ... control device; 34, 66 ... storage device; 36 ... display device; 38 ... operation device; 42 ... first information acquisition unit; 44 ... second information acquisition unit; 46 ... display control unit; 52 ... speech synthesis unit; 54 ... distribution control unit; 56 ... machine translation unit; 642 ... information extraction unit; 644 ... playback control unit; 68 ... playback device.

Claims (12)

  1.  An information processing apparatus comprising:
     a first information acquisition unit that acquires a first character string according to an instruction from a user;
     a second information acquisition unit that acquires a second character string partially differing from the first character string;
     a distribution control unit that causes a transmission device to transmit distribution information indicating the second character string;
     a display device; and
     a display control unit that causes the display device to display the second character string.
  2.  The information processing apparatus according to claim 1, wherein the distribution control unit causes the distribution information to be transmitted by acoustic communication, using as the transmission device a sound emitting device that emits speech corresponding to the first character string.
  3.  The information processing apparatus according to claim 1 or claim 2, wherein the display control unit displays the first character string and the second character string side by side on the display device.
  4.  The information processing apparatus according to claim 3, wherein, for each of a plurality of languages selected by the user, the display control unit causes the display device to display the first character string and the second character string expressed in that language.
  5.  The information processing apparatus according to any one of claims 1 to 4, wherein the first information acquisition unit acquires the first character string according to a result of speech recognition performed on sound collected by a sound collecting device.
  6.  The information processing apparatus according to any one of claims 1 to 4, wherein the first information acquisition unit acquires the first character string specified on an operation screen of the display device.
  7.  The information processing apparatus according to claim 6, wherein the first information acquisition unit acquires the first character string including, for each of a plurality of items, a character string the user selected from a plurality of candidates through instructions on the operation screen.
  8.  The information processing apparatus according to any one of claims 1 to 7, wherein the second information acquisition unit specifies a second character string similar to the first character string.
  9.  The information processing apparatus according to any one of claims 1 to 7, wherein the second information acquisition unit specifies a second character string that simplifies the content of the first character string.
  10.  The information processing apparatus according to any one of claims 1 to 7, wherein the second information acquisition unit specifies a second character string in which detailed information is added to the first character string.
  11.  An information provision method in a computer system equipped with a display device, the method comprising:
     acquiring a first character string according to an instruction from a user;
     acquiring a second character string partially differing from the first character string;
     causing a transmission device to transmit distribution information indicating the second character string; and
     causing the display device to display the second character string.
  12.  A program for causing a computer system equipped with a display device to execute:
     a process of acquiring a first character string according to an instruction from a user;
     a process of acquiring a second character string partially differing from the first character string;
     a process of causing a transmission device to transmit distribution information indicating the second character string; and
     a process of causing the display device to display the second character string.
PCT/JP2017/020081 2016-06-06 2017-05-30 Information processing device, information service method, and program WO2017212981A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016112541A JP6780305B2 (en) 2016-06-06 2016-06-06 Information processing device and information provision method
JP2016-112541 2016-06-06

Publications (1)

Publication Number Publication Date
WO2017212981A1 true WO2017212981A1 (en) 2017-12-14

Family

ID=60577860

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/020081 WO2017212981A1 (en) 2016-06-06 2017-05-30 Information processing device, information service method, and program

Country Status (2)

Country Link
JP (1) JP6780305B2 (en)
WO (1) WO2017212981A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117235B (en) 2018-08-24 2019-11-05 腾讯科技(深圳)有限公司 A kind of business data processing method, device and relevant device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016046753A (en) * 2014-08-26 2016-04-04 ヤマハ株式会社 Acoustic processing device
JP2016075890A (en) * 2014-07-29 2016-05-12 ヤマハ株式会社 Terminal equipment, information providing system, information providing method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4937614B2 (en) * 2006-03-23 2012-05-23 ソフトバンクモバイル株式会社 Electronic bulletin board system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016075890A (en) * 2014-07-29 2016-05-12 ヤマハ株式会社 Terminal equipment, information providing system, information providing method, and program
JP2016046753A (en) * 2014-08-26 2016-04-04 ヤマハ株式会社 Acoustic processing device

Also Published As

Publication number Publication date
JP6780305B2 (en) 2020-11-04
JP2017219951A (en) 2017-12-14

Similar Documents

Publication Publication Date Title
JP6747535B2 (en) Terminals and programs
JP6569252B2 (en) Information providing system, information providing method and program
JP6729494B2 (en) Information management system and information management method
WO2017212981A1 (en) Information processing device, information service method, and program
JP6686306B2 (en) Information providing apparatus and information providing method
JP6596903B2 (en) Information providing system and information providing method
JP6048516B2 (en) Information providing system and information providing method
JP6772468B2 (en) Management device, information processing device, information provision system, language information management method, information provision method, and operation method of information processing device
JP7196426B2 (en) Information processing method and information processing system
JP6878922B2 (en) Information provision method, terminal device and program
JP7192948B2 (en) Information provision method, information provision system and program
JP6493343B2 (en) Information providing system, information providing method and program
JP2017033398A (en) Terminal device
JP6984769B2 (en) Information provision method and information provision system
US20170352269A1 (en) Information provision device, terminal device, information provision system, and information provision method
JP6915357B2 (en) Information provision method and information provision system
JP6834634B2 (en) Information provision method and information provision system
WO2017179461A1 (en) Information generation system, information provision method, and information distribution method
JP7074116B2 (en) Information processing method and information processing equipment
WO2017130794A1 (en) Information processing device, information processing method, information management device, and information management method
JP2017016163A (en) Management device
JP6597156B2 (en) Information generation system
JP2020198550A (en) program
JP2018132634A (en) Information providing device and information providing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17810160

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17810160

Country of ref document: EP

Kind code of ref document: A1