WO2018008383A1 - Terminal device, information reproduction method, and program - Google Patents

Terminal device, information reproduction method, and program

Info

Publication number
WO2018008383A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
playback
reproduction
time point
identification information
Prior art date
Application number
PCT/JP2017/022650
Other languages
English (en)
Japanese (ja)
Inventor
翔太 森口
貴裕 岩田
優樹 瀬戸
岩瀬 裕之
Original Assignee
ヤマハ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ヤマハ株式会社
Publication of WO2018008383A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • the present invention relates to a technique for providing information to a terminal device.
  • Patent Document 1 discloses a configuration in which time codes are sequentially transmitted to a user's portable device in parallel with the progress of a show, and caption information such as subtitles is displayed on the portable device at the time specified by each time code.
  • An object of the present invention is to reproduce, at an appropriate time, information related to reproduction events that occur in time series.
  • A terminal device according to the present invention includes an information receiving unit capable of receiving identification information for each reproduction event, sequentially transmitted in parallel with the reproduction of a plurality of reproduction events, a time point setting unit that sequentially sets a notification time point corresponding to each of the plurality of reproduction events, and a playback control unit that, triggered by the arrival of each notification time point, causes a playback device to play back the related information of the playback event corresponding to that notification time point.
  • The time point setting unit specifies the time difference between a first playback event among the plurality of playback events and a second playback event immediately following the first playback event, and sets the time point at which that time difference has elapsed from the notification time point of the first playback event as the notification time point of the second playback event; when the information receiving unit receives the identification information of the second playback event before the notification time point of the second playback event arrives, the time point setting unit updates the notification time point of the second playback event to the reception time point of the identification information.
  • An information reproduction method according to the present invention is an information reproduction method in a terminal device having a reproduction device, in which: identification information for each reproduction event, sequentially transmitted in parallel with the reproduction of a plurality of reproduction events, is received; the time difference between a first reproduction event among the plurality of reproduction events and a second reproduction event immediately following the first reproduction event is specified, and the time point at which that time difference has elapsed from the notification time point of the first reproduction event is set as the notification time point of the second reproduction event; when the identification information of the second reproduction event is received before the notification time point of the second reproduction event arrives, the notification time point of the second reproduction event is updated to the reception time point of the identification information; and, triggered by the arrival of each notification time point, the reproduction device reproduces the related information of the reproduction event corresponding to that notification time point.
  • the present invention can also be understood as a program for causing a computer of a terminal device having a playback device to execute the information playback method or a recording medium on which the program is recorded.
  • FIG. 1 is a block diagram of an information providing system according to the first embodiment of the present invention. FIG. 2 is a block diagram of the distribution device. FIG. 3 is an explanatory diagram of the acoustic signal. FIG. 4 is an explanatory diagram of the relationship between the progress of a show and the operation of the terminal device.
  • FIG. 1 is a block diagram of an information providing system 100 according to the first embodiment of the present invention.
  • The information providing system 100 of the first embodiment is a computer system for providing users with information related to various forms of entertainment (for example, plays, performances, and movies) in parallel with their progress, and, as illustrated in FIG. 1, includes the distribution device 10 and the terminal device 20.
  • the distribution device 10 is installed in a facility H such as a theater or a hall where various performances are held.
  • a user who visits the facility H carries the terminal device 20.
  • The terminal device 20 is a portable information processing terminal such as a mobile phone or a smartphone.
  • the terminal device 20 is not limited to the information processing terminal carried by the user, and may be a large display device (for example, a caption display device) installed in the facility H.
  • In the first embodiment, a stage show (for example, a character show) held in the facility H is assumed.
  • The terminal device 20 sequentially displays, in parallel with the progress of the show, information related to each of a plurality of lines pronounced in time series (hereinafter referred to as "related information"). In the first embodiment, the related information is a character string representing the line (that is, a caption).
  • The user of the terminal device 20 can visually follow the lines of the show by checking the related information displayed sequentially by the terminal device 20 while watching the show. Therefore, for example, there is an advantage that even a hearing-impaired user can grasp the content of the show.
  • FIG. 2 is a block diagram of the distribution apparatus 10.
  • the distribution device 10 includes a control device 12, a storage device 14, an input device 16, and a sound emission device 18.
  • the control device 12 is a processing circuit including a CPU (Central Processing Unit), for example, and comprehensively controls each element of the distribution device 10.
  • the storage device 14 is configured by a known recording medium such as a magnetic recording medium or a semiconductor recording medium, or a combination of a plurality of types of recording media, and includes a program executed by the control device 12 and various data used by the control device 12.
  • the storage device 14 of the first embodiment stores the acoustic signal X.
  • the acoustic signal X is a time signal representing a plurality of dialogue sounds that are pronounced in a show.
  • the acoustic signal X is generated in advance by recording a sound produced by a speaker such as a voice actor or by a known speech synthesis process.
  • The control device 12 supplies the acoustic signal X to the sound emitting device 18 in response to an instruction given by the administrator via the input device 16, whereby the sounds of the plurality of lines are sequentially reproduced from the sound emitting device 18.
  • a show is formed by reproducing the acoustic signal X in parallel with the performance of each performer on the stage in the facility H.
  • FIG. 3 is an explanatory diagram of the acoustic signal X.
  • The acoustic signal X of the first embodiment contains a pronunciation period An (A1, A2, A3, ...) for each line Ln.
  • The pronunciation period An corresponding to any one line Ln is the section in which the acoustic component pronouncing that line Ln is present in the acoustic signal X. Accordingly, each line Ln is sounded sequentially, one per pronunciation period An, by supplying the acoustic signal X to the sound emitting device 18.
  • Each sound generation period An has a variable length according to the length of the dialogue Ln.
  • The identification information Dn of each line Ln is distributed from the distribution device 10 to the terminal device 20 in parallel with the reproduction of the plurality of lines Ln. Specifically, the identification information Dn of a line Ln is distributed to the terminal device 20 within the pronunciation period An in which that line Ln is pronounced. As illustrated in FIG. 3, the identification information Dn is repeatedly distributed from the distribution device 10 to the terminal device 20 a plurality of times. For example, the distribution of the identification information Dn is repeated intermittently or continuously from the start point to the end point of the pronunciation period An. The distribution of the identification information Dn may also be started before the start of the pronunciation period An, and/or ended before the end of the pronunciation period An.
  • A configuration in which the identification information Dn is distributed a plurality of times within a period overlapping the pronunciation period An is preferable, but the offsets of the start point and the end point of the distribution period relative to those of the pronunciation period An are arbitrary.
  • Acoustic communication, which uses sound waves (air vibrations) as a transmission medium, is used to distribute the identification information Dn to the terminal device 20.
  • The acoustic component of the identification information Dn is contained at a plurality of different positions on the time axis within the pronunciation period An of the acoustic signal X.
  • The acoustic component of the identification information Dn is a modulation component generated by a modulation process to which the identification information Dn is applied. For example, frequency modulation, in which a carrier wave such as a sine wave of a predetermined frequency is modulated by the identification information Dn, or spread modulation of the identification information Dn using a spreading code, is suitable as the modulation process for the identification information Dn.
  • The acoustic signal X, which contains the acoustic component of the line Ln and the acoustic component of the identification information Dn in each pronunciation period An, is supplied to the sound emitting device 18 in FIG. 2. Accordingly, the acoustic component of the line Ln and the acoustic component of the identification information Dn are emitted in sequence from the sound emitting device 18.
  • The frequency band of the acoustic component of the identification information Dn is set to a range (for example, 18 kHz or more and 20 kHz or less) above the frequency band of sounds that the user hears in a normal environment.
  • The sound emitting device 18 of the first embodiment thus functions as an acoustic device that emits the acoustic component of each line Ln and also functions, in acoustic communication using sound waves as a transmission medium, as a transmitter that transmits the identification information Dn of each line Ln to its surroundings.
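  • The following is a minimal sketch, not taken from the publication, of how an identification code Dn might be embedded in the acoustic signal X as an inaudible-band component; the sampling rate, carrier frequencies, bit duration, repetition interval, and signal level are all assumed values, and Python with NumPy is used purely for illustration.

        import numpy as np

        FS = 44100                     # sampling rate of the acoustic signal X (assumed)
        F0, F1 = 18500.0, 19500.0      # inaudible-band carriers for bits 0 / 1 (assumed)
        BIT_DURATION = 0.02            # seconds per bit (assumed)

        def modulate_id(identification_bits):
            """Return a high-frequency acoustic component carrying one code Dn."""
            samples_per_bit = int(FS * BIT_DURATION)
            t = np.arange(samples_per_bit) / FS
            chunks = []
            for bit in identification_bits:
                f = F1 if bit else F0
                chunks.append(0.05 * np.sin(2 * np.pi * f * t))  # low level vs. dialogue
            return np.concatenate(chunks)

        def build_pronunciation_period(dialogue_audio, identification_bits, repeat_interval=0.5):
            """Mix the Dn component into the dialogue audio repeatedly across the period An."""
            out = dialogue_audio.copy()
            burst = modulate_id(identification_bits)
            step = int(FS * repeat_interval)
            for start in range(0, len(out) - len(burst), step):  # repeat Dn within An
                out[start:start + len(burst)] += burst
            return out

        # Example: a 2-second placeholder "dialogue" and a hypothetical 8-bit code
        dialogue = np.zeros(2 * FS)
        period_a1 = build_pronunciation_period(dialogue, [1, 0, 1, 1, 0, 0, 1, 0])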
  • FIG. 4 is an explanatory diagram of the relationship between the progress of the show and the operation of the terminal device 20.
  • the distribution apparatus 10 sequentially transmits the identification information Dn of each line Ln by acoustic communication in parallel with the pronunciation of the plurality of lines Ln (that is, the progress of the show). Specifically, as described above, in parallel with the reproduction of the dialogue Ln in the pronunciation period An, the distribution of the identification information Dn of the dialogue Ln is repeated within the pronunciation period An.
  • Because the identification information Dn is distributed a plurality of times within the pronunciation period An, the possibility that the terminal device 20 fails to receive the identification information Dn is reduced.
  • FIG. 5 is a block diagram of the terminal device 20.
  • the terminal device 20 includes a control device 21, a storage device 23, a sound collection device 25, an input device 27, and a display device 29.
  • the control device 21 is a processing circuit including a CPU, for example, and comprehensively controls each element of the terminal device 20.
  • the storage device 23 is configured by a known recording medium such as a magnetic recording medium or a semiconductor recording medium, or a combination of a plurality of types of recording media, and includes a program executed by the control device 21 and various data used by the control device 21.
  • The storage device 23 of the first embodiment stores an information table T.
  • the information table T received by the terminal device 20 from the management server (not shown) via a communication network such as a mobile communication network or the Internet is stored in the storage device 23.
  • The information table T is a data table in which identification information Dn (D1, D2, D3, ...) and related information Rn (R1, R2, R3, ...) are registered for each of the plurality of lines Ln that can be pronounced in the show.
  • the related information Rn of each line Ln in the first embodiment is text data representing a character string of the line Ln.
  • the identification information Dn is a code for identifying the dialogue Ln as described above, but can also be referred to as a code for identifying the related information Rn.
  • The information table T of the first embodiment also registers a time difference Q[n, n+1] (Q[1,2], Q[2,3], Q[3,4], ...) for each pair of lines (Ln, Ln+1) that follow each other.
  • The time difference Q[n, n+1] is the time between the line Ln and the line Ln+1. Specifically, any one time difference Q[n, n+1] is the interval from the start point of the pronunciation period An in which the nth line Ln is pronounced to the start point of the pronunciation period An+1 in which the immediately following line Ln+1 is pronounced.
  • That is, the pronunciation of the line Ln+1 starts when the time difference Q[n, n+1] has elapsed from the start of the pronunciation of the line Ln by the sound emitting device 18.
  • For example, the pronunciation of the second line L2 starts when the time difference Q[1,2] has elapsed from the start of pronunciation of the first line L1, and the pronunciation of the third line L3 starts when the time difference Q[2,3] has elapsed from the start of pronunciation of the line L2.
  • the information table T can be configured by a plurality of separate tables.
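  • As an illustration only, the information table T of the first embodiment could be modelled by a structure such as the following; the field names and example values are assumptions introduced for this sketch and are not part of the publication.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class LineEntry:
            identification: str            # Dn: code identifying the line Ln (and its related info Rn)
            related_info: str              # Rn: caption text for the line Ln
            gap_to_next: Optional[float]   # Q[n, n+1] in seconds; None for the final line

        # Hypothetical contents of an information table T for three lines (illustrative values)
        INFORMATION_TABLE = [
            LineEntry("D1", "Caption for line L1", 4.0),   # Q[1,2]
            LineEntry("D2", "Caption for line L2", 2.5),   # Q[2,3]
            LineEntry("D3", "Caption for line L3", None),
        ]

        # Lookup by identification code, mirroring the search performed when a code is received
        BY_ID = {entry.identification: entry for entry in INFORMATION_TABLE}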
  • The sound collection device 25 (microphone) in FIG. 5 is an acoustic device that collects ambient sounds and generates an acoustic signal Y. Specifically, the sound collection device 25 generates an acoustic signal Y representing the sound reproduced by the sound emitting device 18 of the distribution device 10 (that is, a mixture of the acoustic component of the line Ln and the acoustic component of the identification information Dn). Note that an A/D converter that converts the acoustic signal Y generated by the sound collection device 25 from analog to digital is omitted from the drawings for convenience.
  • the input device 27 is an operation device operated by the user for various instructions to the terminal device 20.
  • A plurality of controls operated by the user, or a touch panel that detects contact by the user, is suitably used as the input device 27.
  • the display device 29 (for example, a liquid crystal display panel) displays related information Rn under the control of the control device 21. Note that one or both of the sound collection device 25 and the display device 29 may be configured separately from the terminal device 20 and connected to the terminal device 20.
  • The control device 21 executes a program stored in the storage device 23 to realize a plurality of functions (an information extraction unit 42, a time point setting unit 44, and a reproduction control unit 46) for providing the related information Rn to the user.
  • a configuration in which the functions of the control device 21 exemplified above are distributed to a plurality of devices, or a configuration in which a dedicated electronic circuit realizes a part of the functions of the control device 21 may be employed.
  • The information extraction unit 42 extracts the identification information Dn of each line Ln from the acoustic signal Y generated by the sound collection device 25. Specifically, the information extraction unit 42 emphasizes, with a band-pass filter for example, the acoustic component of the acoustic signal Y in the frequency band containing the identification information Dn, and extracts the identification information Dn by applying to the emphasized component a demodulation process corresponding to the modulation process used when the acoustic signal X was generated. The identification information Dn of a line Ln is extracted each time that line Ln is pronounced by the sound emitting device 18.
  • When the terminal device 20 is a portable information processing terminal such as a mobile phone or a smartphone, the sound collection device 25 used for voice calls between terminal devices 20 and for sound recording during video shooting is also used to receive the identification information Dn by acoustic communication. As described above, the sound collection device 25 and the information extraction unit 42 function as an information receiving unit 50 that can receive the identification information Dn of each line Ln sequentially transmitted from the distribution device 10 in parallel with the reproduction of the plurality of lines Ln.
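  • A corresponding receiver-side sketch is shown below; it assumes the same hypothetical carriers and bit duration as the distribution-side sketch above and replaces the band-pass filter and demodulation process with a simple per-bit correlation, purely for illustration.

        import numpy as np

        FS = 44100
        F0, F1 = 18500.0, 19500.0      # must match the carriers assumed on the distribution side
        BIT_DURATION = 0.02

        def demodulate_id(acoustic_signal_y, n_bits):
            """Recover one identification code Dn from a burst at the start of signal Y (sketch)."""
            samples_per_bit = int(FS * BIT_DURATION)
            t = np.arange(samples_per_bit) / FS
            ref0 = np.sin(2 * np.pi * F0 * t)   # reference carrier for bit 0
            ref1 = np.sin(2 * np.pi * F1 * t)   # reference carrier for bit 1
            bits = []
            for i in range(n_bits):
                chunk = acoustic_signal_y[i * samples_per_bit:(i + 1) * samples_per_bit]
                if len(chunk) < samples_per_bit:
                    break
                # Compare correlation energy against each carrier instead of a full band-pass filter
                bits.append(1 if abs(np.dot(chunk, ref1)) > abs(np.dot(chunk, ref0)) else 0)
            return bits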
  • The time point setting unit 44 sequentially sets the time points Pn (hereinafter referred to as "notification time points") at which the related information Rn corresponding to each of the plurality of lines Ln is to be notified to the user.
  • the notification time point Pn of the related information Rn corresponding to any one line Ln is set to a time point (typically a time point within the sounding period An) corresponding to the sounding period An of the line Ln.
  • The time point setting unit 44 of the first embodiment specifies, as the notification time point Pn+1 at which the related information Rn+1 of the (n+1)th line Ln+1 is to be reproduced, the time point at which the time difference Q[n, n+1] registered in the information table T has elapsed from an already set notification time point Pn.
  • the notification time point P3 of the third line L3 is the time point when the time difference Q [2,3] has elapsed from the notification time point P2 of the immediately preceding line L2.
  • Specifically, the time point setting unit 44 increments over time a measured value (for example, a count value of an internal timer) that is initialized to zero at any one notification time point Pn, and specifies the time point at which the measured value reaches the time difference Q[n, n+1] registered in the information table T as the notification time point Pn+1 of the related information Rn+1.
  • the time point setting unit 44 specifies the time point when the information receiving unit 50 receives the first identification information D1 from the distribution device 10 as the notification time point P1.
  • the reproduction control unit 46 in FIG. 5 causes the display device 29 to sequentially display the related information Rn of each of the plurality of dialogues Ln. Specifically, the reproduction control unit 46 causes the display device 29 to display related information Rn of the dialogue Ln when the notification time Pn set by the time setting unit 44 for any one dialogue Ln arrives. That is, the related information Rn displayed on the display device 29 is updated every time the notification time point Pn set by the time point setting unit 44 arrives. Accordingly, the related information Rn of each line Ln is sequentially displayed in parallel with the reproduction of the plurality of lines Ln.
  • Display of the related information R1 of the line L1 starts at the notification time point P1 at which the terminal device 20 receives the identification information D1 within the pronunciation period A1. The display of the related information R2 of the line L2 starts at the notification time point P2, which follows the notification time point P1 by the time difference Q[1,2], and the display of the related information R3 of the line L3 starts at the notification time point P3, which follows the notification time point P2 by the time difference Q[2,3].
  • As described above, in the first embodiment, the time point at which the time difference Q[n, n+1] has elapsed from the notification time point Pn of the related information Rn corresponding to one line Ln is set as the notification time point Pn+1 of the related information Rn+1 corresponding to the next line Ln+1.
  • The notification time points Pn of the second and subsequent pieces of related information Rn are therefore set with the notification time point P1, the reception time point of the identification information D1, as their starting point. Accordingly, the related information Rn of each of the second and subsequent lines Ln is displayed on the display device 29 at a notification time point Pn that lags by a time δ behind the start point of the pronunciation period An in which that line Ln is pronounced (that is, behind the start of pronunciation of the line Ln). In other words, the display of the related information Rn is delayed by the time δ with respect to the progress of the lines in the actual show.
  • Here, the identification information Dn of each line Ln is repeatedly distributed from the start point of the pronunciation period An. Therefore, in a situation where the notification time point Pn of the related information Rn lags behind the start point of the pronunciation period An of the line Ln, the information receiving unit 50 can receive the identification information Dn within the period of time δ from the start point of the pronunciation period An to the notification time point Pn. In other words, when the information receiving unit 50 receives the identification information Dn of a line Ln before the arrival of the notification time point Pn set for that line Ln, it can be presumed that the notification time point Pn lags behind the start point of the pronunciation period An.
  • Accordingly, in the first embodiment, when the information receiving unit 50 receives the identification information Dn of a line Ln before the arrival of the notification time point Pn of that line Ln (that is, when the notification time point Pn is late with respect to the progress of the lines), the notification time point Pn is updated to the reception time point of the identification information Dn. The notification time point Pn is thereby corrected to an earlier time point; that is, the delay in displaying the related information Rn is reduced.
  • For example, the time point setting unit 44 sets the time point at which the time difference Q[1,2] has elapsed from the notification time point P1 of the line L1 as the notification time point P2 of the immediately following line L2. When the information receiving unit 50 receives the identification information D2 of the line L2 at a time point t2 before the arrival of that notification time point P2, the time point setting unit 44 updates the notification time point P2 to the time point t2 at which the identification information D2 was received.
  • The notification time point P2 is thereby corrected to an earlier time point (the time point t2 at which the identification information D2 was received). The time point setting unit 44 then sets the time point at which the time difference Q[2,3] has elapsed from the updated notification time point P2 as the notification time point P3 of the immediately following line L3. As understood from the above description, the delay in displaying the related information Rn with respect to the progress of the lines is reduced.
  • FIG. 8 is a flowchart of a process in which the time point setting unit 44 sets the notification time point Pn and the reproduction control unit 46 reproduces the related information Rn (hereinafter referred to as “information reproduction process”).
  • An information reproduction process is started in response to an instruction from the user to the input device 27. Reception of the identification information Dn by the information receiving unit 50 is repeated in a predetermined cycle in parallel with the information reproduction process.
  • When the information reproduction process starts and the information receiving unit 50 receives the first identification information D1, the reproduction control unit 46 causes the display device 29 to display the related information R1 of the line L1 (SA2).
  • The time point setting unit 44 adds 1 to the variable n indicating one line Ln (SA3). Then, the time point setting unit 44 retrieves from the information table T the time difference Q[n-1, n] between the line Ln-1, for which the notification time point Pn-1 has already been set, and the immediately following line Ln, and sets the time point at which the time difference Q[n-1, n] has elapsed from the notification time point Pn-1 as the notification time point Pn of the line Ln (SA4). For example, in step SA4 immediately after the setting of the notification time point P1 (the first iteration), the time point at which the time difference Q[1,2] has elapsed from the notification time point P1 is set as the notification time point P2 of the line L2.
  • The reproduction control unit 46 determines whether or not the notification time point Pn has arrived (SA5).
  • When the notification time point Pn arrives (SA5: YES), the reproduction control unit 46 retrieves the related information Rn from the information table T and displays it on the display device 29 (SA6). That is, with the arrival of the notification time point Pn, the related information Rn-1 currently displayed is updated to the related information Rn of the next line Ln.
  • After the related information Rn is displayed, the process returns to step SA3 and 1 is added to the variable n, so that the processing target is updated to the next line Ln. The related information Rn of each line Ln is therefore sequentially displayed on the display device 29 in parallel with the pronunciation of each line Ln by the sound emitting device 18.
  • On the other hand, when the notification time point Pn has not yet arrived (SA5: NO), the reproduction control unit 46 determines whether or not the information receiving unit 50 has received the identification information Dn of the line Ln (SA7).
  • When the identification information Dn has been received (SA7: YES), the time point setting unit 44 updates the current notification time point Pn (that is, the provisional notification time point Pn set in the immediately preceding step SA4) to the reception time point of the identification information Dn (SA8). In other words, when the identification information Dn is received before the arrival of the notification time point Pn (SA5: NO, SA7: YES), the time point setting unit 44 brings the notification time point Pn forward so as to reduce its delay with respect to the progress of the lines.
  • The process then returns to step SA5, and the related information Rn is displayed on the display device 29 when the updated notification time point Pn arrives (SA5: YES) (SA6).
  • When the information receiving unit 50 has not received the identification information Dn (SA7: NO), the process returns to step SA5 without updating the notification time point Pn (SA8). Therefore, if the notification time point Pn is not delayed with respect to the progress of the lines, the display of the related information Rn is sequentially updated at each notification time point Pn at which the corresponding time difference Q[n-1, n] has elapsed, starting from the first notification time point P1, as in the example of FIG. 4.
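  • The first-embodiment information reproduction process could be sketched as follows; this is an illustration only, in which the table layout and the helper functions receive_id() and display() are assumptions introduced for the sketch rather than elements of the publication.

        import time

        def information_reproduction(table, receive_id, display, poll=0.05):
            """Sketch of the first-embodiment loop, assuming:
            - table: ordered list of (identification, related_info, gap_to_next) tuples,
            - receive_id(): returns the code just extracted from the sound, or None,
            - display(text): shows related information Rn on the display device 29."""
            # Wait for the first identification information D1, then show R1 (SA2)
            while receive_id() != table[0][0]:
                time.sleep(poll)
            notification_point = time.monotonic()
            display(table[0][1])

            n = 0
            while n + 1 < len(table):
                n += 1                                          # SA3
                ident, related, _ = table[n]
                notification_point += table[n - 1][2]           # SA4: Pn = Pn-1 + Q[n-1, n]
                while time.monotonic() < notification_point:    # SA5
                    if receive_id() == ident:                   # SA7
                        notification_point = time.monotonic()   # SA8: pull Pn forward
                        break
                    time.sleep(poll)
                display(related)                                # SA6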
  • As described above, in the first embodiment, the time points at which the time difference Q[n-1, n] has elapsed from the notification time point Pn-1 are sequentially set as the notification time points Pn, and the related information Rn is notified to the user upon the arrival of each notification time point Pn.
  • Therefore, compared with, for example, a configuration in which the related information Rn is reproduced only upon reception of the identification information Dn of each line Ln, the related information Rn of each line Ln can be sequentially reproduced at appropriate times linked to the pronunciation of the lines Ln even when the terminal device 20 cannot properly receive some of the identification information Dn.
  • Moreover, when the identification information Dn of a line Ln is received before the arrival of its notification time point Pn, the notification time point Pn is updated to the reception time point of the identification information Dn, so that the delay in reproducing the related information Rn with respect to the progress of the lines is suppressed.
  • Second Embodiment: A second embodiment of the present invention will now be described.
  • In the configurations described below, elements whose operations or functions are the same as those of the first embodiment are denoted by the reference signs used in the description of the first embodiment, and their detailed descriptions are omitted as appropriate.
  • an unexpected event (hereinafter referred to as “sudden event”) that is not assumed in advance may occur between adjacent lines Ln.
  • A sudden event is an event of variable length; a typical example is an improvised performance (ad lib) by a performer in the show.
  • When a sudden event occurs, the administrator instructs the distribution device 10 to stop the reproduction of the acoustic signal X by operating the input device 16, and instructs it to restart the reproduction of the acoustic signal X when the sudden event ends.
  • the control device 12 of the distribution device 10 stops and restarts the reproduction of the acoustic signal X in accordance with an instruction from the administrator.
  • the progression of dialogue stops when a sudden event occurs. Therefore, in order to display the related information Rn at an appropriate time parallel to the progression of the dialogue, it is necessary to correct the notification time Pn when a sudden event occurs.
  • FIG. 9 is a schematic diagram of the information table T in the second embodiment.
  • In the second embodiment, the plurality of lines Ln are divided on the time axis into a plurality of sets (hereinafter referred to as "line groups") Gm, with the time points at which a sudden event may occur as boundaries (m is a natural number).
  • Each line group Gm (G1, G2, ...) is a set of one or more lines Ln that are consecutive on the time axis.
  • For example, the first line group G1 is composed of the first to third lines Ln (L1, L2, L3), and the second line group G2 is composed of the fourth and fifth lines.
  • The information table T designates the division into line groups Gm (the boundaries between neighboring line groups Gm). Specifically, the information table T includes data designating the boundary between adjacent line groups (Gm, Gm+1), or data (for example, a flag) designating the beginning or end of each line group Gm.
  • the total number of dialogues Ln belonging to each dialogue group Gm may be different for each dialogue group Gm.
  • The time length of a sudden event is variable. Therefore, the time point at which the line Ln immediately after the sudden event is emitted from the sound emitting device 18 varies depending on the time length of the sudden event; the shorter the sudden event, the earlier the line Ln is pronounced. Consequently, in a configuration in which the notification time point Pn of the first line Ln of a line group Gm is fixedly set to the time point at which the time difference Q[n-1, n] has elapsed from the notification time point Pn-1 of the preceding line Ln-1, that notification time point Pn may not be appropriate for the pronunciation of the line Ln immediately after the sudden event.
  • For example, when the time length of the sudden event is short (when the line Ln is pronounced earlier), the notification time point Pn can lag behind the line Ln; conversely, when the time length of the sudden event is long (when the line Ln is pronounced later), the notification time point Pn may excessively precede the pronunciation of the line Ln.
  • Therefore, for the first line Ln of each line group Gm, the time point setting unit 44 of the present embodiment sets as the notification time point Pn not the time point at which the time difference Q[n-1, n] has elapsed from the notification time point Pn-1 of the preceding line Ln-1, but the time point at which the information receiving unit 50 receives the identification information Dn of that line Ln.
  • the method of setting the notification time point Pn for the dialogue Ln other than the head of each dialogue group Gm is the same as in the first embodiment.
  • FIG. 10 is an explanatory diagram of the process of setting the notification time point Pn for the first line Ln of a line group Gm. As illustrated in FIG. 10, assume a situation in which the time point at which the time difference Q[n-1, n] has elapsed from the notification time point of the last line Ln-1 of the preceding line group Gm-1 is set as a provisional notification time point Pn of the first line Ln of the line group Gm.
  • When the sudden event ends early, the reproduction of the acoustic signal X (the pronunciation of the line Ln and the distribution of the identification information Dn) is resumed early, so that the information receiving unit 50 can receive the identification information Dn before the provisional notification time point Pn arrives.
  • In this case, the time point setting unit 44 updates the notification time point Pn to the reception time point of the identification information Dn, as in the first embodiment.
  • When the sudden event is prolonged, on the other hand, the information receiving unit 50 receives the identification information Dn only after the provisional notification time point Pn has arrived. In this case, the notification time point Pn is updated to the time point at which the identification information Dn is later received; that is, the notification time point Pn is corrected to a later time point.
  • FIG. 11 is a flowchart of information reproduction processing in the second embodiment.
  • The information reproduction process of the second embodiment is the information reproduction process of the first embodiment with steps SB1 to SB4 added.
  • The operation of updating the notification time point Pn to the reception time point of the identification information Dn (SA8) when the information receiving unit 50 receives the identification information Dn before the arrival of the notification time point Pn (SA5: NO, SA7: YES) is the same as in the first embodiment.
  • When the notification time point Pn arrives (SA5: YES), the time point setting unit 44 determines whether or not the line Ln corresponding to the current variable n is the first line of a line group Gm (SB1).
  • When the line Ln is the first line of a line group Gm (SB1: YES), the time point setting unit 44 determines whether or not the identification information Dn of the line Ln has been received (SB2). If the identification information Dn has already been received before the arrival of the notification time point Pn (SA5: NO, SA7: YES), the result of the determination in step SB2 is affirmative.
  • When the identification information Dn has already been received (SB2: YES), the reproduction control unit 46 displays the related information Rn of the line Ln on the display device 29 (SA6).
  • When the identification information Dn has not yet been received (SB2: NO), the time point setting unit 44 waits until the information receiving unit 50 receives the identification information Dn (SB3: NO). When the identification information Dn is received (SB3: YES), the time point setting unit 44 updates the notification time point Pn to the reception time point of the identification information Dn (SB4).
  • Then, the reproduction control unit 46 displays the related information Rn of the line Ln on the display device 29 (SA6). After the display of the related information Rn (SA6), the process returns to step SA3 to update the variable n, and the related information Rn is sequentially displayed in parallel with the subsequent pronunciation of each line Ln.
  • As described above, in the second embodiment, the notification time point Pn of the first line Ln of each line group Gm is set to the time point at which the information receiving unit 50 receives the identification information Dn. Therefore, even if a sudden event E that occurs between two adjacent line groups Gm is prolonged, the related information Rn of each line Ln can be sequentially reproduced after the sudden event E at appropriate times linked to the pronunciation of the lines Ln.
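  • A sketch of the second-embodiment branch (steps SB1 to SB4) added to the loop above might look as follows; the is_group_head flag, the table layout, and the helper functions are assumptions made only for this illustration.

        import time

        def reproduce_with_groups(table, receive_id, display, poll=0.05):
            """Entries are (identification, related_info, gap_to_next, is_group_head) tuples."""
            while receive_id() != table[0][0]:
                time.sleep(poll)
            notification_point = time.monotonic()
            display(table[0][1])

            n = 0
            while n + 1 < len(table):
                n += 1
                ident, related, _, is_group_head = table[n]
                notification_point += table[n - 1][2]            # provisional Pn (SA4)
                received = False
                while time.monotonic() < notification_point:     # SA5
                    if receive_id() == ident:                     # SA7
                        notification_point = time.monotonic()     # SA8
                        received = True
                        break
                    time.sleep(poll)
                if is_group_head and not received:                # SB1, SB2: NO
                    while receive_id() != ident:                  # SB3: wait past the sudden event E
                        time.sleep(poll)
                    notification_point = time.monotonic()         # SB4: push Pn back to reception
                display(related)                                  # SA6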
  • In the second embodiment, the notification time point Pn provisionally set for the first line Ln of a line group Gm is corrected to the reception time point of the identification information Dn.
  • In the third embodiment, by contrast, the setting of the notification time point Pn using the time difference Q[n-1, n] is omitted for the first line Ln of each line group Gm, and the reception time point of the identification information Dn is set directly as the notification time point Pn.
  • FIG. 12 is a flowchart of the information reproduction process in the third embodiment. As illustrated in FIG. 12, the information reproduction process of the third embodiment is the information reproduction process of the first embodiment with steps SC1 to SC4 added.
  • After the variable n is updated in step SA3, the time point setting unit 44 determines whether or not the line Ln corresponding to the updated variable n is the first line of a line group Gm (SC1).
  • When the line Ln is not the first line of a line group Gm (SC1: NO), the time point setting unit 44 sets, as in the first embodiment, the time point at which the time difference Q[n-1, n] has elapsed from the notification time point of the preceding line Ln-1 as the notification time point Pn (SA4).
  • The operation of displaying the related information Rn when the notification time point Pn arrives (SA6) and the operation of updating the notification time point Pn to the reception time point when the identification information Dn is received before the arrival of the notification time point Pn (SA8) are the same as in the first embodiment.
  • When the line Ln is the first line of a line group Gm (SC1: YES), the time point setting unit 44 waits until the information receiving unit 50 receives the identification information Dn of the line Ln (SC2: NO). When the identification information Dn is received (SC2: YES), the time point setting unit 44 sets the reception time point of the identification information Dn as the notification time point Pn (SC3), and the reproduction control unit 46 displays the related information Rn of the line Ln on the display device 29 (SC4).
  • After the notification time point Pn is set (SC3) and the related information Rn is displayed (SC4), the process returns to step SA3.
  • As with the second embodiment, for the first line Ln of each line group Gm, the time point setting unit 44 of the third embodiment sets as the notification time point Pn the time point at which the information receiving unit 50 receives the identification information Dn of that line Ln, rather than the time point at which the time difference Q[n-1, n] has elapsed. Therefore, the third embodiment achieves the same effects as the second embodiment.
  • Furthermore, in the third embodiment, the time difference Q[n-1, n] from the immediately preceding line Ln-1 is not used when setting the notification time point Pn of the first line Ln of a line group Gm, so the time difference Q[n-1, n] corresponding to the first line Ln of each line group Gm can be omitted from the information table T.
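  • Under the same assumptions as the earlier sketches, the third-embodiment handling of the first line of each line group Gm (steps SC2 to SC4) reduces to the following per-line step, offered purely as an illustration.

        import time

        def head_of_group_step(ident, related, receive_id, display, poll=0.05):
            # Third-embodiment variant (SC2-SC4): for the first line Ln of a line group Gm,
            # the time difference Q[n-1, n] is never consulted; the notification point Pn is
            # simply the moment the identification information Dn is received.
            while receive_id() != ident:            # SC2: wait for Dn of the first line of Gm
                time.sleep(poll)
            notification_point = time.monotonic()   # SC3: Pn = reception point of Dn
            display(related)                        # SC4: display Rn immediately
            return notification_point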
  • In each of the above-described embodiments, the related information Rn is displayed on the display device 29; however, the method of reproducing the related information Rn is not limited to this example.
  • For example, an audio signal representing speech that pronounces the line Ln may be stored in the storage device 23 as the related information Rn for each line Ln, and the speech represented by the related information Rn corresponding to the identification information Dn received by the information receiving unit 50 may be emitted from a sound emitting device such as a speaker or headphones. It is also possible to synthesize the speech of the line Ln by speech synthesis using the related information Rn.
  • As understood from the above examples, the playback device that reproduces the related information Rn includes, in addition to the display device 29 that displays the related information Rn, a sound emitting device that emits the sound indicated by the related information Rn, and the reproduction control unit 46 is comprehensively expressed as an element that causes the playback device to reproduce the related information Rn.
  • The notification time point Pn can also be used as a playback position that the user can jump to in a moving image.
  • For example, the playback control unit 46 displays on the display device 29 a moving image stored in the storage device 23 and, in response to an instruction from the user via the input device 27 of the terminal device 20, changes the playback position of the moving image to the next notification time point Pn (that is, fast-forwards).
  • The time difference Q[n-1, n] between the line Ln-1 immediately before a point at which a sudden event E can occur and the line Ln immediately after the sudden event E may be set to the maximum value or the average value of the time length assumed for the sudden event E.
  • the character string representing each line Ln is reproduced as the related information Rn, but the content of the related information Rn is not limited to the above examples.
  • In each of the above embodiments, the related information Rn of lines Ln reproduced in time series in a show has been illustrated, but the situations in which the information providing system 100 is used are not limited to the shows illustrated in the above embodiments.
  • For example, the information providing system 100 of each of the above embodiments may also be used when voice guidance for various facilities, such as a transportation facility (for example, a train or a bus), an exhibition facility (for example, a museum or an art gallery), or a tourist facility, is produced in time series.
  • As understood from the above examples, the information providing system 100 is comprehensively expressed as a system that provides the related information Rn of each reproduction event in parallel with a plurality of reproduction events occurring in time series (for example, the pronunciation of lines Ln or of guidance voices), and the pronunciation of lines Ln and the voice guidance of various facilities are examples of reproduction events that occur in time series.
  • The line group Gm exemplified in the above embodiments is an example of a reproduction event group into which a plurality of reproduction events are divided. Further, for example, when a plurality of pieces of information expressing the same content in different languages (for example, Japanese and a foreign language) are displayed simultaneously or sequentially (in time series), such display can also be included in the reproduction events.
  • In each of the above embodiments, the acoustic signal X containing the acoustic component of the line Ln and the acoustic component of the identification information Dn is reproduced, but a configuration in which the acoustic signal X does not contain the acoustic component of each line Ln may also be employed.
  • For example, an acoustic signal X may be generated in real time, in parallel with the progress of the show, by mixing an audio signal obtained by picking up the voice of a performer of the show with an acoustic signal representing the acoustic component of each piece of identification information Dn, and then supplied to the sound emitting device 18.
  • In each of the above embodiments, the sound emitting device 18 that reproduces the lines Ln is also used to transmit the identification information Dn, but it is also possible to transmit the identification information Dn to the terminal device 20 from a sound emitting device separate from the one used to reproduce the lines Ln.
  • the identification information Dn is transmitted to the terminal device 20 by acoustic communication using sound waves as a transmission medium, but the communication method for transmitting the identification information Dn to the terminal device 20 is not limited to acoustic communication.
  • the identification information Dn can be transmitted to the terminal device 20 by radio communication using electromagnetic waves such as radio waves and infrared rays in synchronization with the sound emission of the line Ln by the sound emitting device 18.
  • Short-range wireless communication that does not involve a communication network such as a mobile communication network is suitable for transmitting the identification information Dn; acoustic communication using sound waves as a transmission medium and wireless communication using electromagnetic waves as a transmission medium are both examples of such short-range wireless communication.
  • Acoustic communication has various advantages, such as the ability to reuse the existing sound emitting device 18 and easy control of the communication range, but it is more susceptible to noise than wireless communication using electromagnetic waves as a transmission medium, and communication errors may therefore occur.
  • When the terminal device 20 fails to receive the identification information Dn because of such an error, a delay in reproducing the related information Rn with respect to the progress of the lines may occur.
  • Because each of the above-described embodiments can reduce the delay in reproducing the related information Rn with respect to the progress of the lines, they can be said to be particularly effective in a configuration that uses acoustic communication to transmit the identification information Dn.
  • In each of the above embodiments, the reproduction control unit 46 selectively displays on the display device 29 the related information Rn stored in advance in the storage device 23 of the terminal device 20, but the source of the related information Rn is not limited to the storage device 23.
  • For example, a configuration is also possible in which the terminal device 20 communicates, via a communication network, with a management server that holds a plurality of pieces of related information Rn.
  • The reproduction control unit 46 transmits to the management server an information request specifying the identification information Dn extracted by the information extraction unit 42, and the management server retrieves the related information Rn corresponding to the identification information Dn specified in the information request and transmits it to the requesting terminal device 20.
  • the reproduction control unit 46 of the terminal device 20 causes the display device 29 to display the related information Rn received from the management server.
  • In each of the above embodiments, the time difference Q[n-1, n] between two successive lines is specified with reference to the information table T stored in the storage device 23 of the terminal device 20, but the location from which the time difference Q[n-1, n] is obtained is not limited to the storage device 23.
  • For example, a configuration is assumed in which the terminal device 20 communicates, via a communication network, with a management server that holds the time differences Q[n-1, n] between adjacent lines.
  • When setting the notification time point Pn, the time point setting unit 44 transmits to the management server an information request specifying the variable n, and the management server retrieves the time difference Q[n-1, n] corresponding to the variable n and transmits it to the requesting terminal device 20.
  • The time point setting unit 44 of the terminal device 20 adds the time difference Q[n-1, n] received from the management server to the preceding notification time point Pn-1 to set the notification time point Pn.
  • It is also possible for the terminal device 20 to hold an information table T from which either the time differences Q[n-1, n] or the related information Rn is omitted and to acquire the omitted time differences Q[n-1, n] or related information Rn from the management server, or for the terminal device 20 not to hold the information table T at all and to acquire both the time differences Q[n-1, n] and the related information Rn from the management server.
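  • Purely as an illustration of the variation in which the time difference Q[n-1, n] and the related information Rn are acquired from the management server, a hypothetical request might look as follows; the endpoint URL and the response schema are assumptions, since the publication does not specify any protocol.

        import json
        import urllib.parse
        import urllib.request

        # Hypothetical management-server endpoint; not specified in the publication.
        SERVER = "https://example.com/information"

        def fetch_entry(identification):
            """Ask the management server for the related information Rn and the time
            difference Q[n, n+1] associated with one identification code Dn (sketch)."""
            query = urllib.parse.urlencode({"id": identification})
            with urllib.request.urlopen(f"{SERVER}?{query}") as response:
                payload = json.load(response)
            return payload["related_info"], payload.get("gap_to_next")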
  • the terminal device 20 exemplified in each of the above-described embodiments is realized by the cooperation of the control device 21 and the program as described above.
  • A program according to a preferred aspect causes a computer to function as the information receiving unit 50 capable of receiving the identification information Dn of each line Ln sequentially transmitted in parallel with the reproduction of the plurality of lines Ln, the time point setting unit 44 that sequentially sets the notification time point Pn corresponding to each of the plurality of lines Ln, and the reproduction control unit 46 that causes the related information Rn of the line Ln corresponding to the notification time point Pn to be reproduced upon the arrival of that notification time point Pn.
  • The time point setting unit 44 specifies the time difference Q[n-1, n] between a line Ln-1 and the immediately following line Ln, and sets the time point at which the time difference Q[n-1, n] has elapsed from the notification time point Pn-1 of the line Ln-1 as the notification time point Pn of the line Ln. When the information receiving unit 50 receives the identification information Dn of the line Ln before the arrival of the notification time point Pn of the line Ln, the time point setting unit 44 updates the notification time point Pn of the line Ln to the reception time point of the identification information Dn.
  • The time difference Q[n-1, n] can be specified, for example, by referring to the information table T that registers the time difference Q[n-1, n] between successive lines (Ln-1, Ln), or by querying the management server.
  • The program according to each of the embodiments described above can be provided in a form stored in a computer-readable recording medium and installed in a computer.
  • The recording medium is, for example, a non-transitory recording medium; an optical recording medium (an optical disc) such as a CD-ROM is a good example, but any known recording medium such as a semiconductor recording medium or a magnetic recording medium may also be used.
  • A terminal device according to a preferred aspect of the present invention includes an information receiving unit capable of receiving identification information for each reproduction event sequentially transmitted in parallel with the reproduction of a plurality of reproduction events, a time point setting unit that sequentially sets notification time points corresponding to each of the plurality of reproduction events, and a playback control unit that, triggered by the arrival of each notification time point, causes a playback device to play back the related information of the playback event corresponding to that notification time point.
  • The time point setting unit specifies the time difference between a first playback event among the plurality of playback events and a second playback event immediately following it, and sets the time point at which that time difference has elapsed from the notification time point of the first playback event as the notification time point of the second playback event; when the information receiving unit receives the identification information of the second playback event before the notification time point of the second playback event arrives, the time point setting unit updates the notification time point of the second playback event to the reception time point of the identification information.
  • An information reproduction method according to a preferred aspect of the present invention is an information reproduction method in a terminal device having a reproduction device, in which: identification information for each reproduction event, sequentially transmitted in parallel with the reproduction of a plurality of reproduction events, is received; the time difference between a first reproduction event among the plurality of reproduction events and a second reproduction event immediately following the first reproduction event is specified, and the time point at which that time difference has elapsed from the notification time point of the first reproduction event is set as the notification time point of the second reproduction event; when the identification information of the second reproduction event is received before the notification time point of the second reproduction event arrives, the notification time point of the second reproduction event is updated to the reception time point of the identification information; and, triggered by the arrival of each notification time point, the playback apparatus reproduces the related information of the playback event corresponding to that notification time point.
  • the present invention can also be understood as a program (third aspect) for causing a computer of a terminal device having a reproducing apparatus to execute the information reproducing method or a recording medium (fourth aspect) on which the program is recorded.
  • In the above aspects, the time point at which the time difference specified in the information table has elapsed from the notification time point of the first reproduction event is set as the notification time point of the second reproduction event, and the related information is played back upon the arrival of that notification time point. Therefore, compared with, for example, a configuration in which the related information of a playback event is played back only upon receiving its identification information, the related information of each playback event can be reproduced at appropriate times linked to the occurrence of the playback events even if the terminal device cannot properly receive some of the identification information.
  • Moreover, when the identification information of the second playback event is received before its notification time point arrives, the notification time point of the second playback event is updated to the reception time point, which has the advantage of suppressing the delay in reproducing the related information.
  • In a preferred aspect, the plurality of playback events are temporally divided into a plurality of playback event groups, and, for the first playback event of each of the plurality of playback event groups, the time point at which the identification information of that playback event is received is specified as its notification time point.
  • the information receiving unit may acquire the identification information from an acoustic signal generated by collecting sound including the acoustic component of the identification information.
  • When the terminal device is a portable information processing terminal such as a mobile phone or a smartphone, the sound collection device used for voice calls between terminal devices and for sound recording during video shooting can also be used to receive the identification information.
  • There is also an advantage that an existing sound emitting device of the distribution device that distributes the identification information can be used to transmit the identification information.
  • the information receiving unit may receive any of a plurality of pieces of identification information distributed repeatedly over a plurality of times for each reproduction event. According to this aspect, the possibility that the terminal device cannot receive the identification information is reduced.
  • In a preferred aspect, the distribution of the identification information may be started before the playback start point of each playback event. According to this aspect, the possibility that the identification information is received by the terminal device at a time point close to the playback start point of the playback event increases, and, as a result, the possibility that the start of reproduction of the related information approaches (ideally, is synchronized with) the playback start point of the playback event increases.
  • DESCRIPTION OF SYMBOLS: 100 ... Information providing system, 10 ... Distribution device, 12, 21 ... Control device, 14, 23 ... Storage device, 16, 27 ... Input device, 18 ... Sound emitting device, 20 ... Terminal device, 25 ... Sound collection device, 29 ... Display device, 42 ... Information extraction unit, 44 ... Time point setting unit, 46 ... Reproduction control unit, 50 ... Information receiving unit

Abstract

The present invention concerns a terminal device that includes: an information receiving unit for receiving identification information for each reproduction event of a plurality of reproduction events, said identification information being transmitted successively in parallel with the reproduction of the plurality of reproduction events; a time point setting unit that successively sets notification time points corresponding to each reproduction event of the plurality of reproduction events; and a reproduction control unit that treats the arrival of each notification time point as a trigger for causing a reproduction device to reproduce related information pertaining to the reproduction event corresponding to that notification time point. The time point setting unit identifies a time difference between a first reproduction event among the plurality of reproduction events and a second reproduction event immediately following the first reproduction event, and sets, as the notification time point of the second reproduction event, the time point reached when said time difference has elapsed after the notification time point of the first reproduction event; furthermore, if the information receiving unit receives the identification information of the second reproduction event before the arrival of the notification time point of the second reproduction event, the time point setting unit updates the notification time point of the second reproduction event to the time point at which said identification information was received.
PCT/JP2017/022650 2016-07-06 2017-06-20 Terminal device, information reproduction method, and program WO2018008383A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-134074 2016-07-06
JP2016134074A JP6702042B2 (ja) 2016-07-06 2016-07-06 端末装置

Publications (1)

Publication Number Publication Date
WO2018008383A1 true WO2018008383A1 (fr) 2018-01-11

Family

ID=60912558

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/022650 WO2018008383A1 (fr) 2016-07-06 2017-06-20 Terminal device, information reproduction method, and program

Country Status (2)

Country Link
JP (1) JP6702042B2 (fr)
WO (1) WO2018008383A1 (fr)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006507723A (ja) * 2002-10-25 2006-03-02 ディズニー エンタープライゼス インコーポレイテッド 携帯機器へのデジタルデータのストリーミング方式
JP2004229706A (ja) * 2003-01-28 2004-08-19 Takuya Miyagawa 演劇通訳システム、演劇通訳装置
JP2006253894A (ja) * 2005-03-09 2006-09-21 Nec Corp 通訳システム、通訳方法、移動通信端末およびサーバ装置
JP2010068016A (ja) * 2008-09-08 2010-03-25 Q-Tec Inc 映画・字幕同期表示システム
JP2015061112A (ja) * 2013-09-17 2015-03-30 Npo法人メディア・アクセス・サポートセンター 携帯デバイスへのセカンドスクリーン情報の提供方法
WO2016017576A1 (fr) * 2014-07-29 2016-02-04 ヤマハ株式会社 Système et procédé de gestion d'informations

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6557886B1 (ja) * 2018-02-23 2019-08-14 エヴィクサー株式会社 コンテンツ再生プログラム、コンテンツ再生方法及びコンテンツ再生システム
JP2019146174A (ja) * 2018-02-23 2019-08-29 エヴィクサー株式会社 コンテンツ再生プログラム、コンテンツ再生方法及びコンテンツ再生システム
WO2019163085A1 (fr) * 2018-02-23 2019-08-29 エヴィクサー株式会社 Programme de reproduction de contenu, procédé de reproduction de contenu, et système de reproduction de contenu
KR20200118876A (ko) * 2018-02-23 2020-10-16 에빅사 가부시키가이샤 콘텐츠 재생 프로그램, 콘텐츠 재생 방법 및 콘텐츠 재생 시스템
KR102389040B1 (ko) * 2018-02-23 2022-04-22 에빅사 가부시키가이샤 콘텐츠 재생 프로그램, 콘텐츠 재생 방법 및 콘텐츠 재생 시스템
US11432031B2 (en) 2018-02-23 2022-08-30 Evixar Inc. Content playback program, content playback method, and content playback system
JP2020021020A (ja) * 2018-08-03 2020-02-06 ヤマハ株式会社 端末装置、端末装置の動作方法およびプログラム
JP7139766B2 (ja) 2018-08-03 2022-09-21 ヤマハ株式会社 端末装置、端末装置の動作方法およびプログラム

Also Published As

Publication number Publication date
JP2018005071A (ja) 2018-01-11
JP6702042B2 (ja) 2020-05-27

Similar Documents

Publication Publication Date Title
KR101796429B1 (ko) 단말 디바이스, 정보 제공 시스템, 정보 제시 방법, 및 정보 제공 방법
KR101942678B1 (ko) 정보 관리 시스템 및 정보 관리 방법
JP2016153906A (ja) 端末装置
WO2018008383A1 (fr) Dispositif terminal, procédé de reproduction d'informations et programme
JP2016046753A (ja) 音響処理装置
US10846150B2 (en) Information processing method and terminal apparatus
JP6231244B1 (ja) 再生システム、端末装置、情報提供方法、端末装置の動作方法およびプログラム
JP6866948B2 (ja) 端末装置、端末装置の動作方法、およびプログラム
JP6930639B2 (ja) 端末装置、端末装置の動作方法およびプログラム
WO2019230363A1 (fr) Système de transmission sonore, système de traitement d'informations, procédé de fourniture d'informations, et procédé de traitement d'informations
WO2019159679A1 (fr) Procédé de commande de reproduction, équipement terminal et programme
JP2019054351A (ja) 信号処理方法、信号処理装置、および情報提供システム
US11438397B2 (en) Broadcast system, terminal apparatus, method for operating terminal apparatus, and recording medium
JP7087745B2 (ja) 端末装置、情報提供システム、端末装置の動作方法および情報提供方法
JP6596144B2 (ja) 情報提供方法、音響処理装置およびプログラム
WO2020246205A1 (fr) Programme, dispositif terminal et procédé de fonctionnement d'un dispositif terminal
JP6447695B2 (ja) 再生システムおよび情報提供方法
WO2017130794A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations, dispositif de gestion d'informations et procédé de gestion d'informations
JP2019169949A (ja) 音声処理システムおよび音声処理方法
JP2018132634A (ja) 情報提供装置および情報提供システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17823990

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17823990

Country of ref document: EP

Kind code of ref document: A1