GB2389220A - An autocue - Google Patents

An autocue

Info

Publication number
GB2389220A
GB2389220A GB0320871A GB 2389220 A
Authority
GB
United Kingdom
Prior art keywords
speech
user
speaker
signals
autocue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0320871A
Other versions
GB0320871D0 (en)
GB2389220B (en)
Inventor
Robert Alexander Keiller
Richard Antony Kirk
De Veen Evelyn Van
Gerhardt Paul Otto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Technology Europe Ltd
Original Assignee
Canon Research Centre Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Research Centre Europe Ltd filed Critical Canon Research Centre Europe Ltd
Publication of GB0320871D0 publication Critical patent/GB0320871D0/en
Publication of GB2389220A publication Critical patent/GB2389220A/en
Application granted granted Critical
Publication of GB2389220B publication Critical patent/GB2389220B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/04 Speaking
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Debugging And Monitoring (AREA)

Abstract

An autocue for prompting a user with the next part of a known speech comprises means for storing signals representative of the known speech to be spoken by the speaker, means for receiving speech signals from the speaker as the speaker delivers the known speech, means for determining from said stored and received signals the position of the speaker within the speech, and means for informing the user of the next part of the known speech. The autocue may display the next part of the known speech on a display 57 and may comprise a counter to count the words and/or syllables within the received speech signals. The determining means may comprise a speech recognition means 5 for comparing the received speech signals with the known speech signals. The speaker and the user may be different people. Also claimed is a data carrier carrying computer executable instructions for carrying out the autocue method.

Description

GB 2389220 A continuation (74) Agent and/or Address for Service:
Beresford & Co, 2-5 Warwick Court, High Holborn, LONDON, WC1R 5DH, United Kingdom
1 2560110
SPEECH MONITORING SYSTEM
The present invention relates to an apparatus for and a method of monitoring speech. The invention has particular, although not exclusive, relevance to the monitoring of various characteristics of a user's speech signal in order to provide control signals for controlling the way in which the user gives a presentation to an audience and/or for controlling an interaction between the user and a computer system.
Another aspect of the present invention concerns the monitoring of certain characteristics of a speech signal representative of a known text for controlling an autocue or the like.
An audience's interest in a speech or presentation made by a speaker often depends upon the presentation and communication skills which the speaker has. For example, if the speaker speaks too quickly, then the audience will not be able to keep up with the information which is being presented to them and consequently they will lose interest in the remainder of the speech. Similarly, speakers who speak in a monotone are liable to send the audience to sleep, even though the content of the speech may be very interesting to the audience. Similarly, speakers who tend to leave large gaps or pauses within their presentations or who repeatedly use certain words or phrases, such as "basically" or "to be honest", are likely to annoy or bore the audience.
According to a first aspect, the present invention provides a system for monitoring a user's speech and for providing a feedback signal to the user for controlling the user's presentation. This system can be used, for example, to monitor for predetermined characteristics, such as speaking too fast, speaking in a monotone, leaving large gaps or pauses, etc. If a speech recognition unit is used as well, then the system can also monitor for the occurrence of preselected words within the user's speech.
In existing computer-user interactive systems, the rate of interaction is usually the same each time the user interacts with the computer. According to a second aspect, the present invention provides a system for monitoring a user's speech and for varying the interaction between the user and a computer system which the user is using in dependence upon the monitored speech. The system can, for example, try to infer the user's mood from the input speech and vary the interaction in dependence upon the inferred mood. In particular, if the user is talking fast and sounds bright and alert, then the computer system can increase its rate of interaction with the user by, for example, decreasing the response time of the software or by increasing the speaking rate of a speech synthesizer forming part of the computer system.
According to a third aspect, the present invention provides a system for monitoring the progress of a speaker as he/she delivers a known speech. The system can be used for identifying the approximate position within the known speech so as to control the automatic advancement of an autocue system.
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which: Figure 1 is a schematic block diagram illustrating a speech monitoring system according to a first aspect of the present invention; Figure 2 is a schematic block diagram illustrating a speech monitoring system according to a second aspect of the present invention; and Figure 3 is a schematic block diagram illustrating a speech monitoring system according to a third aspect of the present invention.
Figure 1 is a schematic block diagram illustrating a speech monitoring system according to a first aspect of the present invention. The purpose of the speech monitoring system shown in Figure 1 is to warn the user when he/she is making a speech that they are talking too fast or doing something else that impairs their communication. As shown, the monitoring system comprises a microphone 1, a speech processor and analysis unit 3, a speech recognition unit 5 and associated language and word models 7, a control unit 9, an alarm 11, a display 13, a data log 15 and a printer 17.

In operation, the microphone 1 converts an acoustic speech signal from the user into an equivalent electrical signal which is passed, via connector 21, to the speech processor and analysis unit 3. In this embodiment, the speech processor and analysis unit 3 is operable to convert the input speech signal from the microphone 1 into a sequence of parameter frames, each parameter frame representing a corresponding time frame of the input speech signal. The parameters in each parameter frame typically include cepstral coefficients and power/energy coefficients, which provide important information characteristic of the input speech signal. The sequence of parameter frames generated by the speech processor and analysis unit 3 is supplied, via conductor 23, to the speech recognition unit 5, which is operable to try to identify the presence of predetermined words and/or phrases within the user's speech for which there is a model in the language and word models 7. In this embodiment, the language and word models 7 are generated in advance by the user identifying words and/or phrases which he/she tends to repeat and which may cause annoyance to the audience. In the event that the speech recognition unit 5 identifies one of the words and/or phrases in the input speech, it outputs a corresponding signal to the control unit 9, via conductor 27.
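The frame-based parameterisation performed by a unit such as the speech processor and analysis unit 3 can be sketched as follows. This is a minimal illustration, not the patent's implementation: it computes only a log-energy coefficient for each overlapping time frame, whereas a real front end would also compute the cepstral coefficients mentioned above.

```python
import math

def parameter_frames(samples, frame_len=256, hop=128):
    """Split speech samples into overlapping time frames and compute one
    log-energy coefficient per frame (a real front end would add cepstra)."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame)
        feats.append(math.log(energy + 1e-10))  # avoid log(0) on silence
    return feats

# Example: one second of a 100 Hz tone sampled at 8 kHz
tone = [math.sin(2 * math.pi * 100 * n / 8000) for n in range(8000)]
feats = parameter_frames(tone)
print(len(feats))  # one coefficient per 16 ms hop: 61 frames
```

Each frame of 256 samples at an assumed 8 kHz rate covers 32 ms of speech, with frames overlapping by half, which is a typical configuration for this kind of analysis.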
In this embodiment, the speech processor and analysis unit 3 is also arranged to process the input speech to derive (i) a value indicative of the rate at which the user is speaking; (ii) a value indicative of whether or not the user is speaking in a monotone; and (iii) a value indicative of the gaps or pauses within the input speech, which values are passed to the control unit 9 via conductor 25. The control unit 9 is operable to receive the output from the speech recognition unit 5 via conductor 27 and the above values output by the speech processor and analysis unit 3 via conductor 25, and to generate control signals for controlling the alarm 11 and the display 13. The alarm 11 may be, for example, an audible, visible or vibrating alarm. More specifically, the control unit 9 is operable to monitor the output from the speech recognition unit 5 and the speech processing and analysis unit 3 and to generate, if appropriate, a warning to the user either by activating the alarm 11 or by displaying appropriate information on the display 13 in order to inform the user that, for example, he is speaking too quickly, speaking in a monotone, is leaving large gaps in the speech or is repeatedly saying one of the words and/or phrases which the speech recognition unit is designed to identify. In this embodiment, the speech monitoring system operates in real time so that the speaker is given instantaneous feedback and can modify their presentation accordingly.

In addition to being able to be used in real time during a presentation, the speech monitoring system shown in Figure 1 and described above can also be used to monitor the entire speech given by a speaker and to provide an analysis of the speech after it has ended. The analysis might include the number of repetitions of selected words and/or phrases, the average speaking rate, the variation of the speaking rate throughout the speech, the number of gaps or pauses within the speech, etc. This analysis is generated by the control unit and logged in the data log 15, which can be printed out to the printer 17 or displayed on the display 13, so that the speaker is given the appropriate feedback so that they can improve their presentation skills.
The speech monitoring system illustrated in Figure 1 might be built into a separate portable computer device or it might be implemented in computer software on a personal computer which may also be assisting the user in their presentation by, for example, generating slides for display on an overhead projector (not shown).
Figure 2 is a schematic block diagram of a speech monitoring system according to a second aspect of the present invention. The purpose of the speech monitoring system shown in Figure 2 is to monitor a user's speech and to vary an interaction between the user and a computer system in dependence upon the mood of the user which is inferred from the user's speech. As shown, the speech monitoring system comprises a microphone 1, a speech monitoring unit 31, a control unit 33 and a computer application 35, all of which, in this embodiment, are operated in a common computer system 37.

In operation, the microphone 1 converts an acoustic speech signal from the user 39 into an equivalent electrical signal which is passed, via connector 41, to the speech monitoring unit 31. In this embodiment, the speech monitoring unit identifies the rate at which the user 39 is speaking (and optionally the user's pitch and style of speaking) and passes this information to the control unit 33 via the conductor 43. In this embodiment, the rate at which the user is speaking is identified by monitoring the beginning of known words in the input speech and their duration. The rate at which the user is speaking is then determined by comparing the durations with pre-stored durations loaded in memory.

From this information, the control unit 33 infers the speaker's mood and outputs a control signal on conductor 45 for controlling the computer application 35 accordingly. In particular, the control signal output by the control unit 33 changes the way in which the computer application 35 interacts (as represented by the double-headed arrow 47) with the user 39. For example, if the user 39 is talking quickly, the control unit 33 infers that the user is bright and alert, and accordingly causes the computer application 35 to react more quickly to the user 39. For example, where the computer application 35 includes a speech synthesizer, in response to the control unit 33 inferring that the user is bright and alert the computer application 35 increases the speed of speech synthesis. Alternatively, the computer application 35 might control the time resolution of the double click detection of a mouse button (not shown). Conversely, if the speaker is speaking slowly, then the control unit 33 infers that the user is not alert and therefore causes the computer application to react more slowly to the user 39.
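The duration comparison and mood-dependent response time described above can be sketched as follows. The stored reference durations, the averaging rule and the linear scaling of the response delay are all assumptions made for illustration; the patent does not specify these details.

```python
def infer_rate_ratio(observed, reference):
    """Average ratio of observed to reference word durations.

    observed/reference map known words to their spoken duration in seconds.
    A ratio below 1.0 means the user spoke faster than the stored reference.
    """
    return sum(observed[w] / reference[w] for w in observed) / len(observed)

def response_delay(rate_ratio, base_delay=0.5):
    """Fast, alert speech gives a shorter application response time;
    slow speech gives a longer one (here a simple linear rule)."""
    return base_delay * rate_ratio

reference = {"hello": 0.40, "computer": 0.60}   # pre-stored durations
observed = {"hello": 0.30, "computer": 0.45}    # user speaking quickly
ratio = infer_rate_ratio(observed, reference)
print(round(response_delay(ratio), 3))  # 0.375, i.e. faster than the 0.5 s base
```

A real control unit 33 would smooth these estimates over many words before changing the interaction, to avoid reacting to a single hurried phrase.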
Figure 3 is a schematic block diagram of a speech monitoring system according to a third aspect of the present invention. The purpose of the speech monitoring system shown in Figure 3 is to track the position of a speaker as he/she delivers a known speech. As shown, the speech monitoring system comprises a microphone 1, a speech monitoring unit 51, a control unit 53, a text file 55 and a display 57, all of which, in this embodiment, form part of a single computer system 59.

In operation, the microphone 1 converts an acoustic speech signal of the user 39 into an equivalent electrical signal which is supplied, via conductor 61, to the speech monitoring unit 51. In this embodiment, the speech monitoring unit 51 identifies the words and/or syllables in the input speech signal and outputs a signal to the control unit via conductor 63 whenever a word and/or syllable is identified. In response, the control unit 53 counts the words and/or syllables and outputs a control signal to the text file 55 for identifying a part of the text file corresponding to the speech which the user 39 is about to speak. The identified part of the text file is then passed to the display 57, so that the user 39 can read the next part of the speech from the display 57.
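The counting scheme of Figure 3 can be sketched as a small class: each word detected by the monitoring unit advances a position counter into the stored text, and the display shows the words about to be spoken. The class and method names here are illustrative, not from the patent.

```python
class CounterAutocue:
    """Advance through a known script by counting recognised words."""

    def __init__(self, script, window=5):
        self.words = script.split()
        self.window = window      # how many upcoming words to display
        self.position = 0         # index of the next word to be spoken

    def word_heard(self):
        """Called by the monitoring unit each time a word is detected."""
        if self.position < len(self.words):
            self.position += 1

    def next_part(self):
        """Text for the display: the part of the script about to be spoken."""
        return " ".join(self.words[self.position:self.position + self.window])

cue = CounterAutocue("friends romans countrymen lend me your ears", window=3)
cue.word_heard()
cue.word_heard()
print(cue.next_part())  # "countrymen lend me"
```

Counting syllables instead of words would work the same way, with the script pre-segmented into syllables and `word_heard` fired per syllable.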
In addition to being able to be used in an autocue system, the speech monitoring system illustrated in Figure 3 can be used in similar applications, such as in a play or in an opera, where the input speech will come from the stage whilst the displayed text will be displayed to a stage manager who can provide oral prompts whenever necessary. The advantage of such an automatic autocue system is that even if the operator/stage manager is temporarily distracted, the autocue will not lose its place within the known text.

In an alternative embodiment, the input speech could be passed to a speech recognition unit which is operable to compare the input speech signal with the text file in order to identify the next part of the text file to be displayed on the display 57.
As those skilled in the art will appreciate, the above embodiments could be combined to provide, for example, a system which can be used to promote the presentational skills of a speaker and which interacts with the speaker in dependence upon the mood of the speaker which is inferred from the speaker's speech. The system may also track the speaker's progress through his speech so as to control an automatic autocue.

The present invention is not limited to the exemplary embodiments described above, and various other modifications and embodiments will be apparent to those skilled in the art.
The present application also includes the following numbered clauses:

1. A speech monitoring system for use in promoting the presentational skills of a user, the apparatus comprising:
means for receiving speech signals from the user;
means for processing the received speech signals and for generating signals indicative of the occurrence of one or more predetermined events within the received speech signals; and
control means responsive to said generated signals for outputting a feedback signal to said user for warning the user of the occurrence of the predetermined events within the received speech signals.

2. A speech monitoring system according to clause 1, further comprising a speech recognition unit operable for receiving the processed speech and for identifying one or more predetermined words and/or phrases within the received speech signals by comparing the processed speech with stored reference models, and wherein said control means is operable to generate said feedback signal in dependence upon said recognition result.

3. A speech monitor according to clause 1 or 2, wherein said feedback signal is supplied to said user via at least one of an audible, visible or vibrating alarm.

4. A speech monitoring system according to any of clauses 1 to 3, which is operable to provide said feedback signal in real time, so that said user can try to reduce the occurrence of said predetermined event or events within subsequent speech signals output by the user.

5. A speech monitor according to any preceding clause, further comprising means for generating a data log indicative of the occurrences of the predetermined event or events which are identified within the user's speech.
6. A speech monitor according to any preceding clause, wherein said predetermined event or events comprise at least one of speaking too fast, leaving too many gaps or pauses within the speech, speaking in a monotone or the like.

7. A computer system comprising:
a computer application operable for interacting with a user;
means for receiving speech signals from the user;
means for processing the received speech signals and for deriving from the processed speech signals an indication of the mood of the user; and
control means for controlling the interaction between the computer application and the user in dependence upon the derived indication of the mood of the user.

8. A system according to clause 7, wherein said received speech signals are processed in order to extract at least one of the speed at which the user is speaking, the user's pitch and the style of speech employed by the user.

9. A system according to clause 7 or 8, wherein said computer application comprises a speech synthesiser, and wherein said computer application is arranged to vary the speaking rate of the speech synthesiser in dependence upon the indication of the user's mood.

10. A system according to any of clauses 7 to 9, wherein said control means is operable to vary the response rate of the computer application in dependence upon the indication of the user's mood.

11. A speech processing system comprising:
means for storing signals representative of a known speech to be spoken by a speaker;
means for receiving speech signals from the speaker as the speaker delivers the known speech; and
means for determining from said stored signals and said received signals the position of the speaker within the known speech.
12. A computer system for use by a user during the presentation of a speech, the computer system comprising:
a speech monitoring system according to any of clauses 1 to 6 for promoting the presentational skills of the user; and
a computer system according to any of clauses 7 to 10 for varying an interaction between the computer system and the user.

13. A computer system according to clause 12, further comprising a speech processing system according to clause 11 or an autocue according to claim 1.

14. A method of promoting the presentational skills of a user, comprising the steps of:
receiving speech signals from the user;
processing the received speech signals and generating signals indicative of the occurrence of one or more predetermined events within the received speech; and
in response to the generated signals, outputting a feedback signal to the user for warning the user of the occurrence of the predetermined events within the received speech signals.

15. A method of varying the interaction between a computer application and a user, the method comprising the steps of:
receiving speech signals from the user;
processing the received speech signals and deriving from the processed speech signals an indication of the mood of the user; and
controlling the interaction between the computer application and the user in dependence upon the derived mood of the user.

16. A speech processing method comprising the steps of:
storing signals representative of a known speech to be spoken by a speaker;
receiving speech signals from the speaker as the speaker delivers the known speech; and
determining from the stored signals and the received signals the position of the speaker within the known speech.

17. A data carrier programmed with instructions for carrying out the method according to any of clauses 14 to 16 or for implementing the apparatus of any of clauses 1 to 13.
18. A speech monitoring system or method substantially as hereinbefore described with reference to or as shown in any of Figures 1 to 3.

Claims (7)

CLAIMS:
1. An autocue for prompting a user with the next part of a known speech comprising:
means for storing signals representative of the known speech to be spoken by a speaker;
means for receiving speech signals from the speaker as the speaker delivers the known speech;
means for determining from said stored signals and said received signals the position of the speaker within the known speech; and
means for informing the user of the next part of the known speech to be spoken by said speaker.
2. An autocue according to claim 1, wherein said user and said speaker are different people.
3. An autocue according to claim 1 or 2, wherein said next part of the known speech is displayed to said user on a display.
4. An autocue according to any preceding claim, wherein said determining means comprises a counter for counting words and/or syllables within the received speech signals.

5. An autocue according to any preceding claim, wherein said determining means comprises speech recognition means for comparing the received speech signals with the known speech signals.
6. A method of operating an autocue for prompting a user with the next part of a known speech, the method comprising the steps of:
storing signals representative of the known speech to be spoken by a speaker;
receiving speech signals from the speaker as the speaker delivers the known speech;
determining from the stored signals and the received signals the position of the speaker within the known speech; and
informing the user of the next part of the known speech to be spoken by the speaker.
7. A data carrier carrying computer executable instructions for carrying out the autocue method of claim 6 or for configuring a programmable computer device to become configured as an autocue according to any of claims 1 to 5.
GB0320871A 1998-12-23 1998-12-23 Speech monitoring system Expired - Fee Related GB2389220B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9828545A GB2345183B (en) 1998-12-23 1998-12-23 Speech monitoring system

Publications (3)

Publication Number Publication Date
GB0320871D0 GB0320871D0 (en) 2003-10-08
GB2389220A true GB2389220A (en) 2003-12-03
GB2389220B GB2389220B (en) 2004-02-25

Family

ID=10844974

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9828545A Expired - Fee Related GB2345183B (en) 1998-12-23 1998-12-23 Speech monitoring system
GB0320871A Expired - Fee Related GB2389220B (en) 1998-12-23 1998-12-23 Speech monitoring system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB9828545A Expired - Fee Related GB2345183B (en) 1998-12-23 1998-12-23 Speech monitoring system

Country Status (1)

Country Link
GB (2) GB2345183B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109639935A (en) * 2019-01-25 2019-04-16 合肥学院 The automatic word extractor system and method for video record
DE102020102468B3 (en) 2020-01-31 2021-08-05 Robidia GmbH Method for controlling a display device and display device for dynamic display of a predefined text

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19947359A1 (en) * 1999-10-01 2001-05-03 Siemens Ag Method and device for therapy control and optimization for speech disorders
US9124972B2 (en) * 2001-12-18 2015-09-01 Intel Corporation Voice-bearing light
FR2847706B1 (en) * 2002-11-27 2005-05-20 Vocebella Sa ANALYSIS OF THE QUALITY OF VOICE SIGNAL ACCORDING TO QUALITY CRITERIA
EP2037798B1 (en) * 2006-07-10 2012-10-31 Accenture Global Services Limited Mobile personal services platform for providing feedback
US9344821B2 (en) 2014-03-21 2016-05-17 International Business Machines Corporation Dynamically providing to a person feedback pertaining to utterances spoken or sung by the person
GB2597975B (en) * 2020-08-13 2023-04-26 Videndum Plc Voice controlled studio apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB903888A (en) * 1960-03-22 1962-08-22 Autocue Great Britain Ltd Improvements in or relating to prompting or cueing equipment
GB1578054A (en) * 1978-05-26 1980-10-29 Autocue Holdings Ltd Transmitter unit of a display system
WO1994023405A1 (en) * 1993-04-02 1994-10-13 Pinewood Associates Limited Information display apparatus
US6272461B1 (en) * 1999-03-22 2001-08-07 Siemens Information And Communication Networks, Inc. Method and apparatus for an enhanced presentation aid

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2049190B (en) * 1979-04-27 1983-05-25 Friedman E Voice fluency monitor
GB2102171B (en) * 1981-06-24 1984-10-31 John Graham Parkhouse Speech aiding apparatus and method
US5015179A (en) * 1986-07-29 1991-05-14 Resnick Joseph A Speech monitor
EP0360909B1 (en) * 1988-09-30 1994-12-14 Siemens Audiologische Technik GmbH Speech practising apparatus
GB9322112D0 (en) * 1993-10-27 1993-12-15 El Houssaini Talal A Language analysis instrument
JPH07334075A (en) * 1994-06-03 1995-12-22 Hitachi Ltd Presentation supporting device
US5647834A (en) * 1995-06-30 1997-07-15 Ron; Samuel Speech-based biofeedback method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB903888A (en) * 1960-03-22 1962-08-22 Autocue Great Britain Ltd Improvements in or relating to prompting or cueing equipment
GB1578054A (en) * 1978-05-26 1980-10-29 Autocue Holdings Ltd Transmitter unit of a display system
WO1994023405A1 (en) * 1993-04-02 1994-10-13 Pinewood Associates Limited Information display apparatus
US6272461B1 (en) * 1999-03-22 2001-08-07 Siemens Information And Communication Networks, Inc. Method and apparatus for an enhanced presentation aid

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109639935A (en) * 2019-01-25 2019-04-16 合肥学院 The automatic word extractor system and method for video record
CN109639935B (en) * 2019-01-25 2020-10-13 合肥学院 Video recording automatic prompting method and computer readable storage medium
DE102020102468B3 (en) 2020-01-31 2021-08-05 Robidia GmbH Method for controlling a display device and display device for dynamic display of a predefined text
WO2021151574A1 (en) 2020-01-31 2021-08-05 Robidia GmbH Method for controlling a teleprompter and teleprompter for the dynamic display of a predefined text

Also Published As

Publication number Publication date
GB2345183B (en) 2003-11-05
GB0320871D0 (en) 2003-10-08
GB2345183A (en) 2000-06-28
GB2389220B (en) 2004-02-25
GB9828545D0 (en) 1999-02-17

Similar Documents

Publication Publication Date Title
US6358054B1 (en) Method and apparatus for teaching prosodic features of speech
JP4972645B2 (en) System and method for synchronizing sound and manually transcribed text
KR101826714B1 (en) Foreign language learning system and foreign language learning method
US20020086269A1 (en) Spoken language teaching system based on language unit segmentation
JP3248981B2 (en) calculator
EP1028410A1 (en) Speech recognition enrolment system
JP2002503353A (en) Reading aloud and pronunciation guidance device
GB2389220A (en) An autocue
KR101877559B1 (en) Method for allowing user self-studying language by using mobile terminal, mobile terminal for executing the said method and record medium for storing application executing the said method
JPH08286693A (en) Information processing device
JP6347938B2 (en) Utterance key word extraction device, key word extraction system using the device, method and program thereof
JP3588596B2 (en) Karaoke device with singing special training function
Möller et al. Evaluating the speech output component of a smart-home system
US10896689B2 (en) Voice tonal control system to change perceived cognitive state
CN113257246B (en) Prompting method, device, equipment, system and storage medium
US20020082843A1 (en) Method and system for automatic action control during speech deliveries
JPH10161518A (en) Terminal device and host device used for vocal practice training equipment and vocal practice training system
JP2007017733A (en) Input apparatus, input system, input method, input processing program and program recording medium
KR102631621B1 (en) Method and device for processing voice information
CN111273879A (en) Large-screen display method and device for user interactive display
KR20020087709A (en) A language training system
JP5092311B2 (en) Voice evaluation device
JP2002268683A (en) Method and device for information processing
JPS60195584A (en) Enunciation training apparatus
WO2002050798A2 (en) Spoken language teaching system based on language unit segmentation

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20061223