US20170301259A9 - Language delay treatment system and control method for the same - Google Patents

Language delay treatment system and control method for the same

Info

Publication number
US20170301259A9
Authority
US
United States
Prior art keywords
turn
user
voice
information
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/047,177
Other versions
US20150064666A1 (en)
US9875668B2 (en)
Inventor
June Hwa Song
In Seok Hwang
Chung Kuk Yoo
Chan You Hwang
Young Ki Lee
John Dong Jun Kim
Dong Sun Jennifer Yim
Chul Hong Min
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Korea Advanced Institute of Science and Technology (KAIST)
Original Assignee
Korea Advanced Institute of Science and Technology (KAIST)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Advanced Institute of Science and Technology (KAIST)
Assigned to KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY. Assignment of assignors' interest (see document for details). Assignors: HWANG, CHAN YOU; HWANG, IN SEOK; KIM, JOHN DONG JUN; LEE, YOUNG KI; MIN, CHUL HONG; SONG, JUNE HWA; YIM, DONG SUN JENNIFER; YOO, CHUNG KUK
Publication of US20150064666A1
Publication of US20170301259A9
Application granted
Publication of US9875668B2
Legal status: Active
Expiration: Adjusted

Classifications

    • G09B 19/04 — Teaching not covered by other main groups of this subclass; Speaking
    • G09B 5/04 — Electrically-operated educational appliances with audible presentation of the material to be studied
    • G09B 5/14 — Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations, with provision for individual teacher-student communication
    • G10L 25/78 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00–G10L21/00; Detection of presence or absence of voice signals


Abstract

The present disclosure relates to a control terminal, comprising: a data communication unit for receiving a first user voice by data communication with a first audio device and receiving a second user voice by data communication with a second audio device; a turn information generating unit for generating turn information, which is voice unit information, by using the first and second user voices; and a metalanguage processing unit for determining a conversation pattern of the first and second users by using the turn information, and outputting a reminder message corresponding to a reminder event to the first user when the conversation pattern corresponds to a preset reminder event occurrence condition.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority to Korean Patent Application No. 10-2013-0106393, filed on Sep. 5, 2013, with the Korean Intellectual Property Office (KIPO), the disclosure of which is incorporated herein by reference in its entirety.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present disclosure relates to a language delay treatment system and a control method for the same, and more particularly, to a language delay treatment system configured to analyze a conversation pattern between a parent and a child and correct a conversation habit of the parent, and a control method for the same.
  • 2. Description of the Related Art
  • Language delay means a state in which verbal development of an infant is relatively delayed in comparison to physical development.
  • Unless suitable treatment is provided in a timely manner, language delay may act as a latent risk factor throughout the entire life of the affected infant. For example, learning disabilities or social skill deficiencies in adolescence, and even economic hardship or long-term unemployment in adulthood, have been reported.
  • Through studies spanning more than ten years, speech pathologists have shown that treatment is substantially more effective for infants suffering from language delay when formal treatment in a dedicated therapeutic environment is accompanied by the active participation and effort of a parent across everyday conversation situations.
  • However, in everyday conversation between a parent and a child, the participation of the parent is more effective when the conversation habits that the parent has formed over a lifetime are corrected to suit the purpose of the treatment. In speech pathology, correcting the conversation habits of a parent in this way is called 'parent training'.
  • Intentionally changing a person's natural conversation habits demands sustained effort over a long period and constant attention at every moment of daily life. This is far from simple for a parent who has not studied specialized language treatment.
  • Therefore, in order to correct a parent's conversation habits to suit the treatment of a child's language delay, a system is needed that monitors everyday conversations between the parent and the child and, based on the monitoring results, guides the parent to rapidly correct those habits.
  • SUMMARY OF THE INVENTION
  • The present disclosure is directed to providing a language delay treatment system configured to analyze a conversation pattern between a parent and a child and to guide the parent in correcting a conversation habit, and a control method for the same.
  • With the above configuration, the language delay treatment system and the control method for the same according to the present disclosure may actively extend the effects of language treatment for an infant suffering from language delay throughout daily life.
  • In addition, by monitoring a conversation pattern between the parent and the child, a conversation habit which should be corrected may be rapidly recognized.
  • Moreover, by sending a correction guide message for the conversation habit which should be corrected, it is possible to help the parent train efficiently against the language delay.
  • Further, it is possible to provide motivation for preventing a language delay problem or treating it early.
  • According to an aspect of the present disclosure, there is provided a control terminal, comprising: a data communication unit for receiving a first user voice by data communication with a first audio device and receiving a second user voice by data communication with a second audio device; a turn information generating unit for generating turn information, which is voice unit information, by using the first and second user voices; and a metalanguage processing unit for determining a conversation pattern of the first and second users by using the turn information, and outputting a reminder message corresponding to a reminder event to the first user when the conversation pattern corresponds to a preset reminder event occurrence condition.
  • The control terminal may further comprise a preprocessing unit for selectively processing the first and second user voices with respect to a voice range.
  • The turn information in the control terminal may include at least one of speaker identification information, time, accent, loudness and speed of a unit voice.
  • The turn information generating unit in the control terminal may determine speaker identification information of the turn information according to a ratio of the first user voice and the second user voice.
  • The turn information generating unit in the control terminal may generate the turn information when the first user voice or the second user voice is equal to or greater than a preset loudness.
  • The reminder event occurrence condition may include at least one of a case in which only a turn of the first user occurs during a preset time, a case in which only a turn of the second user occurs during a preset time, a case in which the turn of the first user occurs over a preset number before the turn of the second user ends, a case in which the turn of the first user continues over a preset time, and a case in which the turn of the first user is equal to or greater than a preset speed.
  • According to still another aspect of the present disclosure, there is provided a control method for a language delay treatment system, which includes a first audio device for receiving a voice of a first user, a second audio device for receiving a voice of a second user, and a control terminal, the control method comprising: receiving, by the control terminal, the first user voice by data communication with the first audio device; receiving, by the control terminal, the second user voice by data communication with the second audio device; generating, by the control terminal, turn information which is voice unit information by using the first and second user voices; determining, by the control terminal, a conversation pattern of the first and second users by using the turn information; and outputting, by the control terminal, a reminder message corresponding to a reminder event to the first user when the conversation pattern corresponds to a preset reminder event occurrence condition.
  • The control method for a language delay treatment system may further comprise: preprocessing for selectively processing the first and second user voices with respect to a voice range.
  • The turn information may include at least one of speaker identification information, time, accent, loudness and speed of a unit voice.
  • The generating of turn information may determine speaker identification information of the turn information according to a ratio of the first user voice and the second user voice.
  • The generating of turn information generates the turn information when the first user voice or the second user voice is equal to or greater than a preset loudness.
  • The reminder event occurrence condition includes at least one of a case in which only a turn of the first user occurs during a preset time, a case in which only a turn of the second user occurs during a preset time, a case in which the turn of the first user occurs over a preset number before the turn of the second user ends, a case in which the turn of the first user continues over a preset time, and a case in which the turn of the first user is equal to or greater than a preset speed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other features and advantages will become more apparent to those of ordinary skill in the art by describing in detail exemplary embodiments with reference to the attached drawings, in which:
  • FIG. 1 shows a service environment of a language delay treatment system according to an embodiment of the present disclosure;
  • FIGS. 2 to 4 are diagrams showing the language delay treatment system according to an embodiment of the present disclosure;
  • FIG. 5 is a flowchart illustrating a control method for the language delay treatment system according to an embodiment of the present disclosure;
  • FIG. 6 illustrates the relations among a user voice stream, turns, and the turn information;
  • FIG. 7 is a diagram showing a reminder event occurrence condition of turn information according to an embodiment of the present disclosure;
  • FIG. 8 is a diagram showing a reminder event occurrence condition according to an embodiment of the present disclosure;
  • FIG. 9 is a diagram showing turn information of the first reminder event occurrence condition according to an embodiment of the present disclosure, in which only a turn of the parent occurs during a preset time;
  • FIG. 10 is a diagram showing turn information of the second reminder event occurrence condition according to an embodiment of the present disclosure, in which only a turn of the child occurs during a preset time;
  • FIG. 11 is a diagram showing turn information of the fourth reminder event occurrence condition according to an embodiment of the present disclosure, in which the turn of the parent continues over a preset time; and
  • FIG. 12 is a diagram showing a reminder message according to an embodiment of the present disclosure.
  • In the following description, the same or similar elements are labeled with the same or similar reference numbers.
  • DETAILED DESCRIPTION
  • The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes”, “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. In addition, a term such as a “unit”, a “module”, a “block” or the like, when used in the specification, represents a unit that processes at least one function or operation, and the unit or the like may be implemented by hardware or software or a combination of hardware and software.
  • Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
  • Preferred embodiments will now be described more fully hereinafter with reference to the accompanying drawings. However, they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
  • FIG. 1 shows a service environment of a language delay treatment system according to an embodiment of the present disclosure.
  • The language delay treatment system provides a conversation habit correction guide service to a parent in real time.
  • The language delay treatment system receives a voice of a user (a parent or a child) through an audio device such as a Bluetooth headset or a microphone and sends the voice to a control terminal such as a smart phone. In addition, the control terminal operates the conversation habit correction guide service as a background service to continuously monitor conversations between the parent and the child without intentional intervention of the parent.
  • In addition, the language delay treatment system analyzes a time-based pattern of the conversations between the parent and the child in real time, and if a pattern is found that does not accord with the patterns recommended by a speech therapist, the system automatically reminds the parent of it through voice guidance or the like.
  • FIGS. 2 to 4 are diagrams showing the language delay treatment system according to an embodiment of the present disclosure.
  • First, FIG. 2 is a diagram showing a language delay treatment system according to a first embodiment of the present disclosure, and the language delay treatment system includes a control terminal 100, a first audio device 300 and a second audio device 500.
  • The first audio device 300 is configured to receive a voice of the parent, and for example, the first audio device 300 may be a Bluetooth headset. The parent wears the first audio device 300 and inputs a voice thereto.
  • The second audio device 500 is configured to receive a voice of the child, and for example, the second audio device 500 may be a Bluetooth microphone. The child wears the second audio device 500 and inputs a voice thereto.
  • The control terminal 100 includes a data communication unit 110, a turn information generating unit 130 and a metalanguage processing unit 150, and for example, the control terminal 100 may be a mobile terminal such as a smart phone, a tablet or a notebook.
  • The data communication unit 110 is configured to receive a parent voice by data communication with the first audio device 300, and to receive a child voice by data communication with the second audio device 500.
  • Even though the figures depict the data communication unit 110 and the first and second audio devices 300, 500 communicating via Bluetooth, the present disclosure is not limited thereto and may receive a user voice by means of various kinds of data communication such as IR communication, NFC, wired communication or the like.
  • The turn information generating unit 130 is configured to generate turn information, which is voice unit information, by using the input parent and child voices.
  • First, the turn represents a vocalization unit extracted from a successive voice stream of the parent and the child. In addition, the turn information includes speaker identification information, start time, duration time, voice accent, voice loudness, voice speed or the like of each turn.
  • Relations of the user voice stream and turn, and the turn information will be described later in detail with reference to FIG. 6.
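  • As one hedged illustration, the turn information described above might be represented as a simple record. This is a minimal sketch; the field names and types are assumptions, since the disclosure does not fix a concrete schema.

```python
from dataclasses import dataclass

@dataclass
class TurnInfo:
    """One turn extracted from the parent/child voice streams (illustrative)."""
    speaker: str       # speaker identification: "parent" or "child"
    start_time: float  # start of the turn, seconds from session start
    duration: float    # duration of the turn, in seconds
    accent: float      # voice accent, e.g. a pitch-based prosody score (assumed)
    loudness: float    # mean loudness of the turn, e.g. normalized RMS (assumed)
    speed: float       # voice speed, e.g. estimated syllables per second

    @property
    def end_time(self) -> float:
        return self.start_time + self.duration
```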
  • In addition, the turn information generating unit 130 may determine the speaker identification information of the turn information by comparing the loudness of the input parent voice with the loudness of the input child voice, and by evaluating each voice's loudness relative to the surrounding noise loudness.
  • For example, if a ratio of parent voice loudness and child voice loudness in one turn is 8:2, the turn information generating unit 130 may determine that the corresponding turn belongs to a parent voice, namely, the speaker identification information of the turn is the parent.
  • In addition, the turn information generating unit 130 may extract acoustic meta information such as voice accent, voice loudness and voice speed by applying various acoustic signal processing logics.
  • Moreover, the turn information generating unit 130 may be configured to generate the corresponding turn information only when the parent voice or the child voice is equal to or greater than a preset loudness. This prevents turn information from being generated by surrounding noise.
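  • A minimal sketch combining the two decisions just described: rejecting turns below a preset loudness, and attributing a turn by the parent-to-child loudness ratio. The threshold values and the helper name attribute_turn are illustrative assumptions, not values fixed by the disclosure.

```python
MIN_TURN_LOUDNESS = 0.10  # assumed preset loudness gate (normalized RMS)
DOMINANCE_RATIO = 0.6     # assumed share above which one channel "owns" the turn

def attribute_turn(parent_loudness: float, child_loudness: float):
    """Return 'parent', 'child', or None if the turn is rejected as noise."""
    total = parent_loudness + child_loudness
    if total == 0 or max(parent_loudness, child_loudness) < MIN_TURN_LOUDNESS:
        return None  # below the preset loudness: likely surrounding noise
    if parent_loudness / total >= DOMINANCE_RATIO:
        return "parent"  # e.g. the 8:2 ratio above lands here
    if child_loudness / total >= DOMINANCE_RATIO:
        return "child"
    return None  # no clearly dominant speaker

# attribute_turn(0.8, 0.2) -> 'parent'
```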
  • The metalanguage processing unit 150 analyzes a conversation pattern between the parent and the child by using the turn information.
  • If the conversation pattern between the parent and the child corresponds to a preset reminder event occurrence condition, the metalanguage processing unit 150 outputs a reminder message corresponding to the reminder event to the parent.
  • In the present disclosure, the reminder event occurrence condition may include five cases as follows.
  • (R1) a case in which only a turn of the parent occurs during a preset time
  • (R2) a case in which only a turn of the child occurs during a preset time
  • (R3) a case in which the turn of the parent occurs over a preset number before the turn of the child ends
  • (R4) a case in which the turn of the parent continues over a preset time
  • (R5) a case in which the turn of the parent is equal to or greater than a preset speed
  • The reminder event occurrence condition will be described later in detail with reference to FIGS. 7 and 8.
  • In addition, the metalanguage processing unit 150 may output the reminder message through the control terminal 100, and may send the reminder message to the first audio device 300 so that the first audio device 300 outputs the reminder message to the parent.
  • Even though it is depicted that the reminder message is output to the parent as a voice, the present disclosure may also output the reminder message on a screen by using a display of the control terminal 100 or the first audio device 300.
  • The reminder message will be described in detail later with reference to FIG. 12.
  • The control terminal 100 may further include a preprocessing unit (not shown) for selectively processing the voices of the parent and the child with respect to a voice range.
  • The sound input from the first audio device 300 and the second audio device 500 may include not only a user voice but also various surrounding noise. Therefore, a preprocessing technique for enhancing selectivity for a human voice range from the input sound is required.
  • The preprocessing unit (not shown) may perform the preprocessing work by using a band-pass filter tuned to the human voice spectrum band or a voice activity detection (VAD) technique.
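  • A hedged preprocessing sketch, assuming SciPy is available: a Butterworth band-pass filter tuned to a typical human-voice band. The band edges (300–3400 Hz) and filter order are illustrative assumptions rather than values specified by the disclosure.

```python
import numpy as np
from scipy.signal import butter, lfilter

def voice_bandpass(signal: np.ndarray, fs: int,
                   low_hz: float = 300.0, high_hz: float = 3400.0,
                   order: int = 4) -> np.ndarray:
    """Attenuate energy outside the assumed human-voice band."""
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return lfilter(b, a, signal)
```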
  • In addition, as shown in the figures, the data communication unit 110 and the preprocessing unit (not shown) may operate on an operating system (OS) of the control terminal 100.
  • FIG. 3 is a diagram showing a language delay treatment system according to a second embodiment of the present disclosure, and FIG. 4 is a diagram showing a language delay treatment system according to a third embodiment of the present disclosure.
  • The language delay treatment system according to the second embodiment of the present disclosure further includes a second mobile terminal 400. The second mobile terminal 400 receives a child voice from the second audio device 500 and transmits the child voice to the control terminal 100.
  • The language delay treatment system according to the third embodiment of the present disclosure further includes a first mobile terminal 200 and a second mobile terminal 400. The first mobile terminal 200 receives a parent voice from the first audio device 300 and transmits the parent voice to the control terminal 100, and the second mobile terminal 400 receives a child voice from the second audio device 500 and transmits the child voice to the control terminal 100.
  • In addition, the first mobile terminal 200 and the second mobile terminal 400 may preprocess a voice of the parent or the child and transmit the preprocessed voice to the control terminal 100. By doing so, the workload of the control terminal 100 may be reduced.
  • FIG. 5 is a flowchart illustrating a control method for the language delay treatment system according to an embodiment of the present disclosure.
  • As shown in FIG. 5, the control method for the language delay treatment system includes receiving, by the first audio device 300, a parent voice (S100), receiving, by the second audio device 500, a child voice (S200), receiving, by the control terminal 100, the parent voice by data communication with the first audio device 300 (S300), receiving, by the control terminal 100, the child voice by data communication with the second audio device 500 (S400), generating, by the control terminal 100, turn information which is voice unit information by using the parent and child voices (S500), analyzing, by the control terminal 100, a conversation pattern between the parent and the child by using the turn information (S600), and outputting, by the control terminal 100, a reminder message corresponding to a reminder event to the parent when the conversation pattern corresponds to a preset reminder event occurrence condition (S700).
  • First, the first audio device 300 receives a parent voice (S100), and the second audio device 500 receives a child voice (S200). As described above, the first audio device 300 and the second audio device 500 are configured with a Bluetooth headset or a Bluetooth microphone to receive a voice of a user.
  • In addition, the data communication unit 110 of the control terminal 100 receives a parent voice by data communication with the first audio device 300 (S300), and receives a child voice by data communication with the second audio device 500 (S400).
  • Even though it is described in the specification that the data communication unit 110 and the first and second audio devices 300, 500 communicate via Bluetooth, the present disclosure is not limited thereto but may receive a user voice by means of various kinds of data communication such as IR communication, NFC, wired communication or the like.
  • In addition, the turn information generating unit 130 of the control terminal 100 generates turn information, which is voice unit information, by using the parent voice and the child voice (S500).
  • As described above, the turn represents a vocalization unit extracted from a successive voice stream of the parent and the child. In addition, the turn information includes speaker identification information, start time, duration time, voice accent, voice loudness, voice speed or the like of each turn.
  • Moreover, the turn information generating unit 130 may determine speaker identification information of the turn information by comparing loudness of the input parent voice with loudness of the input child voice and finding relative voice loudness in comparison to surrounding noise loudness.
  • In addition, the turn information generating unit 130 may extract acoustic meta information such as voice accent, voice loudness and voice speed by applying various acoustic signal processing logics.
  • Moreover, the turn information generating unit 130 may be configured to generate the corresponding turn information only when the parent voice or the child voice is equal to or greater than a preset loudness. This prevents turn information from being generated by surrounding noise.
  • In addition, the metalanguage processing unit 150 of the control terminal 100 analyzes a conversation pattern between the parent and the child by using the turn information (S600). Moreover, if the conversation pattern corresponds to a preset reminder event occurrence condition, the metalanguage processing unit 150 outputs a reminder message corresponding to the reminder event to the parent (S700).
  • The metalanguage processing unit 150 may output the reminder message through the control terminal 100, and may send the reminder message to the first audio device 300 so that the first audio device 300 outputs the reminder message to the parent.
  • In addition, even though it is described in the specification that the reminder message is output to the parent as a voice, the present disclosure may also output the reminder message on a screen by using a display of the control terminal 100 or the first audio device 300.
  • Moreover, the control method for the language delay treatment system may further include selectively processing the voices of the parent and the child with respect to a voice range, by means of a preprocessing unit (not shown) of the control terminal 100.
  • As described above, the sound input from the first audio device 300 and the second audio device 500 may include not only a user voice but also various surrounding noise. Therefore, a preprocessing technique for enhancing selectivity for a human voice range from the input sound is required.
  • The preprocessing unit (not shown) may perform the preprocessing work by using a band-pass filter tuned to the human voice spectrum band or a voice activity detection (VAD) technique.
  • FIG. 6 is a diagram showing user voice information and turn information according to an embodiment of the present disclosure.
  • FIG. 6 shows a voice stream of the parent and turn information of the corresponding voice stream.
  • First, the turn is obtained by extracting a vocalization region from a voice stream as a unit, and in FIG. 6, a turn is generated by extracting a vocalization region from the voice stream of the parent.
  • In addition, the turn information is voice stream information of the generated turn, and the turn information includes the speaker identification information, start time, duration time, voice accent, voice loudness, voice speed or the like of the voice stream to which the corresponding turn belongs.
  • Therefore, the turn information generating unit 130 extracts a turn of a voice stream by using the corresponding voice stream and generates turn information which is voice stream information of the corresponding turn.
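  • One plausible way to extract turns as FIG. 6 describes, sketched under the assumption of simple energy-based segmentation: windows whose RMS energy exceeds a threshold form vocalization regions, and nearby regions are merged so short intra-turn pauses do not split one turn in two. All numeric values are assumed examples, not values fixed by the disclosure.

```python
import numpy as np

def extract_turns(stream: np.ndarray, fs: int,
                  win_sec: float = 0.05, threshold: float = 0.02,
                  min_gap_sec: float = 0.5):
    """Return (start_sec, end_sec) vocalization regions of `stream`."""
    win = max(1, int(win_sec * fs))
    n_win = len(stream) // win
    frames = stream[: n_win * win].reshape(n_win, win)
    rms = np.sqrt((frames ** 2).mean(axis=1))  # per-window RMS energy
    active = rms > threshold
    turns, start = [], None
    for i, a in enumerate(active):
        t = i * win_sec
        if a and start is None:
            start = t                      # a vocalization region begins
        elif not a and start is not None:
            # Merge with the previous region if the silence was short,
            # so brief pauses do not split one turn in two.
            if turns and start - turns[-1][1] < min_gap_sec:
                start = turns.pop()[0]
            turns.append((start, t))
            start = None
    if start is not None:                  # stream ended mid-vocalization
        turns.append((start, n_win * win_sec))
    return turns
```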
  • FIG. 7 is a diagram showing a reminder event occurrence condition of turn information according to an embodiment of the present disclosure, and FIG. 8 is a diagram showing a reminder event occurrence condition according to an embodiment of the present disclosure.
  • As described above, the reminder event occurrence condition may include five cases.
  • First, there is a first reminder event (R1) occurrence condition in which only a turn of the parent occurs during a preset time. This condition means that the parent talks alone regardless of whether the child answers.
  • The first reminder event (R1) occurrence condition has the formula “R1 is triggered if a parent's turns repeat N_dominance times in which pauses between adjacent turns are shorter than T_wait AND no child's turn appears during these parent turns.” Here, N_dominance represents a preset repetition number of parent turns, and T_wait represents a preset interval time between parent turns.
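  • A hedged sketch of the R1 check under the formula above, over a list of (speaker, start_sec, end_sec) turns sorted by start time. N_DOMINANCE and T_WAIT are assumed example values.

```python
N_DOMINANCE, T_WAIT = 3, 5.0  # assumed preset values

def r1_triggered(turns) -> bool:
    """True if parent turns repeat N_DOMINANCE times with pauses < T_WAIT
    and no child turn in between."""
    run, prev_end = 0, None
    for speaker, start, end in turns:
        if speaker != "parent":
            run, prev_end = 0, None        # a child's turn breaks the run
            continue
        if prev_end is None or start - prev_end < T_WAIT:
            run += 1
        else:
            run = 1                        # pause too long: restart the count
        prev_end = end
        if run >= N_DOMINANCE:
            return True
    return False
```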
  • In addition, there is a second reminder event (R2) occurrence condition in which only a turn of the child occurs during a preset time. This condition means that the parent does not respond to the child's talk.
  • The second reminder event (R2) occurrence condition has the formula “R2 is triggered if the following condition repeats N_grace2 times: given a child's turn, neither a parent's nor a child's turn follows within time duration T_neglect.” Here, N_grace2 represents a preset repetition number of neglected child turns, and T_neglect represents a preset interval time after a child's turn.
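  • A hedged sketch of the R2 check: a child's turn that no turn follows within T_neglect, repeated N_grace2 times. The `now` argument lets the caller count trailing silence after the last turn; thresholds are assumed examples.

```python
N_GRACE2, T_NEGLECT = 2, 4.0  # assumed preset values

def r2_triggered(turns, now: float) -> bool:
    """True if N_GRACE2 child turns each go unanswered for T_NEGLECT seconds."""
    neglected = 0
    for i, (speaker, start, end) in enumerate(turns):
        if speaker != "child":
            continue
        nxt = turns[i + 1] if i + 1 < len(turns) else None
        if nxt is None:
            if now - end > T_NEGLECT:      # trailing silence counts too
                neglected += 1
        elif nxt[1] - end > T_NEGLECT:     # no turn within the grace window
            neglected += 1
        if neglected >= N_GRACE2:
            return True
    return False
```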
  • In addition, there is a third reminder event (R3) occurrence condition in which the turn of the parent begins, more than a preset number of times, before the turn of the child ends. This condition means that the parent interrupts the child's talk.
  • The third reminder event (R3) occurrence condition has a formula “R3 is triggered if a parent's turn begins before the child's turn ends for Ngrace3 times.” Here, Ngrace3 represents a preset number of parent turns that begin before the child's turn ends.
  • In addition, there is a fourth reminder event (R4) occurrence condition in which the turn of the parent continues over a preset time. This condition means that the parent talks in sentences too long for the child to understand.
  • The fourth reminder event (R4) occurrence condition has a formula “R4 is triggered if the duration of a parent's turn is longer than Tlong AND no child turn follows within Tresponse4 after the parent's turn ends.” Here, Tlong represents a preset duration threshold for the parent's turn, and Tresponse4 represents a preset time within which the child's turn is expected to follow.
  • Finally, there is a fifth reminder event (R5) occurrence condition in which the speed of the parent's turn is equal to or greater than a preset speed. This condition means that the parent talks too fast for the child to understand.
  • The fifth reminder event (R5) occurrence condition has a formula “R5 is triggered if the estimated syllable rate of a parent's turn is higher than Rfast AND no child turn follows within Tresponse5 after the parent's turn ends.” Here, Rfast represents a preset voice speed, and Tresponse5 represents a preset time within which the child's turn is expected to follow.
  • Therefore, the metalanguage processing unit 150 determines, by using the turn information, whether the conversation pattern between the parent and the child corresponds to any of the reminder event occurrence conditions mentioned above.
  • In addition, even though five reminder event occurrence conditions have been described in the specification, the present disclosure may also include various other reminder event occurrence conditions which may be applied to treat language delay of a child. A simplified sketch of how the five conditions above might be evaluated is given below.
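  • The conditions are defined only in prose above; the following minimal Python sketch, continuing the hypothetical TurnInfo representation from the FIG. 6 discussion, shows how a metalanguage processing step could evaluate them over a chronologically sorted turn history. The function name check_reminders and every threshold default (e.g., n_dominance=4, t_wait=3.0) are illustrative assumptions, not values taken from the disclosure.

```python
from typing import List, Optional

def _end(t):  # helper: turn end time (TurnInfo from the earlier sketch)
    return t.start_time + t.duration

def check_reminders(turns: List["TurnInfo"],
                    n_dominance=4, t_wait=3.0,     # R1 thresholds
                    n_grace2=2, t_neglect=5.0,     # R2 thresholds
                    n_grace3=2,                    # R3 threshold
                    t_long=10.0, t_response4=3.0,  # R4 thresholds
                    r_fast=5.0, t_response5=3.0    # R5 thresholds
                    ) -> Optional[str]:
    """Return the identifier of the first reminder event that fires,
    or None.  All thresholds are illustrative assumptions."""

    def child_follows(i, within):
        # True if the next turn is the child's and starts within `within` seconds.
        return (i + 1 < len(turns)
                and turns[i + 1].speaker_id == "child"
                and turns[i + 1].start_time - _end(turns[i]) <= within)

    # R1: n_dominance parent turns in a row, pauses shorter than t_wait,
    # with no child turn appearing in between.
    run = 0
    for i, t in enumerate(turns):
        if t.speaker_id != "parent":
            run = 0
            continue
        if run and t.start_time - _end(turns[i - 1]) >= t_wait:
            run = 0
        run += 1
        if run >= n_dominance:
            return "R1"

    # R2: n_grace2 child turns after which no turn at all follows within t_neglect.
    neglected = sum(1 for i, t in enumerate(turns)
                    if t.speaker_id == "child"
                    and not (i + 1 < len(turns)
                             and turns[i + 1].start_time - _end(t) <= t_neglect))
    if neglected >= n_grace2:
        return "R2"

    # R3: a parent turn begins before the child's turn ends, n_grace3 times.
    interruptions = sum(1 for prev, cur in zip(turns, turns[1:])
                        if prev.speaker_id == "child"
                        and cur.speaker_id == "parent"
                        and cur.start_time < _end(prev))
    if interruptions >= n_grace3:
        return "R3"

    # R4 and R5: an overly long or overly fast parent turn with no child response.
    for i, t in enumerate(turns):
        if t.speaker_id != "parent":
            continue
        if t.duration > t_long and not child_follows(i, t_response4):
            return "R4"
        if t.speed > r_fast and not child_follows(i, t_response5):
            return "R5"

    return None
```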
  • FIGS. 9 to 11 show cases in which turn information of a parent and a child corresponds to the above reminder event occurrence conditions and therefore corresponding reminder events occur.
  • FIG. 9 is a diagram showing turn information of the first reminder event occurrence condition according to an embodiment of the present disclosure, in which only a turn of the parent occurs during a preset time.
  • FIG. 10 is a diagram showing turn information of the second reminder event occurrence condition according to an embodiment of the present disclosure, in which only a turn of the child occurs during a preset time.
  • FIG. 11 is a diagram showing turn information of the fourth reminder event occurrence condition according to an embodiment of the present disclosure, in which the turn of the parent continues over a preset time.
  • FIG. 12 is a diagram showing a reminder message according to an embodiment of the present disclosure.
  • As shown in FIG. 12, the first reminder (R1) may have a message “Please wait for your child to talk back.”, the second reminder (R2) may have a message “Please respond to your child.”, the third reminder (R3) may have a message “Please do not interrupt your child.”, the fourth reminder (R4) may have a message “Please say it short and simple.”, and the fifth reminder (R5) may have a message “Please talk more slowly.”
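  • As a trivial illustration, these messages map directly onto the event identifiers. The snippet below continues the hypothetical check_reminders sketch above; turn_history is an assumed, pre-built list of TurnInfo records.

```python
# Reminder messages from FIG. 12, keyed by event identifier.
REMINDER_MESSAGES = {
    "R1": "Please wait for your child to talk back.",
    "R2": "Please respond to your child.",
    "R3": "Please do not interrupt your child.",
    "R4": "Please say it short and simple.",
    "R5": "Please talk more slowly.",
}

event = check_reminders(turn_history)  # e.g., "R4" or None
if event is not None:
    print(REMINDER_MESSAGES[event])    # or route to the first audio device
```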
  • Therefore, if a reminder event occurs, the metalanguage processing unit 150 may output the corresponding reminder message through the control terminal 100, or may send the reminder message to the first audio device 300 so that the first audio device 300 outputs it to the parent.
  • In addition, the metalanguage processing unit 150 may output the reminder message on a screen through a display of the control terminal 100 or the first audio device 300.
  • While the present disclosure has been described with reference to the embodiments illustrated in the figures, the embodiments are merely examples, and it will be understood by those skilled in the art that various changes in form and other equivalent embodiments can be made. Therefore, the technical scope of the disclosure is defined by the technical idea of the appended claims. The drawings and the foregoing description give examples of the present invention; the scope of the present invention, however, is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of the invention is at least as broad as given by the following claims.

Claims (20)

What is claimed is:
1. A control terminal comprising:
a data communication unit for receiving a first user voice by data communication with a first audio device and receiving a second user voice by data communication with a second audio device;
a turn information generating unit for generating turn information, which is voice unit information, by using the first and second user voices; and
a metalanguage processing unit for determining a conversation pattern of the first and second users by using the turn information, and outputting a reminder message corresponding to a reminder event to the first user when the conversation pattern corresponds to a preset reminder event occurrence condition.
2. The control terminal of claim 1, further comprising a preprocessing unit for optionally processing the first and second user voices with respect to a voice range.
3. The control terminal of claim 1, wherein the turn information includes at least one of speaker identification information, time, accent, loudness and speed of a unit voice.
4. The control terminal of claim 3, wherein the turn information generating unit determines speaker identification information of the turn information according to a ratio of the first user voice and the second user voice.
5. The control terminal of claim 1, wherein the turn information generating unit generates the turn information when the first user voice or the second user voice is equal to or greater than a preset loudness.
6. The control terminal of claim 1, wherein the reminder event occurrence condition includes at least one of a case in which only a turn of the first user occurs during a preset time, a case in which only a turn of the second user occurs during a preset time, a case in which the turn of the first user occurs over a preset number before the turn of the second user ends, a case in which the turn of the first user continues over a preset time, and a case in which the turn of the first user is equal to or greater than a preset speed.
7. The control terminal of claim 2, wherein the turn information includes at least one of speaker identification information, time, accent, loudness and speed of a unit voice.
8. The control terminal of claim 7, wherein the turn information generating unit determines speaker identification information of the turn information according to a ratio of the first user voice and the second user voice.
9. The control terminal of claim 2, wherein the turn information generating unit generates the turn information when the first user voice or the second user voice is equal to or greater than a preset loudness.
10. The control terminal of claim 2, wherein the reminder event occurrence condition includes at least one of a case in which only a turn of the first user occurs during a preset time, a case in which only a turn of the second user occurs during a preset time, a case in which the turn of the first user occurs over a preset number before the turn of the second user ends, a case in which the turn of the first user continues over a preset time, and a case in which the turn of the first user is equal to or greater than a preset speed.
11. A control method for a language delay treatment system, which includes a first audio device for receiving a voice of a first user, a second audio device for receiving a voice of a second user, and a control terminal, the control method comprising:
receiving, by the control terminal, the first user voice by data communication with the first audio device;
receiving, by the control terminal, the second user voice by data communication with the second audio device;
generating, by the control terminal, turn information which is voice unit information by using the first and second user voices;
determining, by the control terminal, a conversation pattern of the first and second users by using the turn information; and
outputting, by the control terminal, a reminder message corresponding to a reminder event to the first user when the conversation pattern corresponds to a preset reminder event occurrence condition.
12. The control method for a language delay treatment system of claim 11, further comprising:
preprocessing for optionally processing the first and second user voices with respect to a voice range.
13. The control method for a language delay treatment system of claim 11, wherein the turn information includes at least one of speaker identification information, time, accent, loudness and speed of a unit voice.
14. The control method for a language delay treatment system of claim 13, wherein the generating of turn information determines speaker identification information of the turn information according to a ratio of the first user voice and the second user voice.
15. The control method for a language delay treatment system of claim 11, wherein the generating of turn information generates the turn information when the first user voice or the second user voice is equal to or greater than a preset loudness.
16. The control method for a language delay treatment system of claim 11, wherein the reminder event occurrence condition includes at least one of a case in which only a turn of the first user occurs during a preset time, a case in which only a turn of the second user occurs during a preset time, a case in which the turn of the first user occurs over a preset number before the turn of the second user ends, a case in which the turn of the first user continues over a preset time, and a case in which the turn of the first user is equal to or greater than a preset speed.
17. The control method for a language delay treatment system of claim 12, wherein the turn information includes at least one of speaker identification information, time, accent, loudness and speed of a unit voice.
18. The control method for a language delay treatment system of claim 17, wherein the generating of turn information determines speaker identification information of the turn information according to a ratio of the first user voice and the second user voice.
19. The control method for a language delay treatment system of claim 12, wherein the generating of turn information generates the turn information when the first user voice or the second user voice is equal to or greater than a preset loudness.
20. The control method for a language delay treatment system of claim 12, wherein the reminder event occurrence condition includes at least one of a case in which only a turn of the first user occurs during a preset time, a case in which only a turn of the second user occurs during a preset time, a case in which the turn of the first user occurs over a preset number before the turn of the second user ends, a case in which the turn of the first user continues over a preset time, and a case in which the turn of the first user is equal to or greater than a preset speed.
US14/047,177 2013-09-05 2013-10-07 Language delay treatment system and control method for the same Active 2034-02-25 US9875668B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR10-2013-0106393 2013-09-05
KR10-2013-0106395 2013-09-05
KR1020130106393A KR101478459B1 (en) 2013-09-05 2013-09-05 Language delay treatment system and control method for the same

Publications (3)

Publication Number Publication Date
US20150064666A1 US20150064666A1 (en) 2015-03-05
US20170301259A9 true US20170301259A9 (en) 2017-10-19
US9875668B2 US9875668B2 (en) 2018-01-23

Family

ID=52680407

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/047,177 Active 2034-02-25 US9875668B2 (en) 2013-09-05 2013-10-07 Language delay treatment system and control method for the same

Country Status (2)

Country Link
US (1) US9875668B2 (en)
KR (1) KR101478459B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148411A (en) * 2019-06-28 2019-08-20 百度在线网络技术(北京)有限公司 Voice prompting method, device and terminal

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107198880A (en) * 2017-05-26 2017-09-26 合肥充盈信息科技有限公司 A kind of poem conference games system
US10755717B2 (en) * 2018-05-10 2020-08-25 International Business Machines Corporation Providing reminders based on voice recognition
US11848019B2 (en) * 2021-06-16 2023-12-19 Hewlett-Packard Development Company, L.P. Private speech filterings

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4472833A (en) * 1981-06-24 1984-09-18 Turrell Ronald P Speech aiding by indicating speech rate is excessive
US7107539B2 (en) * 1998-12-18 2006-09-12 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US6628759B1 (en) * 1999-12-10 2003-09-30 Agere Systems, Inc. Alert signal during telephone conversation
KR100405061B1 (en) * 2000-03-10 2003-11-10 문창호 Apparatus for training language and Method for analyzing language thereof
US7257537B2 (en) * 2001-01-12 2007-08-14 International Business Machines Corporation Method and apparatus for performing dialog management in a computer conversational interface
US7739115B1 (en) * 2001-02-15 2010-06-15 West Corporation Script compliance and agent feedback
MXPA04010611A (en) * 2002-04-26 2004-12-13 Univ East Carolina Non-stuttering biofeedback method and apparatus using daf.
US6952592B2 (en) * 2002-10-15 2005-10-04 Motorola, Inc. Method and apparatus for limiting a transmission in a dispatch system
KR100580619B1 (en) * 2002-12-11 2006-05-16 삼성전자주식회사 Apparatus and method of managing dialog between user and agent
US8340972B2 (en) * 2003-06-27 2012-12-25 Motorola Mobility Llc Psychoacoustic method and system to impose a preferred talking rate through auditory feedback rate adjustment
US20050047394A1 (en) * 2003-08-28 2005-03-03 Jeff Hodson Automatic contact navigation system
US7818179B2 (en) * 2004-11-12 2010-10-19 International Business Machines Corporation Devices and methods providing automated assistance for verbal communication
US9300790B2 (en) * 2005-06-24 2016-03-29 Securus Technologies, Inc. Multi-party conversation analyzer and logger
US7529683B2 (en) * 2005-06-29 2009-05-05 Microsoft Corporation Principals and methods for balancing the timeliness of communications and information delivery with the expected cost of interruption via deferral policies
US20070055514A1 (en) * 2005-09-08 2007-03-08 Beattie Valerie L Intelligent tutoring feedback
JP4836290B2 (en) * 2007-03-20 2011-12-14 富士通株式会社 Speech recognition system, speech recognition program, and speech recognition method
US8195460B2 (en) * 2008-06-17 2012-06-05 Voicesense Ltd. Speaker characterization through speech analysis
US7995496B2 (en) * 2008-08-20 2011-08-09 The Boeing Company Methods and systems for internet protocol (IP) traffic conversation detection and storage
KR100913817B1 (en) * 2008-11-27 2009-08-26 주식회사 지팡이 Apparatus and method for delivering message for interactive toy
TWI403304B (en) * 2010-08-27 2013-08-01 Ind Tech Res Inst Method and mobile device for awareness of linguistic ability
KR101522837B1 (en) * 2010-12-16 2015-05-26 한국전자통신연구원 Communication method and system for the same
US20120191454A1 (en) * 2011-01-26 2012-07-26 TrackThings LLC Method and Apparatus for Obtaining Statistical Data from a Conversation
US20130115586A1 (en) * 2011-11-07 2013-05-09 Shawn R. Cornally Feedback methods and systems
JP2013167806A (en) * 2012-02-16 2013-08-29 Toshiba Corp Information notification supporting device, information notification supporting method, and program
US9171291B2 (en) * 2012-04-26 2015-10-27 Blackberry Limited Electronic device and method for updating message body content based on recipient changes
US8751943B1 (en) * 2013-01-24 2014-06-10 Zotobi Management Ltd. System and method for presenting views of dialogues to a user
US20140272827A1 (en) * 2013-03-14 2014-09-18 Toytalk, Inc. Systems and methods for managing a voice acting session
US9691296B2 (en) * 2013-06-03 2017-06-27 Massachusetts Institute Of Technology Methods and apparatus for conversation coach

Also Published As

Publication number Publication date
US20150064666A1 (en) 2015-03-05
US9875668B2 (en) 2018-01-23
KR101478459B1 (en) 2014-12-31

Legal Events

Date Code Title Description
AS Assignment

Owner name: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SONG, JUNE HWA;HWANG, IN SEOK;YOO, CHUNG KUK;AND OTHERS;REEL/FRAME:031355/0233

Effective date: 20131004

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4