WO2019132092A1 - Plush doll robot with voice recognition function - Google Patents
- Publication number
- WO2019132092A1 (PCT/KR2018/000173)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice
- file
- robot
- unit
- input
- Prior art date
Links
- 238000004891 communication Methods 0.000 claims abstract description 21
- 230000009471 action Effects 0.000 claims abstract description 5
- 238000012937 correction Methods 0.000 claims description 25
- 238000009958 sewing Methods 0.000 claims description 16
- 238000006243 chemical reaction Methods 0.000 claims description 12
- 238000000034 method Methods 0.000 claims description 8
- 238000012795 verification Methods 0.000 claims description 5
- 230000004044 response Effects 0.000 abstract description 4
- 230000006870 function Effects 0.000 description 24
- 241001465754 Metazoa Species 0.000 description 14
- 238000010586 diagram Methods 0.000 description 8
- 230000008451 emotion Effects 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 1
- 230000008921 facial expression Effects 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
- A63H3/28—Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
- A63H3/003—Dolls specially adapted for a particular function not connected with dolls
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
- A63H3/02—Dolls made of fabrics or stuffed
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H30/00—Remote-control arrangements specially adapted for toys, e.g. for toy vehicles
- A63H30/02—Electrical arrangements
- A63H30/04—Electrical arrangements using wireless transmission
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H2200/00—Computerized interactive toys, e.g. dolls
Definitions
- the present invention relates to a plush doll robot having a voice recognition function, and more particularly to a plush doll robot with a voice recognition function that outputs a voice corresponding to a user's voice by connecting to a voice recognition server through an external terminal over wireless communication.
- the present invention provides a plush doll robot having a voice recognition function that connects to a server through wireless communication to analyze the user's voice and output a corresponding voice.
- the present invention provides a plush doll robot having a speech recognition function capable of offering a variety of contents to the user by converting a file containing text into a voice file and correcting errors that occur during the conversion.
- the present invention provides a stuffed toy robot having a voice recognition function that can hold the interest of a child user by providing mobile contents, composed around a stuffed toy character, on a connected external terminal.
- a plush doll robot having a voice recognition function includes: a controller for performing an action corresponding to a user's command input; a voice recognition unit for storing the user's voice as an input voice file, transmitting the input voice file to the controller when it contains a preset command, and forwarding it to the voice recognition server when it does not; a voice providing unit for receiving an answer voice file from the voice recognition server and outputting it as sound; a sensor unit provided with at least one of a touch sensor and a pulse sensor to detect body input; a wireless communication unit for wirelessly communicating with an external terminal using at least one of Wi-Fi, Bluetooth, and NFC; a motor unit for driving a number of motors to control the movement of the body; and an LED (light emitting diode) that, when an LED lighting signal is included among the signals received from the sensor unit and the voice recognition server, lights in a manner corresponding to the received signal.
- the voice providing unit may allow the user either to synthesize his or her own voice as a base voice or to select a base voice from preset voices.
- the speech recognition server may include a TTS conversion unit that receives a scan file including text and converts the text included in the scan file into a speech file.
- the TTS conversion unit provides basic information about the scan file to users registered in a predetermined space on a network, and transmits the originals of the scan file and the voice file to the first applicant who applies to correct errors in the voice file based on that basic information.
- the correction applicant returns the corrected voice file, which is then verified; when verification of the corrected voice file is complete, a predetermined portion of the sales revenue of the corrected voice file is provided to the correction applicant.
- the TTS conversion unit provides the correction applicant with (100 − X)% of the net profit (where X denotes the contribution of the correction applicant) on sales up to a preset sales quantity of the voice file, and ((100 − X) − (sales quantity − preset sales quantity))% of the net profit when sales exceed the preset quantity.
- a plush doll robot having a voice recognition function connects to a server through wireless communication to analyze the user's voice and output a corresponding voice.
- the motor-driven movement of the robot and the blinking of its LED make the plush doll robot with a voice recognition function engaging to the user.
- a plush doll robot having a speech recognition function can offer a variety of contents to the user by converting a file containing text into a voice file and correcting errors that occur during the conversion.
- a stuffed toy robot having a voice recognition function can hold the interest of a child user by providing mobile contents, composed around a stuffed toy character, on a connected external terminal.
- FIG. 1 is a block diagram of a stuffed toy robot having a speech recognition function according to an embodiment of the present invention.
- FIG. 2 is a diagram for explaining a process of converting a scan file including text into an audio file according to an embodiment of the present invention.
- FIG. 3 is a diagram showing a voice file sold according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a change in expression of an application character according to a user voice according to an embodiment of the present invention.
- FIG. 1 is a block diagram of a plush doll robot 100 having a speech recognition function according to an embodiment of the present invention.
- a plush doll robot 100 having a voice recognition function includes a control unit 110, a voice recognition unit 120, a voice data providing unit 130, a wireless communication unit 140, a sensor unit 150, a motor unit 160, and an LED 170, and interacts with a voice recognition server 200 and an external terminal 300.
- the external terminal 300 and the voice recognition server 200 are connected to each other via a network.
- the controller 110 may perform an action corresponding to a command input by the user.
- the voice recognition unit 120 stores the user's voice as an input voice file and transmits the input voice file to the control unit 110 when the input voice contains a preset command; if no preset command is included, it forwards the input voice file to the voice recognition server 200 through the external terminal 300.
- for example, when the voice recognition unit 120 recognizes a preset 'LED lighting' command in the input voice file, it transmits the file to the controller 110, and the controller 110 turns on the LED 170 provided in the plush doll robot 100 or an externally provided LED. If no preset command is recognized, the voice recognition server 200 analyzes the input voice file, generates an answer voice file, and transmits it to the plush doll robot 100.
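The routing just described (handle preset commands locally, forward everything else to the server) can be sketched as follows. The command set, the already-transcribed input, and the `controller`/`server` interfaces are illustrative assumptions, not part of the patent:

```python
# Sketch of the voice recognition unit's routing logic (illustrative only):
# preset commands are handled locally by the controller; anything else is
# forwarded to the voice recognition server via the external terminal.

PRESET_COMMANDS = {"LED lighting"}  # assumed command set for illustration


def route_input_voice(transcript: str, controller, server) -> str:
    """Return 'local' if a preset command was handled, else 'server'."""
    for command in PRESET_COMMANDS:
        if command.lower() in transcript.lower():
            controller.execute(command)  # e.g. turn on the LED 170
            return "local"
    server.request_answer(transcript)  # server will return an answer voice file
    return "server"
```

In this sketch the robot never does open-ended recognition itself: it only matches a small fixed command set, which is consistent with the patent's split between on-device commands and server-side analysis.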
- the voice data providing unit 130 receives the answer voice file corresponding to the input voice file from the voice recognition server 200 and outputs the received answer voice file as sound.
- the voice data providing unit 130 may allow the user either to synthesize a voice to create a base voice or to select a base voice from preset voices. For example, the user may record his or her own voice to serve as the base voice.
- the wireless communication unit 140 may perform wireless communication with an external terminal 300 using at least one of Wi-Fi, Bluetooth, and NFC.
- when the NFC module attached to the plush doll robot 100 communicates with the NFC reader installed in the external terminal 300, the plush doll robot 100 can be controlled through an application executed on the external terminal 300.
- the plush doll robot 100 and the external terminal 300 are then in a state in which NFC communication is connected.
- for example, the application on the external terminal 300 is executed and the 'fairy tale' menu is selected.
- when a fairy tale is selected, an answer voice file in which its text has been converted to speech in advance is chosen and transmitted to the voice data providing unit 130 of the plush doll robot 100, and the voice data providing unit 130 outputs the answer voice file as sound.
- the sensor unit 150 may include at least one of a touch sensor and a pulse sensor to detect body input.
- the touch sensor is attached to the head portion of the stuffed animal robot 100, and when the user's touch is recognized, the touch sensor transmits a signal to the controller 110.
- in response to the signal transmitted to the controller 110, a sound supplied from the voice providing unit may be output, or the arm may be operated by driving the motor.
- the pulse sensor is attached to an arm portion of the plush doll robot 100 and is activated when the voice recognition unit 120 recognizes a corresponding voice command from the user.
- when the user holds the arm of the plush doll robot 100, the pulse sensor measures the pulse and transmits the measurement signal to the controller 110.
- the controller 110 then causes the voice providing unit 130 to output a voice corresponding to the measurement signal, so that the user can check his or her pulse.
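The pulse flow (sensor reading to controller to spoken answer) reduces to turning a measurement into the text the voice providing unit speaks. The exact wording below is an assumption for illustration; the patent does not specify the phrasing:

```python
# Illustrative sketch: convert a pulse-sensor reading into the answer text
# that the voice providing unit 130 would output. Wording is an assumption.

def pulse_to_answer(bpm: int) -> str:
    """Turn a pulse measurement (beats per minute) into spoken-answer text."""
    if bpm <= 0:
        raise ValueError("invalid pulse reading")
    return f"Your pulse is {bpm} beats per minute."
```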
- the motor of the motor unit 160 is attached to the shoulder of the plush doll robot 100.
- when the controller 110 receives a signal, the motor unit 160 drives the motor based on the received signal.
- when an LED lighting signal is generated by the sensor unit 150 or the voice recognition server 200, it is transmitted to the controller 110, and the LED 170 lights in a manner corresponding to the LED lighting signal received from the controller 110.
- the input voice file received by the voice recognition server 200 can be analyzed and stored through the voice recognition program 210 included in the voice recognition server 200.
- an answer voice file is generated based on the analyzed information through the speech recognition program 210, and the generated answer voice file is transmitted to the voice data providing unit 130 of the plush doll robot 100, which outputs the answer voice file as sound.
- for example, suppose the wireless communication unit 140 of the plush doll robot 100 and the external terminal 300 are connected via Wi-Fi, Bluetooth, or NFC communication, and the user says 'What is the weather today?' to the plush doll robot 100.
- the voice recognition unit 120 of the robot 100 stores the voice as an input voice file and forwards it to the voice recognition server 200.
- the input speech file transmitted to the speech recognition server 200 is analyzed through the speech recognition program 210, and an answer voice file such as 'Today's weather is minus 4 degrees.' is generated and transmitted to the voice data providing unit 130 of the plush doll robot 100.
- the voice data providing unit 130 outputs the answer voice file as sound so that the user hears the answer.
- FIG. 2 is a diagram for explaining a process of converting a scan file including text into an audio file according to an embodiment of the present invention.
- the voice recognition server 200 may include a TTS converter 220 for receiving a scan file including text and converting the text included in the scan file to a voice file.
- the TTS conversion unit 220 provides basic information about the scan file to users registered in a predetermined space on the network, transmits the originals of the scan file and the voice file to the first applicant who applies to correct errors in the voice file based on that basic information, verifies the corrected voice file received from the correction applicant, and, when verification of the corrected voice file is complete, provides a predetermined portion of the sales revenue to the correction applicant.
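The correction workflow (announce the file, assign it to the first applicant only, verify the returned correction before selling) can be sketched as a small state machine. Class and method names here are illustrative assumptions; the patent specifies only the behavior:

```python
# Minimal sketch of the TTS correction workflow (names are assumptions):
# a correction job is announced, assigned to the first applicant who applies,
# and the corrected voice file is verified before revenue sharing begins.

class CorrectionJob:
    def __init__(self, title: str, error_rate: int):
        self.title = title
        self.error_rate = error_rate  # error % of the voice file = contribution X
        self.state = "announced"
        self.applicant = None

    def apply(self, user: str) -> bool:
        """First applicant wins the job; later applicants are refused."""
        if self.state != "announced":
            return False
        self.applicant = user
        self.state = "assigned"  # originals of scan and voice file are sent now
        return True

    def submit_and_verify(self, ok: bool) -> None:
        """Server-side verification of the returned corrected voice file."""
        if self.state != "assigned":
            raise RuntimeError("nothing to verify")
        self.state = "verified" if ok else "assigned"  # re-correct on failure
```

Returning a failed verification to the "assigned" state is an assumption; the patent does not say whether a rejected correction is reassigned or returned to the same applicant.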
- FIG. 3 is a diagram showing a voice file sold according to an embodiment of the present invention.
- the scan file containing the text is converted into a voice file, and the voice file is sold, generating revenue.
- the distribution of the resulting profit is described in detail below.
- the TTS conversion unit provides the correction applicant with (100 − X)% of the net profit (where X represents the contribution of the correction applicant) on sales up to the preset sales quantity of the voice file, and ((100 − X) − (sales quantity − preset sales quantity))% of the net profit beyond it.
- this encourages applicants to participate in correction work, while the manager's share of the profit grows as the sales quantity increases, so the profit is distributed effectively between the applicant and the manager.
- for example, users registered in a predetermined space on the network are provided with basic information about the scan file of an example book (e.g., the title of the book and the number of pages) together with the error rate of the converted voice file, e.g., 30%; here, the error rate contained in the voice file corresponds to the correction applicant's contribution once the correction is complete.
- the originals of the scan file and the voice file containing the 30% error are transmitted to the first applicant who applies to correct the 30% error of the voice file.
- when the correction applicant completes correcting the error of the voice file containing the 30% error, the corrected voice file is transmitted to the voice recognition server 200.
- the voice recognition server 200 verifies the error-corrected voice file and notifies other users registered in the predetermined space on the network that it is available; when a user purchases and plays the voice file, the sound is output from the plush doll robot 100 or the external terminal 300.
- the correction applicant who corrected the 30% error receives (100 − 30) = 70% of the net profit on the first 10 units of the error-corrected voice file sold, and ((100 − 30) − (sales quantity − 10))% of the net profit thereafter.
- for example, when 25 units have been sold, the correction applicant receives ((100 − 30) − (25 − 10)) = 55% of the net profit.
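The share formula above can be written directly. This sketch reproduces the worked example's numbers (X = 30, preset quantity 10); the floor at zero is an assumption, since the patent does not say what happens once the declining share would go negative:

```python
def applicant_share(contribution_x: int, sales_qty: int, preset_qty: int) -> int:
    """Percentage of net profit paid to the correction applicant.

    Up to the preset sales quantity: (100 - X)%.
    Beyond it: ((100 - X) - (sales_qty - preset_qty))%, floored at 0
    (the floor is an assumption; the patent leaves the negative case open).
    """
    base = 100 - contribution_x
    if sales_qty <= preset_qty:
        return base
    return max(0, base - (sales_qty - preset_qty))
```

With X = 30 and a preset quantity of 10, this yields 70% through the tenth unit and 55% at 25 units sold, matching the example above.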
- the speech recognition server 200 may extract preset words when they are included in the input speech file received from the voice recognition unit 120, generate information based on the extracted words, and generate an answer voice file through analysis based on the generated information.
- for example, if the user says 'promise tomorrow at 12 o'clock', the plush doll robot 100 generates an input voice file from the voice and transmits it to the voice recognition server 200, which extracts 'tomorrow', '12 o'clock', and 'appointment' from the input voice file and stores them.
- later, when the user says 'Tell me my schedule for tomorrow', the speech recognition server 200 generates an answer voice file that reflects the extracted and stored information, such as 'Your schedule for tomorrow has an appointment at 12 o'clock.', and sends the answer voice file to the voice data providing unit 130 of the plush doll robot 100, which outputs the answer so that the user can recognize the schedule for tomorrow.
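The schedule example boils down to server-side keyword extraction plus a lookup. The preset word list, the synonym mapping ('promise' and 'appointment' translate the same Korean word), and the reply wording below are illustrative assumptions; a real server would use full speech recognition:

```python
# Sketch of the server-side keyword store used in the schedule example.
# Preset words are extracted from an utterance and stored, and a later
# query is answered from the stored words. Word lists are assumptions.

PRESET_WORDS = ("tomorrow", "12 o'clock", "appointment")
SYNONYMS = {"promise": "appointment"}  # assumption: normalize to one keyword


class ScheduleStore:
    def __init__(self):
        self.words: list = []

    def ingest(self, utterance: str) -> list:
        """Extract preset words (and synonyms) from an utterance and store them."""
        text = utterance.lower()
        found = [w for w in PRESET_WORDS if w in text]
        for word, canonical in SYNONYMS.items():
            if word in text and canonical not in found:
                found.append(canonical)
        self.words.extend(found)
        return found

    def answer(self) -> str:
        """Compose the text of the answer voice file from the stored words."""
        if {"tomorrow", "12 o'clock", "appointment"} <= set(self.words):
            return "Your schedule for tomorrow has an appointment at 12 o'clock."
        return "No schedule is stored."
```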
- the external terminal 300 receives a signal from the wireless communication unit 140 of the plush doll robot 100, provides mobile contents using the received signal, and relays communication between the plush doll robot 100 and the voice recognition server 200.
- FIG. 4 is a diagram illustrating a change in expression of an application character according to a user voice according to an embodiment of the present invention.
- the mobile contents include a function of presenting the plush doll robot 100 as an animated character through the application, a function of moving the character's mouth while the robot outputs a voice file, and similar features.
- by executing the application, the plush doll robot 100 can perform at least one of the functions of outputting sound, driving the motor, blinking the LED, and providing learning content.
- when the user speaks, the input voice file generated from the voice is analyzed by the voice recognition server 200 to select an emotion; the selected emotion is transmitted to the application of the external terminal 300, which displays a facial expression matching the emotion on the external terminal 300.
- the present invention can provide a plush doll robot having a speech recognition function, which connects to a server through wireless communication to analyze the user's voice and output a corresponding voice.
- it can also provide a plush doll robot having a speech recognition function that offers the user a variety of contents by converting a file containing text into a voice file and correcting errors that occur during the conversion.
- the control method of the plush doll robot having the speech recognition function may be recorded in a computer-readable medium including program instructions for performing various computer-implemented operations.
- the computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
- the program instructions recorded on the media may be those specially designed and constructed for the present invention, or they may be of the kind well known and available to those skilled in the computer software arts.
- Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include machine language code such as those produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Toys (AREA)
Abstract
Description
Claims (5)
- 1. A plush doll robot having a voice recognition function, comprising: a controller for performing an action corresponding to a command input of a user; a voice recognition unit for storing the user's voice as an input voice file, transmitting the input voice file to the controller when a preset command is included in the input voice, and delivering the input voice file to a voice recognition server when no preset command is included; a voice providing unit for receiving an answer voice file corresponding to the input voice file from the voice recognition server and outputting the received answer voice file as sound; a sensor unit provided with at least one of a touch sensor and a pulse sensor to detect body input; a wireless communication unit for performing wireless communication with an external terminal using at least one of Wi-Fi, Bluetooth, and NFC; a motor unit for driving a plurality of motors to control movement of the body; and an LED (light emitting diode) which, when an LED lighting signal is included among the signals received from the sensor unit and the voice recognition server, lights in a manner corresponding to the received lighting signal.
- 2. The plush doll robot having a voice recognition function according to claim 1, wherein the voice providing unit allows the user either to synthesize his or her own voice as a base voice or to select a base voice from preset voices.
- 3. The plush doll robot having a voice recognition function according to claim 1, wherein the voice recognition server comprises a TTS conversion unit that receives a scan file containing text and converts the text included in the scan file into a voice file.
- 4. The plush doll robot having a voice recognition function according to claim 3, wherein the TTS conversion unit provides basic information about the scan file to users registered in a predetermined space on a network, transmits the originals of the scan file and the voice file to the first applicant who applies to correct errors in the voice file based on the basic information, verifies the corrected voice file received from the correction applicant, and, when verification of the corrected voice file is complete, provides a predetermined portion of the sales revenue of the corrected voice file to the correction applicant.
- 5. The plush doll robot having a voice recognition function according to claim 4, wherein the TTS conversion unit provides the correction applicant with (100 − X)% of the net profit (where X denotes the contribution of the correction applicant) on sales up to a preset sales quantity of the voice file, and ((100 − X) − (sales quantity − preset sales quantity))% of the net profit when sales exceed the preset quantity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207023831A KR20200119821A (en) | 2017-12-29 | 2018-01-04 | Plush toy robot with voice recognition function |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170184127 | 2017-12-29 | ||
KR10-2017-0184127 | 2017-12-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019132092A1 true WO2019132092A1 (en) | 2019-07-04 |
Family
ID=67063917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/000173 WO2019132092A1 (en) | 2017-12-29 | 2018-01-04 | Plush doll robot with voice recognition function |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20200119821A (en) |
WO (1) | WO2019132092A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010007842A (en) * | 2000-10-06 | 2001-02-05 | 남호원 | The system and method of a dialogue form voice and multi-sense recognition for a toy |
US20100041304A1 (en) * | 2008-02-13 | 2010-02-18 | Eisenson Henry L | Interactive toy system |
JP2013099823A (en) * | 2011-11-09 | 2013-05-23 | Panasonic Corp | Robot device, robot control method, robot control program and robot system |
KR20170027705A (en) * | 2014-04-17 | 2017-03-10 | 소프트뱅크 로보틱스 유럽 | Methods and systems of handling a dialog with a robot |
KR20170096502A (en) * | 2016-02-16 | 2017-08-24 | 최진양 | Talking doll, circuit module of talking doll and voice service system based on the same |
-
2018
- 2018-01-04 WO PCT/KR2018/000173 patent/WO2019132092A1/en active Application Filing
- 2018-01-04 KR KR1020207023831A patent/KR20200119821A/en not_active Application Discontinuation
Also Published As
Publication number | Publication date |
---|---|
KR20200119821A (en) | 2020-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bevilacqua et al. | Wireless sensor interface and gesture-follower for music pedagogy | |
Pieraccini | The voice in the machine: building computers that understand speech | |
CN100352622C (en) | Robot device, information processing method, and program | |
KR100906136B1 (en) | Information processing robot | |
WO2002045916A1 (en) | Robot device, method for controlling motion of robot device, and system for controlling motion of robot device | |
JP2017201342A (en) | Language Learning Robot Software | |
JP5404781B2 (en) | Interactive toys | |
JP5020593B2 (en) | Foreign language learning communication system | |
JP2011528246A5 (en) | ||
WO2020159073A1 (en) | Conversation-based foreign language learning method using reciprocal speech transmission through speech recognition function and tts function of terminal | |
WO2019132092A1 (en) | Plush doll robot with voice recognition function | |
JP2001242780A (en) | Information communication robot device, information communication method, and information communication robot system | |
WO2015037871A1 (en) | System, server and terminal for providing voice playback service using text recognition | |
US20230230493A1 (en) | Information Processing Method, Information Processing System, and Recording Medium | |
US20210319715A1 (en) | Information processing apparatus, information processing method, and program | |
KR20010007842A (en) | The system and method of a dialogue form voice and multi-sense recognition for a toy | |
Li et al. | Designing a realistic peer-like embodied conversational agent for supporting children's storytelling | |
Angulo et al. | Aibo jukeBox–A robot dance interactive experience | |
WO2015076483A1 (en) | Control system for toys through scenario command | |
WO2020111835A1 (en) | User device and education server included in conversation-based education system | |
KR20020068835A (en) | System and method for learnning foreign language using network | |
US20040072498A1 (en) | System and method for controlling toy using web | |
KR20200085433A (en) | Voice synthesis system with detachable speaker and method using the same | |
KR100591465B1 (en) | Network based robot system playing multimedia content having motion information selected by the optical identification device | |
KR20200064021A (en) | conversation education system including user device and education server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18897167 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18897167 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/01/2021) |