MXPA06014212A - Figurine using wireless communication to harness external computing power. - Google Patents

Figurine using wireless communication to harness external computing power.

Info

Publication number
MXPA06014212A
MXPA06014212A
Authority
MX
Mexico
Prior art keywords
figurine
data
computer
translation
user
Prior art date
Application number
MXPA06014212A
Other languages
Spanish (es)
Inventor
Robert D Palmquist
Original Assignee
Speechgear Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Speechgear Inc
Publication of MXPA06014212A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H3/00 Dolls
    • A63H3/003 Dolls specially adapted for a particular function not connected with dolls
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H3/00 Dolls
    • A63H3/28 Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H30/00 Remote-control arrangements specially adapted for toys, e.g. for toy vehicles
    • A63H30/02 Electrical arrangements
    • A63H30/04 Electrical arrangements using wireless transmission
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H2200/00 Computerized interactive toys, e.g. dolls

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Toys (AREA)
  • Arrangements For Transmission Of Measured Signals (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention is directed toward a figurine that utilizes wireless communication to harness computing power of an external computer. The figurine may capture visual or audible input and wirelessly transfer the input to the external computer, either directly or via a network. The external computer processes the input, generates output, and transfers the output to the figurine. The output can then be presented to a child as though the figurine processed and generated the output directly.

Description

FIGURINE USING WIRELESS COMMUNICATION TO HARNESS EXTERNAL COMPUTING POWER

Field of the Invention

The invention relates to figurines such as stuffed animals, teddy bears, dolls, toy robots, action figures, and the like, and more particularly, to figurines that include electronic components.

Background of the Invention

In this description, the term "figurine" refers to a doll, a teddy bear, a stuffed animal, a toy robot, a toy statue, an action figure, and the like. Figurines are commonly used by children to pass time and facilitate imaginative thinking. In recent times, more advanced computerized figurines have been developed. These more advanced figurines, for example, can incorporate electronic components that allow the figurine to interact with the child.

Brief Description of the Invention

In general, the invention relates to a system that includes a figurine that uses wireless communication to harness the computing power of an external computer. In particular, applications that require intensive processing power can be executed, seemingly by the figurine, while actually running on the external computer. The figurine can capture an input and wirelessly transfer the input to an external computer, which processes the input. The external computer returns the output to the figurine, which presents the output to a child. Speech recognition applications, speech interpretation applications, image processing applications, speaker recognition applications, and language translation applications are some examples of applications that typically require intensive processing power and large amounts of memory. The invention contemplates a figurine that uses wireless communication to harness the computing power of an external computer in order to facilitate the presentation of such applications through the figurine.
By performing the intensive processing external to the figurine, the internal electronic components of the figurine can be greatly simplified. In particular, the need for intensive processing power and large amounts of memory in the figurine can be avoided. Accordingly, the need to protect powerful processors and memory from misuse by a child handling the figurine can also be avoided. In addition, the battery life of the figurine can be prolonged by using the techniques described herein.

In one embodiment, the invention provides a system comprising a figurine that captures an input from a user and wirelessly communicates the input. The input may be image data, for example, or audio data such as voice data. The system also includes a computer that receives the input from the figurine, generates a response to the input, and wirelessly communicates the response to the figurine. The figurine then transfers the response to the user.

In another embodiment, the invention provides a system comprising a figurine that captures voice data from a user and wirelessly communicates the voice data. The system also includes a computer that receives the voice data from the figurine, generates a translation of the voice data, and wirelessly communicates the translation to the figurine. The figurine transfers the translation to the user.

In another embodiment, the invention provides a system comprising a figurine that captures image data from a user and wirelessly communicates the image data, wherein the image data includes one or more words or phrases. The system also includes a computer that receives the image data from the figurine, generates a translation of the words or phrases, and wirelessly communicates the translation to the figurine. The figurine transfers the translation to the user.
In another embodiment, the invention provides a system comprising a figurine that captures image data from a user and wirelessly communicates the image data, wherein the image data includes one or more words or phrases. The system also includes a computer that receives the image data from the figurine, generates audio data corresponding to the words or phrases, and wirelessly communicates the audio data to the figurine. The figurine transfers the audio data to the user.

In another embodiment, the invention provides an interactive toy figurine comprising a data capture device and a wireless transmitter/receiver for wirelessly transferring the data captured by the data capture device and receiving output associated with the captured data. For example, the data capture device may be an image capture device for capturing image data, such as a camera deployed in one or both of the eyes of the toy figurine, or elsewhere.

In another embodiment, a method comprises capturing a user's voice data with a figurine, and wirelessly communicating the voice data to an external computer.
The method also comprises receiving a response to the voice data from the external computer, and transferring the response to the user of the figurine.

In another embodiment, a method comprises capturing a user's voice data with a figurine, and wirelessly communicating the voice data to an external computer. The method also comprises receiving a translation of the voice data from the external computer, and transferring the translation to the user of the figurine.

In another embodiment, a method comprises capturing image data with a figurine, and wirelessly communicating the image data to an external computer. The image data includes one or more words or phrases. The method also comprises receiving a translation of the words or phrases from the external computer, and transferring the translation from the figurine.

In another embodiment, a method comprises capturing image data with a figurine, and wirelessly communicating the image data to an external computer. The image data includes one or more words or phrases. The method also comprises receiving audio data corresponding to the words or phrases from the external computer, and transferring the audio data from the figurine.

In another embodiment, a system comprises a figurine that captures an input and wirelessly communicates the input. The system also includes a computer that receives the input from the figurine, generates an output based on the input, and wirelessly communicates the output to the figurine. The figurine presents the output to a user.

In another embodiment, a system comprises a figurine communicatively coupled to a computer, which in turn communicatively connects to a server through a network. The figurine provides an input to the computer and receives an output from the computer. The computer can receive software updates from the server so that the functionality of the figurine can be changed or expanded through software updates.
Of course, updates can also be downloaded to the computer using a conventional disk or other storage medium, in which case communication with the server is not necessary.

In another embodiment, a system comprises a figurine communicatively coupled to a computer. In addition, the system includes one or more system-compatible objects with which the figurine can interact, harnessing the power of the computer. Compatible objects can include markers identifiable by the figurine, which can ensure that the software on the computer can provide useful interaction between the figurine and the object.
In another embodiment, a system comprises a figurine, a computer, and a parent unit. The parent unit may comprise a software module on the computer, or a separate hardware device. In either case, the parent unit allows parents to exercise parental control over the figurine's functionality by interacting with software modules on the computer that control the operation and interactive features of the figurine. The parent unit can also function as a baby monitor; for example, an intelligent baby monitor can generate an alarm if a baby in the vicinity of the figurine stops breathing, or has other detectable problems.

Brief Description of the Figures

Figure 1 is a conceptual diagram illustrating a figurine that communicates wirelessly with a computer. Figures 2 and 3 are block diagrams of a figurine that communicates wirelessly with a computer. Figures 4-6 are flow diagrams, according to embodiments of the invention, illustrating the application of the invention to the translation of spoken or written messages. Figure 7 is a conceptual diagram illustrating a figurine that communicates wirelessly with a computer through a wireless hub. Figure 8 is a conceptual diagram illustrating a figurine that communicates wirelessly with a computer through a network. Figure 9 is a conceptual diagram illustrating a figurine that communicates wirelessly with a computer and a compatible object. Figure 10 is a conceptual diagram illustrating a figurine that communicates wirelessly with a computer, with a parent unit. Figure 11 is a conceptual diagram illustrating a system in which a server communicates with clients that communicate wirelessly with figurines.

Detailed Description of the Invention

The invention relates to a system that includes a figurine that uses wireless communication to harness the computing power of an external computer.
In particular, certain applications that require intensive processing power can appear to be executed by the figurine, with the intensive processing actually performed external to the figurine on another computer. The figurine can capture audio data, video data, or both, and wirelessly transfer the captured data to the external computer, either directly or through a network. Audio data includes, but is not limited to, voice data, speech data, and music data. The external computer receives the data as input, processes the data, generates an output based on the input data, and transfers the output to the figurine. The output can then be presented to a child as if the figurine had processed and generated the output directly. Speech recognition applications, speaker recognition applications, speech interpretation applications, and language translation applications are some examples of applications that may require intensive processing power and a large amount of memory. The invention contemplates a figurine that uses wireless communication to harness the computing power of an external computer in order to facilitate the presentation of such applications through the figurine.

Figure 1 is a diagram illustrating a system 10 according to an embodiment of the invention. System 10 includes a figurine 12 such as a doll, teddy bear, stuffed animal, toy robot, toy statue, action figure, or the like. The system 10 also includes an external computer 14 such as a personal computer (PC), Macintosh, workstation, laptop computer, portable computer, handheld computer, or other computer external to the figurine 12. The figurine 12 and the external computer 14 communicate either indirectly or directly via one or more wireless communication links 16.
In some cases, the external computer 14 may be networked to one or more wireless hubs or other devices that facilitate wireless communication. The figurine 12 harnesses the computing power of the external computer 14 in order to facilitate the execution of processor-intensive and/or memory-intensive applications. A child can interact with the figurine 12. Accordingly, the figurine 12 can facilitate learning and provide instruction and guidance to the child. Since the figurine 12 harnesses the computing power of the external computer 14 to execute these applications, the computational power and memory required in the figurine 12 can be significantly reduced. Therefore, the need to protect processors and/or memory from misuse by a child who operates the figurine 12 can also be reduced. Also, the power used by the figurine 12 can be reduced, prolonging the life of the battery within the figurine 12.

In one example, the figurine 12 can present a speech recognition application to the child, for example, a program that teaches the child the meanings of one or more words or phrases. In that case, the child can talk to the figurine 12, which captures the voice and wirelessly communicates the captured voice to the external computer 14. The external computer 14 grammatically analyzes the voice and generates one or more meanings, which are communicated back to the figurine 12. The figurine 12 can then transfer the meanings to the child in any number of ways. As an illustration, a child can pronounce the word "trip" to the figurine 12, which captures the expression and wirelessly communicates the captured voice to the external computer 14. The external computer 14 grammatically analyzes the captured voice and generates one or more definitions, which are communicated back to the figurine 12. The figurine 12 can transfer a definition, for example, by answering "the word 'trip' means going on a trip."
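The capture, wireless transfer, processing, and response loop of the example above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the message format and the dictionary-lookup "application" are hypothetical stand-ins for the processor-intensive software running on the external computer 14.

```python
# Hypothetical sketch of the round trip: the figurine only packages input
# and presents the answer; all heavy processing happens on the computer.

def figurine_capture(utterance: str) -> dict:
    """Package raw input for wireless transfer (no local processing)."""
    return {"type": "voice", "payload": utterance}

def external_computer(message: dict) -> dict:
    """Stand-in for the processor-intensive application on the PC."""
    definitions = {"trip": "the word 'trip' means going on a trip"}
    reply = definitions.get(message["payload"], "I don't know that word yet")
    return {"type": "speech", "payload": reply}

def figurine_present(response: dict) -> str:
    """Play the computer's answer as though the figurine produced it."""
    return response["payload"]

# One interaction: the child says "trip" to the figurine.
request = figurine_capture("trip")
response = external_computer(request)
print(figurine_present(response))  # the word 'trip' means going on a trip
```

The point of the split is that `figurine_capture` and `figurine_present` are trivial, so the toy needs only a simple controller, while `external_computer` can grow arbitrarily complex through software updates.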
In another example, the figurine 12 may be able to maintain an intelligent conversation with the child by harnessing the computing power of the external computer. In that case, the child can talk to the figurine 12, which captures the voice and wirelessly communicates the captured voice to the external computer 14. The external computer 14 grammatically analyzes the voice and generates one or more answers, which are communicated back to the figurine 12. The figurine 12 can then transfer the responses to the child. In this way, if the child asks the figurine 12 a question, the figurine 12 can respond with an intelligent response. The software running on the external computer 14 can adapt over time to the questions presented by the child, and can also be updated. Updates to the software on the external computer 14, for example, can make the figurine 12 appear to grow intellectually with the child. By way of illustration, a child can pronounce the words "I love you" to the figurine 12, which captures the expression and wirelessly communicates the captured voice to the external computer 14. The external computer 14 grammatically analyzes the child's message and generates one or more answers, which are communicated back to the figurine 12. The figurine 12 can transfer an answer, for example, by responding "I love you too." Or the child may pose the question "what is a triangle?" to the figurine 12, which may respond "A triangle is a shape that has three sides."

In another example, the figurine 12 can also help the child with reading. For example, one or more image capture devices, such as digital cameras, may be placed on the figurine 12, such as in one or more eyes 18 of the figurine 12. The child may present a page of a book to the figurine 12 by directing the eyes 18 of the figurine 12 toward the page and pressing a button (not shown). In this case, a captured image of the page can be communicated wirelessly to the external computer 14.
The external computer 14 grammatically analyzes the page and identifies the words printed on the page. The external computer 14 then communicates these back to the figurine, so that the figurine 12 can transfer the words written on the page. In this sense, the figurine 12 may appear to read to the child. In this case, a screen (not shown) can also be incorporated in the figurine 12, to present the child with the captured words as they are read by the figurine 12. Therefore, the figurine 12 can aid the child's learning process by helping to teach the child to read.

In another example, the figurine 12 can facilitate the translation of words spoken by the child. For example, the child can talk to the figurine 12, which captures the voice and wirelessly communicates the captured voice to the external computer 14. The external computer 14 grammatically analyzes the voice and identifies a translation of the words or phrases spoken by the child. The external computer 14 then communicates the translation back to the figurine, so that the figurine 12 can transfer the translation to the child. In this example, the figurine 12 serves as an interpreter. For purposes of illustration, a child may pronounce the word "gracias" to the figurine 12, which captures the expression and wirelessly communicates the captured voice to the external computer 14. The external computer 14 grammatically analyzes the expression and identifies a translation of the phrase. The external computer 14 then communicates the translation back to the figurine, and the figurine 12 can transfer the translation, for example, by responding "'gracias' means 'thank you' in English."

In another example, the figurine 12 can facilitate the translation of written words. For example, one or more image capture devices, such as digital cameras, may be placed in the eyes 18 of the figurine 12. The child may present words or phrases to the figurine 12 by directing the eyes 18 of the figurine 12 toward the words or phrases.
The child can press a button (not shown) on the figurine 12 to capture the words or phrases that are "seen" by the figurine. In that case, a captured image of the words or phrases can be wirelessly communicated to the external computer 14. The external computer 14 grammatically analyzes the words or phrases, and identifies a translation of the words or phrases. The external computer 14 then communicates the translation back to the figurine, so that the figurine 12 can transfer the translation. In that case, a screen (not shown) can also be incorporated in the figurine 12 to present the captured words that are translated to the child. The screen can be located in any part of the figurine, but is preferably located on the back of the figurine so that the words can be seen by the child while the eyes 18 of the figurine 12 are directed away from the child, toward the page that is to be read.

In a further example, the figurine 12 harnesses the computing power of the external computer 14 to perform processing of images unrelated to words and phrases. One or more image capture devices, such as digital cameras, can be placed in the eyes 18 of the figurine 12, and can capture image data to be processed by the external computer 14. The image processing can include recognition of faces, objects, colors, numbers, places, activities, and the like. When the external computer 14 runs face recognition software, for example, the figurine 12 may appear to recognize the person or persons interacting with the figurine 12. The figurine 12 may use the recognition in its interaction, for example, by calling the child by name. When the external computer 14 runs object recognition software, the figurine 12 may appear to recognize objects and attributes of the objects such as shape, type, or quantity. In an example application, the figurine 12 can teach a child to recognize shapes, count objects, become familiar with colors, and the like.
In yet another example, the figurine 12 harnesses the computing power of the external computer 14 to perform voice recognition in order to identify the speaker. The figurine 12 can use voice recognition in its interaction, for example, by calling a child by name. When the external computer 14 runs the voice recognition software, the figurine 12 may appear to recognize the speaker. Typically, voice recognition applications will be used in conjunction with speech recognition applications. Voice recognition applications refer to applications that identify who is speaking, and can limit the programmed interaction of the figurine to those people associated with a recognized voice. Speech recognition applications refer to applications that recognize what was said, and can generally be used with any voice. In some cases, the invention may use both speech and voice applications together to determine what is said and who is speaking. This can improve the interaction with the figurine 12 such that the figurine 12 responds only to the child for whom it was programmed to respond. With voice recognition, a child can say "sing me a song" or "tell me a story" and the figurine can select a song or story from its library and respond to the recognized voice as directed. The responses of the figurine to others, however, can be limited or prohibited if the voice making the request is not recognized.

The interaction between a user and the figurine 12 can be proactive as well as reactive. In other words, the external computer 14 can cause the figurine 12 to take action that is not in response to an action by a user. For example, the figurine 12 can serve as an alarm clock, telling a child that it is time to go to bed. The figurine 12 can also proactively remind a user of the day's notes, the birthdays of friends or relatives, and the like.
In this way, a first output, which is responsive to the input to the figurine, can be provided, and the computer 14 can be programmed to proactively make the figurine 12 transfer a second output to a user, for example, an alarm or reminder.

Figure 2 is a functional block diagram of system 10 including a figurine 12 and an external computer 14. Again, the figurine 12 uses wireless communication in order to harness the processing power of the external computer 14. In this way, complex applications can be performed by the computer 14, yet presented to a user by the figurine 12. The figurine 12 includes one or more input devices 22 for capturing input from a user, for example, a child. The figurine 12 also includes one or more output devices 23 for presenting output to the user. The input device 22 may comprise, for example, a sound detection transducer such as a microphone, or an image capture device, such as a digital camera. A button or other actuator can be placed on the figurine 12 to turn on the microphone or have the digital camera take an image. The output device 23 may comprise a sound-generating transducer such as a loudspeaker, or possibly a display screen. The sounds or images detected by the input device 22 can be processed locally by the central processing unit (CPU) 24 in order to facilitate communication of the data to the external computer 14. For example, the local CPU 24 can package the captured input for transmission to the external computer 14. The local CPU 24 can also control the transmitter/receiver 26 to cause the transmission of data indicative of the sounds or images detected by the input device 22. The local CPU 24, for example, may comprise a relatively simple controller implemented in an application-specific integrated circuit (ASIC). If images are captured by the figurine 12, the local CPU 24 can compress the image file to simplify the wireless transfer of the image file.
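The local CPU's packaging-and-compression role described above can be sketched as follows. This is a minimal illustration: the 4-byte length-prefix framing and the use of zlib are assumptions for the sketch, not a format specified by the patent.

```python
import zlib

# Sketch: the figurine side packs and compresses a captured image before
# wireless transfer; the computer side unpacks it for processing.

def pack_for_transmission(image_bytes: bytes) -> bytes:
    """Local CPU 24: compress the capture and prefix its length."""
    compressed = zlib.compress(image_bytes, level=9)
    return len(compressed).to_bytes(4, "big") + compressed

def unpack_on_computer(frame: bytes) -> bytes:
    """Remote side: read the length prefix and decompress the payload."""
    length = int.from_bytes(frame[:4], "big")
    return zlib.decompress(frame[4:4 + length])

# A stand-in for a captured page image (repetitive data compresses well).
page_image = b"A TRIANGLE IS A SHAPE THAT HAS THREE SIDES. " * 200
frame = pack_for_transmission(page_image)
assert unpack_on_computer(frame) == page_image
assert len(frame) < len(page_image)  # compression shrank the payload
```

Compressing before transmission is what lets a simple, low-power controller in the toy keep the wireless link usable even at the modest data rates discussed below.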
In any case, the transmitter/receiver 26 transfers the data collected by the figurine 12 so that the data can be processed external to the figurine 12. The wireless communication between the transmitter/receiver 26 of the figurine 12 and the transmitter/receiver 27 of the external computer 14 can conform to any of a wide variety of wireless communication protocols. Examples include, without limitation, a wireless networking standard such as one of the IEEE 802.11 standards, a standard according to the Bluetooth Special Interest Group, or the like. The IEEE 802.11 standards include, for example, the original 802.11 standard, which provides data transfer rates of 1-2 megabits per second (Mbps) in a frequency band of 2.4-2.483 gigahertz (GHz); the IEEE 802.11b standard (sometimes referred to as 802.11 wireless fidelity or 802.11 Wi-Fi), which uses binary phase shift keying (BPSK) for transmission at 1.0 Mbps and quadrature phase shift keying (QPSK) for transmission at 2.0, 5.5 and 11.0 Mbps; the IEEE 802.11g standard, which uses orthogonal frequency division multiplexing (OFDM) in the 2.4 GHz frequency band to provide data transmission at rates up to 54 Mbps; and the IEEE 802.11a standard, which uses OFDM in a 5 GHz frequency band to provide data transmission at rates up to 54 Mbps. These and other wireless standards have been developed, and additional extensions to the IEEE 802.11 standard, as well as other wireless standards, will likely emerge in the future. The invention is not limited to the type of wireless communication techniques used, and can also be implemented with wireless protocols similar to those used for cell phone communication or two-way pagers, or any other wireless protocol, whether known or subsequently developed. The transmitter/receiver 27 of the external computer 14 receives data sent by the transmitter/receiver 26 of the figurine 12.
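As a back-of-the-envelope illustration of why the nominal data rates listed above matter for this system, the following computes a lower bound on the time to move a compressed camera image over the link. The 100 KiB image size is an assumption for the sketch, and protocol overhead is ignored, so real transfers would take longer.

```python
# Lower-bound transfer time for a 100 KiB compressed image at the
# nominal rates named in the text (overhead ignored).

IMAGE_BITS = 100 * 1024 * 8  # 100 KiB expressed in bits

def transfer_seconds(rate_mbps: float) -> float:
    """Seconds to move IMAGE_BITS at the given nominal rate."""
    return IMAGE_BITS / (rate_mbps * 1_000_000)

for label, rate in [("802.11 @ 1 Mbps", 1.0),
                    ("802.11b @ 11 Mbps", 11.0),
                    ("802.11a/g @ 54 Mbps", 54.0)]:
    print(f"{label}: {transfer_seconds(rate):.3f} s")
```

Even at the original 802.11 rate of 1 Mbps, a compressed image moves in well under a second, which is why offloading the processing does not noticeably delay the figurine's response.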
The remote CPU 28 performs extensive processing on the data to generate the output. For example, the remote CPU 28 may comprise a general purpose microprocessor that executes software to generate the output. The output is then transmitted back to the figurine 12. The output device 23 of the figurine 12 can then present the output to the user.

By performing the intensive processing on the external computer 14, the local electronic components of the figurine 12 can be largely simplified. In particular, the need for intensive processing power and a large amount of memory in the figurine 12 can be avoided. Therefore, the need to protect powerful processors and memory from misuse by a child handling the figurine can also be avoided. The battery power of the figurine 12 can also be extended by performing the processing tasks externally on the computer 14. In addition, software updates can be easily implemented for execution by the remote CPU 28 without requiring an update to the components of the figurine 12.

The processing tasks performed on the remote CPU 28 of the external computer 14 depend, in general, on the particular application that is presented to the user by the figurine 12. In one example, the figurine 12 may present a speech recognition application to the child, for example, a program that teaches the child the meanings of one or more words or phrases. In that case, the child can talk to the figurine 12, and the input device 22 can capture the voice. The local CPU 24 packages the voice and causes the transmitter/receiver 26 to wirelessly communicate the captured voice to the external computer 14. The remote CPU 28 grammatically analyzes the voice and generates one or more meanings, which are communicated back to the figurine 12 by the transmitter/receiver 27. The output device 23 of the figurine 12 can then transfer the meanings to the child.
In another example, the figurine 12 may be able to maintain an intelligent conversation with the child by harnessing the remote CPU 28 of the external computer 14. In this case, the child may speak to the figurine 12, and the input device 22 may capture the voice. The local CPU 24 packages the voice and causes the transmitter/receiver 26 to wirelessly communicate the captured voice to the external computer 14. The remote CPU 28 grammatically analyzes the voice and generates one or more responses, which are communicated back to the figurine 12 by the transmitter/receiver 27. The output device 23 of the figurine 12 can then transfer the responses to the child.
In this way, if the child asks the figurine 12 a question, the figurine 12 can respond with an intelligent answer.

In another example, the figurine 12 can help the child with reading. In this case, the input device 22, in the form of an image capture device, can capture images of a page. The local CPU 24 packages the image and causes the transmitter/receiver 26 to wirelessly communicate the captured image to the external computer 14. The local CPU can also compress the image before transmission. Once the external computer 14 has received the captured image, the remote CPU 28 grammatically analyzes the image and generates one or more meanings, which are communicated back to the figurine 12 by the transmitter/receiver 27. For example, the remote CPU 28 can perform recognition of characters in the image in order to identify characters, and can then decipher the meaning of the identified characters using one or more dictionaries stored in memory and accessible by the remote CPU 28. The output device 23 of the figurine 12 can then transfer the meanings to the child. In that sense, the figurine 12 may appear to read to the child. The output device 23 may include speakers for verbal output and possibly a screen for presenting to the child the captured words that are read by the figurine 12.

In another example, the figurine 12 may facilitate the translation of words spoken by the child. In this case, the child can talk to the figurine 12, and the input device 22 can capture the voice. The local CPU 24 packages the voice and causes the transmitter/receiver 26 to wirelessly communicate the captured voice to the external computer 14. The remote CPU 28 grammatically analyzes the voice and identifies a translation of the words or phrases spoken by the child, which is communicated back to the figurine 12 by the transmitter/receiver 27. The output device 23 of the figurine 12 can then transfer the translation to the child.
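The spoken-word translation flow just described (forward captured speech, look up a translation on the computer, voice the result through the figurine) can be sketched as follows. The phrasebook contents and function names are hypothetical stand-ins; a real system would use full speech recognition and machine translation software on the external computer.

```python
# Hypothetical sketch of the interpreter application: the heavy lookup
# runs remotely, and the figurine merely relays the answer.

PHRASEBOOK = {"gracias": "thank you", "adios": "goodbye"}

def remote_translate(utterance: str) -> str:
    """Runs on the external computer (remote CPU 28 in Figure 2)."""
    target = PHRASEBOOK.get(utterance.lower())
    if target is None:
        return "I don't know that phrase yet"
    return f"'{utterance}' means '{target}' in English"

# The figurine relays the computer's answer to the child.
print(remote_translate("gracias"))  # 'gracias' means 'thank you' in English
```

Because the phrasebook (or a full translation engine) lives on the computer, new language pairs can be added by updating software there, with no change to the toy itself.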
In this example, the figurine 12 serves as an interpreter.

In a further example, the figurine 12 harnesses the computing power of the external computer 14 to perform other types of image processing. One or more image capture devices may capture image data to be processed by the external computer 14. The image processing may include recognition of faces, objects, colors, numbers, places, activities, and the like. When the external computer 14 runs face recognition software, for example, the figurine 12 may appear to recognize the person or persons interacting with the figurine 12. The figurine 12 may use the recognition in its interaction, for example when calling a child by name. When the external computer 14 runs object recognition software, the figurine 12 appears to recognize objects and attributes of objects such as shape, type or quantity. In an example application, the figurine 12 can teach a child to recognize shapes, count objects, become familiar with colors, and the like.

The interaction between a user and the figurine 12 can be proactive as well as reactive. In other words, the external computer 14 can cause the figurine 12 to take action that is not responsive to an action by a user. For example, the figurine 12 can serve as an alarm clock, telling a child that it is time to go to bed. The figurine 12 can also proactively remind a user of the day's appointments, birthdays of friends or relatives, and the like. In applications for elderly users, the figurine 12 can remind the user to take a medication. Any of these alarms or reminders may be ordinary audio tones, music, or possibly programmed or recorded audio of a familiar voice, so that the figurine 12 speaks with a tone pleasant to the user when the reminders are provided. For example, a parent's voice can be recorded so that the figurine 12 speaks with these recordings.
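A minimal sketch of the proactive reminder behavior follows. The reminder schedule, its field names, and the idea of tagging each reminder with a recorded voice are illustrative assumptions; the patent does not prescribe a data structure.

```python
from datetime import datetime

# Assumed schedule kept on the external computer; 'voice' names the
# recording (e.g., a parent's voice) used to deliver the reminder.
REMINDERS = [
    {"at": "20:00", "text": "Time to go to bed.", "voice": "parent_recording"},
    {"at": "08:00", "text": "Time to take your medication.", "voice": "default_tone"},
]

def due_reminders(now: datetime, reminders=REMINDERS):
    """Return the reminders whose scheduled hour:minute matches 'now'."""
    stamp = now.strftime("%H:%M")
    return [r for r in reminders if r["at"] == stamp]

# At 20:00, the external computer would proactively cause the figurine
# to speak the bedtime reminder in the recorded voice.
for r in due_reminders(datetime(2024, 1, 1, 20, 0)):
    print(f"[{r['voice']}] {r['text']}")
```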
Voice emulation software can also be used by the computer 14 so that the figurine 12 speaks new words or phrases in a voice emulating that of the parents.

Figure 3 is a more detailed block diagram of a system 30 illustrating the application of the invention to one of the example applications described above, in particular, translation of written words. The system 30 may correspond to the system 10 (Figures 1 and 2). In this case, the image capture device 33 can capture images of a page. For example, the image capture device 33 may comprise a digital camera located in the eyes of the figurine 32 so that, when a child directs the eyes of the figurine 32 towards a page and presses an actuator, the image of the page is captured. The actuator, for example, can be placed on the back of the figurine 32 so that the actuator is easily accessible when the eyes of the figurine 32 are directed towards a page. The image capture device and the actuator, however, can be disposed in other locations on the figurine 32.
The local CPU 34 packages the image and causes the transmitter/receiver 36 to wirelessly communicate the captured image to the external computer 31. The remote CPU 38 analyzes the image and generates a translation, which is communicated back to the figurine 32 by the transmitter/receiver 37. The remote CPU 38 can call software modules 39, 40 to specifically perform optical character recognition 39 and translation 40. Once the image has been translated and the translation has been communicated back to the figurine 32, the output device 35 of the figurine 32 can then convey the translation. Different optical character recognition modules and different translator modules can also be called for different languages. The optical character recognition module 39, for example, can recognize English, and the translator module 40 can translate from English to Spanish. However, any other language can also be supported. In another embodiment, optical character recognition can be performed locally on the figurine, with the more processor-intensive translation performed by the external computer 31.

Other example applications can be supported in a manner similar to that depicted in Figure 3. In the context of face recognition, for example, the image capture device 33 can capture one or more images of a face. The local CPU 34 packages the image and causes the transmitter/receiver 36 to wirelessly communicate the captured image to the external computer 31. The remote CPU 38 executes face recognition software modules to identify the face in the image. Once the face has been identified, the remote CPU 38 can then incorporate that identity into the output of the figurine 32, for example by referring to the user by name. Voice recognition can also be used to make the figurine 32 refer to the user by name.
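The per-language selection of an optical character recognition module and a translator module, in the spirit of modules 39 and 40, can be sketched with dispatch tables. The single-language OCR stub, the word-level translation table, and the language codes are assumptions for illustration only.

```python
# Dispatch tables keyed by language, standing in for the different OCR
# and translator modules the external computer can call.
OCR_MODULES = {
    "en": lambda image: image.decode("ascii"),  # stub: 'image' bytes are the text
}
TRANSLATORS = {
    ("en", "es"): {"hello": "hola", "book": "libro"},  # toy English->Spanish table
}

def translate_image(image: bytes, src: str = "en", dst: str = "es") -> str:
    """Pick the OCR module for the source language, then the translator
    for the (source, destination) language pair, and apply both."""
    text = OCR_MODULES[src](image)
    table = TRANSLATORS[(src, dst)]
    return " ".join(table.get(word, word) for word in text.split())

print(translate_image(b"hello book"))
```

Supporting another language pair would then amount to registering additional entries in the two tables, rather than changing the pipeline.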
In addition, the remote CPU 38 can execute shape recognition software modules, color recognition software modules, object recognition software modules, and quantity recognition software modules. The remote CPU 38 can use these software modules to help a user recognize shapes, count objects, become familiar with colors, and the like. Although a user perceives all the action as presented through the figurine 32, the processor-intensive image processing is actually performed remotely by the external computer 31. An object recognition module, for example, can be designed to recognize money (such as coins) and allow the figurine to teach the child how to count change.
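The change-counting example can be sketched as below, assuming an object recognition module has already reported which coins it saw; the coin names and US cent values are illustrative assumptions.

```python
# Assumed output vocabulary of the object recognition module, with
# values in cents (US coins chosen purely for illustration).
COIN_VALUES = {"penny": 1, "nickel": 5, "dime": 10, "quarter": 25}

def count_change(recognized_coins):
    """Sum the value, in cents, of the coins the recognition module
    reports having identified in the captured image."""
    return sum(COIN_VALUES[coin] for coin in recognized_coins)

# The figurine could then say: "I see 36 cents!"
print(count_change(["quarter", "dime", "penny"]))
```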
The various modules and components described herein may be implemented in hardware, software, firmware, or any combination thereof. The invention is not limited to a particular software or hardware implementation. If implemented in software, the modules can be stored in a computer-readable medium such as a memory or a non-volatile storage medium. Indeed, one advantage of the techniques described herein is that the need for large amounts of memory in a figurine can be avoided. Instead, the memory needed to run memory-intensive applications resides in an external computer.

Additionally, the example applications described above are not mutually exclusive. The external computer 31 can execute any combination of translation, speech recognition, voice recognition, image processing and other types of software modules, and the invention is not limited to systems that perform an individual application to the exclusion of other applications. On the contrary, an advantage of the invention is its versatility. The invention can be adapted to implement one or more applications as desired by the user.

Figures 4-6 are flow diagrams according to some embodiments of the invention, which illustrate the application of the invention to the example applications described above, in particular, translation of written or spoken messages. In the technique shown in Figure 4, the figurine 12 performs voice capture (41), and then transmits the voice data to the external computer 14 (42). The external computer 14 receives the voice data (43) and can perform speech recognition (44) to identify spoken words or phrases. The external computer 14 performs translation of the identified spoken words or phrases (45) and transmits the translation back to the figurine 12 (46). The figurine 12 receives the translation (47) and conveys the translation to the user.
The figurine 12 can drive an output device such as a display screen, thereby providing a written output. A user may find it more desirable, however, to have the figurine 12 activate a loudspeaker in the figurine 12, thereby providing an audible output, such as synthesized speech of the translation. In this way, the figurine 12 acts as a translator of spoken words or phrases, calling on the external computer 14 to reduce local processing on the figurine 12. Also, as mentioned above, in addition to performing speech recognition, the computer 14 can also perform voice recognition so that the figurine 12 only responds to recognized voices, or responds differently, for example by identifying different people, in response to the computer 14 recognizing different voices.
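The Figure 4 flow can be sketched as one function per step, with the numbered comments matching the reference numerals above. The hard-coded utterance and the four-word English-to-Spanish table are assumptions standing in for real audio capture, speech recognition, and translation.

```python
def capture_voice() -> str:
    """Step 41: the figurine captures voice (stubbed as fixed text)."""
    return "where is my book"

def speech_recognition(audio: str):
    """Step 44: the external computer identifies spoken words."""
    return audio.split()

def translate(words) -> str:
    """Step 45: the external computer translates the identified words
    (toy English->Spanish lookup table, assumed for illustration)."""
    table = {"where": "dónde", "is": "está", "my": "mi", "book": "libro"}
    return " ".join(table.get(w, w) for w in words)

audio = capture_voice()            # 41: capture on the figurine
words = speech_recognition(audio)  # 43-44: receive and recognize remotely
translation = translate(words)     # 45-46: translate and transmit back
print(translation)                 # 47: figurine conveys the translation
```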
In the technique shown in Figure 5, the figurine 12 captures an image (51), and then transmits the image to the external computer 14 (52). The external computer 14 receives the image (53) and decodes the image (54), for example by performing optical character recognition. The external computer then translates the characters to generate a translation (55). The external computer 14 then transmits the translation back to the figurine 12 (56). The figurine 12 receives the translation (57) and conveys the translation to the user (58). In this way, the figurine 12 acts as a translator of written words or phrases, calling on the external computer 14 to reduce local processing on the figurine 12. The translation can be conveyed in audio, video or both.

In the technique shown in Figure 6, the figurine 12 captures an image that includes written words or phrases (61), and then transmits the image to the external computer 14 (62). The external computer 14 receives the image (63) and decodes the image (64), for example by performing optical character recognition to identify written words or phrases. The external computer 14 then generates an audio signal (65) as a function of the identified words or phrases. The external computer 14 then transmits the audio signal back to the figurine 12 (66). The figurine 12 receives the audio signal (67) and conveys it to the user. In this way, the figurine 12 appears to read the written words or phrases, calling on the external computer 14 to reduce local processing on the figurine 12.

In the various embodiments described herein, a figurine using wireless communication to harness the computing power of an external computer is described. However, the figurine does not need to be directly coupled to the external computer in order to take advantage of the computing resources of the external device. Figure 7 is a block diagram of a system 70, similar to the system 10.
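The Figure 6 read-aloud flow, decoding an image (64) and generating an audio signal from the identified words (65), can be sketched as below. The ASCII-bytes "image" and the tagged strings standing in for audio frames are assumptions; real OCR and speech synthesis are far more involved.

```python
def optical_character_recognition(image: bytes) -> str:
    """Step 64 stub: this toy 'image' is the ASCII text it depicts."""
    return image.decode("ascii")

def synthesize_audio(text: str):
    """Step 65 stand-in for speech synthesis: represent the audio
    signal as one tagged 'frame' per identified word."""
    return [f"<spoken:{word}>" for word in text.split()]

# 61-66: capture, decode, synthesize, transmit back; the figurine then
# plays the frames so it appears to read the page aloud (67).
frames = synthesize_audio(optical_character_recognition(b"the cat sat"))
print(frames)
```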
In the system 70, however, the figurine 72 communicates wirelessly with the external computer 74 via a wireless hub 75. In particular, the wireless hub 75 communicates wirelessly with the figurine 72, and is coupled to the external computer 74.

Figure 8 illustrates another system 80, similar to the system 10, in which the figurine 82 communicates wirelessly to take advantage of the computing power of the external computer 84. In the system 80, the figurine 82 communicates wirelessly with the external computer 84 via a wireless hub 85 which is coupled to the external computer 84 via a network 86. In particular, the wireless hub 85 communicates wirelessly with the figurine 82, and is coupled to the external computer 84 via the network 86. The network 86 may comprise a small local area network (LAN), a wide area network, or even a global network such as the Internet. The communication between the hub 85 and the external computer 84 can be, but does not need to be, wireless. Importantly, the wireless capabilities of the figurine 82 allow communication with the external computer 84, which in turn allows the figurine 82 to make use of the processing capabilities of the external computer 84.

When a figurine is configured to communicate with a global network such as the Internet, as represented in Figures 8 or 11, the figurine can serve as an input-output device for interaction with the network and other stations or servers coupled to the network 86. In a typical application, the figurine 82 reports information obtained from one or more network servers (not shown). For example, a child may ask the figurine 82, "What is the weather forecast for today?" The question is transmitted to the external computer 84, which has access, through the network 86, to a server that can provide the local forecast. Upon retrieving the local forecast, the external computer 84 supplies that information to the figurine 82, which can answer the child's question.
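The hub-mediated weather question can be sketched as a simple relay. The classes, the keyword-based question matching, and the hard-coded forecast (which would really come from a network server) are all illustrative assumptions.

```python
class ExternalComputer:
    """Stand-in for the external computer 84; the forecast would really
    be retrieved from a server over the network 86."""
    FORECASTS = {"weather": "Sunny, with a high of 72."}

    def handle(self, question: str) -> str:
        for keyword, answer in self.FORECASTS.items():
            if keyword in question.lower():
                return answer
        return "I don't know yet."

class WirelessHub:
    """Stand-in for the wireless hub 75/85: it merely forwards messages
    between the figurine and the external computer."""
    def __init__(self, computer: ExternalComputer):
        self.computer = computer

    def relay(self, question: str) -> str:
        return self.computer.handle(question)

hub = WirelessHub(ExternalComputer())
print(hub.relay("What is the weather forecast for today?"))
```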
The invention supports wireless communication in other configurations as well. The figurine does not need to be directly coupled to the external computer in order to take advantage of the computing resources of the external device. The figurine 82 can communicate wirelessly with any intermediate device, such as another figurine, a wireless access point, or a computer that does not serve as the external computer 84.

Figure 9 is a diagram illustrating a system 90 according to an additional embodiment of the invention. The system 90 includes a figurine 92 and an external computer 94, which communicate either directly or indirectly through one or more wireless communication links. In addition, the system 90 includes a compatible object 95, depicted in Figure 9 as a book. The compatible object 95 can be an object of any type, but in a typical implementation, the compatible object 95 is an accessory to the figurine 92. The compatible object 95 includes a wireless identifier, by which a detector in the figurine 92 can detect the presence of the compatible object 95. An example of such a wireless identifier is a radio frequency identification (RFID) tag 96. The RFID tag 96 can be hidden in the compatible object 95 and not easily observable by the user. An RFID tag reader 98 in the figurine 92 detects and reads the RFID tag 96. Bar codes or other indicia can also be used, in which case the reader 98 will facilitate the reading of these indicia.

The RFID tag 96 is a wireless electronic device that communicates with the RFID tag reader 98. The RFID tag 96 may include an integrated circuit (not shown) and a coil (not shown). The coil can act as a power source, as a receiving antenna, and as a transmitting antenna. The coil can be coupled to a capacitor to store energy when interrogated in order to drive the integrated circuit. The integrated circuit may include wireless communication and memory components. The RFID tag reader 98 may include an antenna and a receiver.
The RFID tag reader 98 can "interrogate" the RFID tag 96 by directing an electromagnetic (i.e., radio) signal at the RFID tag 96. The RFID tag 96 can include, but need not include, an independent power source. In a typical embodiment, the RFID tag 96 receives power from the interrogation signal of the RFID tag reader 98. Upon power up, the RFID tag 96 can perform certain operations, which may include transmitting data stored in the memory of the RFID tag 96 to the RFID tag reader 98. The transmitted data may include an identification of the compatible object 95. In this way, the manufacturer of the figurine 92 can exercise better control over the objects that will be used to interact with the figurine, and can help ensure that a child will not become frustrated, for example, if the figurine 92 were used with an incompatible book or object.

When the RFID tag reader 98 identifies the RFID tag 96, the external computer 94 becomes aware of the compatible object 95 next to the figurine 92. The external computer 94 can use the identity of the compatible object 95 to communicate more effectively with a user. For example, the example compatible object 95 in Figure 9 is a book. When the external computer 94 learns the identity of the book, the external computer 94 can generate output appropriate for that book. The figurine 92, for example, can direct a child's attention to illustrations shown in the book, and explain how the illustrations correspond to the story. The figurine 92 can also describe how the story relates to other books, such as other books that deal with the same characters, or the figurine 92 can explain background information about the story or its author.

Figure 10 is a diagram illustrating a system 100 according to another embodiment of the invention. The system 100 includes a figurine 102 and an external computer 104, which communicate either directly or indirectly via one or more wireless communication links. In addition, the system 100 includes a parent unit 106.
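Looking up the identification transmitted by the tag 96 in a registry of compatible objects, as the interrogation above describes, can be sketched as follows. The tag identifier format and the registry contents are assumptions for illustration; real RFID interrogation involves an air-interface protocol, not a dictionary lookup.

```python
# Assumed registry of compatible objects, keyed by the identification
# that each RFID tag transmits when interrogated.
COMPATIBLE_TAGS = {
    "tag-0096": {"object": "storybook", "title": "The Three Bears"},
}

def interrogate(tag_id: str, registry=COMPATIBLE_TAGS):
    """Stand-in for the reader 98 interrogating a tag: return the
    registered compatible object, or None if the tag is unknown."""
    return registry.get(tag_id)

obj = interrogate("tag-0096")
if obj:
    print(f"Let's read {obj['title']} together!")
else:
    print("I don't recognize that object.")
```

An unknown tag simply yields no match, so the figurine can decline to interact rather than frustrate the child with an incompatible object.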
The parent unit 106 may comprise any device, including, without limitation, a television, a computer, a telephone, a loudspeaker, a video monitor and the like. The parent unit 106 can communicate with the external computer 104 in any way, such as by an electrical connection, an optical link, or radio frequency. The system 100 is configured to serve as a child monitoring system. A parent can place the figurine 102 next to a child so that the figurine 102 can capture video information, audio information or both about the child. The figurine 102 transmits the captured information to the external computer 104. The external computer 104 in turn sends the information to the parent unit 106. In this example, the parent can also communicate in real time with the child through the figurine 102, for example by speaking into a microphone of the parent unit 106. The parent unit 106 may be a separate unit, or may be implemented as a software module running directly on the external computer 104.

In one application, the external computer 104 simply transmits the captured information to the parent unit 106. For example, captured audio and video data showing the child's location, condition and activity can be transmitted to the parent unit 106. In another application, the external computer 104 may also process the captured audio and video data and provide useful information to the parent unit 106. For example, the external computer 104 can process audio data captured by the figurine 102 and determine whether the child is crying, sleeping, breathing abnormally and the like. The external computer 104 can also process video data captured by the figurine 102 and determine whether the child is conscious or has left the bed, or the like.

Figure 11 is a diagram illustrating a system 110 according to another embodiment of the invention. The system 110 is a server-client system in which a server 112 supplies one or more functionalities to one or more client figurine-computer systems 114, 116.
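A very rough sketch of the processed-audio application follows: the external computer classifies captured audio and notifies the parent unit. Treating sustained high sound levels as "possible crying" is a crude stand-in, assumed for illustration, for real audio analysis.

```python
def classify_audio(levels, cry_threshold: int = 80) -> str:
    """Stand-in for audio processing on the external computer 104:
    report sustained high sound levels as possible crying."""
    if not levels:
        return "no audio"
    average = sum(levels) / len(levels)
    return "possible crying" if average >= cry_threshold else "normal"

def notify_parent_unit(status: str) -> str:
    """Forward the derived status to the parent unit 106."""
    return f"parent unit: child status is '{status}'"

# Levels captured by the figurine's microphone (assumed 0-100 scale).
print(notify_parent_unit(classify_audio([85, 90, 88])))
```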
The server 112 manages a database 113 that stores software that can provide figurines with one or more functionalities. The client figurine-computer systems 114, 116 download one or more functionalities from the server 112 via a network 118. The network 118 can comprise any network, including a global network such as the Internet. Examples of functionalities include, without limitation, the functionalities described herein. The owner of the client figurine-computer system 114, for example, may wish that his child's figurine 122 be able to help teach his child about numbers, letters, basic shapes and basic colors. In addition, he may wish that his child's figurine 122 be able to tell stories appropriate for the child's age of four years. Accordingly, the owner of the client figurine-computer system 114 downloads the software for these functionalities from the server 112 via the network 118. The software is stored locally on the external computer 120. In contrast, the owner of the figurine system 116 may want his child's figurine 124 to be able to read a book, to help teach his child to speak and write in English and Spanish, and to play games appropriate for the child's age of six years. Accordingly, the owner of the client figurine-computer system 116 downloads software for these functionalities from the server 112 via the network 118.

With the system 110, each parent can customize their child's figurine for the child's age, needs or wishes. As the child develops, the parent can obtain more advanced functionality. In addition, as new functionalities are developed and added to the database 113, parents can download the new functionalities. As a result, the figurines appear to "grow" with the children, and can be enabled to develop new or more sophisticated functions.
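The download-and-install pattern for functionalities can be sketched as below. The module names, the dictionary standing in for the database 113, and the class standing in for the external computer 120 are illustrative assumptions; a real system would transfer actual software over the network 118.

```python
# Stand-in for the database 113 on the server 112: functionality
# names mapped to (here, trivially) their descriptions.
SERVER_DATABASE = {
    "count-objects": "teaches counting",
    "tell-stories": "tells age-appropriate stories",
    "read-books": "reads books aloud",
}

class FigurineComputer:
    """Stand-in for an external computer (e.g., 120) that stores
    downloaded functionality modules locally."""
    def __init__(self):
        self.installed = {}

    def download(self, name: str, server=SERVER_DATABASE):
        if name not in server:
            raise KeyError(f"server has no functionality named {name!r}")
        self.installed[name] = server[name]

# One parent customizes a figurine for a four-year-old:
pc = FigurineComputer()
pc.download("count-objects")
pc.download("tell-stories")
print(sorted(pc.installed))
```

As new entries are added to the server's database, the same `download` call lets the figurine appear to "grow" without any change to the figurine itself.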
Because the new and more advanced functionality can be executed on the external computer 120, the need to update the figurine 122, which may be important to a child who has become emotionally attached to the figurine 122, can be avoided.

Although the figures may depict an individual figurine with an individual external computer, the invention encompasses embodiments in which an individual external computer interacts with two or more figurines. A parent with two children can give each child a different figurine, and each figurine can communicate wirelessly with the same or a different external computer. Each child will perceive that each figurine operates independently of the other. Additionally, each figurine can be provided with functionality appropriate for each child.

The invention can offer one or more advantages. A child's toy can be very versatile, capable of a wide variety of functionality. In addition, the functionality can be customized to the child, and can change as the child develops. Furthermore, the invention supports an interesting and adaptable system that can help a child learn a wide variety of things, making the interaction with the figurine not only pleasant, but also educational.

It is noted that, in relation to this date, the best method known to the applicant to carry out the aforementioned invention is that which is clear from the present description of the invention.

Claims (29)

CLAIMS Having described the invention as above, the content of the following claims is claimed as property:

1. System, characterized in that it comprises: a figurine that captures an input and wirelessly communicates the input; and a computer that receives the input from the figurine, generates an output based on the input, and wirelessly communicates the output to the figurine, where the figurine presents the output to a user.

2. System according to claim 1, characterized in that: the figurine captures voice data of a user and wirelessly communicates the voice data; and the computer receives the voice data from the figurine, generates an audible response to the voice data, and wirelessly communicates the response to the figurine, where the figurine conveys the audible response to the user.

3. System according to claim 2, characterized in that the computer receives additional data through a network, and where the computer generates an output based on the additional data.

4. System according to claim 1, characterized in that: the figurine captures voice data of a user and wirelessly communicates the voice data; and the computer receives the voice data from the figurine, identifies a person associated with the voice data, and generates an audible response to the voice data identifying the person.

5. System according to claim 1, characterized in that: the captured input comprises voice data of a user; and the computer receives the voice data from the figurine, generates a translation of the voice data, and wirelessly communicates the translation to the figurine, where the figurine conveys the translation to the user.

6. System according to claim 1, characterized in that: the captured input comprises image data including one or more words or phrases; and the computer receives the image data from the figurine, generates a translation of the words or phrases, and wirelessly communicates the translation to the figurine, where the figurine conveys the translation to the user.
7. System according to claim 1, characterized in that: the captured input comprises image data including one or more words or phrases; and the computer receives the image data from the figurine, generates audio data corresponding to the words or phrases, and wirelessly communicates the audio data to the figurine, where the figurine conveys the audio data to the user.

8. System according to claim 1, characterized in that it further comprises a wireless hub, where the figurine communicates wirelessly with the computer through the wireless hub.

9. System according to claim 8, characterized in that it further comprises the Internet, where the figurine communicates wirelessly with the computer over the Internet through the wireless hub.

10. System according to claim 1, characterized in that: the figurine captures image data including an identifiable face or object; and the computer receives the image data from the figurine, determines an identifier that identifies the identifiable face or object, and wirelessly communicates the identifier to the figurine.

11. System according to claim 1, characterized in that: the figurine captures additional data from a compatible object; and the computer generates the output based on the additional data.

12. System according to claim 1, characterized in that the output is a first output and the computer is programmed to proactively make the figurine convey a second output to a user.

13. Interactive toy figurine, characterized in that it comprises: a data capture device for capturing audio or video data; and a wireless transmitter/receiver for wirelessly transferring data captured by the data capture device and receiving output associated with the data captured by the data capture device.

14. Interactive toy figurine according to claim 13, characterized in that the data capture device comprises an image capture device, the figurine further comprising a screen for displaying at least one of an image captured by the image capture device and the output.

15. Interactive toy figurine according to claim 13, characterized in that it further comprises a loudspeaker for conveying audio data associated with the data captured by the data capture device.

16. Interactive toy figurine according to claim 13, characterized in that the data captured by the data capture device includes one or more written words or phrases, and the audio data comprises an audible expression of the words or phrases.

17. Interactive toy figurine according to claim 13, characterized in that the data captured by the data capture device includes one or more words or phrases in a first language, and the audio data comprises a translation of the words or phrases into a second language.

18. Method, characterized in that it comprises: capturing data from a user in a figurine; wirelessly communicating the data to an external computer; receiving a response to the data from the external computer; and conveying the response to the user from the figurine.

19. Method according to claim 18, characterized in that the data comprises voice data captured from the user, wherein conveying the response comprises conveying an audible response to the voice data.

20. Method according to claim 19, characterized in that receiving the response comprises receiving a translation of the voice data, and wherein conveying the response comprises conveying the translation to the user from the figurine.

21. Method according to claim 18, characterized in that the data comprises image data including one or more words or phrases, wherein receiving the response comprises receiving audio data corresponding to the words or phrases, and wherein conveying the response comprises conveying the audio data from the figurine.
22. Method according to claim 18, characterized in that the data comprises image data including one or more words or phrases in a first language, wherein receiving the response comprises receiving a translation of the words or phrases into a second language, and wherein conveying the response comprises conveying the translation from the figurine.

23. Method according to claim 22, characterized in that conveying the translation includes activating a loudspeaker to generate an audible expression of the translation.

24. Method according to claim 22, characterized in that conveying the translation includes activating a screen to generate a visual translation.

25. System, characterized in that it comprises: a computer; and a figurine communicatively coupled to the computer, where the figurine provides input to the computer, receives output from the computer and conveys the output to a user, and where the functionality of the figurine is expandable through updates to the computer.

26. System, characterized in that it comprises: a figurine that captures an input and wirelessly communicates the input; and a parent unit that receives the input and generates an alarm based on the input.

27. System according to claim 26, characterized in that it further comprises a computer that receives the input from the figurine, sends the input to the parent unit and causes the parent unit to generate the alarm.

28. System according to claim 26, characterized in that the input comprises breathing information associated with a child and the parent unit generates an alarm if the child stops breathing.

29. System, characterized in that it comprises: a computer; a figurine communicatively coupled to the computer; and one or more objects compatible with the system, where the figurine interacts with the one or more objects by harnessing the computing power of the computer, and where the objects compatible with the system include indicia identifiable by the figurine so that the computer can control the interaction between the figurine and the object.
MXPA06014212A 2004-06-08 2005-06-07 Figurine using wireless communication to harness external computing power. MXPA06014212A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57810104P 2004-06-08 2004-06-08
PCT/US2005/019933 WO2005123210A2 (en) 2004-06-08 2005-06-07 Figurine using wireless communication to harness external computing power

Publications (1)

Publication Number Publication Date
MXPA06014212A true MXPA06014212A (en) 2007-03-12

Family

ID=35510282

Family Applications (1)

Application Number Title Priority Date Filing Date
MXPA06014212A MXPA06014212A (en) 2004-06-08 2005-06-07 Figurine using wireless communication to harness external computing power.

Country Status (8)

Country Link
US (1) US20060234602A1 (en)
EP (1) EP1765478A2 (en)
JP (1) JP2008506510A (en)
CN (1) CN101193684A (en)
BR (1) BRPI0511898A (en)
CA (1) CA2569731A1 (en)
MX (1) MXPA06014212A (en)
WO (1) WO2005123210A2 (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070073436A1 (en) * 2005-09-26 2007-03-29 Sham John C Robot with audio and video capabilities for displaying advertisements
JP5020593B2 (en) * 2006-10-16 2012-09-05 株式会社日立ソリューションズ Foreign language learning communication system
US7551523B2 (en) * 2007-02-08 2009-06-23 Isaac Larian Animated character alarm clock
US8894461B2 (en) * 2008-10-20 2014-11-25 Eyecue Vision Technologies Ltd. System and method for interactive toys based on recognition and tracking of pre-programmed accessories
TWI338588B (en) * 2007-07-31 2011-03-11 Ind Tech Res Inst Method and apparatus for robot behavior series control based on rfid technology
US7868762B2 (en) * 2007-12-12 2011-01-11 Nokia Corporation Wireless association
US20090197504A1 (en) * 2008-02-06 2009-08-06 Weistech Technology Co., Ltd. Doll with communication function
US10265609B2 (en) * 2008-06-03 2019-04-23 Tweedletech, Llc Intelligent game system for putting intelligence into board and tabletop games including miniatures
US9712359B2 (en) * 2009-04-30 2017-07-18 Humana Inc. System and method for communication using ambient communication devices
US20100325781A1 (en) * 2009-06-24 2010-12-30 David Lopes Pouch pets networking
US8568189B2 (en) * 2009-11-25 2013-10-29 Hallmark Cards, Incorporated Context-based interactive plush toy
US9421475B2 (en) 2009-11-25 2016-08-23 Hallmark Cards Incorporated Context-based interactive plush toy
US20110230116A1 (en) * 2010-03-19 2011-09-22 Jeremiah William Balik Bluetooth speaker embed toyetic
US8801490B2 (en) 2010-12-23 2014-08-12 Lcaip, Llc Smart stuffed toy with air flow ventilation system
WO2012088524A1 (en) * 2010-12-23 2012-06-28 Lcaip, Llc Smart stuffed animal with air flow ventilation system
US9089782B2 (en) * 2010-12-23 2015-07-28 Lcaip, Llc. Smart stuffed toy with air flow ventilation system
US20120185254A1 (en) * 2011-01-18 2012-07-19 Biehler William A Interactive figurine in a communications system incorporating selective content delivery
US20120190453A1 (en) * 2011-01-25 2012-07-26 Bossa Nova Robotics Ip, Inc. System and method for online-offline interactive experience
JP5844288B2 (en) * 2011-02-01 2016-01-13 Panasonic Intellectual Property Corporation of America Function expansion device, function expansion method, function expansion program, and integrated circuit
US9126122B2 (en) * 2011-05-17 2015-09-08 Zugworks, Inc Doll companion integrating child self-directed execution of applications with cell phone communication, education, entertainment, alert and monitoring systems
US20130078886A1 (en) * 2011-09-28 2013-03-28 Helena Wisniewski Interactive Toy with Object Recognition
GB2496169B (en) * 2011-11-04 2014-03-12 Commotion Ltd Toy
US9492762B2 (en) 2012-05-08 2016-11-15 Funfare, Llc Sensor configuration for toy
US9565402B2 (en) * 2012-10-30 2017-02-07 Baby-Tech Innovations, Inc. Video camera device and method to monitor a child in a vehicle
US11020680B2 (en) * 2012-11-15 2021-06-01 Shana Lee McCart-Pollak System and method for providing a toy operable for receiving and selectively vocalizing various electronic communications from authorized parties, and for providing a configurable platform independent interactive infrastructure for facilitating optimal utilization thereof
US20140349547A1 (en) * 2012-12-08 2014-11-27 Retail Authority LLC Wirelessly controlled action figures
US20140162230A1 (en) * 2012-12-12 2014-06-12 Aram Akopian Exercise demonstration devices and systems
US20140256214A1 (en) * 2013-03-11 2014-09-11 Raja Ramamoorthy Multi Function Toy with Embedded Wireless Hardware
US9610500B2 (en) 2013-03-15 2017-04-04 Disney Enterprises, Inc. Managing virtual content based on information associated with toy objects
US9011194B2 (en) 2013-03-15 2015-04-21 Disney Enterprises, Inc. Managing virtual content based on information associated with toy objects
EP2777786A3 (en) * 2013-03-15 2014-12-10 Disney Enterprises, Inc. Managing virtual content based on information associated with toy objects
KR101504699B1 (en) * 2013-04-09 2015-03-20 Yally Inc. Phonetic conversation method and device using wired and wireless communication
US20140329433A1 (en) * 2013-05-06 2014-11-06 Israel Carrero Toy Stuffed Animal with Remote Video and Audio Capability
KR101458460B1 (en) * 2013-05-27 2014-11-12 Magic Edu Co., Ltd. 3-dimensional character and album system using the same
US9406240B2 (en) * 2013-10-11 2016-08-02 Dynepic Inc. Interactive educational system
JP6174543B2 (en) * 2014-03-07 2017-08-02 Mooredoll Technology Co., Ltd. Method and apparatus for controlling a doll and operating an interactive doll via an application
US20150290548A1 (en) * 2014-04-09 2015-10-15 Mark Meyers Toy messaging system
KR101623167B1 (en) * 2014-05-16 2016-05-24 수상에스티(주) Monitoring system for baby
WO2015195550A1 (en) * 2014-06-16 2015-12-23 Watry Krissa Interactive cloud-based toy
KR102156536B1 (en) 2014-06-23 2020-09-16 Shin-Etsu Chemical Co., Ltd. Crosslinked organopolysiloxane and method for producing same, mist suppressant, and solvent-free silicone composition for release paper
US9931572B2 (en) 2014-09-15 2018-04-03 Future of Play Global Limited Systems and methods for interactive communication between an object and a smart device
TWI559966B (en) * 2014-11-04 2016-12-01 Mooredoll Inc. Method and device for toy-centered community interaction
JP5866539B1 (en) * 2014-11-21 2016-02-17 Panasonic IP Management Co., Ltd. Communication system and sound source reproduction method in communication system
US10616310B2 (en) 2015-06-15 2020-04-07 Dynepic, Inc. Interactive friend linked cloud-based toy
US10405745B2 (en) 2015-09-27 2019-09-10 Gnana Haranth Human socializable entity for improving digital health care delivery
JP6680125B2 (en) * 2016-07-25 2020-04-15 Toyota Motor Corporation Robot and voice interaction method
US20180158458A1 (en) * 2016-10-21 2018-06-07 Shenetics, Inc. Conversational voice interface of connected devices, including toys, cars, avionics, mobile, iot and home appliances
US10783799B1 (en) * 2016-12-17 2020-09-22 Sproutel, Inc. System, apparatus, and method for educating and reducing stress for patients with illness or trauma using an interactive location-aware toy and a distributed sensor network
US10441879B2 (en) * 2017-03-29 2019-10-15 Disney Enterprises, Inc. Registration of wireless encounters between wireless devices
KR102295836B1 (en) * 2020-11-20 2021-08-31 Aurora World Corp. Apparatus and system for growth-type smart toy

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2516425Y2 (en) * 1990-12-11 1996-11-06 Takara Co., Ltd. Operating device
JPH0731748A (en) * 1992-12-08 1995-02-03 Steven Lebensfeld Toy doll responsive to visual and verbal stimuli
US6947571B1 (en) * 1999-05-19 2005-09-20 Digimarc Corporation Cell phones with optical capabilities, and related applications
US5945656A (en) * 1997-05-27 1999-08-31 Lemelson; Jerome H. Apparatus and method for stand-alone scanning and audio generation from printed material
US6159101A (en) * 1997-07-24 2000-12-12 Tiger Electronics, Ltd. Interactive toy products
US6554679B1 (en) * 1999-01-29 2003-04-29 Playmates Toys, Inc. Interactive virtual character doll
US7261612B1 (en) * 1999-08-30 2007-08-28 Digimarc Corporation Methods and systems for read-aloud books
US6227931B1 (en) * 1999-07-02 2001-05-08 Judith Ann Shackelford Electronic interactive play environment for toy characters
US6719604B2 (en) * 2000-01-04 2004-04-13 Thinking Technology, Inc. Interactive dress-up toy
US6773344B1 (en) * 2000-03-16 2004-08-10 Creator Ltd. Methods and apparatus for integration of interactive toys with interactive television and cellular communication systems
US6443796B1 (en) * 2000-06-19 2002-09-03 Judith Ann Shackelford Smart blocks
US7008288B2 (en) * 2001-07-26 2006-03-07 Eastman Kodak Company Intelligent toy with internet connection capability
CN106648146B (en) * 2002-09-26 2021-02-02 Kenji Yoshida Dot pattern, information reproducing method using dot pattern, and input/output method
US7248170B2 (en) * 2003-01-22 2007-07-24 Deome Dennis E Interactive personal security system

Also Published As

Publication number Publication date
CN101193684A (en) 2008-06-04
EP1765478A2 (en) 2007-03-28
CA2569731A1 (en) 2005-12-29
US20060234602A1 (en) 2006-10-19
WO2005123210A3 (en) 2008-02-14
JP2008506510A (en) 2008-03-06
WO2005123210A2 (en) 2005-12-29
BRPI0511898A (en) 2008-01-15

Similar Documents

Publication Publication Date Title
MXPA06014212A (en) Figurine using wireless communication to harness external computing power.
CN110609620B (en) Human-computer interaction method and device based on virtual image and electronic equipment
US8172637B2 (en) Programmable interactive talking device
US10957325B2 (en) Method and apparatus for speech interaction with children
CN105126355A (en) Child companion robot and child companionship system
US5982853A (en) Telephone for the deaf and method of using same
CN110998725B (en) Generating a response in a dialog
JP2019521449A (en) Persistent Companion Device Configuration and Deployment Platform
CN106200886A (en) Intelligent mobile toy with language-based interactive control, and method of using the toy
CN109074117A (en) Computer-readable cognitive memory with built-in storage and cognitive insights, based on personal mood, for boosting memory and decision making
JP2003205483A (en) Robot system and control method for robot device
US20180272240A1 (en) Modular interaction device for toys and other devices
Polite et al. A pernicious silence: Confronting race in the elementary classroom
Tan et al. iSocioBot: a multimodal interactive social robot
CN109015647A (en) Interactive education robot system and terminal thereof
JP2003108362A (en) Communication supporting device and system thereof
JP2001338077A (en) Language lesson method through internet, system for the same and recording medium
Eden Technology Makes Things Possible
US20210136323A1 (en) Information processing device, information processing method, and program
JP2014161593A (en) Toy
JP4741817B2 (en) Audio output device, character image display device, audio output method, and character image display method
Venkatagiri Clinical implications of an augmentative and alternative communication taxonomy
WO2019190817A1 (en) Method and apparatus for speech interaction with children
JP2002041279A (en) Agent message system
KR102652008B1 (en) Method and apparatus for providing a multimodal-based english learning service applying native language acquisition principles to a user terminal using a neural network

Legal Events

Date Code Title Description
FA Abandonment or withdrawal