WO2017152167A1 - Interactive toy device, and systems and methods of communication between the same and network devices - Google Patents

Interactive toy device, and systems and methods of communication between the same and network devices

Info

Publication number
WO2017152167A1
Authority
WO
WIPO (PCT)
Prior art keywords
toy
toy device
user
data
sensor
Prior art date
Application number
PCT/US2017/020899
Other languages
French (fr)
Inventor
Gauri Nanda
Audry HILL
Samir Roy
Aron STEG
Original Assignee
Toymail Inc.
Priority date
Filing date
Publication date
Application filed by Toymail Inc. filed Critical Toymail Inc.
Publication of WO2017152167A1 publication Critical patent/WO2017152167A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H3/00 Dolls
    • A63H3/28 Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H5/00 Musical or noise-producing devices for additional toy effects other than acoustical
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H2200/00 Computerized interactive toys, e.g. dolls

Definitions

  • Talking toys are meant to affect and improve the child's interaction with the toy.
  • a talking toy is typically designed to operate in the company of a single child.
  • the talking toy does not change its behavior when the child is playing with other children with their own toys.
  • a parent wishes to communicate with a child remotely, or vice versa.
  • normal communications devices are not suitable for small children. For example, a smartphone would bring privacy concerns and many small children cannot operate them properly.
  • Implementations of the disclosed subject matter provide systems, methods, and interactive toy devices so that parents, guardians, relatives, and friends may engage a child in audio and/or video communication, games, activities, education, and the like. Implementations of the disclosed subject matter may provide systems, methods, and interactive toy devices so that children may communicate with one another via the toy devices.
  • a first computing device may receive from a first toy device that is remote from the first computing device and that is associated with a first user, a first sensor data detected by a first sensor in communication with the first toy device.
  • the first computing device may compare at least the first sensor data to a plurality of conditions.
  • the first computing device may determine that a first condition of the plurality of conditions is satisfied based on the comparison of at least the first sensor data to the plurality of conditions.
  • the first computing device may select a first contact from among a plurality of contacts based on an association of the first contact with the first condition.
  • the plurality of contacts may each be approved by a second user for the first user.
  • the first computing device may provide to a second computing device that is remote from the first computing device and that is associated with the first contact, an indicator of a first status of the first toy device, the first status associated with the satisfied first condition.
  • the first computing device may receive from the second computing device, an indicator of a first toy behavior, and provide to the first toy device, an instruction to perform the first toy behavior.
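The server-side flow in the bullets above (receive sensor data, compare it to conditions, select the contact tied to the first satisfied condition, and emit a status indicator) can be sketched as follows. All names here (Condition, match_condition, the example conditions and contacts) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: match incoming sensor data against a set of conditions,
# select the contact associated with the first satisfied condition, and build
# a status indicator for that contact's device.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Condition:
    name: str                     # status name, e.g. "toy_shaken"
    test: Callable[[dict], bool]  # predicate over raw sensor data
    contact: str                  # approved contact tied to this condition

def match_condition(sensor_data: dict, conditions: list) -> Optional[Condition]:
    """Return the first condition satisfied by the sensor data, if any."""
    for cond in conditions:
        if cond.test(sensor_data):
            return cond
    return None

conditions = [
    Condition("toy_shaken", lambda d: d.get("acceleration", 0) > 2.5, "parent"),
    Condition("toy_idle", lambda d: d.get("idle_minutes", 0) > 60, "guardian"),
]

hit = match_condition({"acceleration": 3.1}, conditions)
status = {"status": hit.name, "notify": hit.contact} if hit else None
```

In this sketch the indicator returned to the second computing device carries both the satisfied condition's status name and the contact to notify.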
  • a non-transitory, computer readable medium may store instructions that, when executed by a processor, cause the processor to perform operations including receiving, from a first computing device by a second computing device that is remote from the first computing device, an indicator of a status of a toy device that is associated with sensor data collected by a sensor in communication with the toy device, the toy device associated with a first user.
  • the operations may further include presenting, on a display of the second computing device, a representation of the status of the toy device; and receiving, by the second computing device, a selection of a toy behavior for the toy device by a first contact of a plurality of contacts, the plurality of contacts each approved by a second user for the first user.
  • the operations may further include, in response to the selection of the toy behavior by the first contact, providing, by the second computing device to the first computing device, an instruction for the toy device to perform the selected toy behavior.
  • a toy device may include a sensor, a network interface, a processor; and a non-transitory, computer readable medium in communication with the sensor, the network interface, and the processor.
  • the non-transitory, computer readable medium may store instructions that, when executed by the processor, cause the processor to perform operations including collecting, by the sensor, a sensor data that satisfies a condition associated with a status of the toy device, the toy device associated with a first user, and providing, by the network interface, the sensor data to a first computing device that is remote from the toy device, the first computing device executing an application approved by a second user for communication with the toy device.
  • the operations may further include receiving, by the network interface from the first computing device, an instruction to perform a toy behavior selected on a second computing device by a first contact of a plurality of contacts, the plurality of contacts each approved by the second user for the first user, and executing, by the processor, instructions to perform the toy behavior.
  • a method for setting, at a mobile computing device, a profile of a toy device that includes one or more approved contacts and user interest information.
  • the method includes recording, at the toy device, a message that includes at least an audio portion, and receiving, at the toy device, a selection of at least one of the approved contacts.
  • the method includes transmitting the recorded message to at least one device associated with the selected at least one of the approved contacts.
  • a method for receiving, at a toy device, a message that includes at least an audio portion.
  • the method includes receiving, at a user interface of the toy device, a selection of an option to output at least the audio portion of the message, and when an audio output device of the toy device is outputting the audio portion of the message based on the selected option, changing the waveform of an audio signal of the audio portion based on sensor data collected at the toy device so as to change at least one characteristic of the outputted audio portion.
  • a method for detecting, at a sensor, a sensor data.
  • a status of the toy device may be determined based on the detected sensor data.
  • An indicator of the determined status may be provided.
  • the toy device may receive an instruction for a toy behavior.
  • the toy device may execute the toy behavior based on the received instruction.
  • a method for receiving an indicator of a status of a toy device.
  • a toy behavior may be selected based on the interaction status.
  • An instruction may be provided to the toy device to execute the selected toy behavior.
  • a method for receiving an indicator of a status of a toy device.
  • a usage level of a user may be updated based on the status.
  • the updated usage level may be compared to a usage threshold.
  • a toy reward may be determined based on the comparison, and an instruction may be provided to the toy device to execute the toy reward.
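The usage-reward steps above (update a usage level from a status, compare to a threshold, determine a reward) might look like this minimal sketch. The status names, increments, threshold, and reward string are invented for illustration and are not specified by the patent.

```python
# Illustrative usage-reward logic: accumulate a usage level from reported
# statuses, then grant a reward once a threshold is reached.
USAGE_INCREMENTS = {"message_sent": 2, "motion_detected": 1, "game_played": 3}

def update_usage(level, status):
    """Increase the tracked usage level based on a reported toy status."""
    return level + USAGE_INCREMENTS.get(status, 0)

def reward_for(level, threshold=10):
    """Return a toy-reward instruction once the usage threshold is reached."""
    return "unlock_new_song" if level >= threshold else None

level = 0
for status in ["game_played", "message_sent", "game_played", "message_sent"]:
    level = update_usage(level, status)
reward = reward_for(level)  # level is 10 here, so a reward is granted
```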
  • a method for receiving a source data for a toy device.
  • the source data may be compared to a context condition.
  • the context condition may be determined to be satisfied based on the comparison.
  • a toy behavior may be selected based on a context associated with the context condition, and an instruction may be provided to the toy device to execute the selected behavior.
  • a method for detecting a proximity of a first toy device to a second toy device.
  • An identifier of the second toy device may be received and provided to a device.
  • the first toy device may receive an instruction to execute a selected behavior, and the first toy device may execute the selected behavior based on the received instruction.
  • a method for receiving an identifier of a second toy device and comparing the received identifier to an interaction history of a first toy device.
  • a toy behavior may be selected based on the comparison, and the first toy device may be provided an instruction to execute the behavior.
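The identifier/interaction-history comparison in the bullets above could be sketched as follows; the identifiers, history fields, and behavior strings are all invented for this example.

```python
# Hedged sketch of toy-to-toy recognition: look up a detected peer toy's
# identifier in this toy's interaction history and pick a behavior.
def select_behavior(peer_id, history):
    """Choose a toy behavior based on prior interactions with the peer toy."""
    past = history.get(peer_id)
    if past is None:
        return "greet_new_friend"  # never interacted with this toy before
    if "last_game" in past:
        # e.g. suggest replaying the game and remind who won last time
        return "suggest_replay:{}:last_winner={}".format(
            past["last_game"], past.get("last_winner", "nobody"))
    return "greet_known_friend"

history = {"toy-42": {"last_game": "spelling_contest", "last_winner": "toy-42"}}
```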
  • FIG. 1 shows a circuit configuration of a toy device according to an implementation of the disclosed subject matter.
  • FIGS. 2A-2C show front and back views of a toy device according to an implementation of the disclosed subject matter.
  • FIG. 3 shows a computing device according to an implementation of the disclosed subject matter.
  • FIG. 4 shows a network configuration according to an implementation of the disclosed subject matter.
  • FIGS. 5-8 show example displays of a mobile computing device according to an implementation of the disclosed subject matter.
  • FIG. 9 shows an example method of communicating recorded messages between toy devices and/or other approved network devices according to an implementation of the disclosed subject matter.
  • FIG. 10 shows an example method of changing the output characteristics of a message according to an implementation of the disclosed subject matter.
  • FIG. 11 shows an example method for determining toy usage according to an implementation of the disclosed subject matter.
  • FIG. 12 shows an example method for determining a toy usage according to an implementation of the disclosed subject matter.
  • FIG. 13 shows an example method for providing a reward based on usage according to an implementation of the disclosed subject matter.
  • FIG. 14 shows an example method for determining a toy behavior based on context according to an implementation of the disclosed subject matter.
  • FIG. 15 shows an example method of determining a toy behavior based on toy proximity detection according to an implementation of the disclosed subject matter.
  • FIG. 16 shows an example method of determining a toy behavior based on toy proximity detection according to an implementation of the disclosed subject matter.
  • a toy device may record a message (e.g., an audible message, a video message, a message including audio and a still image, or the like) that may be transmitted to a computing device (e.g., a mobile computing device), and/or may receive a message from the computing device.
  • the toy device may be configured so that it may communicate messages between devices associated with a parent, guardian, relative, and/or friend of the user of the toy device.
  • two or more toy devices may be configured so as to communicate audio and/or video messages, and/or sound effects (e.g., audible emojis, as discussed in detail below).
  • toy devices may be communicatively coupled to one another so that one toy device user may communicate with another toy device user.
  • a child with a toy device may communicate with one or more other users having toy devices.
  • a user of a toy device may communicate with a user having, for example, a mobile computing device.
  • Parental and/or network controls may be provided, at least in part, by the toy device and/or the mobile computing device so as to control and/or manage when the toy device may receive and/or transmit messages, with whom messages may be exchanged (e.g., parents, guardians, relatives, friends, or the like), and what kind and content of messages may be exchanged.
  • the parental and/or network controls may control and/or manage toy-to-device communications and/or toy-to-toy communications.
  • the implementation may provide for a user of a toy device to set and/or modify a network profile that is accessible via the toy device.
  • the profile may include one or more devices and/or contacts (e.g., parent, guardian, relative, friends, or the like) that may be approved (e.g., by a parent and/or guardian) for the toy device to communicate with (e.g., transmit messages to and/or receive messages from).
  • the profile may include information about the user of the toy device, such as the user's name, the toy's name, and the user's interests, such as a favorite food, game, song, or favorite color.
  • the toy device may manipulate the waveform of an audio signal of a received message so as to alter the character of sounds emitted from the toy device.
  • the pitch, frequency, amplitude, envelope, distortion, or the like of the waveform of the audio signal may be manipulated by the toy device.
  • the waveform of the audio signal may be modified based on input from one or more sensors of the toy device, such as a motion sensor, a pressure sensor, an orientation detector, or the like.
  • the waveform may be altered based on a magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor.
  • the sensor may detect that the toy device is turned to the right and/or clockwise, and the pitch of the sound output by the toy device may be manipulated so as to increase.
  • the pitch of the sound may be manipulated so as to decrease.
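One simple way to realize the rotation-to-pitch mapping described above is to derive a playback-rate factor from the sensed angular velocity; a real toy would then resample or time-stretch the audio buffer by that factor. The sensitivity and clamp values below are assumptions for illustration only.

```python
# Map sensed rotation (positive = clockwise) to a playback-pitch factor:
# clockwise turns raise the pitch, counter-clockwise turns lower it. The
# factor is clamped so extreme motion cannot make the audio unintelligible.
def pitch_factor(rotation_deg_per_s, sensitivity=0.005):
    """Return a playback-rate multiplier for the given angular velocity."""
    factor = 1.0 + sensitivity * rotation_deg_per_s
    return max(0.5, min(2.0, factor))  # clamp to half/double pitch

pitch_factor(100.0)   # clockwise turn: factor 1.5, higher pitch
pitch_factor(-100.0)  # counter-clockwise turn: factor 0.5, lower pitch
```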
  • the toy device may manipulate the waveform of an audio signal received and/or recorded from an audio input device (e.g., microphone) so as to alter the character of sounds emitted from the toy device when a selection is made to output and/or playback the received and/or recorded audio.
  • a child may record their voice using the audio input device, manipulate the waveform of the audio so as to alter the characteristics of the signal, and output and/or playback the manipulated audio.
  • the child may change the pitch, frequency, distortion, or the like of the recorded audio input so as to make the output of their recorded audio sound funny, scary, or the like.
  • the recording, manipulation of audio, and output of the manipulated audio may be for non- messaging purposes (e.g., the child may amuse themselves by making funny voice recordings and playing them back).
  • the output audio may be further manipulated when one or more sensors detect a change in motion, orientation, or the like of the toy device, as discussed in detail below.
  • a mobile computing device may monitor the usage of one or more toy devices based on detecting the motion of the toy device and/or a child's engagement with the toy device.
  • One or more usages and/or patterns of use of the toy device may trigger one or more actions and/or events by the toy device.
  • the processor and/or application executed by the toy device may operate so as to invite a child user to play with it if no usage has been detected for a predetermined time period.
  • the toy device may transmit notifications to a mobile computing device (e.g., operated by a parent and/or guardian) when, for example, movement sensor measurements reach a threshold level, the voice filter is used, and/or the toy device responds to the child's behavior (e.g., the toy device outputs a predetermined response based on the child user's input).
  • the parent and/or guardian may be invited to communicate with the toy device (e.g., via the mobile computing device) when notified that the child is playing with the toy device.
  • One or more messages may be transmitted to other toy devices associated with the child's network to participate when the child plays with the toy device.
  • the parent, guardian, and/or authorized user may communicate directly with the child via the toy device using a mobile computing device and/or set automatic responses (e.g., the toy device may play recorded messages of encouragement or instruction stored on the toy device).
  • the automatic responses may be configured using toy device specific communications options in an application being executed at least in part by the mobile computing device, such as 'audible emojis' that, when received by the toy device (e.g., by themselves and/or in a transmitted message), may configure the toy device to output one or more sounds.
  • usage-based rewards may be provided. Usage of the toy device may be tracked and/or monitored, and rewards may be provided based on the type of use, the frequency of use, the amount of time of use over a predetermined period, or the like. For example, total motion can be tracked with the accelerometer, or the number of messages sent can be tracked by the application. In response to reaching one or more usage thresholds, the child may receive one or more messages from a parent and/or guardian, one or more new songs may be received, one or more new features may be downloaded to the toy device, new toy devices may be purchased, or the like.
  • An application that is executed at least in part by a mobile computing device may track an account (e.g., a child account associated with one or more toy devices) with tokens corresponding to the amount and/or types of usage.
  • the toy device may interact with a child, such as with educational games, brushing teeth, or the like.
  • the toy device may conduct a spelling contest, and an application executed at least in part by a mobile computing device may track and/or monitor a child's performance via the toy device in the game. The better the child performs, the more features may be available for use in the toy device.
  • one or more features of the toy device may change based on context, such as time and date.
  • the sound of the toy device's voice may sound different in winter months, or may sound happy on a birthday of the child user.
  • the toy device may be configured so as to output audio and/or video messages at predetermined times and/or on selected dates.
  • the toy device may output a Valentine's Day message on February 14th, or remind the user to put on suntan lotion in the summer.
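The date-based context behavior described above amounts to comparing a source date against stored context conditions and selecting the associated behavior, as in this minimal sketch. The conditions and behavior names are invented, and the summer check assumes the northern hemisphere.

```python
# Minimal date-context check: return a toy behavior when the given date
# satisfies a stored context condition, else None.
import datetime

def behavior_for_date(today):
    """Select a context-dependent toy behavior for the given date."""
    if (today.month, today.day) == (2, 14):
        return "play_valentines_message"
    if today.month in (6, 7, 8):  # assumed northern-hemisphere summer
        return "remind_suntan_lotion"
    return None

behavior_for_date(datetime.date(2017, 2, 14))  # Valentine's Day message
```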
  • a toy device may detect (e.g., using one or more sensors) when it is in proximity to other networked toy devices through Near Field Communication protocols, Bluetooth, GPS, or the like. Toy devices that have interacted (e.g., exchanged messages and/or information) and/or have played together before may have previous interactions and/or information stored from the interaction. For example, two proximate toy devices may have previously been used to play a game.
  • determined proximity between the toy devices may configure at least one of the toy devices to output a suggestion to play the game again, a reminder of who won last time, or the like.
  • a toy device may "learn" a child's (and parent's) preferences in terms of the content of audio and/or video messages, types of play, times of play, the timing and nature of the child's desire to communicate with a parent, guardian, relative, and/or friend via the toy device, and the like.
  • FIG. 1 is an example toy device 100a, 100b suitable for implementations of the presently disclosed subject matter.
  • the toy device 100a, 100b may include a bus 120 which interconnects major components of the device 100a, 100b, such as sensors 102a, 102b (e.g., a motion sensor, a smoke sensor, a carbon monoxide sensor, a proximity sensor, a temperature sensor, a time sensor, a physical orientation sensor, an acceleration sensor, a location sensor, a pressure sensor, a light sensor, a passive and/or active infrared sensor, a water sensor, a microphone and/or sound sensor, or the like), a network interface 104 operable to communicate with one or more remote devices via a suitable network connection, a processor 106, a memory 108 such as Random Access Memory (RAM), Read Only Memory (ROM), flash RAM, or the like, an image sensor 109 (e.g., a camera to capture video and/or still images), an audio input device 110 such as a microphone, an audio output device 112 such as one or more speakers, a user interface 114, a fixed storage 116, a removable media 118, and a display 120.
  • the bus 120 allows data communication between the processor 106 and one or more memory components, which may include RAM, ROM, and other memory, as previously noted.
  • RAM is the main memory into which an operating system and application programs are loaded.
  • a ROM or flash memory component can contain, among other code, the Basic Input-Output system (BIOS) that controls basic hardware operation such as the interaction with peripheral components.
  • Applications resident with the device 100a, 100b are generally stored on and accessed via a computer readable medium, such as a solid state drive (SSD) (e.g., fixed storage 116), hard disk drive, an optical drive, floppy disk, or other storage medium.
  • the sensor 102a, 102b may be an accelerometer and/or other suitable sensor to detect motion of the toy device 100a, 100b.
  • the motion sensor may detect the movement of the toy device 100a, 100b by a child, such as shaking, rotating, horizontal and/or vertical movement, changing the orientation of the toy device 100a, 100b, or the like.
  • the sensor 102a, 102b may be a physical orientation sensor so as to determine whether the toy device 100a, 100b is oriented, for example, upside-down, right-side-up, on its side, or the like.
  • the sensor 102a, 102b may be a location sensor so that the toy device 100a, 100b may determine its position in its environment (e.g., a room, whether it is next to another toy 100a, 100b, or the like).
  • the sensor 102a, 102b may be a smoke sensor, a carbon monoxide sensor, and/or water sensor to detect smoke, carbon monoxide, and/or water.
  • the toy device 100a, 100b may output a warning via the audio output device 112, the display 120, and/or to a device 10, 11, server 13, and/or remote platform 17 via the network interface 104.
  • an alert in the form of an audible alarm, text message, voice message or telephone call can be made to one or more parents or guardians or other third parties, such as the police, fire, and emergency medical service providers.
  • the sensor 102a, 102b may be a proximity sensor and/or a location sensor which may determine where the toy device 100a, 100b is located (e.g., within a home) and/or whether the toy device 100a, 100b is proximately located next to a user (e.g. a child, parent, relative, friend, or the like).
  • the user may have a wearable device (e.g., a band, watch, or the like) which may output a radio frequency and/or other signal (e.g., which may include identifier information), which may be detected by the sensor 102a, 102b of the toy device (e.g., where the identifier information may be used to determine whether the user is registered with the toy device 100a, 100b).
  • the sensor 102a, 102b may be a pressure sensor that may detect pressure applied to the toy device 100a, 100b by a user.
  • the output of a received audible message via the network interface 104 may be changed (e.g., a waveform of the audio signal may be altered to change the pitch, frequency, amplitude, envelope, or the like) during the output of the audible message by the audio output device 112.
  • the waveform of the audio signal may be changed when the value detected by the sensor 102a, 102b is greater than a predetermined threshold value.
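As a concrete (assumed) example of the threshold rule above, a squeeze that drives the pressure reading past a threshold could scale the amplitude of the outgoing audio samples, one simple form of changing the waveform. The threshold and gain values below are invented for this illustration.

```python
# Amplify (and clip) audio samples while the sensed pressure exceeds a
# threshold; below the threshold, samples pass through unchanged.
PRESSURE_THRESHOLD = 0.8  # normalized pressure-sensor reading
SQUEEZE_GAIN = 2.0        # amplitude boost applied while squeezed

def process_samples(samples, pressure):
    """Return audio samples, amplitude-boosted if the toy is being squeezed."""
    if pressure <= PRESSURE_THRESHOLD:
        return samples
    # scale each sample and clip to the valid [-1.0, 1.0] range
    return [max(-1.0, min(1.0, s * SQUEEZE_GAIN)) for s in samples]

process_samples([0.25, -0.5, 0.75], 0.9)  # -> [0.5, -1.0, 1.0]
```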
  • one or more sounds may be digitally stored in the memory
  • the toy device 100a, 100b may output a "WHEEEE!" sound in response to the detected movement.
  • the toy device 100a, 100b may output an "Ouch!" sound in response to the detected impact.
  • the toy device may output a snoring sound in response to the detected movement.
  • the processor 106 may put the toy device 100a, 100b into a sleep mode with reduced functionality (e.g., the toy device 100a, 100b may not transmit or receive messages to preserve battery life, or the like).
  • the processor 106 may execute a "bedtime routine" application which may prompt a child to brush their teeth, or the toy device 100a, 100b may output a bedtime story or a bedtime song.
  • the toy device 100a, 100b may output a song (e.g., selected from a list of songs stored by the toy device 100a, 100b) in response to the detected motion by the sensor 102a, 102b.
  • one or more stored sounds may be output by the audio output device 112.
  • the processor 106 may manipulate the waveform of the audio signal of the sound being output to change the pitch, frequency, amplitude, envelope, or the like. The manipulation of the waveform may be based, at least in part, on data captured by the sensor 102a, 102b.
  • the interaction between a user and the toy device 100a, 100b may enable the toy device 100a, 100b to output stored sounds and/or manipulate the output of the stored sounds based on the data captured by the sensors 102a, 102b.
  • the interaction between the user and the toy device 100a, 100b may further manipulate the output of a user's voice and/or other sounds that have been recorded and have been processed by changing the pitch, frequency, or the like.
  • although FIG. 1 shows sensors 102a and 102b, any suitable number of sensors may be included on toy device 100a, 100b.
  • one or more sensors 102a, 102b may be arranged in a single sensor package device.
  • the network interface 104 may provide a wired and/or wireless communicative coupling between the toy devices 100a and 100b, between the toy device 100a, 100b and a computer and/or mobile computing device (e.g., device 10, 11 shown in FIG. 4), and/or between the toy device 100a, 100b and one or more devices coupled to the network 7 shown in FIG. 4, such as a server 13, a database 15, and/or a remote platform 17.
  • the network interface 104 may provide such connection using any suitable technique and protocol as will be readily understood by one of skill in the art, including digital cellular telephone, Wi-Fi, Bluetooth(R), near-field, and the like.
  • the network interface 104 may allow the toy device 100a, 100b to communicate with other toy devices, mobile devices, computers, servers, and the like via one or more home, local, wide-area, or other communication networks, as described in further detail below.
  • the processor 106 may be any suitable processor, controller, field programmable gate array, programmable logic device, integrated circuit, or the like.
  • the processor 106 may include an audio signal processor that may manipulate and/or change audio waveform signals (e.g., based in part on data received from the sensor 102a, 102b).
  • the audio signal processor may be a separate device from the processor 106, and may be coupled to the devices of the toy device 100a, 100b via the bus 120.
  • the image sensor 109 may be a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) image sensor, and/or any other suitable camera and/or image capture device to capture still images and/or video.
  • the audio input device 110 may be a microphone and/or other suitable voice sensor that may detect a voice and/or other audible sound.
  • the audio input device 110 may convert the detected audible sound into an electrical signal (e.g., an analog signal), and may convert the electrical signal into a digital signal that represents detected audible sound.
  • the audio input device 110 may receive one or more voice commands and/or information to be stored in memory 108, fixed storage, and/or removable media 118.
  • the audio input device 110 may be used to record an audible message, access a network profile of a user and/or a toy device 100a, 100b, provide network profile information, or the like.
  • the processor 106 may perform one or more operations based on the voice command received by the audio input device 110.
  • the audio output device 112 may be one or more speakers to output audio signals that may be stored in the fixed storage 116, the removable media 118, and/or may be received from the network interface 104 and/or the processor 106.
  • the user interface 114 may be one or more buttons, touch input devices, or the like.
  • the user interface 114 may be used to play audio messages received by the toy device 100a, 100b via the network interface 104.
  • the user interface may be used to record and transmit audio messages to other toy devices 100a, 100b, and/or to one or more mobile computing devices, computers, servers, remote services, and the like.
  • the fixed storage 116 may be integral with the toy device 100a, 100b or may be separate and accessed through other interfaces.
  • the display 120 may be a liquid crystal display (LCD), a light emitting diode (LED) and/or organic light emitting diode (OLED) display, and/or any suitable display device.
  • the display 120 may be a touch screen to receive input from a user based on displayed options (e.g., in a menu or user interface).
  • Code to implement the present disclosure can be stored in computer-readable storage media such as one or more of the memory 108, fixed storage 116, removable media 118, or on a remote storage location.
  • FIGS. 2A-2C show front and back views of an outer casing of the toy device 100a, 100b according to an implementation of the disclosed subject matter.
  • FIG. 2A shows an example outer casing of a front side of the toy device 100a, 100b.
  • FIG. 2B shows an example outer casing of a back side of the toy device 100a, 100b.
  • FIG. 2B shows a pass-through portion that may include one or more holes from an exterior of the casing to the interior of the casing. The pass-through portion may be such that audible sound may pass from an exterior portion of the casing so as to be received by the audio input device 110.
  • the pass-through portion may output sound from the audio output device 112 from the interior of the casing to the exterior of the casing of the toy device 100a, 100b, such that it may be heard by a user (e.g., a child).
  • FIG. 2B shows an example user input interface 114, which may include a plurality of buttons.
  • the user input interface 114 may include a button 114a which may be selected by a user (e.g., a child) so as to record an audio message and/or transmit it to another device (e.g., another toy device 100a, 100b, and/or a device 10, 11, a server 13, and/or a remote platform 17 shown in FIG. 4).
  • the button 114a may be selected by a child user to send an audio message from the toy device 100a, 100b to a device 10, 11 of the child's parent.
  • the toy device 100a, 100b may prompt the child (e.g., via an audible message that is output via the audio output device 112) to state the audible message so that it may be recorded by the toy device 100a, 100b.
  • the audible message may be transmitted via the network interface 104 of the toy device 100a, 100b to a device communicatively coupled to the network 7 shown in FIG. 4 (e.g., the device 10, 11, the server 13, and/or the remote platform 17).
  • the child may select the button 114a after the message is recorded to transmit the recorded audible message.
  • one or more messages may be recorded by selecting button 114a, and may be stored by the memory 108, the fixed storage 116, and/or the removable media 118.
• the one or more messages may be automatically transmitted, and/or the toy device 100a, 100b may prompt the user to confirm whether to transmit the messages once the network communication has been enabled.
  • the audio input device 110 may receive one or more voice commands from a user, which may be interpreted and/or processed by the processor 106.
• the received one or more voice commands may be used to record a message, send a recorded message, play a received message, and/or access a network profile for a user and/or the toy device 100a, 100b (e.g., the user name, toy's name, and/or interests, such as favorite food, game, song, color, or the like, and/or communications network information and/or settings).
  • the one or more voice commands may be used to select an authorized contact (e.g., a parent, a guardian, a relative, a friend, or the like), device 10, 11, and/or toy device 100a, 100b to send a recorded message to.
  • the one or more voice commands may be used to transmit audio and/or video messages and/or sound effects (e.g., audible emojis), and/or play back received messages that include audio, video, and/or sound effects.
  • the user input interface 114 may include a button to change the waveform and/or audio characteristics of the recorded message and/or a recorded audio (e.g., a child's voice, which may be for entertainment purposes).
  • a voice command may be received via the audio input device 110 which may enable the processor 106 to manipulate the waveform of the message and/or recorded audio.
  • the processor 106 and/or an audio processor may manipulate the audio signal and/or waveform of the recorded audible message and/or recorded audio (e.g., the child's voice that may be played back).
  • the waveform may be altered based on the magnitude and direction of an acceleration, a movement, and/or an orientation movement.
  • the changed and/or manipulated waveform may distort, change the pitch, change the frequency, change the amplitude, change the envelope, or the like of the recorded audible message and/or recorded audio (e.g., audio that may be recorded for entertainment and/or non-messaging purposes).
  • the waveform may be altered based on a magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor 102a, 102b.
  • the waveform of the audio signal may be manipulated when a value detected by the sensor 102a, 102b exceeds a predetermined threshold.
  • the processed recorded audible message and/or processed recorded audio may be output via the audio output device 112 before transmission, and the message may be transmitted, for example, when the user selects the button 114a.
• the sensor 102a, 102b may detect motion of the toy device.
  • motion of the device may be induced by the user when the user is recording the audible message.
  • the user may move the toy device 100a, 100b horizontally and/or vertically, and/or may rotate, swing in an arc, shake, or move the device 100a, 100b in any other suitable manner.
• the sensor 102a, 102b may detect the motion of the toy device 100a, 100b by the user, and the processor 106 and/or an audio processor may change the waveform of the audio signal of the audible message that is captured by the audio input device 110 when the button 114a is selected. That is, the processor 106 and/or audio processor may use the motion data collected by the sensor 102a, 102b to manipulate the waveform of the captured audible message.
• the processor 106 and/or the audio processor may change the waveform by changing the pitch, frequency, amplitude, distortion, envelope, or the like of the audio signal of the captured audible message for recording.
  • the waveform may be altered based at least on, for example, the magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor 102a, 102b.
  • the waveform of the audio signal may be changed by the processor 106 when a value detected by the sensor 102a, 102b exceeds a predetermined threshold.
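The threshold-gated waveform manipulation described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name, the threshold value, and the particular amplitude/pitch transforms are assumptions for the example.

```python
def manipulate_waveform(samples, accel_magnitude, threshold=1.5):
    """Return a modified copy of `samples` when motion exceeds `threshold`.

    `samples` is a list of audio sample values; `accel_magnitude` is the
    magnitude of acceleration reported by the motion sensor. Below the
    threshold the audio is returned unchanged; above it, the amplitude is
    scaled and a crude pitch shift is applied by dropping every other
    sample (doubling the apparent playback rate).
    """
    if accel_magnitude <= threshold:
        return list(samples)              # no effect below the threshold
    louder = [s * 1.5 for s in samples]   # amplitude change
    return louder[::2]                    # naive pitch/frequency change
```

A real device would apply a proper resampling or pitch-shifting filter; the point here is only the gating of the effect on a sensor threshold.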
  • the manipulated waveform of the audio signal of the captured audible message may be stored by the toy device 100a, 100b (e.g., by the memory 108, the fixed storage 116, and/or the removable media 118), and may be transmitted via the network interface 104 according to the input received from the user via the user input interface 114.
  • the toy device 100a, 100b may indicate that an audible message has been received (e.g., via the network 7 from the device 10, 11, the server 13, and/or the remote platform 17 shown in FIG. 4).
  • the toy device 100a, 100b may vibrate (e.g., using a vibration device (not shown) that may be coupled to the bus 120 of FIG. 1) and/or may visually indicate (e.g., via a light, a display, or the like) on the external housing of the toy device 100a, 100b.
• a sound may be output by the audio output device 112 when a message has been received by the toy device 100a, 100b.
  • the user may select the button 114b to play the received audible message.
  • the audio output device 112 may reproduce the received audible message for the user. If the received message includes a song (which may be included with the message, as discussed below), the song may be output by the audio output device 112.
  • the audio output device 112 may output a sound that corresponds with the received emoji (e.g., a "happy” sound, a "sad” sound, an "angry” sound, or the like).
• messages that include emojis can be generated based on input detected by the sensor 102a, 102b. For example, a message that includes a hug emoji may be generated and sent to an approved contact when the sensor 102a, 102b detects a predetermined amount of pressure. In another example, if the sensor 102a, 102b is a touch sensor and the child imparts a tickling motion to the touch sensor, the toy device 100a, 100b may send a message with a laugh emoji to an approved contact. In another example, the sensor 102a, 102b may detect a child moving a hand of the toy device 100a, 100b, and the toy device may output a message with a hello emoji to an approved contact.
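The sensor-to-emoji mapping above can be sketched as a simple rule table. The event names, the pressure threshold, and the message format are assumptions made for illustration; they do not appear in the disclosure.

```python
PRESSURE_THRESHOLD = 0.8  # assumed units; "predetermined amount of pressure"

def emoji_for_event(event, value=None):
    """Pick an audible emoji for a detected sensor event, or None if no rule matches."""
    if event == "pressure" and value is not None and value >= PRESSURE_THRESHOLD:
        return "hug"
    if event == "tickle":
        return "laugh"
    if event == "hand_wave":
        return "hello"
    return None

def build_message(event, value=None, contact="approved-contact"):
    """Build an emoji message addressed to an approved contact, if a rule fires."""
    emoji = emoji_for_event(event, value)
    if emoji is None:
        return None
    return {"to": contact, "emoji": emoji}
```

A tickle detected on a touch sensor would thus produce `{"to": "approved-contact", "emoji": "laugh"}`, while a light touch below the pressure threshold produces no message.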
  • data collected by the sensor 102a, 102b may be used by the processor 106 and/or an audio processor to manipulate the waveform of the audio signal of the received audible message to be output by the audio output device 112.
  • the user may move (e.g., horizontally, vertically, and/or rotationally move and/or shake) the toy device 100a, 100b, and the processor 106 may manipulate the waveform of the audio signal to be output by the audio output device 112 according to the detected movement.
  • the processor 106 may manipulate the waveform of the audio signal so as to change its pitch, frequency, amplitude, envelope, or the like based at least in part on the magnitude and/or direction of the detected movement of the toy device 100a, 100b.
  • the waveform of the audio signal may be manipulated when a value detected by the sensor 102a, 102b exceeds a predetermined threshold.
  • the manipulated waveform of the audio signal may be output by the audio output device 112.
  • a computing device such as a tablet, laptop, or smart phone may execute an application for manipulating aspects of toy device 100a, 100b.
  • toy device 100a, 100b may express one of several personalities. Each of the personalities may be associated with a selectable mode via the application.
  • a personality for toy device 100a, 100b may be associated with a variety of toy behaviors and may change based on context. For example, personalities may include a warm personality, an indifferent personality, or a grumpy personality. When toy device 100a, 100b detects that a child is in proximity, it may greet the child.
• a toy device having the warm personality may greet the child with a statement such as "Hello, I hope you're doing well today!" or, if the context indicates it is the morning, "Good morning!"
• a toy device having the indifferent personality may greet the child with a statement such as "Hi." or, if the context is at night, "I'm going to bed."
• a toy device having the grumpy personality may not greet the child at all, or may express a statement such as "What are you doing here?" or, if the context indicates it is dinner time, "I'm hungry!"
  • These or any other suitable personalities may be presented as selectable modes on an application suited for child interaction.
  • a child or other user may engage with a computing device executing the application, select a mode associated with a desired personality, and the computing device may provide an instruction to toy device 100a, 100b to execute the selected personality mode.
• Other features may also be selectable via the application by the child or other user, such as particular sounds emitted or other behaviors expressed by toy device 100a, 100b when a sensor reading is received. For example, expressions such as screams may be emitted when an accelerometer in toy device 100a, 100b measures an acceleration value above a threshold level, or a snoring sound may be expressed when an acceleration value below a threshold level is measured for a threshold period of time.
  • the particular expression and/or threshold sensor reading levels linked to a particular expression may be selected by a user of the application, as well as other expressions or behaviors whether or not linked to sensor reading levels.
  • any of the expressions, sounds, or other behaviors of toy device 100a, 100b as well as any coupled sensor reading thresholds, levels, or other values as discussed in this disclosure may be selected by a user of the application, such as a child.
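The personality modes and sensor-linked expressions above can be sketched together. The greeting strings come from the examples in this section; the threshold values, time window, and function names are illustrative assumptions.

```python
# (personality, context) -> greeting; None means the toy stays silent.
GREETINGS = {
    ("warm", "morning"): "Good morning!",
    ("warm", None): "Hello, I hope you're doing well today!",
    ("indifferent", "night"): "I'm going to bed.",
    ("indifferent", None): "Hi.",
    ("grumpy", "dinner"): "I'm hungry!",
    ("grumpy", None): None,  # the grumpy toy may not greet at all
}

def greeting(personality, context=None):
    """Return the greeting for the selected personality mode and context."""
    if (personality, context) in GREETINGS:
        return GREETINGS[(personality, context)]
    return GREETINGS.get((personality, None))

def expression_for_acceleration(accel, high=2.0, low=0.05,
                                still_seconds=0.0, still_threshold=30.0):
    """Map an accelerometer reading to a sound expression, per the thresholds above."""
    if accel > high:
        return "scream"                 # acceleration above threshold
    if accel < low and still_seconds >= still_threshold:
        return "snore"                  # below threshold for a period of time
    return None
```

An application could let the user swap out the strings in `GREETINGS` or the threshold arguments, matching the selectable behaviors the section describes.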
• FIG. 2C shows an alternative implementation of the toy device 100a, 100b shown in FIG. 2B that includes the display 120 and the image sensor 109.
  • a user may record and send a message (e.g., by selecting button 114a and/or by selecting an option displayed on the display 120) that may include a still image, video, and/or an audible message using the image sensor 109 and/or the audio input device 110.
  • a user selects to play a received message (e.g., by selecting the button 114b and/or by selecting an option displayed on the display 120)
  • an image and/or video portion of the received message may be displayed on the display 120, and the audible message may be output via the audio output device 112.
  • the display 120 and/or the audio output device 112 may output a visual and/or audio notification that a message has been received by the toy device 100a, 100b.
• the image sensor 109 may be activated based on a detection by the sensor 102a, 102b.
  • the image sensor 109 may capture an image of the child to send to a parent and/or guardian from the approved contacts, and/or may capture an image of the child when the child's face is detected by the image sensor 109.
  • the parent and/or guardian may retransmit the image to one or more contacts and/or social networks according to one or more selections.
• the implementations of the toy device 100a, 100b shown in FIGS. 2B-2C may include one or more lights (e.g., light emitting diodes (LEDs) and/or organic light emitting diodes) to indicate a status of the device (not shown).
• When a light outputs green light that is pulsed, it may indicate that the toy device 100a, 100b has been configured to interoperate with a communications network (e.g., network 7 shown in FIG. 4).
• When a light outputs red light that is pulsed, it may indicate that the toy device 100a, 100b has failed to be configured to a communications network.
• When an orange light is emitted that is slowly pulsed (e.g., at 1 Hz), the toy device 100a, 100b may be attempting to obtain an Internet Protocol (IP) address.
• When a red light is slowly pulsed (e.g., at 1 Hz), the IP address has been retrieved, and the toy device is attempting to connect with a server (e.g., server 13).
• A light sequence of red, orange, and then off, with slow pulsing (e.g., 1 Hz), may indicate a further state of the connection sequence.
• When a light is green and pulsing slowly (e.g., 0.5 Hz), the toy device 100a, 100b may be attempting to connect to a cloud device.
• When a red light is pulsed fast (e.g., 2 Hz), the network and/or communicative connection may have been lost, and the toy device 100a, 100b may be attempting to reconnect.
• When a light sequence is short green, off, long green, off with a slow pulse (e.g., 1 Hz), the toy device 100a, 100b may be attempting to verify a software (e.g., firmware) update. If a green light is on (i.e., not flashing), the software and/or firmware update may be in progress. If no light is being emitted, the toy device 100a, 100b may be operating normally.
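The light patterns above amount to a lookup from connection state to an LED pattern, which could be sketched as follows. The state names and the exact pairing of some patterns to states are assumptions for the example.

```python
# Hypothetical state -> (color, pattern) table derived from the light
# behaviors described above.
LIGHT_PATTERNS = {
    "configured":        ("green", "pulse"),
    "config_failed":     ("red", "pulse"),
    "obtaining_ip":      ("orange", "slow pulse 1 Hz"),
    "connecting_server": ("red", "slow pulse 1 Hz"),
    "connecting_cloud":  ("green", "slow pulse 0.5 Hz"),
    "reconnecting":      ("red", "fast pulse 2 Hz"),
    "verifying_update":  ("short green, off, long green, off", "slow pulse 1 Hz"),
    "updating":          ("green", "solid"),
    "normal":            (None, "off"),
}

def light_for_state(state):
    """Return the (color, pattern) pair for a connection state."""
    return LIGHT_PATTERNS.get(state, (None, "off"))
```

Driving the LED then reduces to calling `light_for_state` whenever the network state machine transitions.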
  • FIG. 3 is an example computing device 10, 11 suitable for implementations of the presently disclosed subject matter.
  • the device 10, 11 may be, for example, a desktop or laptop computer, or a mobile computing device such as a smart phone, wearable computing device, smart watch, tablet, or the like.
  • device 10, 11 may be mobile computing devices of a parent, guardian, and/or relative of a child user of the toy device 100a, 100b.
• the device 10, 11 and toy device 100a, 100b may be configured so as to communicate with one another via network 7, and/or with server 13, database 15, and/or remote platform 17. Based on a configuration of toy device 100a and/or toy device 100b, the device 10 and/or device 11 may communicate with one or both of toy devices 100a, 100b.
  • the device 10, 11 may include a bus 21 which interconnects major components of the device 10, 11, such as a central processor 24, a memory 27 such as Random Access Memory (RAM), Read Only Memory (ROM), flash RAM, or the like, an output device 22 (e.g., a display screen, a touch screen, one or more audio speakers, or the like), a user input interface 26, which may include one or more controllers and associated user input devices such as a keyboard, buttons, mouse, touch screen, microphone, and the like, a fixed storage 23 such as a hard drive, flash storage, and the like, a removable media component 25 operative to control and receive an optical disk, flash drive, and the like, and a network interface 29 operable to communicate with one or more remote devices via a suitable network connection.
  • the bus 21 allows data communication between the central processor 24 and one or more memory components, which may include RAM, ROM, and other memory, as previously noted.
  • RAM is the main memory into which an operating system and application programs are loaded.
  • a ROM or flash memory component can contain, among other code, the Basic Input-Output system (BIOS) that controls basic hardware operation such as the interaction with peripheral components.
• FIGS. 5-8, as discussed below, show examples of display screens for the application to interoperate with and/or control the operation of the device 10, 11, the toy device 100a, 100b, and/or the server 13.
  • the fixed storage 23 may be integral with the device 10, 11 or may be separate and accessed through other interfaces.
  • the network interface 29 may provide a direct connection to a remote server via a wired or wireless connection.
  • the network interface 29 may provide such connection using any suitable technique and protocol as will be readily understood by one of skill in the art, including digital cellular telephone, Wi-Fi, Bluetooth(R), near-field, and the like.
  • the network interface 29 may allow the computer to communicate with other computers via one or more local, wide-area, or other communication networks, as described in further detail below.
  • FIG. 4 shows an example network arrangement according to an implementation of the disclosed subject matter.
  • One or more devices 10, 11, such as local computers, smart phones, tablet computing devices, wearable computing devices, smart watches, and the like may connect to other devices via one or more networks 7.
  • Each device may be a computing device as previously described.
  • One or more toy devices 100a, 100b may connect to the one or more devices 10, 11, and/or with one another, and/or with a server 13, a database 15, and/or a remote platform 17.
  • the network may be a home network, a local network, wide-area network, the Internet, or any other suitable communication network or networks, and may be implemented on any suitable platform including wired and/or wireless networks.
  • the devices 10, 11 and/or the toy device 100a, 100b may communicate with one or more remote devices, such as servers 13 and/or databases 15.
  • the remote devices may be directly accessible by the devices 10, 11, or one or more other devices may provide intermediary access such as where a server 13 provides access to resources stored in a database 15.
  • the devices 10, 11 also may access remote platforms 17 or services provided by remote platforms 17 such as cloud computing arrangements and services.
  • the remote platform 17 may include one or more servers 13 and/or databases 15.
  • the server 13, database 15, and/or remote platform 17 may store information related to one or more of the toy device 100a, 100b and/or the device 10, 11.
  • the server 13, database 15, and/or remote platform 17 may store information such as the devices 10, 11 that the toy device 100a, 100b is authorized to communicate with (e.g., transmit and/or receive audible messages), voice filtering settings, network configuration information, one or more recorded audible message, account information, or the like.
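The stored authorization list implies a server-side check before a message is delivered to a toy device. A minimal sketch of that check follows; the data layout, identifiers, and function names are assumptions, not the disclosed implementation.

```python
# Hypothetical per-toy authorization list, as might be stored by the
# server 13, database 15, and/or remote platform 17.
AUTHORIZED = {
    "toy-100a": {"parent-phone", "grandma-phone"},
}

def deliver(message, store):
    """Append the message to the recipient's queue only if the sender
    is on the recipient toy's authorized-contact list."""
    allowed = AUTHORIZED.get(message["to"], set())
    if message["from"] not in allowed:
        return False  # drop messages from unapproved senders
    store.setdefault(message["to"], []).append(message)
    return True
```

The same gate would apply in both directions (toy-to-device and device-to-toy), since the section describes authorization for both transmitting and receiving audible messages.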
• the remote platform 17 may be a server and/or other computer which hosts an application such that one or more toy devices 100a, 100b may communicate directly with one another, and/or communicate amongst a group of toy devices 100a, 100b. In some implementations, the remote platform 17 may be a server and/or other computer which hosts an application such that one or more toy devices 100a, 100b may determine the location of another one or more of the toy devices 100a, 100b.
• FIGS. 5-8 show example displays of a mobile computing device (e.g., device 10, 11 shown in FIGS. 3-4) according to an implementation of the disclosed subject matter.
  • the device 10, 11 may execute at least a portion of one or more applications so as to display the example displays shown in FIGS. 5-8.
  • the one or more applications may control and/or interoperate with the toy device 100a, 100b.
  • FIG. 5 shows a display 200 displayed on the output device 22 of the device 10, 11 according to an implementation of the disclosed subject matter.
  • Display 200 may include selectable option 202 to configure devices to a network, selectable option 204 to manage a child account, a selectable option 206 to record a message, a selectable option 208 to play a received message, and/or a selectable option 210 to change filter settings.
  • the output device 22 of the device 10, 11 may be a touch screen, and the selectable options 202, 204, 206, 208, and/or 210 displayed on the display 200 may be selected by a user's touch via the touch screen of output device 22.
  • selectable option 202 When a user selects selectable option 202 to configure devices to the a network, the user may configure one or more of the device 10, 11 and/or the toy device 100a, 100b so that they may communicate with one another and/or other devices connected to the network 7 (e.g., server 13, remote platform 17, and the like). Selectable option 202 may be selected by a user so as to add and/or remove one or more devices (e.g., devices 10, 11 and/or toy device 100a, 100b) that can communicate with one another and/or via the network 7.
• FIG. 6 shows a display presented on device 10, 11 when selectable option 202 is selected, such that the display presents selectable option 212 to select a network, a selectable option 214 to add or remove a toy device, and/or a selectable option 216 to add or remove a mobile device.
  • selectable option 212 When selectable option 212 is selected, a network (e.g., network 7 shown in FIG.
  • a home network such as a home network, local area network, wide area network, or the like
  • one or more devices e.g., devices 10, 11 and/or toy devices 100a, 100b
  • selectable option 214 to add and/or remove a toy device 100a, 100b
  • selectable option 216 to add and/or remove a device 10, 11
  • the user may configure which one or more devices 10, 11 and/or people a child account for a toy device 100a, 100b may communicate with (e.g., transmit and/or receive audible messages).
• Using the selectable option 204, the user may select and/or approve a device 10, 11 for a toy device 100a, 100b to communicate with, such as devices that belong to one or more parents, guardians, relatives, friends, and the like.
  • the option 204 may allow a parent and/or guardian to control and/or manage which friends (e.g., that have toy devices 100a, 100b) a child may communicate with via the toy device 100a, 100b.
• the selectable option 204 may allow a user to provide information about their child's favorites, including colors, food, or the like.
• the selectable option 204 may allow a user to identify other devices of a child that may communicate with a toy device 100a, 100b, so that a child may use the toy device 100a, 100b to locate other registered toy devices (e.g., in a room, in a house, or the like).
• the display shown in FIG. 7 may be presented on the display of device 10, 11, which may include selectable option 218 to add/remove a child account, selectable option 220 to manage child accounts, selectable option 222 to select child favorites, and/or selectable option 224 to set an audio filter for a child.
• a parent and/or guardian may select an option to have all messages routed through the parent device 10, 11 before the messages are sent to the toy device 100a, 100b. That is, the parent and/or guardian may monitor and/or control the messages that the child receives.
• a parent and/or guardian may access recordings that are saved by the device 10, 11 and/or the toy device 100a, 100b.
• the recordings may be stored, for example, at the server 13, the database 15, and/or the remote platform 17.
  • the recordings may be stored, as well as text (e.g., the audio recordings may be converted to text).
  • the text may be analyzed by at least one of the device 10, 11, the server 13, and/or the remote platform 17 so that a parent and/or guardian may be able to determine what may be going on in a child's life and/or with their mental, emotional, and/or developmental state.
  • the parent and/or guardian may access programs and/or applications to be transmitted to and/or accessed by the toy devices 100a, 100b periodically (e.g., every day, every other day, every week, every month, or the like).
  • the programs and/or applications may be educational, such as "word of the day” or "foreign language words” applications, song subscriptions, games, or the like.
  • the child may send the program and/or application (or a link thereto) to a friend from the approved contacts list. For example, the program and/or application may be sent from one toy device 100a, 100b to another toy device 100a, 100b.
  • a user may add one or more child accounts to be managed by the application being executed, at least in part, by the device 10, 11. That is, a child may have one or more toy devices 100a, 100b that may be registered with a child account. A child account may be added when, for example, the child receives a new toy device 100a, 100b.
  • a user of the device 10, 11 may select and/or authorize one or more persons (e.g., parents, relatives, friends, or the like) of the child so that messages may be transmitted between the toy device 100a, 100b associated with a child account and the selected and/or authorized persons.
  • a parent and/or guardian having a device 10, 11 may authorize friends of a child to communicate with the child.
  • the communication may be toy-to-toy communication between a plurality of toy devices 100a, 100b.
  • a parent and/or guardian may have control over which persons may send messages to and/or receive messages from the child.
  • a user of the device 10, 11 may select and/or authorize one or more persons (e.g., guardians, relatives, friends, and the like) from one or more social networks.
• a person associated with the user of the device 10, 11 may send a request via a social network to become an approved contact with a child user.
  • An invite may be sent via the device 10, 11 to another device 10, 11 coupled to the network 7 to become an approved contact.
  • a list of one or more contacts may be displayed on the display of device 10, 11.
• a contact in a list of one or more contacts may include a picture of the contact, a phone number, an email address, a social network identifier, or the like.
  • the parent and/or guardian may manage contacts of friends of the child user who do not have a toy device 100a, 100b.
• the child may record a message with the toy device 100a, 100b, and may select a contact of a friend that does not have a toy device 100a, 100b, and the recorded message may be converted to text, and may be printed and sent by regular postal mail (e.g., via a post card), and/or by email or text to a parent device 10, 11 of the child contact.
  • the post card, email, and/or text communication may include a discount offer to purchase a toy device 100a, 100b.
  • a user may set and/or provide one or more child favorites, such as favorite color, food, animal, or the like.
  • the favorites may be used, for example, to facilitate communications between toy devices 100a, 100b.
  • a parent, guardian, and/or relative may create a profile of a child's likes and dislikes.
• the toy device 100a, 100b may provide to the device 10, 11 of the parent and/or guardian the child's likes and/or dislikes, which may be learned by the toy device 100a, 100b and/or may be provided to the toy device 100a, 100b by the child (e.g., the child is prompted by the toy device 100a, 100b to name a favorite food, a favorite song, a favorite color, or the like).
  • the toy device 100a, 100b may prompt the child to provide information (e.g., likes, dislikes, favorites, etc.) that may be transmitted from the toy device 100a, 100b to the device 10, 11 and may be used to create and/or update a child's profile.
  • the toy device 100a, 100b may record the child's voice answers, and provide the responses to the device 10, 11 so that they may be added to the child's profile.
  • the child's responses may be transmitted to the device 10, 11 as audio data, and/or may be converted from audio into text and may be sent as text data.
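Converting a recorded answer into a profile update, as described above, can be sketched as follows. The `speech_to_text` stand-in and the profile schema are assumptions; the disclosure does not specify a particular transcription method.

```python
def speech_to_text(audio):
    """Placeholder transcription: a real system would run speech
    recognition on the recorded audio here."""
    return audio.get("transcript", "")

def update_profile(profile, question_key, audio_answer):
    """Transcribe the child's recorded answer and store it in the profile
    under the key for the question that prompted it."""
    text = speech_to_text(audio_answer)
    if text:
        profile[question_key] = text
    return profile
```

The resulting text could then be transmitted to the device 10, 11 as text data, as the section notes, rather than as raw audio.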
  • the profiles of one or more children that a parent, guardian, relative, and/or friend is connected to may be stored, e.g., at the device 10, 11, the server 13, the database 15, and/or the remote platform 17.
  • the profile of each child may be periodically updated.
  • the toy device 100a, 100b may prompt the child to update their profile.
• Some questions that may be used to prompt the child include, for example: "Is your favorite color still green?", "What do you want to be called?", or the like. Based on the answer provided by the child, if a favorite has changed, all of those in the child's trusted network (e.g., the approved contacts) may be notified of the status update.
  • the devices 10, 11 and/or toy device 100a, 100b may encourage their respective users to "like” the status update and/or comment (e.g., via a message) on the status update.
• a notification may be transmitted from the toy device 100a, 100b to device 10, 11 which indicates that "Henry now wants to be called T-rex" to then encourage all of those in Henry's network to "Say hello to T-rex!"
  • the toy device 100a, 100b may begin to address the child by the child's profile name to let them know when they have a new message to listen to.
  • the child may be identified by the toy device 100a, 100b, by the sensors 102a, 102b that may detect a wearable identification device on the child.
  • the child may be identified based on the child's voice, which may be captured by the audio input device 110 and/or the sensors 102a, 102b. Alternatively, or in addition, the child may be identified by the image sensor 109.
  • a captured image may be compared with a stored image in the toy device 100a, 100b to identify the child.
  • the favorites information included in the profile may be used by the device 10, 11 and/or the toy device 100a, 100b to match the child with potential playmates nearby (e.g., within a predetermined proximity, where the potential playmates would be approved by a parent and/or guardian).
  • a parent and/or guardian may find other playmates for the child by matching at least a portion of the favorites included in the profiles, and thus approve additional kids for their child to connect with and/or interact with (e.g., send messages, play games, or the like) via the toy device 100a, 100b.
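The playmate matching described above reduces to comparing favorites across profiles. A sketch under assumed data shapes (favorites as lists of strings, a minimum-overlap rule) follows; the actual matching criteria are not specified in the disclosure.

```python
def shared_favorites(profile_a, profile_b):
    """Return the set of favorites two child profiles have in common."""
    return set(profile_a.get("favorites", [])) & set(profile_b.get("favorites", []))

def suggest_playmates(child, candidates, min_shared=2):
    """Names of candidates sharing at least `min_shared` favorites with
    the child, ordered by the number of shared favorites (best first).
    Suggestions would still require parent/guardian approval."""
    scored = [(len(shared_favorites(child, c)), c["name"]) for c in candidates]
    return [name for score, name in sorted(scored, reverse=True) if score >= min_shared]
```

A proximity filter (the "nearby" constraint in the section) would be applied to `candidates` before scoring.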
• the profile information may be used (e.g., by the server 13 and/or the remote platform 17) to determine gifts and/or gift suggestions for a child.
  • the determined suggestions may be distributed to the approved contacts periodically and/or upon request by a user.
  • the profile may include, for example, the birthday of a child user and/or other important events.
  • a reminder on the day of the child's birthday may be sent as a push notification (e.g., by the toy device 100a, 100b, the device 10, 11, the server 13, and/or the remote platform 17) to all of the child's contacts.
  • the push notification may make gift suggestions for the child, including suggestions for audible gifts such as songs, stories, and/or sound effects.
  • the toy device 100a, 100b may provide a notification to the child of the birthdays of their friends and family so that the child may record and send a message. For example, the child may record a birthday message with the toy device 100a, 100b, and transmit the birthday message to a friend who has a toy device 100a, 100b.
  • the profile may include a child's bedtime.
  • the toy device 100a, 100b may alert the child that, for example, it is time to brush their teeth or read a story.
• the toy device 100a, 100b may say goodnight to the child's favorite objects from their profile (e.g., audible goodnight messages may be output by the toy device 100a, 100b to objects in the room).
  • a user may set one or more audio filter options for a child's toy device 100a, 100b.
  • the audio filter may manipulate a waveform of an audio signal of a received audible message by changing, for example, a pitch, frequency, amplitude, envelope, or the like during playback.
  • the waveform may be altered based on a magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor 102a, 102b.
  • the audio filter may manipulate the waveform of the audio signal when a value detected by the sensor 102a, 102b exceeds a predetermined threshold.
  • the audio filter may be set at the toy device 100a, 100b.
  • the user may record a message to be transmitted to one or more toy devices 100a, 100b.
  • the message may be recorded regardless of whether the device 10, 11 is communicatively connected to the network 7.
  • the device 10, 11 may prompt the user by displaying a visual cue on the display 200 and/or may output an audio cue for the user to begin recording the audible message.
  • the user may select the selectable option 206 and/or another selectable option to end recording of the audible message.
  • a selectable option (not shown) may be displayed on the display 200 which, when selected by the user, may play the recorded message so that the user may hear it before the user transmits the audible message to one or more toy devices 100a, 100b.
  • the display shown in FIG. 8 may be displayed, which may include option 226 to select a child, option 228 to record a message, option 230 to set an audio filter for the message to be recorded, and/or option 232 to send (i.e., transmit) the recorded message.
  • the user may use option 226 to select at least one child from a list of one or more children.
  • the user may select one or more children to which a message to be recorded may be sent.
  • after selecting one or more children with option 226, the user may select option 228 to record a message.
  • the message may be an audio message and/or a video message, and/or a message that includes a still image and an audio message.
  • a user may select option 230 to set an audio filter for the recorded message.
  • using option 230, a user may manipulate the waveform of an audio signal by, for example, adjusting the pitch, frequency, amplitude, envelope, or the like of the recorded message.
  • the recorded message (e.g., which may be filtered and/or manipulated) may be sent to the toy device 100a, 100b of the child selected using option 226.
  • the user may select a song (e.g., that is stored by the device 10, 11) and/or an emoji to be transmitted with the recorded message.
  • the audible emoji may be transmitted without a recorded message.
  • the processor 106 may output the song and/or one or more sounds (e.g., sounds associated with a received emoji) via the audio output device 112.
  • the toy device 100a, 100b may output a sound designated as a "happy” via the audio output device 112 when a happy and/or smiling emoji is received.
  • the toy device 100a, 100b may output a sound designated as "sad” via the audio output device 112 when a sad and/or frowning emoji is received.
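The emoji-to-sound behavior above can be pictured as a simple lookup table; the emoji categories and sound identifiers here are hypothetical placeholders, not values from the patent.

```python
# Illustrative mapping from received emoji categories to designated sounds.
# The category names and file names are assumptions for this sketch.
EMOJI_SOUNDS = {
    "happy": "happy_sound.wav",
    "smiling": "happy_sound.wav",
    "sad": "sad_sound.wav",
    "frowning": "sad_sound.wav",
}


def sound_for_emoji(emoji_category, default="neutral_sound.wav"):
    """Return the sound the toy would output for a received emoji."""
    return EMOJI_SOUNDS.get(emoji_category, default)
```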
  • the display 200 of FIG. 5 may display when a user has received an audible message from one or more of the toy devices 100a, 100b.
  • the device 10, 11 may output an audio notification via the output device 22 that the user has received an audible message.
  • the device 10, 11 may output the audible message via a speaker and/or other output device (not shown). The user may select the selectable option 208 to repeat the recorded audible message, and/or may select an option to replay the message.
  • the display may include one or more selectable options to "like" the received message, share the message with one or more approved contacts or with persons of a user's social network, and/or delete the message.
  • a user may change and/or configure one or more filter settings, which may be used to manipulate and/or change the waveform of the audio signal of the audible message to be output when selectable option 208 to play a received message is selected and/or when the selectable option 206 is selected to record an audible message. That is, the user may adjust the frequency, pitch, amplitude, distortion, envelope, or the like of the recorded audible message.
  • FIG. 9 shows an example method 300 of communicating recorded messages between toy devices 100a, 100b and/or other approved network devices described above in connection with FIGS. 1-8 according to an implementation of the disclosed subject matter.
  • a profile of a toy device 100a, 100b may be set, at a mobile computing device 10, 11, that includes one or more approved contacts and user interest information.
  • the profile of the toy device 100a, 100b (e.g., approved contacts and/or user interest information) may be set via the user interface 114, the audio input device 110, and/or the display 120, which may be a touch screen.
  • the profile of the toy device 100a, 100b may be accessible to a child user using one or more voice commands that may be received by the audio input device 110, which may be processed by processor 106, and contacts and/or interest information may be output via the audio output device 112 as audible information and/or may be output via the display 120.
  • the approved contacts of the profile of the toy device 100a, 100b may include a parent, a guardian, a relative, and/or a friend of the child associated with the toy device 100a, 100b.
  • the approval of contacts by, for example, a parent and/or guardian enables them to control which persons and/or devices (e.g., devices 10, 11, other toy devices 100a, 100b, server 13, or the like) the child and the toy device 100a, 100b are able to transmit messages to and receive messages from.
  • the approved contacts may include device identification information and/or network identification information so that the toy device 100a, 100b may be enabled to transmit messages to and/or receive messages from a device 10, 11.
  • the user interest information may include information about the child user of the toy device 100a, 100b, such as their favorite color, song, food, games, and the like.
  • the interest information, in some implementations, may include the child's name or nickname, the name of the toy device, or the like.
  • the child may access, add to, and/or revise the user interest information using one or more audio commands input via the audio input device 110. That is, the child user may use the user interface 114 and/or the audio input device to input and/or record the user interests.
  • the user interest information may be used to encourage toy device-to-toy device communication.
  • a child user may record a message that includes at least an audio portion. As described above in connection with FIGS. 2B and 2C, a child user may select button 114a to begin recording an audio message and/or a video message that includes audio (e.g., where the image sensor 109 may capture the video images, and the audio input device 110 may capture the audible message).
  • the toy device 100a, 100b may receive a selection of at least one of the approved contacts. For example, the toy device 100a, 100b may receive a user selection via the user interface 114 and/or via the display 120 (e.g., where the display is a touch screen).
  • a selection may be made via a voice command by the child user.
  • the selection made via voice may be received by the audio input device 110, and may be processed by the processor 106.
  • the recorded message may be transmitted to at least one device associated with the selected at least one of the approved contacts, which may be associated with one or more devices 10, 11 communicatively coupled to the network 7.
  • the child user may make a selection via the user interface 114, via the display 120, and/or via a voice command received by the audio input device to transmit the recorded message to a device 10, 11 associated with at least one of the approved contacts.
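The steps of example method 300 above (set a profile with approved contacts, record a message, select an approved contact, transmit) could be sketched as follows. The class and method names are hypothetical, and transmission is reduced to appending to an outbox for illustration.

```python
# Minimal sketch of the method-300 flow. Only approved contacts can
# receive messages, mirroring the parental-control behavior described above.
class ToyDevice:
    def __init__(self):
        self.approved_contacts = {}  # contact name -> device address
        self.interests = {}
        self.current_message = None
        self.outbox = []  # (device address, message) pairs "transmitted"

    def set_profile(self, contacts, interests):
        self.approved_contacts = dict(contacts)
        self.interests = dict(interests)

    def record_message(self, audio):
        self.current_message = audio

    def send_to(self, contact_name):
        # Messages may only go to approved contacts.
        if contact_name not in self.approved_contacts:
            raise ValueError("contact not approved")
        self.outbox.append((self.approved_contacts[contact_name],
                            self.current_message))
```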
  • FIG. 10 shows an example method 400 of changing the output characteristics of a message according to an implementation of the disclosed subject matter.
  • the toy device 100a, 100b may receive a message via the network 7 that includes at least an audio portion. That is, the message may be an audible message, the message may include one or more still images and an audible message, and/or the message may include video and the audible message.
  • the user interface 114 may receive a selection of an option to output at least the audio portion of the message.
  • the processor 106 may change the waveform of an audio signal of the audio portion based on sensor data collected (e.g., from sensors 102a, 102b) as the toy device 100a, 100b is moved so as to change at least one characteristic of the outputted audio portion.
  • the sensors 102a, 102b may be a motion sensor, a physical orientation sensor, an acceleration sensor, a pressure sensor, or the like.
  • the waveform of the audio signal may be manipulated so as to change the pitch, frequency, amplitude, envelope, distortion, or the like based at least in part on the collected sensor data.
  • the manipulated audio waveform may be output by the audio output device 112.
  • the sound characteristics of the audible message being output may correspondingly change as the audible message is being output by the audio output device 112.
  • a sensor 102a, 102b of a toy device 100a, 100b may detect a status of the toy device 100a, 100b.
  • an accelerometer may be included with the toy device (e.g., as sensor 102a, 102b) and may detect a change in acceleration associated with motion of the toy device 100a, 100b.
  • a communications interface (e.g., network interface 104) of the toy device 100a, 100b may transmit a message to a mobile device 10, 11 in response to the detected status.
  • a child may have picked up the toy device 100a, 100b, and the sensor 102a, 102b may detect the change in acceleration.
  • the detected status may trigger a Wi-Fi radio within the toy device 100a, 100b to transmit a message to a smart phone (e.g., device 10, 11) of the child's parent.
  • the message may be received by the parent's smartphone (e.g., device 10, 11) and trigger a notification via an application executing on the smart phone (e.g., device 10, 11).
  • the application may prompt the parent to select a behavior, operation, and/or output of the toy device 100a, 100b.
  • the application may provide an option for the parent to record a voice message to be played for the child via the toy device 100a, 100b, such as "I love you” or "go back to bed.”
  • the application may also present an option for the parent to select a sound for the toy device to make, such as a "bark” if the toy device is a dog, or an "oink” if the toy device is a pig.
  • the parent may select an option and the application may instruct a communication component of the smart phone (e.g., device 10, 11) to transmit an instruction for the selected behavior to the toy device 100a, 100b.
  • the smart phone may transmit the instruction, and when the instruction is received, the toy device 100a, 100b may execute the instruction and perform the selected behavior and/or operation, and/or provide an output.
  • motion sensors or direction sensors such as accelerometers or compasses may be integrated with toy device 100a, 100b.
  • One or more thresholds of acceleration or change in direction may be pre-determined and coupled to a status. For example, sensor measurements greater than or less than a specified acceleration or change of direction value or within a specified range may be coupled to a status that indicates a child is playing with toy device 100a, 100b. Sensor measurements outside of this predetermined range may be coupled to a status that indicates the child is not playing with toy device 100a, 100b or that someone has rapidly thrown toy device 100a, 100b.
  • a sensor reading below a predetermined range such as a range of acceleration values may be coupled to a status that toy device 100a, 100b is not being played with.
  • a sensor reading above a predetermined range such as a range of acceleration values, may be coupled to a status that toy device 100a, 100b is being thrown or otherwise not properly played with.
  • a sensor reading within a predetermined range such as a range of acceleration values, may be coupled to a status that toy device 100a, 100b is being played with.
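The three range-based couplings above can be sketched as a single mapping from a sensor reading to a status; the numeric bounds are placeholders, not values from the patent.

```python
# Hedged sketch of coupling sensor readings to statuses via a predetermined
# range, as the bullets above describe. The numeric bounds are assumptions.
PLAY_RANGE = (0.1, 3.0)  # m/s^2: assumed "being played with" range


def status_from_acceleration(accel, play_range=PLAY_RANGE):
    """Map an acceleration reading to one of the statuses above."""
    low, high = play_range
    if accel < low:
        return "not being played with"
    if accel > high:
        return "being thrown or otherwise not properly played with"
    return "being played with"
```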
  • indicators of any of the aforementioned statuses may be transmitted by toy device 100a, 100b to a smart phone such as device 10, 11, or to another toy device, such as a toy device within a toy-to-toy social network with toy device 100a, 100b.
  • indicators of any of the aforementioned statuses may serve as a basis for a communication performed by toy device 100a, 100b. For example, an acceleration of toy device 100a, 100b that is within a predetermined acceleration range may be linked to a behavior of toy device 100a, 100b of expressing a verbal indication that toy device 100a, 100b is being played with.
  • a toy device 100a, 100b may be any object that is suitable for a person, such as a child, to play with.
  • FIG. 2A shows, according to an implementation of this disclosure, an example toy device 100a shaped in a "pig" form.
  • Toy devices 100a, 100b may have any shape suitable for the purposes of this disclosure.
  • a toy device 100a, 100b may take the shape of a "dog", a "robot", a "humanoid”, and so forth.
  • toy device 100a, 100b may be embodied in any form factor suitable for the purposes of this disclosure.
  • toy device 100a, 100b may include a plush toy, hard surfaced toy, pliable toy, and/or may be attached or integrated with a watch, hat, shoe, purse, bracelet, button, sweater, wristband, headband, jacket, bag, pants, shirt, or any other apparel or wearable item that is suitable for the purposes of this disclosure.
  • components of the subject matter of this disclosure may be interchangeable among any suitable form factor.
  • the same electronic components may be operatively installed within a shark shaped plush toy, pig shaped plush toy, and/or a wearable item.
  • a particular set of electronic hardware may be transferred among various form factors. For example, a 2-year-old child may interact with a plush toy having the particular electronic components. At 4 years old, the child may interact with a different plush toy having the particular electronic components. At 8 years old, the child may interact with yet a different form factor having the same electronic components, such as a book bag or a watch. Software that may interact with or execute on the particular electronic components may be updated over time and result in improved or more applicable functionality of toy device 100a, 100b as an applicable child or other user changes interests or ages.
  • a toy device 100a, 100b may communicate with other devices such as other toy devices 100a, 100b, mobile devices 10, 11, or remote servers (e.g., server 13, remote platform 17, or the like).
  • a child named Karen may possess a toy device 100a, 100b, and the child's parent may want to monitor the child's usage of the toy device.
  • the parent may possess a mobile device 10, 11 such as a smart phone or other computing device that executes an application for monitoring and communicating with the toy device.
  • the application may include options such as those shown in FIGS. 5-8.
  • a communications component (e.g., network interface 29) of the parent's mobile device 10, 11, such as a Wi-Fi or cellular radio may receive signals from and transmit signals to the toy device 100a, 100b.
  • the parent's mobile device 10, 11 may receive an indicator of a status of the toy device and/or push a notification via the application to the interface of the parent's mobile device 10, 11.
  • Karen's parent may thereby be alerted of the status of the toy device 100a, 100b.
  • the parent may choose to communicate with Karen via the toy device 100a, 100b, for example, by entering a message into the application via the option 228 shown in FIG. 8 to record a message and transmitting the message to the toy device 100a, 100b via the option 232 to send a message.
  • the toy device 100a, 100b may receive the message and communicate the message to the child via the toy device 100a, 100b.
  • a parent or the child may also benefit from the transmission of automated messages to the toy device 100a, 100b.
  • Messages may be transmitted to the toy device 100a, 100b from the parent's device 10, 11, and/or from a remote device (e.g., the server 13, the remote platform 17, or the like) via the network 7.
  • network interface 104 of Karen's toy device 100a, 100b may periodically transmit indicators of the status of toy device 100a, 100b to a remote server (e.g., server 13) that is managed by Karen's parent and/or a third party, such as a service provider and/or the manufacturer of toy device 100a, 100b.
  • the status indicators may be received by the parent's device 10, 11 and retransmitted to a remote server (e.g., server 13), or the remote server may receive the indicators directly from the toy device 100a, 100b.
  • the remote server (e.g., server 13) may receive the status indicator and compare the indicated status to a set of automated responses. For example, a status "Karen is present and not playing with toy" may be coupled and/or linked to the behavior, operation, and/or output of "Let's play!" Thus, if the received status indicator is "child present, not interacting with toy", then the remote server (e.g., server 13) may select the linked instruction.
  • the remote server (e.g., server 13) may then transmit the selected instruction to the toy device 100a, 100b.
  • the processor 106 of the toy device 100a, 100b may execute the instruction. For example, by executing the instruction, processor 106 may generate a signal that when received by audio output device 112 causes a speaker of the audio output device 112 to emit the audible message "Let's play!”.
  • One or more sensors may detect a status of a toy device 100a, 100b.
  • the sensor 102a, 102b may be attached to, stored or integrated within, or embedded on the surface of a toy device 100a, 100b.
  • the sensor 102a, 102b may also be separate and distinct from the toy device 100a, 100b, and may be communicatively coupled to the toy device 100a, 100b.
  • a sensor may not be attached to bus 120 of toy device 100a, 100b as shown in FIG. 1. Rather, a sensor may be located in a room, building, or area where the toy device 100a, 100b is located, such as in the corner of Karen's room or near the doorway.
  • the sensor 102a, 102b may be communicatively coupled to the network 7. Such a sensor 102a, 102b may thus be remote from the toy device 100a, 100b and rely on its own communications components to communicate with toy device 100a, 100b, the parent's device 10, 11, a remote server (e.g., server 13), other sensors, and/or other components of this disclosure.
  • a status may be any status that describes the behavior, operation, and/or output of the toy device 100a, 100b, a behavior of a person or other entity interacting with the toy device 100a, 100b, a behavior of another toy device 100a, 100b interacting with the toy device 100a, 100b, or any combination of any of the foregoing interactions.
  • the status of a toy device 100a, 100b may be detected by data received from one or more sensors 102a, 102b, other data received from non-sensor sources, or combinations of sensor data and non-sensor data.
  • the following status table shows a non-exhaustive set of example statuses, descriptions of the statuses, and the sensor data or other data that may be coupled to those statuses and thereby result in their detection according to implementations of this disclosure:
  • GPS location data received from the toy.
  • Status: "Karen is playing with the toy." Description: A sensor has identified Karen and detected she is playing with the toy. Data: A microphone identified speech associated with Karen, words spoken by Karen and associated with the toy are detected, and the accelerometer data has exceeded a threshold associated with being picked up.
  • Status: "Karen is recording a message." Description: A sensor has identified Karen and detected that Karen is recording a message. Data: A microphone identified speech associated with Karen, and data is received indicating the audio input device has been activated.
  • a sensor such as sensor 102a or 102b may be any device that detects or measures a physical property, and records, indicates, or otherwise responds to it, as discussed above in connection with FIG. 1. Sensors may be attached or located within the toy device. Sensors may also be external to the toy device and located within the room or rooms where the toy might be found. Additional discussion of sensors and their functionality is set forth throughout this disclosure.
  • a status of a toy device 100a, 100b may be detected by data received from a single sensor (e.g., sensor 102a).
  • the toy device 100a, 100b may include an accelerometer which may measure a change in acceleration when the toy device 100a, 100b is picked up by a child.
  • One or more sensor threshold values may be linked to one or more interaction statuses.
  • an acceleration change from 0.0 meters per second squared (m/s^2) to greater than 0.3 m/s^2 may be a threshold acceleration difference linked to the interaction status "someone playing with the toy."
  • when this threshold acceleration difference is measured by the accelerometer in the toy device, the interaction status may be determined to be "someone is playing with the toy."
  • a status of a toy device 100a, 100b may be detected by data received from multiple sensors.
  • the toy device 100a, 100b may include a sensor 102a, which may be a microphone, and a sensor 102b, which may be an accelerometer.
  • the accelerometer may detect a first acceleration reading of approximately 0.0 m/s^2 at a first time.
  • the accelerometer may detect a second acceleration reading of 0.23 m/s^2.
  • the change in acceleration measurements of 0.23 m/s^2 may exceed a threshold value of 0.1 m/s^2. This change in the acceleration measurement from a 0.0 m/s^2 initial value may be associated with the status "someone is playing with the toy" as shown in the status table above.
  • the microphone may detect audio content and identify the audio as being Karen's voice based on a voice recognition operation.
  • Karen may record samples of her voice using the microphone (e.g., audio input device 110 and/or sensor 102a) when she first registers her account with the toy device 100a, 100b.
  • Suitable speech recognition procedures may analyze the samples for frequencies and/or waveforms that are characteristic of Karen's voice and store these frequencies and/or waveforms as templates of Karen's voice.
  • this content may be compared to the stored templates (e.g., by processor 106).
  • the audio content may be determined to be uttered by Karen.
  • speech recognition procedures such as stochastic, probabilistic, and statistical techniques within natural language processing may be implemented to determine that Karen is speaking about the toy device 100a, 100b. For example, the words “pig,” “piggy,” “snort,” “oink,” and “pink” may be determined or predetermined to be associated with Karen's toy device because her toy device may be a pig. Speech recognition procedures may recognize these terms.
  • Data representing the accelerometer measurements, the identification of Karen, and the recognition of associated terms may be combined and coupled or linked to the status "Karen is playing with the toy.” Thus, when this combined set of data is received the status "Karen is playing with the toy" may be detected.
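The fusion of the accelerometer measurement, speaker identification, and keyword recognition described above can be sketched as a single decision function; the threshold and keyword list are illustrative assumptions.

```python
# Sketch of combining several signals (acceleration delta, speaker identity,
# recognized keywords) into one detected status, following the Karen example
# above. The threshold and keyword set are placeholders.
TOY_KEYWORDS = {"pig", "piggy", "snort", "oink", "pink"}


def detect_status(accel_delta, speaker_id, transcript_words,
                  accel_threshold=0.1, user="Karen"):
    """Return a detected status from combined sensor and speech data."""
    moved = accel_delta > accel_threshold
    is_user = speaker_id == user
    mentions_toy = bool(TOY_KEYWORDS & set(transcript_words))
    if moved and is_user and mentions_toy:
        return f"{user} is playing with the toy"
    if moved:
        return "someone is playing with the toy"
    return "no interaction detected"
```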
  • a status of a toy device 100a, 100b may be determined by data received from a pressure sensor integrated within the toy device or remotely from the toy device.
  • toy device 100a may contain sensor 102a, which may be a pressure sensor.
  • the pressure sensor may be a piezoelectric sensor that contains a material that converts mechanical stress into a signal based on a generated electric potential.
  • the pressure sensor may have sensing elements on the outer surfaces of toy device 100a, 100b. Signals representing pressure measurements may be compared to threshold pressure values, such as values representing the limits of the structural integrity of the toy device. These threshold values may be coupled or linked to the status "You're squeezing me too tightly!" Thus, when data from the pressure sensor reaches such a threshold, the status "You're squeezing me too tightly!" may be detected.
  • a status may be based on a measurement of one or more passive infrared (PIR) sensors, which may be included in the toy device 100a, 100b or located remotely from and communicatively coupled to the toy device 100a, 100b.
  • the PIR sensor may include radiation capture components composed of pyroelectric materials such as gallium nitride, cesium nitrate, polyvinyl fluorides, derivatives of phenylpyridine, cobalt phthalocyanine, other similar materials commonly used in PIR sensors, or any other material that generates energy when exposed to heat and that is suitable for the purposes of the disclosure.
  • the energy generating materials may be formed in a thin film and positioned parallel with a sensor face of the toy device or in other formations or locations suitable to capture incident infrared radiation.
  • a single PIR sensor or multiple PIR sensors may be employed in implementations of the disclosure.
  • PIR sensors may detect events such as motion or temperature changes.
  • a PIR sensor integrated into the surface of the toy device may be calibrated with a threshold background voltage measurement generated by the environment of the room or rooms in which the toy device is typically located. When a person walks in front of the PIR sensor, radiation emitted from the person may exceed this background threshold. Circuitry in communication with the PIR sensor may compare this measured energy to the background threshold value, and if the difference exceeds a threshold amount, then the presence of a person may be detected. This sensor data may be coupled to the status "Someone is in the room with the toy.”
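The PIR comparison described above reduces to calibrating a background level and flagging presence when a reading exceeds it by a margin; the margin value below is an assumption for the sketch.

```python
# Sketch of the PIR presence check described above: average quiescent
# readings into a background level, then detect a person when a reading
# exceeds background by a margin. The margin value is illustrative.
def calibrate_background(readings):
    """Average quiescent readings into a background threshold level."""
    return sum(readings) / len(readings)


def person_present(reading, background, margin=0.05):
    """Detect a person when the reading exceeds background + margin."""
    return (reading - background) > margin
```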
  • a status may be detected based on data received from one or more active infrared (AIR) sensors of the toy device 100a, 100b or located remotely from and communicatively coupled to the toy device 100a, 100b.
  • AIR sensors may be placed throughout a room where the toy device 100a, 100b is located.
  • These AIR sensors may be configured in an array such that they project an array of beams that define regions throughout a room, within which presence and depth may be measured.
  • a template made up of object characteristics of the child may be created and used to detect the presence of the child. For example, data may be collected from the AIR sensors when Karen walks throughout the room to determine object characteristics for Karen. For example, approximate heights and widths of Karen may be determined based on the presence of the child within regions of the AIR array. These object characteristics may define a template against which future readings of the array elements may be compared for the purposes of determining statuses. For example, measurements of the presence of someone within array elements that are outside of the range of array elements associated with object characteristics of Karen may be coupled to the status of "someone other than Karen is in the room with the toy."
  • AIR sensors may include an emission component such as a light emitting diode point source, a laser, or a lens-focused light source. Implementations of the disclosed subject matter may include non-point sources. Radiation may be emitted in a pattern such as a certain arrangement of projected pixels and other structured formats or unstructured radiation formats. For purposes of this disclosure, a pattern may include no more than a single region or a pattern may include multiple regions. For example, the pattern of radiation may be a single projected pixel or beam.
  • AIR sensors may capture radiation through capture components of the sensor.
  • Capture components may be any suitable radiation sensor.
  • the capture components may be image sensors such as photodiodes, charge-coupled devices, complementary metal-oxide-semiconductor devices, red-green-blue imaging cameras, red-green-blue-depth imaging cameras, infrared imaging sensors, and other components configured to detect electromagnetic radiation.
  • the AIR sensors may emit patterns of radiation from emissions components, and the reflected patterns may be captured by capture components housed in the AIR sensor. Patterns of radiation may define no more than a single region or a series of regions. For example, an emission may be an emission of a single pattern for a single instance (e.g. a light pulse), an emission of a single pattern for multiple instances, or an emission may be an emission of multiple patterns for multiple instances.
  • the patterns of radiation may vary in arrangement within a sequence or they may be constant. The time periods between emissions of instances of patterns of radiation may be constant or they may vary.
  • Variations may be detected based on techniques such as structured light techniques, stereo techniques, and time-of-flight sensing.
  • fixed or programmable structured light techniques may be employed to detect variations in a pattern of radiation such as the dimensional spreading, geometrical skewing, or depth of its elements in order to determine information about an object.
  • a time-of-flight variation may be measured between a pulse emission of a pattern of radiation and the captured reflection of that pattern of radiation, or a time-of-flight variation may be measured by determining the phase shift between an emitted pattern of radiation modulated by a continuous wave and the captured reflection of that pattern of radiation. Time-of-flight variations such as these may be used to determine depth information of an object.
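The pulse time-of-flight relation mentioned above reduces to a one-line computation: the emitted pulse travels to the object and back, so the depth is half the round-trip distance traveled at the speed of light.

```python
# Depth from pulse time-of-flight: the pulse travels to the object and
# back, so depth = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # m/s


def depth_from_time_of_flight(round_trip_seconds):
    """Return the object depth in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For instance, a 2 ns round trip corresponds to an object roughly 0.3 m from the sensor.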
  • stereo techniques may be employed to detect a variation between the location of an aspect of a pattern of radiation captured in a first capture component and the location of the aspect in a second capture component. This variation may be used to determine depth information of the object from which the pattern is reflected.
  • Object characteristics of an object may be determined based upon a detected presence of the object within a certain region. For example, the initial width of emitted beams of radiation and the rate of width spreading as depth increases may be known from the configuration of the emission components.
  • the total number of beams Karen traverses at any given moment may be detected based on the reflection of the traversed beams into the capture component.
  • the depth of Karen within the array may be determined by, for example, geometric or time of flight sensing techniques.
  • the width between two beams at Karen's location in the array may be determined based on the detected depth.
  • the total number of beams traversed at a given moment may then be detected and the widths between them summed, resulting in an approximate width of Karen.
  • an approximate height of Karen may also be determined. This approximate width and approximate height may then serve as object characteristics in a template for Karen.
  • the template may be coupled to the status "Karen is in the toy's room".
  • the detection of object characteristics outside of Karen's width and height within Karen's room may be coupled to the status "someone other than Karen is in the toy's room."
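The width estimate walked through above can be captured with simple geometry: beam spacing grows with depth, and an object's approximate width is the summed spacing of the adjacent beams it blocks. The base spacing and spread rate are assumed parameters of the hypothetical beam array.

```python
# Sketch of the AIR beam-array width estimate: spacing between adjacent
# beams grows linearly with depth, and an object's approximate width is
# the summed gap width of the beams it blocks. Parameters are illustrative.
def beam_spacing(base_spacing, spread_rate, depth):
    """Spacing between adjacent beams at the given depth."""
    return base_spacing + spread_rate * depth


def approximate_width(n_beams_blocked, base_spacing, spread_rate, depth):
    """Approximate object width from the count of blocked beams."""
    if n_beams_blocked < 2:
        return 0.0  # a single beam gives no width information
    gaps = n_beams_blocked - 1
    return gaps * beam_spacing(base_spacing, spread_rate, depth)
```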
  • a status may also be based on an imaging sensor, which may be located within toy device 100a, 100b or remotely from the toy device 100a, 100b.
  • Sensor data, such as imaging sensor data, may be combined with other data in a sequence or usage pattern.
  • a remote sensor such as an imaging device, may detect events within the room where the toy device is located.
  • the imaging device may be in communication with the toy device 100a, 100b such as by a wireless radio and/or via network 7.
  • the imaging device may be a video camera and may detect motion in room intermittently over a period of time. For example, Karen may walk into the room and play with several toys within the field of view of the camera over a ten-minute period.
  • the motion detection data may be provided to the toy device.
  • the accelerometer sensor may not detect a change in value.
  • the combination of the detection of motion by a remote sensor over a threshold period of time when no acceleration was detected by the accelerometer may be a usage pattern that may be coupled to the status "Karen is present and not playing with the toy.”
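The usage pattern above, combining remote motion detection over a threshold period with a quiet accelerometer, can be sketched as a simple rule; the threshold values and status strings are illustrative:

```python
# Sketch of the combined-sensor usage pattern described above: motion
# detected by a remote imaging sensor over a threshold period while the
# toy's own accelerometer reads no change maps to a "present but not
# playing" status. Thresholds and names are illustrative assumptions.
def infer_status(motion_seconds: float, accel_delta: float,
                 motion_threshold_s: float = 600.0,
                 accel_epsilon: float = 0.05) -> str:
    motion_detected = motion_seconds >= motion_threshold_s
    toy_moved = abs(accel_delta) > accel_epsilon
    if motion_detected and not toy_moved:
        return "Karen is present and not playing with the toy"
    if motion_detected and toy_moved:
        return "Karen is playing with the toy"
    return "no one is in the toy's room"
```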
  • An imaging device may also identify Karen through facial recognition techniques. For example, when registering with the toy device, Karen may present her face to the image sensor 109 and an image of her face may be captured. Identifying features of Karen's face may be located and stored as a template. For example, facial recognition operations performed by the processor 106 may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw in Karen's image. These features may then be stored as a template for later comparisons during identification procedures. Other template generation processes are also contemplated by this disclosure.
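Template comparison of the kind described, where stored facial features are later matched against a captured candidate, might in miniature look like the following distance check; real facial recognition uses far richer features, so this is only a sketch of the compare-against-template step:

```python
# Minimal sketch of template-based identification: a registered feature
# vector (e.g., relative positions and sizes of facial landmarks) is
# compared to a candidate vector by Euclidean distance against a
# tolerance. Feature choice and tolerance are illustrative assumptions.
import math

def match_template(template: list[float], candidate: list[float],
                   tolerance: float = 0.1) -> bool:
    """True when the candidate features fall within tolerance of the
    stored template, i.e., the face is treated as a match."""
    distance = math.sqrt(sum((t - c) ** 2 for t, c in zip(template, candidate)))
    return distance <= tolerance
```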
  • Identifying data received from imaging devices (e.g., image sensor 109) performing facial recognition may be coupled alone or with other data to a status.
  • a status may also be determined from other data such as data provided directly from Karen. For example, Karen may identify herself by saying her name or a password. This data may be received by audio input device 110 and compared to authorized user information stored in memory in communication with the toy device 100a, 100b. If Karen's data is matched then the speaker of the message may be identified as Karen.
  • the toy device 100a, 100b may receive data indicating that voice recording and transmission functions are being accessed within the toy device 100a, 100b. For example, it may be determined by the processor 106 that control operations for audio input device 110 and/or the network interface 104 are being accessed. This determination may be coupled to the "Karen is recording a message" status if the microphone is recording a message or the "Karen is sending a message" status if the network interface is being accessed.
  • one or more interactions between the child and toy device 100a, 100b may result in a status.
  • speech recognition procedures of the toy device may recognize certain types of speech as the child telling a joke.
  • Karen may utter the phrase "knock-knock". These terms may be stored and coupled to an automatic behavior of the toy device of emitting the statement "who's there?" when recognized. Whatever statement Karen utters after the "who's there?" emission may be automatically mimicked by the toy device with the ending phrase "who?" appended. The child may then state the punch line of the joke, and as a result, the toy device may emit laughing sounds. Completing a script such as this may be coupled to the status "The toy laughed at Karen's joke."
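The knock-knock script can be modeled as a small state machine, sketched here with illustrative state names:

```python
# Sketch of the knock-knock script as a tiny state machine: recognizing
# "knock-knock" starts the script, the next utterance is mimicked with
# "who?" appended, and the punch line triggers laughter. State names
# and responses are illustrative assumptions.
class KnockKnockScript:
    def __init__(self):
        self.state = "idle"

    def hear(self, utterance: str) -> str:
        """Returns the toy's response to a recognized utterance."""
        if self.state == "idle" and utterance.lower() == "knock-knock":
            self.state = "asked"
            return "who's there?"
        if self.state == "asked":
            self.state = "setup"
            return utterance + " who?"
        if self.state == "setup":
            self.state = "idle"
            # completing the script could couple to the status
            # "The toy laughed at Karen's joke"
            return "ha ha ha!"
        return ""
```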
  • the toy device may provide an indicator of the status to a remote device (e.g., device 10, 11, server 13, or the like).
  • the status, "Karen is playing with the toy” may be linked to an indicator associated with a record of the status stored on Karen's parent's mobile device 10, 11.
  • the toy device may transmit the associated indicator using the wireless interface to Karen's parent's mobile device 10, 11.
  • the sensor data being provided for at least a portion of the status determination may be transmitted to a remote server (e.g., server 13 and/or remote platform 17) that is distinct from the mobile device 10, 11 and the toy device 100a, 100b.
  • an imaging device may include a wireless radio that is in communication with the remote server (e.g., server 13 and/or remote platform 17) over the Internet or a virtual private network (e.g., network 7).
  • sensor data from an accelerometer (e.g., sensor 102a, 102b) may be transmitted by the network interface 104 over the Internet or a virtual private network (e.g., network 7) to the remote server (e.g., server 13).
  • the remote server (e.g., server 13 and/or remote platform 17) may then determine the status of the toy device 100a, 100b based on the received sensor data and other data and provide an indicator to the parent's device 10, 11, or another authorized person's device 10, 11.
  • a toy device 100a, 100b behavior may be selected based on the determined status.
  • the toy behavior may be user-directed or automated.
  • a user-directed behavior may be a behavior selected by a parent on their mobile device 10, 11 in response to receiving a notification of a status of the toy device 100a, 100b on an application on their device 10, 11.
  • a parent's device 10, 11 may receive a notification via an application that the status of the toy device 100a, 100b is "Karen is in the toy's room".
  • the parent may choose to cause the toy device 100a, 100b to execute a behavior, operation, and/or output of providing a direct message to Karen.
  • the parent may select option 206 to record a message on the application implemented on device 10, 11 and record the message "Hello Karen, I hope you're having fun".
  • the application may then encode this recording and transmit it to the toy device 100a, 100b along with an instruction to execute the behavior of audibly emitting the message via the network 7.
  • the processor 106 of toy device 100a, 100b may execute the instruction and cause the audio output device 112 to emit "Hello Karen, I hope you're having fun.”
  • a toy device 100a, 100b behavior may be selected based on a graphical icon displayed on the parent's mobile application, such as an "audible emoji."
  • the icons may be customized based on the toy device 100a, 100b of the child such that if the toy device 100a, 100b is a pig, the graphical icons may represent behaviors that a pig might exhibit, such as emitting the sounds of "oink” or "Let's go play in the mud.”
  • when the parent's mobile device 10, 11 receives a status notification of "Karen is in the room with the toy device," the parent may choose to select the behavior "oink" by selecting a graphical icon on the application implemented on their mobile device that represents a pig oinking.
  • the application may transmit an instruction to the toy device that when executed by a processor of the toy device, causes a speaker within the toy device to emit the sound "oink.”
  • the application stored and/or executed at least in part on the mobile device 10, 11 may transmit an indicator of the selected behavior to a remote server (e.g., server 13), which may transmit an instruction to the toy device 100a, 100b to configure the toy device 100a, 100b to perform one or more operations and/or outputs (e.g., exhibit a behavior), such as emitting the sound "oink" from the audio output device 112.
  • a behavior may be selected automatically based on a determined status. For example, if the determined status is "Karen is present and not playing with the toy" and such a status is based on Karen being present for an extended period of time, such as twenty minutes, then a behavior may be automatically selected, such as causing the toy device to emit the statement "come play with me." Other behaviors may be selected in a similar manner.
  • a parent or other authorized individual may be presented with the option to instruct the toy device to perform the behavior of emitting "Pick me up" when the status "Karen is present and not playing with the toy" is detected.
  • Automatically selected behaviors may be default behaviors, may be configured and stored by a parent or other authorized individual, or may be learned and automatically generated by employing machine learning techniques. For example, a determined status may be "Karen is in the room with the toy." This status may be coupled to the automatic selection of the behavior that causes the toy device 100a, 100b to emit the statement "Hello Karen". An authorized user may also configure automatic selection of behaviors via their own instance of an application in communication with the toy device 100a, 100b. For example, a parent may link the status "someone is in the room with the toy device" to the behavior of the toy device emitting the statement "who's there?" The parent may pre-record this statement or it may be generated by speech generation procedures implemented in the toy device.
  • Implementations of this disclosure may also employ machine learning techniques. Suitable machine learning techniques may include linear regression, naive Bayes, neural networks, logistic regression, and optimized logistic regression. Machine learning techniques may analyze prior statuses of the toy device and behaviors executed by the toy device to determine an expected behavior. For example, Karen's toy device and another toy device belonging to Craig may have participated in an interactive game where the toy devices function as elements of the game. For example, Karen and Craig may have played a game called "Candy Country" that involved features such as a map, accumulated points, questions, and the option to request help from your toy if the question is too difficult.
  • Karen may be asked a question that is too difficult, such as "Where does wind come from?"
  • the toy may access help files configured to give players a hint, or may access public sources such as over the Internet.
  • the toy device may be an element of the Candy Country game.
  • Karen's toy device may store a record of the game, identifiers of the toys and people who participated, the score, and who was the winner.
  • Karen's toy device may detect that it is in the presence of Craig's toy device.
  • Karen's toy device may detect Craig's toy device by means of near-field communication techniques, such that detection indicates Craig's toy device is within several meters of Karen's toy device. Based on the historical behaviors of Karen's and Craig's toy devices, an automatic behavior may be generated that causes Karen's toy device to emit the statement "Does anyone want to play Candy Country?"
  • an option for a selected behavior may be based on further sensor data collected after the behavior is executed by the toy device 100a, 100b.
  • Karen's parent may choose to transmit (e.g., from device 10, 11) a variety of customized messages to Karen's toy device 100a, 100b.
  • One such message may be "Karen you're a silly goose.”
  • the audio input device 110 of the toy device 100a, 100b may detect laughter each time the words "you're a silly goose” are recognized by speech recognition procedures.
  • a threshold frequency of detected laughs may be 0.333.
  • Messages including the terms "you're a silly goose” may be followed by detected laughs at a frequency of 0.666.
  • the phrase “you're a silly goose” may receive detected laughs in excess of a threshold amount.
  • the phrase "you're a silly goose" may be automatically added to the predefined messaging options by the application executing on the parent's device 10, 11.
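The laugh-frequency heuristic above (an observed rate of 0.666 against a 0.333 threshold) can be sketched as a small bookkeeping step; the data shape is an illustrative assumption:

```python
# Sketch of the laugh-frequency heuristic: the rate of detected laughs
# following a phrase is compared to a threshold, and phrases above the
# threshold are promoted into the predefined messaging options. The
# stats layout (phrase -> (times sent, laughs detected)) is assumed.
def laugh_frequency(times_sent: int, laughs_detected: int) -> float:
    """Fraction of sends of a phrase that were followed by a laugh."""
    return laughs_detected / times_sent if times_sent else 0.0

def promote_phrases(stats: dict[str, tuple[int, int]],
                    threshold: float = 0.333) -> list[str]:
    """Phrases whose laugh frequency exceeds the threshold."""
    return [phrase for phrase, (sent, laughs) in stats.items()
            if laugh_frequency(sent, laughs) > threshold]
```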
  • a status such as "Karen is present in the room" may be coupled to an automatic behavior that outputs questions to Karen.
  • one or more questions may be asked of Karen (e.g., output by the audio output device 112) when she is present, such as "what is your favorite food?" or "What is your favorite color?" More complex questions may be asked, such as fill-in-the-blank questions or "mad-lib" type questions that may be built into a story.
  • Karen's answers to these questions may be recorded (e.g., by the audio input device 110) by the toy device 100a, 100b and shared with other members of her social network.
  • FIG. 11 and FIG. 12 show example methods for determining toy usage according to implementations of the disclosed subject matter.
  • Instructions for executing the example operations shown in FIGS. 11 and 12 may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a, 100b.
  • the instructions may be executed by a processor such as processor 106 of toy device 100a, 100b.
  • operations shown in FIGS. 11 and 12 may be stored as instructions on a remote device such as a device 10, 11 of FIG. 3 or server 13 of FIG. 4.
  • the instructions may be stored in storage of the remote device and be provided via a network interface to the processor 106 of the toy device 100a, 100b or may be executed by one or more processors of the remote device.
  • FIG. 11 shows an example method of determining toy usage according to an implementation of this disclosure.
  • a status of a toy device 100a, 100b may be detected at 1110.
  • a sensor 102a, 102b of a toy device 100a, 100b may detect sensor data that is coupled to a status.
  • an indicator of the status may be provided to a remote device.
  • network interface 104 of the toy device 100a, 100b may transmit an indicator of the status to mobile device 10, 11 belonging to a parent.
  • an instruction to execute a selected behavior, operation, and/or output may be received.
  • the toy device 100a, 100b may receive a message recorded by the parent on their mobile device 10, 11 and an instruction for the toy device 100a, 100b to emit the recorded message.
  • the selected behavior and/or operation may be executed.
  • the toy device may emit the recorded message.
  • FIG. 12 shows an example method according to an implementation of this disclosure.
  • an indicator of a status of a remote device (e.g., toy device 100a, 100b) may be received by a parent's mobile phone or tablet (e.g., device 10, 11).
  • a behavior, operation, and/or output may be selected based on the indicated status.
  • a parent may record a message and select the behavior of the toy device 100a, 100b to output the message in response to receiving a notification of the toy device's status.
  • an instruction may be provided to execute the selected behavior.
  • the parent's tablet device may transmit an instruction to the toy device 100a, 100b so that the toy device 100a, 100b outputs the parent's recorded message (e.g., via the audio output device 112).
  • the usage of a toy device 100a, 100b may be tracked, and rewards may be issued for a child based on the child's usage of the toy device 100a, 100b.
  • a status of a toy device 100a, 100b belonging to a child such as Karen may be determined (e.g., by the processor 106) in a manner such as that described throughout this disclosure.
  • a status such as "Karen is sending a message" may be determined by processor 106.
  • a usage level for Karen may be updated as a result of the determined status.
  • a server 13 remote from the toy device 100a, 100b may receive an indicator of the status and increase a record of the number of times the toy device has entered that status by one. Above a certain usage threshold, a reward may be provided to Karen. For example, if Karen exceeds 50 messages sent, then Karen's toy device may receive a new song that Karen can play. Karen's updated message interaction level may be compared to such a usage threshold and it may be determined that Karen's usage level has exceeded the threshold. For example, Karen's most recent message may have been her 51st message. Based on the comparison, a reward may be provided to Karen. For example, as a result of exceeding 50 messages, the remote server may send an instruction to Karen's toy device to enable playback of a new song.
  • usage levels for various statuses of a toy device 100a, 100b may be tracked for a user.
  • the status "someone is playing with the toy” may be based on data from a sensor 102a, such as an accelerometer, and data indicating the number of times a function of the toy device that is associated with play is accessed.
  • Such functions may include accessing the user input interface 114, playing a message via audio output device 112, and recording a message on audio input device 110.
  • the total number of times the status "someone is playing with the toy" occurs may be calculated, and this sum may serve as an indicator of usage of the toy device 100a, 100b.
  • subsets of total usage may be tracked, such as the total number of times a child such as Karen sends a message using the toy device, the number of times Karen sends a message to a particular person, such as her grandmother, and the frequency at which messages are sent, such as the total number of messages sent per day.
  • Karen's usage levels may be maintained in an account accessible by Karen and other authorized individuals, such as her parents.
  • This account may be stored in a device such as server 13 or database 15 of FIG. 4, fixed storage 23 of device 10 of FIG. 3, or fixed storage 116 or memory 108 of toy device 100a.
  • the account may be accessed through application functionality such as option 204 to manage a child account of the application implemented on device 10.
  • More complex interaction levels may also be tracked. For example, Karen's toy device 100a, 100b may assist in monitoring Karen's completion of certain tasks. For example, Karen may be asked by her mother to clean her room once a week. After successfully completing cleaning her room Karen's parent may provide Karen a password to share with her toy device 100a, 100b.
  • Karen's parent may preconfigure this password via the application so that it is recognized by the toy device and associated with the status "Karen cleaned her room".
  • the status "Karen cleaned her room” may be determined by audibly recognizing the preconfigured password from Karen.
  • the usage level for that status may be updated.
  • Similar tasks such as brushing teeth, making the bed, or taking a nap may also be updated.
  • Sensor data indicative of such tasks may also be collected and serve as a basis for a status determination.
  • Tokens or other demarcations of value may be assigned to each status update and serve as a basis for tracking usage levels. For example, each time that Karen cleans her room, rather than increase the usage level by one, a predetermined number of tokens may be assigned to her account. Differing numbers of tokens may be assigned for different usage levels. For example, an update to the status level for "Karen cleaned her room” may count for 50 tokens, whereas an update to "Karen played with her toy" may only count for a single token. Various other denominations and allocations of tokens may be assigned for various types of status updates.
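The token allocation above (50 tokens for cleaning the room, 1 for playing with the toy) can be sketched as a lookup table; the values mirror the example and are otherwise arbitrary:

```python
# Sketch of token allocation: each status update is worth a configured
# number of tokens rather than a flat increment. The values mirror the
# example above and are otherwise illustrative.
TOKEN_VALUES = {
    "Karen cleaned her room": 50,
    "Karen played with her toy": 1,
}

def add_tokens(balance: int, status: str) -> int:
    """Return the new token balance after a status update."""
    return balance + TOKEN_VALUES.get(status, 0)
```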
  • A toy device 100a, 100b may function as an element of a solo or multi-person game such as a spelling contest or "Hangman."
  • Karen and her toy device may play against Craig and his toy device 100a, 100b.
  • Each toy device 100a, 100b may take turns generating the word for hangman and may periodically generate hints to the players if they are stuck.
  • Karen's toy device may also track her performance in such games. For example, the toy device may track how many letters Karen guesses correctly, how many games she wins, who she wins against, her winning percentage, and so forth. Each time a tracked event occurs in the game, a corresponding status may be determined.
  • the usage level for this status may also then be increased accordingly. For example, Karen may win hangman against Craig by guessing an 11-letter word. Her previous largest word may have been only 9 letters. Thus, her usage level for the status "Hangman victory" may be increased to 11 letters.
  • an updated usage level may be compared to a threshold level to determine whether a reward may be granted.
  • a threshold for the status "Karen is messaging her mom" may be three times in a day. If Karen exceeds this threshold then she may be presented with a reward.
  • a threshold for "Karen played with her toy” may be set every 50 updates. Every time Karen exceeds the 50 status update threshold for "Karen played with her toy,” a reward may be triggered.
  • a threshold for the status "Karen won the spelling contest" may be a single update. Thus, each time Karen wins a spelling contest, she may be eligible for a reward.
  • Games such as spelling contests, may be held individually by Karen alone, or as a contest between Karen and one or more other children.
  • the games may be coordinated by a single toy device, such as Karen's toy device, or each child's toy device may participate.
  • Other games involving toy devices, such as those discussed in this disclosure, may be tracked in similar ways.
  • Quantities of tokens may also serve as thresholds. For example, different rewards may be triggered by different token quantities, which may serve as thresholds.
  • Tokens may be generated based on a variety of usage level updates. Thus, several different usage level updates may contribute to a single token threshold. For example, Karen may get seven tokens for updating the status "Karen is playing with her toy" seven times, and 10 tokens for updating the usage level of "Karen brushed her teeth" one time. A certain reward may be triggered at 15 tokens. Usage level updates may be combined to result in 17 tokens and enable Karen to receive the reward.
  • a reward may be determined (e.g., by processor 106) when a threshold is reached.
  • a reward may be triggered automatically or it may be made available and selected by a user. For example, as discussed above, Karen may become eligible for a certain reward upon amassing over 15 tokens. For example, this reward may be a new song.
  • Karen may receive a notification via her toy device 100a, 100b when she has reached the threshold level and be presented with reward options (e.g., via the audio output device 112 and/or the display 120).
  • the audio output device 112 of Karen's toy device 100a, 100b may make a sound indicating a reward is available, and Karen may query the toy device 100a, 100b (e.g., via a voice command received by the audio input device 110) to determine what the reward options are. For example Karen might utter "what are my choices" upon hearing the audible alert. Voice recognition procedures implemented within the toy device 100a, 100b may detect Karen's query and emit three song options (e.g., via the audio output device 112 and/or the display 120) she can choose from. Karen may state the name of one of the songs and thereby make her choice.
  • another authorized user may choose a reward for Karen.
  • a notification may be provided to an application executing on Karen's parent's smart phone (e.g., device 10, 11).
  • Karen's parent may be presented with the three song options as graphical representations on her mobile device 10, 11.
  • Karen's parent may select one of the options.
  • the application may transmit the song or a link to the song to Karen's toy device 100a, 100b along with an instruction to play the song or present it as an option for Karen to choose.
  • Rewards may also be automatically generated. For example, upon winning at Hangman, Karen may be automatically alerted (e.g., via the audio output device 112) that she has won an award. The award may be tokens rather than a content-based reward. The alert may be different than other alerts output from the toy device 100a, 100b in order to make an award alert identifiable (e.g., the tone or sound of the alert may be different). As discussed above, an award may be generated every time the usage level of "Karen is playing with her toy" exceeds a threshold value such as 50. Each time this threshold is exceeded, a reward that is a new feature of Karen's toy device 100a, 100b may be downloaded to the device. For example, the toy device 100a, 100b may be able to play a new game, speak a new language, or tell new jokes. In this way, rewards may serve to keep the experience of the toy device 100a, 100b fresh and new for the child.
  • the award may be provided to the toy device.
  • Rewards may take various formats.
  • a reward may be a downloaded feature of the toy device 100a, 100b, such as a new song, parent pre-recorded content such as a joke, a new story, a new accent, a new skill set, a new game, or a new ability or feature within a game.
  • new game skills may be downloaded to Karen's toy device, such as the ability to protect Karen's character in the game from being susceptible to certain obstacles in the game, or Karen's character may win double the award when certain circumstances in the game are encountered.
  • Karen's toy device 100a, 100b may access information, such as the favorite colors, songs, or foods, of other members of Karen's social network, such as from the one or more devices communicatively coupled to network 7 of FIG. 4.
  • Physical rewards may also be provided to Karen. For example, upon reaching a threshold level of tokens, Karen may be eligible to receive another toy device 100a, 100b or physical accessory to her current toy device 100a, 100b.
  • Karen's toy device 100a, 100b may be a Pig, and she may receive a new donkey toy device as a reward, or Karen may receive a "pig pen" set for her pig to inhabit, or clothing for her pig such as a hat and suspenders.
  • Physical rewards may be mailed to Karen or her parent.
  • FIG. 13 shows an example method for providing a reward based on usage according to an implementation of the disclosed subject matter.
  • Instructions for executing the method may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a.
  • the instructions may be executed by a processor such as processor 106 of toy device 100a.
  • the methods of this disclosure may be stored as instructions in fixed storage 23 or memory 27 of a remote device such as a device 10, 11 of FIG. 3 or server 13 of FIG. 4.
  • the instructions may be stored in storage of the remote device and be provided via a network interface to a processor 106 of the toy device 100a, 100b or may be executed by one or more processors of the remote device.
  • an indicator of a status of a toy device 100a, 100b may be received by a remote server (e.g., server 13).
  • the remote server may update a usage level based on the received indicator of the status. For example, the remote server may increase the usage level by a certain value.
  • the updated usage level may be compared to a threshold level. For example an updated usage level may be at 51 and a threshold level may be at 50. The remote server may compare 51 to 50 and determine that the usage level exceeds the threshold level.
  • a reward may be determined based on the comparison. For example, exceeding the threshold of 50 may be linked to an automatic reward of a new song. Thus, the remote server may determine that the new song is the determined reward.
  • an instruction may be provided to the toy device 100a, 100b to execute the reward.
  • the new song may be transmitted by the remote server along with an instruction to play the new song at the toy device 100a, 100b.
  • a toy behavior may be based on a context.
  • a context may include a status of a toy device 100a, 100b as described throughout this disclosure and other data, such as a profile of a child associated with a social network (e.g., a toy-to-toy social network), time and date data, seasonal data, weather data, and a child's behavior.
  • Data from sources in communication with the toy device 100a, 100b may be received and compared to a condition associated with a particular context.
  • the particular context may be coupled to a particular toy behavior. If the context condition is satisfied, then the instructions may be provided to the toy device to execute the behavior.
  • a context may include data stored in a profile for a child included on a social network such as is described throughout in this disclosure. For example, a child's interests such as favorite songs, foods, games, colors, pets, sports, weather, holidays, flavors, friends, books, plays, movies, toys, and so forth may be stored on the child's profile.
  • the behavior of a toy, such as toy device 100a, 100b in FIGS. 1-3 may be based on this context.
  • a child such as Karen may request that her toy device 100a, 100b sing her a song, or her toy device 100a, 100b may execute a procedure that triggers the toy device 100a, 100b to select a song to be output (e.g., via the audio output device 112).
  • the toy device 100a, 100b may have access to source data such as library of songs stored on a remote server, such as server 13 shown in FIG. 4.
  • the source data from this library may include artist and genre data for each song in the library.
  • a context for Karen may include context conditions such as that a song emitted by the toy device 100a, 100b may either be by an artist of a favorite song in her profile or a genre of a favorite song in her profile.
  • Artist data and genre data may be retrieved from the song library and compared to the context condition.
  • the artist data may match the artist data contained in the context condition and thus satisfy the condition. Satisfaction of the artist context condition may be coupled to the toy behavior of performing the song. As a result, an instruction may be transmitted to Karen's toy device to emit audio content of the song. Similar procedures may be executed when determining a story to be told by the toy device 100a, 100b.
  • the context conditions may include author, genre, and length.
  • a toy behavior of telling a child about an event may be coupled to the context of an event indicated in another person's profile on a social network.
  • a social network may be a "toy-to-toy" social network where association with a toy device 100a, 100b may be needed for participation.
  • the toy behavior of reminding the child of her mom's birthday may be coupled to the context of her mom's birthday.
  • Context conditions may include it being the date of her mom's birthday.
  • Source data may include the current date. The current date may be received and it may match the date of her mom's birthday. The condition may be satisfied and the toy device may exhibit the behavior of emitting the statement "It's your mommy's birthday today! Don't forget to wish her a happy birthday!"
  • a context may include time, date, and seasonal data.
  • a toy behavior of Karen's toy device emitting audio content associated with cold weather may be coupled to the context of a winter season.
  • a context condition for the winter season context may be any month between November and March.
  • a source data may be a calendar component implemented on a device such as toy device 100a, 100b, user device 10, 11, or server 13.
  • source data may be retrieved indicating that the current month is February. The source data may be compared to the context condition, and the condition may be satisfied because February is included in the months between November and March.
  • an instruction may be transmitted to Karen's toy device 100a, 100b to emit audio content associated with cold weather.
  • Karen's toy device 100a, 100b may emit sounds of shivering or periodically exclaim "Brrrrr!"
  • Similar procedures may be executed when determining recommendations to be told by the toy device 100a, 100b based on the season.
  • a toy device 100a, 100b may recommend wearing sunscreen or a hat during the summer months.
  • Similar procedures may be executed when selecting a tone of speech for Karen's toy device 100a, 100b.
  • the toy device 100a, 100b may have a back-story that includes its "birthday".
  • a toy behavior of having a happy or cheery tone of voice may be coupled to the context of being the toy device's birthday.
  • Context conditions may include the date of birth for the toy device.
  • Source data may include the current date.
  • the toy device may exhibit a cheery tone.
  • the pitch of Karen's toy device's voice may increase or more enthusiastic language may be emitted.
  • the toy device may use terms such as "fantastic" in place of "good” or "tremendous" in place of "a lot.”
  • the toy device 100a, 100b may be a turkey.
  • a toy behavior of sounding afraid may be coupled to the context of being near the Thanksgiving holiday.
  • Context conditions may include a date before the fourth Thursday in the month of November.
  • the toy device may exhibit fearful behavior.
  • the toy device may speak in a soft tone or periodically state "I'm afraid!" Similar procedures may be executed related to other holidays such as Valentine's Day. For example, if the context condition of being February 14th is satisfied, then the toy device may periodically emit "Happy Valentine's day!"
  • a context may include a weather prediction.
  • a toy behavior of reminding the child to pack an umbrella may be coupled to the context of a rainy day.
  • a context condition may include a weather forecast that predicts over 50% chance of rain.
  • the source data may be weather forecast data obtained from a weather site on the Internet. If the retrieved weather forecast includes a greater than 50% chance of rain, then the context condition may be satisfied. As a result, the toy device 100a, 100b may state "Good Morning! It's going to rain today, so don't forget to take your umbrella!"
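A minimal sketch of the rain-forecast condition above, assuming the forecast probability has already been retrieved (the data layout and names are hypothetical):

```python
def should_remind_umbrella(rain_probability: float, threshold: float = 0.5) -> bool:
    """Context condition: the forecast predicts over a 50% chance of rain."""
    return rain_probability > threshold

# Source data: a forecast value obtained from a weather service (hypothetical).
forecast = {"rain_probability": 0.7}
if should_remind_umbrella(forecast["rain_probability"]):
    # Condition satisfied: the toy may emit the reminder.
    message = ("Good Morning! It's going to rain today, "
               "so don't forget to take your umbrella!")
```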
  • a context may include a child behavior.
  • a context may include the child falling asleep.
  • Context conditions may include a sound template of the child sleeping.
  • Source data may include sensor data, such as audio content collected from a microphone and processed by audio input device 110 of toy device 100a, 100b shown in FIG. 1.
  • the audio input device may have recorded and stored audio of the child sleeping at a prior time. Characteristic features of the audio data may be extracted by one or more signal processing operations. These features may be stored as a template of the child sleeping.
  • the toy device may be exhibiting the behavior of telling the child a story. While exhibiting this behavior, an audio source data may be received through the microphone. This source data may be compared to the stored template of the child sleeping. If there is a match, then the context condition may be satisfied, and an instruction may be transmitted to the toy device to gradually reduce the volume of the story telling.
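One plausible way to compare live audio features against the stored sleep template is cosine similarity; the feature vectors and the match threshold below are invented for illustration, and real signal-processing front ends would produce richer features:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Stored template of the sleeping child (illustrative feature vector,
# e.g. averaged spectral energies extracted from prior recordings).
sleep_template = [0.1, 0.05, 0.02, 0.01]

def matches_sleep_template(features, threshold=0.95):
    """Context condition: live audio features match the sleep template."""
    return cosine_similarity(features, sleep_template) >= threshold

live_features = [0.11, 0.05, 0.02, 0.01]
if matches_sleep_template(live_features):
    # Condition satisfied: fade the story volume down.
    action = "reduce_story_volume_gradually"
```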
  • a context may include the child waking up.
  • the context may be coupled to the toy behavior of exhibiting a morning wake up routine.
  • Context conditions may include a motion being detected over the bed of the child.
  • Source data may include sensor data, such as data from a PIR sensor and/or sensor 102a, 102b.
  • a PIR sensor may be located with its sensor face directed at the bed of the child.
  • the sensor may be included in the child's toy device 100a, 100b or remote from the toy device 100a, 100b.
  • Motion data may be received from the PIR sensor.
  • the received motion data may match the context condition of detecting motion over the bed of the child.
  • the context condition may be satisfied and an instruction may be provided to the toy device 100a, 100b to exhibit the morning wake up routine.
  • the toy device may gradually increase the volume on a favorite song of the child and may state "Good morning!"
  • instructions for executing some or all of the operations shown in FIG. 14 may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a.
  • the instructions may be executed by a processor such as processor 106 of toy device 100a.
  • some or all of the operations shown in FIG. 14 may be stored as instructions in fixed storage 23 or memory 27 of a remote device such as a device 10 of FIG. 3 or server 13 of FIG. 4.
  • the instructions may be stored in storage of the remote device and be provided via a network interface 104 to a processor 106 of the toy device 100a, 100b or may be executed by one or more processors of the remote device.
  • FIG. 14 shows an example method according to an implementation of this disclosure.
  • a source data may be received, and at 1420 the source data may be compared to a context condition associated with a context.
  • a particular toy behavior may be coupled to the context.
  • the particular toy behavior may be selected based on the context.
  • an instruction may be provided to a toy device to execute the selected behavior.
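The method of FIG. 14 can be sketched as a small dispatch routine: receive source data, compare it to each context condition (the comparison at 1420), select the coupled behavior, and return the instruction for the toy device. The particular condition/behavior pairings below are illustrative assumptions, not the disclosure's own couplings:

```python
# Illustrative contexts: each couples a condition test to a toy behavior.
contexts = [
    {"condition": lambda d: d.get("month") in (11, 12, 1, 2, 3),
     "behavior": "emit_cold_weather_audio"},
    {"condition": lambda d: d.get("rain_probability", 0) > 0.5,
     "behavior": "remind_umbrella"},
]

def select_behavior(source_data: dict):
    """Compare received source data to each context condition and
    return an instruction for the first behavior whose condition is
    satisfied, or None if no condition matches."""
    for context in contexts:
        if context["condition"](source_data):
            return {"instruction": "execute", "behavior": context["behavior"]}
    return None
```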
  • a toy behavior may be based on other toy devices in its proximity.
  • a first toy device may detect a second toy device based on received sensor data such as data received over a network in accordance with a near field communications (NFC) protocol.
  • joint behaviors for both toys may be triggered. For example, if the first toy device is singing a song, an identifier of the song and track location data may be provided to the second toy device.
  • the second toy device may also sing the song.
  • the received sensor data may also include an identifier of the second toy device. The identifier may be used to determine past interactions between the first toy device and the second toy device.
  • a behavior of the first toy device may be selected based on the past interaction data. For example, a game played with the second toy device in the past may be suggested.
  • a proximity may be defined to be various distances, including a maximum distance of a wireless communications protocol.
  • a first toy may detect when it is in proximity to a second toy based on various sensor data.
  • each toy device may have wireless radios configured to transmit and receive in accordance with an NFC protocol. NFC transmissions are generally inoperable over distances greater than 20 centimeters. Therefore, when a connection is established between the two toy devices, it may be determined that both toy devices are within 20 centimeters of each other.
  • a first toy device and a second toy device may each have wireless radios configured to transmit and receive in accordance with a Bluetooth protocol.
  • Certain classes of the Bluetooth™ protocol may be inoperable over distances greater than 1 meter. Therefore, when a connection is established between the two toy devices, it may be determined that both toy devices are within 1 meter of each other.
  • a first toy device and a second toy device may each be connected to the same wireless Wi-Fi router.
  • the first toy device may be in communication with the router and may retrieve a list of devices connected to the router.
  • a query of the list may return an identifier of the second device.
  • the wireless router may have a maximum range of 46 meters. Therefore, when identifiers of the first toy device and second toy device are both found on the list of devices connected to the router, it may be determined that both toy devices are within 92 meters of each other.
  • a first toy device and a second toy device may each have a global positioning system (GPS) sensor.
  • Each toy device may receive location data from its GPS sensor and transmit the location data over a Wi-Fi network.
  • the first toy device may receive the location data of the second toy device over the Wi-Fi network.
  • the first toy device may compare its location data to the location data received from the second toy device and calculate a distance between the two devices. This distance may be compared to a threshold value that is defined to be the proximity value, such as 10 meters. If the distance is less than the threshold value, then it may be determined that the first toy device is in proximity to the second toy device.
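The distance comparison described above might be implemented with a haversine (great-circle) calculation between the two GPS fixes; the coordinates below and the 10-meter threshold from the example are illustrative:

```python
import math

def distance_meters(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

PROXIMITY_THRESHOLD_METERS = 10.0

def in_proximity(first_fix, second_fix):
    """First toy compares its fix to the second toy's received fix."""
    return distance_meters(*first_fix, *second_fix) < PROXIMITY_THRESHOLD_METERS
```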
  • a toy behavior for a first toy may be triggered when a second toy is brought within its proximity.
  • a first toy device may detect that it is in proximity to a second toy device in accordance with techniques such as those discussed above.
  • the first toy device may present an option to a child such as "I have a friend nearby, should we play together?" The child may respond affirmatively, such as by saying "yes."
  • Speech recognition procedures implemented with the first toy device may recognize the child's response and, as a result, generate options for playing together. For example, the first toy device may state "Great, what would you like to do?"
  • the child may respond: "song.”
  • the child's response may be recognized and a list of song options may be presented. In a similar way a song may be chosen.
  • the first device may then provide an identifier of the song to the second toy device, or the second toy device may recognize the child's selections.
  • the first toy device and second toy device may then begin exhibiting the behavior of singing the selected song.
  • the first toy device may present song options such as those above, but may also present role or part options. For example, the child may be presented with the option of choosing which toy device sings the verse, which toy device sings the chorus, and which part of the song is sung in harmony. These options may be beneficial because different toy devices may have different pitches to their voice.
  • a toy behavior may be triggered by the history between two toys brought into proximity. For example, a first toy device may detect that it is in proximity to a second toy device in accordance with techniques such as those discussed above. An identifier of the second toy device may be provided to the first toy device, such as by transmitting the identifier over the local Wi-Fi network.
  • a history of toy interactions may be maintained by the first toy device. For example, a log of the behaviors exhibited by the first toy device and the identifiers of other toy devices that interacted with the first toy device may be stored on a remote server in communication with the first toy device. This history may indicate that the first toy device and the second toy device have played games together in the past. For example, the first toy device may have functioned as a game token against the second toy device in the game "Candy Country." A record of the state of the game of Candy Country, such as how many points each player had, which questions had been answered, and where on the board each toy device was located, may also be kept in the log. The game may not have been finished.
  • the first toy device may query its log with the identifier of the second toy device. This query may return an indicator of the unfinished game of Candy Country. As a result, the first toy device may exhibit the behavior of prompting the child about the unfinished game. The child may respond affirmatively.
  • the second child who is linked to the second toy device may similarly be queried either by the second toy device or the first toy device. If the second child also responds affirmatively, then the game state may be loaded from the log and the children may begin to play.
  • a geofence can be established based on the location of the toy.
  • the geofence encloses a geographical area, such as a zone around a home location for the toy.
  • the implementation can detect when the toy moves outside the geofence and trigger an alert to be sent to a third party or a behavior in the toy. For example, a text message or call can be made to a parent when the system determines that the toy has left the geofenced area.
  • the speaker in the toy may play a message in the toy's or a parent's voice to the effect of "Take me home," or "Go to your room.”
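A minimal sketch of the geofence check described above, assuming a circular fence and an equirectangular distance approximation (adequate at geofence scales); the home coordinates, radius, and callback names are hypothetical:

```python
import math

def approx_distance_meters(fix_a, fix_b):
    """Equirectangular approximation of distance between two GPS fixes."""
    lat_m = 111320.0 * (fix_b[0] - fix_a[0])
    lon_m = 111320.0 * math.cos(math.radians(fix_a[0])) * (fix_b[1] - fix_a[1])
    return math.hypot(lat_m, lon_m)

HOME = (40.7128, -74.0060)  # hypothetical home location of the toy
GEOFENCE_RADIUS_METERS = 100.0

def check_geofence(toy_fix, notify_parent, play_message):
    """When the toy leaves the fence, alert a third party and trigger
    a behavior in the toy (e.g. a recorded voice message)."""
    if approx_distance_meters(HOME, toy_fix) > GEOFENCE_RADIUS_METERS:
        notify_parent("Toy has left the geofenced area")
        play_message("Take me home")
```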
  • instructions for executing some or all operations shown in FIGS. 15-16 may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a, 100b.
  • the instructions may be executed by a processor such as processor 106 of toy device 100a, 100b.
  • some or all operations shown in FIGS. 15-16 may be stored as instructions in fixed storage 23 or memory 27 of a remote device such as a device 10 of FIG. 3 or server 13 of FIG. 4.
  • the instructions may be stored in storage of the remote device and be provided via a network interface to a processor of the toy device or may be executed by one or more processors of the remote device.
  • FIG. 15 shows an example method according to an implementation of this disclosure.
  • a first toy device may detect a proximity to a second toy device at 1510.
  • the first toy device may provide an indicator of the second toy device to a remote device, such as a remote server.
  • the remote server may select a behavior based on the second toy device.
  • the first toy device may receive an instruction to execute the selected behavior at 1530.
  • the second toy device may execute the selected behavior.
  • FIG. 16 shows an example method according to an implementation of this disclosure.
  • a remote server may receive an indicator of the second toy device.
  • the remote server may compare the received indicator to an interaction history of the first toy device at 1620.
  • the remote server may select a behavior based on the interaction history and the second toy device.
  • the remote server may provide an instruction to execute the selected behavior to the first toy device.
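The server-side steps of FIG. 16 might look like the following sketch, where the received indicator is compared to the first toy device's interaction history (the comparison at 1620) and a behavior is selected from the result; the log structure and behavior names are assumptions:

```python
# Illustrative server-side interaction history for the first toy device,
# keyed by the identifiers of other toy devices it has interacted with.
interaction_history = {
    "toy-002": {"last_activity": "Candy Country", "finished": False},
}

def select_behavior_for(first_toy_history, second_toy_id):
    """Compare the received indicator to the interaction history and
    select a behavior: resume an unfinished activity if one exists,
    otherwise prompt for a new game."""
    record = first_toy_history.get(second_toy_id)
    if record and not record["finished"]:
        return f"prompt_resume:{record['last_activity']}"
    return "prompt_new_game"
```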
  • voice recognition and language processing procedures may provide an interface between a child, such as Karen, and her toy device.
  • profile information of Karen's friend Mary on Karen's social network may include Mary's favorite color, green.
  • Karen may ask her toy device, "What is Mary's favorite color?", and via voice recognition and language processing techniques, the toy device 100a, 100b may recognize Karen's query.
  • the toy device 100a, 100b may access Mary's profile, determine her favorite color, and tell Karen the result: "Mary's favorite color is green!"
  • Karen may name her toy device 100a, 100b.
  • Karen may state "your name is Roxy.”
  • the toy device 100a, 100b may recognize that it is being named through voice recognition and language processing techniques, and when "Roxy" is uttered, the toy device 100a, 100b may recognize that it is being addressed.
  • the toy device 100a, 100b may ask Karen what her name is.
  • Karen may provide a name that she wants to be called, such as "Karen.”
  • the toy device 100a, 100b may verbally address Karen by name when communicating with her.
  • Message recording may also be initiated via verbal instruction.
  • Karen may state "Toymail mom” and her toy device may recognize the instruction and begin recording a message to send to Karen's mom.
  • Gifts such as songs, stories, sound effects, etc. may also be sent via verbal instruction.
  • Karen may state "Send a new song to Mary.”
  • Karen's toy device may recognize Karen's instruction and send a gift of whatever is stated after "send” to whoever is stated after "to.”
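One way to sketch the "send ... to ..." rule above is with a regular expression over the recognized utterance; the pattern and function name are illustrative, not taken from the disclosure, and a production system would use the output of its speech-recognition pipeline:

```python
import re

# Gift is whatever is stated after "send"; recipient is whatever
# is stated after "to" (a leading article is tolerated).
SEND_PATTERN = re.compile(r"^send (?:a )?(?P<gift>.+?) to (?P<recipient>.+)$",
                          re.IGNORECASE)

def parse_send_command(utterance: str):
    """Return (gift, recipient) if the utterance matches 'Send X to Y',
    otherwise None."""
    match = SEND_PATTERN.match(utterance.strip().rstrip("."))
    if match:
        return match.group("gift"), match.group("recipient")
    return None
```

For example, "Send a new song to Mary." parses to the gift "new song" and the recipient "Mary".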
  • Activities such as games, stories, and songs may also be accessed via voice recognition and language processing techniques. For example, Karen may ask her toy device 100a, 100b to read her a story, and her toy device 100a, 100b may begin speaking the words of a story Karen requests.
  • Karen's toy device 100a, 100b may increase its knowledge of Karen by asking Karen questions. For example, Karen's toy device 100a, 100b may ask her what her favorite subjects are. The toy device 100a, 100b may recognize Karen's answers and use such information to provide suggestions of future activities. Karen may audibly play games with her toy device. For example, Karen may sing a song and ask her toy device 100a, 100b to guess which song it is. Karen's toy device 100a, 100b may recognize the lyrics of the song and access information sources over networks such as the Internet in order to determine the song from which the lyrics are derived. In another example, the toy device 100a, 100b may help Karen create a story.
  • the toy device 100a, 100b may tell a first part of the story to get Karen started, and Karen may be asked to generate the second part of the story.
  • the toy device 100a, 100b may record the entire story and provide a transcript to Karen or an authorized user of an application in communication with the toy device 100a, 100b.
  • the toy device 100a, 100b may operate in various modes. For example, in "offline mode" the toy device may not be connected to a network such as a local wireless network. In this mode, the toy device's wireless radio may be deactivated. In an "online mode" the toy device may be connected to a network and may conserve battery life by only checking for messages from other devices periodically, such as every 10 minutes. In "walkie-talkie mode" the toy device 100a, 100b may communicate with other devices in near real time.
  • Karen may carry on a conversation with her mother's mobile device 10, 11 or another toy device 100a, 100b in a manner similar to a conventional telephone conversation via the toy device 100a, 100b.
  • Conversations with multiple devices such as in a conventional conference call may also be conducted in "walkie-talkie mode.”
  • in "group messaging mode" messages may be transmitted to multiple recipient devices at one time.
  • Such messages may be voice messages, images, symbols, or other messages capable of being captured and communicated via the toy device.
  • lights on the toy device may be illuminated or appendages of the toy device may be articulated when a message is sent or received.
  • Such actions may communicate the content of the message itself, such as an arm of a toy device waving goodbye when a message intended to convey "goodbye" is received by the toy device.
  • machine learning may be enabled on the toy device 100a, 100b and/or the device 10, 11.
  • the toy device 100a, 100b may use the sensors 102a, 102b to determine usage patterns of the toy device by the user, including the time of day used, the duration used, how many messages are recorded and/or transmitted, and the like.
  • the toy device 100a, 100b may encourage a user to record and send a message to one or more contacts if they have not communicated with the contacts within a predetermined period of time.
  • the toy device may encourage a learning activity when the user has not participated in a learning activity in a predetermined amount of time.
  • the usage patterns of the toy device 100a, 100b may be transmitted to the device 10, 11, which may be used by the parent and/or guardian to determine when the child is using the toy device, and may prompt them to record and send a message during the child's period of use.
  • a parent may send an audible message to one or more children having toy devices 100a, 100b.
  • the one or more children may respond to the received messages by recording and/or transmitting messages back to the parent's device 10, 11 within a predetermined period of time.
  • the two-way communications may be between two or more toy devices 100a, 100b.
  • toy device 100a, 100b may record audio or video of a user's activities and generate a summary of the user's activities over a period of time.
  • the child may play a game with friends in the presence of toy device 100a, 100b.
  • Toy device 100a, 100b may record the conversations the child has while playing the game.
  • Toy device 100a, 100b may transmit the recording to a remote computing device such as one or more servers.
  • the remote computing device may execute procedures to perform natural language processing or related language processing techniques on the audio recordings. These techniques may extract meaningful content from the audio recordings and automatically generate a summary of the content, for example in written or audio form.
  • This summary may then be sent to a device associated with the toy device 100a, 100b such as a parent's mobile device. In this way the parent may monitor or periodically receive updates of the child's activities.
  • one or more computing devices remote from toy device 100a, 100b may determine content to suggest and/or provide to toy device 100a, 100b based on natural language processing of audio recorded by toy device 100a, 100b.
  • toy device 100a, 100b may be a plush toy that a user such as a child carries with her to school. While at school toy device 100a, 100b may record audio such as the child's discussion about a new movie with another child, a teacher's announcement of the new lesson unit they will be covering this month, and an announcement of a guest speaker coming to class.
  • One or more computing devices remote from toy device 100a, 100b may perform natural language processing or similar language processing techniques on the recorded content and extract information such as the enthusiasm for the movie discussed, the new course unit, and the guest speaker.
  • the remote computing device may determine new content related to this extracted information that can be recommended to the child or the child's parents, such as a new song about characters in the movie, an audio book about a subject in the new unit, and biographical information about the guest speaker.
  • These recommendations may be sent to the child's parent's mobile device or may be provided directly to toy device 100a, 100b.
  • the new content may be provided directly to toy device 100a, 100b and presented to the child.
  • the parent may select the recommendation on the parent's device and execute a transaction.
  • In response to the transaction, the new content can be provided to toy device 100a, 100b or otherwise accessed on a device linked to an account associated with toy device 100a, 100b, such as a smart television or projection system, tablet, laptop, or other content presentation device.
  • toy device 100a, 100b may be a wearable device such as a watch, book bag, or have any other form factor discussed in this disclosure.
  • Audio content may be recorded and aggregated across toy devices belonging to users of multiple ages. The audio content may be analyzed by natural language processing techniques to, for example, determine interest trends among different ages of users. For example, a certain type of shoe or other clothing may be determined to be desirable by a particular age range of users, but not relevant to a younger age range of users.
  • Recommendations related to the desirable shoe type may be targeted at users within the older range and not at users in the younger range.
  • users approaching the older age range of users such as those users within a year of the older age range, may be targeted for recommendations of the desirable shoe type.
  • other age specific items may be determined such as games, songs, movies, or similar content that may be recommended or delivered to toy device 100a, 100b or otherwise accessed on a device linked to an account associated with toy device 100a, 100b, such as a smart television or projection system, tablet, laptop, or other content presentation device.
  • Aggregated information collected from one or more toy devices 100a, 100b may be anonymized or otherwise treated in a manner sufficient to maintain the privacy of the users.
  • implementations of the presently disclosed subject matter may include or be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. Implementations also may be embodied in the form of a computer program product having computer program code containing instructions embodied in non-transitory and/or tangible media, such as floppy diskettes, CD-ROMs, hard drives, USB drives, or any other machine-readable storage medium.
  • Implementations also may be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, such that when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing implementations of the disclosed subject matter.
  • when implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
  • a set of computer-readable instructions stored on a computer-readable storage medium may be implemented by a general-purpose processor, which may transform the general-purpose processor or a device containing the general-purpose processor into a special-purpose device configured to implement or carry out the instructions.
  • the implementations may use hardware that may include a processor, such as a general-purpose microprocessor and/or an Application Specific Integrated Circuit (ASIC) that embodies all or part of the techniques according to implementations of the disclosed subject matter in hardware and/or firmware.
  • the processor may be coupled to memory, such as RAM, ROM, flash memory, a hard disk or any other device capable of storing electronic information.
  • the memory may store instructions adapted to be executed by the processor to perform the techniques according to implementations of the disclosed subject matter.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Toys (AREA)

Abstract

Systems and methods are provided for setting a profile of a toy device that includes one or more approved contacts and user interest information. Audio and/or video messages may be recorded and/or transmitted between one or more toy devices and/or other devices coupled to a network that are associated with the approved contacts. A message may be changed by manipulating the waveform of the audio portion of the message based on sensor data collected so as to change at least one characteristic of the outputted audio portion.

Description

INTERACTIVE TOY DEVICE, AND SYSTEMS AND METHODS OF COMMUNICATION BETWEEN THE SAME AND NETWORK DEVICES
BACKGROUND
[1] Talking toys are meant to affect and improve the child's interaction with the toy.
However, most talking toys store only predetermined statements that can be triggered by the child's behavior, such as pulling a string or pushing a certain button or sequence of buttons.
Further, a talking toy is typically designed to operate in the company of a single child. The talking toy does not change its behavior when the child is playing with other children with their own toys. In some cases, a parent wishes to communicate with a child remotely, or vice versa. But normal communications devices are not suitable for small children. For example, a smartphone would bring privacy concerns and many small children cannot operate them properly.
BRIEF SUMMARY
[2] Implementations of the disclosed subject matter provide systems, methods, and interactive toy devices so that parents, guardians, relatives, and friends may engage a child in audio and/or video communication, games, activities, education, and the like. Implementations of the disclosed subject matter may provide systems, methods, and interactive toy devices so that children may communicate with one another via the toy devices.
[3] According to an implementation of the disclosed subject matter, a first computing device may receive from a first toy device that is remote from the first computing device and that is associated with a first user, a first sensor data detected by a first sensor in communication with the first toy device. The first computing device may compare at least the first sensor data to a plurality of conditions. The first computing device may determine that a first condition of the plurality of conditions is satisfied based on the comparison of at least the first sensor data to the plurality of conditions. In response to the determination that the first condition is satisfied, the first computing device may select a first contact from among a plurality of contacts based on an association of the first contact with the first condition. The plurality of contacts may each be approved by a second user for the first user. The first computing device may provide to a second computing device that is remote from the first computing device and that is associated with the first contact, an indicator of a first status of the first toy device, the first status associated with the satisfied first condition. The first computing device may receive from the second computing device, an indicator of a first toy behavior, and provide to the first toy device, an instruction to perform the first toy behavior.
[4] According to an implementation of the disclosed subject matter, a non-transitory, computer readable medium may store instructions that, when executed by a processor, cause the processor to perform operations including receiving, from a first computing device by a second computing device that is remote from the first computing device, an indicator of a status of a toy device that is associated with sensor data collected by a sensor in communication with the toy device, the toy device associated with a first user. The operations may further include presenting, on a display of the second computing device, a representation of the status of the toy device; and receiving, by the second computing device, a selection of a toy behavior for the toy device by a first contact of a plurality of contacts, the plurality of contacts each approved by a second user for the first user. The operations may further include, in response to the selection of the toy behavior by the first contact, providing, by the second computing device to the first computing device, an instruction for the toy device to perform the selected toy behavior.
[5] According to an implementation of the disclosed subject matter, a toy device may include a sensor, a network interface, a processor, and a non-transitory, computer readable medium in communication with the sensor, the network interface, and the processor. The non-transitory, computer readable medium may store instructions that, when executed by the processor, cause the processor to perform operations including collecting, by the sensor, a sensor data that satisfies a condition associated with a status of the toy device, the toy device associated with a first user, and providing, by the network interface, the sensor data to a first computing device that is remote from the toy device, the first computing device executing an application approved by a second user for communication with the toy device. The operations may further include receiving, by the network interface from the first computing device, an instruction to perform a toy behavior selected on a second computing device by a first contact of a plurality of contacts, the plurality of contacts each approved by the second user for the first user, and executing, by the processor, instructions to perform the toy behavior.
[6] According to an implementation of the disclosed subject matter, a method is provided for setting a profile of a toy device, at a mobile computing device, that includes one or more approved contacts and user interest information. The method includes recording, at the toy device, a message that includes at least an audio portion, and receiving, at the toy device, a selection of at least one of the approved contacts. The method includes transmitting the recorded message to at least one device associated with the selected at least one of the approved contacts.
[7] According to an implementation of the disclosed subject matter, a method is provided for receiving, at a toy device, a message that includes at least an audio portion. The method includes receiving, at a user interface of the toy device, a selection of an option to output at least the audio portion of the message, and when an audio output device of the toy device is outputting the audio portion of the message based on the selected option, changing the waveform of an audio signal of the audio portion based on sensor data collected at the toy device so as to change at least one characteristic of the outputted audio portion.
[8] According to an implementation of the disclosed subject matter, a method is provided for detecting, at a sensor of a toy device, sensor data. A status of the toy device may be determined based on the detected sensor data. An indicator of the determined status may be provided. In response to the provided indicator, the toy device may receive an instruction for a toy behavior. The toy device may execute the toy behavior based on the received instruction.
[9] According to an implementation of the disclosed subject matter, a method is provided for receiving an indicator of a status of a toy device. A toy behavior may be selected based on the status. An instruction may be provided to the toy device to execute the selected toy behavior.
[10] According to an implementation of the disclosed subject matter, a method is provided for receiving an indicator of a status of a toy device. A usage level of a user may be updated based on the status. The updated usage level may be compared to a usage threshold. A toy reward may be determined based on the comparison, and an instruction may be provided to the toy device to execute the toy reward.
[11] According to an implementation of the disclosed subject matter, a method is provided for receiving a source data for a toy device. The source data may be compared to a context condition. The context condition may be determined to be satisfied based on the comparison. A toy behavior may be selected based on a context associated with the context condition, and an instruction may be provided to the toy device to execute the selected behavior.
[12] According to an implementation of the disclosed subject matter, a method is provided for detecting a proximity of a first toy device to a second toy device. An identifier of the second toy device may be received and provided to a device. The first toy device may receive an instruction to execute a selected behavior, and the first toy device may execute the selected behavior based on the received instruction.
[13] According to an implementation of the disclosed subject matter, a method is provided for receiving an identifier of a second toy device and comparing the received identifier to an interaction history of a first toy device. A toy behavior may be selected based on the comparison, and the first toy device may be provided an instruction to execute the behavior.
[14] Additional features, advantages, and implementations of the disclosed subject matter may be set forth or apparent from consideration of the following detailed description, drawings, and claims. Moreover, it is to be understood that both the foregoing summary and the following detailed description are illustrative and are intended to provide further explanation without limiting the scope of the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[15] The accompanying drawings, which are included to provide a further understanding of the disclosed subject matter, are incorporated in and constitute a part of this specification. The drawings also illustrate implementations of the disclosed subject matter and together with the detailed description serve to explain the principles of implementations of the disclosed subject matter. No attempt is made to show structural details in more detail than may be necessary for a fundamental understanding of the disclosed subject matter and various ways in which it may be practiced.
[16] FIG. 1 shows a circuit configuration of a toy device according to an implementation of the disclosed subject matter.
[17] FIGS. 2A-2C show front and back views of a toy device according to an implementation of the disclosed subject matter.
[18] FIG. 3 shows a computing device according to an implementation of the disclosed subject matter.
[19] FIG. 4 shows a network configuration according to an implementation of the disclosed subject matter.
[20] FIGS. 5-8 show example displays of a mobile computing device according to an implementation of the disclosed subject matter.
[21] FIG. 9 shows an example method of communicating recorded messages between toy devices and/or other approved network devices according to an implementation of the disclosed subject matter.
[22] FIG. 10 shows an example method of changing the output characteristics of a message according to an implementation of the disclosed subject matter.
[23] FIG. 11 shows an example method for determining toy usage according to an implementation of the disclosed subject matter.
[24] FIG. 12 shows an example method for determining a toy usage according to an implementation of the disclosed subject matter.
[25] FIG. 13 shows an example method for providing a reward based on usage according to an implementation of the disclosed subject matter.
[26] FIG. 14 shows an example method for determining a toy behavior based on context according to an implementation of the disclosed subject matter.
[27] FIG. 15 shows an example method of determining a toy behavior based on toy proximity detection according to an implementation of the disclosed subject matter.
[28] FIG. 16 shows an example method of determining a toy behavior based on toy proximity detection according to an implementation of the disclosed subject matter.
DETAILED DESCRIPTION
[29] According to an implementation of the disclosed subject matter, a toy device may record a message (e.g., an audible message, a video message, a message including audio and a still image, or the like) that may be transmitted to a computing device (e.g., a mobile computing device), and/or may receive a message from the computing device. The toy device may be configured so that it may communicate messages between devices associated with a parent, guardian, relative, and/or friend of the user of the toy device. In some implementations, two or more toy devices may be configured so as to communicate audio and/or video messages, and/or sound effects (e.g., audible emojis, as discussed in detail below).
[30] According to an implementation of the disclosed subject matter, toy devices may be communicatively coupled to one another so that one toy device user may communicate with another toy device user. For example, a child with a toy device may communicate with one or more other users having toy devices. In some implementations, a user of a toy device may communicate with a user having, for example, a mobile computing device. Parental and/or network controls may be provided, at least in part, by the toy device and/or the mobile computing device so as to control and/or manage when the toy device may receive messages from and/or transmit messages to others (e.g., parents, guardians, relatives, friends, or the like), and what kind and content of messages may be exchanged. The parental and/or network controls may control and/or manage toy-to-device communications and/or toy-to-toy communications. The implementation may provide for a user of a toy device to set and/or modify a network profile that is accessible via the toy device. The profile may include one or more devices and/or contacts (e.g., parent, guardian, relative, friends, or the like) that may be approved (e.g., by a parent and/or guardian) for the toy device to communicate with (e.g., transmit messages to and/or receive messages from). The profile may include information about the user of the toy device, such as the user's name, the toy's name, and the user's interests, such as a favorite food, game, song, or color.
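The profile described above can be sketched as a simple data structure. This is an illustrative sketch only, not part of the disclosed implementation; all class and field names are assumptions.

```python
# Hypothetical sketch of a toy-device network profile with parent-approved
# contacts. The toy may only exchange messages with approved contacts.
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    device_id: str
    approved: bool = False  # set True only by a parent and/or guardian

@dataclass
class ToyProfile:
    user_name: str
    toy_name: str
    interests: dict = field(default_factory=dict)  # e.g. favorite food, song
    contacts: list = field(default_factory=list)

    def approved_contacts(self):
        # Filter to the contacts a parent/guardian has approved.
        return [c for c in self.contacts if c.approved]

profile = ToyProfile(user_name="Sam", toy_name="Buzzer",
                     interests={"color": "blue"})
profile.contacts.append(Contact("Grandma", "dev-42", approved=True))
profile.contacts.append(Contact("Stranger", "dev-99"))  # not approved
```

The approval flag gates every send and receive path, so an unapproved contact is simply invisible to the toy's messaging functions.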
[31] According to an implementation of the disclosed subject matter, the toy device may manipulate the waveform of an audio signal of a received message so as to alter the character of sounds emitted from the toy device. The pitch, frequency, amplitude, envelope, distortion, or the like of the waveform of the audio signal may be manipulated by the toy device. For example, the waveform of the audio signal may be modified based on input from one or more sensors of the toy device, such as a motion sensor, a pressure sensor, an orientation detector, or the like. The waveform may be altered based on a magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor. For example, the sensor may detect that the toy device is turned to the right and/or clockwise, and the pitch of the sound output by the toy device may be manipulated so as to increase. When the sensor detects that the toy device is turned to the left and/or counterclockwise, the pitch of the sound may be manipulated so as to decrease.
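The rotation-to-pitch behavior described above can be illustrated with a small function. This is a hedged sketch, not the disclosed implementation; the sensitivity constant and clamp range are assumptions.

```python
# Map a rotation reading to a pitch multiplier: clockwise (positive) rotation
# raises the pitch, counterclockwise (negative) rotation lowers it.
def pitch_factor(angular_velocity_dps, sensitivity=0.005):
    """Return a multiplier for the audio pitch based on rotation.

    angular_velocity_dps: degrees per second; positive = clockwise.
    """
    factor = 1.0 + sensitivity * angular_velocity_dps
    # Clamp so the manipulated audio stays intelligible.
    return max(0.5, min(2.0, factor))
```

An audio processor would apply the returned factor to the waveform (e.g., by resampling), so a fast clockwise turn produces a noticeably higher-pitched voice.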
[32] In some implementations, the toy device may manipulate the waveform of an audio signal received and/or recorded from an audio input device (e.g., microphone) so as to alter the character of sounds emitted from the toy device when a selection is made to output and/or playback the received and/or recorded audio. For example, a child may record their voice using the audio input device, manipulate the waveform of the audio so as to alter the characteristics of the signal, and output and/or playback the manipulated audio. In this example, the child may change the pitch, frequency, distortion, or the like of the recorded audio input so as to make the output of their recorded audio sound funny, scary, or the like. In some implementations, the recording, manipulation of audio, and output of the manipulated audio may be for non- messaging purposes (e.g., the child may amuse themselves by making funny voice recordings and playing them back). The output audio may be further manipulated when one or more sensors detect a change in motion, orientation, or the like of the toy device, as discussed in detail below.
[33] According to an implementation of the disclosed subject matter, a mobile computing device may monitor the usage of one or more toy devices based on detecting the motion of the toy device and/or a child's engagement with the toy device. One or more usages and/or patterns of use of the toy device may trigger one or more actions and/or events by the toy device. For example, the processor and/or application executed by the toy device may operate so as to invite a child user to play with it if no usage has been detected for a predetermined time period. The toy device may transmit notifications to a mobile computing device (e.g., operated by a parent and/or guardian) when, for example, movement sensor measurements reach a threshold level, the voice filter is used, and/or the toy device responds to the child's behavior (e.g., the toy device outputs a predetermined response based on the child user's input). The parent and/or guardian may be invited to communicate with the toy device (e.g., via the mobile computing device) when notified that the child is playing with the toy device. One or more messages may be transmitted to other toy devices associated with the child's network to participate when the child plays with the toy device. The parent, guardian, and/or authorized user may communicate directly with the child via the toy device using a mobile computing device and/or set automatic responses (e.g., the toy device may play recorded messages of encouragement or instruction stored on the toy device). The automatic responses may be configured using toy device specific communications options in an application being executed at least in part by the mobile computing device, such as 'audible emojis' that, when received by the toy device (e.g., by themselves and/or in a transmitted message), may configure the toy device to output one or more sounds.
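The monitoring logic above (invite after inactivity, notify a parent on a motion threshold) can be sketched as follows. The thresholds, event strings, and class name are all assumptions for illustration.

```python
# Minimal sketch of the usage-monitoring behavior: track the last activity
# time, invite the child after prolonged inactivity, and queue a parent
# notification when motion exceeds a threshold.
import time

class UsageMonitor:
    def __init__(self, idle_limit_s=3600, motion_threshold=50.0):
        self.idle_limit_s = idle_limit_s
        self.motion_threshold = motion_threshold
        self.last_activity = time.time()
        self.events = []  # stand-in for outbound notifications

    def record_motion(self, magnitude):
        self.last_activity = time.time()
        if magnitude >= self.motion_threshold:
            self.events.append("notify_parent: child is playing")

    def check_idle(self, now=None):
        now = time.time() if now is None else now
        if now - self.last_activity >= self.idle_limit_s:
            self.events.append("invite_child_to_play")

monitor = UsageMonitor()
monitor.record_motion(75.0)                           # above threshold
monitor.check_idle(now=monitor.last_activity + 7200)  # long idle
```

In a real device the `events` list would instead drive the network interface (parent notification) and the audio output device (invitation prompt).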
[34] According to an implementation of the disclosed subject matter, usage-based rewards may be provided. Usage of the toy device may be tracked and/or monitored, and rewards may be provided based on the type of use, the frequency of use, and the amount of time of use over a predetermined period of time, or the like. For example, total motion can be tracked with the accelerometer or the number of messages sent can be tracked by the application. In response to reaching one or more usage thresholds, the child may receive one or more messages from a parent and/or guardian, one or more new songs may be received, one or more new features may be downloaded to the toy device, new toy devices may be purchased, or the like. An application that is executed at least in part by a mobile computing device may track an account (e.g., a child account associated with one or more toy devices) with tokens corresponding to the amount and/or types of usage. In some implementations, the toy device may interact with a child, such as with educational games, brushing teeth, or the like. For example, the toy device may conduct a spelling contest, and an application executed at least in part by a mobile computing device may track and/or monitor a child's performance via the toy device in the game. The better the child performs, the more features may be available for use in the toy device.
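The token-and-threshold reward scheme above can be sketched briefly. The token rates, thresholds, and reward names are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of usage-based rewards: usage accrues tokens at per-activity
# rates, and crossing a threshold unlocks a reward (e.g., a new song).
REWARD_THRESHOLDS = [(50, "new song"), (100, "new feature")]

class RewardTracker:
    def __init__(self):
        self.tokens = 0
        self.unlocked = []

    def add_usage(self, activity, amount):
        # Different activity types earn tokens at different (assumed) rates.
        rates = {"motion": 1, "message": 5, "spelling_game_win": 10}
        self.tokens += rates.get(activity, 0) * amount
        for threshold, reward in REWARD_THRESHOLDS:
            if self.tokens >= threshold and reward not in self.unlocked:
                self.unlocked.append(reward)

tracker = RewardTracker()
tracker.add_usage("message", 10)  # 10 messages at 5 tokens each
```

An application on the mobile computing device would maintain this account per child and push unlocked rewards (songs, features) down to the toy device.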
[35] According to an implementation of the disclosed subject matter, one or more features of the toy device may change based on context, such as time and date. For example, the sound of the toy device's voice may sound different in winter months, or may sound happy on a birthday of the child user. The toy device may be configured so as to output audio and/or video messages at predetermined times and/or on selected dates. For example, the toy device may output a Valentine's Day message on February 14th, or remind the child user to put on sun tan lotion in the summer.
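A date-keyed lookup is one simple way to realize the scheduled-message behavior above. The schedule contents are illustrative assumptions.

```python
# Illustrative date-based context check: select a scheduled message for the
# current date, per the Valentine's Day example above.
import datetime

SCHEDULED_MESSAGES = {
    (2, 14): "Happy Valentine's Day!",
    (1, 1): "Happy New Year!",
}

def message_for_date(today):
    """Return a scheduled message for (month, day), or None."""
    return SCHEDULED_MESSAGES.get((today.month, today.day))

msg = message_for_date(datetime.date(2017, 2, 14))
```

The same table could be extended with seasonal ranges (e.g., summer months) for reminders like applying sun tan lotion.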
[36] According to an implementation of the disclosed subject matter, a toy device may detect (e.g., using one or more sensors) when it is in proximity to other networked toy devices through Near Field Communication protocols, Bluetooth, GPS, or the like. Toy devices that may have interacted (e.g., exchanged messages and/or information) and/or have played together before may have previous interactions and/or information stored from the interaction. For example, two proximate toy devices may have previously been used to play a game. A determined proximity between the toy devices may configure at least one of the toy devices to output a suggestion to play the game again, a reminder of who won last time, or the like.
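The proximity behavior above can be sketched as a lookup against the stored interaction history. The identifier format and history schema are assumptions for illustration.

```python
# Sketch: when a nearby toy's identifier matches the interaction history,
# suggest replaying the shared game and recall who won last time.
interaction_history = {
    "toy-7f3a": {"game": "hide and seek", "last_winner": "Sam"},
}

def on_proximity_detected(peer_id, history):
    record = history.get(peer_id)
    if record is None:
        # Unknown toy: a generic greeting instead of a game suggestion.
        return "Hi! Want to play together?"
    return ("Want to play {game} again? {winner} won last time!"
            .format(game=record["game"], winner=record["last_winner"]))

suggestion = on_proximity_detected("toy-7f3a", interaction_history)
```

The returned string would be spoken through the audio output device when the proximity sensor reports a known peer identifier.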
[37] In one or more of the implementations of the disclosed subject matter, a toy device may "learn" a child's (and parent's) preferences in terms of the content of audio and/or video messages, types of play, times of play, the timing and nature of the child's desire to communicate with a parent, guardian, relative, and/or friend via a toy device, and the like.
[38] FIG. 1 is an example toy device 100a, 100b suitable for implementations of the presently disclosed subject matter. The toy device 100a, 100b may include a bus 120 which interconnects major components of the device 100a, 100b, such as sensors 102a, 102b (e.g., a motion sensor, a smoke sensor, a carbon monoxide sensor, a proximity sensor, a temperature sensor, a time sensor, a physical orientation sensor, an acceleration sensor, a location sensor, a pressure sensor, a light sensor, a passive and/or active infrared sensor, a water sensor, a microphone and/or sound sensor, or the like), a network interface 104 operable to communicate with one or more remote devices via a suitable network connection, a processor 106, a memory 108 such as Random Access Memory (RAM), Read Only Memory (ROM), flash RAM, or the like, an image sensor 109 (e.g., a camera to capture video and/or still images), an audio input device 110 such as a microphone, an audio output device 112, which may include one or more speakers, a user input interface 114 such as one or more buttons, a fixed storage 116 such as a hard drive, flash storage, and the like, a removable media component 118 operative to control and receive an optical disk, flash drive, or the like, and a display 120 (e.g., a display screen, a touch screen, etc.).
[39] The bus 120 allows data communication between the processor 106 and one or more memory components, which may include RAM, ROM, and other memory, as previously noted. Typically RAM is the main memory into which an operating system and application programs are loaded. A ROM or flash memory component can contain, among other code, the Basic Input-Output system (BIOS) that controls basic hardware operation such as the interaction with peripheral components. Applications resident with the device 100a, 100b are generally stored on and accessed via a computer readable medium, such as a solid state drive (SSD) (e.g., fixed storage 116), hard disk drive, an optical drive, floppy disk, or other storage medium.
[40] The sensor 102a, 102b may be an accelerometer and/or other suitable sensor to detect motion of the toy device 100a, 100b. For example, the motion sensor may detect the movement of the toy device 100a, 100b by a child, such as shaking, rotating, horizontal and/or vertical movement, changing the orientation of the toy device 100a, 100b, or the like. The sensor 102a, 102b may be a physical orientation sensor so as to determine whether the toy device 100a, 100b is oriented, for example, upside-down, right-side-up, on its side, or the like. The sensor 102a, 102b may be a location sensor so that the toy device 100a, 100b may determine its position in its environment (e.g., a room, whether it is next to another toy 100a, 100b, or the like). The sensor 102a, 102b may be a smoke sensor, a carbon monoxide sensor, and/or water sensor to detect smoke, carbon monoxide, and/or water. When smoke, carbon monoxide, and/or water are detected, the toy device 100a, 100b may output a warning via the audio output device 112, the display 120, and/or to a device 10, 11, server 13, and/or remote platform 17 via the network interface 104. For example, an alert in the form of an audible alarm, text message, voice message, or telephone call can be made to one or more parents or guardians or other third parties, such as the police, fire, and emergency medical service providers.
[41] In some implementations, the sensor 102a, 102b may be a proximity sensor and/or a location sensor which may determine where the toy device 100a, 100b is located (e.g., within a home) and/or whether the toy device 100a, 100b is proximately located next to a user (e.g., a child, parent, relative, friend, or the like).
For example, the user may have a wearable device (e.g., band, watch, or the like) which may output a radio frequency and/or other signal (e.g., which include identifier information), which may be detected by the sensor 102a, 102b of the toy device (e.g., where the identifier information may be used to determine whether the user is registered with the toy device 100a, 100b).
[42] The sensor 102a, 102b may be a pressure sensor that may detect pressure applied to the toy device 100a, 100b by a user. When pressure is detected, the output of a received audible message via the network interface 104 may be changed (e.g., a waveform of the audio signal may be altered to change the pitch, frequency, amplitude, envelope, or the like) during the output of the audible message by the audio output device 112. In some implementations, the waveform of the audio signal may be changed when the value detected by the sensor 102a, 102b is greater than a predetermined threshold value.
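The threshold-gated waveform change above can be sketched with a simple amplitude effect. The threshold value, gain curve, and function name are assumptions; a real audio processor would operate on the full waveform, not a short sample list.

```python
# Minimal sketch of the pressure-gated waveform change: the audio samples are
# only altered when the sensed pressure exceeds a predetermined threshold.
def apply_pressure_effect(samples, pressure, threshold=10.0):
    """Scale sample amplitude when pressure exceeds the threshold."""
    if pressure <= threshold:
        return list(samples)          # unchanged below the threshold
    gain = min(2.0, pressure / threshold)  # clamp to avoid clipping
    return [s * gain for s in samples]

quiet = apply_pressure_effect([0.1, -0.2], pressure=5.0)
loud = apply_pressure_effect([0.1, -0.2], pressure=20.0)
```

The same gating pattern (compare sensor value to threshold, then transform) applies equally to pitch, frequency, or envelope changes.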
[43] In some implementations, one or more sounds may be digitally stored in the memory 108, the fixed storage 116, and/or the removable media 118 of the toy device 100a, 100b. These sounds may be different from the audible messages recorded by the toy device 100a, 100b. For example, when the sensor 102a, 102b detects that the toy device 100a, 100b is tossed and/or thrown in the air, the toy device 100a, 100b may output a "WHEEEE!" sound in response to the detected movement. In another example, when the sensor 102a, 102b detects that the toy device 100a, 100b has impacted the floor and/or ground when dropped, the toy device 100a, 100b may output an "Ouch!" sound in response to the detected impact. In yet another example, if the sensor 102a, 102b detects that the toy device 100a, 100b is rotated onto its belly, the toy device may output a snoring sound in response to the detected movement. When this position is detected, the processor 106 may put the toy device 100a, 100b into a sleep mode with reduced functionality (e.g., the toy device 100a, 100b may not transmit or receive messages to preserve battery life, or the like). In some implementations, when the sensor 102a, 102b detects that the toy device 100a, 100b has been oriented so as to be on its belly, the processor 106 may execute a "bedtime routine" application which may prompt a child to brush their teeth, or the toy device 100a, 100b may output a bedtime story or a bedtime song. In another example, if the sensor 102a, 102b detects that the child is attempting to make the toy device 100a, 100b dance and/or jump, the toy device 100a, 100b may output a song (e.g., selected from a list of songs stored by the toy device 100a, 100b) in response to the detected motion by the sensor 102a, 102b.
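The event-to-sound reactions above reduce to a small dispatch table. The event names, sound strings, and sleep-mode side effect are illustrative assumptions.

```python
# Illustrative mapping of detected sensor events to stored sounds
# (tossed -> "WHEEEE!", impact -> "Ouch!", on belly -> snoring), with the
# belly orientation also triggering a battery-saving sleep mode.
def react_to_sensor_event(event, state):
    sounds = {
        "tossed": "WHEEEE!",
        "impact": "Ouch!",
        "on_belly": "Zzzzz...",
    }
    if event == "on_belly":
        state["sleep_mode"] = True  # reduce functionality, save battery
    return sounds.get(event)

state = {"sleep_mode": False}
sound = react_to_sensor_event("on_belly", state)
```

A firmware version would classify raw accelerometer data into these events first; the table lookup is only the final step.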
[44] When one or more of the sensors 102a, 102b detects a value that exceeds a threshold value, one or more stored sounds may be output by the audio output device 112. In some implementations, when the sound is output by the audio output device 112, the processor 106 may manipulate the waveform of the audio signal of the sound being output to change the pitch, frequency, amplitude, envelope, or the like. The manipulation of the waveform may be based, at least in part, on data captured by the sensor 102a, 102b. In these implementations, the interaction between a user and the toy device 100a, 100b may enable the toy device 100a, 100b to output stored sounds and/or manipulate the output of the stored sounds based on the data captured by the sensors 102a, 102b. In some implementations, the interaction between the user and the toy device 100a, 100b may further manipulate the output of a user's voice and/or other sounds that have been recorded and have been processed by changing the pitch, frequency, or the like.
[45] Although FIG. 1 shows sensor 102a and 102b, any suitable number of sensors may be included on toy device 100a, 100b. Alternatively, or in addition, one or more sensors 102a, 102b may be arranged in a single sensor package device.
[46] The network interface 104 may provide a wired and/or wireless communicative coupling between the toy devices 100a and 100b, between the toy device 100a, 100b and a computer and/or mobile computing device (e.g., device 10, 11 shown in FIG. 4), and/or between the toy device 100a, 100b and one or more devices coupled to the network 7 shown in FIG. 4, such as a server 13, a database 15, and/or a remote platform 17. The network interface 104 may provide such connection using any suitable technique and protocol as will be readily understood by one of skill in the art, including digital cellular telephone, Wi-Fi, Bluetooth(R), near-field, and the like. For example, the network interface 104 may allow the toy device 100a, 100b to communicate with other toy devices, mobile devices, computers, servers, and the like via one or more home, local, wide-area, or other communication networks, as described in further detail below.
[47] The processor 106 may be any suitable processor, controller, field programmable gate array, programmable logic device, integrated circuit, or the like. The processor 106 may include an audio signal processor that may manipulate and/or change audio waveform signals (e.g., based in part on data received from the sensor 102a, 102b). Alternatively, the audio signal processor may be a separate device from the processor 106, and may be coupled to the devices of the toy device 100a, 100b via the bus 120.
[48] The image sensor 109 may be a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) image sensor, and/or any other suitable camera and/or image capture device to capture still images and/or video.
[49] The audio input device 110 may be a microphone and/or other suitable voice sensor that may detect a voice and/or other audible sound. The audio input device 110 may convert the detected audible sound into an electrical signal (e.g., an analog signal), and may convert the electrical signal into a digital signal that represents detected audible sound.
[50] In some implementations, the audio input device 110 may receive one or more voice commands and/or information to be stored in the memory 108, the fixed storage 116, and/or the removable media 118. The audio input device 110 may be used to record an audible message, access a network profile of a user and/or a toy device 100a, 100b, provide network profile information, or the like. The processor 106 may perform one or more operations based on the voice command received by the audio input device 110.
[51] The audio output device 112 may be one or more speakers to output audio signals that may be stored in the fixed storage 116, the removable media 118, and/or may be received from the network interface 104 and/or the processor 106.
[52] The user interface 114 may be one or more buttons, touch input devices, or the like. The user interface 114 may be used to play audio messages received by the toy device 100a, 100b via the network interface 104. The user interface may be used to record and transmit audio messages to other toy devices 100a, 100b, and/or to one or more mobile computing devices, computers, servers, remote services, and the like.
[53] The fixed storage 116 may be integral with the toy device 100a, 100b or may be separate and accessed through other interfaces.
[54] The display 120 may be a liquid crystal display (LCD), a light emitting diode (LED) and/or organic light emitting diode (OLED) display, and/or any suitable display device. The display 120 may be a touch screen to receive input from a user based on displayed options (e.g., in a menu or user interface).
[55] Many other devices or components (not shown) may be connected in a similar manner (e.g., pressure sensors or other suitable sensors, vibration components, other output devices such as lights, displays, and the like, etc.). Conversely, all of the components shown in FIG. 1 need not be present to practice the present disclosure. The components can be interconnected in different ways from that shown. The operation of a computer such as that shown in FIG. 3 is readily known in the art and is not discussed in detail in this application. Code to implement the present disclosure can be stored in computer-readable storage media such as one or more of the memory 108, fixed storage 116, removable media 118, or on a remote storage location.
[56] FIGS. 2A-2C show front and back views of an outer casing of the toy device 100a, 100b according to an implementation of the disclosed subject matter. FIG. 2A shows an example outer casing of a front side of the toy device 100a, 100b. FIG. 2B shows an example outer casing of a back side of the toy device 100a, 100b. FIG. 2B shows a pass-through portion that may include one or more holes from an exterior of the casing to the interior of the casing. The pass-through portion may be such that audible sound may pass from an exterior portion of the casing so as to be received by the audio input device 110. Alternatively, or in addition, the pass-through portion may output sound from the audio output device 112 from the interior of the casing to the exterior of the casing of the toy device 100a, 100b, such that it may be heard by a user (e.g., a child).
[57] FIG. 2B shows an example user input interface 114, which may include a plurality of buttons. For example, the user input interface 114 may include a button 114a which may be selected by a user (e.g., a child) so as to record an audio message and/or transmit it to another device (e.g., another toy device 100a, 100b, and/or a device 10, 11, a server 13, and/or a remote platform 17 shown in FIG. 4). For example, the button 114a may be selected by a child user to send an audio message from the toy device 100a, 100b to a device 10, 11 of the child's parent. When the button 114a is selected by the child, the toy device 100a, 100b may prompt the child (e.g., via an audible message that is output via the audio output device 112) to state the audible message so that it may be recorded by the toy device 100a, 100b. When the audible message is recorded, it may be transmitted via the network interface 104 of the toy device 100a, 100b to a device communicatively coupled to the network 7 shown in FIG. 4 (e.g., the device 10, 11, the server 13, and/or the remote platform 17).
For example, the child may select the button 114a after the message is recorded to transmit the recorded audible message.
[58] In some implementations, when the toy device 100a, 100b is not communicatively coupled to a communications network (e.g., network 7 shown in FIG. 4), one or more messages may be recorded by selecting button 114a, and may be stored by the memory 108, the fixed storage 116, and/or the removable media 118. When the toy device 100a, 100b connects to a communications network, the one or more messages may be automatically transmitted, and/or the toy device 100a, 100b may prompt the user to confirm whether to transmit the messages once the network communication has been enabled.
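The offline store-and-forward behavior above can be sketched as a small queue. The class shape and the automatic-flush policy are assumptions; the disclosure also allows prompting the user before sending.

```python
# Sketch of store-and-forward messaging: messages recorded while offline are
# queued locally and flushed when the network connection returns.
class MessageQueue:
    def __init__(self):
        self.pending = []   # recorded while offline
        self.sent = []      # stand-in for the network transport
        self.online = False

    def record(self, message):
        if self.online:
            self.sent.append(message)
        else:
            self.pending.append(message)  # hold until connected

    def on_network_connected(self):
        self.online = True
        while self.pending:
            # Flush in recording order.
            self.sent.append(self.pending.pop(0))

queue = MessageQueue()
queue.record("hi grandma")    # offline: queued in local storage
queue.on_network_connected()  # flushes the queue
```

On the device, `pending` would live in the memory 108, fixed storage 116, and/or removable media 118 so queued messages survive a power cycle.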
[59] In some implementations, the audio input device 110 may receive one or more voice commands from a user, which may be interpreted and/or processed by the processor 106. The received one or more voice commands may be used to record a message, send a recorded message, play a received message, and/or access a network profile for a user and/or the toy device 100a, 100b (e.g., the user name, toy's name, and/or interests, such as favorite food, game, song, color, or the like, and/or communications network information and/or settings). The one or more voice commands may be used to select an authorized contact (e.g., a parent, a guardian, a relative, a friend, or the like), device 10, 11, and/or toy device 100a, 100b to send a recorded message to. The one or more voice commands may be used to transmit audio and/or video messages and/or sound effects (e.g., audible emojis), and/or play back received messages that include audio, video, and/or sound effects.
[60] In some implementations, the user input interface 114 may include a button to change the waveform and/or audio characteristics of the recorded message and/or a recorded audio (e.g., a child's voice, which may be for entertainment purposes). Alternatively, or in addition, a voice command may be received via the audio input device 110 which may enable the processor 106 to manipulate the waveform of the message and/or recorded audio. For example, when a button is selected and/or a voice command is received, the processor 106 and/or an audio processor may manipulate the audio signal and/or waveform of the recorded audible message and/or recorded audio (e.g., the child's voice that may be played back). In some implementations, the waveform may be altered based on the magnitude and direction of an acceleration, a movement, and/or an orientation movement. The changed and/or manipulated waveform may distort, change the pitch, change the frequency, change the amplitude, change the envelope, or the like of the recorded audible message and/or recorded audio (e.g., audio that may be recorded for entertainment and/or non-messaging purposes). The waveform may be altered based on a magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor 102a, 102b. In some implementations, the waveform of the audio signal may be manipulated when a value detected by the sensor 102a, 102b exceeds a predetermined threshold. The processed recorded audible message and/or processed recorded audio may be output via the audio output device 112 before transmission, and the message may be transmitted, for example, when the user selects the button 114a.
[61] In some implementations, the sensor 102a, 102b may detect motion of the toy device. For example, motion of the device may be induced by the user when the user is recording the audible message. The user may move the toy device 100a, 100b horizontally and/or vertically, and/or may rotate, swing in an arc, shake, or move the device 100a, 100b in any other suitable manner. The sensor 102a, 102b may detect the motion of the toy device 100a, 100b by the user, and the processor 106 and/or an audio processor may change the waveform of the audio signal of the audible message that is captured by the audio input device 110 when the button 114a is selected. That is, the processor 106 and/or audio processor may use the motion data collected by the sensor 102a, 102b to manipulate the waveform of the captured audible message. The processor 106 and/or the audio processor may change the pitch, frequency, amplitude, distortion, envelope, or the like of the audio signal of the captured audible message for recording. The waveform may be altered based at least on, for example, the magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor 102a, 102b. In some implementations, the waveform of the audio signal may be changed by the processor 106 when a value detected by the sensor 102a, 102b exceeds a predetermined threshold. The manipulated waveform of the audio signal of the captured audible message may be stored by the toy device 100a, 100b (e.g., by the memory 108, the fixed storage 116, and/or the removable media 118), and may be transmitted via the network interface 104 according to the input received from the user via the user input interface 114.
[62] The toy device 100a, 100b may indicate that an audible message has been received (e.g., via the network 7 from the device 10, 11, the server 13, and/or the remote platform 17 shown in FIG. 4). For example, the toy device 100a, 100b may vibrate (e.g., using a vibration device (not shown) that may be coupled to the bus 120 of FIG. 1) and/or may visually indicate (e.g., via a light, a display, or the like) on the external housing of the toy device 100a, 100b. In some implementations, a sound may be output by the audio output device 112 when a message has been received by the toy device 100a, 100b. When the user notices that a message has been received by the toy device 100a, 100b (e.g., from the vibration of the toy device 100a, 100b and/or via a visual and/or audible indicator), the user may select the button 114b to play the received audible message. When the button 114b is selected, the audio output device 112 may reproduce the received audible message for the user. If the received message includes a song (which may be included with the message, as discussed below), the song may be output by the audio output device 112. If the received audio and/or visual message includes one or more emoji (which may be included with the message, as discussed below), the audio output device 112 may output a sound that corresponds with the received emoji (e.g., a "happy" sound, a "sad" sound, an "angry" sound, or the like).
[63] In some implementations, messages that include emojis can be generated based on input detected by the sensor 102a, 102b. For example, when the sensor 102a, 102b detects a predetermined amount of pressure, the toy device 100a, 100b may generate a message that includes a hug emoji and send it to an approved contact. In another example, if the sensor 102a, 102b is a touch sensor, and the child imparts a tickling motion to the touch sensor, the toy device 100a, 100b may send a message with a laugh emoji to an approved contact. In another example, the sensor 102a, 102b may detect a child moving a hand of the toy device 100a, 100b, and the toy device may send a message with a hello emoji to an approved contact.
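The sensor-to-emoji mapping of paragraph [63] can be sketched as a simple lookup. The event names, the pressure threshold, and the function name are hypothetical, introduced only for illustration:

```python
# Hypothetical mapping from detected sensor events to outgoing emoji
# messages, per paragraph [63]. Event names and threshold are assumptions.

HUG_PRESSURE_THRESHOLD = 5.0  # arbitrary pressure units


def emoji_for_sensor_event(event_type, value=0.0):
    """Return the emoji to attach to an outgoing message, or None."""
    if event_type == "pressure" and value >= HUG_PRESSURE_THRESHOLD:
        return "hug"        # squeeze detected -> hug emoji
    if event_type == "tickle":
        return "laugh"      # tickling motion on the touch sensor
    if event_type == "hand_wave":
        return "hello"      # child moves the toy's hand
    return None             # no qualifying event: no message generated
```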
[64] In some implementations, when the button 114b is selected so that the audio output device 112 may output the received audible message, data collected by the sensor 102a, 102b may be used by the processor 106 and/or an audio processor to manipulate the waveform of the audio signal of the received audible message to be output by the audio output device 112. For example, the user may move (e.g., horizontally, vertically, and/or rotationally move and/or shake) the toy device 100a, 100b, and the processor 106 may manipulate the waveform of the audio signal to be output by the audio output device 112 according to the detected movement. The processor 106 may manipulate the waveform of the audio signal so as to change its pitch, frequency, amplitude, envelope, or the like based at least in part on the magnitude and/or direction of the detected movement of the toy device 100a, 100b. In some implementations, the waveform of the audio signal may be manipulated when a value detected by the sensor 102a, 102b exceeds a predetermined threshold. The manipulated waveform of the audio signal may be output by the audio output device 112.
[65] In some implementations, a computing device such as a tablet, laptop, or smart phone may execute an application for manipulating aspects of toy device 100a, 100b. For example, toy device 100a, 100b may express one of several personalities. Each of the personalities may be associated with a selectable mode via the application. A personality for toy device 100a, 100b may be associated with a variety of toy behaviors and may change based on context. For example, personalities may include a warm personality, an indifferent personality, or a grumpy personality. When toy device 100a, 100b detects that a child is in proximity, it may greet the child. A toy device having the warm personality may greet the child with a statement such as "Hello, I hope you're doing well today!" or, if the context indicates it is the morning, "Good Morning!" A toy device having the indifferent personality may greet the child with a statement such as "Hi," or, if the context is at night, "I'm going to bed." A toy device having the grumpy personality may not greet the child at all or may express a statement such as "What are you doing here?" or, if the context indicates it is dinner time, "I'm hungry!" These or any other suitable personalities may be presented as selectable modes on an application suited for child interaction. A child or other user may engage with a computing device executing the application, select a mode associated with a desired personality, and the computing device may provide an instruction to toy device 100a, 100b to execute the selected personality mode.
[66] Other features may also be selectable via the application by the child or other user, such as particular sounds emitted or other behaviors expressed by toy device 100a, 100b when a sensor reading is received. For example, expressions such as screams may be emitted when an accelerometer in toy device 100a, 100b measures an acceleration value above a threshold level or a snoring sound may be expressed when an acceleration value below a threshold level is measured for a threshold period of time. The particular expression and/or threshold sensor reading levels linked to a particular expression may be selected by a user of the application, as well as other expressions or behaviors whether or not linked to sensor reading levels. In some implementations, any of the expressions, sounds, or other behaviors of toy device 100a, 100b as well as any coupled sensor reading thresholds, levels, or other values as discussed in this disclosure may be selected by a user of the application, such as a child.
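The sensor-linked expressions of paragraph [66] amount to user-selectable rules mapping accelerometer readings to sounds. A minimal sketch follows; the rule structure, threshold values, and function name are illustrative assumptions:

```python
# Hypothetical user-configurable expression rules, per paragraph [66]:
# a scream above an acceleration threshold, snoring after a quiet period.
# All names and values are assumptions for illustration.

rules = {
    "scream":  {"above": 2.5},                       # accel above threshold
    "snoring": {"below": 0.1, "for_seconds": 30.0},  # quiet for a period
}


def expression_for(accel_history):
    """Pick an expression from (timestamp, magnitude) readings, newest
    last, or return None if no rule fires."""
    t_now, latest = accel_history[-1]
    if latest > rules["scream"]["above"]:
        return "scream"  # a hard shake triggers a scream
    quiet_since = t_now
    for t, mag in reversed(accel_history):
        if mag >= rules["snoring"]["below"]:
            break  # the quiet run ends here
        quiet_since = t
    if t_now - quiet_since >= rules["snoring"]["for_seconds"]:
        return "snoring"  # still for long enough: snore
    return None
```

An application could expose the `rules` values as the selectable thresholds and expressions described in the text.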
[67] FIG. 2C shows an alternative implementation of the toy device 100a, 100b shown in
FIG. 2B that includes the display 120, and the image sensor 109. In the implementation shown in FIG. 2C, a user may record and send a message (e.g., by selecting button 114a and/or by selecting an option displayed on the display 120) that may include a still image, video, and/or an audible message using the image sensor 109 and/or the audio input device 110. When a user selects to play a received message (e.g., by selecting the button 114b and/or by selecting an option displayed on the display 120), an image and/or video portion of the received message may be displayed on the display 120, and the audible message may be output via the audio output device 112. The display 120 and/or the audio output device 112 may output a visual and/or audio notification that a message has been received by the toy device 100a, 100b.
[68] In some implementations, the image sensor 109 may be activated when the sensor
102a, 102b detects that the child has picked up the toy device 100a, 100b. The image sensor 109 may capture an image of the child to send to a parent and/or guardian from the approved contacts, and/or may capture an image of the child when the child's face is detected by the image sensor 109. When the parent and/or guardian receives the captured image of the child via the device 10, 11, the parent and/or guardian may retransmit the image to one or more contacts and/or social networks according to one or more selections.
[69] The implementations of the toy device 100a, 100b shown in FIGS. 2B-2C may include one or more lights (e.g., light emitting diodes (LEDs) and/or organic light emitting diodes) to indicate a status of the device (not shown). When a light outputs green light that is pulsed, it may indicate that the toy device 100a, 100b has been configured to interoperate with a communications network (e.g., network 7 shown in FIG. 4). When a light outputs red light that is pulsed, it may indicate that the toy device 100a, 100b may have failed to be configured to a communications network. If an orange light is emitted that is slowly pulsed (e.g., at 1 Hz), there may be no Wi-Fi settings available for the toy device 100a, 100b. If a red light is slowly pulsed (e.g., at 1 Hz), it may indicate that the toy device 100a, 100b is attempting to join a Wi-Fi network. If a light sequence is red, orange, and then off, with slow pulsing (e.g., 1 Hz), the toy device 100a, 100b may be attempting to obtain an Internet Protocol (IP) address. If a light sequence is orange, red, and then off and is slowly pulsed (e.g., 1 Hz), the IP address has been retrieved, and the toy device is attempting to connect with a server (e.g., server 13). If a light is green and pulsing slowly (e.g., 0.5 Hz), the toy device 100a, 100b may be attempting to connect to a cloud device. If a red light is pulsed fast (e.g., 2 Hz), the network and/or communicative connection may be lost, and the toy device 100a, 100b may be attempting to reconnect. If a light sequence is short green, off, long green, off with a slow pulse (e.g., 1 Hz), the toy device 100a, 100b may be attempting to verify a software and/or firmware update. If a green light is on (i.e., not flashing), the software and/or firmware update may be in progress. If there is no light being emitted, the toy device 100a, 100b may be operating normally.
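The status light patterns of paragraph [69] amount to a lookup from a (color, cadence) pair to a device state. A sketch of that table follows; the pattern keys and function name are hypothetical labels, not part of the disclosure:

```python
# Hypothetical status-light lookup table summarizing paragraph [69].
# Pattern keys (color, cadence) and state strings are illustrative.

LIGHT_PATTERNS = {
    ("green", "pulsed"):         "configured to network",
    ("red", "pulsed"):           "configuration failed",
    ("orange", "slow"):          "no Wi-Fi settings available",
    ("red", "slow"):             "joining Wi-Fi network",
    ("red-orange-off", "slow"):  "obtaining IP address",
    ("orange-red-off", "slow"):  "IP retrieved, connecting to server",
    ("green", "slow"):           "connecting to cloud device",
    ("red", "fast"):             "connection lost, reconnecting",
    ("short-long-green", "slow"): "verifying firmware update",
    ("green", "solid"):          "firmware update in progress",
    (None, None):                "operating normally",
}


def status_for(color, cadence):
    """Map an observed light pattern to the device state it indicates."""
    return LIGHT_PATTERNS.get((color, cadence), "unknown")
```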
[70] Implementations of the subject matter as disclosed throughout may be implemented in and used with a variety of component and network architectures. FIG. 3 is an example computing device 10, 11 suitable for implementations of the presently disclosed subject matter. The device 10, 11 may be, for example, a desktop or laptop computer, or a mobile computing device such as a smart phone, wearable computing device, smart watch, tablet, or the like. In some implementations, device 10, 11 may be mobile computing devices of a parent, guardian, and/or relative of a child user of the toy device 100a, 100b. As discussed in detail below in connection with FIGS. 5-8, the device 10, 11 and toy device 100a, 100b may be configured so as to communicate with one another via network 7, and/or with server 13, database 15, and/or remote platform 17. Based on a configuration of toy device 100a and/or toy device 100b, the device 10 and/or device 11 may communicate with one or both of toy devices 100a, 100b.
[71] The device 10, 11 may include a bus 21 which interconnects major components of the device 10, 11, such as a central processor 24, a memory 27 such as Random Access Memory (RAM), Read Only Memory (ROM), flash RAM, or the like, an output device 22 (e.g., a display screen, a touch screen, one or more audio speakers, or the like), a user input interface 26, which may include one or more controllers and associated user input devices such as a keyboard, buttons, mouse, touch screen, microphone, and the like, a fixed storage 23 such as a hard drive, flash storage, and the like, a removable media component 25 operative to control and receive an optical disk, flash drive, and the like, and a network interface 29 operable to communicate with one or more remote devices via a suitable network connection.
[72] The bus 21 allows data communication between the central processor 24 and one or more memory components, which may include RAM, ROM, and other memory, as previously noted. Typically RAM is the main memory into which an operating system and application programs are loaded. A ROM or flash memory component can contain, among other code, the Basic Input-Output system (BIOS) that controls basic hardware operation such as the interaction with peripheral components.
[73] Applications resident with the device 10, 11 are generally stored on and accessed via a computer readable medium, such as a hard disk drive (e.g., fixed storage 23), an optical drive, floppy disk, or other storage medium. For example, an application to interoperate with and/or control the operation of the toy device 100a, 100b may be stored on the device 10, 11. FIGS. 5- 8, as discussed below, show examples of display screens for the application to interoperate with and/or control the operation of the device 10, 11, the toy device 100a, 100b, and/or the server 13.
[74] The fixed storage 23 may be integral with the device 10, 11 or may be separate and accessed through other interfaces. The network interface 29 may provide a direct connection to a remote server via a wired or wireless connection. The network interface 29 may provide such connection using any suitable technique and protocol as will be readily understood by one of skill in the art, including digital cellular telephone, Wi-Fi, Bluetooth(R), near-field, and the like. For example, the network interface 29 may allow the computer to communicate with other computers via one or more local, wide-area, or other communication networks, as described in further detail below.
[75] Many other devices or components (not shown) may be connected in a similar manner (e.g., digital cameras, displays, microphones, audio signal processors, speakers, and so on). Conversely, all of the components shown in FIG. 3 need not be present to practice the present disclosure. The components can be interconnected in different ways from that shown. The operation of a computer such as that shown in FIG. 3 is readily known in the art and is not discussed in detail in this application. Code to implement the present disclosure can be stored in computer-readable storage media such as one or more of the memory 27, fixed storage 23, removable media 25, or on a remote storage location.
[76] FIG. 4 shows an example network arrangement according to an implementation of the disclosed subject matter. One or more devices 10, 11, such as local computers, smart phones, tablet computing devices, wearable computing devices, smart watches, and the like may connect to other devices via one or more networks 7. Each device may be a computing device as previously described. One or more toy devices 100a, 100b may connect to the one or more devices 10, 11, and/or with one another, and/or with a server 13, a database 15, and/or a remote platform 17.
The network may be a home network, a local network, wide-area network, the Internet, or any other suitable communication network or networks, and may be implemented on any suitable platform including wired and/or wireless networks. The devices 10, 11 and/or the toy device 100a, 100b may communicate with one or more remote devices, such as servers 13 and/or databases 15. The remote devices may be directly accessible by the devices 10, 11, or one or more other devices may provide intermediary access such as where a server 13 provides access to resources stored in a database 15. The devices 10, 11 also may access remote platforms 17 or services provided by remote platforms 17 such as cloud computing arrangements and services. The remote platform 17 may include one or more servers 13 and/or databases 15.

[78] The server 13, database 15, and/or remote platform 17 may store information related to one or more of the toy device 100a, 100b and/or the device 10, 11. For example, the server 13, database 15, and/or remote platform 17 may store information such as the devices 10, 11 that the toy device 100a, 100b is authorized to communicate with (e.g., transmit and/or receive audible messages), voice filtering settings, network configuration information, one or more recorded audible messages, account information, or the like.
[79] In some implementations, the remote platform 17 may be a server and/or other computer which hosts an application such that one or more toy devices 100a, 100b may communicate directly with one another, and/or communicate amongst a group of toy devices 100a, 100b. In some implementations, the remote platform 17 may be server and/or other computer which hosts an application such that one or more toy devices 100a, 100b may determine the location of another one or more of the toy device 100a, 100b.
[80] FIGS. 5-8 show example displays of a mobile computing device (e.g., device 10, 11 shown in FIGS. 3-4) according to an implementation of the disclosed subject matter. The device 10, 11 may execute at least a portion of one or more applications so as to display the example displays shown in FIGS. 5-8. In some implementations, the one or more applications may control and/or interoperate with the toy device 100a, 100b.
[81] FIG. 5 shows a display 200 displayed on the output device 22 of the device 10, 11 according to an implementation of the disclosed subject matter. Display 200 may include selectable option 202 to configure devices to a network, selectable option 204 to manage a child account, a selectable option 206 to record a message, a selectable option 208 to play a received message, and/or a selectable option 210 to change filter settings. The output device 22 of the device 10, 11 may be a touch screen, and the selectable options 202, 204, 206, 208, and/or 210 displayed on the display 200 may be selected by a user's touch via the touch screen of output device 22.
[82] When a user selects selectable option 202 to configure devices to a network, the user may configure one or more of the device 10, 11 and/or the toy device 100a, 100b so that they may communicate with one another and/or other devices connected to the network 7 (e.g., server 13, remote platform 17, and the like). Selectable option 202 may be selected by a user so as to add and/or remove one or more devices (e.g., devices 10, 11 and/or toy device 100a, 100b) that can communicate with one another and/or via the network 7. FIG. 6 shows a display presented on device 10, 11 when selectable option 202 is selected, such that the display presents selectable option 212 to select a network, a selectable option 214 to add or remove a toy device, and/or a selectable option 216 to add or remove a mobile device. When selectable option 212 is selected, a network (e.g., network 7 shown in FIG. 4) such as a home network, local area network, wide area network, or the like may be selected, and one or more devices (e.g., devices 10, 11 and/or toy devices 100a, 100b) may be added and/or removed from the selected network by the selection of selectable option 214 (to add and/or remove a toy device 100a, 100b), and/or by the selection of selectable option 216 (to add and/or remove a device 10, 11).
[83] As shown in FIG. 5, when a user selects selectable option 204 to manage a child account, the user may configure which one or more devices 10, 11 and/or people a child account for a toy device 100a, 100b may communicate with (e.g., transmit and/or receive audible messages). For example, when the selectable option 204 is selected, the user may select and/or approve a device 10, 11 for a toy device 100a, 100b to communicate with, such as devices that belong to one or more parents, guardians, relatives, friends, and the like. In another example, when option 204 is selected, the user may approve other toy devices 100a, 100b that a child may communicate with via their toy device 100a, 100b. The option 204 may allow a parent and/or guardian to control and/or manage which friends (e.g., that have toy devices 100a, 100b) a child may communicate with via the toy device 100a, 100b. In some implementations, the selectable option 204 may allow a user to provide information about their child's favorites, including colors, food, or the like. In some implementations, the selectable option 204 may allow a user to identify other devices of a child that may communicate with a toy device 100a, 100b, so that a child may use the toy device 100a, 100b to locate other registered toy devices (e.g., in a room, in a house, or the like).
[84] In some implementations, when the user selects the selectable option 204, the display shown in FIG. 7 may be presented on the display of device 10, 11, which may include selectable option 218 to add/remove a child account, selectable option 220 to manage child accounts, selectable option 222 to select child favorites, and/or selectable option 224 to set an audio filter for a child.

[85] When the option 204 is selected, a parent and/or guardian may select an option to have all messages routed through the parent device 10, 11 before the messages are sent to the toy device 100a, 100b. That is, the parent and/or guardian may monitor and/or control communication between toy devices 100a, 100b and/or between toy devices 100a, 100b and other devices 10, 11.
[86] In some implementations, when option 204 is selected, a parent and/or guardian may access recordings that are saved by the device 10, 11 and/or the toy device 100a, 100b. The recordings may be stored, for example, at the server 13, the database 15, and/or the remote platform 17. The recordings may be stored along with text (e.g., the audio recordings may be converted to text). The text may be analyzed by at least one of the device 10, 11, the server 13, and/or the remote platform 17 so that a parent and/or guardian may be able to determine what may be going on in a child's life and/or with their mental, emotional, and/or developmental state.
[87] When option 204 is selected, the parent and/or guardian may access programs and/or applications to be transmitted to and/or accessed by the toy devices 100a, 100b periodically (e.g., every day, every other day, every week, every month, or the like). The programs and/or applications may be educational, such as "word of the day" or "foreign language words" applications, song subscriptions, games, or the like. In some implementations, the child may send the program and/or application (or a link thereto) to a friend from the approved contacts list. For example, the program and/or application may be sent from one toy device 100a, 100b to another toy device 100a, 100b.
[88] When option 218 is selected from the option 204 display shown in FIG. 7, a user may add one or more child accounts to be managed by the application being executed, at least in part, by the device 10, 11. That is, a child may have one or more toy devices 100a, 100b that may be registered with a child account. A child account may be added when, for example, the child receives a new toy device 100a, 100b.
[89] When option 220 is selected, a user of the device 10, 11 may select and/or authorize one or more persons (e.g., parents, relatives, friends, or the like) of the child so that messages may be transmitted between the toy device 100a, 100b associated with a child account and the selected and/or authorized persons. For example, a parent and/or guardian having a device 10, 11 may authorize friends of a child to communicate with the child. The communication may be toy-to-toy communication between a plurality of toy devices 100a, 100b. By managing and/or authorizing the child contacts, a parent and/or guardian may have control over which persons may send messages to and/or receive messages from the child.
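The approved-contacts control of paragraph [89] can be sketched as a gate on message delivery: a message reaches the child's toy only if the sender has been authorized. The data shapes and names below are assumptions for illustration:

```python
# Hypothetical approved-contacts gate, per paragraph [89]: only authorized
# senders may deliver messages to a child account. Names are illustrative.

approved_contacts = {
    "child_henry": {"mom_phone", "grandma_phone", "toy_ava"},
}


def deliver_message(child_id, sender_id, message, inbox):
    """Append the message to the child's inbox only for approved senders."""
    if sender_id in approved_contacts.get(child_id, set()):
        inbox.append((sender_id, message))
        return True
    return False  # unapproved sender: message is rejected
```

A parent's device could add or remove entries in `approved_contacts` when option 220 is used, which is how management of the child's contacts would take effect.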
[90] In some implementations, when option 220 is selected, a user of the device 10, 11 may select and/or authorize one or more persons (e.g., guardians, relatives, friends, and the like) from one or more social networks. A person associated with the user of the device 10, 11 may send a request via a social network to become an approved contact of a child user. An invite may be sent via the device 10, 11 to another device 10, 11 coupled to the network 7 to become an approved contact.
[91] When option 220 is selected, a list of one or more contacts may be displayed on the display of device 10, 11. In some implementations, a contact in a list of one or more contacts may include a picture of the contact, a phone number, an email address, a social network identifier, or the like.
[92] In option 220, the parent and/or guardian may manage contacts of friends of the child user who do not have a toy device 100a, 100b. For example, the child may record a message with the toy device 100a, 100b, and may select a contact of a friend that does not have a toy device 100a, 100b, and the recorded message may be converted to text, and may be printed and sent by regular postal mail (e.g., via a post card), and/or by email or text to a parent device 10, 11 of the child contact. The post card, email, and/or text communication may include a discount offer to purchase a toy device 100a, 100b.
[93] When option 222 is selected, a user may set and/or provide one or more child favorites, such as favorite color, food, animal, or the like. The favorites may be used, for example, to facilitate communications between toy devices 100a, 100b. In some implementations, by selecting option 222, a parent, guardian, and/or relative may create a profile of a child's likes and dislikes. Alternatively, or in addition, the toy device 100a, 100b may provide the child's likes and/or dislikes to the device 10, 11 of the parent and/or guardian; these likes and/or dislikes may be learned by the toy device 100a, 100b and/or may be provided to the toy device 100a, 100b by the child (e.g., the child is prompted by the toy device 100a, 100b to name a favorite food, a favorite song, a favorite color, or the like). In some implementations, the toy device 100a, 100b may prompt the child to provide information (e.g., likes, dislikes, favorites, etc.) that may be transmitted from the toy device 100a, 100b to the device 10, 11 and may be used to create and/or update a child's profile. For example, when the child begins responding to the prompt by the toy device 100a, 100b, the toy device 100a, 100b may record the child's voice answers, and provide the responses to the device 10, 11 so that they may be added to the child's profile. The child's responses may be transmitted to the device 10, 11 as audio data, and/or may be converted from audio into text and may be sent as text data.
[94] In some implementations, the profiles of one or more children that a parent, guardian, relative, and/or friend is connected to may be stored, e.g., at the device 10, 11, the server 13, the database 15, and/or the remote platform 17. The profile of each child may be periodically updated. For example, the toy device 100a, 100b may prompt the child to update their profile. Some questions that may prompt the child may include, for example: "Is your favorite color still green?", "What do you want to be called?", or the like. Based on the answer provided by the child, if a favorite has changed, all of those in the child's trusted network (e.g., the approved contacts) may be notified of the status update. For example, when providing the update, the devices 10, 11 and/or toy device 100a, 100b may encourage their respective users to "like" the status update and/or comment (e.g., via a message) on the status update. For example, a notification may be transmitted from the toy device 100a, 100b to device 10, 11 which indicates that: "Henry now wants to be called T-rex" to then encourage all of those in Henry's network to "Say hello to T-rex!"
[95] In some implementations, when the profile is updated in toy device 100a, 100b and/or the device 10, 11 (e.g., which may communicate the updates to the toy device 100a, 100b), the toy device 100a, 100b may begin to address the child by the child's profile name to let them know when they have a new message to listen to. The child may be identified by the toy device 100a, 100b, by the sensors 102a, 102b that may detect a wearable identification device on the child. The child may be identified based on the child's voice, which may be captured by the audio input device 110 and/or the sensors 102a, 102b. Alternatively, or in addition, the child may be identified by the image sensor 109. A captured image may be compared with a stored image in the toy device 100a, 100b to identify the child.

[96] The favorites information included in the profile may be used by the device 10, 11 and/or the toy device 100a, 100b to match the child with potential playmates nearby (e.g., within a predetermined proximity, where the potential playmates would be approved by a parent and/or guardian). For example, a parent and/or guardian may find other playmates for the child by matching at least a portion of the favorites included in the profiles, and thus approve additional kids for their child to connect with and/or interact with (e.g., send messages, play games, or the like) via the toy device 100a, 100b.
[97] In some implementations, the profile information may be used (e.g., by the server 13 and/or the remote platform 17) to determine gifts and/or gift suggestions for a child. The determined suggestions may be distributed to the approved contacts periodically and/or upon request by a user.
[98] The profile may include, for example, the birthday of a child user and/or other important events. A reminder on the day of the child's birthday may be sent as a push notification (e.g., by the toy device 100a, 100b, the device 10, 11, the server 13, and/or the remote platform 17) to all of the child's contacts. In some implementations, the push notification may make gift suggestions for the child, including suggestions for audible gifts such as songs, stories, and/or sound effects. The toy device 100a, 100b may provide a notification to the child of the birthdays of their friends and family so that the child may record and send a message. For example, the child may record a birthday message with the toy device 100a, 100b, and transmit the birthday message to a friend who has a toy device 100a, 100b.
[99] The profile may include a child's bedtime. When it is bedtime, the toy device 100a, 100b may alert the child that, for example, it is time to brush their teeth or read a story. In some implementations, the toy device 100a, 100b may say goodnight to the child's favorite objects from their profile (e.g., audible goodnight messages may be output by the toy device 100a, 100b to objects in the room).
[100] When option 224 is selected, a user may set one or more audio filter options for a child's toy device 100a, 100b. The audio filter may manipulate a waveform of an audio signal of a received audible message by changing, for example, a pitch, frequency, amplitude, envelope, or the like during playback. The waveform may be altered based on a magnitude and direction of an acceleration, a movement, and/or an orientation movement as detected by the sensor 102a, 102b. In some implementations, the audio filter may manipulate the waveform of the audio signal when a value detected by the sensor 102a, 102b exceeds a predetermined threshold.
Alternatively, the audio filter may be set at the toy device 100a, 100b.
[101] When a user selects selectable option 206 in FIG. 5, the user may record a message to be transmitted to one or more toy devices 100a, 100b. The message may be recorded regardless of whether the device 10, 11 is communicatively connected to the network 7. The device 10, 11 may prompt the user by displaying a visual cue on the display 200 and/or may output an audio cue for the user to begin recording the audible message. The user may select the selectable option 206 and/or another selectable option to end recording of the audible message. A selectable option (not shown) may be displayed on the display 200 which, when selected by the user, may play the recorded message so that the user may hear it before the user transmits the audible message to one or more toy devices 100a, 100b.
[102] In some implementations, when the option 206 is selected to record a message, the display shown in FIG. 8 may be displayed, which may include option 226 to select a child, option 228 to record a message, option 230 to set an audio filter for the message to be recorded, and/or option 232 to send (i.e., transmit) the recorded message. For example, when a user desires to record a message to send to a child via toy device 100a, 100b, the user selects option 226 to select at least one child from a list of one or more children. The user may select one or more children to which a message to be recorded may be sent. When one or more children are selected using option 226, the user may select option 228 to record a message. The message may be an audio message and/or a video message, and/or a message that includes a still image and an audio message. When the message is recorded, a user may select option 230 to set an audio filter for the recorded message. By selecting option 230, a user may manipulate the waveform of an audio signal by, for example, adjusting the pitch, the frequency, the amplitude, the envelope, or the like of the recorded message.
[103] When the user selects option 232, the recorded message (e.g., which may be filtered and/or manipulated) may be sent to the toy device 100a, 100b of the child selected using option 226. In some implementations, the user may select a song (e.g., that is stored by the device 10, 11) and/or an emoji to be transmitted with the recorded message. In some implementations, the audible emoji may be transmitted without a recorded message. When the message is received by the toy device 100a, 100b, the processor 106 may output the song and/or one or more sounds (e.g., that are stored by the toy device 100a, 100b and/or are transmitted with the emoji and/or message) via the audio output device 112 based on the received emoji. For example, the toy device 100a, 100b may output a sound designated as "happy" via the audio output device 112 when a happy and/or smiling emoji is received. In another example, the toy device 100a, 100b may output a sound designated as "sad" via the audio output device 112 when a sad and/or frowning emoji is received.
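The emoji-to-sound designation described above might be implemented as a simple lookup on the toy device. The following is a minimal sketch; the emoji names and sound clip identifiers are invented for illustration and are not specified by this disclosure.

```python
# Hypothetical mapping from a received emoji to a sound clip stored on the
# toy device; the names and file identifiers are illustrative only.
EMOJI_SOUNDS = {
    "happy": "happy.wav",
    "smiling": "happy.wav",
    "sad": "sad.wav",
    "frowning": "sad.wav",
}

def sound_for_emoji(emoji_name, default=None):
    """Return the sound clip designated for the received emoji, if any."""
    return EMOJI_SOUNDS.get(emoji_name, default)
```

The processor would pass the returned clip identifier to the audio output device; an unrecognized emoji falls back to the supplied default.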
[104] In some implementations, the display 200 of FIG. 5 may indicate when a user has received an audible message from one or more of the toy devices 100a, 100b. Alternatively, or in addition, the device 10, 11 may output an audio notification via the output device 22 that the user has received an audible message. When a user selects selectable option 208, the device 10, 11 may output the received audible message via a speaker and/or other output device (not shown). The user may select the selectable option 208 and/or another option to replay the message. In some implementations, the display may include one or more selectable options to "like" the received message, share the message with one or more approved contacts or with persons of a user's social network, and/or delete the message.
[105] When the selectable option 210 shown in FIG. 5 is selected, a user may change and/or configure one or more filter settings, which may be used to manipulate and/or change the waveform of the audio signal of the audible message to be output when selectable option 208 to play a received message is selected and/or when the selectable option 206 is selected to record an audible message. That is, the user may adjust the frequency, pitch, amplitude, distortion, envelope, or the like of the recorded audible message.
[106] FIG. 9 shows an example method 300 of communicating recorded messages between toy devices 100a, 100b and/or other approved network devices described above in connection with FIGS. 1-8 according to an implementation of the disclosed subject matter. At operation 302, a profile of a toy device 100a, 100b may be set, at a mobile computing device 10, 11, that includes one or more approved contacts and user interest information. Alternatively, or in addition, the profile of the toy device 100a, 100b (e.g., approved contacts and/or user interest information) may be set using the user interface 114, the audio input device 110, and/or the display 120, which may be a touch screen. In some implementations, the profile of the toy device 100a, 100b may be accessible to a child user using one or more voice commands that may be received by the audio input device 110, which may be processed by processor 106, and contacts and/or interest information may be output via the audio output device 112 as audible information and/or may be output via the display 120.
[107] The approved contacts of the profile of the toy device 100a, 100b may include a parent, a guardian, a relative, and/or a friend of the child associated with the toy device 100a, 100b. The approval of contacts by, for example, a parent and/or guardian enables them to control who and/or what devices (e.g., devices 10, 11, other toy device 100a, 100b, server 13, or the like) the child and the toy device 100a, 100b are able to transmit messages to and receive messages from. The approved contacts may include device identification information and/or network identification information so that the toy device 100a, 100b may be enabled to transmit messages to and/or receive messages from a device 10, 11. The user interest information may include information about the child user of the toy device 100a, 100b, such as their favorite color, song, food, games, and the like. The interest information, in some implementations, may include the child's name or nickname, the name of the toy device, or the like. In some implementations, the child may access, add to, and/or revise the user interest information using one or more audio commands input via the audio input device 110. That is, the child user may use the user interface 114 and/or the audio input device to input and/or record the user interests. The user interest information may be used in encouraging toy device to toy device
communication (e.g., prompting a user to record and transmit a message from a toy device 100a, 100b to be sent to another toy device 100a, 100b), or communication between a toy device 100a, 100b and a device 10, 11.
[108] At operation 304, a child user may record a message that includes at least an audio portion. As described above in connection with FIGS. 2B and 2C, a child user may select button 114a to begin recording an audio message and/or a video message that includes audio (e.g., where the image sensor 109 may capture the video images, and the audio input device 110 may capture the audible message). [109] At operation 306, the toy device 100a, 100b may receive a selection of at least one of the approved contacts. For example, the toy device 100a, 100b may receive a user selection via the user interface 114 and/or via the display 120 (e.g., where the display is a touch screen).
Alternatively, or in addition, a selection may be made via a voice command by the child user. The selection made via voice may be received by the audio input device 110, and may be processed by the processor 106.
[110] At operation 308, the recorded message may be transmitted to at least one device associated with the selected at least one of the approved contacts, which may be associated with one or more devices 10, 11 communicatively coupled to the network 7. For example, the child user may make a selection via the user interface 114, via the display 120, and/or via a voice command received by the audio input device to transmit the recorded message to a device 10, 11 associated with at least one of the approved contacts.
[111] FIG. 10 shows an example method 400 of changing the output characteristics of a message according to an implementation of the disclosed subject matter. At operation 402, the toy device 100a, 100b may receive a message via the network 7 that includes at least an audio portion. That is, the message may be an audible message, the message may include one or more still images and an audible message, and/or the message may include video and the audible message.
[112] At operation 404, the user interface 114 (and/or the display 120) may receive a selection of an option to output at least the audio portion of the message. At operation 406, when an audio output device 112 of the toy device 100a, 100b is outputting the audio portion of the message based on the selected option, the processor 106 may change the waveform of an audio signal of the audio portion based on sensor data collected (e.g., from sensors 102a, 102b) as the toy device 100a, 100b is moved so as to change at least one characteristic of the outputted audio portion. As discussed above in connection with FIG. 1, the sensors 102a, 102b may be a motion sensor, a physical orientation sensor, an acceleration sensor, a pressure sensor, or the like. The waveform of the audio signal may be manipulated so as to change the pitch, frequency, amplitude, envelope, distortion, or the like based at least in part on the collected sensor data. The manipulated audio waveform may be output by the audio output device 112. In some implementations, as the toy device 100a, 100b is moved, has pressure applied, and/or has its orientation changed, the sound characteristics of the audible message being output may correspondingly change as the audible message is being output by the audio output device 112.
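The sensor-driven waveform manipulation of operation 406 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the mapping from acceleration magnitude to playback rate, the use of nearest-neighbour resampling for the pitch change, and the rest-acceleration value are all assumptions for the example.

```python
import math

def apply_motion_filter(samples, accel_magnitude, rest_accel=9.8):
    # Map the deviation from rest acceleration (~9.8 m/s^2, gravity only)
    # to a playback-rate factor: shaking the toy harder raises the pitch.
    rate = 1.0 + min(abs(accel_magnitude - rest_accel) / rest_accel, 1.0)
    # Naive nearest-neighbour resampling: a higher rate shortens the clip
    # and raises its pitch as it is output.
    out_len = int(len(samples) / rate)
    return [samples[int(i * rate)] for i in range(out_len)]

# A short test tone (PCM samples in [-1.0, 1.0]).
tone = [math.sin(2 * math.pi * 0.05 * n) for n in range(200)]
at_rest = apply_motion_filter(tone, accel_magnitude=9.8)   # unchanged
shaken = apply_motion_filter(tone, accel_magnitude=19.6)   # pitched up
```

In a real device this transformation would run per audio buffer so that the message's sound changes continuously while the child moves the toy.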
[113] According to an implementation of this disclosure, a sensor 102a, 102b of a toy device 100a, 100b may detect a status of the toy device 100a, 100b. For example, an
accelerometer may be included with the toy device (e.g., as sensor 102a, 102b) and may detect a change in acceleration associated with motion of the toy device 100a, 100b. A communications interface (e.g., network interface 104) of the toy device 100a, 100b may transmit a message to a mobile device 10, 11 in response to the detected status. For example, a child may have picked up the toy device 100a, 100b, and the sensor 102a, 102b may detect the change in acceleration. The detected status may trigger a Wi-Fi radio within the toy device 100a, 100b to transmit a message to a smart phone (e.g., device 10, 11) of the child's parent. The message may be received by the parent's smartphone (e.g., device 10, 11) and trigger a notification via an application executing on the smart phone (e.g., device 10, 11). The application may prompt the parent to select a behavior, operation, and/or output of the toy device 100a, 100b. For example, the application may provide an option for the parent to record a voice message to be played for the child via the toy device 100a, 100b, such as "I love you" or "go back to bed." The application may also present an option for the parent to select a sound for the toy device to make, such as a "bark" if the toy device is a dog, or an "oink" if the toy device is a pig. The parent may select an option and the application may instruct a communication component of the smart phone (e.g., device 10, 11) to transmit an instruction for the selected behavior to the toy device 100a, 100b. The smart phone (e.g., device 10, 11) may transmit the instruction, and when the instruction is received, the toy device 100a, 100b may execute the instruction and perform the selected behavior and/or operation, and/or provide an output.
[114] In some implementations, motion sensors or direction sensors, such as accelerometers or compasses, may be integrated with toy device 100a, 100b. One or more thresholds of acceleration or change in direction may be pre-determined and coupled to a status. For example, sensor measurements greater than or less than a specified acceleration or change of direction value or within a specified range may be coupled to a status that indicates a child is playing with toy device 100a, 100b. Sensor measurements outside of this predetermined range may be coupled to a status that indicates the child is not playing with toy device 100a, 100b or that someone has rapidly thrown toy device 100a, 100b. For example, a sensor reading below a predetermined range, such as a range of acceleration values, may be coupled to a status that toy device 100a, 100b is not being played with. In another example, a sensor reading above a predetermined range, such as a range of acceleration values, may be coupled to a status that toy device 100a, 100b is being thrown or otherwise not properly played with. In another example, a sensor reading within a predetermined range, such as a range of acceleration values, may be coupled to a status that toy device 100a, 100b is being played with. In some implementations, indicators of any of the aforementioned statuses may be transmitted by toy device 100a, 100b to a smart phone such as device 10, 11, or to another toy device, such as a toy device within a toy-to-toy social network with toy device 100a, 100b. In some implementations, indicators of any of the aforementioned statuses may serve as a basis for a communication performed by toy device 100a, 100b. For example, an acceleration of toy device 100a, 100b that is within a predetermined acceleration range may be linked to a behavior of toy device 100a, 100b of expressing a verbal indication that toy device 100a, 100b is being played with.
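The coupling of pre-determined sensor ranges to statuses described above might reduce to a comparison like the following sketch; the numeric bounds are invented placeholders, not values from this disclosure.

```python
# Hypothetical range of acceleration changes (m/s^2) deemed normal play.
PLAY_RANGE = (0.1, 5.0)

def status_from_acceleration(delta):
    """Couple a measured change in acceleration to an interaction status:
    below the range -> not played with; above it -> thrown; within -> play."""
    low, high = PLAY_RANGE
    if delta < low:
        return "toy device is not being played with"
    if delta > high:
        return "toy device is being thrown or otherwise not properly played with"
    return "toy device is being played with"
```

The returned status string could then be transmitted to a smart phone or another toy device, or serve as the basis for a verbal output by the toy.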
[115] A toy device 100a, 100b may be any object with which a person, such as a child, may suitably play. For example, FIG. 2A shows, according to an implementation of this disclosure, an example toy device 100a shaped in a "pig" form. Toy devices 100a, 100b may have any shape suitable for the purposes of this disclosure. For example, a toy device 100a, 100b may take the shape of a "dog", a "robot", a "humanoid", and so forth. In other examples, toy device 100a, 100b may be embodied in any form factor suitable for the purposes of this disclosure. For example, the form factor of toy device 100a, 100b may include a plush toy, hard-surfaced toy, pliable toy, and/or may be attached or integrated with a watch, hat, shoe, purse, bracelet, button, sweater, wristband, headband, jacket, bag, pants, shirt, or any other apparel or wearable item that is suitable for the purposes of this disclosure. Electronic hardware
components of the subject matter of this disclosure may be interchangeable among any suitable form factor. For example, the same electronic components may be operatively installed within a shark-shaped plush toy, pig-shaped plush toy, and/or a wearable item. In another example, a particular set of electronic hardware may be transferred among various form factors. For example, a 2-year-old child may interact with a plush toy having the particular electronic components. At 4 years old, the child may interact with a different plush toy having the particular electronic components. At 8 years old, the child may interact with yet a different form factor having the same electronic components, such as a book bag or a watch. Software that may interact with or execute on the particular electronic components may be updated over time and result in improved or more applicable functionality of toy device 100a, 100b as an applicable child or other user changes interests or ages.
[116] A toy device 100a, 100b may communicate with other devices such as other toy devices 100a, 100b, mobile devices 10, 11, or remote servers (e.g., server 13, remote platform 17, or the like). For example, a child named Karen may possess a toy device 100a, 100b, and the child's parent may want to monitor the child's usage of the toy device. The parent may possess a mobile device 10, 11 such as a smart phone or other computing device that executes an application for monitoring and communicating with the toy device. The application may include options such as those shown in FIGS. 5-8. A communications component (e.g., network interface 29) of the parent's mobile device 10, 11, such as a Wi-Fi or cellular radio, may receive signals from and transmit signals to the toy device 100a, 100b. For example, the parent's mobile device 10, 11 may receive an indicator of a status of the toy device and/or push a notification via the application to the interface of the parent's mobile device 10, 11. Karen's parent may thereby be alerted of the status of the toy device 100a, 100b. The parent may choose to communicate with Karen via the toy device 100a, 100b, for example, by entering a message into the application via the option 228 shown in FIG. 8 to record a message and transmitting the message to the toy device 100a, 100b via the option 232 to send a message. The toy device 100a, 100b may receive the message and communicate the message to the child via the toy device 100a, 100b.
[117] A parent or the child may also benefit from the transmission of automated messages to the toy device 100a, 100b. Messages may be transmitted to the toy device 100a, 100b from the parent's device 10, 11, and/or from a remote device (e.g., the server 13, the remote platform 17, or the like) via the network 7. For example, network interface 104 of Karen's toy device 100a, 100b may periodically transmit indicators of the status of toy device 100a, 100b to a remote server (e.g., server 13) that is managed by Karen's parent and/or a third party, such as a service provider and/or the manufacturer of toy device 100a, 100b. The status indicators may be received by the parent's device 10, 11 and retransmitted to a remote server (e.g., server 13), or the remote server may receive the indicators from the toy device 100a, 100b. The remote server (e.g., server 13) may receive the status indicator and compare the indicated status to a set of automated responses. For example, a status "Karen is present and not playing with toy" may be coupled and/or linked to the behavior, operation, and/or output of "Let's play!" Thus, if the received status indicator is "child present, not interacting with toy", then the remote server (e.g., server 13) may
automatically select an instruction for toy device 100a, 100b to execute the behavior, operation, and/or output of "Let's play!". The remote server (e.g., server 13) may then transmit the selected instruction to the toy device 100a, 100b. Upon receipt, the processor 106 of the toy device 100a, 100b may execute the instruction. For example, by executing the instruction, processor 106 may generate a signal that, when received by audio output device 112, causes a speaker of the audio output device 112 to emit the audible message "Let's play!".
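The server-side comparison of an indicated status to a set of automated responses could be sketched as a lookup table; the table entry is the example from the text, while the data structure and function shape are assumptions for illustration.

```python
# Hypothetical server-side table coupling a received status indicator to
# the behavior, operation, and/or output selected for the toy device.
AUTOMATED_RESPONSES = {
    "child present, not interacting with toy": "Let's play!",
}

def select_instruction(status_indicator):
    """Return the instruction the remote server would transmit to the toy
    device for the indicated status, or None if no response is coupled."""
    return AUTOMATED_RESPONSES.get(status_indicator)
```

On the toy device, the processor would receive the returned instruction and drive the audio output device to emit the corresponding audible message.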
[118] One or more sensors, such as sensor 102a or 102b, may detect a status of a toy device 100a, 100b. The sensor 102a, 102b may be attached to, stored or integrated within, or embedded on the surface of a toy device 100a, 100b. The sensor 102a, 102b may also be separate and distinct from the toy device 100a, 100b, and may be communicatively coupled to the toy device 100a, 100b. For example, a sensor may not be attached to bus 120 of toy device 100a, 100b as shown in FIG. 1. Rather, a sensor may be located in a room, building, or area where the toy 100a, 100b is located, such as in the corner of Karen's room or near the doorway. The sensor 102a, 102b may be communicatively coupled to the network 7. Such a sensor 102a, 102b may thus be remote from the toy device 100a, 100b and rely on its own communications components to communicate with toy device 100a, 100b, the parent's device 10, 11, a remote server (e.g., server 13), other sensors, and/or other components of this disclosure.
[119] A status may be any status that describes the behavior, operation, and/or output of the toy device 100a, 100b, a behavior of a person or other entity interacting with the toy device 100a, 100b, a behavior of another toy device 100a, 100b interacting with the toy device 100a, 100b, or any combination of any of the foregoing interactions. The status of a toy device 100a, 100b may be detected by data received from one or more sensors 102a, 102b, other data received from non-sensor sources, or combinations of sensor data and non-sensor data. The following status table shows a non-exhaustive set of example statuses, descriptions of the statuses, and the sensor data or other data that may be coupled to those statuses and thereby result in their detection according to implementations of this disclosure: Status Table:
Status: "Someone is in the room with the toy"
Description: A sensor has detected occupancy in the room with the toy.
Sensor data / Other data: A PIR sensor has detected a change in signal in excess of an occupancy threshold.

Status: "Someone is playing with the toy"
Description: A sensor has detected the toy has been picked up.
Sensor data / Other data: Accelerometer data changes from a threshold associated with being at rest to a threshold associated with being picked up.

Status: "No one is playing with the toy"
Description: A sensor has detected the toy has not been picked up.
Sensor data / Other data: Accelerometer data stays below a threshold associated with being at rest for a period of time exceeding a threshold period.

Status: "Karen is in the toy's room"
Description: A sensor has identified Karen and detected she is in the same room as the toy.
Sensor data / Other data: An external imaging device has identified Karen through facial recognition and the image is captured from a room that contains GPS location data received from the toy.

Status: "Karen is playing with the toy"
Description: A sensor has identified Karen and detected she is playing with the toy.
Sensor data / Other data: A microphone identified speech associated with Karen, words spoken by Karen and associated with the toy are detected, and the accelerometer data has exceeded a threshold associated with being picked up.

Status: "Karen is present and not playing with the toy"
Description: A sensor has identified Karen and detected that no one is playing with the toy.
Sensor data / Other data: An external imaging device has identified Karen through facial recognition, while accelerometer data stays below a threshold associated with being at rest.

Status: "Karen is recording a message"
Description: A sensor has identified Karen and detected that Karen is recording a message on the toy.
Sensor data / Other data: A microphone identified speech associated with Karen and data is received indicating the audio input device has been activated.

Status: "Karen is sending a message"
Description: A sensor has identified Karen and detected that she is sending a message on the toy.
Sensor data / Other data: A microphone has received an audible password associated with Karen, and data is received indicating the network interface has been activated.

Status: "Karen is sending a message to Craig"
Description: A sensor has identified Karen, and detected that she is sending a message to Craig's toy device.
Sensor data / Other data: A microphone has identified speech associated with Karen, data is received indicating the network interface has been activated, and data identifying the destination for the message is associated with Craig's toy.

Status: "Pig is sleeping"
Description: A sensor has detected that the pig is on its side sleeping.
Sensor data / Other data: Oriented accelerometers detect the pig has been turned on its side and data is received that indicates the pig is sleeping.

Status: "Karen made the pig go to sleep"
Description: A sensor has identified Karen and detected that she made the pig go to sleep.
Sensor data / Other data: An external imaging device has identified Karen through facial recognition, and oriented accelerometers detect the pig has been turned on its side.
[120] A sensor such as sensor 102a or 102b may be any device that detects or measures a physical property, and records, indicates, or otherwise responds to it, as discussed above in connection with FIG. 1. Sensors may be attached or located within the toy device. Sensors may also be external to the toy device and located within the room or rooms where the toy might be found. Additional discussion of sensors and their functionality is set forth throughout this disclosure.
[121] In an implementation of this disclosure, a status of a toy device 100a, 100b may be detected by data received from a single sensor (e.g., sensor 102a). For example, the toy device 100a, 100b may include an accelerometer which may measure a change in acceleration when the toy device 100a, 100b is picked up by a child. One or more sensor threshold values may be linked to one or more interaction statuses. For example, an acceleration change from 0.0 meters per second squared (m/s²) to greater than 0.3 m/s² may be a threshold acceleration difference linked to the interaction status "Someone is playing with the toy." Thus, when this threshold acceleration difference is measured by the accelerometer in the toy device, the interaction status may be determined to be "Someone is playing with the toy."
[122] In an implementation of this disclosure, a status of a toy device 100a, 100b may be detected by data received from multiple sensors. For example, the toy device 100a, 100b may include a sensor 102a, which may be a microphone, and a sensor 102b, which may be an accelerometer. The accelerometer (e.g., sensor 102b) may detect a first acceleration reading of approximately 0.0 m/s² at a first time. At a later time, the accelerometer may detect a second acceleration reading of 0.23 m/s². The change in acceleration measurements of 0.23 m/s² may exceed a threshold value of 0.1 m/s². This change in the acceleration measurement from a 0.0 m/s² initial value may be associated with the status "Someone is playing with the toy" as shown in the status table above.
[123] The microphone (e.g., the sensor 102a) may detect audio content and identify the audio as being Karen's voice based on a voice recognition operation. For example, Karen may record samples of her voice using the microphone (e.g., audio input device 110 and/or sensor 102a) when she first registers her account with the toy device 100a, 100b. Suitable speech recognition procedures may analyze the samples for frequencies and/or waveforms that are characteristic of Karen's voice and store these frequencies and/or waveforms as templates of Karen's voice. When Karen utters audio content at a later time, this content may be compared to the stored templates (e.g., by processor 106). If there is a match between the audio content and the template within a threshold amount, then the audio content may be determined to be uttered by Karen. In addition, speech recognition procedures such as stochastic, probabilistic, and statistical techniques within natural language processing may be implemented to determine that Karen is speaking about the toy device 100a, 100b. For example, the words "pig," "piggy," "snort," "oink," and "pink" may be determined or predetermined to be associated with Karen's toy device because her toy device may be a pig. Speech recognition procedures may recognize these terms. Data representing the accelerometer measurements, the identification of Karen, and the recognition of associated terms may be combined and coupled or linked to the status "Karen is playing with the toy." Thus, when this combined set of data is received, the status "Karen is playing with the toy" may be detected.
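Combining the accelerometer difference, the speaker identification, and the recognized toy-related words into the single status could look like the following sketch. The fusion rule (a simple conjunction) is an assumption; the threshold and word list are taken from the examples above, and the speaker-identification result is treated as an input already produced by a separate voice recognition step.

```python
# Words determined to be associated with Karen's pig-shaped toy device.
TOY_WORDS = {"pig", "piggy", "snort", "oink", "pink"}

def detect_karen_playing(accel_delta, speaker_id, spoken_words,
                         accel_threshold=0.1):
    """Return the combined status when the accelerometer, the voice
    identification, and the keyword recognition all agree, else None."""
    picked_up = accel_delta > accel_threshold
    karen_speaking = speaker_id == "Karen"
    about_toy = any(word in TOY_WORDS for word in spoken_words)
    if picked_up and karen_speaking and about_toy:
        return "Karen is playing with the toy"
    return None
```

Each input corresponds to one sensor pathway in the text: the 0.23 m/s² reading, the template match against Karen's voice, and the recognized terms.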
[124] In an implementation of this disclosure, a status of a toy device 100a, 100b may be determined by data received from a pressure sensor integrated within the toy device or located remotely from the toy device. For example, toy device 100a may contain sensor 102a, which may be a pressure sensor. The pressure sensor (e.g., sensor 102a) may be a piezoelectric sensor that contains a material that converts mechanical stress into a signal based on a generated electric potential. The pressure sensor may have sensing elements on the outer surfaces of toy device 100a, 100b. Signals representing pressure measurements may be compared to threshold pressure values, such as values representing the limits of the structural integrity of the toy device. These threshold values may be coupled or linked to the status "You're squeezing me too tightly!" Thus, when data from the pressure sensor reaches such a threshold, the status "You're squeezing me too tightly!" may be detected.
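The pressure-threshold coupling above reduces to a single comparison; the limit value below is an invented placeholder for the structural-integrity threshold.

```python
# Hypothetical structural-integrity limit for the toy's outer surface.
SQUEEZE_LIMIT_KPA = 50.0

def squeeze_status(pressure_kpa):
    """Couple a piezoelectric pressure reading to the squeeze status."""
    if pressure_kpa >= SQUEEZE_LIMIT_KPA:
        return "You're squeezing me too tightly!"
    return None
```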
[125] In implementations of this disclosure, a status may be based on a measurement of one or more passive infrared (PIR) sensors, which may be included in the toy device 100a, 100b or located remotely from and communicatively coupled to the toy device 100a, 100b. The PIR sensor may include radiation capture components composed of pyroelectric materials such as gallium nitride, cesium nitrate, polyvinyl fluorides, derivatives of phenylpyridine, cobalt phthalocyanine, other similar materials commonly used in PIR sensors, or any other material that generates energy when exposed to heat and that is suitable for the purposes of the disclosure. The energy-generating materials may be formed in a thin film and positioned parallel with a sensor face of the toy device or in other formations or locations suitable to capture incident infrared radiation. A single PIR sensor or multiple PIR sensors may be employed in implementations of the disclosure.
[126] PIR sensors may detect events such as motion or temperature changes. For example, a PIR sensor integrated into the surface of the toy device may be calibrated with a threshold background voltage measurement generated by the environment of the room or rooms in which the toy device is typically located. When a person walks in front of the PIR sensor, radiation emitted from the person may exceed this background threshold. Circuitry in communication with the PIR sensor may compare this measured energy to the background threshold value, and if the difference exceeds a threshold amount, then the presence of a person may be detected. This sensor data may be coupled to the status "Someone is in the room with the toy."
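The PIR comparison against a calibrated background described above might look like this sketch; the voltage units and the threshold value are illustrative assumptions.

```python
def pir_status(measured_v, background_v, threshold_v=0.2):
    """Compare a PIR reading to the calibrated background level of the
    room; a difference exceeding the threshold couples to the occupancy
    status, otherwise no status is detected."""
    if (measured_v - background_v) > threshold_v:
        return "Someone is in the room with the toy"
    return None
```

In practice the background level would be re-calibrated periodically so that slow ambient temperature changes do not trigger the status.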
[127] In implementations of this disclosure, a status may be detected based on data received from one or more active infrared (AIR) sensors of the toy device 100a, 100b or located remotely from and communicatively coupled to the toy device 100a, 100b. For example, AIR sensors may be placed throughout a room where the toy device 100a, 100b is located. These AIR sensors may be configured in an array such that they project an array of beams that define regions throughout a room, within which presence and depth may be measured.
[128] A template made up of object characteristics of the child may be created and used to detect the presence of the child. For example, data may be collected from the AIR sensors when Karen walks throughout the room to determine object characteristics for Karen. For example, approximate heights and widths of Karen may be determined based on the presence of the child within regions of the AIR array. These object characteristics may define a template against which future readings of the array elements may be compared for the purposes of determining statuses. For example, measurements of the presence of someone within array elements that are outside of the range of array elements associated with object characteristics of Karen may be coupled to the status of "Someone other than Karen is in the room with the toy."
[129] AIR sensors may include an emission component such as a light emitting diode point source, a laser, or a lens-focused light source. Implementations of the disclosed subject matter may include non-point sources. Radiation may be emitted in a pattern such as a certain arrangement of projected pixels and other structured formats or unstructured radiation formats. For purposes of this disclosure, a pattern may include no more than a single region or a pattern may include multiple regions. For example, the pattern of radiation may be a single projected pixel or beam.
[130] AIR sensors may capture radiation through capture components of the sensor.
Capture components may be any suitable radiation sensor. For example, the capture components may be image sensors such as photodiodes, charge-coupled devices, complementary metal-oxide-semiconductor devices, red-green-blue imaging cameras, red-green-blue-depth imaging cameras, infrared imaging sensors, and other components configured to detect electromagnetic radiation.
[131] The AIR sensors may emit patterns of radiation from emissions components, and the reflected patterns may be captured by capture components housed in the AIR sensor. Patterns of radiation may define no more than a single region or a series of regions. For example, an emission may be an emission of a single pattern for a single instance (e.g. a light pulse), an emission of a single pattern for multiple instances, or an emission may be an emission of multiple patterns for multiple instances. The patterns of radiation may vary in arrangement within a sequence or they may be constant. The time periods between emissions of instances of patterns of radiation may be constant or they may vary.
[132] Variations may be detected based on techniques such as structured light techniques, stereo techniques, and time-of-flight sensing. For example, fixed or programmable structured light techniques may be employed to detect variations in a pattern of radiation such as the dimensional spreading, geometrical skewing, or depth of its elements in order to determine information about an object. In addition, a time-of-flight variation may be measured between a pulse emission of a pattern of radiation and the captured reflection of that pattern of radiation, or a time-of-flight variation may be measured by determining the phase shift between an emitted pattern of radiation modulated by a continuous wave and the captured reflection of that pattern of radiation. Time-of-flight variations such as these may be used to determine depth information of an object. As another example, stereo techniques may be employed to detect a variation between the location of an aspect of a pattern of radiation captured in a first capture component and the location of the aspect in a second capture component. This variation may be used to determine depth information of the object from which the pattern is reflected.
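The two time-of-flight variations described above reduce to short formulas: for a pulse, depth is half the round-trip distance; for a continuous wave, the phase shift of the modulating signal gives the round-trip delay. The sketch assumes an idealized medium (speed of light in vacuum) and ignores sensor noise.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_pulse(round_trip_s):
    """Pulsed time-of-flight: the radiation travels to the object and
    back, so depth is half the round-trip distance, d = c * t / 2."""
    return C * round_trip_s / 2.0

def depth_from_phase(phase_shift_rad, modulation_hz):
    """Continuous-wave time-of-flight: the round-trip delay is
    t = phi / (2 * pi * f), giving depth d = c * phi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_hz)
```

A half-cycle phase shift (phi = pi) at a 10 MHz modulation frequency, for instance, corresponds to a depth of one quarter of the modulation wavelength.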
[133] Object characteristics of an object may be determined based upon a detected presence of an object within a certain region. For example, the initial width of emitted beams of radiation and the rate of width spreading as depth increases may be known based on the initial arrangement of emission components. When Karen walks through the array, the total number of beams she traverses at any given moment may be detected based on the reflection of the traversed beams into the capture component. The depth of Karen within the array may be determined by, for example, geometric or time-of-flight sensing techniques. Thus, the width between two beams at Karen's location in the array may be determined based on the detected depth. The total number of beams traversed at a given moment may then be detected and the widths between them summed, resulting in an approximate width of Karen. In similar ways, an approximate height of Karen may also be determined. This approximate width and approximate height may then serve as object characteristics in a template for Karen. The template may be coupled to the status "Karen is in the toy's room". The detection of object characteristics outside of Karen's width and height within Karen's room may be coupled to the status "Someone other than Karen is in the toy's room".
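A minimal sketch of the width estimate just described, assuming beam spacing grows linearly with depth; the linear-spread model and all names are assumptions for illustration, not the specification's method.

```python
def spacing_at_depth(initial_spacing_m: float, spread_per_m: float,
                     depth_m: float) -> float:
    # Width between adjacent beams at the detected depth, under a
    # linear spreading model.
    return initial_spacing_m + spread_per_m * depth_m

def estimate_width(beams_traversed: int, initial_spacing_m: float,
                   spread_per_m: float, depth_m: float) -> float:
    # Summing the gaps between the traversed beams approximates the
    # occluded width of the object (e.g., Karen) at that depth.
    gap = spacing_at_depth(initial_spacing_m, spread_per_m, depth_m)
    return max(beams_traversed - 1, 0) * gap
```

With beams spaced 0.1 m apart at the emitter and spreading 0.02 m per meter of depth, an object 2 m deep traversing 5 beams spans roughly 4 gaps of 0.14 m, i.e., about 0.56 m.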
[134] In an implementation of this disclosure, a status may also be based on an imaging sensor, which may be located within toy device 100a, 100b or remotely from the toy device 100a, 100b. Sensor data, such as imaging sensor data, may be combined with other data in a sequence or usage pattern. For example, a remote sensor, such as an imaging device, may detect events within the room where the toy device is located. In such
implementations, the imaging device may be in communication with the toy device 100a, 100b, such as by a wireless radio and/or via network 7. The imaging device may be a video camera and may detect motion in the room intermittently over a period of time. For example, Karen may walk into the room and play with several toys within the field of view of the camera over a ten-minute period. The motion detection data may be provided to the toy device. During the ten-minute period, the accelerometer sensor may not detect a difference value. The combination of the detection of motion by a remote sensor over a threshold period of time when no acceleration was detected by the accelerometer may be a usage pattern that may be coupled to the status "Karen is present and not playing with the toy."
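The usage pattern above can be sketched as a simple rule combining the remote motion data with the local accelerometer data. The threshold values and function name are assumptions for illustration.

```python
def presence_status(motion_seconds: float, accel_delta: float,
                    motion_threshold_s: float = 600.0,
                    accel_threshold: float = 0.05) -> str:
    if motion_seconds >= motion_threshold_s and accel_delta < accel_threshold:
        # Motion seen by the remote camera, but the toy itself never moved.
        return "Karen is present and not playing with the toy"
    if accel_delta >= accel_threshold:
        # The toy itself registered movement.
        return "Karen is playing with the toy"
    return "no status determined"
```

In practice such a rule would run on the toy's processor or a remote server once both data streams are available for the observation window.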
[135] An imaging device (e.g., image sensor 109) may also identify Karen through facial recognition techniques. For example, when registering with the toy device, Karen may present her face to the image sensor 109 and an image of her face may be captured. Identifying features of Karen's face may be located and stored as a template. For example, facial recognition operations performed by the processor 106 may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw in Karen's image. These features may then be stored as a template for later comparisons during identification procedures. Other template generation processes are also contemplated by this disclosure. For example, geometric and/or photometric techniques are contemplated, including linear discriminant analysis, principal component analysis using eigenfaces, hidden Markov models, elastic bunch graph matching, neuronal motivated dynamic link matching, multilinear subspace learning techniques, eye contour matching, and skin texture analysis techniques. Identifying data received from imaging devices (e.g., image sensor 109) performing facial recognition may be coupled alone or with other data to a status.

[136] A status may also be determined from other data, such as data provided directly from Karen. For example, Karen may identify herself by saying her name or a password. This data may be received by audio input device 110 and compared to authorized user information stored in memory in communication with the toy device 100a, 100b. If Karen's data is matched, then the speaker of the message may be identified as Karen. Karen may then record a message and send it to another person, such as a person approved or authorized to participate in Karen's social network as described in other portions of this disclosure. For example, this person may be her mother or another authorized recipient within Karen's network.
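The comparison-against-templates step can be sketched as nearest-neighbor matching over small feature vectors (e.g., relative positions and sizes of facial landmarks). The vector encoding, tolerance value, and function name are assumptions for illustration; production facial recognition would use one of the techniques named above.

```python
import math

def identify_face(candidate, templates, tolerance=0.25):
    # templates: mapping of registered user name -> stored feature vector.
    # Returns the nearest registered user within tolerance, else None.
    best_name, best_dist = None, float("inf")
    for name, template in templates.items():
        dist = math.dist(candidate, template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tolerance else None
```

A successful match (e.g., returning "Karen") could then be coupled, alone or with other data, to a status such as "Karen is in the toy's room."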
The toy device 100a, 100b may receive data indicating that voice recording and transmission functions are being accessed within the toy device 100a, 100b. For example, it may be determined by the processor 106 that control operations for audio input device 110 and/or the network interface 104 are being accessed. This determination may be coupled to the "Karen is recording a message" status if the microphone is recording a message or the "Karen is sending a message" status if the network interface is being accessed.
[137] In another example, one or more interactions between the child and toy device 100a, 100b may result in a status. For example, speech recognition procedures of the toy device may recognize certain types of speech as the child telling a joke. For example, Karen may utter the phrase "knock-knock". These terms may be stored and coupled to an automatic behavior of the toy device of emitting the statement "who's there?" when recognized. Whatever statement is recognized after the "who's there" emission by Karen may be automatically mimicked by the toy device with the addition of the ending phrase "who?" The child may then state the punch line of the joke, and as a result, the toy device may emit laughing sounds. Completing a script such as this may be coupled to the status "The toy laughed at Karen's joke."
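The scripted exchange above amounts to a small state machine. The state names and return convention below are assumptions for illustration.

```python
def joke_step(state: str, child_utterance: str):
    # Returns (toy_reply, next_state) for the scripted knock-knock joke.
    if state == "idle" and child_utterance.strip().lower() == "knock-knock":
        return "who's there?", "await_setup"
    if state == "await_setup":
        # Mimic the child's setup line with the ending phrase "who?".
        return child_utterance + " who?", "await_punchline"
    if state == "await_punchline":
        # Punch line delivered; the laugh couples to the status
        # "The toy laughed at Karen's joke."
        return "<laughing sounds>", "idle"
    return "", state
```

Each transition corresponds to one recognized utterance; completing the full cycle back to the idle state is what triggers the joke-completion status.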
[138] Once a status has been determined based on sensor 102a, 102b and/or other data, the toy device may provide an indicator of the status to a remote device (e.g., device 10, 11, server 13, or the like). For example, the status "Karen is playing with the toy" may be linked to an indicator associated with a record of the status stored on Karen's parent's mobile device 10, 11. Thus, when the status "Karen is playing with the toy" is determined, the toy device may transmit the associated indicator using the wireless interface to Karen's parent's mobile device 10, 11. In other implementations, the sensor data being provided for at least a portion of the status determination may be transmitted to a remote server (e.g., server 13 and/or remote platform 17) that is distinct from the mobile device 10, 11 and the toy device 100a, 100b. For example, an imaging device may include a wireless radio that is in communication with the remote server (e.g., server 13 and/or remote platform 17) over the Internet or a virtual private network (e.g., network 7). In another example, an accelerometer (e.g., sensor 102a, 102b) implemented within the toy device 100a, 100b may provide data to network interface 104, which may transmit the sensor data over the Internet or a virtual private network (e.g., network 7) to the remote server (e.g., server 13). The remote server (e.g., server 13 and/or remote platform 17) may then determine the status of the toy device 100a, 100b based on received sensor data and other data and provide an indicator to the parent's device 10, 11, or another authorized person's device 10, 11.
[139] In an implementation of this disclosure, a toy device 100a, 100b behavior may be selected based on the determined status. The toy behavior may be user-directed or automated. For example, a user-directed behavior may be a behavior selected by a parent on their mobile device 10, 11 in response to receiving a notification of a status of the toy device 100a, 100b on an application on their device 10, 11. For example, a parent's device 10, 11 may receive a notification via an application that the status of the toy device 100a, 100b is "Karen is in the toy's room". In response to this communication, the parent may choose to cause the toy device 100a, 100b to execute a behavior, operation, and/or output of providing a direct message to Karen. For example, the parent may select option 206 to record a message on the application implemented on device 10, 11 and record the message "Hello Karen, I hope you're having fun". The application may then encode this recording and transmit it to the toy device 100a, 100b along with an instruction to execute the behavior of audibly emitting the message via the network 7. Thus, for example, upon receipt of the message, the processor 106 of toy device 100a, 100b may execute the instruction and cause the audio output device 112 to emit "Hello Karen, I hope you're having fun."
[140] Other user-directed behaviors are also contemplated by this disclosure. For example, upon receipt of the "Karen is in the room with the toy device" status, Karen's parent may choose to send a predefined message such as a message the parent has already recorded and stored in their mobile device 10, 11. For example, the parent may have recorded and stored the message "I love you" in their mobile device 10, 11. Thus upon receipt of a notification of the status, the parent may select this behavior, operation, and/or output. The application on the parent's mobile device 10, 11 may operate so as to transmit the message along with an instruction for the toy device 100a, 100b to perform the behavior of emitting "I love you." The toy device 100a, 100b may receive the instruction and execute the desired behavior.
[141] In other implementations of this disclosure, a toy device 100a, 100b behavior may be selected based on a graphical icon displayed on the parent's mobile application, such as an "audible emoji." For example, the icons may be customized based on the toy device 100a, 100b of the child such that if the toy device 100a, 100b is a pig, the graphical icons may represent behaviors that a pig might exhibit, such as emitting the sounds of "oink" or "Let's go play in the mud." Thus, in circumstances where the parent's mobile device 10, 11 receives a status notification of "Karen is in the room with the toy device," the parent may choose to select the behavior "oink" by selecting a graphical icon on the application implemented on their mobile device that represents a pig oinking. Upon selecting the icon, the application may transmit an instruction to the toy device that, when executed by a processor of the toy device, causes a speaker within the toy device to emit the sound "oink." In some implementations the application stored and/or executed at least in part on the mobile device 10, 11 may transmit an indicator of the selected behavior to a remote server (e.g., server 13), which may transmit an instruction to the toy device 100a, 100b to configure the toy device 100a, 100b to perform one or more operations and/or output (e.g., exhibit a behavior), such as emitting the sound "oink" from the audio output device 112.
[142] In some implementations of this disclosure, a behavior may be selected automatically based on a determined status. For example, if the determined status is "Karen is present and not playing with the toy" and such a status is based on Karen being present for an extended period of time, such as twenty minutes, then a behavior may be automatically selected, such as causing the toy device to emit the statement "come play with me." Other behaviors may include
automatically sending a notification to other toy devices included in Karen's social network when the status "Karen is playing with her toy" is detected. In another example, a parent or other authorized individual may be presented with the option to instruct the toy device to perform the behavior of emitting "Pick me up" when the status "Karen is present and not playing with the toy" is detected.
[143] Automatically selected behaviors may be default behaviors, may be configured and stored by a parent or other authorized individual, or may be learned and automatically generated by employing machine learning techniques. For example, a determined status may be "Karen is in the room with the toy." This status may be coupled to the automatic selection of the behavior that causes the toy device 100a, 100b to emit the statement "Hello Karen". An authorized user may also configure automatic selection of behaviors via their own instance of an application in communication with the toy device 100a, 100b. For example, a parent may link the status "someone is in the room with the toy device" to the behavior of the toy device emitting the statement "who's there?" The parent may pre-record this statement or it may be generated by speech generation procedures implemented in the toy device.
[144] Implementations of this disclosure may also employ machine learning techniques. Suitable machine learning techniques may include linear regression, naive Bayes, neural networks, logistic regression, and optimized logistic regression. Machine learning techniques may analyze prior statuses of the toy device and behaviors executed by the toy device to determine an expected behavior. For example, Karen's toy device and another toy device belonging to Craig may have participated in an interactive game where the toy devices function as elements of the game. For example, Karen and Craig may have played a game called "Candy Country" that involved features such as a map, accumulated points, questions, and the option to request help from your toy if a question is too difficult. Thus, for example, when playing, Karen may be asked a question that is too difficult, such as "Where does wind come from?" Karen may choose to ask her toy for help. The toy may access help files configured to give players a hint, or may access public sources such as over the Internet. In this way, the toy device may be an element of the Candy Country game. After playing the game, Karen's toy device may store a record of the game, identifiers of the toys and people who participated, the score, and who was the winner. In the future, Karen's toy device may detect that it is in the presence of Craig's toy device. For example, Karen's toy device may detect Craig's toy device by means of near field communication techniques, such that detection indicates Craig's toy device is within several meters of Karen's toy device. Based on the historical behaviors of Karen's and Craig's toy devices, an automatic behavior may be generated that causes Karen's toy device to emit the statement "Does anyone want to play Candy Country?"
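As a deliberately simple stand-in for the learned behavior above (not one of the techniques the specification names), the suggestion could be derived from play-history frequency: when a peer toy is detected nearby, propose the game most often played with that peer. The record shape and names are assumptions for illustration.

```python
from collections import Counter

def suggest_behavior(history, nearby_toy_id):
    # history: list of (peer_toy_id, game_name) records stored after games.
    games = Counter(game for peer, game in history if peer == nearby_toy_id)
    if not games:
        return None  # no shared history with this peer toy
    game, _count = games.most_common(1)[0]
    return f"Does anyone want to play {game}?"
```

A trained model (e.g., logistic regression over status and behavior features) would replace this counting rule while keeping the same input and output shape.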
[145] In implementations of this disclosure, an option for a selected behavior may be based on further sensor data collected after the behavior is executed by the toy device 100a, 100b. For example, Karen's parent may choose to transmit (e.g., from device 10, 11) a variety of customized messages to Karen's toy device 100a, 100b. One such message may be "Karen you're a silly goose." The audio input device 110 of the toy device 100a, 100b may detect laughter each time the words "you're a silly goose" are recognized by speech recognition procedures. A threshold frequency of detected laughs may be 0.333. Messages including the terms "you're a silly goose" may be followed by detected laughs at a frequency of 0.666. Thus the phrase "you're a silly goose" may receive detected laughs in excess of a threshold amount. As a result, the phrase "you're a silly goose" may be automatically added to the predefined messaging options by the application executing on the parent's device 10, 11.
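The frequency check above can be sketched as follows: a phrase is promoted to the predefined messaging options once its detected-laugh frequency (laughs per playback) exceeds the threshold. The function name and argument shape are assumptions for illustration.

```python
def maybe_add_option(options, phrase, laughs_detected, times_played,
                     threshold=0.333):
    # Promote the phrase when its laugh frequency exceeds the threshold.
    if (times_played > 0
            and laughs_detected / times_played > threshold
            and phrase not in options):
        return options + [phrase]
    return options
```

With 2 laughs over 3 playbacks (a frequency of about 0.666), "you're a silly goose" clears the 0.333 threshold and is added; a phrase at 0.25 is not.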
[146] In implementations of this disclosure, a status such as "Karen is present in the room" may be coupled to an automatic behavior that outputs questions to Karen. For example, one or more questions may be asked of Karen (e.g., output by the audio output device 112) when she is present, such as "what is your favorite food?" or "What is your favorite color?" More complex questions may be asked, such as fill-in-the-blank questions or "madlib" type questions that may be built into a story. Karen's answers to these questions may be recorded (e.g., by the audio input device 110) by the toy device 100a, 100b and shared with other members of her social network.
[147] In accordance with an implementation of this disclosure, examples of automated behaviors coupled to statuses and accompanying descriptions are set forth in the following behavior table:
Behavior table:
[The behavior table is reproduced as images in the original publication and is not recoverable from this text extraction.]
[148] FIG. 11 and FIG. 12 show example methods for determining toy usage according to implementations of the disclosed subject matter. Instructions for executing the example operations shown in FIGS. 11 and 12 may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a, 100b. The instructions may be executed by a processor such as processor 106 of toy device 100a, 100b. In other implementations, operations shown in FIGS. 11 and 12 may be stored as instructions on a remote device such as a device 10, 11 of FIG. 3 or server 13 of FIG. 4. The instructions may be stored in storage of the remote device and be provided via a network interface to the processor 106 of the toy device 100a, 100b or may be executed by one or more processors of the remote device.
[149] FIG. 11 shows an example method of determining toy usage according to an implementation of this disclosure. For example, a status of a toy device 100a, 100b may be detected at 1110. For example, a sensor 102a, 102b of a toy device 100a, 100b may detect sensor data that is coupled to a status. At 1120, an indicator of the status may be provided to a remote device. For example, network interface 104 of the toy device 100a, 100b may transmit an indicator of the status to mobile device 10, 11 belonging to a parent. At 1130, an instruction to execute a selected behavior, operation, and/or output may be received. For example, the toy device 100a, 100b may receive a message recorded by the parent on their mobile device 10, 11 and an instruction for the toy device 100a, 100b to emit the recorded message. At 1140, the selected behavior and/or operation may be executed. For example, the toy device may emit the recorded message.
[150] FIG. 12 shows an example method according to an implementation of this disclosure. For example, at 1210, an indicator of a status of a remote device may be received. For example, a parent's mobile phone or tablet (e.g., device 10, 11) may receive an indicator of a status of the toy device 100a, 100b. At 1220, a behavior, operation, and/or output may be selected based on the indicated status. For example, a parent may record a message and select the behavior of the toy device 100a, 100b to output the message in response to receiving a notification of the toy device's status. At 1230, an instruction may be provided to execute the selected behavior. For example, the parent's tablet device (e.g., device 10, 11) may transmit an instruction to the toy device 100a, 100b so that the toy device 100a, 100b outputs the parent's recorded message (e.g., via the audio output device 112).
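The toy-side flow of FIG. 11 (detect, notify, receive, execute) can be sketched with the transport abstracted as callables; the names below are assumptions, not the specification's API.

```python
def toy_usage_cycle(detected_status, send_indicator, receive_instruction,
                    execute_behavior):
    send_indicator(detected_status)      # 1120: indicator to remote device
    instruction = receive_instruction()  # 1130: selected behavior arrives
    execute_behavior(instruction)        # 1140: e.g., emit the message
```

In a real device the three callables would wrap the network interface 104 and the audio output device 112; here they can be exercised with in-memory stand-ins.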
[151] The usage of a toy device 100a, 100b may be tracked, and rewards may be issued for a child based on the child's usage of the toy device 100a, 100b. For example, in accordance with an implementation of this disclosure, a status of a toy device 100a, 100b belonging to a child such as Karen may be determined (e.g., by the processor 106) in a manner such as that described throughout this disclosure. For example, a status such as "Karen is sending a message" may be determined by processor 106. A usage level for Karen may be updated as a result of the determined status. For example, a server 13 remote from the toy device 100a, 100b may receive an indicator of the status and increase a record of a number of times the toy device has entered that status by one. Above a certain usage threshold, a reward may be provided to Karen. For example, if Karen exceeds 50 messages sent, then Karen's toy device may receive a new song that Karen can play. Karen's updated message interaction level may be compared to such a usage threshold and it may be determined that Karen's usage level has exceeded the threshold. For example, Karen's most recent message may have been her 51st message. Based on the comparison, a reward may be provided to Karen. For example, as a result of exceeding 50 messages, the remote server may send an instruction to Karen's toy device to enable playback of a new song.
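The 50-message reward check above reduces to a counter compared against a threshold. The function name and return convention are assumptions for illustration.

```python
def record_message_sent(count, threshold=50):
    # Increment the usage level and report whether it crossed the threshold.
    count += 1
    return count, count > threshold
```

Karen's 51st message tips the counter past the threshold of 50, at which point the server would send the reward instruction to the toy device.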
[152] In implementations of this disclosure, usage levels for various statuses of a toy device 100a, 100b may be tracked for a user. For example, the status "someone is playing with the toy" may be based on data from a sensor 102a, such as an accelerometer, and data indicating the number of times a function of the toy device that is associated with play is accessed. Such functions may include accessing the user input interface 114, playing a message via audio output device 112, and recording a message on audio input device 110. The total number of times the status "someone is playing with the toy" is determined may be calculated, and this sum may serve as an indicator of usage of the toy device 100a, 100b. Similarly, subsets of total usage may be tracked, such as the total number of times a child such as Karen sends a message using the toy device, the number of times Karen sends a message to a particular person, such as her grandmother, and the frequency at which messages are sent, such as the total number of messages sent per day.
[153] Karen's usage levels may be maintained in an account accessible by Karen and other authorized individuals, such as her parents. This account may be stored in a device such as server 13 or database 15 of FIG. 4, fixed storage 23 of device 10 of FIG. 3, or fixed storage 116 or memory 108 of toy device 100a. The account may be accessed through application functionality such as option 204 to manage a child account of the application implemented on device 10.

[154] More complex interaction levels may also be tracked. For example, Karen's toy device 100a, 100b may assist in monitoring Karen's completion of certain tasks. For example, Karen may be asked by her mother to clean her room once a week. After Karen successfully cleans her room, Karen's parent may provide Karen a password to share with her toy device 100a, 100b. Karen's parent may preconfigure this password via the application so that it is recognized by the toy device and associated with the status "Karen cleaned her room". For example, the status "Karen cleaned her room" may be determined by audibly recognizing the preconfigured password from Karen. Each time the "Karen cleaned her room" status is determined, the usage level for that status may be updated. Usage levels for similar tasks, such as brushing teeth, making the bed, or taking a nap, may also be updated. Sensor data indicative of such tasks may also be collected and serve as a basis for a status determination.
[155] Tokens or other demarcations of value may be assigned to each status update and serve as a basis for tracking usage levels. For example, each time that Karen cleans her room, rather than increase the usage level by one, a predetermined number of tokens may be assigned to her account. Differing numbers of tokens may be assigned for different usage levels. For example, an update to the status level for "Karen cleaned her room" may count for 50 tokens, whereas an update to "Karen played with her toy" may only count for a single token. Various other denominations and allocations of tokens may be assigned for various types of status updates.
[156] Usage levels tied to games may also be tracked. For example, Karen's toy device 100a, 100b may function as an element of a solo or multi-person game such as a spelling contest or "Hang-man." Karen and her toy device may play against Craig and his toy device 100a, 100b. Each toy device 100a, 100b may take turns generating the word for hangman and may periodically generate hints to the players if they are stuck. Karen's toy device may also track her performance in such games. For example, the toy device may track how many letters Karen guesses correctly, how many games she wins, who she wins against, her winning percentage, and so forth. Each time a tracked event occurs in the game a corresponding status may be
determined. The usage level for this status may then be increased accordingly. For example, Karen may win hangman against Craig by guessing an 11-letter word. Her previous largest word may have been only 9 letters. Thus, her usage level for the status "Hangman victory" may be increased to 11 letters.
[157] In implementations of this disclosure, an updated usage level may be compared to a threshold level to determine whether a reward may be granted. For example, a threshold for the status "Karen is messaging her mom" may be three times in a day. If Karen exceeds this threshold, then she may be presented with a reward. A threshold for "Karen played with her toy" may be set at every 50 updates. Every time Karen exceeds the 50 status update threshold for "Karen played with her toy," a reward may be triggered. A threshold for the status "Karen won the spelling contest" may be a single update. Thus, each time Karen wins a spelling contest, she may be eligible for a reward. Games, such as spelling contests, may be played by Karen alone or as a contest between Karen and one or more other children. The games may be coordinated by a single toy device, such as Karen's toy device, or each child's toy device may participate. Other games involving toy devices, such as those discussed in this disclosure, may be tracked in similar ways.
[158] Quantities of tokens may also serve as thresholds. For example, different rewards may be triggered by different token quantities, which may serve as thresholds. Tokens may be generated based on a variety of usage level updates. Thus, several different usage level updates may contribute to a single token threshold. For example, Karen may get seven tokens for updating the status "Karen is playing with her toy" seven times, and 10 tokens for updating the usage level of "Karen brushed her teeth" one time. A certain reward may be triggered at 15 tokens. Usage level updates may be combined to result in 17 tokens and enable Karen to receive the reward.
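The token arithmetic above can be sketched as a per-status denomination table summed against a reward threshold. The mapping mirrors the example's denominations; the names are assumptions for illustration.

```python
TOKEN_VALUES = {
    "Karen is playing with her toy": 1,
    "Karen brushed her teeth": 10,
    "Karen cleaned her room": 50,
}

def tokens_for(updates):
    # Sum the token value of each status update (unknown statuses earn 0).
    return sum(TOKEN_VALUES.get(status, 0) for status in updates)

def reward_unlocked(updates, threshold=15):
    return tokens_for(updates) >= threshold
```

Seven play updates (7 tokens) plus one tooth-brushing update (10 tokens) yield 17 tokens, clearing the 15-token threshold in the example.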
[159] A reward may be determined (e.g., by processor 106) when a threshold is reached. A reward may be triggered automatically or it may be made available and selected by a user. For example, as discussed above, Karen may become eligible for a certain reward upon amassing over 15 tokens. For example, this reward may be a new song. Karen may receive a notification via her toy device 100a, 100b when she has reached the threshold level and be presented with reward options (e.g., via the audio output device 112 and/or the display 120). For example, the audio output device 112 of Karen's toy device 100a, 100b may make a sound indicating a reward is available, and Karen may query the toy device 100a, 100b (e.g., via a voice command received by the audio input device 110) to determine what the reward options are. For example, Karen might utter "what are my choices" upon hearing the audible alert. Voice recognition procedures implemented within the toy device 100a, 100b may detect Karen's query and emit three song options (e.g., via the audio output device 112 and/or the display 120) she can choose from. Karen may state the name of one of the songs and thereby make her choice.
[160] In some implementations, another authorized user may choose a reward for Karen. For example, in addition to or instead of Karen's toy device 100a, 100b receiving the award notification, a notification may be provided to an application executing on Karen's parent's smart phone (e.g., device 10, 11). Karen's parent may be presented with the three song options as graphical representations on her mobile device 10, 11. Karen's parent may select one of the options. As a result the application may transmit the song or a link to the song to Karen's toy device 100a, 100b along with an instruction to play the song or present it as an option for Karen to choose.
[161] Rewards may also be automatically generated. For example, upon winning at Hangman, Karen may be automatically alerted (e.g., via the audio output device 112) that she has won an award. The award may be tokens rather than a content-based reward. The alert may be different than other alerts output from the toy device 100a, 100b in order to make an award alert identifiable (e.g., the tone or sound of the alert may be different). As discussed above, an award may be generated every time the usage level of "Karen is playing with her toy" exceeds a threshold value such as 50. Each time this threshold is exceeded, a reward that is a new feature of Karen's toy device 100a, 100b may be downloaded to the device. For example, the toy device 100a, 100b may be able to play a new game, speak a new language, or tell new jokes. In this way, rewards may serve to keep the experience of the toy device 100a, 100b fresh and new for the child.
[162] In an implementation of this disclosure, upon determination of an award, the award may be provided to the toy device. Rewards may take various formats. For example, as discussed above, a reward may be a downloaded feature of the toy device 100a, 100b, such as a new song, parent pre-recorded content such as a joke, a new story, a new accent, a new skill set, a new game, or a new ability or feature within a game. For example, in the game Candy Country described in this disclosure, new game skills may be downloaded to Karen's toy device, such as the ability to protect Karen's character in the game from being susceptible to certain obstacles, or Karen's character may win double the award when certain circumstances in the game are encountered. Similarly, Karen's toy device 100a, 100b may access information, such as the favorite colors, songs, or foods, of other members of Karen's social network, such as from the one or more devices communicatively coupled to network 7 of FIG. 4.
[163] Physical rewards may also be provided to Karen. For example, upon reaching a threshold level of tokens, Karen may be eligible to receive another toy device 100a, 100b or a physical accessory to her current toy device 100a, 100b. For example, Karen's toy device 100a, 100b may be a pig, and she may receive a new donkey toy device as a reward, or Karen may receive a "pig pen" set for her pig to inhabit, or clothing for her pig such as a hat and suspenders. Physical rewards may be mailed to Karen or her parent.
[164] FIG. 13 shows an example method for providing a reward based on usage according to an implementation of the disclosed subject matter. Instructions for executing the method may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a. The instructions may be executed by a processor such as processor 106 of toy device 100a. In other implementations, the methods of this disclosure may be stored as instructions in fixed storage 23 or memory 27 of a remote device such as a device 10, 11 of FIG. 3 or server 13 of FIG. 4. The instructions may be stored in storage of the remote device and be provided via a network interface to a processor 106 of the toy device 100a, 100b or may be executed by one or more processors of the remote device.
[165] As shown at 1310, an indicator of a status of a toy device 100a, 100b may be received. For example, a remote server (e.g., server 13) may receive an indicator of a status transmitted over a local network (e.g., network 7), the Internet, or a virtual private network. At 1320, the remote server may update a usage level based on the received indicator of the status. For example, the remote server may increase the usage level by a certain value. At 1330, the updated usage level may be compared to a threshold level. For example, an updated usage level may be at 51 and a threshold level may be at 50. The remote server may compare 51 to 50 and determine that the usage level exceeds the threshold level. At 1340, a reward may be determined based on the comparison. For example, exceeding the threshold of 50 may be linked to an automatic reward of a new song. Thus, the remote server may determine that the new song is the determined reward. At 1350, an instruction may be provided to the toy device 100a, 100b to execute the reward. For example, the new song may be transmitted by the remote server along with an instruction to play the new song at the toy device 100a, 100b.
[166] In implementations of this disclosure, a toy behavior may be based on a context. A context may include a status of a toy device 100a, 100b as described throughout this disclosure and other data, such as a profile of a child associated with a social network (e.g., a toy-to-toy social network), time and date data, seasonal data, weather data, and a child's behavior. Data from sources in communication with the toy device 100a, 100b may be received and compared to a condition associated with a particular context. The particular context may be coupled to a particular toy behavior. If the context condition is satisfied, then an instruction may be provided to the toy device to execute the behavior.
[167] In an implementation of this disclosure, a context may include data stored in a profile for a child included on a social network such as is described throughout this disclosure. For example, a child's interests such as favorite songs, foods, games, colors, pets, sports, weather, holidays, flavors, friends, books, plays, movies, toys, and so forth may be stored on the child's profile. The behavior of a toy, such as toy device 100a, 100b in FIGS. 1-3, may be based on this context. For example, a child, such as Karen, may request that her toy device 100a, 100b sing her a song, or her toy device 100a, 100b may execute a procedure that triggers the toy device 100a, 100b to select a song to be output (e.g., via the audio output device 112). The toy device 100a, 100b may have access to source data such as a library of songs stored on a remote server, such as server 13 shown in FIG. 4. The source data from this library may include artist and genre data for each song in the library. A context for Karen may include context conditions such as that a song emitted by the toy device 100a, 100b may either be by an artist of a favorite song in her profile or a genre of a favorite song in her profile. Artist data and genre data may be retrieved from the song library and compared to the context condition. The artist data may match the artist data contained in the context condition and thus satisfy the condition. Satisfaction of the artist context condition may be coupled to the toy behavior of performing the song. As a result, an instruction may be transmitted to Karen's toy device to emit audio content of the song. Similar procedures may be executed when determining a story to be told by the toy device 100a, 100b. For example, the context conditions may include author, genre, and length.
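The song-selection example above may be sketched as a simple condition match over a song library. The library entries, profile values, and field names below are hypothetical placeholders.

```python
# Illustrative only: match songs from a library against context
# conditions drawn from a child's profile (favorite artist OR genre).
# Titles, artists, and genres are invented for the sketch.

library = [
    {"title": "Moon Tune", "artist": "A. Star", "genre": "lullaby"},
    {"title": "Jump Around", "artist": "B. Hop", "genre": "dance"},
]

# Context condition derived from the profile: favorite artist or genre.
profile_condition = {"artist": "A. Star", "genre": "folk"}

def satisfies_context(song, condition):
    # The condition is met if EITHER the artist or the genre matches.
    return (song["artist"] == condition["artist"]
            or song["genre"] == condition["genre"])

matches = [s["title"] for s in library
           if satisfies_context(s, profile_condition)]
```

Any matching title would then be coupled to the behavior of performing the song. The same predicate shape (author, genre, length) could serve for story selection.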
[168] In another example, a toy behavior of telling a child about an event may be coupled to the context of an event indicated in another person's profile on a social network. Such a social network may be a "toy-to-toy" social network where association with a toy device 100a, 100b may be needed for participation. For example, the toy behavior of reminding the child of her mom's birthday may be coupled to the context of her mom's birthday. Context conditions may include the current date being the date of her mom's birthday. Source data may include the current date. The current date may be received, and it may match the date of her mom's birthday. The condition may be satisfied and the toy device may exhibit the behavior of emitting the statement "It's your mommy's birthday today! Don't forget to wish her a happy birthday!"
[169] In an implementation of this disclosure, a context may include time, date, and seasonal data. For example, a toy behavior of Karen's toy device emitting audio content associated with cold weather may be coupled to the context of a winter season. A context condition for the winter season context may be any month between November and March. The source data may be a calendar component implemented on a device such as toy device 100a, 100b, user device 10, 11, or server 13. Source data may be retrieved indicating the month is February. The source data may be compared to the context condition, and the condition may be satisfied because February is included in the months between November and March. As a result, an instruction may be transmitted to Karen's toy device 100a, 100b to emit audio content associated with cold weather. For example, Karen's toy device 100a, 100b may emit sounds of shivering or periodically exclaim "Brrrrr!" Similar procedures may be executed when determining recommendations to be told by the toy device 100a, 100b based on the season. For example, a toy device 100a, 100b may recommend wearing sunscreen or a hat during the summer months.
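The winter-season condition above amounts to a set-membership test on the month. A minimal sketch, assuming the month arrives as an integer from a calendar component:

```python
# Sketch of the winter-season context condition: the month must fall
# between November and March, inclusive. The month value would come
# from a calendar component; here it is passed in directly.

WINTER_MONTHS = {11, 12, 1, 2, 3}  # November through March

def winter_condition_satisfied(month):
    return month in WINTER_MONTHS

# February (month 2) satisfies the condition, so the cold-weather
# behavior ("Brrrrr!") would be triggered.
is_winter = winter_condition_satisfied(2)
```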
[170] Similar procedures may be executed when selecting a tone of speech for Karen's toy device 100a, 100b. For example, the toy device 100a, 100b may have a back-story that includes its "birthday". A toy behavior of having a happy or cheery tone of voice may be coupled to the context of being the toy device's birthday. Context conditions may include the date of birth for the toy device. Source data may include the current date. Thus, when the source data matches the toy device's date of birth, the toy device may exhibit a cheery tone. For example, the pitch of Karen's toy device's voice may increase or more enthusiastic language may be emitted. For example, the toy device may use terms such as "fantastic" in place of "good" or "tremendous" in place of "a lot."
[171] In another example, the toy device 100a, 100b may be a turkey. A toy behavior of sounding afraid may be coupled to the context of being near the Thanksgiving holiday. Context conditions may include a date before the fourth Thursday in the month of November. Thus, when source data of the current date matches the context condition of being before the fourth Thursday in November, the toy device may exhibit fearful behavior. For example, the toy device may speak in a soft tone or periodically state "I'm afraid!" Similar procedures may be executed related to other holidays such as Valentine's Day. For example, if the context condition of being February 14th is satisfied, then the toy device may periodically emit "Happy Valentine's Day!"
[172] In an implementation of this disclosure, a context may include a weather prediction. For example, a toy behavior of reminding the child to pack an umbrella may be coupled to the context of a rainy day. A context condition may include a weather forecast that predicts over a 50% chance of rain. The source data may be weather forecast data obtained from a weather site on the Internet. If the retrieved weather forecast includes a greater than 50% chance of rain, then the context condition may be satisfied. As a result, the toy device 100a, 100b may state "Good Morning! It's going to rain today, so don't forget to take your umbrella!"
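The rainy-day context reduces to a single threshold test on the forecast value. A hedged sketch, assuming the forecast has already been retrieved as a percentage:

```python
def umbrella_reminder(rain_chance_percent):
    """Sketch of the rainy-day context: trigger the reminder when the
    forecast exceeds a 50% chance of rain. In a real system the value
    would be fetched from a weather service; here it is a plain number."""
    if rain_chance_percent > 50:
        return ("Good Morning! It's going to rain today, "
                "so don't forget to take your umbrella!")
    return None  # condition not satisfied; no behavior triggered
```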
[173] In an implementation of this disclosure, a context may include a child behavior. For example, a context may include the child falling asleep. Context conditions may include a sound template of the child sleeping. Source data may include sensor data, such as audio content collected from a microphone and processed by audio input device 110 of toy device 100a, 100b shown in FIG. 1. The audio input device may have recorded and stored audio of the child sleeping at a prior time. Characteristic features of the audio data may be extracted by one or more signal processing operations. These features may be stored as a template of the child sleeping. The toy device may be exhibiting the behavior of telling the child a story. While exhibiting this behavior, audio source data may be received through the microphone. This source data may be compared to the stored template of the child sleeping. If there is a match, then the context condition may be satisfied, and an instruction may be transmitted to the toy device to gradually reduce the volume of the storytelling.
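The template comparison above may be sketched as a distance test between feature vectors. Real systems would extract spectral features from the audio; in this illustrative sketch a "template" is simply a short list of energy values, and the tolerance is an invented placeholder.

```python
# Hedged sketch of template matching for the sleeping-child context.
# Feature vectors and tolerance are hypothetical stand-ins for real
# signal-processing output.

def feature_distance(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches_template(features, template, tolerance=0.5):
    """Condition satisfied when live audio features fall within a
    tolerance of the stored sleeping template."""
    return feature_distance(features, template) < tolerance

sleeping_template = [0.1, 0.1, 0.2]   # stored earlier from recorded audio
live_features = [0.12, 0.1, 0.18]     # extracted from the microphone now

action = None
if matches_template(live_features, sleeping_template):
    action = "reduce_story_volume"    # instruction sent to the toy device
```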
[174] In another example, a context may include the child waking up. The context may be coupled to the toy behavior of exhibiting a morning wake-up routine. Context conditions may include motion being detected over the bed of the child. Source data may include sensor data, such as data from a PIR sensor and/or sensor 102a, 102b. A PIR sensor may be located with its sensor face directed at the bed of the child. The sensor may be included in the child's toy device 100a, 100b or remote from the toy device 100a, 100b. Motion data may be received from the PIR sensor. The received motion data may match the context condition of detecting motion over the bed of the child. As a result, the context condition may be satisfied and an instruction may be provided to the toy device 100a, 100b to exhibit the morning wake-up routine. For example, the toy device may gradually increase the volume on a favorite song of the child and may state "Good morning!"
[175] In implementations of this disclosure, instructions for executing some or all of the operations shown in FIG. 14 may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a. The instructions may be executed by a processor such as processor 106 of toy device 100a. In other implementations, some or all of the operations shown in FIG. 14 may be stored as instructions in fixed storage 23 or memory 27 of a remote device such as a device 10 of FIG. 3 or server 13 of FIG. 4. The instructions may be stored in storage of the remote device and be provided via a network interface 104 to a processor 106 of the toy device 100a, 100b or may be executed by one or more processors of the remote device.
[176] FIG. 14 shows an example method according to an implementation of this disclosure. At 1410, source data may be received, and at 1420 the source data may be compared to a context condition associated with a context. At 1430, it may be determined that the source data satisfies the context condition based on the comparison. A particular toy behavior may be coupled to the context. As a result, at 1440 the particular toy behavior may be selected based on the context. At 1450, an instruction may be provided to a toy device to execute the selected behavior.
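The four steps of FIG. 14 may be sketched as a generic pipeline in which contexts are modeled as condition predicates coupled to behavior names. The specific conditions and behavior strings below are illustrative placeholders.

```python
# Illustrative pipeline for FIG. 14: receive source data (1410),
# compare to context conditions (1420), determine satisfaction (1430),
# select the coupled behavior (1440). Step 1450 would transmit the
# instruction to the toy device.

contexts = [
    # (condition predicate, coupled toy behavior)
    (lambda data: data.get("month") in {11, 12, 1, 2, 3}, "act_cold"),
    (lambda data: data.get("rain_chance", 0) > 50, "remind_umbrella"),
]

def select_behavior(source_data):
    for condition, coupled_behavior in contexts:
        if condition(source_data):
            return coupled_behavior
    return None  # no context condition satisfied

behavior = select_behavior({"month": 2})
```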
[177] In implementations of this disclosure, a toy behavior may be based on other toy devices in its proximity. For example, a first toy device may detect a second toy device based on received sensor data such as data received over a network in accordance with a near field communication (NFC) protocol. Upon detection, joint behaviors for both toys may be triggered. For example, if the first toy device is singing a song, an identifier of the song and track location data may be provided to the second toy device. In response to receiving the identifier and track location data, the second toy device may also sing the song. The received sensor data may also include an identifier of the second toy device. The identifier may be used to determine past interactions between the first toy device and the second toy device. A behavior of the first toy device may be selected based on the past interaction data. For example, a game played with the second toy device in the past may be suggested.
[178] A proximity may be defined to be various distances, including a maximum distance of a wireless communications protocol. A first toy may detect when it is in proximity to a second toy based on various sensor data. For example, each toy device may have wireless radios configured to transmit and receive in accordance with an NFC protocol. NFC transmissions are generally inoperable over distances greater than 20 centimeters. Therefore, when a connection is established between the two toy devices, it may be determined that both toy devices are within 20 centimeters of each other.
[179] In another example, a first toy device and a second toy device may each have wireless radios configured to transmit and receive in accordance with a Bluetooth protocol. Certain classes of the Bluetooth™ protocol may be inoperable over 1 meter. Therefore, when a connection is established between the two toy devices, it may be determined that both toy devices are within 1 meter of each other.
[180] In another example, a first toy device and a second toy device may each be connected to the same wireless Wi-Fi router. The first toy device may be in communication with the router and may retrieve a list of devices connected to the router. A query of the list may return an identifier of the second toy device. The wireless router may have a maximum range of 46 meters. Therefore, when identifiers of the first toy device and the second toy device are both found on the list of devices connected to the router, it may be determined that both toy devices are within 92 meters of each other.
[181] In another example, a first toy device and a second toy device may each have a global positioning system (GPS) sensor. Each toy device may receive location data from its GPS sensor and transmit the location data over a Wi-Fi network. The first toy device may receive the location data of the second toy device over the Wi-Fi network. The first toy device may compare its location data to the location data received from the second toy device and calculate a distance between the two devices. This distance may be compared to a threshold value that is defined to be the proximity value, such as 10 meters. If the distance is less than the threshold value, then it may be determined that the second toy device is in proximity to the first toy device.
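The GPS proximity check may be sketched as follows. A production implementation would likely use a great-circle formula; this flat-earth approximation is adequate for the short distances involved, and the coordinates and threshold are hypothetical.

```python
# Sketch of the GPS proximity comparison: compute the distance between
# two (latitude, longitude) pairs and compare it to a threshold.
import math

METERS_PER_DEGREE = 111_320  # approximate meters per degree of latitude

def distance_m(loc_a, loc_b):
    dlat = (loc_a[0] - loc_b[0]) * METERS_PER_DEGREE
    # Longitude degrees shrink with latitude, hence the cosine factor.
    dlon = (loc_a[1] - loc_b[1]) * METERS_PER_DEGREE * math.cos(
        math.radians(loc_a[0]))
    return math.hypot(dlat, dlon)

def in_proximity(loc_a, loc_b, threshold_m=10):
    """Satisfied when the computed distance is below the proximity value."""
    return distance_m(loc_a, loc_b) < threshold_m

# Two toys roughly 5.5 m apart in latitude satisfy a 10 m threshold.
near = in_proximity((40.0, -74.0), (40.00005, -74.0))
```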
[182] In an implementation of this disclosure, a toy behavior for a first toy may be triggered when a second toy is brought within its proximity. For example, a first toy device may detect that it is in proximity to a second toy device in accordance with techniques such as those discussed above. Upon detection of proximity, the first toy device may present an option to a child such as "I have a friend nearby, should we play together?" The child may respond affirmatively, such as by saying "yes." Speech recognition procedures implemented with the first toy device may recognize the child's response and, as a result, generate options for playing together. For example, the first toy device may state "Great, what would you like to do? We can sing songs, tell stories, or perform plays." The child may respond: "song." The child's response may be recognized and a list of song options may be presented. In a similar way, a song may be chosen. The first toy device may then provide an identifier of the song to the second toy device, or the second toy device may recognize the child's selections. The first toy device and second toy device may then begin exhibiting the behavior of singing the selected song.
[183] In another example, the first toy device may present song options such as those above, but may also present role or part options. For example, the child may be presented with the option of choosing which toy device sings the verse, which toy device sings the chorus, and which part of the song is sung in harmony. These options may be beneficial because different toy devices may have different pitches to their voices.

[184] In implementations of this disclosure, a toy behavior may be triggered by the history between two toys brought into proximity. For example, a first toy device may detect that it is in proximity to a second toy device in accordance with techniques such as those discussed above. An identifier of the second toy device may be provided to the first toy device, such as by transmitting the identifier over the local Wi-Fi network. A history of toy interactions may be maintained by the first toy device. For example, a log of the behaviors exhibited by the first toy device and the identifiers of other toy devices that interacted with the first toy device may be stored on a remote server in communication with the first toy device. This history may indicate that the first toy device and the second toy device have played games together in the past. For example, the first toy device may have functioned as a game token against the second toy device in the game "Candy Country." A record of the state of the game of Candy Country, such as how many points each player had, which questions had been answered, and where on the board each toy device was located, may also be kept in the log. The game may not have been finished.
[185] The first toy device may query its log with the identifier of the second toy device. This query may return an indicator of the unfinished game of Candy Country. As a result, the first toy device may exhibit the behavior of prompting the child about the unfinished game. The child may respond affirmatively. The second child, who is linked to the second toy device, may similarly be queried either by the second toy device or the first toy device. If the second child also responds affirmatively, then the game state may be loaded from the log and the children may begin to play.
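The log query above may be sketched as a lookup keyed on the peer toy's identifier. The identifiers, record fields, and game state below are invented for illustration.

```python
# Illustrative interaction-history lookup: the log maps a peer toy's
# identifier to records of past interactions, including any unfinished
# game state that could be resumed.

interaction_log = {
    "toy-b": [
        {"activity": "Candy Country", "finished": False,
         "state": {"points": {"toy-a": 3, "toy-b": 5}, "square": 12}},
    ],
}

def unfinished_games(peer_id):
    """Query the log with the detected peer's identifier and return
    any unfinished games that could be offered for resumption."""
    return [rec for rec in interaction_log.get(peer_id, [])
            if not rec["finished"]]

resumable = unfinished_games("toy-b")  # prompts the child to resume
```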
[186] In some implementations, a geofence can be established based on the location of the toy. The geofence encloses a geographical area, such as a zone around a home location for the toy. The implementation can detect when the toy moves outside the geofence and trigger an alert to be sent to a third party or a behavior in the toy. For example, a text message or call can be made to a parent when the system determines that the toy has left the geofenced area. Likewise, the speaker in the toy may play a message in the toy's or a parent's voice to the effect of "Take me home," or "Go to your room."
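A circular geofence of this kind may be sketched as a radius check around the home location. The home coordinates, fence radius, and alert actions below are hypothetical, and the distance helper is a rough flat-earth approximation.

```python
# Minimal geofence sketch: the fence is a radius around the toy's home
# location; leaving it triggers alert actions such as texting a parent
# and playing a "Take me home" message.
import math

HOME = (40.0, -74.0)        # hypothetical home location (lat, lon)
FENCE_RADIUS_M = 100        # hypothetical fence radius
M_PER_DEG = 111_320         # approximate meters per degree of latitude

def outside_geofence(location):
    dlat = (location[0] - HOME[0]) * M_PER_DEG
    dlon = (location[1] - HOME[1]) * M_PER_DEG * math.cos(
        math.radians(HOME[0]))
    return math.hypot(dlat, dlon) > FENCE_RADIUS_M

def check_location(location):
    # Returns the alert actions to trigger, if any.
    if outside_geofence(location):
        return ["text_parent", "play_message:Take me home"]
    return []
```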
[187] In implementations of this disclosure, instructions for executing some or all operations shown in FIGS. 15-16 may be stored in storage such as memory 108 or fixed storage 116 of toy device 100a, 100b. The instructions may be executed by a processor such as processor 106 of toy device 100a, 100b. In other implementations, some or all operations shown in FIGS. 15-16 may be stored as instructions in fixed storage 23 or memory 27 of a remote device such as a device 10 of FIG. 3 or server 13 of FIG. 4. The instructions may be stored in storage of the remote device and be provided via a network interface to a processor of the toy device or may be executed by one or more processors of the remote device.
[188] FIG. 15 shows an example method according to an implementation of this disclosure. For example, a first toy device may detect a proximity to a second toy device at 1510. At 1520, the first toy device may provide an indicator of the second toy device to a remote device, such as a remote server. The remote server may select a behavior based on the second toy device. The first toy device may receive an instruction to execute the selected behavior at 1530. At 1540, the first toy device may execute the selected behavior.
[189] FIG. 16 shows an example method according to an implementation of this disclosure. For example, at 1610 a remote server may receive an indicator of the second toy device. The remote server may compare the received indicator to an interaction history of the first toy device at 1620. At 1630, the remote server may select a behavior based on the interaction history and the second toy device. At 1640, the remote server may provide an instruction to execute the selected behavior to the first toy device.
[190] In the implementations of this disclosure, voice recognition and language processing procedures may provide an interface between a child, such as Karen, and her toy device. For example, profile information of Karen's friend Mary on Karen's social network may include Mary's favorite color, green. Karen may ask her toy device, "What is Mary's favorite color?", and via voice recognition and language processing techniques, the toy device 100a, 100b may recognize Karen's query. The toy device 100a, 100b may access Mary's profile, determine her favorite color, and tell Karen the result: "Mary's favorite color is green!" In another example, Karen may name her toy device 100a, 100b. For example, Karen may state "your name is Roxy." The toy device 100a, 100b may recognize that it is being named through voice recognition and language processing techniques, and when "Roxy" is uttered, the toy device 100a, 100b may recognize that it is being addressed. Likewise, the toy device 100a, 100b may ask Karen what her name is. Karen may provide a name that she wants to be called, such as "Karen." In future interactions, the toy device 100a, 100b may verbally address Karen by name when
communicating with her. Message recording may also be initiated via verbal instruction. For example, Karen may state "Toymail mom" and her toy device may recognize the instruction and begin recording a message to send to Karen's mom. Gifts such as songs, stories, sound effects, etc. may also be sent via verbal instruction. For example, Karen may state "Send a new song to Mary." Karen's toy device may recognize Karen's instruction and send a gift of whatever is stated after "send" to whoever is stated after "to." Activities such as games, stories, and songs may also be accessed via voice recognition and language processing techniques. For example, Karen may ask her toy device 100a, 100b to read her a story, and her toy device 100a, 100b may begin speaking the words of a story Karen requests. Karen's toy device 100a, 100b may increase its knowledge of Karen by asking Karen questions. For example, Karen's toy device 100a, 100b may ask her what her favorite subjects are. The toy device 100a, 100b may recognize Karen's answers and use such information to provide suggestions of future activities. Karen may audibly play games with her toy device. For example, Karen may sing a song and ask her toy device 100a, 100b to guess which song it is. Karen's toy device 100a, 100b may recognize the lyrics of the song and access information sources over networks such as the Internet in order to determine the song from which the lyrics are derived. In another example, the toy device 100a, 100b may help Karen create a story. For example, the toy device 100a, 100b may tell a first part of the story to get Karen started, and Karen may be asked to generate the second part of the story. The toy device 100a, 100b may record the entire story and provide a transcript to Karen or an authorized user of an application in communication with the toy device 100a, 100b.
[191] In implementations of the disclosure, such as those discussed above, the toy device 100a, 100b may operate in various modes. For example, in "offline mode" the toy device may not be connected to a network such as a local wireless network. In this mode, the toy device's wireless radio may be deactivated. In an "online mode" the toy device may be connected to a network and may conserve battery life by only checking for messages from other devices periodically, such as every 10 minutes. In "walkie-talkie mode" the toy device 100a, 100b may communicate with other devices in near real time. For example, Karen may carry on a conversation with her mother's mobile device 10, 11 or another toy device 100a, 100b in a manner similar to a conventional telephone conversation via the toy device 100a, 100b. Conversations with multiple devices such as in a conventional conference call may also be conducted in "walkie-talkie mode." In "group messaging mode" messages may be transmitted to multiple recipient devices at one time. Such messages may be voice messages, images, symbols, or other messages capable of being captured and communicated via the toy device. For example, lights on the toy device may be illuminated or appendages of the toy device may be articulated when a message is sent or received. Such actions may communicate the content of the message itself, such as an arm of a toy device waving goodbye when a message intended to convey "goodbye" is received by the toy device.
[192] In the implementations discussed above in connection with FIGS. 1-16, machine learning may be enabled on the toy device 100a, 100b and/or the device 10, 11. For example, the toy device 100a, 100b may use the sensors 102a, 102b to determine usage patterns of the toy device by the user, including the time of day used, the duration used, how many messages are recorded and/or transmitted, and the like. With machine learning, the toy device 100a, 100b may encourage a user to record and send a message to one or more contacts if they have not communicated with the contacts within a predetermined period of time. In another example, the toy device may encourage a learning activity when the user has not participated in a learning activity in a predetermined amount of time. The usage patterns of the toy device 100a, 100b may be transmitted to the device 10, 11, which may be used by the parent and/or guardian to determine when the child is using the toy device, and may prompt them to record and send a message during the child's period of use.
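One learned-usage rule described above, encouraging a message when a contact has not been communicated with recently, may be sketched as follows. The reminder window, contact names, and dates are illustrative placeholders.

```python
# Hedged sketch of a usage-pattern nudge: find contacts not messaged
# within a predetermined reminder window, so the toy can encourage
# recording a message to them.
from datetime import datetime, timedelta

REMIND_AFTER = timedelta(days=7)  # hypothetical "predetermined period"

def contacts_to_nudge(last_message_times, now):
    """Return contacts whose last message is older than the window."""
    return [contact for contact, last in last_message_times.items()
            if now - last > REMIND_AFTER]

now = datetime(2017, 3, 15)
history = {"Grandma": datetime(2017, 3, 1),   # 14 days ago -> nudge
           "Mom": datetime(2017, 3, 14)}      # 1 day ago -> no nudge
stale = contacts_to_nudge(history, now)
```

A real system would derive the history from sensor-observed usage patterns rather than a hand-built dictionary.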
[193] In some of the implementations discussed above in connection with FIGS. 1-16, there may be two-way communications between toy devices 100a, 100b and/or devices 10, 11. For example, a parent may send an audible message to one or more children having toy devices 100a, 100b. The one or more children may respond to the received messages by recording and/or transmitting messages back to the parent's device 10, 11 within a predetermined period of time. In some implementations, the two-way communications may be between two or more toy devices 100a, 100b.
[194] In some implementations, toy device 100a, 100b may record audio or video of a user's activities and generate a summary of the user's activities over a period of time. For example, the child may play a game with friends in the presence of toy device 100a, 100b. Toy device 100a, 100b may record the conversations the child has while playing the game. Toy device 100a, 100b may transmit the recording to a remote computing device such as one or more servers. The remote computing device may execute procedures to perform natural language processing or related language processing techniques on the audio recordings. These techniques may extract meaningful content from the audio recordings and automatically generate a summary of the content, for example in written or audio form. This summary may then be sent to a device associated with the toy device 100a, 100b such as a parent's mobile device. In this way the parent may monitor or periodically receive updates of the child's activities.
[195] In some implementations, one or more computing devices remote from toy device 100a, 100b may determine content to suggest and/or provide to toy device 100a, 100b based on natural language processing of audio recorded by toy device 100a, 100b. For example, toy device 100a, 100b may be a plush toy that a user such as a child carries with her to school. While at school toy device 100a, 100b may record audio such as the child's discussion about a new movie with another child, a teacher's announcement of the new lesson unit they will be covering this month, and an announcement of a guest speaker coming to class. One or more computing devices remote from toy device 100a, 100b may perform natural language processing or similar language processing techniques on the recorded content and extract information such as the enthusiasm for the movie discussed, the new course unit, and the guest speaker. The remote computing device may determine new content related to this extracted information that can be recommended to the child or the child's parents, such as a new song about characters in the movie, an audio book about a subject in the new unit, and biographical information about the guest speaker. These recommendations may be sent to the child's parent's mobile device or may be provided directly to toy device 100a, 100b. In some implementations the new content may be provided directly to toy device 100a, 100b and presented to the child. In some implementations the parent may select the recommendation on the parent's device and execute a transaction. In response to the transaction the new content can be provided to toy device 100a, 100b or otherwise accessed on a device linked to an account associated with toy device 100a, 100b, such as a smart television or projection system, tablet, laptop, or other content presentation device. 
[196] In another example, toy device 100a, 100b may be a wearable device such as a watch, book bag, or have any other form factor discussed in this disclosure. Audio content may be recorded and aggregated across toy devices belonging to users of multiple ages. The audio content may be analyzed by natural language processing techniques to, for example, determine interest trends among different ages of users. For example, a certain type of shoe or other clothing may be determined to be desirable by a particular age range of users, but not relevant to a younger age range of users. Recommendations related to the desirable shoe type may be targeted at users within the older range and not at users in the younger range. In another example, users approaching the older age range of users, such as those users within a year of the older age range, may be targeted for recommendations of the desirable shoe type. In addition to or instead of recommendations for wearable items, other age-specific items may be determined such as games, songs, movies, or similar content that may be recommended or delivered to toy device 100a, 100b or otherwise accessed on a device linked to an account associated with toy device 100a, 100b, such as a smart television or projection system, tablet, laptop, or other content presentation device. Aggregated information collected from one or more toy devices 100a, 100b may be anonymized or otherwise treated in a manner sufficient to maintain the privacy of the users.
[197] More generally, various implementations of the presently disclosed subject matter may include or be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. Implementations also may be embodied in the form of a computer program product having computer program code containing instructions embodied in non- transitory and/or tangible media, such as floppy diskettes, CD-ROMs, hard drives, USB
(universal serial bus) drives, or any other machine readable storage medium, such that when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing implementations of the disclosed subject matter. Implementations also may be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, such that when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing implementations of the disclosed subject matter. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits. [198] In some configurations, a set of computer-readable instructions stored on a computer-readable storage medium may be implemented by a general-purpose processor, which may transform the general-purpose processor or a device containing the general-purpose processor into a special-purpose device configured to implement or carry out the instructions. The implementations may use hardware that may include a processor, such as a general-purpose microprocessor and/or an Application Specific Integrated Circuit (ASIC) that embodies all or part of the techniques according to implementations of the disclosed subject matter in hardware and/or firmware. The processor may be coupled to memory, such as RAM, ROM, flash memory, a hard disk or any other device capable of storing electronic information. The memory may store instructions adapted to be executed by the processor to perform the techniques according to implementations of the disclosed subject matter.
[199] The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit implementations of the disclosed subject matter to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to explain the principles of implementations of the disclosed subject matter and their practical applications, to thereby enable others skilled in the art to utilize those implementations as well as various
implementations with various modifications as may be suited to the particular use contemplated.

Claims

1. A method comprising:
receiving, by a first computing device from a first toy device that is remote from the first computing device and that is associated with a first user, a first sensor data detected by a first sensor in communication with the first toy device;
comparing, by the first computing device, at least the first sensor data to a plurality of conditions;
determining, by the first computing device, that a first condition of the plurality of conditions is satisfied based on the comparison of at least the first sensor data to the plurality of conditions;
in response to the determination that the first condition is satisfied, selecting, by the first computing device, a first contact from among a plurality of contacts based on an association of the first contact with the first condition, the plurality of contacts each approved by a second user for the first user;
providing, by the first computing device to a second computing device that is remote from the first computing device and that is associated with the first contact, an indicator of a first status of the first toy device, the first status associated with the satisfied first condition;
receiving, by the first computing device from the second computing device, an indicator of a first toy behavior; and
providing, by the first computing device to the first toy device, an instruction to perform the first toy behavior.
2. The method of claim 1, wherein:
the first user is a child; and
the second user is selected from the group consisting of: a parent of the child, a guardian of the child, and a non-parent and non-guardian who has legal responsibility for the child.
3. The method of claim 1, wherein:
the first toy device is animal shaped and houses the first sensor;
the first computing device comprises one or more servers hosting an application that manages communication among devices comprising the first toy device and the second computing device; and
the second computing device comprises a mobile device associated with the second user.
4. The method of claim 1, wherein the first sensor data is selected from the group consisting of: audio data, accelerometer data, motion data, proximity data, smoke data, carbon monoxide data, temperature data, time data, orientation data, location data, pressure data, button selection data, light data, infrared data, and moisture data.
5. The method of claim 1, wherein:
the first sensor data comprises accelerometer data;
the determination that the first condition is satisfied comprises determining the accelerometer data exceeds a threshold acceleration value;
the first contact is associated with the second user;
the indicator of the first status comprises an indicator that the first user is playing with the first toy; and
the first toy behavior is selected by the second user on the second computing device.
6. The method of claim 1, further comprising: receiving, by the first computing device, a second sensor data comprising an indicator that a recording button had been selected on the first toy device, wherein:
the first sensor data comprises an audio communication,
the determination that the first condition is satisfied comprises determining the indicator of the recording button has been received and the audio communication has been received,
the first contact is associated with the second user, and
the indicator of the first status comprises an indicator that the first user has sent a voice message to the first contact.
7. The method of claim 1, further comprising: receiving, by the first computing device, a second sensor data comprising audio data, wherein:
the first sensor data comprises accelerometer data,
the determination that the first condition is satisfied comprises determining the audio data corresponds to an audio template associated with the first user and a value of the accelerometer data is below a threshold value, and
the indicator of the first status comprises an indicator that the first user is in proximity to the first toy device and is not interacting with the first toy device.
8. The method of claim 1, wherein:
the determination that the first condition is satisfied comprises determining the first sensor data exceeds a threshold value associated with a usage level of the first toy device;
the indicator of the first status comprises an indicator that the first user has reached the usage level; and
the first toy behavior comprises a toy behavior that was unavailable to the first toy device before reaching the usage level.
9. The method of claim 1, wherein:
the first sensor data comprises a proximity data associated with a second toy device;
the determination that the first condition is satisfied comprises determining the proximity data is not associated with a toy device that is included in a toy-to-toy social network of the first toy device;
the first contact is associated with the second user;
the indicator of the first status comprises an indicator that an unapproved toy device is in proximity to the first toy device; and
the first toy behavior is selected by the second user on the second computing device and indicates that the second toy device has been approved to join the toy-to-toy social network.
10. The method of claim 1, wherein the plurality of contacts are included in a first profile that is associated with the first toy device and that comprises: (i) a contact that is associated with the second user and (ii) a contact that is associated with a third user and a second toy device that is distinct from the first toy device.
11. The method of claim 1, wherein:
the plurality of contacts are included in a first profile that is associated with the first toy device;
the plurality of contacts comprises a second contact that is associated with a third user and a second toy device;
the second toy device is distinct from the first toy device;
the second contact is approved by the second user;
the second toy device is associated with a second profile;
the second profile is accessible by the first toy device over a toy-to-toy social network;
the first profile is accessible by the second toy device over the toy-to-toy social network; and
the first profile is not accessible over the toy-to-toy social network by a third toy device that is not approved by the second user.
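For illustration only (the function name `profile_accessible` and the dictionary layout are hypothetical, not taken from the claims), the access rule of claim 11 reduces to a membership check against the set of toy devices whose contacts the supervising second user has approved:

```python
def profile_accessible(requesting_toy_id, profile):
    """Claim 11's access rule: a profile on the toy-to-toy social network is
    visible only to toy devices whose contacts the supervising user approved."""
    return requesting_toy_id in profile["approved_toys"]

# First profile: toy-2's contact was approved by the second user; toy-3's was not.
first_profile = {"owner": "toy-1", "approved_toys": {"toy-2"}}

profile_accessible("toy-2", first_profile)  # True: approved second toy device
profile_accessible("toy-3", first_profile)  # False: unapproved third toy device
```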
12. The method of claim 1, wherein the association of the first contact with the first condition comprises: (i) the first contact being coupled to the first status and (ii) the satisfied first condition being included in a set of one or more satisfied conditions that define the first status.
13. The method of claim 1, wherein the first toy behavior comprises a message selected on the second computing device by a user associated with the first contact.
14. The method of claim 1, wherein the first toy behavior comprises a message recorded by a user associated with the first contact.
15. The method of claim 1, wherein the first toy behavior comprises emitting an animal sound.
16. The method of claim 1, wherein the first toy behavior is selected from the group consisting of: singing a song, reminding the first user of an event, recommending a child behavior, expressing a selected tone of speech, and expressing a selected emotion.
17. The method of claim 1, wherein the instruction to perform the first toy behavior comprises an instruction for the first toy device to change the waveform of an audio
communication based on data from the first sensor.
18. The method of claim 1, further comprising:
comparing, by the first computing device, at least a first context data to a plurality of context conditions;
determining, by the first computing device, that a first context condition of the plurality of context conditions is satisfied based on the comparison of at least the first context data to the plurality of context conditions; and
in response to the determination that the first context condition is satisfied, providing, by the first computing device to the first toy device, an instruction to execute a second toy behavior that is associated with the satisfied first context condition,
wherein the first context data is selected from the group consisting of: the first status of the first toy device, data from a profile that is associated with the first toy device and that is on a toy-to-toy social network, time data, date data, seasonal data, weather data, and behavioral data of the first user.
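As an illustrative sketch of claim 18 (the condition names, behavior names, and context keys below are hypothetical examples, not specified by the claims), context conditions are evaluated alongside sensor-driven conditions, and a satisfied context condition triggers a second toy behavior:

```python
# Hypothetical context conditions: each maps a name to a predicate over
# context data such as time, date, season, or weather.
CONTEXT_CONDITIONS = {
    "bedtime": lambda ctx: ctx.get("hour", 0) >= 20,
    "snow_day": lambda ctx: ctx.get("weather") == "snow",
}

# Each context condition is associated with a second toy behavior.
CONTEXT_BEHAVIORS = {
    "bedtime": "sing_lullaby",
    "snow_day": "suggest_building_a_snowman",
}

def select_context_behavior(context):
    """Return the behavior for the first satisfied context condition, if any."""
    for name, predicate in CONTEXT_CONDITIONS.items():
        if predicate(context):
            return CONTEXT_BEHAVIORS[name]
    return None

select_context_behavior({"hour": 21})          # -> "sing_lullaby"
select_context_behavior({"weather": "sunny"})  # -> None
```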
19. A non-transitory, computer readable medium storing instructions that, when executed by a processor, cause the processor to perform operations comprising:
receiving, from a first computing device by a second computing device that is remote from the first computing device, an indicator of a status of a toy device that is associated with sensor data collected by a sensor in communication with the toy device, the toy device associated with a first user;
presenting, on a display of the second computing device, a representation of the status of the toy device;
receiving, by the second computing device, a selection of a toy behavior for the toy device by a first contact of a plurality of contacts, the plurality of contacts each approved by a second user for the first user; and
in response to the selection of the toy behavior by the first contact, providing, by the second computing device to the first computing device, an instruction for the toy device to perform the selected toy behavior.
20. A toy device comprising:
a sensor;
a network interface;
a processor; and
a non-transitory, computer readable medium in communication with the sensor, the network interface, and the processor and storing instructions that, when executed by the processor, cause the processor to perform operations comprising:
collecting, by the sensor, a sensor data that satisfies a condition associated with a status of the toy device, the toy device associated with a first user,
providing, by the network interface, the sensor data to a first computing device that is remote from the toy device, the first computing device executing an application approved by a second user for communication with the toy device,
receiving, by the network interface from the first computing device, an instruction to perform a toy behavior selected on a second computing device by a first contact of a plurality of contacts, the plurality of contacts each approved by the second user for the first user; and
executing, by the processor, instructions to perform the toy behavior.
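For illustration only (the class `ToyDevice`, its method names, and the use of a queue to stand in for the network interface are hypothetical, not part of the claims), the device-side loop of claim 20 — collect sensor data, report it to the remote first computing device, and execute behaviors received back — might look like:

```python
import queue

class ToyDevice:
    """Hypothetical toy device for claim 20: reports sensor data over a
    network interface and executes toy behaviors it receives in return."""

    def __init__(self, network):
        self.network = network  # stands in for the network interface
        self.performed = []     # record of behaviors the toy has executed

    def report(self, sensor_data):
        # Provide the collected sensor data to the remote first computing device.
        self.network.put(("sensor_data", sensor_data))

    def on_instruction(self, behavior):
        # Execute an instruction to perform a toy behavior (e.g. play a message).
        self.performed.append(behavior)

network = queue.Queue()
toy = ToyDevice(network)
toy.report({"button": "record", "audio": b"audio-bytes"})
toy.on_instruction("play_parent_message")
```

In a real device the queue would be replaced by a Wi-Fi or similar network interface, and `on_instruction` would drive the toy's speaker, lights, or motors.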
PCT/US2017/020899 2016-03-04 2017-03-06 Interactive toy device, and systems and methods of communication between the same and network devices WO2017152167A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662303957P 2016-03-04 2016-03-04
US62/303,957 2016-03-04

Publications (1)

Publication Number Publication Date
WO2017152167A1 true WO2017152167A1 (en) 2017-09-08

Family

ID=59743266

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/020899 WO2017152167A1 (en) 2016-03-04 2017-03-06 Interactive toy device, and systems and methods of communication between the same and network devices

Country Status (1)

Country Link
WO (1) WO2017152167A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11825004B1 (en) 2023-01-04 2023-11-21 Mattel, Inc. Communication device for children

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120295510A1 (en) * 2011-05-17 2012-11-22 Thomas Boeckle Doll Companion Integrating Child Self-Directed Execution of Applications with Cell Phone Communication, Education, Entertainment, Alert and Monitoring Systems
US20140187292A1 (en) * 2013-01-02 2014-07-03 Alice Poole Child cell phone apparatus and method
WO2015042376A1 (en) * 2013-09-19 2015-03-26 Toymail Co., Llc Interactive toy
US20150133025A1 (en) * 2013-11-11 2015-05-14 Mera Software Services, Inc. Interactive toy plaything having wireless communication of interaction-related information with remote entities
US20150251102A1 (en) * 2014-03-07 2015-09-10 Mooredoll Inc. Method and device for controlling doll with app and operating the interactive doll
US20150360139A1 (en) * 2014-06-16 2015-12-17 Krissa Watry Interactive cloud-based toy



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17760977

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 29.01.19

122 Ep: pct application non-entry in european phase

Ref document number: 17760977

Country of ref document: EP

Kind code of ref document: A1