WO2024152007A1 - Language and literacy learning system and method


Info

Publication number
WO2024152007A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
interaction
units
learning system
transmitters
Prior art date
Application number
PCT/US2024/011485
Other languages
French (fr)
Inventor
Kristy STARK
Dale Grover
Brian Flaherty
Original Assignee
Mindsemerge, Inc.
Priority date
Filing date
Publication date
Application filed by Mindsemerge, Inc. filed Critical Mindsemerge, Inc.
Publication of WO2024152007A1 publication Critical patent/WO2024152007A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00: Teaching not covered by other main groups of this subclass
    • G09B 19/04: Speaking
    • G09B 17/00: Teaching reading

Definitions

  • the present teachings generally relate to a phonetic learning device, system, and method.
  • the teachings particularly relate to a system and/or method for teaching a user to learn graphemes (e.g., letters, numbers), learn phonemes (e.g., sounds of letters, numbers), associate phonemes to graphemes (e.g., map phonemes to graphemes), and/or associate graphemes to phonemes.
  • the teachings may find use in teaching a user, such as a child, the alphabetic principle.
  • the teachings may find use in recording user interactions with the device and system.
  • the Early Childhood Opportunity gap is significant for a number of reasons. From a developmental neuroscience perspective, it is critical to appreciate that rapid brain development in the first years of life forms the foundation for all future learning (Shonkoff & Phillips, 2000). As infants engage with their environments from birth, synaptic connections build neural networks for auditory and language development and higher cognitive functions. These interactions promote the creation of brain architecture that can bolster future success in life (Center on the Developing Child, 2011). Early life experience is the foundation upon which linguistic, perceptual, and cognitive development is dependent (Fox, Levitt, & Nelson, 2010, p. 28). Consequently, if children are deprived of the opportunity to listen to and engage with rich language environments during early sensitive and critical periods, it can have a lasting negative impact (Sameroff & Fiese, 2000).
  • EHS: Early Head Start
  • Phonetic learning devices, systems, and methods are widely used by educators as tools to teach the audible sounds associated with a letter or multiple letters combined. This audible sound is known as a phoneme. A letter or combination of letters is known as a grapheme. The association between graphemes and phonemes is fundamental in learning how to read and write an alphabetic language.
  • Some currently available phonetic learning devices are units in the shape of an alphabetic letter. Examples of such learning devices are illustrated in US Patent No. US 5,188,533 and US Publication No. US 2016/0055755, incorporated herein by reference in their entirety. These units may teach the user phonetics by emitting as sound the phoneme associated with the grapheme that the unit embodies.
  • Other available phonetic learning systems include multiple units which represent an alphabetic letter and a central working platform. Such an exemplary learning system may be found in US 2005/0064372, incorporated by reference in its entirety. The individual units in these multiple-unit systems are not capable of producing sounds themselves. The central working platform is the only component of these systems capable of producing the audible phoneme associated with a grapheme or group of graphemes represented on the units. These phonetic learning systems are an improvement over single-unit phonetic learning devices because they enable the user to learn the phoneme associated with multiple graphemes. However, they introduce multiple disadvantages when compared to single-unit phonetic learning devices, the most significant being the requirement of the central working platform.
  • Existing phonetic learning devices and systems lack the ability to record user interactions and assess the user’s progress in developing their phonetic skills. Patterns in the user’s interactions with a phonetic learning device or system can enable analysis of the user’s progress in developing phonetic skills. Understanding a user’s skill level can enable an educator to determine if a more advanced phonetic learning system is appropriate for a skilled user. Alternatively, a user who is demonstrating a skill level below a level typical for their age may be identified for supplemental assistance in learning phonetics.
  • What is needed is a phonetic learning system with multiple letter units which do not require a central working platform to teach the user phonemes associated with both single and multiple graphemes. What is needed is a phonetic learning system which is capable of recording a user’s interactions with the system for later analysis. What is needed is a phonetic learning system which is appropriate for young users who may not be able to press a button or squeeze a unit.
  • the learning system proposed in the present disclosure helps to address the pervasive intergenerational transmission of poverty by offering a robust, evidence-informed language learning experience for children, regardless of the economic status they are assigned prenatally.
  • the system reinforces the child's learning of critical language, cognitive, and literacy skills and reinforces adult learning of critical behaviors to support child development.
  • the learning system may promote warm, responsive, connected dyadic interactions that support healthy relationships, increase academic success, and strengthen opportunities for social mobility.
  • the learning system and method of the present disclosure utilize educational neuroscience and behavioral design to provide an evidence-based, scientifically grounded tool for teaching users language skills, including the alphabetic principle.
  • the learning system and method of the present teachings may teach varying aspects of language and literacy, in addition to the alphabetic principle, including blending, spelling, decoding, and the like.
  • the present disclosure relates to a learning system for use in educating a user comprising: a) one or more learning units indicative of one or more symbols; b) optionally, one or more interaction members configured to interact with the one or more learning units; c) one or more transmitters in the one or more learning units; d) optionally, one or more other transmitters in the one or more interaction members configured to detect the one or more transmitters; e) one or more sensory output elements which output an auditory signal, a tactile signal, and/or a visual signal to an exterior of the one or more learning units, the one or more interaction members, or both to be sensed by the user and which are related to the one or more symbols; wherein the one or more learning units and optionally, the one or more interaction members, are configured to be manipulated by the user and based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or any combination thereof related to the one or more learning units and/or the one or more
  • the present disclosure relates to a method of using the learning system by a user.
  • the present teachings relate to a method of using a learning system for a user to learn and associate one or more phonemes to one or more graphemes, the method including: a) the user physically manipulating one or more learning units and/or one or more interaction members to cause the learning system to be activated; and b) the user physically manipulating the one or more learning units and/or the one or more interaction members such that based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or a combination thereof of the one or more learning units and/or the one or more interaction members, an auditory signal, and optionally, a tactile signal and/or a visual signal, is generated that is transmitted to an exterior of the one or more learning units and/or the one or more interaction members via one or more sensory output elements.
  • the disclosure may provide for a system for teaching a user phonetic skills comprising different learning units representing different symbols (e.g., letters, numbers, operators, etc.), sensors to detect movement of a learning unit, interaction member, or both; proximity to learning units; transmitters for interaction between one or more interaction members, one or more learning units, or both and other learning units and/or interaction members; and/or one or more sensory output elements which generate sound, vibrations, light, the like, or any combination thereof based on movement of the unit by the user and/or interaction with the other components of the system.
  • the sound produced may be the one or more phonemes associated with the grapheme(s) of a single unit or of multiple units in close proximity to each other.
  • FIG. 1 is a plan view of a learning system.
  • FIG. 2 is an interior view of an interaction member.
  • FIG. 3A is a perspective view of a learning unit.
  • FIG. 3B is a front perspective view of a learning unit.
  • FIG. 3C is a rear perspective view of a learning unit.
  • FIG. 4A illustrates the functioning of a learning system.
  • FIG. 4B illustrates the functioning of a learning system.
  • FIG. 4C illustrates the functioning of a learning system.
  • FIG. 4D illustrates the functioning of a learning system.
  • FIG. 5 is a perspective view of a single learning unit.
  • FIG. 6 is a perspective view of an interior of a learning unit.
  • FIG. 7 illustrates wireless interaction between two learning units.
  • FIG. 8 is the process performed by a singular learning unit.
  • FIG. 9 is the process performed by a learning unit when in close proximity to another learning unit.
  • FIG. 10 illustrates a top plan view of a learning system.
  • FIG. 11 illustrates a side plan view of a learning system.
  • FIG. 12A illustrates the functioning of a learning system.
  • FIG. 12B illustrates the functioning of a learning system.
  • FIG. 12C illustrates the functioning of a learning system.
  • FIG. 12D illustrates the functioning of a learning system.
  • the present teachings relate to a learning system.
  • the learning system may function as a phonetic learning device or other learning device.
  • the learning system may function to teach phonetics to users, such as infants or toddlers.
  • the learning system may function to audibly emit one or more phonemes related to one or more graphemes.
  • the learning unit may function to teach a user a grapheme related to a phoneme, a phoneme related to a grapheme, or both.
  • the learning system may function to teach a user mathematical operators, equations, numbers, counting, resulting answers, and/or the like.
  • the learning system may function to teach a user chemical symbols, equations, and the like.
  • the learning system may even function to teach a user a different language.
  • the learning system may audibly emit the phoneme associated with a single learning unit representing a grapheme, or multiple learning units representing multiple graphemes.
  • the learning system may provide one or more sensory outputs (e.g., light, vibrations, etc.) to a user, such as to encourage and reinforce continued use (e.g., play) of the learning system.
  • the learning system may include one or more interaction members, learning units, housings, light sources, speakers, circuit boards, processors, transmitters, sensors, vibrators, power converters, audio amplifiers, power sources, switches, the like, or any combination thereof.
  • the learning system may include or be free of one or more interaction members.
  • the one or more interaction members may function to cooperate with one or more learning units.
  • the one or more interaction members may function to detect and/or identify the grapheme or other symbol and/or shape represented by one or more learning units.
  • the one or more interaction members may function to relay (e.g., audibly relay) the phoneme associated with the grapheme(s) of one or more learning units.
  • the one or more interaction members may be configured to cooperate with one or more learning units.
  • the one or more interaction members may function to pair (e.g., wireless electronic transmission) with one or more learning units.
  • the one or more interaction members, or components thereof may transmit one or more signals to one or more learning units.
  • the one or more interaction members may function to house one or more learning units.
  • the one or more interaction members may be a wand, mouse, remote, tray, box, support, the like, or any combination thereof.
  • the one or more interaction members may be easily graspable by a user.
  • the one or more interaction members may be moveable such as to hover over and/or contact one or more learning units.
  • the one or more interaction members may be configured to remain static while one or more learning units are located thereon and/or therein.
  • the one or more interaction members may include a housing.
  • the one or more interaction members may include one or more electrical components. The electrical components may be located within the housing.
  • the one or more electrical components may include one or more sensors, transmitters, power sources, speakers, amplifiers, microphones, processors, circuit boards, non-transitory storage mediums, switches, power converters, wires, the like, or any combination thereof.
  • the one or more interaction members may include or be free of a graphic user interface (GUI). Being free of a graphic user interface allows the learning system to be used without concern about screen time for infants and toddlers. It is foreseeable the learning system may be entirely free of an interaction member, with the learning units reacting with one another, being initiated directly by a user, and/or the like.
  • the learning system may include one or more learning units.
  • the learning unit may function to physically and/or visually represent one or more graphemes associated with one or more phonemes, represent one or more alphanumeric characters, and/or represent one or more other symbols (e.g., mathematical operators, chemistry symbols).
  • the learning unit may function to audibly emit one or more phonemes associated with one or more graphemes.
  • the learning unit may audibly emit the phoneme associated with a single learning unit representing a grapheme or multiple learning units representing multiple graphemes.
  • an interaction member may function to audibly emit the one or more phonemes associated with the one or more graphemes.
  • a learning unit may also work as an interaction member, in lieu of an interaction member, or both.
  • a learning unit may also emit light or produce vibrations in addition to emitting the audible phoneme. By audibly relaying one or more phonemes and/or otherwise outputting one or more sensory outputs (e.g., light, vibrations, etc.), the learning unit may reinforce learning of a phoneme and grapheme (or other symbols).
  • the learning unit may transmit signals to or receive signals from other learning units, interaction members, or both.
  • the learning unit may contain sensors to detect manipulation by a user.
  • the structure of the learning unit may include a housing.
  • the learning system may include one or more housings.
  • a housing may function to represent a grapheme, display a grapheme, or both.
  • a housing may function to house one or more components of a learning unit, interaction member or both.
  • the housing may function to attract a user to play with and manipulate an interaction member, one or more learning units, or both.
  • the housing may cooperate with or include a housing cover to enclose one or more components within an interaction member, one or more learning units, or both.
  • the housing may be rigid, flexible, or both.
  • the housing may be one piece or comprise multiple pieces.
  • the housing may be opaque, translucent, or a combination thereof.
  • the housing may be manipulated by a user.
  • the housing may include one or more sensors, transmitters, and/or sensory output elements.
  • the housing may have a plurality of holes passing through the housing to allow the emission of sound and/or light.
  • the housing may be a thermoplastic or thermoset material.
  • the housing may be formed by injection molding, blow molding, vacuum forming, polymer casting, CNC machining, 3D printing, milling, jointing, planing, cutting, sawing, drilling, boring, gluing, clamping, veneering, laminating, the like, or any combination thereof.
  • the housing may be formed by one or more techniques suitable for one or more polymers, organic materials, or both.
  • Organic material may include wood, sisal, rattan, cotton, the like, or any combination thereof.
  • the housing may include one or more safety features for being handled by a small child.
  • the safety feature(s) may prevent the housing being a choking hazard, laceration hazard, or both.
  • the housing may be sufficiently large to avoid being a choking hazard.
  • the interaction member may include a housing separate from a housing of one or more learning units.
  • a housing may have an overall width and/or length of about 1.25” or greater, about 1.5” or greater, about 1.75” or greater, or even 2” or greater.
  • a housing may have an overall width and/or length of about 20” or less, about 18” or less, or even about 16” or less.
  • Width may be measured side to side while length may be measured from a proximal end to a distal end.
  • the width and/or length of a housing of an interaction member may be the same as, similar to, or different from that of the housing of a learning unit.
  • the housing may be designed such as to meet the small parts regulation set by the U.S. Consumer Product Safety Commission (e.g., larger than 1.25” width and larger than 2.25” length).
  • the testing standard may be the cylinder testing standard set forth in 16 C.F.R. § 1501.4.
  • the one or more learning units may each include a housing.
  • the housing of a learning unit may embody the shape of a grapheme, alphanumeric character, or any other symbol (e.g., mathematical operator, chemistry symbol).
  • the housing may not be in the shape of a grapheme, alphanumeric character, and/or symbol but have one or more representations thereof on the surface of the housing.
  • the housing may be or include the three-dimensional shape of one or more letters, numbers, symbols (e.g., chemical symbols, chemical bonds, mathematical operators, punctuation symbols, grammar symbols), and/or the like.
  • a housing may be in a three-dimensional shape of a letter, such as the letter "H”.
  • the housing may be in a three-dimensional shape of a cube, cuboid, cylinder, pyramid, triangular prism, cone, sphere, partial sphere, hexagonal prism, the like, or any combination thereof.
  • the housing may be in the shape of a cube with a letter, or other character/symbol, located on one or more outer surfaces of the cube (e.g., printed, affixed, carved, molded into).
  • the housing may be the shape of common children’s toys with the one or more graphemes displayed thereon.
  • the housing(s) of learning unit(s) may be shaped like train cars and have the letters of the alphabet located thereon.
  • a learning system may include a plurality of housings for forming a plurality of units. Each housing may be in the same or a different shape as another housing of another learning unit.
  • a first housing may be in a first shape while a second housing may be in a second shape, and so forth.
  • a first shape of a first learning unit may be one letter from the alphabet while the second shape of a second learning unit is another letter from the alphabet.
  • a plurality of learning units may provide for a portion of or an entirety of all alphanumeric representations.
  • a plurality of learning units of a learning system may include up to 36 housings, including 26 letter shapes (i.e., A to Z) and 10 number shapes (i.e., 0 to 9).
  • the one or more interaction members may include one or more housings.
  • the housing of an interaction member may function to cooperate with, detect, pair, house, and/or support one or more learning units, a plurality of electrical components, or a combination thereof.
  • the housing of an interaction member may have any suitable shape to allow a user to manipulate the interaction member to cooperate with one or more learning units, be passed over one or more learning units, be located below and support one or more learning units, the like, or any combination thereof.
  • the one or more housings may be shaped such as a wand, a rod, a pen, a remote, a mouse, a tray, a box, the like, or any combination thereof.
  • a housing of an interaction member may be wand-shaped.
  • Wand-shaped may mean having a handle and a head.
  • the handle may commence at a proximal end.
  • the head may be located at the distal end.
  • the handle and/or the head may be cuboidal, cylindrical, prismatic, conical, spherical, pyramidal, the like, or any combination thereof.
  • the handle may have one or more contours formed therein for easier holding (e.g., a substantially cylindrical-shaped handle with a narrower diameter mid-section).
  • the head may also form one or more 2D and/or 3D shapes.
  • the shape may appeal to users, especially children.
  • the shape may be star-shaped, cloud-shaped, diamond-shaped, moon-shaped (e.g.,
  • a tray-shaped housing may be any suitable 3D shape for supporting a plurality of learning units for a user to view and interact with.
  • a tray-shaped housing may include an upper surface opposing a lower surface.
  • a lower surface may function to rest on a supporting surface (e.g., table, floor).
  • An upper surface may function to display and/or store one or more learning units.
  • An upper surface may have one or more recognition holders, storage holders, or a combination thereof stored therein.
  • One or more recognition holders may function to temporarily store and display one or more learning units which are being interacted with (e.g.,
  • One or more recognition holders may be located adjacent to one or more transmitters. Each individual recognition holder may be associated with its own individual transmitter or share a transmitter with one or more other recognition holders.
  • One or more storage holders may function to store one or more learning units not being actively interacted with.
  • an interaction member may be an interactive member of a play set. For example, an intersection, tunnel, sign, light, or a combination thereof of a train set may be employed as an interaction member.
  • a housing may cooperate with or include a housing cover.
  • the housing cover may function to enclose any internal components in the housing, allow access to internal components, or both.
  • the housing cover may have a shape substantially reciprocal with at least a portion of a housing.
  • a housing cover may have a shape substantially reciprocal with a surface, side, or a portion thereof of a housing.
  • a housing cover may be removably affixed to a housing.
  • a housing cover may be affixed via one or more fasteners, snap fit, friction fit, the like, or any combination thereof.
  • the housing cover may be opaque, translucent, or both.
  • the housing cover may have an identical, similar, or different opacity and/or transparency as the remainder of the housing.
  • the housing may include one or more mating features.
  • One or more mating features may function to engage or otherwise mate one housing to one or more other housings.
  • a plurality of housings engaged together may make a digraph (e.g., two letters making sound), trigraph (e.g., three letters making a sound), a word, an equation, a number, the like, or any combination thereof.
  • the one or more mating features may include one or more projections, indentations, or both.
  • each housing may be equipped with male feature(s) on one side and female feature(s) on the opposing side to provide for a universal mating scheme.
  • the one or more mating features may include magnets, hook and loop fasteners, pegs and clamps (e.g., similar to a children’s train set attachment), male keying features (e.g., tabs, pegs), female keying features (e.g., openings, slots), the like, or a combination thereof.
  • One housing may then be magnetically attached or otherwise temporarily attached to another housing.
  • the one or more mating features may be located on one or more sides (e.g., peripheral sides) of the one or more housings. The sides may be recognized as the left and right sides of the graphemes, such as in the case of typical letters and numbers.
  • One or more interior components of a plurality of housings may work together when the housings are joined or otherwise mated.
  • the speakers of joined housings may work together to simultaneously audibly relay a sound associated with a digraph, or only one speaker may remain active while the others remain silent.
  • lights may all light up simultaneously or provide a flowing light pattern which follows the pronunciation of the letters (e.g., lights moving left to right as each phoneme is audibly relayed from one or more speakers).
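  • As an illustration of the flowing light pattern above, the following is a minimal sketch (an assumption for illustration, not the disclosed implementation) of lighting joined learning units left to right as each phoneme is relayed; set_light and play_phoneme are hypothetical stand-ins for the actual LED and speaker drivers.

```python
import time

# Hypothetical hardware stand-ins; real drivers (e.g., GPIO/LED and audio
# playback APIs) would replace these print-based stubs.
def set_light(unit_index: int, on: bool) -> None:
    print(f"unit {unit_index} light {'on' if on else 'off'}")

def play_phoneme(phoneme: str, duration_s: float) -> None:
    print(f"relaying /{phoneme}/")
    time.sleep(duration_s)

def relay_joined_units(phonemes: list[tuple[str, float]]) -> None:
    """Light each joined unit, left to right, as its phoneme is sounded out."""
    for index, (phoneme, duration) in enumerate(phonemes):
        set_light(index, True)    # illuminate the unit currently being sounded
        play_phoneme(phoneme, duration)
        set_light(index, False)   # hand the light off to the next unit

# Example: three joined units forming "ship" (digraph "sh" + "i" + "p")
relay_joined_units([("sh", 0.6), ("i", 0.4), ("p", 0.3)])
```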
  • the learning system may include one or more sensory output elements.
  • the one or more sensory output elements may function to output one or more auditory signals, tactile signals, visual signals, the like, or a combination thereof.
  • the sensory output may be transmitted to an exterior of the housing.
  • the sensory output may be sensed by a user.
  • the sensory output may be related to the one or more symbols of a housing, interaction between two housings, the like, or a combination thereof.
  • One or more sensory output elements may be part of an interaction member, one or more learning units, or both.
  • the one or more sensory output elements may include one or more speakers, light sources, vibration sources, the like, or any combination thereof.
  • the sensory output elements may be triggered when a learning unit and/or interaction member is manipulated by a user.
  • the sensory output elements may teach a user to distinguish between preferred combinations of graphemes and unpreferred combinations of graphemes.
  • the sensory output elements may output a first combination of auditory, visual, and tactile signals when a preferred combination of graphemes is combined and a second combination of auditory, visual, and tactile signals when an unpreferred combination of graphemes is combined.
  • the sensory output elements may audibly emit the phoneme representing any unique combination of graphemes.
  • the sensory output elements may output the phoneme representing one or more graphemes of learning units in close proximity: a learning unit detects which other learning units are nearby and recognizes the unique graphemes those units represent.
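  • A minimal sketch of the preferred/unpreferred distinction follows, assuming a small illustrative word list and arbitrary output choices (none of which are specified in this disclosure):

```python
# Illustrative set of grapheme combinations treated as "preferred" (e.g., real
# words or digraphs); the actual vocabulary would come from the audio database.
PREFERRED_COMBINATIONS = {"at", "cat", "sh", "ch"}

def sensory_outputs(combination: str) -> dict:
    """Return a first signal set for preferred combinations, a second otherwise."""
    if combination.lower() in PREFERRED_COMBINATIONS:
        return {"audio": f"{combination}.mp3", "light": "steady", "vibration": "short"}
    # Unpreferred: sound out each grapheme individually, blink, no vibration
    return {"audio": [f"{g}.mp3" for g in combination], "light": "blink", "vibration": None}

print(sensory_outputs("cat"))  # first combination of signals
print(sensory_outputs("xq"))   # second combination of signals
```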
  • the learning system may include one or more light sources.
  • a light source may function to emit light, gain attention of a user, maintain attention of a user, or any combination thereof.
  • a light source may function to emit light with and/or related to one or more phonemes or other sounds being emitted.
  • a light source may emit light when an interaction member cooperates with one or more learning units.
  • a light source may emit light when one learning unit cooperates with another learning unit.
  • the light source may emit light when the learning unit is manipulated by a user.
  • the light source may be mechanically or adhesively affixed to the housing.
  • the light source may be affixed to the housing near a light source hole, a translucent portion, or both.
  • One or more light sources may be one or more light emitting diodes (LED), fluorescent bulbs, incandescent bulbs, the like, or a combination thereof.
  • the light source may be in communication (e.g., electrical communication) with and/or controlled by a control system, a circuit board, processor, and/or the like.
  • the light source may indicate which portion of a phoneme corresponds to a specific grapheme when multiple learning units representing multiple graphemes are represented by the emitted phoneme.
  • the light sources of two or more learning units may emit light sequentially when the learning unit audibly emits a phoneme associated with multiple graphemes.
  • a light source of an interaction member may emit light when the interaction member successfully recognizes a learning unit (e.g., transmitter of interaction member detects transmitter of learning unit).
  • the learning system may have one or more light source paths.
  • a light source path may allow for emission of light from a housing or other surface.
  • the light source paths may be formed in a housing of an interaction member, one or more learning units, or both.
  • a light source path may include one or more light source holes, translucent portions, or both in a housing.
  • a light source hole may pass through the housing surface.
  • One or more light source holes may pass through one surface or a plurality of surfaces of the housing.
  • the light source hole may have a light source mounted in close proximity.
  • the housing may have one or more transparent portions. A transparent portion may allow for lighting emitted within the housing to be emitted outside of the housing.
  • the learning system may include one or more vibration sources.
  • the vibration source may provide a tactile signal to a user.
  • the vibration source may be provided in one or more learning units, an interaction member, or any combination thereof.
  • the vibration source may transfer a vibration force to the housing.
  • the vibration force may result from interaction of a housing by a user.
  • the vibration force may result from a user manipulating a housing correctly, verbally repeating back a phoneme related to the grapheme represented by the housing, having one housing interact with one or more other housings, the like, or a combination thereof.
  • the vibrator may include an electric motor or an electro-mechanical transducer capable of producing vibrations.
  • the vibrator may be in communication with the circuit board, processor, and/or the like.
  • the vibrator may be turned on and off.
  • the one or more vibration sources may be in communication with, and/or controlled by one or more control systems, processors, circuit boards, and/or the like.
  • the learning system may include one or more speaker holes.
  • the speaker holes may allow for emission of audible sound from the housing to the exterior of the housing.
  • One or more speaker holes may be formed in an interaction member, one or more learning units, or both.
  • the speaker holes may be formed as a single hole or as one or more holes through the surface of the housing.
  • the speaker holes may have a speaker located in close proximity.
  • the speaker holes may be formed in one or a plurality of surfaces of the housing.
  • the speaker holes may be formed on a surface which faces toward and/or away from a user.
  • the learning system may include one or more speakers.
  • One or more speakers may function to provide audible sounds, make an audible phoneme sound associated with a grapheme, or both.
  • the speaker may emit audible sound when a learning unit is manipulated by a user, an interaction wand is manipulated by a user, an interaction member detects a learning unit, the like, or any combination thereof.
  • the speaker may have any configuration suitable for providing audible phoneme(s) to a user.
  • One or more speakers may refer to one or more audio amplifiers, speakers, electrical component(s) configured to convert electrical signals into sound, or any combination thereof.
  • One or more speakers may be affixed to and/or reside within a housing.
  • the speaker may be mechanically and/or adhesively affixed to the housing.
  • One or more speakers may reside within an interior of the housing, such as to be protected and avoid being damaged by a user (e.g., played with by an infant or toddler).
  • the speaker may be mounted near the speaker holes.
  • the speaker may be any electrically driven transducer capable of audible sound emission.
  • the speaker may be in communication with and/or controlled by a control system, a circuit board, a processor, the like, or any combination thereof.
  • An exemplary audio amplifier may be a class-D amplifier.
  • the PAM8403 Mini 2-Channel 3W Stereo Class D Audio Amplifier by Envistia Mall may be suitable.
  • An exemplary audio speaker may be a 4 ohm or 8 ohm speaker, for example, a 4 ohm, 3 W, 1.5-inch diameter speaker.
  • the learning system may include one or more microphones.
  • the one or more microphones may function to receive sound (e.g., voice) emitted from one or more users, the ambient environment, or both.
  • One or more microphones may have any configuration suitable for receiving sound.
  • One or more microphones may be in electrical communication with, controlled by, or both a control system, a circuit board, a processor, or any combination thereof.
  • One or more microphones may be affixed within the housing.
  • One or more microphones may be near or even adjacent to one or more openings of housing.
  • the one or more openings may be separate or the same as the one or more speaker openings.
  • the opening may allow for sound waves from the voice of a user or the ambient environment to be received by the microphone.
  • the one or more microphones may transmit one or more speech signals toward one or more processors.
  • One or more processors may work to interpret and/or convert the incoming speech signal.
  • the learning system may include one or more circuit boards.
  • a circuit board may be a printed circuit board (PCB).
  • the circuit board may communicate with and/or include one or more processors, storage mediums, sensors, transmitters, sensory output elements, power sources, light sources, speakers, microphones, other electrical components, the like, or any combination thereof.
  • the circuit board may be mechanically or adhesively affixed to the housing.
  • the circuit board may be powered by a battery or other power source.
  • the circuit board may be powered inductively by an inductive field transmitted through the housing.
  • a circuit board may be associated with an interaction member, one or more learning units, or any combination thereof.
  • a circuit board may form a backbone of a control system of an interaction member, learning unit, or both.
  • a circuit board may be formed by a plurality of electronic modules cooperating together or may be custom made such that the modules are integrated directly into a custom circuit board.
  • the learning system may include one or more processors.
  • the one or more processors may function to initiate functionality of one or more components of the learning system, analyze one or more signals incoming from one or more components, receive and/or transmit data signals, or any combination thereof.
  • One or more processors may function to receive signals from one or more electrical components, transmit signals to one or more electrical components, or both.
  • Exemplary ways the one or more processors may function and cooperate with electrical components include: receiving one or more identification signals (e.g., a signal from one or more transmitters specific to a learning unit); identifying one or more learning units based on the identification signal (e.g., identifying the letter “A”); accessing one or more identification databases, phoneme databases, speech-to-text databases, and/or the like; matching one or more identification signals to one or more identifiers (e.g., within a storage medium or even a database); accessing one or more audio files (e.g., audio database, storage medium); retrieving one or more audio files associated with one or more identification signals, identifiers, or both; retrieving one or more audio files based on a sequence and/or combination of a plurality of identification signals, identifiers, or both; decoding one or more audio files into one or more audio instruction signals; translating one or more audio files from a language stored in the one or more audio databases into a different language for transmitting as an audio instruction signal(s); accessing one or more translation services/systems and transmitting the one or more stored languages for translating into a desired language; retrieving one or more audio files after translation from a translation service/system; relaying one or more audio instruction signals to one or more audio amplifiers, speakers, or both; receiving one or more speech signals; converting one or more speech signals into speech files; and translating one or more speech files from the received language into the stored language by accessing one or more translation services/systems.
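  • A minimal sketch of the identify-and-relay flow just described, assuming RFID tag IDs as identification signals and MP3 files as the stored audio (the table contents and names below are illustrative assumptions):

```python
TAG_TO_GRAPHEME = {"0A003F2C11": "a", "0A003F2C12": "b"}   # identification database
AUDIO_FILES = {"a": "phoneme_a.mp3", "b": "phoneme_b.mp3", "ab": "word_ab.mp3"}

def play(path: str) -> None:
    # Stand-in for decoding the audio file and relaying it to the amplifier/speaker
    print(f"relaying {path}")

def on_identification_signals(tag_ids: list[str]) -> None:
    # Match identification signals to identifiers (graphemes)
    graphemes = [TAG_TO_GRAPHEME[t] for t in tag_ids if t in TAG_TO_GRAPHEME]
    if not graphemes:
        return
    # Retrieve audio for the sequence as a whole, else per-grapheme phonemes
    sequence = "".join(graphemes)
    if sequence in AUDIO_FILES:
        play(AUDIO_FILES[sequence])
    else:
        for grapheme in graphemes:
            play(AUDIO_FILES[grapheme])

on_identification_signals(["0A003F2C11", "0A003F2C12"])   # relays word_ab.mp3
```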
  • One or more processors may be part of one or more interaction members, learning units, or any combination thereof.
  • One or more processors may be located within the housing, outside of the housing, within a base unit, within a computing device of the learning system, part of a server of a system, remotely located from the learning system, the like, or any combination thereof.
  • One or more processors may be in communication with one or more other processors.
  • a processor within a housing may be in direct or indirect communication with a processor part of a base unit or even a remotely located server.
  • One or more processors may be part of one or more hardware systems, software systems, or any combination thereof.
  • One or more hardware processors may include one or more central processing units, multi-core processors, front-end processors, microcontrollers, the like, or any combination thereof.
  • One or more processors may include one or more cloud-based processors.
  • An exemplary processor may be the Adafruit nRF52840 Express Feather ARM processor by Adafruit.
  • Another exemplary processor may be a Raspberry Pi Zero by Raspberry Pi.
  • Another exemplary processor may be a custom, embedded microcontroller formed directly into the circuit board.
  • the one or more processors may be in communication with and/or include one or more storage mediums.
  • the learning system may include one or more storage mediums.
  • the one or more storage mediums may function to receive and/or transmit one or more data entries from one or more components of the system, store one or more algorithms, store computer-readable instructions (e.g., software programs), or any combination thereof.
  • the one or more storage mediums may include one or more storage devices, memory storage devices, or both.
  • the one or more storage devices may include one or more non-transient storage devices.
  • a non-transient storage device may include one or more physical servers, virtual servers, physical computing devices, or a combination thereof.
  • One or more servers may include one or more local servers, remote servers, or both.
  • One or more storage mediums may include one or more hard drives (e.g., hard drive memory), chips (e.g., Random Access Memory “RAM”), discs, flash drives, memory cards, the like, or any combination thereof.
  • the one or more storage mediums may be part of one or more interaction members, learning units, or both.
  • the one or more storage mediums may be located within one or more housings, on a circuit board, part of a processor, a storage compartment, base unit, servers, computing devices, the like, or a combination thereof.
  • the one or more storage mediums may be in communication with one or more processors.
  • the one or more storage mediums may receive data entries from one or more processors, may transmit one or more data entries to one or more processors, or both.
  • One or more storage mediums may have a volume capacity.
  • a volume capacity may be about 1 MB or greater, about 3 MB or greater, about 5 MB or greater, or even about 8 MB or greater.
  • a volume capacity may be about 50 MB or less, about 30 MB or less, or even about 15 MB or less.
  • the volume capacity should be configured to retain 100 words or more, 1,000 words or more, or even 5,000 words or more.
  • the volume capacity should be configured to retain 50,000 words or less, 35,000 words or less, or even 10,000 words or less. Words may be stored in any suitable format for accessing and transmitting to one or more speakers, such as in MP3 format.
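  • As a rough consistency check of the ranges above (assuming mono MP3 at 32 kbps and about one second of audio per stored word; neither figure is from the disclosure):

```python
# Back-of-the-envelope: words storable per megabyte at 32 kbps, 1 s per word.
BITRATE_BPS = 32_000
SECONDS_PER_WORD = 1.0
BYTES_PER_WORD = BITRATE_BPS / 8 * SECONDS_PER_WORD   # 4,000 bytes per word

for capacity_mb in (1, 8, 50):
    words = capacity_mb * 1_000_000 / BYTES_PER_WORD
    print(f"{capacity_mb:>3} MB ~ {int(words):,} one-second words")
# -> 1 MB ~ 250 words, 8 MB ~ 2,000 words, 50 MB ~ 12,500 words
```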
  • the one or more storage mediums may store data in the form of one or more databases.
  • the learning system may include one or more databases.
  • One or more databases may function to receive, store, and allow for retrieval of information related to usage of the learning unit, instructions for the learning unit (e.g., software), or both.
  • the one or more databases may be located within (e.g., stored) one or more storage mediums.
  • the one or more databases may be located locally within the learning system, remotely from the learning system, or both (e.g., cloud storage).
  • the one or more databases may include any type of database able to store digital information.
  • the digital information may be stored within one or more databases in any suitable form using any suitable database management system (DBMS).
  • Exemplary storage forms include relational databases (e.g., SQL database, row-oriented, column-oriented), non-relational databases (e.g., NoSQL database), correlation databases, ordered/unordered flat files, structured files, the like, or any combination thereof.
  • One or more databases may be located within or be part of hardware, software, or both.
  • One or more databases may be stored on a same or different hardware and/or software as one or more other databases.
  • the databases may be located within one or more non- transient storage mediums.
  • One or more databases may be located in a same or different non-transient storage medium as one or more other databases.
  • the one or more databases may be accessible by one or more processors to retrieve data entries for analysis via one or more algorithms, store one or more data entries, access instructions for execution, or any combination thereof.
  • the one or more databases may include one or more audio databases, speech databases, learning unit databases, instruction databases, user profile databases, or any combination thereof.
  • the one or more databases may include one or more audio databases.
  • One or more audio databases may function to store one or more phonemes, words, phrases, and/or the like. Data stored within an audio database representing varying phonemes, words, phrases, and/or the like may be referred to as audio file(s).
  • the one or more phonemes may include the phonetic sounds of a single letter, a plurality of letters, words, and/or the like.
  • the audio files may be stored as a pre-recorded voice, data for on-the-fly speech synthesis, or both. Speech synthesis may require the use of a synthesizer, which may also be in communication with the processor.
  • the audio files may include 100 or more, 1,000 or more, or even 5,000 or more different phonemes, words, and/or phrases.
  • the audio files may include 50,000 or less, 35,000 or less, or even 10,000 or less different phonemes, words, and/or phrases.
  • the one or more audio databases may store audio files in varying voices, may store different voices for use with different audio files, may work with a speech synthesizer for merging audio files with varying voices, the like, or any combination thereof.
  • the learning system may be able to output the one or more audio files in varying voices.
  • the varying voices may be of varying accents, genders, tones, speeds, the like, or any combination thereof.
  • the varying voices may appeal to a user’s interest, more closely represent their community, may work on their attention capabilities, and/or the like.
  • the one or more audio databases may store audio files in varying languages, may be able to work with one or more processors to translate audio files into different languages, the like, or any combination thereof.
  • a user may be able to set a preferred language via their user profile.
  • the one or more audio databases may include a single database or a plurality of databases which store audio files in varying languages (e.g., English, Spanish, French, Chinese, Portuguese, etc.).
  • the one or more audio databases may be in a single language.
  • the one or more processors may access one or more audio files from the audio database(s) and access a translation service for automatically translating the one or more audio files to a desired/preferred language before transmitting an audio signal to one or more amplifiers and/or speakers.
  • translation may work similar to Google® Translate and/or Microsoft® Translator services.
  • the one or more databases may include one or more learning unit databases.
  • One or more learning unit databases may store the identity of one or more learning units, one or more signals related to one or more learning units, or both.
  • One or more learning unit databases may correlate an identification signal received by a transmitter and sent to a processor to the identity (e.g., grapheme) represented by the learning unit.
  • a database may correlate a specific RFID tag with a specific letter of the alphabet.
  • One or more learning unit databases may provide for letters/alphabets of multiple languages.
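  • A minimal sketch of such a learning unit database, assuming illustrative RFID tag IDs and a per-language lookup table:

```python
# Tag-to-grapheme correlation; tag IDs here are illustrative placeholders.
LEARNING_UNIT_DB = {
    "en": {"04A1B2C3": "A", "04A1B2C4": "B"},
    "es": {"04A1B2C3": "A", "04A1B2C5": "Ñ"},
}

def identify(tag_id: str, language: str = "en") -> str | None:
    """Return the grapheme the learning unit represents, if known."""
    return LEARNING_UNIT_DB.get(language, {}).get(tag_id)

print(identify("04A1B2C3"))        # -> "A"
print(identify("04A1B2C5", "es"))  # -> "Ñ"
```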
  • the one or more databases may include one or more speech databases.
  • One or more speech databases may function to store incoming speech, such as that recorded by a microphone. Data stored within a speech database may be referred to as a speech file(s).
  • One or more speech files may be compared to one or more audio files by one or more processors. For example, after the system audibly emits a phoneme and/or word, a user may repeat it back. The user’s voice may be received via the microphone and recorded for storage into the speech database. The processor may then compare what was audibly emitted by the system to what was spoken by the user.
  • One or more speech databases may be in a single language or a plurality of languages.
  • One or more processors may transmit the one or more speech files in the language spoken by the user into the one or more storage mediums, may first transmit to one or more translation services/systems, or both. One or more processors may receive the translated speech files from the one or more translation services/systems and then store within one or more speech databases.
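  • A minimal sketch of the emit/repeat comparison described above; transcribe() is a hypothetical stand-in for the speech-to-text step, and the match test is deliberately simplistic:

```python
def transcribe(speech_file: str) -> str:
    # Placeholder for speech recognition over the recorded microphone audio
    return "buh"

def check_repetition(emitted_phoneme: str, speech_file: str) -> bool:
    """Compare what the system emitted with what the user repeated back."""
    heard = transcribe(speech_file).strip().lower()
    matched = heard == emitted_phoneme.strip().lower()
    # The result could be written to the user profile database for progress tracking
    print(f"emitted '{emitted_phoneme}', heard '{heard}': {'match' if matched else 'no match'}")
    return matched

check_repetition("buh", "recording_0001.wav")
```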
  • the one or more databases may include one or more instruction databases.
  • One or more instruction databases may have one or more instruction algorithms stored therein.
  • the one or more instruction algorithms may instruct one or more processors how to react to one or more signals received from one or more transmitters, microphones, other processors, sensors, the like, or any combination thereof.
  • the one or more instruction algorithms may instruct one or more processors how to transmit one or more signals toward one or more other transmitters, speakers, other processors, sensors, the like, or any combination thereof.
  • the one or more instruction algorithms may instruct the one or more processors how to automatically execute any of the methods disclosed herein.
  • the one or more instruction algorithms may instruct the one or more processors how to automatically analyze a user’s profile, history, and/or progress according to any of the methods disclosed herein.
  • the one or more databases may include one or more user profile databases.
  • the one or more user profile databases may include one or more user profiles, user histories, user performance, the like, or any combination thereof.
  • User profiles may include an individual’s name, age, gender, race, ethnicity, language preference, and/or the like.
  • User history may include a history of the user’s use of the learning system.
  • a user’s performance may include tracking progression of the user when playing with the learning system. For example, how often the user’s voice correctly mimics the one or more phonemes audibly relayed by the learning system. As another example, how often a user sequences a plurality of learning units to form one or more words.
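  • A minimal sketch of a user profile record carrying the history/performance fields described above (field names are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    language: str = "en"
    attempts: int = 0
    correct_repetitions: int = 0
    words_formed: list = field(default_factory=list)

    def record_repetition(self, matched: bool) -> None:
        self.attempts += 1
        self.correct_repetitions += int(matched)

    @property
    def accuracy(self) -> float:
        return self.correct_repetitions / self.attempts if self.attempts else 0.0

profile = UserProfile("Sam")
profile.record_repetition(True)
profile.record_repetition(False)
profile.words_formed.append("cat")
print(f"{profile.name}: {profile.accuracy:.0%} accurate, {len(profile.words_formed)} word(s) formed")
```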
  • the learning system may include a mobile application.
  • the mobile application may be interacted with by a user to learn information about the learning system, send commands to the learning system, or even track progress.
  • the mobile application may provide information about user interactions with the learning system.
  • the mobile application may communicate with one or more interaction members, learning units, or any combination thereof over the Internet to provide information to a user.
  • the mobile application may send user commands to an interaction member, learning unit, or both which change how the learning system functions. For example, changing from phoneme sounds (e.g., sound out any combination of learning units) to word recognition.
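  • A minimal sketch of such a command, assuming a simple JSON payload and abstracting the transport (e.g., Bluetooth® or Wi-Fi) behind a send() stub; the command vocabulary is an illustrative assumption:

```python
import json

def send(payload: bytes) -> None:
    # Stand-in for the wireless transport to an interaction member/learning unit
    print(f"sending {payload!r}")

def set_mode(mode: str) -> None:
    """Switch the learning system between operating modes."""
    if mode not in ("phoneme", "word_recognition"):
        raise ValueError(f"unknown mode: {mode}")
    send(json.dumps({"command": "set_mode", "mode": mode}).encode())

set_mode("word_recognition")   # change from sounding out units to whole words
```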
  • the learning unit may include one or more transmitters.
  • the transmitter may function to send and/or receive signals; sense the presence of or proximity to other transmitters; allow for an interaction member to detect and recognize a specific learning unit; place one interaction member and/or learning unit in communication with another interaction member and/or learning unit; place a housing in communication with another housing, storage compartment, base unit, the Internet, and/or the like; or a combination thereof.
  • a transmitter may be considered a transceiver.
  • the transmitter may interact with other transmitters.
  • a transmitter of one interaction member and/or learning unit may interact with a transmitter of another interaction member, learning unit, housing, storage compartment, base unit, computing device, the like, or any combination thereof.
  • One or more transmitters may be located within one or more interaction members, learning units, or both.
  • One or more transmitters may be located within a housing, affixed to a housing, within a storage compartment, within a base unit, the like, or any combination thereof.
  • the transmitter may be uniquely identifiable.
  • a transmitter may function to identify a specific interaction member, learning unit, housing, symbol (e.g., grapheme), sound (e.g., phoneme), or any combination thereof.
  • a transmitter may be a single transmitter or a plurality of transmitters.
  • the one or more transmitters may include one or more radio frequency identification transmitters, wireless modules, Bluetooth® transmitters, near-field communication transmitters (NFC), global positioning system transmitters, cameras, scanners, the like, or any combination thereof.
  • the transmitter may transmit wireless communication through the housing.
  • the transmitter may only receive a signal when a learning unit comes into view.
  • the transmitter may be active, passive, or both. Active may mean that the transmitter is powered and continuously broadcasts its own signal. Passive may mean that the transmitter is not internally powered and may be powered by a reader, such as by an electromagnetic field emitted by the reader.
  • an interaction member may include one or more radio frequency identification transmitters (e.g., readers, active) while the one or more learning units include one or more radio frequency identification tags (e.g., passive).
  • a plurality of learning units may all have active radio frequency identification tags and/or readers. Due to the proximity of the learning units, and for properly identifying the intended learning unit, a short-range transmission distance (e.g., 0.5 inches to 4 inches, or 1 inch to 2 inches) is preferred. In other words, RFID technology which reads at a longer distance is problematic, as it does not clearly identify which learning units are being interacted with and in what sequence.
  • An exemplary radio frequency reader is the RDM6300 RFID Reader.
  • An exemplary radio frequency tag is a 125 kHz EM4100 protocol tag.
  • Tags may be sized (e.g., diameter) of 10 mm or greater, 12 mm or greater, or even 14 mm or greater.
  • Tags may be sized 50 mm or less, 40 mm or less, or even 30 mm or less. The intent is to balance size against detection range.
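  • For concreteness, a minimal sketch of decoding one frame from an RDM6300-style reader follows. It assumes the module's commonly documented 9600-baud frame layout (0x02 start byte, ten ASCII-hex data characters for version plus tag ID, two ASCII-hex checksum characters, 0x03 stop byte); serial wiring and setup are omitted.

```python
def decode_rdm6300_frame(frame: bytes) -> str | None:
    """Return the 10-character tag string, or None if the frame is invalid."""
    if len(frame) != 14 or frame[0] != 0x02 or frame[13] != 0x03:
        return None
    data, checksum = frame[1:11], int(frame[11:13], 16)
    xor = 0
    for i in range(0, 10, 2):            # XOR the five data bytes together
        xor ^= int(data[i:i + 2], 16)
    return data.decode("ascii") if xor == checksum else None

# Example frame carrying 0A003F2C11 (0x0A^0x00^0x3F^0x2C^0x11 = 0x08)
print(decode_rdm6300_frame(b"\x02" + b"0A003F2C11" + b"08" + b"\x03"))
```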
  • One or more transmitters may be read-only or rewritable.
  • the RFID tags may be read-only or rewritable.
  • the transmitter may be in communication with the control system, circuit board, processor, storage mediums, the like, or any combination thereof.
  • One or more transmitters located within and/or otherwise associated with an interaction member may be referred to as an interaction transmitter(s).
  • One or more transmitters located within and/or otherwise associated with a learning unit may be referred to as a unit transmitter.
  • an interaction member may include one or more scanners or readers.
  • the scanner(s) may function to read one or more QR codes, barcodes, and/or the like.
  • the code(s) may be located on an exterior surface of a learning unit.
  • the code(s) may be used in lieu of and similar to the RFID reader and RFID tag.
  • the one or more scanners or readers may be mounted onto an interaction member, may be static, may be moveable, or any combination thereof.
  • the one or more codes coming into readable view of the one or more scanners or readers may be considered a transmission signal.
  • an interaction member may include one or more cameras.
  • the one or more cameras may be able to visually detect and see the one or more learning units.
  • the one or more cameras may transmit the captured image(s) to the one or more processors such as to determine the grapheme or other shape represented by the learning unit.
  • One or more cameras may be mounted onto one or more interaction members.
  • One or more cameras may look down on, over, up to, or any viewing direction such as to have a view of one or more learning units.
  • One or more learning units coming into and/or being in view of the one or more cameras may be considered a transmission signal.
  • the learning system may emit one or more transmission signals.
  • a transmission signal may be a wireless communication signal.
  • a transmission signal may be emitted by one or more transmitters.
  • a transmission signal may be emitted continuously, in response to movement of an interaction member and/or learning unit, upon turning on a switch of the system, the like, or any combination thereof.
  • a transmission signal from one transmitter may be received by another transmitter.
  • a transmission signal relayed by a transmitter of an interaction member and/or learning unit toward a learning unit may be referred to as a primary or first transmission signal.
  • a transmission signal relayed by a transmitter of a learning unit toward the interaction member and/or another learning unit may be referred to as a secondary or second transmission signal.
  • the learning unit may include one or more sensors.
  • the sensors may function to detect changes in the position and/or orientation of a learning unit, overall speed of movement of a learning unit, track location of a learning unit, the like, or a combination thereof.
  • the one or more sensors may communicate position, orientation, angle (e.g., tilt), velocity, acceleration, or other data to a control system, processor, circuit board, transmitters, one or more other sensors, the like, or any combination thereof.
  • the one or more sensors may allow for one or more components of the learning system (e.g., interaction member, learning unit) to turn off (e.g., go to sleep, enter low power mode) when not in use for an extended duration of time (e.g., 5 minutes or greater, 10 minutes or greater); a sleep/wake sketch follows these sensor bullets.
  • the one or more sensors may allow for one or more components of the learning system to turn on (e.g., be reactivated, enter regular power mode) upon detecting movement or other interaction with the component.
  • the one or more sensors may work with or in lieu of a power switch.
  • the one or more sensors may be located inside and/or outside of one or more interaction members, learning units, or both.
  • the one or more sensors may include an inertial measurement unit (IMU), accelerometer, tilt switch, gyrometer, force sensor, near-field communication module, RFID module, Bluetooth module, Wi-Fi module, the like, or any combination thereof.
  • the sensors may detect position or orientation, acceleration, rotation, the like, and/or changes thereof which result from use (e.g., manipulation) of a component by a user.
  • RFID may include localizable RFID.
  • RFID may have a short transmission distance (e.g., less than 10”, less than 5”, less than 2”). A short transmission distance may provide accuracy in identifying which learning units are interacting.
  • An exemplary inertial measurement unit (IMU) may include the 9-DOF Orientation IMU Fusion Breakout BNO085 (BNO080) by Adafruit.
  • one or more sensors may detect a user picking up a learning unit, moving around (e.g., walking) with a learning unit, rotating or similarly manipulating a learning unit, shaking a learning unit, contacting (e.g., hitting) another learning unit or housing, interacting with a single housing or a plurality of housings, the like, or any combination thereof.
  • Another exemplary sensor may be a tilt switch, for example a rolling ball sensor switch with product number RB-231X2 manufactured by C&K.
  • a tilt switch may be useful in detecting if a component (e.g., interaction member, learning unit) has moved from a horizontal and/or steady position into an off-horizontal and/or vertical position or vice-versa. Detecting such movement may allow for a device to be turned on or off without the need for a switch.
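Below is a minimal sketch of the sleep/wake behavior the sensor bullets above describe: enter a low power mode after an extended idle period and reactivate on sensed movement. The poll function, timeout, and power-mode actions are illustrative assumptions, not details from the specification.

```python
# Minimal sketch of inactivity-based sleep and motion-based wake. The
# read_motion() poll, timeout, and power-mode actions are illustrative.
import time

IDLE_TIMEOUT_S = 5 * 60  # e.g., 5 minutes or greater

def read_motion() -> bool:
    """Placeholder for an IMU or tilt-switch poll reporting movement."""
    raise NotImplementedError

def run() -> None:
    asleep = False
    last_motion = time.monotonic()
    while True:
        if read_motion():
            last_motion = time.monotonic()
            if asleep:
                asleep = False   # reactivate: enter regular power mode
                print("Waking out of sleep mode")
        elif not asleep and time.monotonic() - last_motion > IDLE_TIMEOUT_S:
            asleep = True        # e.g., stop the reader, dim any LEDs
            print("Entering low power (sleep) mode")
        time.sleep(0.1)          # poll roughly 10 times per second
```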
  • the learning unit may include a plurality of wires.
  • the wires may enable communication between electronic components of the learning unit.
  • the wires may create communication between the circuit board and the sensory output elements, transmitter, sensors, power source, portions of a control system, or any combination thereof. Electrical connections may also be due to electrical contact between components, soldering, or any other suitable means.
  • the learning system may include one or more power sources.
  • the power source may provide electrical power to the electronic components.
  • Each component with active electronic components may have its own power source.
  • an interaction member may have a power source.
  • One or more learning units may be free of or include a power source. Learning units with passive transmitters may be free of a power source. Learning units with active transmitters may include or be free of a power source.
  • the power source may be connected to other electrical components through wires or be directly mounted to a circuit board.
  • the power source may be a battery, capacitor, the like, or any combination thereof.
  • the power source may be a disposable and/or rechargeable battery.
  • the power source may be recharged by direct contact with a voltage source or by inductive charging through the housing of the learning unit.
  • An exemplary power source may be a 500 mAh lithium-ion battery, one or more double-A batteries, or even one or more C batteries.
  • the power source may include or be in communication with a power converter.
  • a power converter may function to convert an incoming voltage into another voltage compatible with the electronic components.
  • a power converter may be a direct current to direct current power converter (DC-to-DC), an alternating current to direct current power converter (AC-to-DC), or both.
  • An exemplary power converter may be a Comidox DC-DC Step Up Power Module Voltage Boost Converter Board, converting an incoming 0.9 V to 5 V input to a 5 V output.
  • the learning system may include one or more unpowered learning units.
  • An unpowered learning unit may be a learning unit without a power source.
  • the unpowered learning unit may communicate with a learning unit through wireless communication.
  • the unpowered learning unit may be temporarily supplied with power using an inductive field which may pass through the housing of the unpowered learning unit.
  • the unpowered learning unit may contain a radiofrequency identification tag to wirelessly communicate with a learning unit.
  • the radiofrequency identification tag may be a unique radiofrequency identification tag which is associated with the grapheme represented by the unpowered learning unit.
  • the learning system may include one or more base units.
  • the base unit may send information to or receive information from a learning unit.
  • the base unit may include one or more sensors, transmitters, sensory output elements, circuit boards, processors, power sources, user interfaces, or any combination thereof.
  • the base unit may support learning units on a surface of the base unit.
  • the base unit may be in the form of a tray, mat, easel, storage container, or the like.
  • a base unit may be the same as, separate from, in addition to, or in lieu of an interaction member.
  • One or more features applicable to an interaction member may be suitable for the base unit and are incorporated herein.
  • the base unit may teach the user phonemes associated with graphemes.
  • the base unit may sense the presence of learning units on a surface of the base unit and audibly emit the phoneme associated with the graphemes represented by the learning units on the surface of the base unit.
  • the base unit may sense learning units using a transmitter or sensor.
  • the base unit may include a user interface.
  • the learning system may include or be free of a graphic user interface.
  • the graphic user interface may be a screen or touch screen mounted to a surface of the base unit, interaction member, or both.
  • the user interface may communicate information about user interactions with the learning system to a user.
  • the base unit and/or interaction member may communicate with a user through the user interface by displaying information on the screen or touch screen.
  • the user may send commands to the base unit and/or interaction member through the user interface.
  • the base unit may be commanded by a user through touching the touch screen.
  • the user interface may provide instructions to a supervisory user which aid the supervisory user in teaching the user to associate graphemes with phonemes.
  • the user interface may display information on the screen or touchscreen which will aid the supervisory user in instructing the user.
  • the learning system may include one or more storage containers.
  • the storage containers may house the one or more main components (e.g., interaction member, learning units) of the learning system.
  • the storage containers may house the one or more components of the learning system in an internal cavity or on a surface of the storage container.
  • the storage containers may contain recessed surfaces having a shape substantially reciprocal with a surface, side, or a portion thereof of an interaction member, learning unit, or both. The recessed surfaces of a storage container may prevent the one or more components from moving relative to the storage container.
  • the storage containers may communicate with a learning unit.
  • the storage container may receive information from a learning unit including patterns of interaction between the user and the learning unit.
  • the storage container may wirelessly communicate with a learning unit through a transmitter.
  • the storage container may provide electrical energy to the learning unit.
  • the storage container may contain an inductive coil which emits an inductive field to wirelessly charge the learning unit.
  • the learning system may include one or more communication modules.
  • the one or more communication modules may allow for the learning unit to receive and/or transmit one or more signals from one or more computing devices, to a mobile application, be integrated into a network, or both.
  • the one or more communication modules may have any configuration which may allow for one or more data signals from one or more learning units to be relayed to one or more other learning units, controllers, communication modules, communication hubs, networks, computing devices, processors, the like, or any combination thereof located external of the learning unit.
  • the one or more communication modules may include one or more wired communication modules, wireless communication modules, or both.
  • a wired communication module may be any module capable of transmitting and/or receiving one or more data signals via a wired connection.
  • One or more wired communication modules may communicate via one or more networks via a direct, wired connection.
  • a wired connection may include a local area network wired connection by an Ethernet port.
  • a wired communication module may include a PC Card, PCMCIA card, PCI card, the like, or any combination thereof.
  • a wireless communication module may include any module capable of transmitting and/or receiving one or more data signals via a wireless connection.
  • One or more wireless communication modules may communicate via one or more networks via a wireless connection.
  • One or more wireless communication modules may include a Wi-Fi transmitter, a Bluetooth transmitter, an infrared transmitter, a radio frequency transmitter, an IEEE 802.15.4 compliant transmitter, the like, or any combination thereof.
  • a Wi-Fi transmitter may be any transmitter compliant with IEEE 802.11.
  • a communication module may be single band, multi-band (e.g., dual band), or both.
  • a communication module may operate at 2.4 GHz, 5 GHz, the like, or a combination thereof.
  • a communication module may communicate with one or more other learning units, communication modules, computing devices, processors, or any combination thereof directly; via one or more communication hubs, networks, or both; via one or more interaction interfaces; via one or more mobile applications; or any combination thereof.
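As one illustration of the communication modules described above, the sketch below relays an interaction event to a remote computing device as a line of JSON over a TCP connection. The host, port, and event fields are assumptions for the example, not values from the specification.

```python
# Minimal sketch of relaying an interaction event over a network. The host,
# port, and event fields are assumptions for illustration only.
import json
import socket
import time

def send_event(host: str, port: int, tag_id: str) -> None:
    """Send one interaction event as a line of JSON over TCP."""
    event = {"tag": tag_id, "time": time.time()}
    with socket.create_connection((host, port), timeout=2) as conn:
        conn.sendall(json.dumps(event).encode("utf-8") + b"\n")

# Example: send_event("192.168.1.10", 9000, "1A0052FD9B")
```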
  • the learning system may form a kit.
  • the kit may include one or more learning units, one or more interaction members, one or more base units, one or more storage containers, the like, or any combination thereof.
  • the kit may enable a user to learn phonemes associated with a plurality of combinations of graphemes.
  • the kit may include two or more learning units which represent two or more graphemes which can be combined to enable the user to learn the associated phoneme for any combination of graphemes represented by the learning units.
  • the kit may record user interactions with learning units and interaction member and/or base unit.
  • the kit may be one of multiple unique kits which each contain a different combination of learning units and housings.
  • the kit may include learning units, interaction members, base units, and/or storage containers which are designed to match a user’s age or phonetic skill level.
  • the kit may use information recorded to the interaction member and/or base unit to determine which of the multiple unique kits is appropriate for a user as the user’s age or skill level advances.
  • the kit may use information recorded on the base unit to detect abnormalities in user interactions with learning units.
  • the abnormalities that the learning unit may detect include autism, dyslexia, other medical conditions, and the like.
  • the abnormalities may be predictively detected by the learning unit for later diagnosis by a licensed medical professional.
  • the present disclosure relates to one or more methods of using a learning system.
  • the method may include using a learning system for a user to learn and associate one or more phonemes to one or more graphemes.
  • the methods may function to allow a user to audibly hear, learn, and/or even communicate one or more audible sounds related to one or more visual symbols represented by the learning unit.
  • the one or more methods may include a single learning unit process, a multiple learning unit process, or both.
  • a single learning unit process may relate to a user manipulating and interacting with only one housing of a learning unit at a time.
  • a multiple learning unit process may relate to a user manipulating and interacting with a plurality of housings together.
  • the methods may include a process incorporating an interaction member or free of an interaction member. The method may employ the learning system as discussed herein.
  • the method may include the user physically manipulating one or more learning units and/or one or more interaction members to cause the learning system to be activated.
  • the manipulating and activation may include moving a switch of the learning system. Upon moving the switch, the learning system may power on and/or wake out of a sleep mode. Moving may include sliding, depressing, rotating, and/or the like.
  • the switch may be part of one or more learning units and/or interaction members.
  • the manipulating and activation may include physically moving the one or more learning units, interaction members, or both.
  • one or more sensors may detect the movement. The one or more sensors may then cause the learning system to power on and/or wake out of a sleep mode.
  • the one or more sensors may transmit sensing of the movement to one or more processors of the learning system.
  • the one or more sensors may work in conjunction with and/or in lieu of one or more switches.
  • a switch may fully power on/off the learning system.
  • While still being powered on, the learning system may enter a sleep mode (e.g., low power mode) after a period of time of not being moved (e.g., as sensed by the one or more sensors).
  • the learning system may wake out of the sleep mode.
  • the one or more sensors may be any of the one or more sensors as discussed above with respect to the learning system.
  • the method may include the user physically manipulating the one or more learning units and/or the one or more interaction members such that based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or a combination thereof of the one or more learning units and/or the one or more interaction members, an auditory signal, and optionally, a tactile signal and/or a visual signal, is generated that is transmitted to an exterior of the one or more learning units and/or the one or more interaction members via one or more sensory output elements.
  • the one or more sensory output elements may include any sensory output element as discussed above with respect to the learning system.
  • the one or more sensory output elements may include any element which outputs an auditory signal, visual signal, and/or tactile signal.
  • the one or more sensory output elements may include one or more speakers which transmit an auditory signal.
  • the one or more sensory output elements may include one or more light sources which transmit a visual signal.
  • the one or more sensory output elements may include one or more electrical motors and/or piezoelectric transducers which transmit the tactile signal.
  • the method in which manipulating one or more learning units and/or one or more interaction members results in an auditory signal, visual signal, and/or tactile signal may include: physically moving a single learning unit to result in the auditory signal being a phoneme related to a grapheme represented by the single learning unit; and/or physically moving an interaction member into a detection range of the single learning unit to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit.
  • the method may include physically moving the single learning unit to result in the auditory signal.
  • the auditory signal may be related to the grapheme represented by the single learning unit.
  • a single learning unit may include one or more sensors which are configured to detect the movement of the single learning unit.
  • the one or more sensors may be the same one or more sensors which detect movement to power on, off, and/or wake out of a sleep mode.
  • the one or more sensors may be any suitable sensor as discussed herein.
  • the one or more sensory output elements may include one or more speakers.
  • the one or more speakers may be located within the single learning unit.
  • the single learning unit may include one or more processors.
  • the single learning unit may include one or more audio amplifiers.
  • the one or more sensors may communicate the detected movement to the one or more processors.
  • the one or more processors may communicate one or more audio signals.
  • the one or more audio signals may be related to the phoneme and communicated to the one or more audio amplifiers.
  • the one or more audio signals may be relayed to the speaker to result in the auditory signal.
  • the auditory signal may be a vocal representation of the phoneme.
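The single-unit pipeline above (sensor to processor to amplifier to speaker) might be approximated in software as in the sketch below, where a playback library stands in for the amplifier-and-speaker chain. The grapheme, file layout, sensor poll, and the choice of the simpleaudio package are assumptions.

```python
# Minimal sketch of the single-unit pipeline: sensed movement triggers
# playback of the unit's phoneme recording. The grapheme, file layout,
# sensor poll, and the simpleaudio package are assumptions.
import simpleaudio as sa

GRAPHEME = "C"                              # grapheme this unit represents
PHONEME_FILE = f"phonemes/{GRAPHEME}.wav"   # hypothetical audio database path

def movement_detected() -> bool:
    """Placeholder for the sensor poll (e.g., IMU or tilt switch)."""
    raise NotImplementedError

def loop() -> None:
    while True:
        if movement_detected():
            # The playback library stands in for the processor-to-amplifier-
            # to-speaker chain described in the bullets above.
            sa.WaveObject.from_wave_file(PHONEME_FILE).play().wait_done()
```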
  • the method may include physically moving one or more learning units (e.g., subsequent learning units) into the detection range of the one or more other learning units (e.g., preceding learning units).
  • the movement into the detection range may result in the auditory signal.
  • the auditory signal may be one or more phonemes.
  • the one or more phonemes may be related to the one or more graphemes represented by the one or more preceding learning units and the one or more subsequent learning units.
  • the representation may be the sequence of the learning units combined together.
  • the detection range may be in close proximity to, in viewing proximity, in sensing proximity, in contact with, mated to, or any combination thereof of one learning unit to another learning unit. Mated may be via one or more mating features.
  • the learning units may each include one or more transmitters configured to detect and identify one another. There may only be one master learning unit with an active transmitter while the others have passive transmitters.
  • the first learning unit in the sequence may act as a master unit. Acting as a master unit may mean that the master unit’s speaker, processor, and other electrical components are the ones that are active while the other subsequent learning units are attached and recognized, but are otherwise inactive or dormant. As an alternative, all of the learning units may be active.
  • the one or more processors and/or speakers may sync with one another and together result in the auditory signal.
  • the one or more sensory output elements may include one or more speakers.
  • the one or more speakers may be located within any (e.g., preceding and/or subsequent) learning units.
  • the one or more learning units may include one or more processors.
  • the one or more learning units may include one or more audio amplifiers.
  • the one or more processors may communicate one or more audio signals.
  • the one or more audio signals may be related to one or more phonemes.
  • the one or more phonemes may be related to the one or more graphemes represented by the plurality of learning units and their arranged sequence. The sequence may be left to right, up to down, right to left, down to up, any other direction, or a combination thereof.
  • the one or more audio signals may be transmitted to one or more audio amplifiers.
  • the one or more audio signals may then be relayed to the one or more speakers to result in the auditory signal.
  • the auditory signal may be a vocal representation of the phoneme.
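A minimal sketch of the master-unit arrangement described above follows: the first unit in the sequence is active and voices the phonemes for the whole arrangement while the other attached units are merely detected. Both the detection and playback calls are hypothetical placeholders.

```python
# Minimal sketch of the master-unit arrangement: the first unit in the
# sequence is active and voices the whole arrangement. Detection and
# playback are hypothetical placeholders.
def detect_attached_units() -> list[str]:
    """Placeholder: graphemes of attached units, in sequence order."""
    raise NotImplementedError

def play_phonemes(grapheme_sequence: str) -> None:
    """Placeholder: voice the phoneme(s) for the given grapheme string."""
    raise NotImplementedError

def master_unit_step(own_grapheme: str) -> None:
    # e.g., units "C", "A", "R" arranged left to right yield "CAR"
    sequence = [own_grapheme] + detect_attached_units()
    play_phonemes("".join(sequence))
```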
  • the method may include physically moving an interaction member into the detection range of a single learning unit, moving a single learning unit into the detection range of an interaction member, or both to result in the auditory signal.
  • the auditory signal may be the phoneme related to the grapheme represented by the single learning unit.
  • the detection range may be in close proximity to, in viewing proximity, in sensing proximity, in contact with, or any combination thereof.
  • the detection range may be determined by the type of transmitters employed within the interaction member, learning unit, or both.
  • the interaction member may include one or more interaction transmitters.
  • the single learning unit may include one or more unit transmitters. The one or more interaction transmitters may detect and identify the one or more unit transmitters when one or the other is moved into the detection range.
  • the detection range may be in close proximity to, in viewing proximity, in sensing proximity, in contact with, mated to, or any combination thereof of one learning unit to another learning unit. Mated may be via one or more mating features.
  • Detection range may include a learning unit located in a recognition holder, a unit slot, or both of an interaction member. Detection range may include a head of an interaction member hovering over and/or contacting an upper facing surface of a learning unit.
  • the interaction member, the learning unit, or both may include the one or more sensory output elements.
  • the one or more sensory output elements may be a speaker.
  • the interaction member, the learning unit, or both may include one or more processors.
  • the interaction member, the learning unit, or both may include one or more audio amplifiers.
  • the one or more interaction transmitters, the unit transmitters, or both may communicate the detection and/or identification of the single learning unit to the one or more processors.
  • the one or more processors communicate one or more audio signals.
  • the one or more audio signals may be related to the phoneme.
  • the one or more audio signals may be relayed to one or more audio amplifiers.
  • the one or more audio signals may then be relayed to the speaker to result in the auditory signal.
  • the auditory signal may be a vocal representation of the phoneme.
  • the method may include physically moving one or more interaction members into the detection range of the plurality of learning units in a sequence (e.g., detecting one after the other in sequential order), moving a plurality of learning units in a sequence into the detection range of the interaction member(s), or both to result in the auditory signal.
  • the auditory signal may be one or more phonemes.
  • the one or more phonemes may be related to one or more graphemes.
  • the one or more graphemes may be represented by the sequence of the plurality of learning units.
  • the sequence of the plurality of learning units may be left to right, right to left, top to bottom, bottom to top, on diagonal, the like, or any combination thereof.
  • the one or more interaction members may include one or more interaction transmitters.
  • the plurality of learning units may each include one or more unit transmitters.
  • the one or more interaction transmitters may detect and/or identify the one or more unit transmitters and their sequence when moved into the detection range.
  • the detection range may be in close proximity to, in viewing proximity, in sensing proximity, in contact with, mated to, or any combination thereof of one learning unit to another learning unit. Mated may be via one or more mating features.
  • Detection range may include a learning unit located in a recognition holder, a unit slot, or both of an interaction member. Detection range may include a head of an interaction member hovering over and/or contacting an upper facing surface of a learning unit.
  • the interaction member(s), the learning unit(s), or both may include the one or more sensory output elements.
  • the one or more sensory output elements may be one or more speakers.
  • the interaction member(s), the learning unit(s), or both may include one or more processors.
  • the interaction member(s), the learning unit(s), or both may include one or more audio amplifiers.
  • the one or more interaction transmitters, the unit transmitters, or both may communicate the detection and/or identification of the learning unit(s) to the one or more processors.
  • the one or more processors communicate one or more audio signals.
  • the one or more audio signals may be related to one or more phonemes.
  • the one or more audio signals may be relayed to one or more audio amplifiers.
  • the one or more audio signals may then be relayed to the one or more speakers to result in one or more auditory signals.
  • the one or more auditory signals may be a vocal representation of the one or more phonemes.
  • the method may include one or more transmitters automatically relaying a detection and/or identification signal to one or more processors.
  • One or more transmitters may establish a connection with one or more other transmitters.
  • the connection may be one or more interaction transmitters with one or more unit transmitters, one or more unit transmitters with one or more other unit transmitters, or both.
  • a unit transmitter and/or interaction transmitter may automatically deploy a first signal.
  • the first signal may be a signal looking for another unit and/or interaction transmitter.
  • Another unit transmitter and/or interaction transmitter, upon receiving the first signal, continuously, or both, may automatically deploy a return, second signal.
  • the second signal may carry the identity back to the unit and/or interaction transmitter.
  • the received signal may then be automatically transmitted from the transmitter(s) to one or more processors.
  • the one or more processor(s) may then automatically determine the identity of the interaction member, the learning unit, or both based on the returned, second signal.
  • the one or more processors may automatically access one or more storage mediums (e.g., non-transitory) to retrieve an identity which matches with the second signal.
  • the method may include one or more processors executing one or more instruction algorithms.
  • One or more instruction algorithms may instruct one or more processors what to do with a received signal from one or more transmitters.
  • One or more instruction algorithms may instruct one or more processors to automatically access one or more databases (e.g., one or more audio databases, learning unit databases, or both).
  • One or more instruction algorithms may instruct one or more processors to automatically correlate an identity of a signal from a transmitter to one or more graphemes represented by the one or more learning units.
  • One or more instruction algorithms may instruct one or more processors to automatically access one or more audio files related to one or more identified graphemes.
  • One or more instruction algorithms may instruct one or more processors to convert one or more audio files to one or more auditory signals.
  • One or more instruction algorithms may instruct one or more processors to automatically transmit one or more auditory signals related to the one or more graphemes.
  • One or more auditory signals may be communicated to one or more audio amplifiers, speakers, or both.
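The instruction algorithm outlined in the bullets above reduces, in sketch form, to a pair of lookups: transmitter identity to grapheme, then grapheme to audio file. The dictionaries and tag IDs below are illustrative stand-ins for the databases the specification mentions, not actual values.

```python
# Minimal sketch of the instruction algorithm: correlate a transmitter's
# identity to a grapheme, then retrieve the matching audio file. The
# dictionaries and tag IDs are illustrative stand-ins for the databases.
UNIT_DATABASE = {              # tag identity -> grapheme (hypothetical IDs)
    "1A0052FD9B": "C",
    "1A0052FE3C": "A",
}
AUDIO_DATABASE = {             # grapheme -> audio file (assumed layout)
    "C": "audio/c.wav",
    "A": "audio/a.wav",
}

def handle_signal(tag_id: str):
    """Return the audio file to play for a received identity, or None."""
    grapheme = UNIT_DATABASE.get(tag_id)   # identify the learning unit
    if grapheme is None:
        return None                        # unknown transmitter
    return AUDIO_DATABASE[grapheme]        # audio file for the output stage

# Example: handle_signal("1A0052FD9B") returns "audio/c.wav"
```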
  • the method may include one or more processors automatically retrieving one or more audio files representing one or more phonemes, words, phrases, and/or the like.
  • the method may include retrieving one or more audio files which match the one or more learning units, a sequence of a plurality of learning units, or both.
  • One or more instruction algorithms may instruct the processor(s). For each learning unit detected, a phoneme may be generated. For a sequence of learning units detected, one or more phonemes may be generated.
  • the method may include automatically receiving and/or storing one or more speech files.
  • the method may include a user, playing with the learning system, audibly speaking one or more phonemes into a microphone of the learning system.
  • the user may audibly repeat a phoneme(s) after the learning system audibly plays the phoneme(s).
  • the received speech may enter into the microphone.
  • the microphone may convert the user’s voice into one or more speech signals.
  • the one or more speech signals may be automatically communicated (e.g., transmitted) toward the one or more processors.
  • the one or more processors may then store the one or more speech signals in one or more storage mediums. For example, the one or more processors may direct the one or more speech signals toward one or more speech databases.
  • the one or more processors may automatically convert the one or more speech signals into one or more speech files.
  • the one or more speech files may then be compared to the one or more audio files, or other audio representations of the one or more phonemes.
  • the comparison may be completed by the same or different (e.g., remote) processors.
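A minimal sketch of the speech-capture step follows: microphone samples are stored as a speech file and a comparison hook is left for the same or a remote processor. Only the WAV writing uses real library calls; capture and scoring are placeholders, and the 16 kHz mono PCM format is an assumption.

```python
# Minimal sketch of storing a user's speech signal as a speech file and
# leaving a comparison hook. Only the WAV writing uses real library calls;
# capture and scoring are placeholders, and 16 kHz mono PCM is an assumption.
import wave

def store_speech_file(path: str, pcm_bytes: bytes, rate: int = 16000) -> None:
    """Write 16-bit mono PCM microphone samples to a WAV speech file."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(pcm_bytes)

def compare_speech(speech_path: str, reference_path: str) -> float:
    """Placeholder: score how closely the user's speech matches the
    reference phoneme audio (locally or on a remote processor)."""
    raise NotImplementedError
```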
  • the method may include a user logging into the learning system.
  • Logging in may include a user accessing one or more user profiles. Logging in may be done by vocally reciting a user’s identification (e.g., name, nickname, or other identifier) into one or more microphones of the learning system. Logging in may even simply use voice recognition or even facial recognition if a camera is used within the learning system. Logging in may allow for a user’s interactions with the learning system to be recorded and correlated with the user, for tracking of performance to be enabled, or both.
  • the method may include automatically recording a user’s activity.
  • the user's activity may include physical manipulation of one or more learning units, interaction members, or both. Physical manipulation may include frequency, speed, direction, angles, the like, or any combination thereof.
  • the user's activity may include recording of a user’s speech via a microphone.
  • the user’s activity may include recording of one or more speech files and correlating to a specific user.
  • the recording may be completed by the one or more processors via the one or more sensors, microphones, and any other electrical components of the learning system.
  • the method may include automatically determining a user’s progress.
  • the learning system may record a user’s learning progress, such as correctly and/or incorrectly repeating one or more phonemes, placing learning units in sequences to build words, sequencing learning units into new words not previously formed, the like, or any combination thereof.
  • the method may include automatically analyzing vocalizations, speech patterns, and/or the like. This may include analyzing one or more speech files; user’s profile, history, progress; physical manipulation of learning units; and/or the like.
  • the method may include identifying one or more learning disabilities based on a user’s activity and/or progress.
  • a user’s profile, activity, and/or progress may be correlated to one or more standard profiles with similar demographic data, stored data across the system from other users with similar demographic data, or both.
  • a user’s incorrect use of learning units, recurring confusion of certain learning units, long lead time to learning how to correctly pronounce a phoneme, speech patterns, and/or the like, may identify the potential presence of one or more learning disabilities, psychiatric diagnoses, and/or the like.
  • a recurring confusion with placement of learning units representing the letters b, d, p, and q may indicate the potential presence of dyslexia.
  • more frequent and large movements of learning units indicating flailing, shaking, etc. may indicate the potential presence of autism.
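As a sketch of the screening heuristic in the preceding bullets, the snippet below counts recurring confusions among the commonly reversed letters b, d, p, and q and flags the pattern for later review by a licensed professional. The threshold is an illustrative assumption, not a clinical criterion.

```python
# Minimal sketch of the screening heuristic: count recurring confusions
# among the commonly reversed letters b, d, p, and q. The threshold is an
# illustrative assumption, not a clinical criterion.
from collections import Counter

REVERSIBLE = {"b", "d", "p", "q"}
CONFUSION_THRESHOLD = 10   # hypothetical count before flagging for review

def flag_reversal_confusions(events) -> bool:
    """events: iterable of (expected_letter, placed_letter) pairs from
    recorded play; True if the pattern warrants professional review."""
    confusions = Counter(
        (expected, placed) for expected, placed in events
        if expected != placed and {expected, placed} <= REVERSIBLE
    )
    return sum(confusions.values()) >= CONFUSION_THRESHOLD
```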
  • the method may include the learning unit beginning at a ready state.
  • the ready state may function to allow a learning unit which is at rest to detect subsequent motion, position, interaction, and/or the like.
  • the ready state may automatically progress to sensing movement.
  • the method may include sensing movement of a learning unit. Sensing movement of the learning unit may determine whether the learning unit returns to the ready state or progresses to a subsequent state. Sensing movement may be determined by the output of the sensors of the learning unit. Sensing movement may be achieved by determining that the learning unit is in motion, the learning unit is being interacted with, the position is different than a previously registered position, or any combination thereof. When sensing movement is achieved, the learning unit may then begin to sense interactions with one or more other learning units. When sensing movement is not achieved, the learning unit may return to the ready state.
  • The method may include sensing interaction with one or more other learning units.
  • Sensing interaction with other learning units may result in the learning unit emitting an audible phoneme associated with the grapheme represented by the learning unit combined with the graphemes represented by the one or more other learning units which are sensed.
  • the learning unit may sense other learning units using sensors, transmitters, or both which detect transmitters in the other learning units.
  • the learning unit may sense the distance to other learning units and only sense other learning units within a threshold distance.
  • the learning unit may progress to triggering one or more sensory outputs when other learning units are sensed or not sensed.
  • the method may include triggering one or more sensory outputs.
  • the sensory outputs may be triggered after movement is sensed.
  • the sensory outputs may be triggered when other learning units have been sensed or when no learning units have been sensed.
  • the sensory outputs triggered may include an auditory signal, a tactile signal, a visible signal, or any combination thereof.
  • the sensory output may include an auditory signal which is a phoneme.
  • the sensory output may be a phoneme associated with a single grapheme represented by the learning unit when no other learning units were sensed during the previous step of sensing interaction with other learning units.
  • the sensory output may be a phoneme associated with multiple graphemes represented by the learning unit and other learning units when other learning units were sensed during the previous step of sensing interaction with other learning units.
  • the learning unit may return to the ready state after sensory outputs have been triggered.
  • the method may include returning to a ready state and/or rest mode.
  • the learning unit may return to the ready state and/or a rest mode at any time during use by a user or after sensory outputs have been triggered.
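The ready/sense/signal cycle described above (and shown later in FIGS. 8 and 9) can be sketched as a simple state routine; all sensing and output calls below are hypothetical placeholders.

```python
# Minimal sketch of the ready/sense/signal cycle (shown later in FIGS. 8
# and 9). All sensing and output calls are hypothetical placeholders.
def sense_movement() -> bool:
    raise NotImplementedError       # e.g., IMU delta since the last poll

def sense_nearby_units() -> list[str]:
    raise NotImplementedError       # e.g., units within a threshold distance

def emit_output(graphemes: list[str]) -> None:
    raise NotImplementedError       # auditory, tactile, and/or visible signal

def cycle(own_grapheme: str) -> None:
    if not sense_movement():
        return                                # stay in the ready state
    nearby = sense_nearby_units()
    if nearby:
        emit_output([own_grapheme] + nearby)  # combined-grapheme phoneme
    else:
        emit_output([own_grapheme])           # single-grapheme phoneme
    # on return, the unit is back in the ready state
```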
  • Any of the method steps as discussed herein which are completed by one or more of the electronic components, are with respect to analyzing, and/or the like may be automatically executed.
  • the one or more processors may automatically execute one or more of the identified steps of the method upon a user interacting with the learning system.
  • the learning system may also be used as an augmentative and/or alternative communication tool.
  • Users with speech troubles (e.g., mute, slurred speech, learning a new language) may use the learning units to communicate.
  • the physical learning units may provide a screen-free way for users to communicate which may be beneficial in keeping screen time reduced, such as in the case for young children and screen time practices.
  • FIG. 1 illustrates a learning system 10.
  • the learning system 10 includes an interaction member 12 and a plurality of learning units 1.
  • the interaction member 12 includes a handle 28 and a head 30.
  • the head 30 is formed at a distal end 32 of the interaction member 12.
  • the handle 28 commences at a proximal end 34 of the interaction member 12.
  • the learning units 1 are illustrated in a plurality of shapes 100. As an example, the plurality of shapes 100 are illustrated as letters of the alphabet.
  • FIG. 2 illustrates an open interaction member 12.
  • the interaction member 12 includes a housing 3.
  • the housing 3 may include a housing cover 5.
  • the interaction member 12 includes a processor 36.
  • the interaction member 12 includes a sensor 19.
  • the sensor 19 may be an inertial measurement unit (IMU) 42.
  • the interaction member 12 includes a power converter 38 (e.g., DC-DC converter).
  • the interaction member 12 includes an audio amplifier 40.
  • the interaction member 12 includes a transmitter 17.
  • the transmitter 17 may be a radio frequency identification detection transmitter or transceiver 44.
  • the processor 36, sensor 19, power converter 38, audio amplifier 40, and transmitter 17 may make up a control system 46 of the interaction member 12.
  • the control system 46 resides within the handle 28. It is also foreseeable that varying portions of the control system 46 reside within the head 30.
  • the transmitter 17 may be at the head 30 such as to provide for close proximity and accuracy with a learning unit.
  • the interaction member 12 also includes a speaker 13.
  • the speaker 13 is located within the head 30.
  • the interaction member 12 also includes a power source 48.
  • the power source 48 may be a battery 50.
  • the interaction member 12 includes a switch 54.
  • the electrical components (e.g., 36, 19, 38, 40, 17, 13, 48, 54) may all be in electrical communication (direct and/or indirect) with one another via one or more wires 23 or other electrical connections.
  • FIGS. 3A-3D illustrate a learning unit 1.
  • the learning unit 1 is in a shape 100.
  • the shape 100 is in the form of a letter of the alphabet.
  • In FIGS. 3B and 3C, the learning unit 1 is in an overall block shape.
  • the learning unit 1 depicts a first shape 101 and a second shape 102 as an uppercase letter and lowercase letter of the alphabet.
  • the first shape 101 and the second shape 102 are formed as depressions on opposing surfaces of the housing 3.
  • the learning unit 1 houses a transmitter 17.
  • the transmitter 17 may be a transmitting tag, such as a radio frequency identification tag 52.
  • the transmitter 17 of the learning unit 1 may be configured to communicate, be detected by, and/or otherwise establish a wireless connection with the transmitter 17 of an interaction member 12.
  • FIGS. 4A-4D and 12A-12D illustrate a method of using the learning system 10.
  • the learning system 10 includes an interaction member 12 and a plurality of learning units 1.
  • a user (for example, a child) may interact with the learning system 10.
  • the learning units 1 may be arranged on any support surface (e.g., floor, table) and in any sequence.
  • the interaction member 12 may be turned on. Turning on may be completed such as by moving and/or depressing a switch 54 (not shown).
  • the user may move the interaction member 12 to be in proximity of a learning unit 1, such as the first learning unit.
  • the head 30 of the interaction member 12 may pass over, hover, or touch a learning unit 1 at about its middle area.
  • the interaction member 12 may come into sufficient proximity of the learning unit 1 such that the transmitter 17 of the interaction member 12 detects and recognizes the transmitter 17 of the learning unit 1.
  • the interaction member 12 passes over the first of the learning units 1.
  • the transmitter 17 of the interaction member 12 detects and identifies the transmitter 17 of the learning unit 1.
  • the interaction member 12 then recognizes the first shape 101 as the letter “C” which the learning unit 1 represents (via identification by the transmitters 17 and processor 36).
  • the processor 36 (not shown) transmits an audio signal to the audio amplifier 40 (not shown) which then amplifies the signal to drive the speaker 13 (not shown).
  • the interaction member 12 announces “This is the letter C” and/or announces the phonetic sound of the letter “C” such as “K-u-h.” Once the first learning unit 1 having the first shape 101 is recognized and announced, the user moves the interaction member 12 to the subsequent learning unit 1.
  • the interaction member 12 passes over the second of the learning units 1. Again, the transmitters 17 of the interaction member 12 and learning unit 1 make a connection.
  • the interaction member 12 recognizes the identity of the second shape 102 the second learning unit 1 represents. In this example, the letter “A.”
  • the interaction member 12 announces “I don’t know the word spelled C - A” and/or may announce the phonetic sound of the two letters combined, such as “K-a-h.”
  • the interaction member 12 passes over the third of the learning units 1. Similar as to before, the transmitters 17 of the interaction member 12 and learning unit 1 make a connection. The interaction member 12 recognizes the identity of the third shape 103 the third learning unit 1 represents. In this example, the letter “R.” The interaction member 12, similar as to before, announces “This is the letter R” and/or announces the phonetic sound of the letter “R” such as “Rrrrr” or “R-u-h” or similar. As the three shapes 101, 102, and 103 spell the word “CAR,” the interaction member 12 then announces “CAR”, “The word spelled is CAR” or the phonetic sound “K-a-h-r.”
  • the interaction member 12 passes over the fourth of the learning units 1. Similar as to before, the transmitters 17 of the interaction member 12 and learning unit 1 make a connection. The interaction member 12 recognizes the identity of the fourth shape 104 the fourth learning unit 1 represents. In this example, the letter “T.” The interaction member 12, similar as to before, announces “This is the letter T” and/or the phonetic sound “T-e-h”. As the four shapes 101, 102, 103, and 104 spell the word “CART,” the interaction member 12 then announces “CART”, or similarly “The word spelled is CART”, and/or even the phonetic sound “K-a-h-r-t.”
  • one or more light sources 7 may light up to signal to the user that the interaction member 12 has been appropriately positioned relative to the learning unit 1.
  • one or more light sources 7 may be one or more light emitting diodes (LEDs) within the handle 28 or head 30 of the interaction member 12.
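The announcement flow in this C-A-R-T example can be sketched as follows: as each letter is recognized, the letter is announced and the accumulated letters are checked against a word list. The word list and print-based announcements are stand-ins for the audio database and speaker output.

```python
# Minimal sketch of the announcement flow in this example. The word list
# and print-based announcements stand in for the audio database and speaker.
KNOWN_WORDS = {"CAR", "CART"}   # tiny stand-in for a word database

def announce(text: str) -> None:
    print(text)                 # stands in for the speaker output

def on_letter_recognized(spelled: list, letter: str) -> None:
    spelled.append(letter)
    announce(f"This is the letter {letter}")
    word = "".join(spelled)
    if word in KNOWN_WORDS:
        announce(f"The word spelled is {word}")
    elif len(spelled) > 1:
        announce(f"I don't know the word spelled {' - '.join(spelled)}")

# Passing the wand over the units C, A, R, T in order:
letters = []
for letter in "CART":
    on_letter_recognized(letters, letter)
```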
  • FIG. 5 illustrates a learning unit 1.
  • the learning unit 1 has a first shape 101.
  • the learning unit 1 includes a housing 3.
  • the housing 3 is affixable to a housing cover 5.
  • the housing 3 includes a light source hole 9.
  • a light source 7 emits light through the light source hole 9.
  • the housing 3 includes speaker holes 11.
  • FIG. 6 illustrates a learning unit 1.
  • the learning unit 1 has a second shape 102.
  • a printed circuit board 15 is mounted within the learning unit 1.
  • the printed circuit board 15 is in communication with a transmitter 17.
  • the transmitter 17 is mounted within the learning unit 1.
  • the printed circuit board 15 is in communication with a sensor 19.
  • the sensor 19 is mounted within the learning unit 1.
  • the printed circuit board 15 is in communication with a speaker 13.
  • the speaker 13 is mounted within the learning unit 1.
  • the printed circuit board 15 is in communication with a light source 7.
  • the light source 7 is mounted within the learning unit 1.
  • the printed circuit board 15 is in communication with a vibrator 21.
  • the vibrator 21 is mounted within the learning unit 1.
  • FIG. 7 illustrates two learning units 1.
  • the two learning units 1 include a first shape 101 and a second shape 102.
  • a transmitter 17 (not shown) in the learning unit 1 with a first shape 101 emits a primary transmission 25.
  • a transmitter 17 (not shown) in the learning unit 1 with a second shape 102 emits a secondary transmission 27.
  • FIG. 8 illustrates a single letter process 200.
  • the single letter process 200 begins in the ready state 201.
  • the ready state 201 progresses to the movement detection state 202. If no movement is detected in the movement detection state 202, the single letter process 200 returns to the ready state 201. If movement is detected in the movement detection state 202, the single letter process 200 progresses to the signal state 203.
  • FIG. 9 illustrates a multiple letter process 300.
  • the multiple letter process 300 begins in the ready state 301.
  • the ready state 301 progresses to the movement detection state 302. If no movement is detected in the movement detection state 302, the multiple letter process 300 returns to the ready state 301. If movement is detected in the movement detection state 302, the multiple letter process 300 progresses to the unit detection state 303. If no learning unit in close proximity is detected in the unit detection state 303, the multiple letter process 300 returns to the ready state 301. If a learning unit in close proximity is detected in the unit detection state 303, the multiple letter process 300 progresses to the signal state 304.
  • FIGS. 10 and 11 illustrate a learning system 10.
  • the learning system 10 includes an interaction member 12.
  • the interaction member 12 is in the form of a tray.
  • the interaction member 12 has an exterior formed as a housing 3.
  • the interaction member 12 has formed thereon (e.g., on an upper, display surface), a plurality of recognition holders 56.
  • Each recognition holder 56 temporarily holds a single learning unit 1.
  • the interaction member 12 also includes a storage holder 58.
  • the storage holder 58 retains a plurality of learning units 1 in a variety of shapes 100.
  • Each learning unit 1 includes a transmitter 17.
  • Each recognition holder 56 is associated with an individual transmitter 17 retained within the housing 3 of the interaction member 12.
  • the transmitter 17 associated with an individual recognition holder 56 of the interaction member 12 detects and identifies the transmitter 17 of the learning unit 1 located within the individual recognition holder 56.
  • the interaction member 12 also includes speaker holes 11 formed in the housing.
  • the interaction member 12 includes a speaker 13.
  • the interaction member 12 includes a processor 36.
  • the interaction member 12 includes an audio amplifier 40 and a power converter 38.
  • the learning system 10 illustrated in FIGS. 10 and 11 may function similarly to the learning system illustrated in FIGS. 4A-4D. But instead of communication being established between a wand-shaped interaction member 12 and one or more learning units 1, the communication is established between the tray interaction member 12 and the learning unit(s) 1 as they are located into the individual recognition holders 56.
  • Housing: At least 10 letters, 4" high, sans serif, 3/4" thick, Baltic Birch plywood (for robustness), sanded, corners rounded between 1/8" and 1/4" radius, treated with butcher block finish or other non-toxic, food-grade finish, with 12.4mm or 16mm diameter RFID tags (125kHz EM4100/EM4200) embedded mid-thickness.
  • Wand: A plastic or wood "wand", with handle diameter between 3/4" and 1" for ease of grasping by young children; a 2"-wide star or other shape at the end carrying the RFID reader coil (RDM6300); a Bluetooth audio output speaker; Bluetooth link to a base computer (built into the nRF52840 microcontroller board); battery operated (either 3-4 AA/AAA cells or a rechargeable LiPo battery); communicating with a base PC via Bluetooth for processing and logging (base computer TBD). Processing on the base computer to be written in Python when possible.
  • Additional Wand Features: Multiple color-changeable LEDs (e.g., DotStar) for feedback; accelerometer/gyro for measuring wand movement (e.g., BNO085 9-DOF IMU fusion board); one or two buttons for extra input; audio recording for on-the-fly addition of words (via PDM MEMS microphone).
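Consistent with the note above that base-computer processing is to be written in Python when possible, the sketch below logs tag reports arriving over the wand's Bluetooth serial link with timestamps. The port name, line format, and log layout are assumptions, not details from the specification.

```python
# Minimal sketch of base-computer logging: read tag reports arriving over
# the wand's Bluetooth serial link and append them to a timestamped log.
# The port name, line format, and log layout are assumptions.
import time
import serial  # pyserial

def log_wand_events(port: str = "/dev/rfcomm0") -> None:
    with serial.Serial(port, 9600, timeout=1) as link, \
            open("interaction_log.csv", "a") as log:
        while True:
            line = link.readline().decode("ascii", "ignore").strip()
            if line:  # e.g., a tag ID reported by the wand
                log.write(f"{time.time()},{line}\n")
                log.flush()
```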
  • any numerical values recited in the above application include all values from the lower value to the upper value in increments of one unit provided that there is a separation of at least 2 units between any lower value and any higher value. These are only examples of what is specifically intended and all possible combinations of numerical values between the lowest value and the highest value enumerated are to be considered to be expressly stated in this application in a similar manner. Unless otherwise stated, all ranges include both endpoints and all numbers between the endpoints.
  • the terms “generally” or “substantially” to describe angular measurements may mean about +/- 10° or less, about +/- 5° or less, or even about +/- 1° or less.
  • the terms “generally” or “substantially” to describe angular measurements may mean about +/- 0.01° or greater, about +/- 0.1° or greater, or even about +/- 0.5° or greater.
  • the terms “generally” or “substantially” to describe linear measurements, percentages, or ratios may mean about +/- 10% or less, about +/- 5% or less, or even about +/- 1% or less.
  • the terms “generally” or “substantially” to describe linear measurements, percentages, or ratios may mean about +/- 0.01% or greater, about +/- 0.1% or greater, or even about +/- 0.5% or greater.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A learning system for use in educating of a user comprising: a) one or more learning units indicative of one or more symbols; b) one or more interaction members configured to interact with the one or more learning units; c) one or more transmitters in the one or more learning units; d) one or more other transmitters in the one or more interaction members configured to detect the one or more transmitters; e) one or more sensory output elements which output an auditory signal, a tactile signal, and/or a visual signal to an exterior of the one or more learning units. The auditory signal relays one or more phonemes associated with one or more graphemes represented by the learning unit(s).

Description

LANGUAGE AND LITERACY LEARNING SYSTEM AND METHOD
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority benefit to US Provisional Application Nos. 63/479,661, filed on January 12, 2023; 63/479,892, filed on January 13, 2023; and 63/495,855, filed on April 13, 2023; the contents of which are incorporated herein by reference in their entirety.
FIELD
[002] The present teachings generally relate to a phonetic learning device, system, and method. The teachings particularly relate to a system and/or method for teaching a user to learn graphemes (e.g., letters, numbers), learn phonemes (e.g., sounds of letters, numbers), associate phonemes to graphemes (e.g., map phonemes to graphemes), and/or associate graphemes to phonemes. The teachings may find use in teaching a user, such as a child, the alphabetic principle. The teachings may find use in recording user interactions with the device and system.
BACKGROUND
[003] On average, children from low-income families enter school with a full year deficit in cognitive and language development in relation to their higher-income counterparts (U.S. Department of Education, 2007). Although programs targeting this problem have demonstrated evidence of effectiveness (Rand, 2014), scaling the solutions is difficult. As a result, children and families in need are not receiving access to valuable resources, which threatens their development and social mobility. Accordingly, state and federal representatives are urged to collaborate with innovators and enact policies in order to support the creation, dissemination, and implementation of empirically informed products, programs, and services to support low-income children and families. Research has shown that the language environment and the quality of adult-child interactions account for much of the development gap and have the potential for remediation to break the intergenerational transmission of poverty (U.S. Department of Education, 2007).
[004] The Early Childhood Longitudinal Study found that 93% of low-income children born in 2001 were in poor or mediocre care from infancy through preschool (USDOE, 2007). The divergence of early opportunities between economic groups helps to explain how the Early Childhood Opportunity gap emerges prior to school entry. Importantly, these findings highlight promising pathways for intervention for impoverished families. It has been established that children from the most disadvantaged families are often further marginalized by their early experiences, so it is important to improve each environment in which they may learn, including childcare settings and their home environments.
[005] The Early Childhood Opportunity gap is significant for a number of reasons. From a developmental neuroscience perspective, it is critical to appreciate that rapid brain development in the first years of life forms the foundation for all future learning (Shonkoff & Phillips, 2000). As infants engage with their environments from birth, synaptic connections build neural networks for auditory and language development and higher cognitive functions. These interactions promote the creation of brain architecture that can bolster future success in life (Center on the Developing Child, 2011). Early life experience is the foundation upon which linguistic, perceptual, and cognitive development is dependent (Fox, Levitt, & Nelson, 2010, p. 28). Consequentially, if children are deprived of the opportunity to listen to and engage with rich language environments during early sensitive and critical periods, it can have a lasting negative impact (Sameroff & Fiese, 2000).
[006] Aligned with this developmental neuroscience research, the Nobel Prize-winning economist James Heckman contends that the earlier an investment is taken in the life of a child, the higher the return on that investment (Heckman, 2006). Although efforts are made to minimize this Early Childhood Opportunity gap upon school entry, the timing of these educational interventions weakens their efficacy. Early disadvantage harms children long before they enter the U.S. education system. In fact, research demonstrates that disadvantaged children as young as nine months of age already have delays in cognitive, language, and social-emotional development (USDOE, 2007).
[007] Although there are moves by the current administration to increase the provision of Early Childhood Care and Education, funding remains stalled in Congress as political will on a federal level is low. In addition to this challenge, many states believe that ECCE should be under the domain of state legislation, which serves to make it an even more complicated political issue, competing with a myriad of other important issues for state funding (Klein, 2015). Although some federal efforts aim to mitigate the Early Childhood Opportunity gap by promoting more enriching opportunities for children prior to school entry, inadequate funding denies the majority of children services they require. One such program is Early Head Start (EHS). EHS programming is designed to promote healthy early development in disadvantaged children, but it reaches only 4% of all eligible children in the U.S. (U.S. Department of Health and Human Services, 2013).
[008] Social class is a moderator of parent approaches to interacting with and teaching infants and toddlers (Britto, Fuligni, & Brooks-Gunn, 2002). Strikingly, research has revealed that teaching strategies and language use in the home could account for 25% - 60% of the Early Childhood Opportunity gap that emerges between socioeconomic groups (Britto, Brooks-Gunn, & Griffin, 2006). For instance, in the domain of language and literacy, Hart and Risley's work reveals that by age four, a boy from a low-income family may hear 32 million words fewer than his peers (1995). This early lack of opportunity for linguistic exchanges that promote the capacities for learning is undermining impoverished children from the moment they are born. Although many infants and toddlers in poverty encounter an abundance of risk factors that threaten their opportunity to flourish, intervention research has demonstrated that positive experiences can shape adult behavior (Suskind et al., 2016) and have a buffering effect on child development (Landry, Smith, Swank, & Guttentag, 2008).
[009] Phonetic learning devices, systems, and methods are widely used by educators as a tool to teach audible sounds associated with a letter or multiple letters combined. This audible noise is also known as a phoneme. A letter or multiple letters combined are also known as a grapheme. The association between graphemes and phonemes is fundamental in learning how to read and write an alphabetic language.
[010] Some currently available phonetic learning devices are units in the shape of an alphabetic letter. Examples of such learning devices are illustrated in US Patent No. US 5,188,533 and US Publication No. US 2016/0055755, incorporated herein by reference in their entirety. These units may teach the user phonetics by emitting as sound the phoneme associated with the grapheme that the unit embodies. Existing devices of this type rely on the user pressing a button on the unit or squeezing the unit. These devices have a limited ability to teach a user phonetics. Learning the phoneme associated with an individual grapheme is an initial step in teaching phonetics, but these devices do not enable the user to learn the phoneme associated with two or more graphemes combined. Additionally, the user is required to have the proficiency to push a button, squeeze the device, or otherwise actively initiate the audible phoneme. Young users, such as infants or toddlers, or disabled users may be unable to effectively interact with these devices if they are incapable of pressing a button, squeezing the device with sufficient strength, or not yet able to understand cause and effect.
[011] Other available phonetic learning systems include multiple units which represent an alphabetic letter and a central working platform. Such an exemplary learning system may be found in US 2005/0064372, incorporated by reference in its entirety. The individual units in these multiple unit systems are not capable of producing sounds themselves. The central working platform is the only component of these systems which is capable of producing the audible phoneme associated with a grapheme or group of graphemes represented on the units. These phonetic learning systems are an improvement over single unit phonetic learning devices because they enable the user to learn the phoneme associated with multiple graphemes. However, these phonetic learning systems introduce multiple disadvantages when compared to single unit phonetic learning devices. The most significant disadvantage of existing phonetic learning systems with multiple letter units is the requirement of the central working platform. Young users, such as an infant or toddler, may not be capable of understanding how to create an interaction between the units and the central working platform, and may not have the attention span to interact with a stationary central working platform. Additionally, the central working platform must be able to accommodate multiple letter units on its surface, which makes the central working platform undesirably large.
[012] Existing phonetic learning devices and systems lack the ability to record user interactions and assess the user’s progress in developing their phonetic skills. Patterns in the user’s interactions with a phonetic learning device or system can enable analysis of the user’s progress in developing phonetic skills. Understanding a user’s skill level can enable an educator to determine if a more advanced phonetic learning system is appropriate for a skilled user. Alternatively, a user who is demonstrating a skill level below a level typical for their age may be identified for supplemental assistance in learning phonetics.
[013] What is needed is a phonetic learning system with multiple letter units which does not require a central working platform to teach the user phonemes associated with both single and multiple graphemes. What is needed is a phonetic learning system which is capable of recording a user's interactions with the system for later analysis. What is needed is a phonetic learning system which is appropriate for young users who may not be able to press a button or squeeze a unit.
SUMMARY
[014] The learning system of the present disclosure helps to address the pervasive intergenerational transmission of poverty by offering a robust, evidence-informed language learning experience for children, regardless of the economic status they are assigned prenatally. The system reinforces the child's learning of critical language, cognitive, and literacy skills and reinforces adult learning of critical behaviors to support child development. The learning system may promote warm, responsive, and connected dyadic interactions that support healthy relationships, increase academic success, and strengthen opportunities for social mobility. The learning system and method of the present disclosure utilize educational neuroscience and behavioral design to provide an evidence-based, scientifically grounded tool for teaching users language skills, including the alphabetic principle. The learning system and method of the present teachings may teach varying aspects of language and literacy, in addition to the alphabetic principle, including blending, spelling, decoding, and the like.
[015] The present disclosure relates to a learning system for use in educating a user comprising: a) one or more learning units indicative of one or more symbols; b) optionally, one or more interaction members configured to interact with the one or more learning units; c) one or more transmitters in the one or more learning units; d) optionally, one or more other transmitters in the one or more interaction members configured to detect the one or more transmitters; e) one or more sensory output elements which output an auditory signal, a tactile signal, and/or a visual signal to an exterior of the one or more learning units, the one or more interaction members, or both to be sensed by the user and which are related to the one or more symbols; wherein the one or more learning units and, optionally, the one or more interaction members are configured to be manipulated by the user and, based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or any combination thereof related to the one or more learning units and/or the one or more interaction members, the auditory signal, the tactile signal, and/or the visual signal is generated and transmitted to the exterior of the one or more learning units and/or the one or more interaction members via the one or more sensory output elements.
[016] The present disclosure relates to a method of using the learning system by a user.
[017] The present teachings relate to a method of using a learning system for a user to learn and associate one or more phonemes to one or more graphemes, the method including: a) the user physically manipulating one or more learning units and/or one or more interaction members to cause the learning system to be activated; and b) the user physically manipulating the one or more learning units and/or the one or more interaction members such that based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or a combination thereof of the one or more learning units and/or the one or more interaction members, an auditory signal, and optionally, a tactile signal and/or a visual signal, is generated that is transmitted to an exterior of the one or more learning units and/or the one or more interaction members via one or more sensory output elements.
[018] The disclosure may provide for a system for teaching a user phonetic skills comprising different learning units representing different symbols (e.g., letters, numbers, operators, etc.); sensors to detect movement of a learning unit, interaction member, or both, or proximity to learning units; transmitters for interaction between one or more interaction members, one or more learning units, or both and other learning units and/or interaction members; and/or one or more sensory output elements which generate sound, vibrations, light, the like, or any combination thereof based on movement of the unit by the user and/or interaction with the other components of the system. The sound produced may be the one or more phonemes associated with a single unit or multiple units in close proximity to each other.
BRIEF DESCRIPTION OF DRAWINGS
[019] FIG. 1 is a plan view of a learning system.
[020] FIG. 2 is an interior view of an interaction member.
[021] FIG. 3 A is a perspective view of a learning unit.
[022] FIG. 3B is a front perspective view of a learning unit.
[023] FIG. 3C is a rear perspective view of a learning unit.
[024] FIG. 4A illustrates the functioning of a learning system.
[025] FIG. 4B illustrates the functioning of a learning system.
[026] FIG. 4C illustrates the functioning of a learning system.
[027] FIG. 4D illustrates the functioning of a learning system.
[028] FIG. 5 is a perspective view of single learning unit.
[029] FIG. 6 is a perspective view of an interior of a learning unit.
[030] FIG. 7 illustrates wireless interaction between two learning units.
[031] FIG. 8 is the process performed by a singular learning unit.
[032] FIG. 9 is the process performed by a learning unit when in close proximity to another learning unit.
[033] FIG. 10 illustrates a top plan view of a learning system.
[034] FIG. 11 illustrates a side plan view of a learning system.
[035] FIG. 12A illustrates the functioning of a learning system.
[036] FIG. 12B illustrates the functioning of a learning system.
[037] FIG. 12C illustrates the functioning of a learning system.
[038] FIG. 12D illustrates the functioning of a learning system.
DETAILED DESCRIPTION
[039] The explanations and illustrations presented herein are intended to acquaint others skilled in the art with the present teachings, its principles, and its practical application. The specific embodiments of the present teachings as set forth are not intended as being exhaustive or limiting of the present teachings. The scope of the present teachings should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. The disclosures of all articles and references, including patent applications and publications, are incorporated by reference for all purposes. Other combinations are also possible as will be gleaned from the following claims, which are also hereby incorporated by reference into this written description.
[040] Learning System
[041] The present teachings relate to a learning system. The learning system may function as a phonetic learning device or other learning device. The learning system may function to teach phonetics to users, such as infants or toddlers. The learning system may function to audibly emit one or more phonemes related to one or more graphemes. The learning unit may function to teach a user a grapheme related to a phoneme, a phoneme related to a grapheme, or both. The learning system may function to teach a user mathematical operators, equations, numbers, counting, resulting answers, and/or the like. The learning system may function to teach a user chemical symbols, equations, and the like. The learning system may even function to teach a user a different language. The learning system may audibly emit the phoneme associated with a single learning unit representing a grapheme, or multiple learning units representing multiple graphemes. The learning system may provide one or more sensory outputs (e.g., light, vibrations, etc.) to a user, such as to encourage and reinforce continued use (e.g., play) of the learning system. The learning system may include one or more interaction members, learning units, housings, light sources, speakers, circuit boards, processors, transmitters, sensors, vibrators, power converters, audio amplifiers, power sources, switches, the like, or any combination thereof.
[042] The learning system may include or be free of one or more interaction members. The one or more interaction members may function to cooperate with one or more learning units. The one or more interaction members may function to detect and/or identify the grapheme or other symbol and/or shape represented by one or more learning units. The one or more interaction members may function to relay (e.g., audibly relay) the phoneme associated with the grapheme(s) of one or more learning units. The one or more interaction members may be configured to cooperate with one or more learning units. The one or more interaction members may function to pair (e.g., wireless electronic transmission) with one or more learning units. The one or more interaction members, or components thereof, may transmit one or more signals to one or more learning units. The one or more interaction members may function to house one or more learning units. The one or more interaction members may be a wand, mouse, remote, tray, box, support, the like, or any combination thereof. The one or more interaction members may be easily graspable by a user. The one or more interaction members may be moveable such as to hover over and/or contact one or more learning units. The one or more interaction members may be configured to remain static while one or more learning units are located thereon and/or therein. The one or more interaction members may include a housing. The one or more interaction members may include one or more electrical components. The electrical components may be located within the housing. The one or more electrical components may include one or more sensors, transmitters, power sources, speakers, amplifiers, microphones, processors, circuit boards, non-transitory storage mediums, switches, power converters, wires, the like, or any combination thereof. The one or more interaction members may include or be free of a graphic user interface (GUI). Being free of a graphic user interface allows for the learning system to be used without concern about screen time for infants and toddlers. It is foreseeable that the learning system may be entirely free of an interaction member, with the learning units reacting with one another, being initiated directly by a user, and/or the like.
[043] The learning system may include one or more learning units. The learning unit may function to physically and/or visually represent one or more graphemes associated with one or more phonemes, represent one or more alphanumeric characters, and/or represent one or more other symbols (e.g., mathematical operators, chemistry symbols). The learning unit may function to audibly emit one or more phonemes associated with one or more graphemes. The learning unit may audibly emit the phoneme associated with a single learning unit representing a grapheme or multiple learning units representing multiple graphemes. In addition to, or in the alternative, an interaction member may function to audibly emit the one or more phonemes associated with the one or more graphemes. A learning unit may also work as an interaction member, in lieu of an interaction member, or both. A learning unit may also emit light or produce vibrations in addition to emitting the audible phoneme. By audibly relaying one or more phonemes and/or otherwise outputting one or more sensory outputs (e.g., light, vibrations, etc.), the learning unit may reinforce learning of a phoneme and grapheme (or other symbols).
The learning unit may transmit signals to or receive signals from other learning units, interaction members, or both. The learning unit may contain sensors to detect manipulation by a user. The structure of the learning unit may include a housing.
[044] The learning system may include one or more housings. A housing may function to represent a grapheme, display a grapheme, or both. A housing may function to house one or more components of a learning unit, interaction member, or both. The housing may function to attract a user to play with and manipulate an interaction member, one or more learning units, or both. The housing may cooperate with or include a housing cover to enclose one or more components within an interaction member, one or more learning units, or both. The housing may be rigid, flexible, or both. The housing may be one piece or comprised of multiple pieces. The housing may be opaque, translucent, or a combination thereof. The housing may be manipulated by a user. The housing may include one or more sensors, transmitters, sensory output elements, PCBs, power converters, processors, audio amplifiers, speakers, power sources, light sources, and/or the like mounted to an internal surface, external surface, or both. The housing may have a plurality of holes passing through the housing to allow the emission of sound and/or light. The housing may be a thermoplastic or thermoset material. The housing may be formed by injection molding, blow molding, vacuum forming, polymer casting, CNC machining, 3D printing, milling, jointing, planing, cutting, sawing, drilling, boring, gluing, clamping, veneering, laminating, the like, or any combination thereof. The housing may be formed by one or more techniques suitable for one or more polymers, organic materials, or both. Organic material may include wood, sisal, rattan, cotton, the like, or any combination thereof. The housing may include one or more safety features for being handled by a small child. The safety feature(s) may prevent the housing from being a choking hazard, laceration hazard, or both. The housing may be sufficiently large to avoid being a choking hazard. The interaction member may include a housing separate from a housing of one or more learning units. A housing may have an overall width and/or length of about 1.25” or greater, about 1.5” or greater, about 1.75” or greater, or even 2” or greater. A housing may have an overall width and/or length of about 20” or less, about 18” or less, or even about 16” or less. Width may be measured side to side while length may be measured from a proximal end to a distal end. The width and/or length of a housing of an interaction member may be the same, similar, or different than that of the housing of a learning unit. The housing may be designed such as to meet the small parts regulation set by the US Consumer Product Safety Commission (e.g., larger than 1.25” width and larger than 2.25” length). The testing standard may be the cylinder testing standard set forth in 16 C.F.R. 1501.4.
[045] The one or more learning units may each include a housing. The housing of a learning unit may embody the shape of a grapheme, alphanumeric character, or any other symbol (e.g., mathematical operator, chemistry symbol). The housing may not be in the shape of a grapheme, alphanumeric character, and/or symbol but have one or more representations thereof on the surface of the housing. The housing may be or include the three-dimensional shape of one or more letters, numbers, symbols (e.g., chemical symbols, chemical bonds, mathematical operators, punctuation symbols, grammar symbols), and/or the like. For example, a housing may be in a three-dimensional shape of a letter, such as the letter "H". As another example, the housing may be in a three-dimensional shape of a cube, cuboid, cylinder, pyramid, triangular prism, cone, sphere, partial sphere, hexagonal prism, the like, or any combination thereof. For example, the housing may be in the shape of a cube with a letter, or other character/symbol, located on one or more outer surfaces of the cube (e.g., printed, affixed, carved, molded into). And as a further example, the housing may be the shape of common children's toys with the one or more graphemes displayed thereon. For example, the housing(s) of learning unit(s) may be shaped like train cars and have the letters of the alphabet located thereon.
[046] A learning system may include a plurality of housings for forming a plurality of units. Each housing may be in the same or a different shape as another housing of another learning unit. A first housing may be in a first shape while a second housing may be in a second shape, and so forth. For example, a first shape of a first learning unit may be one letter from the alphabet while the second shape of a second learning unit is another letter from the alphabet. A plurality of learning units may provide for a portion of or an entirety of all alphanumeric representations. For example, a plurality of learning units of a learning system may include up to 36 housings, including 26 letter shapes (i.e., A to Z) and 10 number shapes (i.e., 0 to 9).
[047] The one or more interaction members may include one or more housings. The housing of an interaction member may function to cooperate with, detect, pair, house, and/or support one or more learning units, a plurality of electrical components, or a combination thereof. The housing of an interaction member may have any suitable shape to allow a user to manipulate the interaction member to cooperate with one or more learning units, be passed over one or more learning units, be located below and support one or more learning units, the like, or any combination thereof. The one or more housings may be shaped such as a wand, a rod, a pen, a remote, a mouse, a tray, a box, the like, or any combination thereof. For example, a housing of an interaction member may be wand-shaped. Wand-shaped may mean having a handle and a head. The handle may commence at a proximal end. The head may be located at the distal end. The handle and/or the head may be cuboidal, cylindrical, prismatic, conical, spherical, pyramidal, the like, or any combination thereof. The handle may have one or more contours formed therein for easier holding (e.g., a substantially cylindrical-shaped handle with a narrower diameter mid-section). The head may also form one or more 2D and/or 3D shapes. The shape may appeal to users, especially children. The shape may be star-shaped, cloud-shaped, diamond-shaped, moon-shaped (e.g., crescent), heart-shaped, animal-shaped (e.g., butterfly, bear), flower-shaped, car-shaped, rocket-shaped, plane-shaped, the like, or any combination thereof. A tray-shaped housing may be any suitable 3D shape for supporting a plurality of learning units for a user to view and interact with. A tray-shaped housing may include an upper surface opposing a lower surface. A lower surface may function to rest on a supporting surface (e.g., table, floor). An upper surface may function to display and/or store one or more learning units. An upper surface may have one or more recognition holders, storage holders, or a combination thereof stored therein. One or more recognition holders may function to temporarily store and display one or more learning units which are being interacted with (e.g., recognized, identified, audibly relayed). One or more recognition holders may be located adjacent to one or more transmitters. Each individual recognition holder may be associated with its own individual transmitter or share a transmitter with one or more other recognition holders. One or more storage holders may function to store one or more learning units not being actively interacted with. As another variation, an interaction member may be an interactive member of a play set. For example, an intersection, tunnel, sign, light, or a combination thereof in a train set may be employed as an interaction member.
[048] A housing may cooperate with or include a housing cover. The housing cover may function to enclose any internal components in the housing, allow access to internal components, or both. The housing cover may have a shape substantially reciprocal with at least a portion of a housing. A housing cover may have a shape substantially reciprocal with a surface, side, or a portion thereof of a housing. A housing cover may be removably affixed to a housing. A housing cover may be affixed via one or more fasteners, snap fit, friction fit, the like, or any combination thereof. The housing cover may be opaque, translucent, or both. The housing cover may have an identical, similar, or different opacity and/or transparency as the remainder of the housing.
[049] The housing may include one or more mating features. One or more mating features may function to engage or otherwise mate one housing to one or more other housings. A plurality of housings engaged together may make a digraph (e.g., two letters making a sound), trigraph (e.g., three letters making a sound), a word, an equation, a number, the like, or any combination thereof. The one or more mating features may include one or more projections, indentations, or both. For example, each housing may be equipped with male feature(s) on one side and female feature(s) on the opposing side to provide for a universal mating scheme. The one or more mating features may include magnets, hook and loop fasteners, pegs and clamps (e.g., similar to a children's train set attachment), male keying features (e.g., tabs, pegs), female keying features (e.g., openings, slots), the like, or a combination thereof. One housing may then be magnetically attached or otherwise temporarily attached to another housing. The one or more mating features may be located on one or more sides (e.g., peripheral sides) of the one or more housings. The sides may be recognized as the left and right sides of the graphemes, such as in the case of typical letters and numbers. One or more interior components of a plurality of housings may work together when the housings are joined or otherwise mated. For example, the speakers of joined housings may work together to simultaneously audibly relay a sound associated with a digraph, or only one speaker may remain active while the others remain silent. For example, lights may all light up simultaneously or provide a flowing light pattern which follows the pronunciation of the letters (e.g., lights moving left to right as each phoneme is audibly relayed from one or more speakers).
[050] The learning system may include one or more sensory output elements. The one or more sensory output elements may function to output one or more auditory signals, tactile signals, visual signals, the like, or a combination thereof. The sensory output may be transmitted to an exterior of the housing. The sensory output may be sensed by a user. The sensory output may be related to the one or more symbols of a housing, an interaction between two housings, the like, or a combination thereof. One or more sensory output elements may be part of an interaction member, one or more learning units, or both. The one or more sensory output elements may include one or more speakers, light sources, vibration sources, the like, or any combination thereof. The sensory output elements may be triggered when a learning unit and/or interaction member is manipulated by a user. The sensory output elements may teach a user to distinguish between preferred combinations of graphemes and unpreferred combinations of graphemes. The sensory output elements may output a first combination of auditory, visual, and tactile signals when a preferred combination of graphemes is combined and a second set of auditory, visual, and tactile signals when an unpreferred combination of graphemes is combined. The sensory output elements may audibly emit the phoneme representing any unique combination of graphemes. The sensory output elements may output the phoneme representing the graphemes of learning units in close proximity, with a learning unit detecting which other learning units are in close proximity and recognizing the unique graphemes those learning units represent.
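As a non-limiting illustration of the signal-set selection described in paragraph [050], the following Python sketch shows one possible way a processor might choose between a first and a second combination of outputs; the combination list, file names, and cue labels are hypothetical placeholders rather than part of the present teachings.

# Non-limiting sketch: selecting a sensory output set based on whether the
# combined graphemes form a preferred combination (all names are illustrative).
PREFERRED_COMBINATIONS = {"sh", "ch", "th", "at", "cat"}  # hypothetical list

def select_outputs(graphemes):
    """Return auditory/visual/tactile cues for a combination of graphemes."""
    combo = "".join(graphemes).lower()
    if combo in PREFERRED_COMBINATIONS:
        # First signal set: affirming cues for a preferred combination.
        return {"audio": combo + ".mp3", "light": "steady_green", "vibration": "short_pulse"}
    # Second signal set: neutral cues prompting the user to try again.
    return {"audio": "try_again.mp3", "light": "blinking_amber", "vibration": None}

print(select_outputs(["s", "h"]))  # preferred combination -> first signal set
print(select_outputs(["q", "x"]))  # unpreferred combination -> second signal set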
[051] The learning system may include one or more light sources. A light source may function to emit light, gain attention of a user, maintain attention of a user, or any combination thereof. A light source may function to emit light with and/or related to one or more phonemes or other sounds being emitted. A light source may emit light when an interaction member cooperates with one or more learning units. A light source may emit light when one learning unit cooperates with another learning unit. The light source may emit light when the learning unit is manipulated by a user. The light source may be mechanically or adhesively affixed to the housing. The light source may be affixed to the housing near a light source hole, a translucent portion, or both. One or more light sources may be one or more light emitting diodes (LED), fluorescent bulbs, incandescent bulbs, the like, or a combination thereof. The light source may be in communication (e.g., electrical communication) with and/or controlled by a control system, a circuit board, processor, and/or the like. The light source may indicate which portion of a phoneme corresponds to a specific grapheme when multiple learning units representing multiple graphemes are represented by the emitted phoneme. The light sources of two or more learning units may emit light sequentially when the learning unit audibly emits a phoneme associated with multiple graphemes. A light source of an interaction member may emit light when the interaction member successfully recognizes a learning unit (e.g., transmitter of interaction member detects transmitter of learning unit).
[052] The learning system may have one or more light source paths. A light source path may allow for emission of light from a housing or other surface. The light source paths may be formed in a housing of an interaction member, one or more learning units, or both. A light source path may include one or more light source holes, translucent portions, or both in a housing. A light source hole may pass through the housing surface. One or more light source holes may pass through one surface or a plurality of surfaces of the housing. The light source hole may have a light source mounted in close proximity. In addition to, or alternatively, the housing may have one or more transparent portions. A transparent portion may allow for lighting emitted within the housing to be emitted outside of the housing.
[053] The learning system may include one or more vibration sources. The vibration source may provide a tactile signal to a user. The vibration source may be provided in one or more learning units, an interaction member, or any combination thereof. The vibration source may transfer a vibration force to the housing. The vibration force may result from interaction with a housing by a user. The vibration force may result from a user manipulating a housing, correctly verbally repeating back a phoneme related to the grapheme represented by the housing, having one housing interact with one or more other housings, the like, or a combination thereof. The vibrator may include an electric motor or an electro-mechanical transducer capable of producing vibrations. The vibrator may be in communication with the circuit board, processor, and/or the like. The vibrator may be turned on and off. The one or more vibration sources may be in communication with, and/or controlled by, one or more control systems, processors, circuit boards, and/or the like.
[054] The learning system may include one or more speaker holes. The speaker holes may allow for emission of audible sound from the housing to the exterior of the housing. One or more speaker holes may be formed in an interaction member, one or more learning units, or both. The speaker holes may be formed as a single hole or as one or more holes through the surface of the housing. The speaker holes may have a speaker located in close proximity. The speaker holes may be formed in one or a plurality of surfaces of the housing. The speaker holes may be formed on a surface which faces toward and/or away from a user.
[055] The learning system may include one or more speakers. One or more speakers may function to provide audible sounds, make an audible phoneme sound associated with a grapheme, or both. The speaker may emit audible sound when a learning unit is manipulated by a user, an interaction wand is manipulated by a user, an interaction member detects a learning unit, the like, or any combination thereof. The speaker may have any configuration suitable for providing audible phoneme(s) to a user. One or more speakers may refer to one or more audio amplifiers, speakers, electrical component(s) configured to convert electrical signals into sound, or any combination thereof. One or more speakers may be affixed to and/or reside within a housing. The speaker may be mechanically and/or adhesively affixed to the housing. One or more speakers may reside within an interior of the housing, such as to be protected and avoid being damaged by a user (e.g., played with by an infant or toddler). The speaker may be mounted near the speaker holes. The speaker may be any electrically driven transducer capable of audible sound emission. The speaker may be in communication with and/or controlled by a control system, a circuit board, a processor, the like, or any combination thereof. An exemplary audio amplifier may be a class-D amplifier. For example, the PAM8403 Mini 2-Channel 3W Stereo Class D Audio Amplifier by Envistia Mall may be suitable. An exemplary audio speaker may be a 4 ohm or 8 ohm speaker, for example, a 4 ohm, 3 W, 1.5-inch diameter speaker.
[056] The learning system may include one or more microphones. The one or more microphones may function to receive sound (e.g., voice) emitted from one or more users, the ambient environment, or both. One or more microphones may have any configuration suitable for receiving sound. One or more microphones may be in electrical communication with and/or controlled by a control system, a circuit board, a processor, or any combination thereof. One or more microphones may be affixed within the housing. One or more microphones may be near or even adjacent to one or more openings of the housing. The one or more openings may be separate from or the same as the one or more speaker openings. The opening may allow for sound waves from the voice of a user or the ambient environment to be received by the microphone. The one or more microphones may transmit one or more speech signals toward one or more processors. One or more processors may work to interpret and/or convert the incoming speech signal.
[057] The learning system may include one or more circuit boards. A circuit board may be a printed circuit board (PCB). The circuit board may communicate with and/or include one or more processors, storage mediums, sensors, transmitters, sensory output elements, power sources, light sources, speakers, microphones, other electrical components, the like, or any combination thereof. The circuit board may be mechanically or adhesively affixed to the housing. The circuit board may be powered by a battery or other power source. The circuit board may be powered inductively by an inductive field transmitted through the housing. A circuit board may be associated with an interaction member, one or more learning units, or any combination thereof. A circuit board may form a backbone of a control system of an interaction member, learning unit, or both. A circuit board may be formed by a plurality of electronic modules cooperating together or may be custom made such that the modules are integrated directly into a custom circuit board.
[058] The learning system may include one or more processors. The one or more processors may function to initiate functionality of one or more components of the learning system, analyze one or more signals incoming from one or more components, receive and/or transmit data signals, or any combination thereof. One or more processors may function to receive signals from one or more electrical components, transmit signals to one or more electrical components, or both. Exemplary ways the one or more processors may function and cooperate with electrical components include: receiving one or more identification signals (e.g., a signal from one or more transmitters specific to a learning unit); identifying one or more learning units based on the identification signal (e.g., identifying the letter "A"); accessing one or more identification databases, phoneme databases, speech-to-text databases, and/or the like; matching one or more identification signals to one or more identifiers (e.g.,
matching the unique signal from a transmitter to an identifier, such as in a storage medium or even a database); accessing one or more audio files (e.g., audio database, storage medium); retrieving one or more audio files associated with one or more identification signals, identifiers, or both; retrieving one or more audio files based on a sequence and/or combination of a plurality of identification signals, identifiers, or both; decoding one or more audio files into one or more audio instruction signals; translating one or more audio files from a stored language in the one or more audio databases into a different language for transmitting as an audio instruction signal(s); accessing one or more translation services/systems and transmitting the one or more stored languages for translating into a desired language; retrieving one or more audio files after translation from a translation service/system; relaying one or more audio instruction signals to one or more audio amplifiers, speakers, or both; receiving one or more speech signals; converting one or more speech signals into speech files; translating one or more speech files from the received language into the stored language; accessing one or more translation services/systems and transmitting the one or more received language files; retrieving one or more received speech files after translation as one or more speech files ready for storage; storing one or more speech files in one or more storage mediums; and/or analyzing one or more speech files, user profiles, user history, user progress, and/or the like.
One or more processors may be part of one or more interaction members, learning units, or any combination thereof. One or more processors may be located within the housing, outside of the housing, within a base unit, within a computing device of the learning system, part of a server of a system, remotely located from the learning system, the like, or any combination thereof. One or more processors may be in communication with one or more other processors. For example, a processor within a housing may be in direct or indirect communication with a processor that is part of a base unit or even a remotely located server. One or more processors may be part of one or more hardware systems, software systems, or any combination thereof. One or more hardware processors may include one or more central processing units, multi-core processors, front-end processors, microcontrollers, the like, or any combination thereof. One or more processors may include one or more cloud-based processors. An exemplary processor may be the Adafruit nRF52840 Express Feather ARM processor by Adafruit. Another exemplary processor may be a Raspberry Pi Zero by Raspberry Pi. Another exemplary processor may be a custom, embedded microcontroller formed directly into the circuit board. The one or more processors may be in communication with and/or include one or more storage mediums.
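As a non-limiting illustration of the exemplary identification-to-audio flow recited in paragraph [058], the following Python sketch traces one possible path from an identification signal to a relayed phoneme; the tag identifiers, file names, and the play_audio() routine are hypothetical placeholders.

# Non-limiting sketch of the identification-to-audio flow of paragraph [058].
# Tag identifiers, file names, and play_audio() are hypothetical placeholders.
TAG_TO_GRAPHEME = {"0A3F1B": "A", "0A3F1C": "B"}                   # identification database
GRAPHEME_TO_AUDIO = {"A": "phoneme_a.mp3", "B": "phoneme_b.mp3"}   # audio database

def play_audio(path):
    # Stands in for decoding the file and relaying it to the amplifier/speaker.
    print("playing", path)

def handle_identification_signal(tag_id):
    """Match an incoming transmitter signal to a grapheme and relay its phoneme."""
    grapheme = TAG_TO_GRAPHEME.get(tag_id)
    if grapheme is None:
        return  # unknown learning unit; ignore the signal
    audio_file = GRAPHEME_TO_AUDIO.get(grapheme)
    if audio_file:
        play_audio(audio_file)

handle_identification_signal("0A3F1B")  # -> playing phoneme_a.mp3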
[059] The learning system may include one or more storage mediums. The one or more storage mediums may function to receive and/or transmit one or more data entries from one or more components of the system, store one or more algorithms, store computer-readable instructions (e.g., software programs), or any combination thereof. The one or more storage mediums may include one or more storage devices, memory storage devices, or both. The one or more storage devices may include one or more non-transient storage devices. A non-transient storage device may include one or more physical servers, virtual servers, physical computing devices, or a combination thereof. One or more servers may include one or more local servers, remote servers, or both. One or more storage mediums may include one or more hard drives (e.g., hard drive memory), chips (e.g., Random Access Memory "RAM"), discs, flash drives, memory cards, the like, or any combination thereof. The one or more storage mediums may be part of one or more interaction members, learning units, or both. The one or more storage mediums may be located within one or more housings, on a circuit board, part of a processor, a storage compartment, base unit, servers, computing devices, the like, or a combination thereof. The one or more storage mediums may be in communication with one or more processors. The one or more storage mediums may receive data entries from one or more processors, may transmit one or more data entries to one or more processors, or both. One or more storage mediums may have a volume capacity. A volume capacity may be about 1 MB or greater, about 3 MB or greater, about 5 MB or greater, or even about 8 MB or greater. A volume capacity may be about 50 MB or less, about 30 MB or less, or even about 15 MB or less. The volume capacity should be configured to retain at least 100 words or more, 1,000 words or more, or even 5,000 words or more. The volume capacity should be configured to retain 50,000 words or less, 35,000 words or less, or even 10,000 words or less. Words may be stored in any suitable format for accessing and transmitting to one or more speakers, such as in MP3 format. The one or more storage mediums may store data in the form of one or more databases.
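As a rough, non-limiting check on these capacities (the bitrate and clip length below are assumed figures, not taken from the disclosure): at a low speech bitrate of 16 kbps, one second of MP3 audio occupies about 2 KB, so a 15 MB medium could hold on the order of 7,500 one-second clips, consistent with the 5,000 to 10,000 word range above.

# Back-of-the-envelope check with assumed figures (not from the disclosure).
capacity_mb = 15        # an upper volume capacity noted in paragraph [059]
bitrate_kbps = 16       # assumed low-bitrate MP3 speech encoding
clip_seconds = 1.0      # a single phoneme or short word

bytes_per_clip = bitrate_kbps * 1000 / 8 * clip_seconds  # = 2,000 bytes
clips = capacity_mb * 1_000_000 / bytes_per_clip
print(round(clips))     # -> 7500 clips, within the 5,000-10,000 word range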
[060] The learning system may include one or more databases. One or more databases may function to receive, store, and allow for retrieval of information related to usage of the learning unit, instructions for the learning unit (e.g., software), or both. The one or more databases may be located within (e.g., stored on) one or more storage mediums. The one or more databases may be located locally within the learning system, remotely from the learning system, or both (e.g., cloud storage). The one or more databases may include any type of database able to store digital information. The digital information may be stored within one or more databases in any suitable form using any suitable database management system (DBMS). Exemplary storage forms include relational databases (e.g., SQL database, row-oriented, column-oriented), non-relational databases (e.g., NoSQL database), correlation databases, ordered/unordered flat files, structured files, the like, or any combination thereof. One or more databases may be located within or be part of hardware, software, or both. One or more databases may be stored on the same or different hardware and/or software as one or more other databases. The databases may be located within one or more non-transient storage mediums. One or more databases may be located in a same or different non-transient storage medium as one or more other databases. The one or more databases may be accessible by one or more processors to retrieve data entries for analysis via one or more algorithms, store one or more data entries, access instructions for execution, or any combination thereof. The one or more databases may include one or more audio databases, speech databases, learning unit databases, instruction databases, user profile databases, or any combination thereof.
[061] The one or more databases may include one or more audio databases. One or more audio databases may function to store one or more phonemes, words, phrases, and/or the like. Data stored within an audio database representing varying phonemes, words, phrases, and/or the like may be referred to as audio file(s). The one or more phonemes may include the phonetic sounds of a single letter, a plurality of letters, words, and/or the like. The audio files may be stored as a pre-recorded voice, data for on-the-fly speech synthesis, or both. Speech synthesis may require the use of a synthesizer, which may also be in communication with the processor. The audio files may include 100 or more, 1,000 or more, or even 5,000 or more different phonemes, words, and/or phrases. The audio files may include 50,000 or less, 35,000 or less, or even 10,000 or less different phonemes, words, and/or phrases. The one or more audio databases may store audio files in varying voices, may store different voices for use with different audio files, may work with a speech synthesizer for merging audio files with varying voices, the like, or any combination thereof. The learning system may be able to output the one or more audio files in varying voices. The varying voices may be of varying accents, genders, tones, speeds, the like, or any combination thereof. The varying voices may appeal to a user's interests, more closely represent their community, may work on their attention capabilities, and/or the like. The one or more audio databases may store audio files in varying languages, may be able to work with one or more processors to translate audio files into different languages, the like, or any combination thereof. A user may be able to set a preferred language via their user profile. As one example, the one or more audio databases may include a single database or a plurality of databases which store audio files in varying languages (e.g., English, Spanish, French, Chinese, Portuguese, etc.). As another example, the one or more audio databases may be in a single language. The one or more processors may access one or more audio files from the audio database(s) and access a translation service for automatically translating the one or more audio files to a desired/preferred language before transmitting an audio signal to one or more amplifiers and/or speakers. For example, translation may work similarly to Google® Translate and/or Microsoft® Translator services.
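As a non-limiting illustration of retrieving audio files by language and voice as described in paragraph [061], the following Python sketch looks up an audio file for a grapheme in the user's preferred language, falling back to a default; the database structure, keys, and file paths are hypothetical placeholders.

# Non-limiting sketch: audio lookup by grapheme, language, and voice, with a
# default-language fallback. Keys and file paths are hypothetical placeholders.
AUDIO_DB = {
    ("A", "en", "voice1"): "en/voice1/a.mp3",
    ("A", "es", "voice1"): "es/voice1/a.mp3",
}

def get_audio(grapheme, language, voice="voice1", default_language="en"):
    """Look up an audio file, falling back to the default language if absent."""
    return (AUDIO_DB.get((grapheme, language, voice))
            or AUDIO_DB.get((grapheme, default_language, voice)))

print(get_audio("A", "es"))  # -> es/voice1/a.mp3
print(get_audio("A", "fr"))  # falls back to en/voice1/a.mp3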
[062] The one or more databases may include one or more learning unit databases. One or more learning unit databases may store the identity of one or more learning units, one or more signals related to one or more learning units, or both. One or more learning unit databases may correlate an identification signal received by a transmitter and sent to a processor to the identity (e.g., grapheme) represented by the learning unit. For example, a database may correlate a specific RFID tag with a specific letter of the alphabet. One or more learning unit databases may provide for letters/alphabets of multiple languages.
[063] The one or more databases may include one or more speech databases. One or more speech databases may function to store incoming speech, such as that recorded by a microphone. Data stored within a speech database may be referred to as speech file(s). One or more speech files may be compared to one or more audio files by one or more processors. For example, after the system audibly emits a phoneme and/or word, a user may repeat it back. The user's voice may be received via the microphone and recorded for storage into the speech database. The processor may then compare what was audibly emitted by the system to what was spoken by the user. One or more speech databases may be in a single language or a plurality of languages. One or more processors may transmit the one or more speech files in the language spoken by the user into the one or more storage mediums, may first transmit them to one or more translation services/systems, or both. One or more processors may receive the translated speech files from the one or more translation services/systems and then store them within one or more speech databases.
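As a non-limiting illustration of the comparison described in paragraph [063], the following Python sketch compares the emitted phoneme with the user's repetition; transcribe() is a hypothetical stand-in for a speech-to-text step, and the file name is a placeholder.

# Non-limiting sketch: comparing what the system emitted with what the user
# repeated. transcribe() is a hypothetical stand-in for a speech-to-text step.
def transcribe(speech_file):
    # Placeholder: return the text recognized from the recorded speech file.
    return "buh"

def check_repeat(emitted_phoneme, speech_file):
    """Return True if the user's recording matches the emitted phoneme."""
    heard = transcribe(speech_file).strip().lower()
    return heard == emitted_phoneme.strip().lower()

print(check_repeat("buh", "user_0042.wav"))  # -> True in this stubbed example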
[064] The one or more databases may include one or more instruction databases. One or more instruction databases may have one or more instruction algorithms stored therein. The one or more instruction algorithms may instruct one or more processors how to react to one or more signals received from one or more transmitters, microphones, other processors, sensors, the like, or any combination thereof. The one or more instruction algorithms may instruct one or more processors how to transmit one or more signals toward one or more other transmitters, speakers, other processors, sensors, the like, or any combination thereof. The one or more instruction algorithms may instruct the one or more processors how to automatically execute any of the methods disclosed herein. The one or more instruction algorithms may instruct the one or more processors how to automatically analyze a user's profile, history, and/or progress according to any of the methods disclosed herein.
[065] The one or more databases may include one or more user profile databases. The one or more user profile databases may include one or more user profiles, user histories, user performance, the like, or any combination thereof. User profiles may include an individual's name, age, gender, race, ethnicity, language preference, and/or the like. User history may include a history of the user's use of the learning system. A user's performance may include tracking progression of the user when playing with the learning system. For example, how often the user's voice correctly mimics the one or more phonemes audibly relayed by the learning system. As another example, how often a user sequences a plurality of learning units to form one or more words.
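As a non-limiting illustration of the performance tracking described in paragraph [065], the following Python sketch maintains a simple running accuracy per user; the schema and identifiers are hypothetical.

# Non-limiting sketch: tracking how often a user correctly mimics emitted
# phonemes as a running accuracy per user profile (hypothetical schema).
from collections import defaultdict

attempts = defaultdict(lambda: {"correct": 0, "total": 0})

def record_attempt(user_id, correct):
    stats = attempts[user_id]
    stats["total"] += 1
    stats["correct"] += int(correct)

def accuracy(user_id):
    stats = attempts[user_id]
    return stats["correct"] / stats["total"] if stats["total"] else 0.0

record_attempt("child_01", True)
record_attempt("child_01", False)
print(format(accuracy("child_01"), ".0%"))  # -> 50%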
[066] The learning system may include a mobile application. The mobile application may be interacted with by a user to learn information about the learning system, send commands to the learning system, or even track progress. The mobile application may provide information about user interactions with the learning system. The mobile application may communicate with one or more interaction members, learning units, or any combination thereof over the Internet to provide information to a user. The mobile application may send user commands to an interaction member, learning unit, or both which change how the learning system functions. For example, changing from phoneme sounds (e.g., sounding out any combination of learning units) to word recognition.
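As a non-limiting illustration of such a user command, the following Python sketch encodes and handles a hypothetical JSON mode-change message; the message format and mode names are placeholders rather than a defined protocol.

# Non-limiting sketch: a hypothetical JSON mode-change command a mobile
# application might send to an interaction member (format is a placeholder).
import json

def make_mode_command(mode):
    """Encode a user command, e.g., switching phoneme sounding to word recognition."""
    assert mode in {"phoneme", "word_recognition"}
    return json.dumps({"type": "set_mode", "mode": mode}).encode("utf-8")

def handle_command(payload):
    command = json.loads(payload)
    if command.get("type") == "set_mode":
        return command["mode"]  # the device would reconfigure itself here
    return "unchanged"

print(handle_command(make_mode_command("word_recognition")))  # -> word_recognition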
[067] The learning unit may include one or more transmitters. The transmitter may function to send and/or receive signals; sense the presence of or proximity to other transmitters; allow for an interaction member to detect and recognize a specific learning unit; place one interaction member and/or learning unit in communication with another interaction member and/or learning unit; place a housing in communication with another housing, storage compartment, base unit, the Internet, and/or the like; or a combination thereof. A transmitter may be considered a transceiver. The transmitter may interact with other transmitters. A transmitter of one interaction member and/or learning unit may interact with a transmitter of another interaction member, learning unit, housing, storage compartment, base unit, computing device, the like, or any combination thereof. One or more transmitters may be located within one or more interaction members, learning units, or both. One or more transmitters may be located within a housing, affixed to a housing, within a storage compartment, within a base unit, the like, or any combination thereof. The transmitter may be uniquely identifiable. A transmitter may function to identify a specific interaction member, learning unit, housing, symbol (e.g., grapheme), sound (e.g., phoneme), or any combination thereof. A transmitter may be a single transmitter or a plurality of transmitters. The one or more transmitters may include one or more radio frequency identification transmitters, wireless modules, Bluetooth® transmitters, near-field communication (NFC) transmitters, global positioning system transmitters, cameras, scanners, the like, or any combination thereof. The transmitter may transmit wireless communication through the housing. The transmitter may only receive a signal when a learning unit comes into view. The transmitter may be active, passive, or both. Active may mean that the transmitter is powered and continuously broadcasts its own signal. Passive may mean that the transmitter is not internally powered and may be powered by a reader, such as by electromagnetic energy transmitted from a reader.
[068] As one example, an interaction member may include one or more radio frequency identification transmitters (e.g., readers, active) while the one or more learning units include one or more radio frequency identification tags (e.g., passive). As another example, a plurality of learning units may all have active radio frequency identification tags and/or readers. Due to the proximity of the learning units and for properly identifying the intended learning unit, a short transmission distance (e.g., 0.5 inches to 4 inches, or 1 inch to 2 inches) is preferred. In other words, RFID technology which reads at a longer distance is problematic as it does not clearly identify what learning units are being interacted with and in what sequence. An exemplary radio frequency reader is the RDM6300 RFID Reader. An exemplary radio frequency tag is a 125 kHz EM4100 protocol tag. Tags may have a size (e.g., diameter) of 10 mm or greater, 12 mm or greater, or even 14 mm or greater. Tags may have a size of 50 mm or less, 40 mm or less, or even 30 mm or less. The intent is to balance size with detection range. One or more transmitters may be read-only or rewritable. For example, the RFID tags may be read-only or rewritable. The transmitter may be in communication with the control system, circuit board, processor, storage mediums, the like, or any combination thereof. One or more transmitters located within and/or otherwise associated with an interaction member may be referred to as an interaction transmitter(s). One or more transmitters located within and/or otherwise associated with a learning unit may be referred to as a unit transmitter.
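As a non-limiting illustration, the following Python sketch reads EM4100 tag IDs from an RDM6300-style reader over a serial port using the pyserial library; the port name is a placeholder, and the frame layout (0x02, ten ASCII ID characters, two checksum characters, 0x03) reflects the common RDM6300 convention, assumed here rather than specified by the disclosure.

# Non-limiting sketch: reading an EM4100 tag ID from an RDM6300-style reader.
# The serial port name is a placeholder; checksum verification is omitted.
import serial  # pyserial

def read_tag(port="/dev/ttyUSB0"):
    """Block until one tag frame arrives; return the 10-character tag ID."""
    with serial.Serial(port, baudrate=9600, timeout=5) as rfid:
        frame = rfid.read(14)  # 0x02 + 10 ID chars + 2 checksum chars + 0x03
        if len(frame) == 14 and frame[0] == 0x02 and frame[13] == 0x03:
            return frame[1:11].decode("ascii")
    return None

tag = read_tag()
print(tag)  # e.g., "1A0052FA3B" for one tag, or None on timeout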
[069] As another example, an interaction member may include one or more scanners or readers. The scanner(s) may function to read one or more QR codes, barcodes, and/or the like. The code(s) may be located on an exterior surface of a learning unit. The code(s) may be used in lieu of and similar to the RFID reader and RFID tag. The one or more scanners or readers may be mounted onto an interaction member, may be static, may be moveable, or any combination thereof. The one or more codes coming into readable view of the one or more scanners or readers may be considered a transmission signal.
[070] As another example, an interaction member may include one or more cameras. The one or more cameras may be able to visually detect and see the one or more learning units. The one or more cameras may transmit the captured image(s) to the one or more processors such as to determine the grapheme or other shape represented by the learning unit. One or more cameras may be mounted onto one or more interaction members. One or more cameras may look down on, over, up to, or in any viewing direction such as to have a view of one or more learning units. One or more learning units coming into and/or being in view of the one or more cameras may be considered a transmission signal.
[071] The learning system may emit one or more transmission signals. A transmission signal may be a wireless communication signal. A transmission signal may be emitted by one or more transmitters. A transmission signal may be emitted continuously, in response to movement of an interaction member and/or learning unit, upon turning on a switch of the system, the like, or any combination thereof. A transmission signal from one transmitter may be received by another transmitter. A transmission signal relayed by a transmitter of an interaction member and/or learning unit toward a learning unit may be referred to as a primary or first transmission signal. A transmission signal relayed by a transmitter of a learning unit toward the interaction member and/or another learning unit may be referred to as a secondary or second transmission signal.
[072] The learning unit may include one or more sensors. The sensors may function to detect changes in the position and/or orientation of a learning unit, the overall speed of movement of a learning unit, track the location of a learning unit, the like, or a combination thereof. The one or more sensors may communicate position, orientation, angle (e.g., tilt), velocity, acceleration, or other data to a control system, processor, circuit board, transmitters, one or more other sensors, the like, or any combination thereof. The one or more sensors may allow for one or more components of the learning system (e.g., interaction member, learning unit) to turn off (e.g., go to sleep, enter low power mode) when not in use for an extended duration of time (e.g., 5 minutes or greater, 10 minutes or greater). The one or more sensors may allow for one or more components of the learning system to turn on (e.g., be reactivated, enter regular power mode) upon detecting movement or other interaction with the component. The one or more sensors may work with or in lieu of a power switch. The one or more sensors may be located inside and/or outside of one or more interaction members, learning units, or both. The one or more sensors may include an inertial measurement unit (IMU), accelerometer, tilt switch, gyrometer, force sensor, near-field communication module, RFID module, Bluetooth module, Wi-Fi module, the like, or any combination thereof. The sensors may detect position or orientation, acceleration, rotation, the like, and/or changes thereof which result from use (e.g., manipulation) of a component by a user. RFID may include localizable RFID. RFID may have a short transmission distance (e.g., less than 10", less than 5", less than 2"). A short transmission distance may provide for accuracy in identifying which learning units are interacting. An exemplary inertial measurement unit (IMU) is the 9-DOF Orientation IMU Fusion Breakout BNO085 (BNO080) by Adafruit. For example, one or more sensors may detect a user picking up a learning unit, moving around (e.g., walking) with a learning unit, rotating or similarly manipulating a learning unit, shaking a learning unit, contacting (e.g., hitting) another learning unit or housing, interacting with a single housing or a plurality of housings, the like, or any combination thereof. Another exemplary sensor is a tilt switch, for example a rolling ball sensor switch with product number RB-231X2 manufactured by C&K. A tilt switch may be useful in detecting whether a component (e.g., interaction member, learning unit) has moved from a horizontal and/or steady position into an off-horizontal and/or vertical position, or vice-versa. Detecting such movement may allow a device to be turned on or off without the need for a switch.
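One possible realization of this sleep/wake behavior, sketched only for illustration, polls an accelerometer and compares successive readings against a threshold; the driver callback, threshold, and timeout values are all assumptions:

import time

IDLE_TIMEOUT_S = 5 * 60   # enter low-power mode after 5 idle minutes
MOTION_THRESHOLD = 0.5    # m/s^2 deviation treated as "movement" (assumed)

def run(read_acceleration, sleep_device, wake_device):
    # read_acceleration() is a hypothetical driver returning (x, y, z);
    # sleep_device()/wake_device() stand in for power-mode transitions.
    last_motion = time.monotonic()
    asleep = False
    baseline = read_acceleration()
    while True:
        accel = read_acceleration()
        moved = any(abs(a - b) > MOTION_THRESHOLD
                    for a, b in zip(accel, baseline))
        baseline = accel
        if moved:
            last_motion = time.monotonic()
            if asleep:
                wake_device()
                asleep = False
        elif not asleep and time.monotonic() - last_motion > IDLE_TIMEOUT_S:
            sleep_device()
            asleep = True
        time.sleep(0.1)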
[073] The learning unit may include a plurality of wires. The wires may enable communication between electronic components of the learning unit. The wires may create communication between the circuit board and the sensory output elements, transmitter, sensors, power source, portions of a control system, or any combination thereof. Electrical connections may also be due to electrical contact between components, soldering, or any other suitable means.
[074] The learning system may include one or more power sources. The power source may provide electrical power to the electronic components, for example the circuit board, processors, storage mediums, speakers, microphones, light sources, sensors, transmitters, sensory output elements, the like, or any combination thereof. Each component with active electronic components may have its own power source. For example, an interaction member may have a power source. One or more learning units may be free of or include a power source. Learning units with passive transmitters may be free of a power source. Learning units with active transmitters may include or be free of a power source. The power source may be connected to other electrical components through wires or be directly mounted to a circuit board. The power source may be a battery, capacitor, the like, or any combination thereof. The power source may be a disposable and/or rechargeable battery. The power source may be recharged by direct contact with a voltage source or by inductive charging through the housing of the learning unit. An exemplary power source may be a 500 mAh lithium-ion battery, one or more double-A batteries, or even one or more C batteries. The power source may include or be in communication with a power converter. A power converter may function to convert an incoming voltage into another voltage compatible with the electronic components. A power converter may be a direct current to direct current power converter (DC-to-DC), an alternating current to direct current power converter (AC-to-DC), or both. An exemplary power converter is the Comidox DC-DC Step Up Power Module Voltage Boost Converter Board, which boosts an incoming 0.9 V to 5 V up to 5 V.
[075] The learning system may include one or more unpowered learning units. An unpowered learning unit may be a learning unit without a power source. The unpowered learning unit may communicate with a learning unit through wireless communication. The unpowered learning unit may be temporarily supplied with power using an inductive field which may pass through the housing of the unpowered learning unit. The unpowered learning unit may contain a radiofrequency identification tag to wirelessly communicate with a learning unit. The radiofrequency identification tag may be a unique radiofrequency identification tag which is associated with the grapheme represented by the unpowered learning unit.

[076] The learning system may include one or more base units. The base unit may send information to or receive information from a learning unit. The base unit may include one or more sensors, transmitters, sensory output elements, circuit boards, processors, power sources, user interfaces, or any combination thereof. The base unit may support learning units on a surface of the base unit. The base unit may be in the form of a tray, mat, easel, storage container, or the like. A base unit may be the same as, separate from, in addition to, or in lieu of an interaction member. One or more features applicable to an interaction member may be suitable for the base unit and are incorporated herein. The base unit may teach the user phonemes associated with graphemes. The base unit may sense the presence of learning units on a surface of the base unit and audibly emit the phoneme associated with the graphemes represented by the learning units on the surface of the base unit. The base unit may sense learning units using a transmitter or sensor. The base unit may include a user interface.
[077] The learning system may include or be free of a graphic user interface. The graphic user interface may be a screen or touch screen mounted to a surface of the base unit, interaction member, or both. The user interface may communicate information about user interactions with the learning system to a user. The base unit and/or interaction member may communicate with a user through the user interface by displaying information on the screen or touch screen. The user may send commands to the base unit and/or interaction member through the user interface. The base unit may be commanded by a user through touching the touch screen. The user interface may provide instructions to a supervisory user which aid the supervisory user in teaching the user to associate graphemes with phonemes. The user interface may display information on the screen or touchscreen which will aid the supervisory user in instructing the user.
[078] The learning system may include one or more storage containers. The storage containers may house the one or more main components (e.g., interaction member, learning units) of the learning system. The storage containers may house the one or more components of the learning system in an internal cavity or on a surface of the storage container. The storage containers may contain recessed surfaces having a shape substantially reciprocal with a surface, side, or a portion thereof of an interaction member, learning unit, or both. The recessed surfaces of a storage container may prevent the one or more components from moving relative to the storage container. The storage containers may communicate with a learning unit. The storage container may receive information from a learning unit including patterns of interaction between the user and the learning unit. The storage container may wirelessly communicate with a learning unit through a transmitter. The storage container may provide electrical energy to the learning unit. The storage container may contain an inductive coil which emits an inductive field to wirelessly charge the learning unit.
[079] The learning system may include one or more communication modules. The one or more communication modules may allow for the learning unit to receive and/or transmit one or more signals from one or more computing devices, to a mobile application, to be integrated into a network, or any combination thereof. The one or more communication modules may have any configuration which may allow for one or more data signals from one or more learning units to be relayed to one or more other learning units, controllers, communication modules, communication hubs, networks, computing devices, processors, the like, or any combination thereof located external of the learning unit. The one or more communication modules may include one or more wired communication modules, wireless communication modules, or both. A wired communication module may be any module capable of transmitting and/or receiving one or more data signals via a wired connection. One or more wired communication modules may communicate via one or more networks via a direct, wired connection. A wired connection may include a local area network wired connection via an Ethernet port. A wired communication module may include a PC Card, PCMCIA card, PCI card, the like, or any combination thereof. A wireless communication module may include any module capable of transmitting and/or receiving one or more data signals via a wireless connection. One or more wireless communication modules may communicate via one or more networks via a wireless connection. One or more wireless communication modules may include a Wi-Fi transmitter, a Bluetooth transmitter, an infrared transmitter, a radio frequency transmitter, an IEEE 802.15.4 compliant transmitter, the like, or any combination thereof. A Wi-Fi transmitter may be any transmitter compliant with IEEE 802.11. A communication module may be single band, multi-band (e.g., dual band), or both. A communication module may operate at 2.4 GHz, 5 GHz, the like, or a combination thereof. A communication module may communicate with one or more other learning units, communication modules, computing devices, processors, or any combination thereof directly; via one or more communication hubs, networks, or both; via one or more interaction interfaces; via one or more mobile applications; or any combination thereof.

[080] The learning system may form a kit. The kit may include one or more learning units, one or more interaction members, one or more base units, one or more storage containers, the like, or any combination thereof. The kit may enable a user to learn phonemes associated with a plurality of combinations of graphemes. The kit may include two or more learning units which represent two or more graphemes which can be combined to enable the user to learn the associated phoneme for any combination of graphemes represented by the learning units. The kit may record user interactions with the learning units and the interaction member and/or base unit. The kit may be one of multiple unique kits which each contain a different combination of learning units and housings. The kit may include learning units, interaction members, base units, and/or storage containers which are designed to match a user's age or phonetic skill level. The kit may use information recorded to the interaction member and/or base unit to determine which of the multiple unique kits is appropriate for a user as age or skill level advances. The kit may use information recorded on the base unit to detect abnormalities in user interactions with learning units.
The abnormalities that the learning unit may detect may be indicative of autism, dyslexia, other medical conditions, and the like. The abnormalities may be predictively detected by the learning unit for later diagnosis by a licensed medical professional.
[081] Method of Using Learning Unit
[082] The present disclosure relates to one or more methods of using a learning system. The method may include using a learning system for a user to learn and associate one or more phonemes to one or more graphemes. The methods may function to allow a user to audibly hear, learn, and/or even communicate one or more audible sounds related to one or more visual symbols represented by the learning unit. The one or more methods may include a single learning unit process, a multiple learning unit process, or both. A single learning unit process may relate to a user manipulating and interacting with only one housing of a learning unit at a time. A multiple learning unit process may relate to a user manipulating and interacting with a plurality of housings together. The methods may include a process incorporating an interaction member or free of an interaction member. The method may employ the learning system as discussed herein.
[083] The method may include the user physically manipulating one or more learning units and/or one or more interaction members to cause the learning system to be activated. The manipulating and activation may include moving a switch of the learning system. Upon moving the switch, the learning system may power on and/or wake out of a sleep mode. Moving may include sliding, depressing, rotating, and/or the like. The switch may be part of one or more learning units and/or interaction members. The manipulating and activation may include physically moving the one or more learning units, interaction members, or both. Upon movement, one or more sensors may detect the movement. The one or more sensors may then cause the learning system to power on and/or wake out of a sleep mode. The one or more sensors may transmit sensing of the movement to one or more processors of the learning system. The one or more sensors may work in conjunction with and/or in lieu of one or more switches. For example, a switch may fully power on/off the learning system. While still being powered on, the learning system may enter a sleep mode (e.g., low power mode) after a period of time of not being moved (e.g., as sensed by the one or more sensors). Upon the one or more sensors detecting the movement, the learning system may wake out of the sleep mode. The one or more sensors may be any of the one or more sensors as discussed above with respect to the learning system.
[084] The method may include the user physically manipulating the one or more learning units and/or the one or more interaction members such that, based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or a combination thereof of the one or more learning units and/or the one or more interaction members, an auditory signal, and optionally a tactile signal and/or a visual signal, is generated that is transmitted to an exterior of the one or more learning units and/or the one or more interaction members via one or more sensory output elements. The one or more sensory output elements may include any sensory output element as discussed above with respect to the learning system. The one or more sensory output elements may include any element which outputs an auditory signal, visual signal, and/or tactile signal. The one or more sensory output elements may include one or more speakers which transmit an auditory signal. The one or more sensory output elements may include one or more light sources which transmit a visual signal. The one or more sensory output elements may include one or more electrical motors and/or piezoelectric transducers which transmit the tactile signal.

[085] The method in which manipulating one or more learning units and/or one or more interaction members results in an auditory signal, visual signal, and/or tactile signal may include:
- physically moving a single learning unit to result in the auditory signal being a phoneme related to a grapheme represented by the single learning unit;
- physically moving an interaction member into a detection range of the single learning unit to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit;
- physically moving the single learning unit into a detection range of an interaction member such as to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit;
- physically moving one or more subsequent learning units into a detection range of one or more preceding learning units such as to result in the auditory signal being one or more phonemes related to both a grapheme represented by the one or more preceding learning units and the one or more subsequent learning units;
- physically moving the interaction member into the detection range of a plurality of learning units in a sequence such as to detect each learning unit and result in the auditory signal related to one or more phonemes related to the grapheme represented by the sequence of the plurality of learning units; and/or
- physically moving the plurality of learning units in a sequence and into a detection range of an interaction member such as to result in the auditory signal being the one or more phonemes related to the grapheme represented by the sequence of the plurality of learning units.
[086] The method may include physically moving the single learning unit to result in the auditory signal. The auditory signal may be related to the grapheme represented by the single learning unit. A single learning unit may include one or more sensors which are configured to detect the movement of the single learning unit. The one or more sensors may be the same one or more sensors which detect movement to power on, power off, and/or wake out of a sleep mode. The one or more sensors may be any suitable sensor as discussed herein. The one or more sensory output elements may include one or more speakers. The one or more speakers may be located within the single learning unit. The single learning unit may include one or more processors. The single learning unit may include one or more audio amplifiers. The one or more sensors may communicate the detected movement to the one or more processors. The one or more processors may communicate one or more audio signals related to the phoneme to the one or more audio amplifiers. The one or more audio signals may be relayed to the speaker to result in the auditory signal. The auditory signal may be a vocal representation of the phoneme.
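A minimal sketch of this single-unit behavior follows, assuming a host-side audio player stands in for the unit's amplifier and speaker; the file path and player command are hypothetical:

import subprocess

PHONEME_AUDIO = "sounds/k_uh.wav"  # hypothetical pre-recorded phoneme file

def on_movement_detected():
    # A sensor interrupt or polling loop would call this handler; the
    # processor then routes the audio signal to the speaker. Here the
    # ALSA command-line player stands in for the amplifier/speaker chain.
    subprocess.run(["aplay", PHONEME_AUDIO])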
[087] The method may include physically moving one or more learning units (e.g., subsequent learning units) into the detection range of one or more other learning units (e.g., preceding learning units). The movement into the detection range may result in the auditory signal. The auditory signal may be one or more phonemes. The one or more phonemes may be related to the one or more graphemes represented by the one or more preceding learning units and the one or more subsequent learning units. The representation may be the sequence of the learning units combined together. The detection range may be in close proximity to, in viewing proximity of, in sensing proximity of, in contact with, mated to, or any combination thereof of one learning unit to another learning unit. Mated may be via one or more mating features. The learning units (e.g., preceding and/or subsequent) may each include one or more transmitters configured to detect and identify one another. There may be only one master learning unit with an active transmitter while the others have passive transmitters. The first learning unit in the sequence may act as a master unit. Acting as a master unit may mean that the master unit's speaker, processor, and other electrical components are the ones that are active while the other subsequent learning units are attached and recognized, but are otherwise inactive or dormant. As an alternative, all of the learning units may be active. The one or more processors and/or speakers may sync with one another and together result in the auditory signal. The one or more sensory output elements may include one or more speakers. The one or more speakers may be located within any (e.g., preceding and/or subsequent) learning units. The one or more learning units may include one or more processors. The one or more learning units may include one or more audio amplifiers. The one or more processors may communicate one or more audio signals. The one or more audio signals may be related to one or more phonemes. The one or more phonemes may be related to the one or more graphemes represented by the plurality of learning units and their arranged sequence. The sequence may be left to right, up to down, right to left, down to up, any other direction, or a combination thereof. The one or more audio signals may be transmitted to one or more audio amplifiers. The one or more audio signals may then be relayed to the one or more speakers to result in the auditory signal. The auditory signal may be a vocal representation of the phoneme.
[088] The method may include physically moving an interaction member into the detection range of a single learning unit, moving a single learning unit into the detection range of an interaction member, or both to result in the auditory signal. The auditory signal may be the phoneme related to the grapheme represented by the single learning unit. The detection range may be in close proximity to, in viewing proximity of, in sensing proximity of, in contact with, mated to, or any combination thereof. Mated may be via one or more mating features. The detection range may be determined by the type of transmitters employed within the interaction member, learning unit, or both. The interaction member may include one or more interaction transmitters. The single learning unit may include one or more unit transmitters. The one or more interaction transmitters may detect and identify the one or more unit transmitters when one or the other is moved into the detection range. Detection range may include a learning unit located in a recognition holder, a unit slot, or both of an interaction member. Detection range may include a head of an interaction member hovering over and/or contacting an upper facing surface of a learning unit. The interaction member, the learning unit, or both may include the one or more sensory output elements. The one or more sensory output elements may be a speaker. The interaction member, the learning unit, or both may include one or more processors. The interaction member, the learning unit, or both may include one or more audio amplifiers. The one or more interaction transmitters, the unit transmitters, or both may communicate the detection and/or identification of the single learning unit to the one or more processors. The one or more processors may communicate one or more audio signals. The one or more audio signals may be related to the phoneme. The one or more audio signals may be relayed to one or more audio amplifiers. The one or more audio signals may then be relayed to the speaker to result in the auditory signal. The auditory signal may be a vocal representation of the phoneme.
[089] The method may include physically moving one or more interaction members into the detection range of the plurality of learning units in a sequence (e.g., detecting one after the other in sequential order), moving a plurality of learning units in a sequence into the detection range of the interaction member(s), or both to result in the auditory signal. The auditory signal may be one or more phonemes. The one or more phonemes may be related to one or more graphemes. The one or more graphemes may be represented by the sequence of the plurality of learning units. The sequence of the plurality of learning units may be left to right, right to left, top to bottom, bottom to top, on a diagonal, the like, or any combination thereof. The one or more interaction members may include one or more interaction transmitters. The plurality of learning units may each include one or more unit transmitters. The one or more interaction transmitters may detect and/or identify the one or more unit transmitters and their sequence when moved into the detection range. The detection range may be in close proximity to, in viewing proximity of, in sensing proximity of, in contact with, mated to, or any combination thereof of one learning unit to another learning unit. Mated may be via one or more mating features. Detection range may include a learning unit located in a recognition holder, a unit slot, or both of an interaction member. Detection range may include a head of an interaction member hovering over and/or contacting an upper facing surface of a learning unit. The interaction member(s), the learning unit(s), or both may include the one or more sensory output elements. The one or more sensory output elements may be one or more speakers. The interaction member(s), the learning unit(s), or both may include one or more processors. The interaction member(s), the learning unit(s), or both may include one or more audio amplifiers. The one or more interaction transmitters, the unit transmitters, or both may communicate the detection and/or identification of each learning unit to the one or more processors. The one or more processors may communicate one or more audio signals. The one or more audio signals may be related to one or more phonemes. The one or more audio signals may be relayed to one or more audio amplifiers. The one or more audio signals may then be relayed to the one or more speakers to result in one or more auditory signals. The one or more auditory signals may be a vocal representation of the one or more phonemes.
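A sketch of sequence handling under these teachings might accumulate graphemes in detection order and sound them out; the tag-to-grapheme and grapheme-to-phoneme mappings below are illustrative assumptions:

# Hypothetical lookup tables; real values would come from the kit's database.
TAG_TO_GRAPHEME = {"0A003F8252": "C", "0A003F9911": "A", "0A00401234": "R"}
GRAPHEME_TO_PHONEME = {"C": "k-uh", "A": "a-h", "R": "rrr"}

def phonemes_for_sequence(detected_tags):
    # Preserve the order in which the tags were detected (e.g., left to right).
    graphemes = [TAG_TO_GRAPHEME[t] for t in detected_tags
                 if t in TAG_TO_GRAPHEME]
    return graphemes, [GRAPHEME_TO_PHONEME[g] for g in graphemes]

graphemes, phonemes = phonemes_for_sequence(
    ["0A003F8252", "0A003F9911", "0A00401234"])
print("".join(graphemes), "->", " ".join(phonemes))  # CAR -> k-uh a-h rrr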
[090] The method may include one or more transmitters automatically relaying a detection and/or identification signal to one or more processors. One or more transmitters may establish a connection with one or more other transmitters. The connection may be one or more interaction transmitters with one or more unit transmitters, one or more unit transmitters with one or more other unit transmitters, or both. A unit transmitter and/or interaction transmitter may automatically deploy a first signal. The first signal may be a signal looking for another unit and/or interaction transmitter. Another unit and/or interaction transmitter, upon receiving the first signal and/or continuously, may automatically deploy a return, second signal. The second signal may carry the identity back to the unit and/or interaction transmitter. The received signal may then be automatically transmitted from the transmitter(s) to one or more processors. The one or more processor(s) may then automatically determine the identity of the interaction member, the learning unit, or both based on the returned, second signal. The one or more processors may automatically access one or more storage mediums (e.g., non-transitory) to retrieve an identity which matches with the second signal.

[091] The method may include one or more processors executing one or more instruction algorithms. One or more instruction algorithms may instruct one or more processors what to do with a received signal from one or more transmitters. One or more instruction algorithms may instruct one or more processors to automatically access one or more databases (e.g., one or more audio databases, learning unit databases, or both). One or more instruction algorithms may instruct one or more processors to automatically correlate an identity of a signal from a transmitter to one or more graphemes represented by the one or more learning units. One or more instruction algorithms may instruct one or more processors to automatically access one or more audio files related to one or more identified graphemes. One or more instruction algorithms may instruct one or more processors to convert one or more audio files to one or more auditory signals. One or more instruction algorithms may instruct one or more processors to automatically transmit one or more auditory signals related to the one or more graphemes. One or more auditory signals may be communicated to one or more audio amplifiers, speakers, or both. Receipt of the auditory signal(s) by the audio amplifier and/or speaker(s) may then result in one or more auditory signals as vocal representations of the phonemes.

[092] The method may include one or more processors automatically retrieving one or more audio files representing one or more phonemes, words, phrases, and/or the like. The method may include retrieving one or more audio files which match the one or more learning units, a sequence of a plurality of learning units, or both. One or more instruction algorithms may instruct the processor(s). For each learning unit detected, a phoneme may be generated. For a sequence of learning units detected, one or more phonemes may be generated.
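The identity-to-audio flow of paragraphs [090] through [092] could be sketched as follows; the database file layout and the play_audio helper are assumptions, not a prescribed implementation:

import json

def handle_second_signal(tag_id, unit_db_path="unit_db.json",
                         play_audio=print):
    # Hypothetical database layout, e.g.:
    # {"0A003F8252": {"grapheme": "C", "audio": "sounds/c.wav"}}
    with open(unit_db_path) as f:
        unit_db = json.load(f)
    record = unit_db.get(tag_id)
    if record is None:
        return  # unknown transmitter identity; ignore the signal
    # Retrieve the stored audio file and emit the auditory signal; here
    # play_audio stands in for the amplifier/speaker chain.
    play_audio(record["audio"])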
[093] The method may include automatically receiving and/or storing one or more speech files. The method may include a user, while playing with the learning system, audibly speaking one or more phonemes into a microphone of the learning system. The user may audibly repeat a phoneme(s) after the learning system audibly plays the phoneme(s). The received speech may enter into the microphone. The microphone may convert the user's voice into one or more speech signals. The one or more speech signals may be automatically communicated (e.g., transmitted) toward the one or more processors. The one or more processors may then store the one or more speech signals in one or more storage mediums. For example, the one or more processors may direct the one or more speech signals toward one or more speech databases. The one or more processors may automatically convert the one or more speech signals into one or more speech files. The one or more speech files may then be compared to the one or more audio files, or other audio representations of the one or more phonemes. The comparison may be completed by the same or different (e.g., remote) processors.
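For illustration, capturing the user's repetition to a speech file might resemble the following sketch; the sample rate, duration, library choices (sounddevice, SciPy), and storage path are assumptions:

import sounddevice as sd
from scipy.io import wavfile

RATE = 16000  # assumed sample rate, adequate for speech

def record_speech(seconds=2.0, path="speech/user_attempt.wav"):
    # Capture mono audio from the default microphone and block until done.
    audio = sd.rec(int(seconds * RATE), samplerate=RATE, channels=1)
    sd.wait()
    # Store the speech signal as a speech file for later comparison
    # against the reference phoneme audio.
    wavfile.write(path, RATE, audio)
    return path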
[094] The method may include a user logging into the learning system. Logging in may include a user accessing one or more user profiles. Logging in may be done by vocally reciting a user's identification (e.g., name, nickname, or other identifier) into one or more microphones of the learning system. Logging in may simply use voice recognition, or even facial recognition if a camera is used within the learning system. Logging in may allow for a user's interactions with the learning system to be recorded and correlated with the user, for tracking of performance to be enabled, or both.
[095] The method may include automatically recording a user's activity. The user's activity may include physical manipulation of one or more learning units, interaction members, or both. Physical manipulation may include frequency, speed, direction, angles, the like, or any combination thereof. The user's activity may include recording of a user's speech via a microphone. The user's activity may include recording of one or more speech files and correlating them to a specific user. The recording may be completed by the one or more processors via the one or more sensors, microphones, and any other electrical components of the learning system.
[096] The method may include automatically determining a user's progress. The learning system may record a user's learning, such as correctly and/or incorrectly repeating one or more phonemes, placing learning units in sequences to build words, learning to sequence learning units into new words not formed before, the like, or any combination thereof. The method may include automatically analyzing vocalizations, speech patterns, and/or the like. This may include analyzing one or more speech files; a user's profile, history, and progress; physical manipulation of learning units; and/or the like.
[097] The method may include identifying one or more learning disabilities based on a user's activity and/or progress. A user's profile, activity, and/or progress may be correlated to one or more standard profiles with similar demographic data, stored data across the system from other users with similar demographic data, or both. A user's incorrect use of learning units, recurring confusion of certain learning units, long lead time to learning how to correctly pronounce a phoneme, speech patterns, and/or the like may identify the potential presence of one or more learning disabilities, psychiatric diagnoses, and/or the like. For example, a recurring confusion with placement of learning units representing the letters b, d, p, and q may indicate the potential presence of dyslexia. As another example, more frequent and large movements of learning units indicating flailing, shaking, etc. may indicate the potential presence of autism.
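Purely as an illustrative heuristic, and not as a diagnostic method, a mirror-letter confusion rate could be computed from logged interactions as below; the event format and threshold are assumptions, and any flag would be reserved for review by a licensed professional as stated above:

# Visually mirrored letter pairs that are commonly confused.
MIRROR_PAIRS = {frozenset("bd"), frozenset("pq"),
                frozenset("bp"), frozenset("dq")}

def mirror_confusion_rate(events):
    # events: list of (intended_letter, placed_letter) interactions,
    # a hypothetical log format.
    swaps = sum(1 for intended, placed in events
                if frozenset((intended, placed)) in MIRROR_PAIRS)
    return swaps / len(events) if events else 0.0

if mirror_confusion_rate([("b", "d"), ("b", "b"), ("p", "q")]) > 0.3:
    print("Recurring mirror-letter confusion; consider professional review.")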
[098] The method may include the learning unit beginning at a ready state. The ready state may function to allow a learning unit which is at rest to detect subsequent motion, position, interaction, and/or the like. The ready state may automatically progress to sensing movement.
[099] The method may include sensing movement of a learning unit. Sensing movement of the learning unit may determine whether the learning unit returns to the ready state or progresses to a subsequent state. Sensing movement may be determined by the output of the sensors of the learning unit. Sensing movement may be achieved by determining that the learning unit is in motion, the learning unit is being interacted with, the position is different than a previously registered position, or any combination thereof. When sensing movement is achieved, the learning unit may then begin to sense interactions with one or more other learning units. When sensing movement is not achieved, the learning unit may return to the ready state.

[100] The method may include sensing interaction with one or more other learning units. Sensing interaction with other learning units may result in the learning unit emitting an audible phoneme associated with the grapheme represented by the learning unit combined with the graphemes represented by the one or more other learning units which are sensed. The learning unit may sense other learning units using sensors, transmitters, or both which detect transmitters in the other learning units. The learning unit may sense the distance to other learning units and only sense other learning units within a threshold distance. The learning unit may progress to triggering one or more sensory outputs when other learning units are sensed or not sensed.
[101] The method may include triggering one or more sensory outputs. The sensory outputs may be triggered after movement is sensed. The sensory outputs may be triggered when other learning units have been sensed or when no learning units have been sensed. The sensory outputs triggered may include an auditory signal, a tactile signal, a visible signal, or any combination thereof. The sensory output may include an auditory signal which is a phoneme. The sensory output may be a phoneme associated with a single grapheme represented by the learning unit when no other learning units were sensed during the previous step of sensing interaction with other learning units. The sensory output may be a phoneme associated with multiple graphemes represented by the learning unit and other learning units when other learning units were sensed during the previous step of sensing interaction with other learning units. The learning unit may return to the ready state after sensory outputs have been triggered.
[102] The method may include returning to a ready state and/or rest mode. The learning unit may return to the ready state and/or a rest mode at any time during use by a user or after sensory outputs have been triggered.
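The ready/movement/interaction flow of paragraphs [098] through [102], which parallels the processes of FIGS. 8 and 9 described below, could be sketched as a simple state machine; the three sensing and output callbacks are assumptions:

def letter_process(movement_sensed, units_in_proximity, emit_signal):
    # Illustrative state machine: READY -> sense movement -> sense
    # interaction with other units -> trigger sensory output -> READY.
    state = "READY"
    neighbors = []
    while True:
        if state == "READY":
            if movement_sensed():
                state = "SENSE_UNITS"
        elif state == "SENSE_UNITS":
            neighbors = units_in_proximity()  # other units within range
            state = "SIGNAL"
        elif state == "SIGNAL":
            # Single grapheme's phoneme if alone; phoneme(s) for the
            # combined graphemes if other learning units were sensed.
            emit_signal(neighbors)
            state = "READY"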
[103] Any of the method steps as discussed herein which are completed by one or more of the electronic components, are with respect to analyzing, and/or the like may be automatically executed. The one or more processors may automatically execute one or more of the identified steps of the method upon a user interacting with the learning system.
[104] The learning system may also be used as an augmentative and/or alternative communication tool. Users with speech troubles (e.g., mutism, slurred speech, learning a new language) may use the learning units to organize words and/or phrases for outputting via the one or more speakers. The physical learning units may provide a screen-free way for users to communicate, which may be beneficial in keeping screen time reduced, such as in the case of young children and screen time practices.
[105] Illustrative Examples
[106] The following descriptions of the Figures are provided to illustrate the teachings herein but are not intended to limit the scope thereof. Features of any one figure may be employed in another.
[107] FIG. 1 illustrates a learning system 10. The learning system 10 includes an interaction member 12 and a plurality of learning units 1. The interaction member 12 includes a handle 28 and a head 30. The head 30 is formed at a distal end 32 of the interaction member 12. The handle 28 commences at a proximal end 34 of the interaction member 12. The learning units 1 are illustrated in a plurality of shapes 100. As an example, the plurality of shapes 100 are illustrated as letters of the alphabet.
[108] FIG. 2 illustrates an open interaction member 12. The interaction member 12 includes a housing 3. The housing 3 may include a housing cover 5. The interaction member 12 includes a processor 36. The interaction member 12 includes a sensor 19. The sensor 19 may be an inertial measurement unit (IMU) 42. The interaction member 12 includes a power converter 38 (e.g., DC-DC converter). The interaction member 12 includes an audio amplifier 40. The interaction member 12 includes a transmitter 17. The transmitter 17 may be a radio frequency identification detection transmitter or transceiver 44. The processor 36, sensor 19, power converter 38, audio amplifier 40, and transmitter 17 may make up a control system 46 of the interaction member 12. The control system 46 resides within the handle 28. It is also foreseeable that varying portions of the control system 46 reside within the head 30. For example, the transmitter 17 may be at the head 30 such as to provide for close proximity and accuracy with a learning unit. The interaction member 12 also includes a speaker 13. The speaker 13 is located within the head 30. The interaction member 12 also includes a power source 48. The power source 48 may be a battery 50. The interaction member 12 includes a switch 54. The electrical components (e.g., 36, 19, 38, 40, 17, 13, 48, 54) may all be in electrical communication (direct and/or indirect) with one another via one or more wires 23 or other electrical connections.
[109] FIGS. 3A-3D illustrate a learning unit 1. In FIG. 3A, the learning unit 1 is in a shape 100. The shape 100 is in the form of a letter of the alphabet. In FIGS. 3B and 3C, the learning unit 1 is in an overall block shape. The learning unit 1 then depicts a first shape 101 and a second shape 102 as an uppercase letter and lowercase letter of the alphabet. As depicted, the first shape 101 and the second shape 102 are formed as depressions on opposing surfaces of the housing 3. The learning unit 1 houses a transmitter 17. The transmitter 17 may be a transmitting tag, such as a radio frequency identification tag 52. The transmitter 17 of the learning unit 1 may be configured to communicate with, be detected by, and/or otherwise establish a wireless connection with the transmitter 17 of an interaction member 12.
[110] FIGS. 4A-4D and 12A-12D illustrate a method of using the learning system 10. The learning system 10 includes an interaction member 12 and a plurality of learning units 1. A user (for example, a child) may arrange one or more learning units 1. The learning units 1 may be arranged on any support surface (e.g., floor, table) and in any sequence. The interaction member 12 may be turned on. Turning on may be completed such as by moving and/or depressing a switch 54 (not shown). The user may move the interaction member 12 to be in proximity of a learning unit 1, such as the first learning unit. For example, the head 30 of the interaction member 12 may pass over, hover, or touch a learning unit 1 at about its middle area. The interaction member 12 may come into sufficient proximity of the learning unit 1 such that the transmitter 17 of the interaction member 12 detects and recognizes the transmitter 17 of the learning unit 1.
[111] For example, in FIGS. 4A and 12A, the interaction member 12 passes over the first of the learning units 1. The transmitter 17 of the interaction member 12 detects and identifies the transmitter 17 of the learning unit 1. The interaction member 12 then recognizes the first shape 101 as the letter “C” which the learning unit 1 represents (via identification by the transmitters 17 and processor 36). The processor 36 (not shown) transmits an audio signal to the audio amplifier 40 (not shown), which then amplifies the signal to drive the speaker 13 (not shown). Via the speaker 13, the interaction member 12 announces “This is the letter C” and/or announces the phonetic sound of the letter “C” such as “K-u-h.” Once the first learning unit 1 having the first shape 101 is recognized and announced, the user moves the interaction member 12 to the subsequent learning unit 1.
[112] Continuing with the example, as in FIGS. 4B and 12B, the interaction member 12 passes over the second of the learning units 1. Again, the transmitters 17 of the interaction member 12 and learning unit 1 make a connection. The interaction member 12 recognizes the identity of the second shape 102 the second learning unit 1 represents; in this example, the letter “A.” The interaction member 12, similar as to before, announces “This is the letter A” and/or announces the phonetic sound of the letter “A” such as “A-h.” But also, as the second shape 102 representing the letter “A” was detected after the first shape 101 representing the letter “C,” the interaction member 12 determines if the two shapes 101, 102 create a word. In this instance, “C” and “A” do not create a word. The interaction member 12 announces “I don’t know the word spelled C - A” and/or may announce the phonetic sound of the two letters combined, such as “K-a-h.”
[113] Continuing with the example, in FIGS. 4C and 12C, the interaction member 12 passes over the third of the learning units 1. Similar as to before, the transmitters 17 of the interaction member 12 and learning unit 1 make a connection. The interaction member 12 recognizes the identity of the third shape 103 the third learning unit 1 represents; in this example, the letter “R.” The interaction member 12, similar as to before, announces “This is the letter R” and/or announces the phonetic sound of the letter “R” such as “Rrrrr” or “R-u-h” or similar. As the three shapes 101, 102, and 103 spell the word “CAR,” the interaction member 12 then announces “CAR”, “The word spelled is CAR”, or the phonetic sound “K-a-h-r.”
[114] Finishing with the example, in FIGS. 4D and 12D, the interaction member 12 passes over the fourth of the learning units 1. Similar as to before, the transmitters 17 of the interaction member 12 and learning unit 1 make a connection. The interaction member 12 recognizes the identity of the fourth shape 104 the fourth learning unit 1 represents; in this example, the letter “T.” The interaction member 12, similar as to before, announces “This is the letter T” and/or the phonetic sound “T-e-h”. As the four shapes 101, 102, 103, and 104 spell the word “CART,” the interaction member 12 then announces “CART”, or similarly “The word spelled is CART”, and/or even the phonetic sound “K-a-h-r-t.”
[115] It is easily foreseeable that, when a connection is successfully established between the interaction member 12 and a learning unit 1, one or more light sources 7 (not shown) may light up to signal to the user that the interaction member 12 has been appropriately positioned relative to the learning unit 1. For example, one or more light sources 7 may be one or more light emitting diodes (LEDs) within the handle 28 or head 30 of the interaction member 12.
[116] FIG. 5 illustrates a learning unit 1. The learning unit 1 has a first shape 101. The learning unit 1 includes a housing 3. The housing 3 is affixable to a housing cover 5. The housing 3 includes a light source hole 9. A light source 7 emits light through the light source hole 9. The housing 3 includes speaker holes 11.
[117] FIG. 6 illustrates a learning unit 1. The learning unit 1 has a second shape 102. A printed circuit board 15 is mounted within the learning unit 1. The printed circuit board 15 is in communication with a transmitter 17. The transmitter 17 is mounted within the learning unit 1. The printed circuit board 15 is in communication with a sensor 19. The sensor 19 is mounted within the learning unit 1. The printed circuit board 15 is in communication with a speaker 13. The speaker 13 is mounted within the learning unit 1. The printed circuit board 15 is in communication with a light source 7. The light source 7 is mounted within the learning unit 1. The printed circuit board 15 is in communication with a vibrator 21. The vibrator 21 is mounted within the learning unit 1. Although not illustrated, it is envisioned that the learning unit 1 as illustrated in FIG. 5 may have similar componentry within, and even mounted to, its interior.
[118] FIG. 7 illustrates two learning units 1. The two learning units 1 include a first shape 101 and a second shape 102. A transmitter 17 (not shown) in the learning unit 1 with a first shape 101 emits a primary transmission 25. A transmitter 17 (not shown) in the learning unit 1 with a second shape 102 emits a secondary transmission 27.
[119] FIG. 8 illustrates a single letter process 200. The single letter process 200 begins in the ready state 201. The ready state 201 progresses to the movement detection state 202. If no movement is detected in the movement detection state 202, the single letter process 200 returns to the ready state 201. If movement is detected in the movement detection state 202, the single letter process 200 progresses to the signal state 203.
[120] FIG. 9 illustrates a multiple letter process 300. The multiple letter process 300 begins in the ready state 301. The ready state 301 progresses to the movement detection state 302. If no movement is detected in the movement detection state 302, the multiple letter process 300 returns to the ready state 301. If movement is detected in the movement detection state 302, the multiple letter process 300 progresses to the unit detection state 303. If no learning unit in close proximity is detected in the unit detection state 303, the multiple letter process 300 returns to the ready state 301. If a learning unit in close proximity is detected in the unit detection state 303, the multiple letter process 300 progresses to the signal state 304.
[121] FIGS. 10 and 11 illustrate a learning system 10. The learning system 10 includes an interaction member 12. The interaction member 12 is in the form of a tray. The interaction member 12 has an exterior formed as a housing 3. The interaction member 12 has formed thereon (e.g., on an upper, display surface), a plurality of recognition holders 56. Each recognition holder 56 temporarily holds a single learning unit 1. The interaction member 12 also includes a storage holder 58. The storage holder 58 retains a plurality of learning units 1 in a variety of shapes 100. Each learning unit 1 includes a transmitter 17. Each recognition holder 56 is associated with an individual transmitter 17 retained within the housing 3 of the interaction member 12. The transmitter 17 associated with an individual recognition holder 56 of the interaction member 12 detects and identifies the transmitter 17 of the learning unit 1 located within the individual recognition holder 56. The interaction member 12 also includes speaker holes 11 formed in the housing. The interaction member 12 includes a speaker 13. The interaction member 12 includes a processor 36. The interaction member 12 includes an audio amplifier 40 and a power converter 38.
[122] The learning system 10 illustrated in FIGS. 10 and 11 may function similarly to the learning system illustrated in FIGS. 4A-4D. But instead of communication being established between a wand-shaped interaction member 12 and one or more learning units 1, the communication is established between the tray interaction member 12 and the learning unit(s) 1 as they are located into the individual recognition holders 56.
[123] Reference Number Listing:
[124] 1 - Learning unit; 3 - Housing; 5 - Housing cover; 7 - Light source; 9 - Light source hole(s); 10 - Learning system; 11 - Speaker hole(s); 12 - Interaction member; 13 - Speaker; 15 - Circuit board; 17 - Transmitter; 19 - Sensor; 21 - Vibrator; 23 - Wires; 25 - Primary transmission; 27 - Secondary transmission; 28 - Handle; 30 - Head; 32 - Distal end; 34 - Proximal end; 36 - Processor; 38 - Power converter; 40 - Audio amplifier; 42 - Inertial measurement unit (IMU); 44 - Radio frequency transceiver; 46 - Control system; 48 - Power source; 50 - Battery; 52 - Radio frequency identification tag; 54 - Switch;
56 - Recognition holder; 58 - Storage holder; 100 - Plurality of shapes; 101 - First shape; 102 - Second shape; 103 - Third shape; 104 - Fourth shape; 200 - Single letter process; 300 - Multiple letter process
[125] Working Examples:
[126] Housing: At least 10 letters, 4" high, sans serif, 3/4" thick, Baltic Birch plywood (for robustness), sanded, corners rounded between 1/8" and 1/4" radius, treated with butcher block finish or other non-toxic, food-grade finish, with 12.4 mm or 16 mm diameter RFID tags (125 kHz EM4100/EM4200) embedded mid-thickness.
[127] Wand: A plastic or wood "wand", with handle diameter between 3/4" and 1" for ease of grasping by young children; a 2"-wide star or other shape at the end carrying the RFID reader coil (RDM6300); a Bluetooth audio output speaker; Bluetooth link to a base computer (built into the nRF52840 microcontroller board); battery operated (either 3-4 AA/AAA cells or a rechargeable LiPo battery); communicating with a base PC via Bluetooth for processing and logging (base computer TBD). Processing on the base computer to be written in Python when possible.
[128] Additional Wand Features: Multiple, color-changeable LEDs (e.g., DotStar) for feedback; accelerometer/gyro for measuring wand movement (e.g., BNO085 9-DOF IMU fusion board); one or two buttons for extra input; audio recording for on-the-fly addition of words (via PDM MEMS microphone).
[129] Operation Example:
[130] The user places the letters wherever they like: on the floor, table top, etc.
[131] Hovering the wand over a letter emits the sound the letter makes, and potentially other information like the letter name and example usage. The nature of the response could be varied with the length of time since that letter was last seen; e.g., a full response would happen if it has been more than a minute, but just the letter sound if less than a minute. LEDs could provide feedback on consonant/vowel.
[132] Waving the wand past a number of letters will speak the entire word, using: (a) a recording (pre-stored or customized), or (b) text-to-speech.
[133] If a word is found in a dictionary, additional feedback can be provided, e.g., "that is how you spell dog." (LEDs can also provide feedback on whether a word is found in the dictionary.)
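A sketch of this operation logic on the base computer might track when each letter was last seen, vary the response accordingly, and check completed words against a word list; the speak helper, timing threshold, letter sounds, and dictionary contents are all assumptions:

import time

LAST_SEEN = {}                               # letter -> time last announced
WORD = []                                    # letters seen during one wave
DICTIONARY = {"car", "cart", "cat", "dog"}   # stand-in word list

def sound_of(letter):
    # Hypothetical letter-sound table; a real build would use recordings.
    return {"c": "k-uh", "a": "a-h", "r": "rrr", "t": "t-eh"}.get(
        letter.lower(), letter)

def on_letter(letter, speak):
    now = time.monotonic()
    if now - LAST_SEEN.get(letter, float("-inf")) > 60:
        speak(f"This is the letter {letter}.")  # full response after a minute
    speak(sound_of(letter))                     # letter sound either way
    LAST_SEEN[letter] = now
    WORD.append(letter)

def finish_word(speak):
    word = "".join(WORD).lower()
    WORD.clear()
    if word in DICTIONARY:
        speak(f"That is how you spell {word}.")

Called with speak=print, waving past C, A, and R and then finishing the word would announce each letter and confirm the spelling of "car".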
[134] Any numerical values recited in the above application include all values from the lower value to the upper value in increments of one unit, provided that there is a separation of at least 2 units between any lower value and any higher value. These are only examples of what is specifically intended, and all possible combinations of numerical values between the lowest value and the highest value enumerated are to be considered to be expressly stated in this application in a similar manner. Unless otherwise stated, all ranges include both endpoints and all numbers between the endpoints.
[135] The terms “generally” or “substantially” to describe angular measurements may mean about +/- 10° or less, about +/- 5° or less, or even about +/- 1° or less. The terms “generally” or “substantially” to describe angular measurements may mean about +/- 0.01° or greater, about +/- 0.1° or greater, or even about +/- 0.5° or greater. The terms “generally” or “substantially” to describe linear measurements, percentages, or ratios may mean about +/- 10% or less, about +/- 5% or less, or even about +/- 1% or less. The terms “generally” or “substantially” to describe linear measurements, percentages, or ratios may mean about +/- 0.01% or greater, about +/- 0.1% or greater, or even about +/- 0.5% or greater.
[136] The term “consisting essentially of” to describe a combination shall include the elements, ingredients, components, or steps identified, and such other elements, ingredients, components, or steps that do not materially affect the basic and novel characteristics of the combination. The use of the terms “comprising” or “including” to describe combinations of elements, ingredients, components, or steps herein also contemplates embodiments that consist essentially of the elements, ingredients, components, or steps.
[137] Plural elements, ingredients, components, or steps can be provided by a single integrated element, ingredient, component, or step. Alternatively, a single integrated element, ingredient, component, or step might be divided into separate plural elements, ingredients, components, or steps. The disclosure of “a” or “one” to describe an element, ingredient, component, or step is not intended to foreclose additional elements, ingredients, components, or steps.

Claims

CLAIMS
What is claimed is:
Claim 1. A learning system for use in educating of a user comprising:
a) one or more learning units indicative of one or more symbols;
b) optionally, one or more interaction members configured to interact with the one or more learning units;
c) one or more transmitters in the one or more learning units;
d) optionally, one or more other transmitters in the one or more interaction members configured to detect the one or more transmitters; and
e) one or more sensory output elements which output an auditory signal, a tactile signal, and/or a visual signal to an exterior of the one or more learning units, the one or more interaction members, or both to be sensed by the user and which are related to the one or more symbols;
wherein the one or more learning units and optionally, the one or more interaction members, are configured to be manipulated by the user and based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or any combination thereof related to the one or more learning units and/or the one or more interaction members, the auditory signal, the tactile signal, and/or the visual signal is generated that is transmitted to the exterior of the one or more learning units and/or the one or more interaction members via the one or more sensory output elements.
Claim 2. The learning system of Claim 1, wherein the one or more learning units includes a plurality of learning units.
Claim 3. The learning system of Claim 2, wherein the one or more interaction members are part of the learning system and include the one or more other transmitters.
Claim 4. The learning system of Claim 3, wherein the one or more transmitters are referred to as one or more unit transmitters and the one or more other transmitters are referred to as one or more interaction transmitters.
Claim 5. The learning system of Claim 1, wherein the learning system includes one or more sensors in the one or more learning units and/or the one or more interaction members configured to detect the orientation, the position, the movement, the angle, the acceleration, a change thereof, or a combination thereof of the one or more learning units and/or the one or more interaction members.
Claim 6. The learning system of Claim 1, wherein the one or more symbols include one or more graphemes and the auditory signal includes one or more phonemes.
Claim 7. The learning system of Claim 6, wherein the one or more phonemes are an audible representation of an individual grapheme or a combination of the one or more graphemes and are relayed as feedback to the user.
Claim 8. The learning system of Claim 7, wherein the one or more symbols include one or more letters, numbers, or both.
Claim 9. The learning system of Claim 7, wherein the one or more learning units is a plurality of learning units and each learning unit represents a letter of the alphabet.
Claim 10. The learning system of Claim 9, wherein the one or more phonemes include a phoneme of a single grapheme (e.g., sound of a single letter), a phoneme of a pair of graphemes (e.g., sound of two adjacent letters), or one or more phonemes related to a plurality of graphemes (e.g., sound of three or more adjacent letters).
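By way of illustration only, the grapheme-to-phoneme association recited in Claims 6 through 10 could follow the lines of the minimal Python sketch below; the table contents, the function name, and the preference for two-letter graphemes over single letters are assumptions made for this example, not limitations of the claims.

# Minimal sketch: map a sequence of letters to phonemes, preferring
# two-letter graphemes (digraphs) over single letters. All table
# entries and names are hypothetical examples.
PHONEME_TABLE = {
    "a": "/ae/", "b": "/b/", "c": "/k/", "h": "/h/", "s": "/s/", "t": "/t/",
    "ch": "/ch/", "sh": "/sh/", "th": "/th/",  # two-letter graphemes
}

def lookup_phonemes(letters):
    """Return the phonemes for a string of letters, digraphs first."""
    phonemes, i = [], 0
    while i < len(letters):
        pair = letters[i:i + 2]
        if pair in PHONEME_TABLE:              # sound of two adjacent letters
            phonemes.append(PHONEME_TABLE[pair])
            i += 2
        else:                                  # sound of a single letter
            phonemes.append(PHONEME_TABLE.get(letters[i], "?"))
            i += 1
    return phonemes

print(lookup_phonemes("cash"))  # ['/k/', '/ae/', '/sh/']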
Claim 11. The learning system of Claim 10, wherein the one or more phonemes form a word.
Claim 12. The learning system of Claim 1, wherein the one or more learning units include a housing which is in a shape of an alphanumeric character, a mathematical symbol, or both; or wherein the housing includes the alphanumeric character, the mathematical symbol, or both formed thereon or therein such that it is represented on an exterior of the housing.
Claim 13. The learning system of Claim 1, wherein the one or more learning units include a plurality of learning units which share a common shape, do not share any common shape, or share a mixture of common and uncommon shapes.
Claim 14. The learning system of Claim 5, wherein the one or more sensors include an inertial measurement unit (IMU), a tilt switch, an accelerometer, a gyrometer, a force sensor, a near-field communication module (e.g., NFC tag), a radio frequency identification module (e.g., RFID tag), a Bluetooth module, a Wi-Fi module, the like, or any combination thereof.
Claim 15. The learning system of Claim 14, wherein the one or more sensors are configured to turn the one or more learning units and/or the one or more interaction members on, off, and/or into a sleep mode upon a change in position and/or movement of the one or more learning units and/or the one or more interaction members being detected by the one or more sensors.
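The motion-triggered power behavior of Claim 15 admits a simple software reading; the Python sketch below assumes a hypothetical accelerometer threshold and idle timeout, neither of which is specified by the claims.

# Minimal sketch: wake on detected movement, sleep after an idle period.
# Threshold and timeout values are assumptions for illustration.
import time

MOTION_THRESHOLD_G = 0.15   # assumed acceleration delta treated as movement
SLEEP_TIMEOUT_S = 120.0     # assumed idle period before sleeping

class PowerManager:
    def __init__(self):
        self.state = "sleep"
        self.last_motion = time.monotonic()

    def on_accel_sample(self, delta_g):
        now = time.monotonic()
        if delta_g > MOTION_THRESHOLD_G:   # sensor reports movement
            self.last_motion = now
            self.state = "awake"           # power on / wake from sleep
        elif now - self.last_motion > SLEEP_TIMEOUT_S:
            self.state = "sleep"           # idle too long: sleep mode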
Claim 16. The learning system of Claim 1, wherein an individual transmitter of an individual learning unit is a unique transmitter associated with the one or more symbols indicated by the individual learning unit.
Claim 17. The learning system of Claim 16, wherein the one or more transmitters associated with other learning units and/or the one or more other transmitters of the one or more interaction members are configured to detect the unique transmitter and capable of distinguishing it from any of the other transmitters.
Claim 18. The learning system of Claim 1, wherein the one or more transmitters include a radio frequency transmitter, a near field communication transmitter, a Bluetooth module, a wireless (e.g., Wi-Fi) module, a camera, a scanner, the like, or any combination thereof.
Claim 19. The learning system of Claim 18, wherein the one or more transmitters are also capable of acting as a receiver.
Claim 20. The learning system of Claim 1, wherein the learning system includes the interaction member which includes the one or more other transmitters.
Claim 21. The learning system of Claim 20, wherein the one or more transmitters of the one or more learning units are radio frequency identification tags; and wherein the one or more other transmitters of the one or more interaction members are radio frequency identification readers.
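One plausible reading of Claim 21 is a passive tag on each learning unit resolved by a reader in the interaction member; the Python sketch below assumes hypothetical tag identifiers and a hypothetical mapping table.

# Minimal sketch: resolve an RFID tag read by an interaction member to
# the grapheme of the learning unit carrying that tag. Tag IDs and the
# mapping are hypothetical.
TAG_TO_GRAPHEME = {
    0x04A1: "a",
    0x04A2: "b",
    0x04A3: "c",
}

def on_tag_read(tag_id):
    """Return the grapheme for a tag, or None for an unknown tag."""
    return TAG_TO_GRAPHEME.get(tag_id)

print(on_tag_read(0x04A2))  # b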
Claim 22. The learning system of Claim 1, wherein the one or more transmitters are configured to receive a signal generated by another of the one or more transmitters and to return a response signal to the one or more transmitters.
Claim 23. The learning system of Claim 1, wherein the one or more transmitters are configured to transmit the signal between the one or more learning units, the one or more interaction members, the Internet, or any combination thereof.
Claim 24. The learning system of Claim 1, wherein the learning system includes one or more processors which are part of the one or more interaction members, the one or more learning units, or both.
Claim 25. The learning system of Claim 24, wherein the learning system includes one or more non-transitory storage mediums in communication with the one or more processors.
Claim 26. The learning system of Claim 25, wherein stored within the one or more non-transitory storage mediums are one or more audio databases which store one or more phonemes, words, phrases, and/or the like associated with the one or more graphemes of the one or more learning units.
Claim 27. The learning system of Claim 24, wherein the one or more transmitters and/or the one or more other transmitters are in communication with the one or more processors.
Claim 28. The learning system of Claim 24, wherein the one or more sensors are in communication with the one or more processors.
Claim 29. The learning system of Claim 24, wherein the one or more processors are part of a printed circuit board mounted within an interior of the one or more learning units, the one or more interaction members, or both.
Claim 30. The learning system of Claim 29, wherein the one or more sensors, the one or more transmitters, and/or the one or more sensory output elements are integrated into the printed circuit board.
Claim 31. The learning system of Claim 1, wherein the one or more interaction members, the one or more learning units, or both contain one or more power sources.
Claim 32. The learning system of Claim 31, wherein the one or more power sources is a battery, a capacitor, or both.
Claim 33. The learning system of Claim 31, wherein the one or more power sources is rechargeable and/or replaceable.
Claim 34. The learning system of Claim 33, wherein the one or more power sources is rechargeable by inductively transferring power through at least one housing of the one or more interaction members, one or more learning units, or both.
Claim 35. The learning system of Claim 1, wherein a mobile application is hosted on the Internet and is adapted to record signals generated by the one or more transmitters and/or the one or more other transmitters.
Claim 36. The learning system of Claim 1, wherein the learning system comprises one or more communication modules.
Claim 37. The learning system of Claim 36, wherein the one or more communication modules are configured to interact with a Wi-Fi network, a Bluetooth network, a cellular network, a GPS network, the like, or any combination thereof.
Claim 38. The learning system of Claim 1, wherein the one or more interaction members are shaped as a wand, a rod, a pen, a remote, a mouse, a tray, a box, the like, or any combination thereof.
Claim 39. The learning system of Claim 38, wherein the one or more interaction members includes an interaction member which is shaped as the wand.
Claim 40. The learning system of Claim 39, wherein the wand includes a head at a distal end affixed to a handle extending to a proximal end.
Claim 41. The learning system of Claim 38, wherein the one or more interaction members includes one or more processors, one or more sensors, the one or more other transmitters, and the one or more sensory output elements; and wherein the one or more sensory output elements includes one or more speakers and, optionally, one or more light sources.
Claim 42. The learning system of Claim 41, wherein the one or more sensory output elements are located within a head of a wand-shaped interaction member.
Claim 43. The learning system of Claim 42, wherein the one or more other transmitters are located within a handle or the head of the wand-shaped interaction member.
Claim 44. The learning system of Claim 38, wherein the one or more interaction members includes an interaction member which is shaped as the tray.
Claim 45. The learning system of Claim 44, wherein the tray includes a plurality of recognition holders formed in an upper surface and configured to detect an individual learning unit located therein.
Claim 46. The learning system of Claim 45, wherein the one or more other transmitters are located below the plurality of recognition holders.
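Claims 45 and 46 suggest per-holder detection; below is a minimal Python sketch, assuming each recognition holder has its own reader object with a hypothetical read_tag() method and reusing the hypothetical TAG_TO_GRAPHEME table from the sketch after Claim 21.

# Minimal sketch: poll a reader under each recognition holder to learn
# which learning unit, if any, occupies it. reader.read_tag() is a
# hypothetical interface returning a tag ID, or None for an empty holder.
def scan_tray(holder_readers):
    placements = {}
    for index, reader in enumerate(holder_readers):
        tag_id = reader.read_tag()
        if tag_id is not None:
            placements[index] = TAG_TO_GRAPHEME.get(tag_id, "?")
    return placements  # e.g., {0: "c", 1: "a", 2: "t"}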
Claim 47. The learning system of Claim 38, wherein the one or more interaction members is shaped as the box and includes one or more unit slots formed therein; and wherein the one or more unit slots provide access into an interior and are configured to have the one or more learning units passed therethrough to be disposed into the interior of the one or more interaction members.
Claim 48. The learning system of Claim 47, wherein the one or more other transmitters are located in proximity to the one or more unit slots.
Claim 49. The learning system of Claim 1, wherein the learning system includes a storage container and/or a base unit which stores the one or more learning units and, optionally, the one or more interaction members.
Claim 50. The learning system of Claim 49, wherein the storage container and/or the base unit includes a processor; a communication module; and, optionally, one or more container transmitters, one or more container sensors, the like, or any combination thereof.
Claim 51. The learning system of Claim 1, wherein the learning system includes a graphic user interface (GUI).
Claim 52. The learning system of Claim 51, wherein the graphic user interface is part of the one or more interaction members, a storage container, a base unit, the like, or any combination thereof.
Claim 53. The learning system of Claim 51, wherein the graphic user interface is a screen capable of displaying information to the user, a touch-screen capable of displaying information to the user and receiving inputs from the user, or both.
Claim 54. The learning system of Claim 1, wherein the one or more learning units includes two or more learning units; and wherein placement and detection of the two or more learning units by one another and/or placement of the interaction member in proximity to and detection of the two or more learning units in sequence results in a phoneme audibly generated representing the grapheme formed by the two or more learning units.
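The sequence behavior of Claim 54 could be sketched as follows in Python; the word list is a hypothetical example, and lookup_phonemes refers to the illustrative sketch given after Claim 10.

# Minimal sketch: when units are detected in sequence, emit each phoneme
# and, if the sequence spells a known word, the word itself. The word
# list and names are hypothetical.
KNOWN_WORDS = {"cat", "sat", "cash"}

def on_sequence(letters):
    outputs = lookup_phonemes(letters)   # per-grapheme phonemes
    word = "".join(letters)
    if word in KNOWN_WORDS:
        outputs.append(word)             # speak the assembled word as well
    return outputs

print(on_sequence("cat"))  # ['/k/', '/ae/', '/t/', 'cat']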
Claim 55. The learning system of Claim 1, wherein the one or more sensory output elements change the auditory signal, the tactile signal, and/or the visual signal based on at least one second learning unit being in close proximity to the one or more learning units, the at least one second learning unit being detected by an interaction member after detection of the one or more learning units, or a combination thereof.
Claim 56. The learning system of Claim 1, wherein one or more housings of the one or more learning units and/or the one or more interaction members are made of one or more polymers, organic materials, or both.
Claim 57. The learning system of Claim 56, wherein the one or more organic materials include wood, sisal, rattan, cotton, and/or the like.
Claim 58. The learning system of Claim 57, wherein the one or more organic materials include the wood.
Claim 59. The learning system of Claim 1, wherein one or more housings of the one or more learning units, the one or more interaction members, or both is injection molded, blow molded, vacuum formed, polymer cast, CNC machined, 3D printed, the like, or a combination thereof.
Claim 60. The learning system of Claim 1, wherein one or more interactions of the user with the learning system determine which of a plurality of advanced learning units are appropriate for the user, a skill level of the user, progress of learning of the user, a potential presence of one or more learning disabilities of the user, the like, or a combination thereof.
Claim 61. The learning system of Claim 60, wherein the plurality of advanced learning units contain different symbols than the one or more symbols included in an initial set of one or more learning units.
Claim 62. The learning system of Claim 61, wherein the plurality of advanced learning units are designed to match an age, a skill level, or both of the user.
Claim 63. The learning system of Claim 1, wherein at least one of the one or more learning units are unpowered while still providing for operation of the one or more transmitters.
Claim 64. The learning system of Claim 63, wherein the one or more transmitters associated with the one or more learning units are one or more passive identification transmitters.
Claim 65. The learning system of Claim 1, wherein the auditory signal is generated by a speaker contained within the one or more learning units, the one or more interaction members, or both.
Claim 66. The learning system of Claim 1, wherein the tactile signal is generated by an electrical motor or piezoelectric transducer contained within the one or more learning units, the one or more interaction members, or both.
Claim 67. The learning system of Claim 1, wherein the visual signal is generated by one or more light sources, including one or more light emitting diodes, one or more incandescent light bulbs, one or more fluorescent light bulbs, or a combination thereof located within the one or more learning units, the one or more interaction members, or both.
Claim 68. A method of using a learning system for a user to learn and associate one or more phonemes to one or more graphemes, the method including: a) optionally, the user physically manipulating one or more learning units and/or one or more interaction members to cause the learning system to be activated; and b) the user physically manipulating the one or more learning units and/or the one or more interaction members such that based on an orientation, a position, a movement, an angle, an acceleration, an interaction with a learning unit, a change thereof, or a combination thereof of the one or more learning units and/or the one or more interaction members, an auditory signal, and optionally, a tactile signal and/or a visual signal, is generated that is transmitted to an exterior of the one or more learning units and/or the one or more interaction members via one or more sensory output elements.
Claim 69. The method of Claim 68, wherein the manipulating and activation includes: i) moving a switch of the learning system such as to cause the learning system to power on and/or wake out of a sleep mode; and/or ii) physically moving the one or more learning units, the one or more interaction members, or both such as to cause one or more sensors to detect the movement and cause the learning system to power on and/or wake out of the sleep mode.
Claim 70. The method of Claim 69, wherein the learning system includes the switch as part of the one or more learning units, the one or more interaction members, or both.
Claim 71. The method of Claim 69, wherein the one or more sensors include an inertial measurement unit (IMU), a tilt switch, an accelerometer, a gyrometer, a force sensor, a near-field communication module (e.g., NFC tag), a radio frequency identification module (e.g., RFID tag), a Bluetooth module, a Wi-Fi module, the like, or any combination thereof.
Claim 72. The method of Claim 71, wherein the one or more sensors are integrated as part of the one or more interaction members.
Claim 73. The method of Claim 68, wherein the one or more sensory output elements include one or more speakers which transmit the auditory signal.
Claim 74. The method of Claim 73, wherein the one or more sensory output elements include one or more light sources which transmit the visual signal.
Claim 75. The method of Claim 73, wherein the one or more sensory output elements include one or more electrical motors and/or piezoelectric transducers which transmit the tactile signal.
Claim 76. The method of Claim 68, wherein the manipulating to result in the auditory signal includes: a) physically moving a single learning unit to result in the auditory signal being a phoneme related to a grapheme represented by the single learning unit; b) physically moving an interaction member into a detection range of the single learning unit to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit; c) physically moving the single learning unit into a detection range of an interaction member such as to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit; d) physically moving one or more subsequent learning units into a detection range of one or more preceding learning units such as to result in the auditory signal being one or more phonemes related to both a grapheme represented by the one or more preceding learning units and the one or more subsequent learning units; e) physically moving the interaction member into the detection range of a plurality of learning units in a sequence such as to detect each learning unit and result in the auditory signal related to one or more phonemes related to the grapheme represented by the sequence of the plurality of learning units; and/or f) physically moving the plurality of learning units in a sequence and into a detection range of an interaction member such as to result in the auditory signal being the one or more phonemes related to the grapheme represented by the sequence of the plurality of learning units.
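The six interaction modes a) through f) of Claim 76 could share one dispatch point in firmware; the Python sketch below is illustrative only, with hypothetical event names, and reuses the hypothetical helpers from the earlier sketches.

# Minimal sketch: route the interaction modes of Claim 76 to a common
# speech output callback. Event field names are hypothetical.
def handle_event(event, speak):
    kind = event["type"]
    if kind == "unit_moved":             # mode (a): a unit is moved
        speak(lookup_phonemes(event["grapheme"]))
    elif kind == "wand_meets_unit":      # modes (b) and (c): wand and unit meet
        speak(lookup_phonemes(event["grapheme"]))
    elif kind == "units_adjacent":       # mode (d): units detect one another
        speak(lookup_phonemes("".join(event["graphemes"])))
    elif kind == "sequence_scanned":     # modes (e) and (f): a scanned sequence
        speak(on_sequence(event["graphemes"]))

handle_event({"type": "unit_moved", "grapheme": "a"}, print)  # prints ['/ae/']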
Claim 77. The method of Claim 76, wherein the method includes the physically moving the single learning unit to result in the auditory signal related to the grapheme represented by the single learning unit; and wherein the single learning unit includes one or more sensors which are configured to detect the movement.
Claim 78. The method of Claim 77, wherein the one or more sensory output elements include one or more speakers which are located within the single learning unit; wherein the single learning unit includes one or more processors and one or more audio amplifiers; wherein the one or more sensors communicate the detected movement to the one or more processors; wherein the one or more processors communicate one or more audio signals related to the phoneme to the one or more audio amplifiers; and wherein the one or more audio signals are then relayed to the speaker to result in the auditory signal which is a vocal representation of the phoneme.
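Claim 78 recites a sensor-to-processor-to-amplifier-to-speaker chain; the Python sketch below models that chain with hypothetical stand-in classes, since the claims do not prescribe any particular software structure.

# Minimal sketch: the Claim 78 signal chain. Classes, names, and the
# audio database format are hypothetical stand-ins.
class Speaker:
    def emit(self, samples):
        print(f"playing {len(samples)} samples")

class AudioAmplifier:
    def __init__(self, speaker):
        self.speaker = speaker
    def play(self, samples):
        self.speaker.emit(samples)   # amplified signal drives the speaker

def on_movement(grapheme, audio_db, amplifier):
    samples = audio_db[grapheme]     # processor fetches the phoneme audio
    amplifier.play(samples)          # processor relays it to the amplifier

amp = AudioAmplifier(Speaker())
on_movement("a", {"a": [0] * 4410}, amp)  # prints "playing 4410 samples"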
Claim 79. The method of Claim 76, wherein the method includes the physically moving the interaction member into the detection range of the single learning unit to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit; wherein the interaction member includes one or more interaction transmitters; wherein the single learning unit includes one or more unit transmitters; and wherein the one or more interaction transmitters detect and identify the one or more unit transmitters when moved into the detection range.
Claim 80. The method of Claim 79, wherein the one or more sensory output elements include one or more speakers which are located within the interaction member; wherein the interaction member includes one or more processors and one or more audio amplifiers; wherein the one or more interaction transmitters communicate the detection and identification of the single learning unit to the one or more processors; wherein the one or more processors communicate one or more audio signals related to the phoneme to the one or more audio amplifiers; and wherein the one or more audio signals are then relayed to the speaker to result in the auditory signal which is a vocal representation of the phoneme.
Claim 81. The method of Claim 76, wherein the method includes the physically moving the single learning unit into the detection range of the interaction member to result in the auditory signal being the phoneme related to the grapheme represented by the single learning unit; wherein the interaction member includes one or more interaction transmitters; wherein the single learning unit includes one or more unit transmitters; and wherein the one or more interaction transmitters detect and identify the one or more unit transmitters when moved into the detection range.
Claim 82. The method of Claim 81, wherein the one or more sensory output elements include one or more speakers which are located within the interaction member; wherein the interaction member includes one or more processors and one or more audio amplifiers; wherein the one or more interaction transmitters communicate the detection and identification of the single learning unit to the one or more processors; wherein the one or more processors communicate one or more audio signals related to the phoneme to the one or more audio amplifiers; and wherein the one or more audio signals are then relayed to the speaker to result in the auditory signal which is a vocal representation of the phoneme.
Claim 83. The method of Claim 76, wherein the method includes the physically moving the one or more subsequent learning units into the detection range of the one or more preceding learning units to result in the auditory signal related to the grapheme represented by the one or more preceding learning units and the one or more subsequent learning units; and wherein the one or more preceding learning units and the one or more subsequent learning units each include one or more transmitters configured to detect and identify one another.
Claim 84. The method of Claim 83, wherein the one or more sensory output elements include one or more speakers which are located within one or more of the preceding learning units and/or the one or more subsequent learning units; wherein the one or more learning units include one or more processors and one or more audio amplifiers; wherein the one or more processors communicate one or more audio signals related to the phoneme to the one or more audio amplifiers; and wherein the one or more audio signals are then relayed to the speaker to result in the auditory signal which is a vocal representation of the phoneme.
Claim 85. The method of Claim 76, wherein the method includes the physically moving the interaction member into the detection range of the plurality of learning units in the sequence (e.g., detecting one after the other in sequential order) to result in the auditory signal being the phoneme related to the grapheme represented by the sequence of the plurality of learning units; wherein the interaction member includes one or more interaction transmitters; wherein each of the plurality of learning units includes one or more unit transmitters; and wherein the one or more interaction transmitters detect and identify the one or more unit transmitters and their sequence when moved into the detection range.
Claim 86. The method of Claim 85, wherein the one or more sensory output elements include one or more speakers which are located within the interaction member; wherein the interaction member includes one or more processors and one or more audio amplifiers; wherein the one or more interaction transmitters communicate the detection, identification, and sequence of the plurality of learning units to the one or more processors; wherein the one or more processors communicate one or more audio signals related to the phoneme to the one or more audio amplifiers; and wherein the one or more audio signals are then relayed to the speaker to result in the auditory signal which is a vocal representation of the phoneme.
Claim 87. The method of Claim 76, wherein the method includes the physically moving the plurality of learning units in the sequence and into the detection range of the interaction member to result in the auditory signal being the phoneme related to the grapheme represented by the sequence; wherein the interaction member includes one or more interaction transmitters; wherein each of the plurality of learning units includes one or more unit transmitters; and wherein the one or more interaction transmitters detect and identify the one or more unit transmitters and their sequence when moved into the detection range.
Claim 88. The method of Claim 87, wherein the one or more sensory output elements include one or more speakers which are located within the interaction member; wherein the interaction member includes one or more processors and one or more audio amplifiers; wherein the one or more interaction transmitters communicate the detection, identification, and sequence of the plurality of learning units to the one or more processors; wherein the one or more processors communicate one or more audio signals related to the phoneme to the one or more audio amplifiers; and wherein the one or more audio signals are then relayed to the speaker to result in the auditory signal which is a vocal representation of the phoneme.
Claim 89. The method of Claim 76, wherein the one or more interaction members, learning units, or both include one or more processors and non-transitory storage mediums; wherein the non-transitory storage mediums include one or more algorithms stored therein which are accessible and executable by the one or more processors; and wherein the one or more algorithms instruct the one or more processors how to react to one or more signals received from one or more transmitters and what auditory signals to relay toward one or more audio amplifiers and/or speakers.
Claim 90. The method of Claim 89, wherein the one or more non-transitory storage mediums include one or more audio databases which include one or more audio files stored therein and associated with the one or more phonemes represented by the one or more graphemes of the one or more learning units.
Claim 91. The method of Claim 68, wherein the learning system includes one or more user profile databases stored in a non-transitory storage medium in the interaction member, the learning unit, remotely from the interaction member and the learning unit, accessible via the Internet, or any combination thereof.
Claim 92. The method of Claim 91, wherein the method includes the user logging in or being automatically recognized by the learning system and the learning system recording the user’s activity and progression with learning phonemes associated with graphemes over a duration of time of interaction with the learning system.
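Claim 92's activity recording could be as simple as appending timestamped interaction records to a per-user log; the Python sketch below assumes a hypothetical JSON file format and field names.

# Minimal sketch: append one timestamped record per grapheme-phoneme
# interaction. File format and field names are hypothetical.
import json, time

def record_interaction(path, user_id, grapheme, correct):
    try:
        with open(path) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []                      # first interaction for this profile
    log.append({
        "user": user_id,
        "grapheme": grapheme,
        "correct": correct,           # e.g., whether a prompt was matched
        "timestamp": time.time(),
    })
    with open(path, "w") as f:
        json.dump(log, f)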
Claim 93. The method of any of Claims 68 to 92 using the learning system according to any and all combinations of Claims 1 to 67.
Claim 94. The learning system of Claim 1 in combination with any and all combinations of the features of any of Claims 2 to 67.
Claim 95. The method of Claim 68 in combination with any and all combinations of the features of any of Claims 69 to 92.
Claim 96. A learning system as described in the description, as illustrated in the drawings, or both.
Claim 97. A method of using a learning system as described in the description, as illustrated in the drawings, or both.
PCT/US2024/011485 2023-01-12 2024-01-12 Language and literacy learning system and method WO2024152007A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US202363479661P 2023-01-12 2023-01-12
US63/479,661 2023-01-12
US202363479892P 2023-01-13 2023-01-13
US63/479,892 2023-01-13
US202363495855P 2023-04-13 2023-04-13
US63/495,855 2023-04-13

Publications (1)

Publication Number Publication Date
WO2024152007A1

Family

ID=91897736

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/011485 WO2024152007A1 (en) 2023-01-12 2024-01-12 Language and literacy learning system and method

Country Status (1)

Country Link
WO (1) WO2024152007A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130302763A1 (en) * 2010-11-15 2013-11-14 Smalti Technology Limited Interactive system and method of modifying user interaction therein
US20140248590A1 (en) * 2013-03-01 2014-09-04 Learning Circle Kids LLC Keyboard for entering text and learning to read, write and spell in a first language and to learn a new language
US20150356881A1 (en) * 2014-06-04 2015-12-10 Andrew Butler Phonics Exploration Toy
US20180339226A1 (en) * 2003-03-25 2018-11-29 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US10332417B1 (en) * 2014-09-22 2019-06-25 Foundations in Learning, Inc. System and method for assessments of student deficiencies relative to rules-based systems, including but not limited to, ortho-phonemic difficulties to assist reading and literacy skills

Similar Documents

Publication Publication Date Title
JP5154399B2 (en) Operable interactive device
US7347760B2 (en) Interactive toy
EP3228370A1 (en) Puzzle system interworking with external device
CN108109622A (en) A kind of early education robot voice interactive education system and method
CN101105894B (en) Multifunctional language learning machine
AU2006226156B2 (en) Manipulable interactive devices
KR100940299B1 (en) Educational toys to output joining information
US20190270026A1 (en) Automatic Mobile Robot For Facilitating Activities To Improve Child Development
EP4263013B1 (en) Interactive toy-set for playing digital media
US7351062B2 (en) Educational devices, systems and methods using optical character recognition
US9111463B2 (en) Interactive speech synthesizer for enabling people who cannot talk but who are familiar with use of anonym moveable picture communication to autonomously communicate using verbal language
WO2024152007A1 (en) Language and literacy learning system and method
US20250037600A1 (en) Method for providing chatbot for rehabilitation education for hearing loss patient, and system therefor
US20180165986A1 (en) Real Time Phonetic Learning System
CN110047341A (en) Contextual language learning apparatus, system and method
CN113435549A (en) Device and method for assisting sensory integration training
KR101407594B1 (en) Appratus for providing educational contents, and method for providing contents thereof
CN206421266U (en) Baby's computer learning machine
JP2023512379A (en) educational toy set
WO2012056459A1 (en) An apparatus for education and entertainment
CN2524295Y (en) Interactive teaching device
WO2018079018A1 (en) Information processing device and information processing method
KR102732445B1 (en) AI Interactive Robot and Driving Method Thereof, and Conversation System by Using AI Interactive Robot
US20060020470A1 (en) Interactive speech synthesizer for enabling people who cannot talk but who are familiar with use of picture exchange communication to autonomously communicate using verbal language
KR20030080818A (en) Speaking Story Book

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24742116

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)