WO2015075107A1 - Interactive and adaptable learning environment - Google Patents


Info

Publication number
WO2015075107A1
Authority
WO
WIPO (PCT)
Prior art keywords
interactive unit
building blocks
interactive
unit
user input
Application number
PCT/EP2014/075089
Other languages
French (fr)
Inventor
Peder Esben BILDE
Original Assignee
Fonden For Helene Elsass Centeret
Application filed by Fonden For Helene Elsass Centeret filed Critical Fonden For Helene Elsass Centeret
Publication of WO2015075107A1 publication Critical patent/WO2015075107A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H33/00: Other toys
    • A63H33/04: Building blocks, strips, or similar building parts
    • A63H33/042: Mechanical, electrical, optical, pneumatic or hydraulic arrangements; Motors
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B1/00: Manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass

Definitions

  • the present invention relates to an interactive and physically adaptable learning environment - also known as an enhanced environment.
  • the environment is built by positioning and combining building blocks, visual displays, and touch- and motion- sensors.
  • the purpose of the system is to build playful spacious environments, tracks, mazes and the like, that may challenge motor as well as cognitive skills and entertain children.
  • the disclosed environment is targeted at children of age 1-5 years.
  • In order to stimulate development of motor and cognitive skills of children, the child needs physical as well as mental challenges. Furthermore, the challenges need to be inspiring, fun, motivational and adaptable to fit the age and skill of the child. As an even further requirement, challenges need to be safe and designed such that they do not harm or frighten the child.
  • One way of stimulating the development of children's motor and cognitive skills, in a safe and motivating manner, is by creating environments of walls and obstacles which may be of a soft material such that the child is not hurt if bumping into the object.
  • Such an environment, sometimes referred to as an enhanced environment, may be obtained by the use of foam building blocks, which in some cases are designed to fit together by the use of recesses and protrusions or hook-and-loop fasteners such as Velcro.
  • a system is sought that allows and stimulates the child to be physically active, employ brain-body interaction, and utilize social skills. Physical activity improves body-consciousness, flexibility, and power of coordination. Brain-body interaction enhances spatial awareness and develops fantasy. Social relations build self-confidence, the ability to interact and cooperate in groups, and pride in one's own abilities. Thus, a system is sought that is able to manage one user as well as multiple users cooperating. In creating a stimulating and interactive environment for playing children, a system is developed that allows connecting soft building blocks and interactive devices, hence creating a spacious environment where the physical and mental skills of the child are challenged. Furthermore, a system is developed that may be physically adapted to the increasing motor skill level of the child. This adaptive feature may even further stimulate the child, by letting the child assist in the construction of the environment.
  • a system providing a physical learning environment such as a physical learning environment stimulating physical activity.
  • the system comprises a plurality of building blocks including at least a first building block and a second building block, the plurality of building blocks forming the physical learning environment, e.g. building blocks of the plurality of building blocks may have a dimension of at least 20 cm.
  • At least some of the plurality of building blocks comprise an interactive unit.
  • Each interactive unit comprises a visual display configured to display visual information, a user interface for receiving a user input, and a speaker configured to reproduce audible information.
  • the system further comprising a control unit electronically connected to each interactive unit.
  • the control unit being configured to receive user input data from at least a first interactive unit and to control at least a second interactive unit to elicit audible and/or visual information in response to the received user input data.
  • the control unit is configured to receive user input data from a first interactive unit and to control a second interactive unit to elicit audible and/or visual information in response to the received user input data.
  • Eliciting of information may comprise generating and/or producing information, such as sounding, transmitting, playing, displaying, etc.
  • Eliciting of information, such as audible and/or visual information, may comprise a speaker transmitting a desired sound, such as a dog barking, a piano playing, a siren, and/or a person speaking.
  • Eliciting of information, such as audible and/or visual information, may comprise a visual display showing desired visual information, such as a still picture, a moving picture, and/or an animation, e.g. the visual information may comprise a dog, a piano, an ambulance, and/or a person.
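The input-to-response coupling between units can be sketched as a small dispatcher; the class and method names (`ControlUnit`, `InteractiveUnit`, `elicit`) are illustrative assumptions, not terms from the disclosure.

```python
# Minimal sketch of the claimed behaviour: user input on one unit
# triggers audible/visual output on a *different* unit. All class
# and method names are illustrative assumptions.

class InteractiveUnit:
    def __init__(self, name):
        self.name = name
        self.output_log = []          # stands in for display/speaker

    def elicit(self, info):
        # A real unit would drive its display and speaker here.
        self.output_log.append(info)

class ControlUnit:
    def __init__(self, units):
        self.units = {u.name: u for u in units}

    def on_user_input(self, source, data):
        # Respond to input on the source unit by eliciting output on
        # another unit, encouraging the user to move between units.
        targets = [u for n, u in self.units.items() if n != source]
        target = targets[0]
        target.elicit(f"response to {data!r}")
        return target.name

first = InteractiveUnit("first")
second = InteractiveUnit("second")
control = ControlUnit([first, second])
responded = control.on_user_input("first", "touch")
print(responded)            # the second unit produced the output
print(second.output_log)
```

A real implementation would replace the log list with actual audio/video playback, but the control flow (input on one unit, output on another) is the pattern the text describes.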
  • a method for providing a physical learning environment comprising a plurality of building blocks such as a physical learning environment stimulating physical activity, wherein at least some of the plurality of building blocks comprises an interactive unit.
  • the method comprises connecting a first number of building blocks in a first location.
  • the first number of building blocks includes at least a first building block comprising a first interactive unit.
  • the method further comprises connecting a second number of building blocks in a second location.
  • the second number of building blocks includes at least a second building block comprising a second interactive unit.
  • a first audible and/or visual information is elicited in the first interactive unit, and user input data is received, and second audible and/or visual information is elicited in the second interactive unit in response to the received user input data.
  • the method may be implemented in a system providing a physical learning environment.
  • At least a part of the method may be incorporated in software adapted to run in a processing unit.
  • the present invention provides a learning environment that stimulates children to perform physical and cognitive challenging tasks in a challenging, motivational and amusing manner. Furthermore, the interactive element allows challenge of cognitive skills, and allows adjustment of the task difficulty to fit the skill of the user. Also, the building block arrangement allows the physical challenges to be adjusted to fit the skill of the user.
  • the disclosure provides interactivity to a physical learning environment stimulating physical activity.
  • the disclosure provides an increased user experience. It is an even further advantage of the disclosure that a user is encouraged to physically move from one location to another location.
  • the disclosure provides that a user input in one location provides a response in a distant location. For example, an output is elicited in a second location in response to an input in a first location; thereby, the user is encouraged to move from the first location to the second location.
  • the system may comprise a plurality of interactive units, the plurality of interactive units may include the first, the second, a third, a fourth, and a fifth interactive unit, etc.
  • the plurality of building blocks may comprise an interactive unit.
  • the first building block may comprise a first interactive unit, such as the first interactive unit.
  • the second building block may comprise a second interactive unit, such as the second interactive unit.
  • the system may comprise building blocks, such as some of the plurality of building blocks, which do not comprise an interactive unit.
  • the plurality of building blocks may comprise a processing unit and/or memory. At least some of the plurality of interactive units may have a user interface, and the user interface may comprise any user-manipulable input, such as a touch sensor, a joystick, a trackball, a push button, a mouse, etc.; the user interface may further comprise audio input and/or audio output means, such as microphones and/or speakers.
  • the user interface may further comprise visual sensors, such as a video camera or an infrared detector.
  • a visual sensor may be configured to receive user input data, such as user input data comprising a predetermined gesture, and/or presentation of a predetermined object, such as an object having a predetermined colour, such as a ball being red or blue or green or yellow.
  • the plurality of interactive units may be any computing device, such as computers, such as laptops, tablet computers, smart phones, etc.
  • the plurality of interactive units may be of a same kind or may be different interactive units.
  • the control unit may comprise one or more of a user interface, a processing unit, a memory, and a visual display.
  • the user interface of the control unit may comprise any user-manipulable input, such as a touch sensor, a joystick, a trackball, a push button, a mouse, etc.; the user interface of the control unit may further comprise audio inputs and/or audio outputs, such as microphones and/or speakers.
  • the control unit may be any computing device, such as computers, such as laptops, tablet computers, smart phones, etc. At least some of the plurality of interactive units, including the first and/or the second interactive unit, may comprise wireless communication means for enabling wireless communication. Also, the control unit may comprise wireless communication means for enabling wireless communication.
  • the wireless communication means may be configured to communicate according to wireless communication protocols such as WiFi, Bluetooth, BLE, NFC, RFID and/or GSM.
  • the plurality of interactive units may communicate wirelessly, and at least some of the plurality of interactive units may communicate wirelessly with the control unit. At least some of the plurality of interactive units may communicate either directly from one interactive unit to another, or from one interactive unit via the control unit to another interactive unit.
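The two routing options just described, direct unit-to-unit delivery versus delivery via the control unit, can be sketched as follows; the function names and message format are assumptions made for the example.

```python
# Sketch of the two routing options: a message from one interactive
# unit may go directly to another unit, or via the control unit,
# which may annotate (or transform) it before forwarding.

def route_direct(message, target, inbox):
    inbox.setdefault(target, []).append(message)

def route_via_control(message, target, inbox):
    # The control unit could apply the preprogrammed algorithm here;
    # this sketch merely tags the message as having been relayed.
    forwarded = {"via": "control", **message}
    inbox.setdefault(target, []).append(forwarded)

inbox = {}
route_direct({"event": "touch"}, "unit2", inbox)
route_via_control({"event": "touch"}, "unit3", inbox)
print(inbox["unit2"])
print(inbox["unit3"])
```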
  • At least some of the interactive units may have limited processing power and may be controlled by the control unit.
  • any of the plurality of interactive units may comprise the control unit.
  • the system may comprise fewer parts and may be less expensive, if one of the interactive units can be configured to work as the control unit, such as to perform the actions of the control unit.
  • Any of the plurality of interactive units may comprise the control unit.
  • all of the plurality of interactive units may be configurable to comprise the control unit.
  • the control unit may be provided in one processor, or the control unit may be provided in a plurality of processors, i.e. the control unit may be provided in one or more interactive units.
  • the system may comprise a wearable identification tag.
  • the wearable identification tag may be configured to be worn by a user of the learning environment.
  • At least some of the plurality of interactive units, including the first interactive unit and/or the second interactive unit may comprise a detector or reader to detect the presence and/or identity of a wearable identification tag.
  • At least some of the plurality of interactive units, including the first interactive unit and/or the second interactive unit may be configured to send a control signal to the control unit when a wearable identification tag is in proximity of the interactive unit.
  • the first interactive unit may be configured to send a control signal to the control unit when the wearable identification tag is in proximity of the first interactive unit.
  • the wearable identification tag may comprise any presence or proximity sensor unit, such as an RFID tag or an NFC tag.
  • the detector or reader may be any presence or proximity reader, such as an RFID reader or an NFC device, appropriate for reading the appropriate type of wearable identification tag.
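The tag-detection behaviour can be sketched as a small handler: when a known tag is read in proximity of an interactive unit, the unit sends a control signal to the control unit. The tag registry, UIDs, and signal format below are illustrative assumptions.

```python
# Hedged sketch: a wearable RFID/NFC tag read near a unit produces a
# control signal (here a plain dict) for the control unit. Tag UIDs
# and the registry contents are invented for the example.

KNOWN_TAGS = {"04:A2:19": "child_1", "04:7F:33": "child_2"}

def on_tag_read(unit_name, tag_uid, send_to_control):
    # Look up the tag; ignore unknown tags, report known ones with
    # the identity of the wearer.
    user = KNOWN_TAGS.get(tag_uid)
    if user is None:
        return False
    send_to_control({"unit": unit_name, "user": user, "event": "tag_present"})
    return True

signals = []
on_tag_read("first", "04:A2:19", signals.append)
on_tag_read("first", "FF:FF:FF", signals.append)   # unknown tag, ignored
print(signals)
```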
  • the second interactive unit may be configured to be positioned at a distance from the position of the first interactive unit, such that the second interactive unit is out of reach from the position of the first interactive unit.
  • the second interactive unit may be configured to be positioned at a distance of more than 1 meter from the position of the first interactive unit, such as more than 2 meters, such as more than 3 meters.
  • the second interactive unit being positioned at a distance of more than 1 meter from the position of the first interactive unit, such as more than 2 meters, such as more than 3 meters, provides a way of positioning the second interactive unit out of reach from the position of the first interactive unit.
  • the second interactive unit being out of reach from the position of the first interactive unit prompts the user to physically move from one position to another position. Likewise, any further interactive unit may be provided out of reach of a previous interactive unit.
  • the system may comprise further sensors, such as proximity sensors, etc., visual markers, such as lamps, such as lights, and/or audio markers, such as speakers, horns, etc., which may be connected to one or more of the interactive units and/or to the control unit.
  • the plurality of building blocks may have several shapes and sizes, and the system may provide building blocks of different shapes and sizes.
  • shapes may include mats, logs, cubes, cuboids, prisms, cylinders, pyramids, etc.
  • the building blocks may have any dimensions, such as dimensions between 5-100 cm, such as between 10-80 cm, such as between 20-60 cm.
  • the building blocks may have at least one dimension of more than 5 cm, such as more than 10 cm, such as more than 20 cm, such as 30 cm.
  • the building blocks may have dimensions which allow creation of obstacles, such as obstacles between interactive units.
  • the building blocks may have dimensions which allow a child to move the building blocks around.
  • the plurality of building blocks may comprise building blocks of at least two different shapes, such as at least three different shapes, such as at least four different shapes. Different shapes provide the possibility of constructing different environments and constructions.
  • the plurality of building blocks may comprise different building blocks having different weight such as weight in the range of 0.05-5.0 kg, such as in the range of 0.2-3.0 kg, such as in the range of 0.5-2.0 kg. Weight of the building blocks preferably should be such that a child is able to move the building blocks around.
  • the plurality of building blocks may comprise different building blocks having different softness, ductility and/or tactility.
  • the plurality of building blocks may be resilient building blocks, such as elastic or flexible.
  • the plurality of building blocks may be made primarily from a foam material, such as EVA foam. Providing the building blocks in a resilient material provides a reduced risk of injuries.
  • the plurality of building blocks may be connected in a plurality of groups of building blocks.
  • a first group may comprise the first building block comprising the first interactive unit
  • a second group may comprise the second building block comprising the second interactive unit.
  • the plurality of building blocks may comprise a connector element.
  • the connector element may be configured to connect the plurality of building blocks, and the connector element may be configured to connect any of the plurality of interactive units with any of the plurality of building blocks.
  • the plurality of building blocks may be configured to be connectable in order to build mazes and/or obstacles such as walls, tunnels, stairs and/or steps, etc.
  • the plurality of building blocks may be configured to create obstacles between the first interactive unit and the second interactive unit, such as physical obstacles, such as obstacles to be surmounted by the user to reach the second interactive unit.
  • the second interactive unit may be configured to be positioned such that it is not visible from the position of the first interactive unit. Hence, the user will need to locate the second interactive unit from any elicited audible information.
  • the second interactive unit may be hidden behind a plurality of building blocks.
  • user input data may be received from the first interactive unit.
  • user input data may be received from the second interactive unit.
  • the user input data may comprise a predetermined user input data part.
  • the user input data part may comprise touch of a specific part of a touch screen, a touch of a button, a movement of a joystick, a recording of a microphone input signal, etc.
  • a predetermined user input data part may describe user input data that is adequate to complete a task.
  • the user input data may comprise a predetermined gesture, and/or presentation of a predetermined object, such as an object having a predetermined colour, such as a ball being red or blue or green or yellow.
  • a task may be to choose and show to the interactive unit a certain colour object, e.g. a red ball.
  • the user may be presented with a plurality of coloured physical balls.
  • the task may be completed if the user picks up the red ball and shows it to a camera of the interactive unit.
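The "show a red ball" task above can be sketched as a check on a camera frame. A real system would use a camera library and more robust vision; here a frame is simply a list of (r, g, b) pixels, an assumption made to keep the example self-contained.

```python
# Illustrative sketch of completing the coloured-ball task from a
# camera frame, modelled as a list of (r, g, b) tuples.

def dominant_colour(pixels):
    # Average each channel over the frame and name the strongest one.
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return max((("red", r), ("green", g), ("blue", b)), key=lambda c: c[1])[0]

def task_completed(pixels, required_colour):
    return dominant_colour(pixels) == required_colour

mostly_red = [(200, 30, 20), (180, 40, 35), (210, 25, 30)]
print(task_completed(mostly_red, "red"))    # True
print(task_completed(mostly_red, "blue"))   # False
```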
  • the method may further comprise calculating a score.
  • the score may be indicative of the performance of the user.
  • the score may be determined at least partly by time elapsed between the eliciting of the first audible and/or visual information and the reception of user input data.
  • the score may be stored.
  • the control unit processor may perform the calculation of the score, and likewise, the score may be stored in the control unit memory.
  • the first and/or second audible and/or visual information may be determined based on a stored score, e.g. a stored score from a previous execution of the method.
  • the stored score may comprise user identification.
  • the difficulty of tasks associated with the first and/or second audible and/or visual information may be adjusted in accordance with the stored score.
  • the difficulty may be adjusted by changing the complexity of a task presented and/or the user input required to complete the task, e.g. an easy task may be completed by simply touching anywhere on a touch screen, while a more difficult task may require touching a specific object shown on a touch screen and/or offer more possible choices.
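This complexity scaling can be sketched directly: at the easiest level any touch completes the task, while higher levels require touching the correct one of an increasing number of on-screen objects. The object names and level scheme are illustrative assumptions.

```python
# Sketch of task complexity scaling with difficulty level. Level 0
# accepts any touch; higher levels show more choices and require
# touching the correct object. Object names are invented.

def make_task(level):
    objects = ["dog", "cat", "car", "ball", "tree", "house"]
    if level == 0:
        return {"choices": [], "correct": None}
    return {"choices": objects[: level + 1], "correct": objects[0]}

def input_completes_task(task, touched):
    if task["correct"] is None:     # easy: any touch is adequate
        return True
    return touched == task["correct"]

easy = make_task(0)
hard = make_task(3)
print(input_completes_task(easy, "cat"))   # True: any touch works
print(len(hard["choices"]))                # 4 on-screen choices
print(input_completes_task(hard, "cat"))   # False: wrong object
```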
  • the first audible and/or visual information may be repeated while waiting for the receiving of user input data.
  • the invention disclosed is especially suitable for training of children suffering from cerebral palsy, autism spectrum, ADHD, learning disabilities and the like but it may equally be adapted to challenge and thus train the motor and cognitive skills of any child or even adults.
  • Fig. 1 schematically illustrates an exemplary system for providing a learning environment
  • Fig. 2 schematically illustrates an exemplary system for providing a learning environment
  • Fig. 3 schematically illustrates an exemplary system, further comprising wireless communication means and an identification tag
  • Fig. 4 illustrates exemplary parts of an exemplary system for providing a learning environment
  • Fig. 5 illustrates an exemplary system of connected building blocks
  • Fig. 6 illustrates a flow diagram of a method for providing a learning environment
  • Fig. 7 illustrates a flow diagram of a method, further comprising calculation of a score
  • Fig. 8 illustrates a flow diagram of a method, further comprising retrieving a score
  • Fig. 9 illustrates a scene of an exemplary scenario
  • Fig. 10 illustrates a scene of an exemplary scenario.
  • Fig. 1 schematically illustrates an exemplary system 2 for providing a learning environment.
  • the system 2 comprises a plurality of building blocks 4, 6, 8, herein shown including a first building block 4, a second building block 6 and a third building block 8.
  • the first building block 4 and the second building block 6 comprise interactive units 12, 14.
  • the first building block 4 comprises a first interactive unit 12, and the second building block 6 comprises a second interactive unit 14.
  • Each interactive unit 12, 14 comprises a visual display 18, a user interface 20, and a speaker 22.
  • the system 2 further comprises a control unit 40.
  • the control unit 40 comprises a processing unit 42, memory 44, and a user interface 46.
  • the visual display 18 is configured to display visual information, such as movies, images, text, animations, colors, or similar.
  • the user interface 20 is configured for receiving user input, such as push of a button, touch of a touch screen, sound from a microphone, etc.
  • the speaker 22 is configured to reproduce audible information, such as a command, a song and/or any sound connected to the visual information of the display 18.
  • the interactive units 12, 14 may be custom built or they may be any suitable tablet computer or smart phone.
  • the control unit 40 receives user input data 60 from the first interactive unit 12, and the control unit 40 transmits a control signal 62 to control the second interactive unit 14.
  • the user input data 60 may be a touch of the visual display of the first interactive unit 12, which is transmitted to the control unit 40.
  • the control unit 40 responds e.g. according to a preprogrammed algorithm, by controlling 62 the second interactive unit 14 to elicit an audible sound from the speaker 22 of the second interactive unit 14.
  • the user input data 60 may be preceded by eliciting audible information in the speaker 22 and/or visual information in the visual display 18 of the first interactive unit 12.
  • the plurality of building blocks 4, 6, 8, may be mechanically interconnected, either in a single group or in several groups.
  • the building blocks comprising the first building block 4, which comprises the first interactive unit 12, are placed distantly from the building blocks comprising the second building block 6, which comprises the second interactive unit 14.
  • By distantly locating the interactive units 12, 14, the user needs to move around in order to get from the first interactive unit 12 to the second interactive unit 14.
  • the preprogrammed algorithm may be a program, such as an educational or entertainment program configured to perform sequentially on the plurality of interactive units.
  • the interactive units may be connected in a network, such as a wired or a wireless network, or any combination thereof.
  • Fig. 2 schematically illustrates an exemplary system 2 for providing a learning environment.
  • the control unit 40 is comprised within the first interactive unit 12.
  • the control unit 40 may therefore, as shown, be provided without a user interface, and instead utilize the user interface 20 of the interactive unit 12.
  • the system 2 shown in Fig. 2 has the same functionality as the system 2 shown in Fig. 1. However, the system 2 of Fig. 2 is provided with the control unit 40 and the first interactive unit 12 being integrated into a single unit. Hence, the system 2 is provided with fewer parts.
  • all interactive units 12, 14 may hold the possibility to function as the control unit, and the user may select the unit to work as the control unit. In the latter exemplified case, all interactive units 12, 14 will be like the first interactive unit 12 schematically illustrated in Fig. 2. In some examples, the interactive units may sequentially work as control units.
  • Fig. 3 schematically illustrates an exemplary system 2.
  • the system 2 of Fig. 3 comprises similar elements as described in relation to Fig. 1.
  • the system 2 of Fig. 3 further comprises wireless communication means 24, 48 for communicating between interactive units 12, 14 and the control unit 40.
  • the wireless communication means 24, 48 may be such as WiFi, Bluetooth, GSM or similar.
  • User input data 60 from the first interactive unit 12 may be transmitted to the control unit 40 via the wireless communication means 24 of the first interactive unit 12.
  • the control unit 40 may receive user input data 60 from the first interactive unit 12 via the wireless communication means 48.
  • the control unit may transmit a control signal 62 to the second interactive unit 14 via the wireless communication means 48.
  • the second interactive unit 14 may receive a control signal 62 via the wireless communication means 24 of the second interactive unit 14.
  • system 2 of Fig. 3 comprises an identification tag 70.
  • identification tag 70 may be a wearable identification tag 70.
  • the first interactive unit 12 may be configured with a presence detector 26 for detecting the presence and/or identity of an identification tag.
  • the first interactive unit 12 may be configured to transmit a control signal 60 to the control unit 40, when the presence 64 of the identification tag 70 is detected.
  • the identification tag 70 may be such as an RFID or an NFC tag, and the presence detector 26 may correspondingly be an RFID or an NFC reader.
  • the second interactive unit 14 may equally be equipped with a presence detector 26, and thus be able to detect the presence and/or identity of the identification tag 70.
  • Fig. 4 illustrates exemplary parts 80 of an exemplary system 2 for providing a learning environment.
  • the exemplary part 80 is illustrated comprising building blocks 82, 84, 86 having different shapes.
  • the illustrated exemplary parts 80 comprise a building block having a rectangular base 82, a circular base 84 and a triangular base 86.
  • a connector element 88 is adapted to fit in recesses 92 of the building blocks 82, 84, 86, so as to combine individual building blocks 82, 84, 86.
  • an interactive unit 90 is fitted with an element 92 similar to the connector element 88, such that the interactive unit 90 is attachable to a building block 82, 84, 86.
  • an interactive unit 90' may be provided with an element 94 similar to a building block 82.
  • the element 94 is in Fig. 4 shown as being similar to the rectangular base building block 82. However, the element 94 may be similar to any building block, e.g. the building block with a circular base 84 or the building block with a triangular base 86.
  • Fig. 5 illustrates an exemplary system 2 of connected building blocks 4, 6, 8.
  • the building blocks 4, 6, 8 may be of similar or different shapes, and may be grouped together, here shown as being grouped together in two groups 30, 32.
  • the first group 30 comprises the first building block 4, and the first building block comprises the first interactive unit 12.
  • the second group 32 comprises the second building block 6 and the third building block 8, and the second building block 6 comprises the second interactive unit 14.
  • the system 2 may comprise several further building blocks; however, to keep a sense of perspective, referencing of these has been omitted in Fig. 5.
  • Fig. 6 illustrates a flow diagram of a method 100 for providing a learning environment.
  • the method 100 comprises connecting a first number of building blocks 102, connecting a second number of building blocks 104, eliciting a first audible and/or visual information 106, receiving user input data 108 and eliciting a second audible and/or visual information 110.
  • the step of connecting a first number of building blocks 102 comprises connecting a first number of building blocks including a first building block, wherein the first building block comprises a first interactive unit.
  • the first number of building blocks is connected in a first location.
  • the step of connecting a second number of building blocks 104 comprises connecting a second number of building blocks including a second building block, wherein the second building block comprises a second interactive unit.
  • the second number of building blocks is connected in a second location.
  • the eliciting of first audible and/or visual information 106 is elicited in the first interactive unit, i.e. in the first location.
  • the receiving of user input data 108 is associated with the user performing an action at a user interface in a connected interactive unit, i.e. the first, the second and/or a third, etc., interactive unit.
  • the second audible and/or visual information 110 is elicited in the second interactive unit, i.e. in the second location.
  • the second audible and/or visual information is based on the received user input data.
  • the second audible and/or visual information may be dependent on whether the received user input data satisfies a certain criterion, e.g. a correct or a false answer.
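The criterion check just described can be sketched as a single branch: the second unit's output depends on whether the received input matches an expected answer. The feedback values are illustrative assumptions.

```python
# Sketch of criterion-dependent output: the second unit elicits
# different audible/visual information for a correct vs. a false
# answer. The feedback strings are invented for the example.

def second_information(user_input, expected):
    if user_input == expected:
        return {"sound": "cheer", "display": "well done"}
    return {"sound": "try-again", "display": "almost"}

print(second_information("dog", expected="dog"))
print(second_information("cat", expected="dog"))
```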
  • Fig. 7 illustrates a flow diagram of a method 200.
  • the method 200 describes the control of the learning environment.
  • the method 200 comprises eliciting audible and/or visual information 206 in an interactive unit, and receiving user input data 208.
  • the eliciting of audible and/or visual information 206 may be a first, third, fifth or seventh audible and/or visual information in a first interactive unit, or the eliciting of audible and/or visual information 206 may be a second, fourth, sixth or eighth audible and/or visual information in the second interactive unit.
  • the method 200 comprises calculation/update of a score part 212.
  • a score part is calculated based on the received user input data, and in subsequent stages the score part is updated based on the received user input data.
  • the score may e.g. be determined based on time elapsed between eliciting of audible and/or visual information 206 and the receiving of user input data 208, and/or the score may be based on a correct/false answer.
  • a sequence comprising eliciting audible and/or visual information 206, receiving of user input data 208, and updating of the score part 212, may be executed several consecutive times, wherein audible and/or visual information is elicited 206 in changing interactive units, e.g. initially in a first interactive unit, secondly in a second interactive unit, thirdly in the first or a third interactive unit, and so forth.
  • the method 200 comprises a determination 214, after calculation/update of the score part 212.
  • the determination 214 determines if the sequence is to be executed again or not.
  • the outcome of the determination 214 may be set to end the sequence after a specific number of loops, or after a specific amount of time. If the outcome of the determination 214 is to execute the sequence again, the method 200 loops back to eliciting audible and/or visual information 206, preferably in an interactive unit different from the latest eliciting of audible and/or visual information. If the outcome of the determination 214 is not to execute the sequence again, the method 200 comprises calculation of a score 216.
  • the calculation of a score 216 may be based on the score parts that have been consecutively updated 212.
  • the calculated score is presented to the user 218, e.g. on a visual display.
  • the calculated score is stored 220 e.g. in an internal memory of the control unit. In another exemplary method, the score may not be presented, but only stored in internal memory.
  • the method 200 may have been preceded by initial steps of assembling the learning environment comprising connecting a first number of building blocks in a first location and connecting a second number of building blocks in a second location, such as described in relation to Fig. 6.
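The looping sequence of method 200 (elicit 206, receive input 208, update score part 212, determine 214, then total 216) can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: the class names, the score formula, and all constants are assumptions made for the example.

```python
import random

class SimulatedUnit:
    """Stand-in for an interactive unit, i.e. a display/speaker/touch screen
    mounted in a building block (name and API are illustrative)."""
    def __init__(self, name):
        self.name = name

    def elicit(self):
        # Step 206: a real unit would play a sound and/or show an image here.
        pass

    def await_input(self):
        # Step 208: a real unit would block until the user touches the screen;
        # here we simulate a fixed reaction time and a correct answer.
        return 2.0, True

def run_sequence(units, rounds):
    """Sketch of method 200: elicit information in changing units, receive
    user input, accumulate score parts, and total the score at the end."""
    score_parts = []
    previous = None
    for _ in range(rounds):
        # Prefer a unit different from the one used in the previous round.
        candidates = [u for u in units if u is not previous] or units
        unit = random.choice(candidates)
        unit.elicit()
        elapsed, correct = unit.await_input()
        # Step 212: score part from reaction time and answer correctness.
        score_parts.append(max(0, 10 - int(elapsed)) + (5 if correct else 0))
        previous = unit
    return sum(score_parts)  # step 216: score from the consecutive parts
```

In this sketch, faster responses and correct answers yield larger score parts; the determination 214 is reduced to a fixed number of rounds for simplicity.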
  • Fig. 8 illustrates a flow diagram of a method 300 similar to the method 200 as illustrated above.
  • the method 300 further comprises retrieving a score 322 and determining the difficulty of the sequence to execute 324.
  • the retrieving of a score 322 may be the retrieval of a last stored score, i.e. the score calculated 216 and stored 220 during a previous completion of the method.
  • the retrieved score may be used for determining the difficulty of the sequence to execute 324.
  • the difficulty may be adjusted according to a previously executed sequence, i.e. a high retrieved score may cause the difficulty to be raised, while a low retrieved score may cause the difficulty to be lowered.
  • the determination of difficulty 324 may include mapping the value of the retrieved score to any of a number of predefined levels.
  • the difficulty may be adjusted by changing the complexity of a task presented and a desired user input to receive, e.g. an easy task may be completed by simply touching anywhere on a touch screen, while a more difficult task may include touching a specific object shown on a touch screen.
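The difficulty determination of steps 322/324 can be sketched as a small function; the thresholds, level bounds and function name below are illustrative assumptions, not values from the disclosure.

```python
def adjust_difficulty(current_level, retrieved_score,
                      raise_at=40, lower_at=15, minimum=1, maximum=5):
    """Sketch of step 324: a high retrieved score raises the difficulty of
    the next sequence, a low one lowers it (thresholds are illustrative)."""
    if retrieved_score >= raise_at:
        current_level += 1
    elif retrieved_score <= lower_at:
        current_level -= 1
    # Clamp to the predefined range of difficulty levels.
    return max(minimum, min(maximum, current_level))
```

A middling score leaves the difficulty unchanged, so the task stays balanced between challenge and frustration as described above.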
  • a first interactive unit elicits a sound and displays an image, e.g. a red ball, a car, a plane, a teddy-bear etc.
  • the first interactive unit continues to show the image and provide the sound to catch the attention of the user.
  • a second interactive unit elicits a sound and displays an image, e.g. a green ball, and continues to do so until the user approaches and touches the second interactive unit.
  • a third or the first interactive unit elicits a sound and displays an image, e.g. a blue ball, and so forth.
  • the scenario may include that the image changes, e.g. the ball starts bouncing on the screen, when the user approaches the interactive unit.
  • a score may be calculated based on the time elapsed from the appearance of the image or the eliciting of the sound on the interactive unit until the user has touched the screen. Further, the calculation of the score may comprise whether or not the user touches a specific part of the screen, e.g. the ball.
  • the difficulty of the scenario may be adapted by increasing the difficulty of touching the desired part of the screen, e.g. by increasing the movement of the ball, or the difficulty may be increased by the number of tasks to complete.
  • the sound elicited from an interactive unit may be linked to the image, e.g. the sound of a cat is provided if a cat is shown.
  • Fig. 9 illustrates a first interactive unit 12, wherein the visual display 18 shows a scene of another exemplary scenario.
  • the exemplary scenario shown comprises a milk carton 400, an empty bowl 402 and a cat 404.
  • the first interactive unit 12 may, concurrently with showing the image as illustrated, elicit a sound, e.g. the sound of a cat meowing.
  • the task for the user to perform may be to touch the milk carton 400 and pour milk into the empty bowl 402.
  • the bowl 402 is filled with milk and the cat 404 drinks from the bowl 402, as shown in Fig. 10.
  • the task may be adequately performed simply by touching the screen.
  • When the user has adequately completed the task, a second interactive unit provides the sound of a cow and displays a cow, a bag of forage and a tray.
  • the scenario can include any animal and forage combination.
  • more than one type of forage may be available on the screen, and the correct forage needs to be selected to complete the task.
  • a score may be calculated based on the time elapsed from the appearance of the image or the eliciting of the sound on the interactive unit until the user has completed the task. Further, the calculation of the score may comprise the number of attempts the user employs to complete the task.
  • the aim is to catch as many objects as possible.
  • a first interactive unit is randomly or pseudo-randomly selected from a plurality of interactive units e.g. a first and a second interactive unit.
  • the first interactive unit elicits a first audible and/or visual information, i.e. a sound, an image or both, representing an object.
  • the aim is to catch the object by touching the first interactive unit eliciting the information as fast as possible.
  • a second interactive unit is randomly or pseudo-randomly selected from the plurality of interactive units.
  • the second interactive unit elicits a second audible and/or visual information.
  • This scenario may allow multiple users to help each other.
  • the scenario may be adjusted by eliciting each piece of audible and/or visual information only for a predetermined time; the objective may then be to catch as many objects as possible.
  • Calculation of a score may comprise the time between touches, the total time for completing the whole scenario, and/or the number of objects caught.
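The catching scenario above — pseudo-random selection of the next unit and a score combining objects caught with elapsed time — can be sketched as follows. All names, the seeding, and the point values are illustrative assumptions for the example, not values from the disclosure.

```python
import random

def play_catch(unit_names, reaction_times, time_limit=30.0, seed=None):
    """Sketch of the catching scenario: for each round a unit is pseudo-
    randomly selected to elicit an object; the score combines the number
    of objects caught with the total time (constants are illustrative).
    A reaction time of None models an object the user failed to catch."""
    rng = random.Random(seed)  # seeded here only for reproducibility
    order = [rng.choice(unit_names) for _ in reaction_times]
    caught = [t for t in reaction_times if t is not None]
    score = len(caught) * 10 + (5 if sum(caught) <= time_limit else 0)
    return order, score
```

With several users playing together, whoever is nearest the selected unit can touch it, which is one way the multi-user cooperation mentioned above could work.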
  • a first interactive unit displays an object missing a part, e.g. a bicycle with no wheels.
  • the objective is to find and touch a second interactive unit showing the missing part.
  • the difficulty of the scenario may be adjusted by adjusting the complexity of the objects, as well as the missing parts.
  • a score may be calculated comprising the time and the number of right and wrong answers.
  • a first interactive unit elicits a sound and displays two, three or more objects for a predetermined period, e.g. a couple of seconds.
  • a second interactive unit elicits a sound and displays the same objects as the first interactive unit together with one, two, three or more objects.
  • the objective is to choose and touch the objects on the second interactive unit that was also displayed on the first interactive unit.
  • the difficulty may be adjusted by adjusting the number of objects to remember, by increasing or decreasing the time that the objects are displayed, or by adjusting the number of additional objects displayed on the second interactive unit together with the objects to remember.
  • a score may be calculated comprising the time and the number of right and wrong answers.
  • a first interactive unit asks a question, either by audible and/or visual information, e.g. "how many red balls can you see?" or "what color is the bird?".
  • a second interactive unit subsequently elicits a sound and displays a number of possible answers, e.g. two, three or more.
  • the objective is to choose and touch the correct answer.
  • the difficulty of the scenario may be adjusted by adjusting the difficulty of the question, the likelihood of the answer, and the number of available answers.
  • a score may be calculated comprising the time and the number of right and wrong answers.
  • Any of the above exemplary scenarios may be combined with features of other exemplary scenarios. Completion of one scenario may lead to another scenario. All scenarios may be expanded by utilizing further features, e.g. more interactive units, other input devices such as RFID tag detection, audible detection or the like.
  • the method and/or the scenarios may be implemented in software to be performed in a processing unit, wherein the processing unit may be a processor of a computer, a tablet computer and/or a smartphone.

Abstract

A system and a method for providing a physical learning environment comprising: a plurality of building blocks including at least a first building block and a second building block; at least some of the plurality of building blocks comprising an interactive unit, each interactive unit comprising a visual display configured to display visual information, a user interface for receiving a user input, and a speaker configured to reproduce audible information; a control unit electronically connected to each interactive unit, the control unit being configured to receive user input data from at least a first interactive unit and to control at least a second interactive unit to elicit audible and/or visual information in response to the received user input data.

Description

INTERACTIVE AND ADAPTABLE LEARNING ENVIRONMENT
The present invention relates to an interactive and physically adaptable learning environment - also known as an enhanced environment. The environment is built by positioning and combining building blocks, visual displays, and touch and motion sensors. The purpose of the system is to build playful spacious environments, tracks, mazes and the like, that may challenge motor as well as cognitive skills and entertain children. The disclosed environment is targeted at children aged 1-5 years.
BACKGROUND
In order to stimulate development of motor and cognitive skills of children, the child needs physical as well as mental challenges. Furthermore, the challenges need to be inspiring, fun, motivational and adaptable to fit the age and skill of the child. As an even further requirement, challenges need to be safe and designed such that they do not harm or frighten the child.
One way of stimulating the development of children's motor and cognitive skills, in a safe and motivating manner, is by creating environments of walls and obstacles which may be of a soft material such that the child is not hurt if bumping into the object. Such an environment, sometimes referred to as an enhanced environment, may be obtained by the use of foam building blocks, which in some cases are designed to fit together by the use of recesses and protrusions or hook-and-loop fasteners such as Velcro.
Some children, however, especially children suffering from cerebral palsy, autism spectrum disorders, ADHD, learning disabilities and the like, may have a strong aversion to surfaces with an unfamiliar or diverse feel, e.g. hooks and loops or recesses and protrusions on flat surfaces.
To encourage the child to perform tasks which challenge their motor and cognitive skills, a way of incorporating interactivity into a spacious environment is needed. The tasks further need to be balanced to be challenging but at the same time easy enough to avoid frustration. As the child improves, the difficulty of the tasks needs to increase as well in order to challenge the cognitive level. Thus, the interactive environment needs to be adaptable to the level and skill of the child.
Relating to miniature building blocks, systems, such as US 2013/0217295 and US 2010/0001923, have been disclosed providing building blocks having display regions, touch sensitive surfaces, and communication means. However, usage of such systems challenges fine motor skills, as opposed to gross motor skills, and such systems are thus unsuitable for children aged 1-5. Also, since these systems relate to miniature building blocks, they do not address encouraging a user to physically move from one location to another.
SUMMARY
Despite the known examples, there is a need for a system which stimulates physical activity, such as gross motor skills, such as encouraging physical movement, such as physical movement from one location to another location.
A system is sought that allows and stimulates the child to be physically active, employ brain-body interaction, and utilize social skills. Physical activity improves body-consciousness, flexibility, and power of coordination. Brain-body interaction enhances spacious awareness and develops fantasy. Social relations build self-confidence, the ability to interact and cooperate in groups, and pride in one's own ability. Thus, a system is sought that is able to manage one user as well as multiple users cooperating. In creating a stimulating and interactive environment for playing children, a system is developed that allows connecting soft building blocks and interactive devices, hence creating a spacious environment where the physical and mental skill of the child is challenged. Furthermore, a system is developed that may be physically adapted to the increasing motor skill level of the child. This adaptive feature may even further stimulate the child, by letting the child assist in the construction of the environment.
Accordingly, a system providing a physical learning environment is provided, such as a physical learning environment stimulating physical activity. The system comprises a plurality of building blocks including at least a first building block and a second building block, the plurality of building blocks forming the physical learning environment, e.g. building blocks of the plurality of building blocks may have a dimension of at least 20 cm. At least some of the plurality of building blocks comprise an interactive unit. Each interactive unit comprises a visual display configured to display visual information, a user interface for receiving a user input, and a speaker configured to reproduce audible information. The system further comprises a control unit electronically connected to each interactive unit. The control unit is configured to receive user input data from at least a first interactive unit and to control at least a second interactive unit to elicit audible and/or visual information in response to the received user input data. For example, the control unit is configured to receive user input data from a first interactive unit and to control a second interactive unit to elicit audible and/or visual information in response to the received user input data.
Eliciting of information may comprise generating and/or producing information, such as sounding, transmitting, playing, displaying, etc. Eliciting of information, such as audible and/or visual information, may comprise a speaker transmitting a desired sound, such as a dog barking, a piano playing, a siren, and/or a person speaking. Eliciting of information, such as audible and/or visual information, may comprise a visual display showing desired visual information, such as a still picture, a moving picture, and/or an animation, e.g. the visual information may comprise a dog, a piano, an ambulance, and/or a person.
In a further aspect, a method for providing a physical learning environment comprising a plurality of building blocks is provided, such as a physical learning environment stimulating physical activity, wherein at least some of the plurality of building blocks comprise an interactive unit. The method comprises connecting a first number of building blocks in a first location. The first number of building blocks includes at least a first building block comprising a first interactive unit. The method further comprises connecting a second number of building blocks in a second location. The second number of building blocks includes at least a second building block comprising a second interactive unit. A first audible and/or visual information is elicited in the first interactive unit, user input data is received, and second audible and/or visual information is elicited in the second interactive unit in response to the received user input data.
The method may be implemented in a system providing a physical learning environment as also disclosed. At least a part of the method may be incorporated in software adapted to run in a processing unit.
It is an advantage of the present invention that it provides a learning environment that stimulates children to perform physically and cognitively challenging tasks in a motivational and amusing manner. Furthermore, the interactive element allows challenge of cognitive skills, and allows adjustment of the task difficulty to fit the skill of the user. Also, the building block arrangement allows the physical challenges to be adjusted to fit the skill of the user.
It is a further advantage of the disclosure that it provides interactivity to a physical learning environment stimulating physical activity. Thus, the disclosure provides an improved user experience. It is an even further advantage of the disclosure that a user is encouraged to physically move from one location to another location. For example, the disclosure provides that a user input in one location provides a response in a distant location. For example, an output is elicited in a second location in response to an input in a first location; thereby, the user is encouraged to move from the first location to the second location.
The system may comprise a plurality of interactive units, the plurality of interactive units may include the first, the second, a third, a fourth, and a fifth interactive unit, etc.
Some of the plurality of building blocks may comprise an interactive unit. The first building block may comprise a first interactive unit, such as the first interactive unit. The second building block may comprise a second interactive unit, such as the second interactive unit. The system may comprise building blocks, such as some of the plurality of building blocks, which do not comprise an interactive unit.
The plurality of building blocks, including the first interactive unit and/or the second interactive unit and/or any further interactive units, may comprise a processing unit and/or memory. At least some of the plurality of interactive units may have a user interface and the user interface may comprise any user manipulable input, such as a touch sensor, a joy stick, a trackball, a push button, a mouse, etc.; the user interface may further comprise audio input and/or audio output means, such as microphones and/or speakers. The user interface may further comprise visual sensors, such as a video camera or an infrared detector. A visual sensor may be configured to receive user input data, such as user input data comprising a predetermined gesture, and/or presentation of a predetermined object, such as an object having a predetermined colour, such as a ball being red or blue or green or yellow.
The plurality of interactive units may be any computing device, such as computers, such as laptops, tablet computers, smart phones, etc. The plurality of interactive units may be of a same kind or may be different interactive units.
The control unit may comprise one or more of a user interface, a processing unit, a memory, and a visual display. The user interface of the control unit may comprise any user manipulable input, such as a touch sensor, a joystick, a trackball, a push button, a mouse, etc.; the user interface of the control unit may further comprise audio inputs and/or audio outputs, such as microphones and/or speakers. The control unit may be any computing device, such as computers, such as laptops, tablet computers, smart phones, etc. At least some of the plurality of interactive units, including the first and/or the second interactive unit, may comprise wireless communication means for enabling wireless communication. Also, the control unit may comprise wireless communication means for enabling wireless communication. The wireless communication means may be configured to communicate according to wireless communication protocols such as WiFi, Bluetooth, BLE, NFC, RFID and/or GSM.
The plurality of interactive units may communicate wirelessly and at least some of the plurality of interactive units may communicate wirelessly with the control unit. At least some of the plurality of interactive units may communicate either directly from one interactive unit to another, or from one interactive unit via the control unit to another interactive unit.
In some embodiments, at least some of the interactive units may have limited processing power and may be controlled by the control unit.
Any of the plurality of interactive units, including the first interactive unit and/or the second interactive unit, may comprise the control unit. The system may comprise fewer parts and may be less expensive, if one of the interactive units can be configured to work as the control unit, such as to perform the actions of the control unit. Alternatively, all of the plurality of interactive units may be configurable to comprise the control unit. Thus, the control unit may be provided in one processor, or the control unit may be provided in a plurality of processors, i.e. the control unit may be provided in one or more interactive units.
The system may comprise a wearable identification tag. The wearable identification tag may be configured to be worn by a user of the learning environment. At least some of the plurality of interactive units, including the first interactive unit and/or the second interactive unit, may comprise a detector or reader to detect the presence and/or identity of a wearable identification tag. At least some of the plurality of interactive units, including the first interactive unit and/or the second interactive unit, may be configured to send a control signal to the control unit when a wearable identification tag is in proximity of the interactive unit. E.g. the first interactive unit may be configured to send a control signal to the control unit when the wearable identification tag is in proximity of the first interactive unit.
The wearable identification tag may comprise any presence or proximity sensor unit, such as an RFID tag or an NFC tag. The detector or reader may be any presence or proximity reader, such as RFID reader or a NFC device, appropriate for reading the appropriate type of wearable identification tag.
The second interactive unit may be configured to be positioned at a distance from the position of the first interactive unit, such that the second interactive unit is out of reach from the position of the first interactive unit. The second interactive unit may be configured to be positioned at a distance of more than 1 meter from the position of the first interactive unit, such as more than 2 meters, such as more than 3 meters. The second interactive unit being positioned at a distance of more than 1 meter from the position of the first interactive unit, such as more than 2 meters, such as more than 3 meters, provides a way of positioning the second interactive unit out of reach from the position of the first interactive unit. The second interactive unit being out of reach from the position of the first interactive unit prompts the user to physically move from one position to another position. Likewise, any further interactive unit may be provided out of reach of a previous interactive unit.
The system may comprise further sensors, such as proximity sensors, etc., visual markers, such as lamps, such as lights, and/or audio markers, such as speakers, horns, etc., which may be connected to one or more of the interactive units and/or to the control unit.
The plurality of building blocks may have several shapes and sizes, and the system may provide building blocks of different shapes and sizes. E.g. shapes may include mats, logs, cubes, cuboids, prisms, cylinders, pyramids, etc. The building blocks may have any dimensions, such as dimensions between 5-100 cm, such as between 10-80 cm, such as between 20-60 cm. The building blocks may have at least one dimension of more than 5 cm, such as more than 10 cm, such as more than 20 cm, such as 30 cm. The building blocks may have dimensions which allow creation of obstacles, such as obstacles between interactive units. The building blocks may have dimensions which allow a child to move the building blocks around.
The plurality of building blocks may comprise building blocks of at least two different shapes, such as at least three different shapes, such as at least four different shapes. Different shapes provide the possibility of constructing different environments and constructions.
The plurality of building blocks may comprise different building blocks having different weights, such as weights in the range of 0.05-5.0 kg, such as in the range of 0.2-3.0 kg, such as in the range of 0.5-2.0 kg. The weight of the building blocks should preferably be such that a child is able to move the building blocks around.
The plurality of building blocks may comprise different building blocks having different softness, ductility and/or tactility. The plurality of building blocks may be resilient building blocks, such as elastic or flexible. For example, the plurality of building blocks may be made primarily from a foam material, such as EVA foam. Providing the building blocks in a resilient material provides a reduced risk of injuries.
The plurality of building blocks may be connected in a plurality of groups of building blocks. A first group may comprise the first building block comprising the first interactive unit, and a second group may comprise the second building block comprising the second interactive unit.
The plurality of building blocks may comprise a connector element. The connector element may be configured to connect the plurality of building blocks, and the connector element may be configured to connect any of the plurality of interactive units with any of the plurality of building blocks.
The plurality of building blocks may be configured to be connectable in order to build mazes and/or obstacles such as walls, tunnels, stairs and/or steps, etc.
The plurality of building blocks may be configured to create obstacles between the first interactive unit and the second interactive unit, such as physical obstacles, such as obstacles to be surmounted by the user to reach the second interactive unit.
The second interactive unit may be configured to be positioned such that it is not visible from the position of the first interactive unit. Hence, the user will need to locate the second interactive unit from any elicited audible information. The second interactive unit may be hidden behind a plurality of building blocks. In an exemplary method, the user input data may be received from the first interactive unit. In another exemplary method, the user input data may be received from the second interactive unit.
The user input data may comprise a predetermined user input data part. The user input data part may comprise a touch of a specific part of a touch screen, a touch of a button, a movement of a joystick, a recording of a microphone input signal, etc. The predetermined user input data part may describe user input data that is adequate to complete a task. The user input data may comprise a predetermined gesture, and/or presentation of a predetermined object, such as an object having a predetermined colour, such as a ball being red or blue or green or yellow. For example, a task may be to choose and show to the interactive unit an object of a certain colour, e.g. a red ball. The user may be presented with a plurality of coloured physical balls. The task may be completed if the user picks up the red ball and shows it to a camera of the interactive unit.
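Checking received user input data against a predetermined user input data part can be sketched as a simple comparison; the dictionary representation and all keys below are illustrative assumptions, not part of the disclosure.

```python
def input_matches_task(user_input, required_part):
    """Sketch: a task is adequately completed when every field of the
    predetermined user input data part is matched by the received user
    input data (the keys used here are illustrative)."""
    return all(user_input.get(key) == value
               for key, value in required_part.items())
```

An easy task could have an empty required part (any touch completes it), while a harder task constrains more fields, e.g. the colour of the presented object.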
The method may further comprise calculating a score. The score may be indicative of the performance of the user. The score may be determined at least partly by time elapsed between the eliciting of the first audible and/or visual information and the reception of user input data. The score may be stored. For example, the control unit processor may perform the calculation of the score, and likewise, the score may be stored in the control unit memory.
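A score determined partly by the elapsed time between eliciting the information and receiving the user input can be sketched as below; the point values and the linear penalty are illustrative assumptions, not values from the disclosure.

```python
def calculate_score(elapsed_seconds, max_points=100, points_per_second=10):
    """Sketch of a time-based score: the faster the user responds after
    the information is elicited, the higher the score (constants are
    illustrative). The score never drops below zero."""
    penalty = int(elapsed_seconds * points_per_second)
    return max(0, max_points - penalty)
```

The returned value could then be stored, e.g. in the control unit memory, for use by a later difficulty determination.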
The first and/or second audible and/or visual information may be determined based on a stored score, e.g. a stored score from a previous execution of the method. The stored score may comprise user identification. The difficulty of tasks associated with the first and/or second audible and/or visual information may be adjusted in accordance with the stored score. The difficulty may be adjusted by changing the complexity of a task presented and/or a user input to receive to complete the task, e.g. an easy task may be completed by simply touching anywhere on a touch screen, while a more difficult task may include touching a specific object shown on a touch screen and/or provide more possible choices.
The first audible and/or visual information may be repeated while waiting for the receiving of user input data. The invention disclosed is especially suitable for training of children suffering from cerebral palsy, autism spectrum, ADHD, learning disabilities and the like but it may equally be adapted to challenge and thus train the motor and cognitive skills of any child or even adults.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other features and advantages of the present invention will become readily apparent to those skilled in the art by the following detailed description of exemplary embodiments thereof with reference to the attached drawings, in which:
Fig. 1 schematically illustrates an exemplary system for providing a learning environment,
Fig. 2 schematically illustrates an exemplary system for providing a learning environment,
Fig. 3 schematically illustrates an exemplary system, further comprising wireless communication means and an identification tag,
Fig. 4 illustrates exemplary parts of an exemplary system for providing a learning environment,
Fig. 5 illustrates an exemplary system of connected building blocks,
Fig. 6 illustrates a flow diagram of a method for providing a learning environment,
Fig. 7 illustrates a flow diagram of a method, further comprising calculation and storing of a score,
Fig. 8 illustrates a flow diagram of a method, further comprising retrieving a score,
Fig. 9 illustrates a scene of an exemplary scenario, and
Fig. 10 illustrates a scene of an exemplary scenario.
The figures are schematic and simplified for clarity, and they merely show details which are essential to the understanding of the invention, while other details have been left out. Throughout, the same reference numerals are used for identical or corresponding parts.
DETAILED DESCRIPTION
Fig. 1 schematically illustrates an exemplary system 2 for providing a learning environment. The system 2 comprises a plurality of building blocks 4, 6, 8, herein shown including a first building block 4, a second building block 6 and a third building block 8. The first building block 4 and the second building block 6 comprise interactive units 12, 14. The first building block 4 comprises a first interactive unit 12, and the second building block 6 comprises a second interactive unit 14. Each interactive unit 12, 14 comprises a visual display 18, a user interface 20, and a speaker 22. The system 2 further comprises a control unit 40. The control unit 40 comprises a processing unit 42, memory 44, and a user interface 46.
The visual display 18 is configured to display visual information, such as movies, images, text, animations, colors, or similar. The user interface 20 is configured for receiving user input, such as push of a button, touch of a touch screen, sound from a microphone, etc. The speaker 22 is configured to reproduce audible information, such as a command, a song and/or any sound connected to the visual information of the display 18.
The interactive units 12, 14 may be custom built, or they may be any suitable tablet computer or smart phone.
The control unit 40 receives user input data 60 from the first interactive unit 12, and the control unit 40 transmits a control signal 62 to control the second interactive unit 14. In an example, the user input data 60 may be a touch of the visual display of the first interactive unit 12, which is transmitted to the control unit 40. The control unit 40 responds e.g. according to a preprogrammed algorithm, by controlling 62 the second interactive unit 14 to elicit an audible sound from the speaker 22 of the second interactive unit 14.
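The interplay between the user input data 60 and the control signal 62 may be sketched as a small dispatch routine. The class name, method name, and message format below are illustrative assumptions made for the sketch, not part of the disclosed system.

```python
class ControlUnit:
    """Sketch of the control unit 40: it receives user input data from one
    interactive unit and, per a preprogrammed algorithm, emits a control
    signal directed at another interactive unit."""

    def __init__(self, interactive_units):
        # interactive_units maps a unit id to its handle (transport omitted)
        self.units = interactive_units

    def on_user_input(self, source_unit_id, user_input):
        # Preprogrammed algorithm: a touch on the first unit causes the
        # second unit to elicit an audible sound from its speaker.
        if source_unit_id == "first" and user_input == "touch":
            return {"target": "second", "action": "play_sound"}
        return None  # no control signal for unrecognized input
```

In a real system the returned control signal would be transmitted over the wired or wireless connection between the control unit 40 and the interactive units; the sketch only fixes the request/response shape.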
The user input data 60 may be preceded by audible information elicited in the speaker 22 and/or visual information elicited in the visual display 18 of the first interactive unit 12.
The plurality of building blocks 4, 6, 8 may be mechanically interconnected, either in a single group or in several groups. In a specific exemplary system, the building blocks comprising the first building block 4, which comprises the first interactive unit 12, are placed distantly from the building blocks comprising the second building block 6, which comprises the second interactive unit 14. By distantly locating the interactive units 12, 14, the user needs to move around in order to get from the first interactive unit 12 to the second interactive unit 14.
The preprogrammed algorithm may be a program, such as an educational or entertainment program configured to perform sequentially on the plurality of interactive units. The interactive units may be connected in a network, such as a wired or a wireless network, or any combination thereof.
Fig. 2 schematically illustrates an exemplary system 2 for providing a learning environment. In the system 2 as shown in Fig 2, the control unit 40 is comprised within the first interactive unit 12. The control unit 40 may therefore, as shown, be provided without a user interface, and instead utilize the user interface 20 of the interactive unit 12.
The system 2 shown in Fig. 2 has the same functionality as the system 2 shown in Fig. 1. However, the system 2 of Fig. 2 is provided with the control unit 40 and the first interactive unit 12 being integrated into a single unit. Hence, the system 2 is provided with fewer parts. In an alternative system (not shown) all interactive units 12, 14 may hold the possibility to function as the control unit, and the user may select the unit to work as the control unit. In the latter exemplified case, all interactive units 12, 14 will be configured as the first interactive unit 12 schematically illustrated in Fig. 2. In some examples, the interactive units may sequentially work as control units.
Fig. 3 schematically illustrates an exemplary system 2. The system 2 of Fig. 3 comprises similar elements as described in relation to Fig. 1. The system 2 of Fig. 3 further comprises wireless communication means 24, 48 for communicating between interactive units 12, 14 and the control unit 40.
The wireless communication means 24, 48 may be such as WiFi, Bluetooth, GSM or similar.
User input data 60 from the first interactive unit 12 may be transmitted to the control unit 40 via the wireless communication means 24 of the first interactive unit 12. The control unit 40 may receive the user input data 60 from the first interactive unit 12 via the wireless communication means 48. The control unit may transmit a control signal 62 to the second interactive unit 14 via the wireless communication means 48. The second interactive unit 14 may receive the control signal 62 via the wireless communication means 24 of the second interactive unit 14.
Furthermore, the system 2 of Fig. 3 comprises an identification tag 70. The identification tag 70 may be a wearable identification tag 70. The first interactive unit 12 may be configured with a presence detector 26 for detecting the presence and/or identity of an identification tag. Hence, the first interactive unit 12 may be configured to transmit a signal to the control unit 40 when the presence 64 of the identification tag 70 is detected. In a more specific exemplary system, the identification tag 70 may be an RFID or an NFC tag, and the presence detector 26 may correspondingly be an RFID or an NFC reader. It is emphasized that the second interactive unit 14 may equally be equipped with a presence detector 26, and thus be able to detect the presence and/or identity of the identification tag 70.
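The presence-detection behaviour may be sketched as follows. The tag registry and the event format are assumptions made for illustration; the disclosed system only requires that a detected tag causes a signal to the control unit.

```python
def handle_tag_detection(known_tags, detected_tag_id):
    """Sketch of the presence detector 26 logic: when an identification
    tag (e.g. an RFID/NFC tag) is read, notify the control unit with the
    tag's identity; tags not in the registry are ignored."""
    if detected_tag_id in known_tags:
        # Signal to the control unit that a known user is present (64).
        return {"event": "presence", "user": known_tags[detected_tag_id]}
    return None  # unknown tag: no signal is sent
```

The identity carried in the event would allow the control unit to tailor the scenario to the detected user, e.g. by retrieving that user's stored score.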
Fig. 4 illustrates exemplary parts 80 of an exemplary system 2 for providing a learning environment. The exemplary parts 80 are illustrated comprising building blocks 82, 84, 86 having different shapes. The illustrated exemplary parts 80 comprise building blocks having a rectangular base 82, a circular base 84 and a triangular base 86. Also illustrated in Fig. 4 is a connector element 88 which is adapted to fit in recesses of the building blocks 82, 84, 86, so as to combine individual building blocks 82, 84, 86.
Further illustrated in Fig. 4 is an interactive unit 90 that is fitted with an element 92 similar to the connector element 88, such that the interactive unit 90 is attachable to a building block 82, 84, 86. Alternatively, or additionally, an interactive unit 90' may be provided with an element 94 similar to a building block 82. The element 94 is in Fig. 4 shown as being similar to the rectangular base building block 82. However, the element 94 may be similar to any building block, e.g. the building block with a circular base 84 or the building block with a triangular base 86.
Fig. 5 illustrates an exemplary system 2 of connected building blocks 4, 6, 8. The building blocks 4, 6, 8 may be of similar or different shapes, and may be grouped together, here shown as being grouped together in two groups 30, 32. The first group 30 comprises the first building block 4, and the first building block comprises the first interactive unit 12. The second group 32 comprises the second building block 6 and the third building block 8, and the second building block 6 comprises the second interactive unit 14. As seen, the system 2 may comprise several further building blocks; however, to preserve clarity, reference numerals for these have been omitted in Fig. 5.
Fig. 6 illustrates a flow diagram of a method 100 for providing a learning environment. The method 100 comprises connecting a first number of building blocks 102, connecting a second number of building blocks 104, eliciting a first audible and/or visual information 106, receiving user input data 108 and eliciting a second audible and/or visual information 110.
The step of connecting a first number of building blocks 102 comprises connecting a first number of building blocks including a first building block, wherein the first building block comprises a first interactive unit. The first number of building blocks is connected in a first location.
The step of connecting a second number of building blocks 104 comprises connecting a second number of building blocks including a second building block, wherein the second building block comprises a second interactive unit. The second number of building blocks is connected in a second location.
The first audible and/or visual information 106 is elicited in the first interactive unit, i.e. in the first location. The receiving of user input data 108 is associated with the user performing an action at a user interface in a connected interactive unit, i.e. the first, the second and/or a third, etc., interactive unit.
The second audible and/or visual information 110 is elicited in the second interactive unit, i.e. in the second location. The second audible and/or visual information is based on the received user input data. The second audible and/or visual information may be dependent on whether the received user input data satisfies a certain criterion, e.g. a correct or a false answer.
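The sequence of steps 106, 108 and 110 may be sketched as below, with the two interactive units and the correctness criterion passed in as callables. All names, and the two response messages, are illustrative assumptions.

```python
def run_sequence(first_unit_elicit, second_unit_elicit, get_user_input, is_correct):
    """Sketch of method 100: elicit information in the first interactive
    unit (106), receive user input (108), then elicit second information
    in the second interactive unit (110) depending on whether the input
    satisfies the criterion."""
    first_unit_elicit("first audible/visual information")   # step 106
    user_input = get_user_input()                           # step 108
    if is_correct(user_input):                              # step 110
        second_unit_elicit("reward information")
    else:
        second_unit_elicit("try-again information")
    return user_input
```

Driving the callables with stubs shows the order of events without any hardware attached.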
Fig. 7 illustrates a flow diagram of a method 200. The method 200 describes the control of the learning environment. The method 200 comprises eliciting audible and/or visual information 206 in an interactive unit, receiving user input data 208. The eliciting of audible and/or visual information 206 may be a first, third, fifth or seventh audible and/or visual information in a first interactive unit, or the eliciting of audible and/or visual information 206 may be a second, fourth, sixth or eighth audible and/or visual information in the second interactive unit.
After the receiving of user input data 208 the method 200 comprises calculation/update of a score part 212. In an initial stage, a score part is calculated based on the received user input data, and in subsequent stages the score part is updated based on the received user input data. The score may e.g. be determined based on the time elapsed between the eliciting of audible and/or visual information 206 and the receiving of user input data 208, and/or the score may be based on whether the answer is correct or false.
A sequence comprising eliciting audible and/or visual information 206, receiving of user input data 208, and updating of the score part 212, may be executed several consecutive times, wherein audible and/or visual information is elicited 206 in changing interactive units, e.g. initially in a first interactive unit, secondly in a second interactive unit, thirdly in the first or a third interactive unit, and so forth.
To determine if the sequence should be executed again, the method 200 comprises a determination 214, after calculation/update of the score part 212. The determination 214 determines if the sequence is to be executed again or not. In a more specific example, the outcome of the determination 214 may be set to end the sequence after a specific number of loops, or after a specific amount of time. If the outcome of the determination 214 is to execute the sequence again, the method 200 loops back to eliciting audible and/or visual information 206, preferably in an interactive unit different from the latest eliciting of audible and/or visual information. If the outcome of the determination 214 is not to execute the sequence again, the method 200 comprises calculation of a score 216. The calculation of a score 216 may be based on the score parts that have been consecutively updated 212. The calculated score is presented to the user 218, e.g. on a visual display. Furthermore, the calculated score is stored 220 e.g. in an internal memory of the control unit. In another exemplary method, the score may not be presented, but only stored in internal memory.
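The loop of eliciting 206, receiving 208, updating a score part 212, and the determination 214 may be sketched as follows. The scoring constants and the fixed loop count are illustrative assumptions; the source equally allows ending after a specific amount of time.

```python
import time

def run_session(tasks, rounds):
    """Sketch of method 200: repeatedly elicit a task, receive input,
    update a score part (212), and after the determination (214) ends
    the loop, sum the parts into a final score (216).

    `tasks` is a list of (elicit, receive) callables, cycled round-robin
    so information is elicited in changing interactive units."""
    score_parts = []
    for i in range(rounds):                       # determination 214: fixed count
        elicit, receive = tasks[i % len(tasks)]
        start = time.monotonic()
        elicit()                                  # step 206
        answer_correct = receive()                # step 208
        elapsed = time.monotonic() - start
        # Score part 212: reward a correct answer, penalize slow responses.
        part = (10 if answer_correct else 0) - min(int(elapsed), 5)
        score_parts.append(part)
    return sum(score_parts)                       # calculation of score 216
```

The returned score would then be presented (218) and stored (220) by the control unit.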
The method 200 may have been preceded by initial steps of assembling the learning environment comprising connecting a first number of building blocks in a first location and connecting a second number of building blocks in a second location, such as described in relation to Fig. 6.
Fig. 8 illustrates a flow diagram of a method 300 similar to the method 200 as illustrated above. The method 300 further comprises retrieving a score 322 and determining the difficulty of the sequence to execute 324.
The retrieving of a score 322 may be the retrieval of a last stored score, i.e. the score calculated 216 and stored 220 during a previous completion of the method. The retrieved score may be used for determining the difficulty of the sequence to execute 324. In an exemplary method 300, the difficulty may be adjusted according to a previous executed sequence, i.e. a high retrieved score may cause the difficulty to be raised, while a low retrieved score may cause the difficulty to be lowered. In an alternative exemplary method 300, the determination of difficulty 324 may include the value of the retrieved score to correspond to any of a number of predefined levels.
The difficulty may be adjusted by changing the complexity of a task presented and a desired user input to receive, e.g. an easy task may be completed by simply touching anywhere on a touch screen, while a more difficult task may include touching a specific object shown on a touch screen.
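The difficulty determination 324 based on a retrieved score 322 may be sketched as a simple threshold rule. The thresholds and the level range are illustrative assumptions; the source also allows mapping the score onto a number of predefined levels.

```python
def determine_difficulty(previous_score, current_level, max_level=5):
    """Sketch of step 324: adjust the difficulty level from the last
    stored score retrieved in step 322."""
    if previous_score >= 80:                 # high score: raise the difficulty
        return min(current_level + 1, max_level)
    if previous_score < 40:                  # low score: lower the difficulty
        return max(current_level - 1, 1)
    return current_level                     # otherwise keep the current level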
In cooperation with changing the physical environment by connecting the building blocks differently, different scenarios may be incorporated into the disclosed system. Different scenarios are exemplified in the following.
In one exemplary scenario, a first interactive unit elicits a sound and displays an image, e.g. a red ball, a car, a plane, a teddy bear, etc. The first interactive unit continues to show the image and provide the sound to catch the attention of the user. When the user approaches and touches the first interactive unit, a second interactive unit elicits a sound and displays an image, e.g. a green ball, and continues to do so until the user approaches and touches the second interactive unit. When the user approaches and touches the second interactive unit, a third or the first interactive unit elicits a sound and displays an image, e.g. a blue ball, and so forth.
If a wearable identification tag is worn by the user and a presence detector is provided in or near the interactive units, the scenario may include the image changing, e.g. the ball starting to bounce on the screen, when the user approaches the interactive unit.
A score may be calculated based on the time elapsed from the appearance of the image or the eliciting of the sound on the interactive unit until the user has touched the screen. Further, the calculation of the score may comprise whether or not the user touches a specific part of the screen, e.g. the ball. The difficulty of the scenario may be adapted by increasing the difficulty of touching the desired part of the screen, e.g. by increasing the movement of the ball, or the difficulty may be increased by increasing the number of tasks to complete.
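The scoring just described, combining reaction time with whether the desired part of the screen was touched, may be sketched as below. The time limit and point values are illustrative assumptions.

```python
def score_touch(elapsed_seconds, hit_target, time_limit=10.0):
    """Sketch of the reaction-time scoring: faster responses score
    higher, and touching the specific target (e.g. the ball) adds a
    bonus on top of the speed points."""
    speed_points = max(0, int(time_limit - elapsed_seconds))
    bonus = 5 if hit_target else 0
    return speed_points + bonus
```

Summing such per-touch scores over a session would give the score parts updated in step 212 of the method described above.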
The sound elicited from an interactive unit may be linked to the image, e.g. the sound of a cat is provided if a cat is shown.
Fig. 9 illustrates a first interactive unit 12, wherein the visual display 18 shows a scene of another exemplary scenario. The exemplary scenario shown comprises a milk carton 400, an empty bowl 402 and a cat 404. The first interactive unit 12 may, concurrently with showing the image as illustrated, elicit a sound, e.g. the sound of a cat meowing. The task for the user to perform may be to touch the milk carton 400 and pour milk into the empty bowl 402. When adequately performed, the bowl 402 is filled with milk and the cat 404 drinks from the bowl 402, as shown in Fig. 10. For a less skilled user, the task may be adequately performed simply by touching the screen.
When the user has adequately completed the task, a second interactive unit provides the sound of a cow and displays a cow, a bag of forage and a tray. The scenario can include any animal and forage combination. To increase the difficulty, more than one type of forage may be available on the screen, and the correct forage needs to be selected to complete the task.
A score may be calculated based on the time elapsed from the appearance of the image or the eliciting of the sound on the interactive unit until the user has completed the task. Further, the calculation of the score may comprise the number of attempts the user employs to complete the task.

In another exemplary scenario, the aim is to catch as many objects as possible. A first interactive unit is randomly or pseudo-randomly selected from a plurality of interactive units, e.g. a first and a second interactive unit. The first interactive unit elicits a first audible and/or visual information, i.e. a sound, an image or both, representing an object. The aim is to catch the object by touching the eliciting first interactive unit as fast as possible. When the first interactive unit detects a touch, a second interactive unit is randomly or pseudo-randomly selected from the plurality of interactive units. The second interactive unit elicits a second audible and/or visual information. This scenario may allow multiple users to help each other.
The scenario may be adjusted by eliciting the audible and/or visual information only for a predetermined time; the objective may then be to catch as many objects as possible within that time. Calculation of a score may comprise the time between touches, the total time for completing the whole scenario, and/or the number of objects caught.
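The random or pseudo-random selection of the next eliciting unit may be sketched as follows. Avoiding immediate repetition of the same unit is an assumption added to keep the user moving between locations, not a requirement of the scenario; all names are illustrative.

```python
import random

def catch_objects(unit_ids, num_objects, rng=None):
    """Sketch of the catch-the-objects scenario: repeatedly pick an
    interactive unit at (pseudo-)random to elicit the next object,
    excluding the unit that was just touched."""
    rng = rng or random.Random()
    sequence = []
    previous = None
    for _ in range(num_objects):
        candidates = [u for u in unit_ids if u != previous]
        chosen = rng.choice(candidates)
        sequence.append(chosen)    # this unit elicits the object
        previous = chosen          # the user touches it; select the next one
    return sequence
```

Seeding the generator (e.g. `random.Random(0)`) makes the selection pseudo-random and reproducible, which can be useful for testing a scenario.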
In another exemplary scenario, a first interactive unit displays an object missing a part, e.g. a bicycle with no wheels. The objective is to find and touch a second interactive unit showing the missing part. The difficulty of the scenario may be adjusted by adjusting the complexity of the objects, as well as of the missing parts. A score may be calculated comprising the time and the number of right and wrong answers.
In another exemplary scenario, a first interactive unit elicits a sound and displays two, three or more objects for a predetermined period, e.g. a couple of seconds. A second interactive unit elicits a sound and displays the same objects as the first interactive unit together with one, two, three or more additional objects. The objective is to choose and touch the objects on the second interactive unit that were also displayed on the first interactive unit. The difficulty may be adjusted by adjusting the number of objects to remember, by increasing or decreasing the time that the objects are displayed, or by adjusting the number of additional objects displayed on the second interactive unit together with the objects to remember. A score may be calculated comprising the time and the number of right and wrong answers.
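The correctness check of this memory scenario may be sketched as a simple set comparison; the function and parameter names are illustrative assumptions.

```python
def check_memory_answer(shown_objects, displayed_objects, touched_objects):
    """Sketch of the memory scenario's criterion: the user must touch,
    on the second unit, exactly the objects that were shown on the first
    unit, all of which appear among the objects displayed on the second
    unit (order does not matter)."""
    return (set(touched_objects) == set(shown_objects)
            and set(shown_objects) <= set(displayed_objects))
```

A wrong touch would count toward the number of wrong answers in the score calculation described above.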
In another exemplary scenario, a first interactive unit asks a question, either by audible and/or visual information, e.g. "how many red balls can you see?" or "what color is the bird?". In a second interactive unit, which subsequently elicits a sound, a number of possible answers are displayed, e.g. two, three or more than three. The objective is to choose and touch the correct answer. The difficulty of the scenario may be adjusted by adjusting the difficulty of the question, the likelihood of the answers, and the number of available answers. A score may be calculated comprising the time and the number of right and wrong answers.
Any of the above exemplary scenarios may be combined with features of the other exemplary scenarios. Completion of one scenario may lead to another scenario. All scenarios may be expanded by utilizing further features, e.g. more interactive units, or other input devices such as RFID tag detection, audible detection or the like.
The method and/or the scenarios may be implemented in software to be performed in a processing unit, wherein the processing unit may be a processor of a computer, a tablet computer and/or a smartphone.
2 system
4 first building block
6 second building block
8 third building block
12 first interactive unit
14 second interactive unit
18 visual display
20 user interface
22 speaker
24 wireless communication means
26 identification tag presence detector
30 first group
32 second group
40 control unit
42 processing unit
44 memory
46 user interface
48 wireless communication means
60 user input data
62 control signal
64 presence detection
70 identification tag
80 parts of an exemplary system
82 rectangular building block
84 circular building block
86 triangular building block
88 connector element
90, 90' interactive unit
92 element similar to a connector element
94 element similar to a building block
100, 200, 300 method for providing a physical learning environment
102 connect a first number of building blocks
104 connect a second number of building blocks
106, 206 elicit audible and/or visual information in an interactive unit
108, 208 receive user input data
110 elicit a second audible and/or visual information
212 calculate/update a score part
214 determination to execute sequence again
216 calculate score
218 present score
220 store score
322 retrieve score
324 determine difficulty of sequence
400 milk carton
402 bowl
404 cat

Claims

1. A system providing a physical learning environment stimulating physical activity, the system comprising
a plurality of building blocks including at least a first building block and a second building block, the plurality of building blocks forming the physical learning environment,
at least some of the plurality of building blocks comprising an interactive unit, each interactive unit comprising a visual display configured to display visual information, a user interface for receiving a user input, and a speaker configured to reproduce audible information,
a control unit electronically connected to each interactive unit, the control unit being configured to receive user input data from at least a first interactive unit and to control at least a second interactive unit to elicit audible and/or visual information in response to the received user input data.
2. A system according to claim 1, wherein the first and/or second interactive unit comprises the control unit.
3. A system according to any of the preceding claims, wherein at least the first interactive unit comprises wireless communication means for enabling wireless communication.
4. A system according to any of the preceding claims, wherein the system comprises a wearable identification tag, the wearable identification tag being configured to be worn by a user of the learning environment, wherein the first interactive unit sends a control signal to the control unit when the wearable identification tag is in proximity of the first interactive unit.
5. A system according to any of the preceding claims, wherein the user interface comprises a touch sensor, a push button, and/or a microphone.
6. A system according to any of the preceding claims, wherein the second interactive unit is configured to be positioned at a distance of more than 2 meters from the position of the first interactive unit.
7. A system according to any of the preceding claims, wherein the plurality of building blocks comprises different building blocks such as building blocks having different shapes, sizes, softness, ductility, tactility, and/or weight.
8. A system according to any of the preceding claims, wherein the plurality of building blocks are configured to create obstacles between the first interactive unit and the second interactive unit.
9. A system according to any of the preceding claims, wherein the second interactive unit is configured to be positioned such that it is not visible from the position of the first interactive unit.
10. A method for providing a physical learning environment stimulating physical activity, the physical learning environment comprising a plurality of building blocks, wherein at least some of the plurality of building blocks comprise an interactive unit, the method comprising
connecting a first number of building blocks in a first location, the first number of building blocks including at least a first building block comprising a first interactive unit,
connecting a second number of building blocks in a second location, the second number of building blocks including at least a second building block comprising a second interactive unit,
- eliciting a first audible and/or visual information in the first interactive unit, receiving user input data, and
eliciting a second audible and/or visual information in the second interactive unit in response to the received user input data.
11. A method according to claim 10, wherein the method further comprises calculating a score.
12. A method according to claim 11, wherein the score is determined at least partly by elapsed time between the eliciting of the first audible and/or visual information and the receiving of user input data.
13. A method according to any of claims 11 or 12, wherein the score is stored, and the first and/or second audible and/or visual information is determined based on a stored score.
14. A method according to any of claims 10-13, wherein the receiving of user input data is received from the first interactive unit.
15. A method according to any of claims 10-14, wherein the first audible and/or visual information is repeated while waiting for the receiving of user input data.
PCT/EP2014/075089 2013-11-21 2014-11-20 Interactive and adaptable learning environment WO2015075107A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP13193864 2013-11-21
EP13193864.9 2013-11-21

Publications (1)

Publication Number Publication Date
WO2015075107A1 true WO2015075107A1 (en) 2015-05-28

Family

ID=49712924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/075089 WO2015075107A1 (en) 2013-11-21 2014-11-20 Interactive and adaptable learning environment

Country Status (1)

Country Link
WO (1) WO2015075107A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021173472A1 (en) * 2020-02-24 2021-09-02 Karsten Joel Container devices, systems, and methods
US11926448B2 (en) 2020-02-24 2024-03-12 Joel Karsten Container devices, systems, and methods

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001097937A1 (en) * 2000-06-19 2001-12-27 Judith Ann Shackelford Smart blocks
US20100001923A1 (en) 2008-07-02 2010-01-07 Med Et Al, Inc. Communication blocks having multiple-planes of detection components and associated method of conveying information based on their arrangement
KR20130047377A (en) * 2011-10-31 2013-05-08 최길용 Assembling block toy
US20130217295A1 (en) 2012-02-17 2013-08-22 Technology One, Inc. Baseplate assembly for use with toy pieces

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14799811

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14799811

Country of ref document: EP

Kind code of ref document: A1