WO2019043350A1 - A system and method for teaching sign language - Google Patents

A system and method for teaching sign language

Info

Publication number
WO2019043350A1
Authority
WO
WIPO (PCT)
Prior art keywords
control module
sign language
central control
hand
user
Prior art date
Application number
PCT/GB2017/052540
Other languages
French (fr)
Inventor
Al-Jefairi MOHAMMED
Original Assignee
Hoarton, Lloyd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hoarton, Lloyd filed Critical Hoarton, Lloyd
Priority to PCT/GB2017/052540 priority Critical patent/WO2019043350A1/en
Publication of WO2019043350A1 publication Critical patent/WO2019043350A1/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00Teaching, or communicating with, the blind, deaf or mute
    • G09B21/009Teaching or communicating with deaf persons


Abstract

A system (1) for teaching sign language, the system (1) comprising: a central control module (25); a motion sensor (26) coupled for communication with the central control module (25), the motion sensor (26) being configured to sense movement of a user's hand and to provide movement data indicative of the movement of the user's hand to the central control module (25), wherein the central control module (25) is configured to process the movement data to recognise a sign language gesture performed by the user's hand; a robotic hand (2) which incorporates a plurality of moveable fingers (3) and a moveable thumb (4), wherein the robotic hand (2) is coupled for communication with the central control module (25) and the central control module (25) is configured to control the robotic hand (2) to perform a sign language gesture by moving one or more of the fingers (3) or the thumb (4); and a display screen (30) coupled for communication with the central control module (25), wherein the central control module (25) is configured to provide display data to the display screen (30) which causes the display screen (30) to display at least one of a text output or an image output which corresponds to a sign language gesture performed by the robotic hand (2) or a sign language gesture performed by the user's hand.

Description

A system and method for teaching sign language
Technical field
The present invention relates to a system and method for teaching sign language. The present invention more particularly relates to a system and method for teaching sign language using a robotic hand.
Background
Sign language is an important form of communication for people who are deaf or hard of hearing but it takes time to learn. It can be particularly difficult and time consuming for a child to learn sign language because of the level of concentration required to learn the many sign language gestures. A great deal of effort and patience also is required by parents or teachers to teach a child sign language.
Traditionally, an adult would teach a child a sign language gesture by pointing to an object or a picture and performing the corresponding sign language gesture. The child can then perform the sign language gesture back to the adult. This process is repeated multiple times until the child has learnt the sign language gesture. Over time, the child builds up their knowledge of sign language gestures and can eventually communicate effectively using sign language.
Teaching aids have been proposed previously to facilitate the teaching of sign language to children. Teaching aids typically use flash cards showing objects and corresponding sign language gestures. It has also been proposed previously to provide software applications which display images or videos along with corresponding sign language gestures which can be viewed by children.
Unfortunately, a child who is deaf or hard of hearing can fall behind in their studies, compared with children who are not hard of hearing, if the child finds it difficult to communicate using sign language. This can be a particular problem in the case of a child who does not have a responsible adult or teacher that is prepared to devote the level of time required to teach the child sign language. Software applications which are designed to teach children sign language go some way to help a child learn. However, a child can grow tired of learning via a screen-based aid and cease to make progress.
The present invention seeks to provide an improved system and method for teaching sign language.
Summary of invention
According to one aspect of the present invention, there is provided a system for teaching sign language, the system comprising: a central control module; a motion sensor coupled for communication with the central control module, the motion sensor being configured to sense movement of a user's hand and to provide movement data indicative of the movement of the user's hand to the central control module, wherein the central control module is configured to process the movement data to recognise a sign language gesture performed by the user's hand; a robotic hand which incorporates a plurality of moveable fingers and a moveable thumb, wherein the robotic hand is coupled for communication with the central control module and the central control module is configured to control the robotic hand to perform a sign language gesture by moving one or more of the fingers or the thumb; and a display screen coupled for communication with the central control module, wherein the central control module is configured to provide display data to the display screen which causes the display screen to display at least one of a text output or an image output which corresponds to a sign language gesture performed by the robotic hand or a sign language gesture performed by the user's hand.
Preferably, the system further comprises: a further robotic hand which incorporates a plurality of moveable fingers and a moveable thumb, wherein the further robotic hand is coupled for communication with the central control module and the central control module is configured to control the further robotic hand to perform a sign language gesture by moving one or more of the fingers or the thumb.
Conveniently, each robotic hand is carried by a robotic arm which is moveably mounted to a robot body.
Advantageously, the robot body is carried by a base unit, the base unit being provided with a movement arrangement which is configured to move the base unit over a surface on which the base unit rests.
Preferably, the system further comprises: a robot head which is carried by the robot body, the robot head comprising a face which is configured to convey a plurality of different facial expressions in response to instructions sent by the central control module.
Conveniently, the motion sensor comprises a plurality of cameras and wherein the motion sensor is configured to capture three-dimensional image data and to provide the three-dimensional image data to the central control module.
Advantageously, at least some of the cameras of the motion sensor are infrared cameras.
Preferably, the display screen is a touchscreen which is configured to receive a touch input from a user and to provide a signal indicative of the touch input to the central control module.
Conveniently, the central control module is configured to process movement data from the movement sensor to recognise a sign language gesture performed by the user's hand and to compare the recognised sign language gesture with a text output or an image output displayed on the display screen.
Advantageously, the central control module is configured to process movement data from the movement sensor to recognise a sign language gesture performed by the user's hand and to compare the recognised sign language gesture with a sign language gesture performed by the robotic hand.
Preferably, the system further comprises: a memory storing sign language data which is indicative of a plurality of sign language gestures, the memory being coupled for communication with the central control module.
Conveniently, the central control module is configured to communicate via a network with a remote server to retrieve sign language data stored on the remote server which is indicative of a plurality of sign language gestures.
Advantageously, the central control module is configured to process the movement data and to calculate an accuracy indicator which is indicative of the accuracy at which the user performed a sign language gesture.
Preferably, the system is configured to output feedback information corresponding to the accuracy indicator to the user via the display screen.
According to another aspect of the present invention, there is provided a method for teaching sign language, the method comprising: providing: a central control module; a motion sensor configured to sense movement of a user's hand; a robotic hand which incorporates a plurality of moveable fingers and a moveable thumb; and a display screen, wherein the motion sensor, the robotic hand and the display screen are coupled for communication with the central control module, and wherein the method further comprises: providing, by the central control module, display data to the display screen which causes the display screen to display at least one of a text output or an image output which corresponds to a stored sign language gesture; sensing, using the motion sensor, movement of a user's hand; providing, from the motion sensor, movement data indicative of the movement of the user's hand to the central control module; processing, at the central control module, the movement data to recognise a sign language gesture performed by the user's hand; comparing, using the central control module, the sign language gesture performed by the user's hand with the stored sign language gesture; and providing feedback to the user via the display screen which is indicative of the comparison of the sign language gesture performed by the user's hand and the stored sign language gesture.
According to a further aspect of the present invention, there is provided a method for teaching sign language, the method comprising: providing: a central control module; a motion sensor configured to sense movement of a user's hand; a robotic hand which incorporates a plurality of moveable fingers and a moveable thumb; and a display screen, wherein the motion sensor, the robotic hand and the display screen are coupled for communication with the central control module, and wherein the method further comprises: controlling, by the central control module, the robotic hand to perform a sign language gesture which corresponds to a stored sign language gesture by moving one or more of the fingers or the thumb; sensing, using the motion sensor, movement of a user's hand; providing, from the motion sensor, movement data indicative of the movement of the user's hand to the central control module; processing, at the central control module, the movement data to recognise a sign language gesture performed by the user's hand; comparing, using the central control module, the sign language gesture performed by the user's hand with the stored sign language gesture; and providing feedback to the user via the display screen which is indicative of the comparison of the sign language gesture performed by the user's hand and the stored sign language gesture.
Preferably, the method further comprises providing: a further robotic hand which incorporates a plurality of moveable fingers and a moveable thumb, wherein the further robotic hand is coupled for communication with the central control module and the central control module is configured to control the further robotic hand to perform a sign language gesture by moving one or more of the fingers or the thumb.
Conveniently, each robotic hand is carried by a robotic arm which is moveably mounted to a robot body.
Advantageously, the robot body is carried by a base unit, the base unit being provided with a movement arrangement which is configured to move the base unit over a surface on which the base unit rests.
Preferably, the system further comprises: a robot head which is carried by the robot body, the robot head comprising a face which is configured to convey a plurality of different facial expressions in response to instructions sent by the central control module.
Conveniently, the motion sensor comprises a plurality of cameras and wherein the method further comprises: capturing, using cameras of the motion sensor, three-dimensional image data; and providing the three-dimensional image data from the camera to the central control module.
Advantageously, at least some of the cameras of the motion sensor are infra-red cameras.
Preferably, the display screen is a touchscreen and the method further comprises: receiving, at the touchscreen, a touch input from a user; and providing a signal indicative of the touch input from the touchscreen to the central control module.
Conveniently, the method further comprises providing: a memory storing sign language data which is indicative of a plurality of sign language gestures, the memory being coupled for communication with the central control module.
Advantageously, the method further comprises: retrieving, via a network, sign language data stored on a remote server which is indicative of a plurality of sign language gestures.
Preferably, the method further comprises: processing, by the central control module, the movement data to calculate an accuracy indicator which is indicative of the accuracy at which the user performed a sign language gesture.
Conveniently, the method further comprises: outputting feedback information corresponding to the accuracy indicator to the user via the display screen.
Brief description of drawings
In order that the invention may be more readily understood, and so that further features thereof may be appreciated, embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings in which:
Figure 1 is a diagrammatic perspective view of a system for teaching sign language of some embodiments,
Figure 2 is a diagrammatic front view of the system of figure 1,
Figure 3 is a diagrammatic side view of the system of figure 1,
Figure 4 is a diagrammatic top view of the system of figure 1,
Figure 5 is a block diagram showing components of a system of some embodiments,
Figure 6 is a flow diagram showing a first mode of operation of a system of some embodiments, and
Figure 7 is a flow diagram showing a second mode of operation of a system of some embodiments.
Detailed description
Referring initially to figures 1-4 of the accompanying drawings, a system 1 of some embodiments comprises a robotic hand 2. The robotic hand 2 incorporates a plurality of moveable fingers 3 and a moveable thumb 4. In this embodiment, the robotic hand 2 is carried by a robotic arm 5. The robotic arm 5 comprises a first support portion 6 onto which the robotic hand 2 is movably mounted. The first support portion 6 is moveably coupled to one end of a second support portion 7. The other end of the second support portion 7 is moveably coupled to a robot body 8. The support portions 6, 7 of the robotic arm 5 and the robotic hand 2 are configured to be moved relative to one another by drive motors (not shown) to change the position and orientation of the robotic hand 2. The robotic hand 2 is configured to move the fingers 3 and the thumb 4 using drive motors (not shown) within the robotic hand 2.
In this embodiment, the robot body 8 comprises a housing 9 which is shaped to represent the chest of a human. The second support portion 7 of the robotic arm 5 is moveably mounted to one side of an upper portion of the robot body 8 at a first shoulder portion 10 of the robot body 8.
In this embodiment, the system 1 comprises a further robotic hand 11 which comprises a plurality of moveable fingers 12 and a moveable thumb 13. The further robotic hand 11 is moveably mounted to a further robotic arm 14 on one end of a first further support portion 15. The first further support portion 15 is moveably mounted to a second further support portion 16 which is, in turn, moveably mounted to a second shoulder portion 17 of the robot body 8.
For simplicity, the following description will refer to only the first robotic hand 2 and the first robotic arm 5 but it is to be appreciated that the description applies also to the further robotic hand 11 and the further robotic arm 14. The further robotic hand 11 and the further robotic arm 14 may operate instead of or in combination with the first robotic hand 2 and the first robotic arm 5.
In this embodiment, a robot head 18 is moveably mounted to the robot body 8. The robot head 18 comprises a face 19 which is provided with two eyes 20, 21 and a mouth 22. The robot head 18 is configured to alter the appearance of the eyes 20, 21 and/or the mouth 22 to convey a plurality of different facial expressions.
The robot body 8 is carried by a base unit 23. In this embodiment, the base unit 23 comprises a movement arrangement 24 which is configured to move the base unit 23 over a surface on which the base unit 23 rests. In some embodiments, the movement arrangement 24 comprises wheels which are driven by an electric motor (not shown) to move the base unit 23 across a surface.
In this embodiment, the base unit 23 is a modified TurtleBot™.
In this embodiment, the system 1 comprises all or a portion of a NAO™ robot manufactured by SoftBank Robotics™. In other embodiments, the system 1 comprises a different robot or portion of a robot. In some embodiments, some of the robotic components described above may be omitted.
The system 1 further comprises a central control module 25 which, in this embodiment, is in the form of a laptop computer. In other embodiments, the central control module 25 is integrated into part of the base unit 23 or a different part of the system 1.
In this embodiment, the central control module 25 is carried by the base unit 23 and is positioned adjacent to the robot body 8. The central control module 25 is coupled for communication with the robotic components of the system 1 and is configured to control the operation of the robotic components of the system 1. The central control module 25 is configured to control the robotic hand 2 to perform a sign language gesture by moving one or more of the fingers 3 or the thumb 4.
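Purely by way of illustration of the kind of control just described, the minimal Python sketch below maps a named gesture to target finger and thumb angles and sends them to a motor driver. The ServoDriver class, the servo channel numbers and the angle values are hypothetical assumptions; the patent does not disclose a specific drive interface.

```python
# Illustrative sketch only: the ServoDriver interface, servo channels and
# angle values are hypothetical; the patent does not specify them.

from dataclasses import dataclass
from typing import Dict


@dataclass
class HandPose:
    """Target angle (degrees) for each digit of the robotic hand."""
    angles: Dict[str, float]


class ServoDriver:
    """Stand-in for whatever motor-driver interface the robotic hand exposes."""

    def set_angle(self, channel: int, degrees: float) -> None:
        print(f"servo {channel} -> {degrees:.1f} deg")


# Hypothetical digit-to-servo-channel mapping and stored gesture poses.
CHANNELS = {"thumb": 0, "index": 1, "middle": 2}
GESTURES = {
    "letter_a": HandPose({"thumb": 10.0, "index": 95.0, "middle": 95.0}),
    "letter_b": HandPose({"thumb": 80.0, "index": 5.0, "middle": 5.0}),
}


def perform_gesture(driver: ServoDriver, name: str) -> None:
    """Move one or more of the fingers or the thumb into a stored gesture."""
    for digit, degrees in GESTURES[name].angles.items():
        driver.set_angle(CHANNELS[digit], degrees)


if __name__ == "__main__":
    perform_gesture(ServoDriver(), "letter_a")
```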
The system 1 further comprises a motion sensor 26 which is coupled for communication with the central control module 25. In this embodiment, the motion sensor 26 is carried by a motion sensor support member 27 which extends upwardly from the base unit 23 at the rear of the robot body 8, such that the motion sensor 26 is positioned above the robot head 18. The motion sensor 26 is configured to sense movement of a user's hand and to provide movement data indicative of the movement of the user's hand to the central control module 25. The central control module 25 is configured to process the movement data to recognise a sign language gesture performed by the user's hand. In this embodiment, the central control module 25 processes the movement data using image processing software to recognise the positions and motion of the fingers and thumb of the user's hand. The central control module then compares the positions and motions of the user's fingers and thumb with stored data which is indicative of a sign language gesture.
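One plausible form of the comparison just described is nearest-template matching over sensed joint positions, sketched below in Python. The joint names, template coordinates and distance threshold are assumptions made for illustration; the patent does not define the recognition algorithm.

```python
# Illustrative sketch only: the joint layout, template values and threshold
# are assumptions; the patent does not define the recognition algorithm.

import math
from typing import Dict, Optional, Tuple

# A sensed hand frame: joint name -> (x, y, z) position from the motion sensor.
Frame = Dict[str, Tuple[float, float, float]]

# Hypothetical stored sign language data: gesture name -> reference joint positions.
TEMPLATES: Dict[str, Frame] = {
    "hello": {"thumb_tip": (0.00, 0.10, 0.00), "index_tip": (0.02, 0.18, 0.00)},
    "thanks": {"thumb_tip": (0.05, 0.05, 0.10), "index_tip": (0.06, 0.07, 0.12)},
}


def frame_distance(a: Frame, b: Frame) -> float:
    """Mean Euclidean distance over the joints that both frames contain."""
    joints = set(a) & set(b)
    return sum(math.dist(a[j], b[j]) for j in joints) / len(joints)


def recognise_gesture(frame: Frame, threshold: float = 0.05) -> Optional[str]:
    """Return the closest stored gesture, or None if nothing is close enough."""
    best = min(TEMPLATES, key=lambda name: frame_distance(frame, TEMPLATES[name]))
    return best if frame_distance(frame, TEMPLATES[best]) <= threshold else None
```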
A user can therefore move to a position in front of the movement sensor 26 and perform a sign language gesture which is sensed by the motion sensor 26 and interpreted by the central control module 25. In this embodiment, the motion sensor 26 comprises a plurality of cameras 28 which are spaced apart from one another such that the cameras 28 can capture three-dimensional image data and provide the three-dimensional image data to the central control module 25. In some embodiments, the motion sensor 26 comprises a light source 29 which is configured to output light to illuminate a user positioned in front of the movement sensor 26. In this embodiment, the light source 29 is an infra-red light source.
In this embodiment, the cameras 28 of the motion sensor 26 are infra-red cameras. However, in other embodiments, the cameras 28 are visible light cameras.
In this embodiment, the motion sensor 26 is a Kinect™ manufactured by Microsoft Corporation™. In other embodiments, the motion sensor 26 is a Leap Motion™ controller manufactured by Leap Motion Inc.™.
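The patent does not explain how the spaced-apart cameras yield three-dimensional data, but for a rectified stereo pair the usual relationship is depth = focal length × baseline / disparity. The focal length and baseline in the sketch below are invented example values.

```python
# Illustrative sketch only: assumes a rectified stereo camera pair with a
# made-up focal length and baseline; the patent does not give sensor internals.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 600.0,
                         baseline_m: float = 0.06) -> float:
    """Depth in metres of a point seen by two horizontally spaced cameras,
    using Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px


# Example: a fingertip with a 36-pixel disparity is roughly 1 m from the sensor.
print(depth_from_disparity(36.0))  # 1.0
```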
The system 1 further comprises a display screen 30 which is coupled for communication with the central control module 25. The display screen 30 is carried by the base unit 23. In this embodiment, the display screen 30 is mounted to the base unit 23 but in other embodiments, the display screen 30 is removable from the base unit 23.
In this embodiment, the display screen 30 comprises a touchscreen 31 which is configured to receive a touch input from a user and to provide a signal indicative of the touch input to the central control module 25.
In this embodiment, the robotic hand 2 comprises two moveable fingers 3 and a moveable thumb 4. However, in other embodiments, the robotic hand 2 comprises five moveable fingers and a moveable thumb. In further embodiments, the robotic hand 2 comprises a different number of moveable fingers along with a moveable thumb.
Referring now to figure 5 of the accompanying drawings, the system 1 of some embodiments comprises a memory 32 which is coupled for communication with the central control module 25. In this embodiment, the memory 32 is integrated within the central control module 25. The memory 32 stores sign language data which is indicative of a plurality of sign language gestures.
In some embodiments, the central control module 25 is configured to communicate via a network, such as the Internet, with a remote server to retrieve sign language data stored on the remote server which is indicative of a plurality of sign language gestures.
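A minimal sketch of retrieving sign language data over a network is shown below, assuming a hypothetical JSON endpoint; the URL and payload layout are invented for illustration and are not part of the disclosure.

```python
# Illustrative sketch only: the URL and JSON payload layout are hypothetical;
# the patent only states that gesture data may be retrieved from a remote server.

import json
import urllib.request
from typing import Dict, List


def fetch_gesture_library(url: str = "https://example.com/gestures.json") -> Dict[str, List[float]]:
    """Download a gesture library (gesture name -> joint angles) from a remote server."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))


def merge_into_memory(local: Dict[str, List[float]],
                      remote: Dict[str, List[float]]) -> None:
    """Add newly retrieved gestures to the locally stored sign language data."""
    for name, angles in remote.items():
        local.setdefault(name, angles)
```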
Now that the components of the system 1 have been described, methods for teaching sign language using the system 1 will now be described with reference to figures 6 and 7.
The system 1 is configured for teaching sign language to any user, whether or not the user is hard of hearing. However, it will be appreciated that the system 1 is particularly beneficial for teaching sign language to children who are hard of hearing because of the interactive nature of the system which is appealing for children.
Referring now to figure 6 of the accompanying drawings, in a first mode of operation, the central control module 25 is configured to provide display data to the display screen 30 which causes the display screen 30 to display at least one of a text output or an image output. A text output might, for instance, be a word which describes an object or action to prompt a user to perform a sign language gesture corresponding to the object or action. An image output might, for instance, be an image of an object which prompts a user to perform a sign language gesture corresponding to that object.
A user, upon seeing the text output or the image output on the display screen 30, moves one or more of their hands to perform a sign language gesture which corresponds to the text output or the image output. The motion sensor 26 senses the movement of the user's hands and provides movement data indicative of the movement of the user's hands to the central control module 25. The central control module 25 processes the movement data to recognise a sign language gesture performed by the user's hands.
The central control module 25 compares the recognised sign language gesture with a sign language gesture stored in the memory 32. If the central control module 25 recognises the sign language gesture performed by the user to match one of the stored sign language gestures, the central control module 25 provides feedback to the user via the display screen 30 to indicate to the user that they have correctly performed a sign language gesture which matches the object described by the text output or the image output.
In some embodiments, the central control module 25 is configured to compare the positions of the fingers and thumb of the user with positions of the fingers and thumb of a hand of a stored sign language gesture and to calculate an accuracy indicator which indicates the accuracy of the position of the fingers and thumb of the user's hand in performing the sign language gesture. This accuracy indicator is preferably provided as part of the feedback to the user via the display screen 30 so that the user can tell how accurately they have performed the sign language gesture. The process can be repeated to enable the user to refine the positions of their fingers and thumb in performing the sign language gesture.
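One way such an accuracy indicator could be computed is to map the mean positional error between the user's digits and the stored pose onto a 0-100 scale, as in the sketch below; the tolerance value and the linear scale are assumptions, since the patent does not define a particular formula.

```python
# Illustrative sketch only: the tolerance and the linear 0-100 scale are
# assumptions; the patent does not define a specific accuracy formula.

import math
from typing import Dict, Tuple

Frame = Dict[str, Tuple[float, float, float]]  # joint name -> (x, y, z) position


def accuracy_indicator(user: Frame, reference: Frame, tolerance: float = 0.10) -> float:
    """Score from 0 to 100 for how closely the user's finger and thumb positions
    match the stored gesture (100 = exact match, 0 = at or beyond tolerance)."""
    joints = set(user) & set(reference)
    mean_error = sum(math.dist(user[j], reference[j]) for j in joints) / len(joints)
    return round(max(0.0, 1.0 - mean_error / tolerance) * 100.0, 1)


def feedback_text(score: float) -> str:
    """Feedback of the kind that could be shown to the user on the display screen."""
    return "Well done!" if score >= 80.0 else f"Close - try again ({score:.0f}% match)."
```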
Referring now to figure 7 of the accompanying drawings, in a second mode of operation, the central control module 25 is configured to control the robotic hand 2 to perform a sign language gesture by moving the fingers 3 and the thumb 4. A user views the sign language gesture performed by the robotic hand 2 and attempts to copy the sign language gesture by moving their hand. The movement sensor 26 senses the movement of the user's hand and provides movement data indicative of the movement of the user's hand to the central control module 25.
The central control module 25 processes the movement data to recognise the sign language gesture performed by the user's hand and compares the recognised gesture with a stored sign language gesture corresponding to the sign language gesture performed by the robotic hand 2.
In some embodiments, the system 1 provides feedback to the user via the display screen 30 which indicates the accuracy at which the user performed the sign language gesture.
The system 1 of some embodiments provides an effective means for a user, such as a child who is hard of hearing, to learn sign language. In some embodiments, the system 1 is configured to follow a predetermined series of teaching instructions by providing a series of text or image outputs via the display screen 30 or by providing a series of sign language gestures using at least one of the robotic hands 2, 11. A user can follow this predetermined set of teaching instructions to learn a combination of sign language gestures.
In some embodiments, the central control module 25 is configured to control the display screen 30 to output graphical animations or videos that are intended to make learning sign language more fun. These animations or videos are, in some embodiments, interactive and configured to respond to a touch input from the user via the touchscreen.
The system 1 of some embodiments provides an interactive means for teaching a user sign language without the need for another user to be present. A parent or teacher can therefore configure the system to teach a child a sign language gesture or a series of gestures.
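To illustrate how a predetermined series of teaching instructions might be sequenced, the sketch below loops over lesson steps: prompting via the display, demonstrating with the robotic hand, and repeating until the user's score passes a threshold. The lesson contents, the DemoControlModule stub and its method names are hypothetical placeholders, not an interface disclosed in the patent.

```python
# Illustrative sketch only: the lesson content, the DemoControlModule stub and
# its method names are hypothetical placeholders for the behaviour described above.

from typing import List, Tuple

# Each teaching step: (gesture name, prompt shown as a text output on the display).
LESSON: List[Tuple[str, str]] = [
    ("letter_a", "Sign the letter A"),
    ("hello", "Sign 'hello'"),
]


class DemoControlModule:
    """Minimal stand-in so the loop can be exercised without any hardware."""

    def show_prompt(self, text: str) -> None: print("PROMPT:", text)
    def demonstrate(self, gesture: str) -> None: print("ROBOT HAND performs:", gesture)
    def capture_hand(self) -> dict: return {}
    def score(self, frame: dict, gesture: str) -> float: return 100.0
    def show_feedback(self, score: float) -> None: print(f"FEEDBACK: {score:.0f}%")


def run_lesson(control_module, passing_score: float = 80.0) -> None:
    """Work through the teaching steps, repeating each one until the user's
    gesture is judged accurate enough."""
    for gesture, prompt in LESSON:
        control_module.show_prompt(prompt)         # text/image output on the display screen
        control_module.demonstrate(gesture)        # robotic hand performs the gesture
        score = 0.0
        while score < passing_score:
            frame = control_module.capture_hand()  # movement data from the motion sensor
            score = control_module.score(frame, gesture)
            control_module.show_feedback(score)    # feedback via the display screen


if __name__ == "__main__":
    run_lesson(DemoControlModule())
```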
When used in this specification and the claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.
The features disclosed in the foregoing description, or the following claims, or the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for attaining the disclosed result, as appropriate, may, separately, or in any combination of such features, be utilised for realising the invention in diverse forms thereof.

Claims

1. A system for teaching sign language, the system comprising:
a central control module;
a motion sensor coupled for communication with the central control module, the motion sensor being configured to sense movement of a user's hand and to provide movement data indicative of the movement of the user's hand to the central control module, wherein the central control module is configured to process the movement data to recognise a sign language gesture performed by the user's hand;
a robotic hand which incorporates a plurality of moveable fingers and a moveable thumb, wherein the robotic hand is coupled for communication with the central control module and the central control module is configured to control the robotic hand to perform a sign language gesture by moving one or more of the fingers or the thumb; and
a display screen coupled for communication with the central control module, wherein the central control module is configured to provide display data to the display screen which causes the display screen to display at least one of a text output or an image output which corresponds to a sign language gesture performed by the robotic hand or a sign language gesture performed by the user's hand.
2. The system of claim 1, wherein the system further comprises:
a further robotic hand which incorporates a plurality of moveable fingers and a moveable thumb, wherein the further robotic hand is coupled for communication with the central control module and the central control module is configured to control the further robotic hand to perform a sign language gesture by moving one or more of the fingers or the thumb.
3. The system of claim 1 or claim 2, wherein each robotic hand is carried by a robotic arm which is moveably mounted to a robot body.
4. The system of claim 3, wherein the robot body is carried by a base unit, the base unit being provided with a movement arrangement which is configured to move the base unit over a surface on which the base unit rests.
5. The system of claim 3 or claim 4, wherein the system further comprises: a robot head which is carried by the robot body, the robot head comprising a face which is configured to convey a plurality of different facial expressions in response to instructions sent by the central control module.
6. The system of any one of the preceding claims, wherein the motion sensor comprises a plurality of cameras and wherein the motion sensor is configured to capture three-dimensional image data and to provide the three- dimensional image data to the central control module.
7. The system of claim 6, wherein at least some of the cameras of the motion sensor are infra-red cameras.
8. The system of any one of the preceding claims, wherein the display screen is a touchscreen which is configured to receive a touch input from a user and to provide a signal indicative of the touch input to the central control module.
9. The system of any one of the preceding claims, wherein the central control module is configured to process movement data from the movement sensor to recognise a sign language gesture performed by the user's hand and to compare the recognised sign language gesture with a text output or an image output displayed on the display screen.
10. The system of any one of the preceding claims, wherein the central control module is configured to process movement data from the movement sensor to recognise a sign language gesture performed by the user's hand and to compare the recognised sign language gesture with a sign language gesture performed by the robotic hand.
11. The system of any one of the preceding claims, wherein the system further comprises:
a memory storing sign language data which is indicative of a plurality of sign language gestures, the memory being coupled for communication with the central control module.
12. The system of any one of claims 1 to 10, wherein the central control module is configured to communicate via a network with a remote server to retrieve sign language data stored on the remote server which is indicative of a plurality of sign language gestures.
13. The system of any one of the preceding claims, wherein the central control module is configured to process the movement data and to calculate an accuracy indicator which is indicative of the accuracy at which the user performed a sign language gesture.
14. The system of claim 13, wherein the system is configured to output feedback information corresponding to the accuracy indicator to the user via the display screen.
15. A method for teaching sign language, the method comprising:
providing:
a central control module; a motion sensor configured to sense movement of a user's hand; a robotic hand which incorporates a plurality of moveable fingers and a moveable thumb; and
a display screen, wherein the motion sensor, the robotic hand and the display screen are coupled for communication with the central control module, and wherein the method further comprises:
providing, by the central control module, display data to the display screen which causes the display screen to display at least one of a text output or an image output which corresponds to a stored sign language gesture;
sensing, using the motion sensor, movement of a user's hand;
providing, from the motion sensor, movement data indicative of the movement of the user's hand to the central control module;
processing, at the central control module, the movement data to recognise a sign language gesture performed by the user's hand;
comparing, using the central control module, the sign language gesture performed by the user's hand with the stored sign language gesture; and
providing feedback to the user via the display screen which is indicative of the comparison of the sign language gesture performed by the user's hand and the stored sign language gesture.
16. A method for teaching sign language, the method comprising:
providing:
a central control module;
a motion sensor configured to sense movement of a user's hand; a robotic hand which incorporates a plurality of moveable fingers and a moveable thumb; and
a display screen, wherein the motion sensor, the robotic hand and the display screen are coupled for communication with the central control module, and wherein the method further comprises: controlling, by the central control module, the robotic hand to perform a sign language gesture which corresponds to a stored sign language gesture by moving one or more of the fingers or the thumb;
sensing, using the motion sensor, movement of a user's hand;
providing, from the motion sensor, movement data indicative of the movement of the user's hand to the central control module;
processing, at the central control module, the movement data to recognise a sign language gesture performed by the user's hand;
comparing, using the central control module, the sign language gesture performed by the user's hand with the stored sign language gesture; and
providing feedback to the user via the display screen which is indicative of the comparison of the sign language gesture performed by the user's hand and the stored sign language gesture.
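Claim 16 differs from claim 15 in that the robotic hand first demonstrates the stored sign language gesture before the user imitates it. A minimal sketch of that demonstration step is given below, assuming a simple per-digit actuator interface (`set_joint_angle`, `wait_until_settled`) and one target angle per digit; none of this is specified by the claim.

```python
# Hypothetical control of the robotic hand for claim 16 (assumed servo interface).
FINGER_NAMES = ["thumb", "index", "middle", "ring", "little"]

def perform_gesture(robotic_hand, stored_gesture):
    """Move the thumb and each finger to the joint angles of the stored gesture."""
    for digit, angle in zip(FINGER_NAMES, stored_gesture.finger_angles):
        robotic_hand.set_joint_angle(digit, angle)  # assumed actuator command
    robotic_hand.wait_until_settled()               # assumed blocking call
```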
17. The method of claim 15 or claim 16, wherein the method further comprises providing:
a further robotic hand which incorporates a plurality of moveable fingers and a moveable thumb, wherein the further robotic hand is coupled for communication with the central control module and the central control module is configured to control the further robotic hand to perform a sign language gesture by moving one or more of the fingers or the thumb.
18. The method of any one of claims 15 to 17, wherein each robotic hand is carried by a robotic arm which is moveably mounted to a robot body.
19. The method of claim 18, wherein the robot body is carried by a base unit, the base unit being provided with a movement arrangement which is configured to move the base unit over a surface on which the base unit rests.
20. The method of claim 18 or claim 19, wherein the method further comprises providing:
a robot head which is carried by the robot body, the robot head comprising a face which is configured to convey a plurality of different facial expressions in response to instructions sent by the central control module.
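Claim 20 recites a robot head whose face conveys different expressions in response to instructions from the central control module. A minimal sketch, assuming a named-expression command interface (`show_expression`) and a fixed expression set that the claim does not specify:

```python
# Hypothetical facial-expression instruction for the robot head of claim 20.
EXPRESSIONS = {"happy", "encouraging", "thinking", "surprised"}

def set_expression(robot_head, name: str) -> None:
    """Instruct the robot head to convey one of a fixed set of expressions."""
    if name not in EXPRESSIONS:
        raise ValueError(f"Unknown expression: {name!r}")
    robot_head.show_expression(name)  # assumed command sent by the control module
```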
21. The method of any one of claims 15 to 20, wherein the motion sensor comprises a plurality of cameras and wherein the method further comprises:
capturing, using the cameras of the motion sensor, three-dimensional image data; and
providing the three-dimensional image data from the cameras to the central control module.
22. The method of claim 21, wherein at least some of the cameras of the motion sensor are infra-red cameras.
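Claims 21 and 22 recite a motion sensor built from a plurality of cameras, at least some of which may be infra-red. Purely as an illustration, a fingertip observed in two calibrated cameras can be triangulated to a three-dimensional point; the projection matrices `p1` and `p2` are assumed to come from a prior stereo calibration step, which the claims do not describe.

```python
# Illustrative triangulation of a fingertip from two calibrated cameras
# (claims 21 and 22).
import numpy as np
import cv2

def fingertip_3d(p1, p2, pixel_cam1, pixel_cam2):
    """Return the (x, y, z) position of a point seen by both cameras.

    p1, p2      : 3x4 projection matrices from stereo calibration (assumed given)
    pixel_cam1/2: (u, v) pixel coordinates of the same fingertip in each image
    """
    pts1 = np.array(pixel_cam1, dtype=np.float64).reshape(2, 1)
    pts2 = np.array(pixel_cam2, dtype=np.float64).reshape(2, 1)
    homogeneous = cv2.triangulatePoints(p1, p2, pts1, pts2)  # 4x1 result
    return (homogeneous[:3] / homogeneous[3]).ravel()
```

The same geometry applies whether the cameras are visible-light or infra-red; only the image acquisition differs.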
23. The method of any one of claims 15 to 22, wherein the display screen is a touchscreen and the method further comprises:
receiving, at the touchscreen, a touch input from a user; and
providing a signal indicative of the touch input from the touchscreen to the central control module.
24. The method of any one of claims 15 to 23, wherein the method further comprises providing:
a memory storing sign language data which is indicative of a plurality of sign language gestures, the memory being coupled for communication with the central control module.
25. The method of any one of claims 15 to 23, wherein the method further comprises: retrieving, via a network, sign language data stored on a remote server which is indicative of a plurality of sign language gestures.
26. The method of any one of claims 15 to 25, wherein the method further comprises:
processing, by the central control module, the movement data to calculate an accuracy indicator which is indicative of the accuracy at which the user performed a sign language gesture.
27. The method of claim 26, wherein the method further comprises:
outputting feedback information corresponding to the accuracy indicator to the user via the display screen.
PCT/GB2017/052540 2017-09-01 2017-09-01 A system and method for teaching sign language WO2019043350A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2017/052540 WO2019043350A1 (en) 2017-09-01 2017-09-01 A system and method for teaching sign language

Publications (1)

Publication Number Publication Date
WO2019043350A1 true WO2019043350A1 (en) 2019-03-07

Family

ID=59895328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2017/052540 WO2019043350A1 (en) 2017-09-01 2017-09-01 A system and method for teaching sign language

Country Status (1)

Country Link
WO (1) WO2019043350A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110234384A1 (en) * 2010-03-24 2011-09-29 Agrawal Dharma P Apparatus for instantaneous translation of sign language
US20110301934A1 (en) * 2010-06-04 2011-12-08 Microsoft Corporation Machine based sign language interpreter
US20130115578A1 (en) * 2011-11-04 2013-05-09 Honda Motor Co., Ltd. Sign language action generating device and communication robot
US20130204435A1 (en) * 2012-02-06 2013-08-08 Samsung Electronics Co., Ltd. Wearable robot and teaching method of motion using the same

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109712454A (en) * 2019-03-11 2019-05-03 荆州职业技术学院 A kind of intelligence Language for English learning system
CN109712454B (en) * 2019-03-11 2021-04-30 荆州职业技术学院 Intelligent English teaching language learning system
CN110085096A (en) * 2019-04-26 2019-08-02 南安易盾格商贸有限公司 A kind of multi-functional robot teaching aid
CN110580827A (en) * 2019-08-14 2019-12-17 张忠 Teaching show intelligent mobile robot with protection that drops
CN110580827B (en) * 2019-08-14 2021-04-27 南京浦创智谷企业管理有限公司 Teaching show intelligent mobile robot with protection that drops
CN110751050A (en) * 2019-09-20 2020-02-04 郑鸿 Motion teaching system based on AI visual perception technology
CN114842712A (en) * 2022-04-12 2022-08-02 汕头大学 Sign language teaching system based on gesture recognition
CN114842712B (en) * 2022-04-12 2023-10-17 汕头大学 Sign language teaching system based on gesture recognition

Similar Documents

Publication Publication Date Title
WO2019043350A1 (en) A system and method for teaching sign language
Rouanet et al. The impact of human–robot interfaces on the learning of visual objects
Fernandez et al. Natural user interfaces for human-drone multi-modal interaction
US8723801B2 (en) More useful man machine interfaces and applications
US9092698B2 (en) Vision-guided robots and methods of training them
US11836294B2 (en) Spatially consistent representation of hand motion
CN106462242A (en) User interface control using gaze tracking
Crossan et al. Multimodal trajectory playback for teaching shape information and trajectories to visually impaired computer users
WO2017186001A1 (en) Education system using virtual robots
CN107077229A (en) human-machine interface device and system
US11048375B2 (en) Multimodal 3D object interaction system
CN109144244A (en) A kind of method, apparatus, system and the augmented reality equipment of augmented reality auxiliary
US20170124762A1 (en) Virtual reality method and system for text manipulation
Kosmyna et al. Designing guiding systems for brain-computer interfaces
Gladence et al. A research on application of human-robot interaction using artifical intelligence
Fontana Association of haptic trajectories to takete and maluma
Ugur et al. Learning to grasp with parental scaffolding
Butnariu et al. DEVELOPMENT OF A NATURAL USER INTERFACE FOR INTUITIVE PRESENTATIONS IN EDUCATIONAL PROCESS.
Rusanu et al. Virtual robot arm controlled by hand gestures via Leap Motion Sensor
Papadopoulos et al. An Advanced Human-Robot Interaction Interface for Teaching Collaborative Robots New Assembly Tasks
Yang et al. Research on a visual sensing and tracking system for distance education
Cohen Dynamical system representation, generation, and recognition of basic oscillatory motion gestures, and application for the control of actuated mechanisms
Strentzsch et al. Digital map table VR: Bringing an interactive system to virtual reality
KR102477613B1 (en) Mission execution method using coding learning tools
KR102464419B1 (en) virtual reality-based self-directed practice device for medication education on nursing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17768502

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17768502

Country of ref document: EP

Kind code of ref document: A1