WO2021230759A1 - System for assisting a visually impaired user - Google Patents
- Publication number
- WO2021230759A1 (PCT/PH2021/050008)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- module
- user
- learning
- camera
- navigation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H1/00—Apparatus for passive exercising; Vibrating apparatus ; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Abstract
The present invention relates to a system for assisting a visually impaired user. Particularly, the invention pertains to a system for assisting a visually impaired user, comprising a vision assistant module for answering queries given by the user, giving instructions, and providing alert messages; a real time scanning module for obstacle detection, person tracking, object detection and tracking, and optical character recognition; a navigation module for outdoor navigation and indoor map tracking; and a processing module using a machine learning algorithm for processing data to and from the vision assistant module, real time scanning module, and navigation module.
Description
SYSTEM FOR ASSISTING A VISUALLY IMPAIRED USER
Technical Field
The present invention relates to assistive devices and more particularly to a system for assisting a visually impaired user.
Background of the Invention
Vision impairment is a condition that affects roughly two billion people around the world, around 39 million of whom are completely blind. Eighty-two percent of people with vision impairment are aged fifty years or older, making them significantly more likely to be involved in accidents.
Solutions that allow visually impaired users to "feel" and "see" their environment range from service dogs to assistive technology devices. Common service dogs are generally expensive, with breeds such as the Golden Retriever costing from USD 500 to as much as USD 2,000 per dog. Smart canes such as the popular WeWALK Smart Cane cost at least USD 500 per cane. Assistive technology devices such as screen readers and braille both suffer from the problem of being unable to interface with the real world. Moreover, braille significantly suffers from having a steep learning curve, and is rather difficult to adopt in public and private institutions.
Different systems for assisting a visually impaired user are currently disclosed in the art. For instance, US9508269B2 relates to a system for providing assistance from an assistant to a user. The system comprises a user device that includes a visual data collector and an audible output device. The device is capable of detecting all objects within the user's environment such as objects, obstacles, and persons. The device also has a GPS to determine the user's location. In addition, the device communicates to the user if there is an obstacle, communicates search results, provides instructions, and reads a menu to the user.
US20130253818A1 discloses an indoor guidance system for a blind or visually impaired person. The system uses a mobile device to provide maps and visual and voice guidance to assist individuals in navigating through indoor locations. Maps are transmitted to the mobile device and the system uses a combination of GPS and other technologies (Bluetooth, infrared, Wi-Fi, RFID, etc.) to provide location information to the user.
However, none of these disclose a system using machine learning algorithms, such as deep learning, for assisting a visually impaired user as shown by the present invention.
Summary of the Invention
The object of the present invention is to provide a faster, more versatile, and more reliable system for assisting a visually impaired user.
Accordingly, the present invention provides a system for assisting a visually impaired user, comprising: a. a vision assistant module for answering queries given by the user, giving instructions, and providing alert messages; b. a real time scanning module for obstacle detection, person tracking, object detection and tracking, and optical character recognition; c. a navigation module for outdoor navigation and indoor map tracking; and d. a processing module using a machine learning algorithm for processing data to and from the vision assistant module, real time scanning module, and navigation module, wherein the machine learning algorithm is selected from the group consisting of regression algorithm, decision tree, association rule learning, artificial neural network, support vector machine, inductive logic programming, Bayesian network, instance-based learning, manifold learning, sub-space learning, deep learning, dictionary learning, or combinations thereof.
Brief Description of the Figures
FIG. 1 illustrates a system for assisting a visually impaired user according to a preferred embodiment of the present invention.
FIG. 2 shows the preferred setup for the system for assisting a visually impaired user.
Detailed Description of the Invention
FIG. 1 illustrates a system for assisting a visually impaired user according to a preferred embodiment of the present invention. The system comprises an assistive device 100 further comprising a vision assistant module 102 for answering queries given by the user, giving instructions, and providing alert messages. The vision assistant module 102 is further connected to an audio device 112 for capturing audio information from the user and for delivering audio information to the user. The audio device 112 is preferably both an input and output device, such as a microphone, speaker, headphone, earphone, or bone conduction headphone. The input aspect of the audio device 112 captures voice commands or questions from the user and captures sound from the environment. The output aspect of the audio device 112 provides the user with audio information such as voice from a call, an alarm, or alert messages.
Further, the assistive device 100 comprises a real time scanning module 104 for obstacle detection, person tracking, object detection and tracking, and optical character recognition. In addition, the real time scanning module 104 is connected to at least one camera 114 for capturing visual information for the user. The camera 114 may be placed in front of the user to imitate the user's first-person point of view. There may also be additional cameras 114 pointed to the sides and rear of the user. The camera 114 may be any suitable video input device, such as a video camera, infrared camera, night vision camera, or thermal camera.
The assistive device 100 also comprises a navigation module 106 for outdoor navigation and indoor map tracking.
Lastly, the assistive device 100 comprises a processing module 108 for processing data to and from the vision assistant module 102, real time scanning module 104, and navigation module 106. The processing module 108 uses a machine learning algorithm, such as deep learning, to process the data sent to and received from the vision assistant module 102, real time scanning module 104, and navigation module 106. The processing module 108 may use other machine learning algorithms such as regression algorithms (linear, non-linear, or logistic), decision trees or graphs, association rule learning, artificial neural networks, support vector machines, inductive logic programming, Bayesian networks, instance-based learning, manifold learning, sub-space learning, deep learning, dictionary learning, or combinations thereof.
The processing module 108 is further connected to a transceiver 110 to connect to different systems, devices, and networks such as the internet, a mobile device, and a GPS satellite. The transceiver 110 may use a wired or wireless connection such as USB, Bluetooth, Wi-Fi, 3G, 4G, or ZigBee.
Fig. 2 shows the preferred setup for the system for assisting a visually impaired user. The assistive device 200 is preferably enclosed in an enclosure and connected to a contraption 202 so that the user may wear the assistive device 200 on the user's head. The contraption 202 is preferably a pair of eyeglasses or sunglasses. Preferably, the camera of the assistive device 200 is pointed towards the direction the user is facing.
In another embodiment of the invention, the assistive device may be placed anywhere within the vicinity of the user, for example, in the user's pocket. In this case, the audio device and the camera are attached to the sunglasses contraption and remotely connected to the assistive device.
In yet another embodiment of the invention, the contraption may be worn on different parts of the body. The contraption may be a headband, necklace, shoulder strap, chest strap, wrist band, belt, or any other contraption worn on the body.
In an embodiment of the invention, if the user wants to ask the assistive device for a piece of information, the user may ask the question via the audio device connected to the vision assistant module. For example, the user asks, "What is the capital of Kenya?" A microphone captures the verbal question, and the vision assistant module sends it to the processing module. The processing module checks the question and searches an internet search engine for an answer. The answer is then sent to the vision assistant module and relayed to the user via an earphone.
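For illustration only, the question-answering flow described above (capture a spoken question, query a search backend, return the answer for audio delivery) could be sketched as follows. This sketch is not part of the disclosure: transcribe() and search_answer() are hypothetical stubs standing in for a real speech-to-text service and an internet search engine.

```python
# Illustrative sketch of the vision assistant's question-answering flow.
# The stubs below stand in for real speech recognition and web search.

def transcribe(audio_bytes: bytes) -> str:
    """Stub speech-to-text: returns a fixed question for illustration."""
    return "What is the capital of Kenya?"

def search_answer(question: str) -> str:
    """Stub search backend backed by a tiny hand-made lookup table."""
    knowledge = {"what is the capital of kenya?": "Nairobi"}
    return knowledge.get(question.strip().lower(), "I could not find an answer.")

def answer_query(audio_bytes: bytes) -> str:
    """Full flow: transcribe the question, search, and return the answer
    text that would then be delivered to the user via text-to-speech."""
    return search_answer(transcribe(audio_bytes))

print(answer_query(b""))  # -> Nairobi
```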
In another embodiment of the invention, the real time scanning module is capable of person tracking. The camera captures video in the vicinity of the user, and the real time scanning module may use face recognition to identify persons. The vision assistant module may also inform the user if the identified person is one of the user's known contacts. The user may then ask the real time scanning module to track the identified person.
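One common way to realize such identification, sketched below under stated assumptions, is to compare a face embedding extracted from the camera frame against stored embeddings of the user's known contacts. The three-dimensional vectors and the 0.9 threshold here are made-up illustrations; real face embeddings are typically 128- or 512-dimensional, and the patent does not specify this particular method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(embedding, contacts, threshold=0.9):
    """Return the best-matching contact name, or None below the threshold."""
    best_name, best_sim = None, threshold
    for name, reference in contacts.items():
        sim = cosine(embedding, reference)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

contacts = {"Alice": [0.9, 0.1, 0.4], "Bob": [0.1, 0.95, 0.2]}
print(identify([0.88, 0.12, 0.41], contacts))  # -> Alice
print(identify([0.5, 0.5, 0.5], contacts))     # -> None (no confident match)
```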
In a further embodiment of the invention, the real time scanning module is capable of detecting and tracking objects. For example, the user may ask the assistive device to find a vacuum cleaner. The camera connected to the real time scanning module will capture video and will detect the vacuum cleaner. The real time scanning module will then track the vacuum cleaner until the user gets hold of the vacuum cleaner.
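A minimal sketch of such frame-to-frame tracking, assuming the detector outputs bounding boxes, is to follow the detection with the highest intersection-over-union (IoU) overlap with the box from the previous frame. The 0.3 threshold and the box coordinates are illustrative assumptions, not values from the disclosure.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def track(prev_box, detections, min_iou=0.3):
    """Pick the detection in the new frame that best overlaps the tracked box."""
    best = max(detections, key=lambda d: iou(prev_box, d), default=None)
    return best if best is not None and iou(prev_box, best) >= min_iou else None

prev = (10, 10, 50, 50)  # the vacuum cleaner's box in the previous frame
frame = [(12, 11, 52, 49), (100, 100, 140, 140)]
print(track(prev, frame))  # -> (12, 11, 52, 49)
```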
The real time scanning module of the assistive device is also capable of obstacle detection. The camera captures video and the real time scanning module detects objects from the video. Then, the real time scanning module sends the detected objects to the processing module. The processing module uses deep learning to check if the detected object may be an obstacle to the user. If the detected object is considered an obstacle, the vision assistant module will alert the user that there is an obstacle close to the user. The alert can contain information such as the direction of the obstacle, the size of the obstacle, and the distance of the obstacle.
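The obstacle decision and the alert contents (direction and distance) described above could be sketched as follows. The 3-meter and 30-degree thresholds are illustrative assumptions; a deployed system would derive the obstacle judgment from the deep learning model rather than fixed rules.

```python
def obstacle_alert(obj, max_dist_m=3.0, max_bearing_deg=30.0):
    """Build an alert string if a detected object lies in the user's path.

    obj: dict with 'label', 'distance_m', and 'bearing_deg'
    (0 degrees = straight ahead, positive = to the user's right).
    """
    if obj["distance_m"] > max_dist_m or abs(obj["bearing_deg"]) > max_bearing_deg:
        return None  # not in the walking path; no alert needed
    if abs(obj["bearing_deg"]) <= 10:
        side = "ahead"
    elif obj["bearing_deg"] > 0:
        side = "to your right"
    else:
        side = "to your left"
    return f"Obstacle {side}: {obj['label']}, about {obj['distance_m']:.0f} meters away."

print(obstacle_alert({"label": "chair", "distance_m": 2.0, "bearing_deg": 15}))
# -> Obstacle to your right: chair, about 2 meters away.
```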
Also, the real time scanning module is capable of optical character recognition. The camera captures a picture or a video of the object that contains text, e.g., menu or signage. Then, the real time scanning module will detect and decipher the text. The vision assistant module may also use text-to-speech technology to deliver the text to the user in audio form.
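The OCR-to-speech flow above can be sketched as a two-stage pipeline. Both stages below are hypothetical stubs: a real device might use an OCR engine such as Tesseract and a text-to-speech engine, neither of which is specified by the disclosure.

```python
def ocr(image_bytes: bytes) -> str:
    """Stub OCR stage; a real device might call an engine such as Tesseract."""
    return "Soup of the day: tomato  PHP 120"

def speak(text: str) -> str:
    """Stub TTS stage: returns the utterance a speech engine would render."""
    return f"[speaking] {text}"

def read_aloud(image_bytes: bytes) -> str:
    """Decipher text from the captured image, then voice it to the user."""
    text = " ".join(ocr(image_bytes).split())  # normalize whitespace for speech
    return speak(text)

print(read_aloud(b""))  # -> [speaking] Soup of the day: tomato PHP 120
```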
In yet another embodiment of the invention, the navigation module has an outdoor navigation function. Preferably, the navigation module can identify the location of the user and the destination via GPS. For example, the user requests the assistive device to provide instructions on how to go to the mall. The navigation module will identify the user's current location as well as the location of the mall. Then, the navigation module will determine the fastest and easiest path to the mall. The vision assistant module will then provide instructions or directions to the user. For example, the vision assistant module will say to the user, "In 500 meters, turn right."
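The distance component of such a voice prompt can be computed from two GPS fixes with the haversine formula, as in the sketch below. The Manila coordinates are arbitrary example fixes, and rounding to the nearest 100 meters is an illustrative choice, not something the disclosure prescribes.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    earth_radius_m = 6371000.0
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def spoken_instruction(distance_m, turn):
    """Round to the nearest 100 m, as a turn-by-turn voice prompt might."""
    return f"In {int(round(distance_m, -2))} meters, turn {turn}."

# Two arbitrary fixes a few blocks apart in Manila (illustrative only).
d = haversine_m(14.5995, 120.9842, 14.6042, 120.9822)
print(spoken_instruction(d, "right"))
```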
The navigation module is also capable of indoor map tracking. The navigation module may use different indoor positioning technologies including, but not limited to, magnetic positioning, inertial measurements, visual marker positioning, Wi-Fi positioning, and Bluetooth positioning. For example, once inside the mall, the navigation module will download the map of the mall. The navigation module will also connect to the mall's Wi-Fi to determine the location of the user.
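One simple form of the Wi-Fi positioning mentioned above, sketched here under stated assumptions, is a weighted centroid of access points whose map positions are known (for example, from the downloaded mall map): stronger received signal strength pulls the position estimate toward that access point. The dBm-to-weight conversion and the example readings are illustrative, not part of the disclosure.

```python
def wifi_position(readings):
    """Weighted-centroid indoor fix from (x_m, y_m, rssi_dbm) AP readings.

    Stronger (less negative) RSSI pulls the estimate toward that access
    point; 10 ** (rssi / 20) converts dBm to a linear amplitude weight.
    """
    weighted = [(x, y, 10 ** (rssi / 20.0)) for x, y, rssi in readings]
    total = sum(w for _, _, w in weighted)
    return (sum(x * w for x, _, w in weighted) / total,
            sum(y * w for _, y, w in weighted) / total)

# Three access points at known map positions; the user is nearest the first.
readings = [(0.0, 0.0, -40), (10.0, 0.0, -60), (0.0, 10.0, -60)]
x, y = wifi_position(readings)
print(round(x, 2), round(y, 2))  # -> 0.83 0.83
```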
In another embodiment of the invention, the system for assisting a visually impaired user may be implemented on any device that has a processor, memory storage, and power supply. Such device may include, but is not limited to, microcomputer, cell phone, smartphone, personal digital assistant, smart watch, tablet computer, desktop computer, and laptop computer.
Claims
1. A system for assisting a visually impaired user, comprising: a. a vision assistant module for answering queries given by the user, giving instructions, and providing alert messages; b. a real time scanning module for obstacle detection, person tracking, object detection and tracking, and optical character recognition; c. a navigation module for outdoor navigation and indoor map tracking; and d. a processing module using a machine learning algorithm for processing data to and from the vision assistant module, real time scanning module, and navigation module, wherein the machine learning algorithm is selected from the group consisting of regression algorithm, decision tree, association rule learning, artificial neural network, support vector machine, inductive logic programming, Bayesian network, instance-based learning, manifold learning, sub-space learning, deep learning, dictionary learning, or combinations thereof.
2. The system of claim 1, wherein the vision assistant module is connected to an audio device.
3. The system of claim 1, wherein the real time scanning module is connected to at least one camera.
4. The system of claim 3, wherein the at least one camera is any suitable video input device, including but not limited to, video camera, infrared camera, night vision camera, and thermal camera.
5. The system of claim 1, wherein the outdoor navigation of the navigation module is done via GPS.
6. The system of claim 1, wherein the processing module is connected to a transceiver.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021600195U JP3239665U (en) | 2020-05-11 | 2021-05-10 | Systems to assist visually impaired users |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PH12020050128 | 2020-05-11 | ||
PH12020050128 | 2020-05-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021230759A1 true WO2021230759A1 (en) | 2021-11-18 |
Family
ID=78524792
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/PH2021/050008 WO2021230759A1 (en) | 2020-05-11 | 2021-05-10 | System for assisting a visually impaired user |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP3239665U (en) |
WO (1) | WO2021230759A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090122161A1 (en) * | 2007-11-08 | 2009-05-14 | Technical Vision Inc. | Image to sound conversion device |
US20130253818A1 (en) * | 2012-03-20 | 2013-09-26 | Xerox Corporation | System for indoor guidance with mobility assistance |
US20140267650A1 (en) * | 2013-03-15 | 2014-09-18 | Orcam Technologies Ltd. | Apparatus and method for hierarchical object identification using a camera on glasses |
US20160078278A1 (en) * | 2014-09-17 | 2016-03-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable eyeglasses for providing social and environmental awareness |
US9508269B2 (en) * | 2010-08-27 | 2016-11-29 | Echo-Sense Inc. | Remote guidance system |
US20170162076A1 (en) * | 2014-09-03 | 2017-06-08 | Aira Tech Corporation | Methods, apparatus and systems for providing remote assistance for visually-impaired users |
2021
- 2021-05-10 JP JP2021600195U patent/JP3239665U/en active Active
- 2021-05-10 WO PCT/PH2021/050008 patent/WO2021230759A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP3239665U (en) | 2022-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11785395B2 (en) | Hearing aid with voice recognition | |
US10360907B2 (en) | Smart necklace with stereo vision and onboard processing | |
KR100405636B1 (en) | Wearable walking guidance device and method for the blind | |
US9922236B2 (en) | Wearable eyeglasses for providing social and environmental awareness | |
US20220159117A1 (en) | Server, client terminal, control method, and storage medium | |
Paul et al. | Smart Eye for Visually Impaired-An aid to help the blind people | |
WO2016019265A1 (en) | Wearable earpiece for providing social and environmental awareness | |
US11526590B2 (en) | Automatic low radiation mode for a wearable device | |
US20210350823A1 (en) | Systems and methods for processing audio and video using a voice print | |
US11670157B2 (en) | Augmented reality system | |
WO2017216629A1 (en) | Systems and methods for directing audio output of a wearable apparatus | |
KR101584685B1 (en) | A memory aid method using audio-visual data | |
JP2015219768A (en) | Information processing system, storage medium, and information processing method | |
US20230005471A1 (en) | Responding to a user query based on captured images and audio | |
WO2021230759A1 (en) | System for assisting a visually impaired user | |
CN107848125A (en) | Robot and robot system | |
EP3915124A2 (en) | Wearable apparatus and methods | |
KR20130131511A (en) | Guide apparatus for blind person | |
Bhati et al. | CAPture: a vision assistive cap for people with visual impairment | |
Kumar et al. | Smart glasses embedded with facial recognition technique | |
US10387114B1 (en) | System to assist visually impaired user | |
Ghazal et al. | Localized assistive scene understanding using deep learning and the IoT | |
Saha et al. | Vision maker: An audio visual and navigation aid for visually impaired person | |
KR102516155B1 (en) | Wearable device based on voice recognition that works with home automation | |
KR20160015704A (en) | System and method for recognition acquaintance by wearable glass device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021600195 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21804206 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21804206 Country of ref document: EP Kind code of ref document: A1 |