WO2024018310A1 - WISE-i: AN ELECTRONIC TRAVEL AND COMMUNICATION AID DEVICE FOR THE VISUALLY IMPAIRED - Google Patents

Info

Publication number
WO2024018310A1
Authority
WO
WIPO (PCT)
Prior art keywords
response system
electronic
data
multitude
communication
Prior art date
Application number
PCT/IB2023/056897
Other languages
French (fr)
Inventor
MohammadFawzi BAJNAID
Original Assignee
Bajnaid Mohammadfawzi
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bajnaid Mohammadfawzi filed Critical Bajnaid Mohammadfawzi
Priority to PCT/IB2023/056897 priority Critical patent/WO2024018310A1/en
Publication of WO2024018310A1 publication Critical patent/WO2024018310A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/90 Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72475 User interfaces specially adapted for cordless or mobile telephones specially adapted for disabled users
    • H04M 1/72481 User interfaces specially adapted for cordless or mobile telephones specially adapted for disabled users for visually impaired users
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/025 Services making use of location information using location based information parameters
    • H04W 4/027 Services making use of location information using location based information parameters using movement velocity, acceleration information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72418 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Emergency Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Public Health (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

A portable Electronic Travel and Communication Aid device for the visually impaired, or for use under pitch-dark conditions, is disclosed. The device includes a 3D ultrasound sonar sensor, a fish-eye camera with night-vision capabilities, and a multitude of sensors. The device is configured such that the sensors assimilate data simultaneously to generate audio 3D images. The images are processed and abstracted by an intelligent platform utilising AI (Artificial Intelligence). The platform also controls a system of outputs comprising audio signals, a matrix of kinaesthetic 3D haptic communication, or any means of dedicated communication such as Braille communication, Braille buzzer pads, electrically active polymer surfaces, and a human-machine interface. The device is configured as a phone with synchronized data and may act as a calling and messaging unit in emergencies and in normal conditions.

Description

Title of Invention: WISE-i: A Portable Electronic Travel and Communication Aid Device for the Visually Impaired
[0001] The disclosed invention presents a self-learning electronic travel and communication aid (ETCA). The ETCA includes a modulated radar-based distance sensor augmented with a wide-angle night-vision camera, a processing unit, a vibration-based haptic interface, and an audio output. The device is configured as a phone with synchronized data and may act as a calling and messaging unit in emergencies and in normal conditions. The device is built robustly, complying with IP68W standards, to withstand dust, weather, and falls into wet conditions. The radar unit supplies depth and distance measurements for the images taken by the camera, thereby building 3D information about upcoming obstacles in advance. The processing unit augments these data and provides relevance-based abstraction of the 3D augmented data. The unit then delivers the abstraction to the haptic feedback and audio output devices. The unit embraces an Artificial Intelligence edge by means of self-learning algorithms that update the database with its user's walking habits. Additional AI tools based on neural networks and fuzzy logic are implemented for object recognition and detection.
The user can receive three levels of data according to their requirements and situation:
1. GPS-based guidance combined with GSM-based triangulation using signal fingerprinting for wayfinding to a prechosen destination, both indoors and outdoors.
2. Abstract field scan information or inline abstract manoeuvre guidance. Field scan information provides 7 data classes, classified by proximity over a 175° scan, with each class representing a different distance from the device: Class 1: 1 m range; Class 2: 2 m; Class 3: 3 m; Class 4: 4 m; Class 5: 5 m; Class 6: 6 m; Class 7: 7 m. The field scan information is translated into 7 rays of 7-level haptic vibration elements providing kinaesthetic information to the user (see the sketch after this list).
3. Abstract manoeuvring in confined spaces, by contrast, provides tactical guidance through all classes using an obstacle avoidance algorithm based upon minimal path optimisation schemes.
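As a rough illustration of item 2 above, the sketch below (Python, with hypothetical helper names not taken from the filing) bins a 175° sweep into the 7 proximity classes and maps each of the 7 rays to a vibration level, stronger for closer obstacles.

```python
import numpy as np

N_RAYS, N_LEVELS, MAX_RANGE_M = 7, 7, 7.0  # 7 rays x 7 vibration levels, 7 m reach

def scan_to_haptic_rays(angles_deg, distances_m):
    """Bin a 175-degree sonar sweep into 7 rays and convert each ray's nearest
    obstacle distance into a 1..7 vibration level (7 = closest, 0 = clear)."""
    angles = np.asarray(angles_deg, dtype=float)
    dists = np.asarray(distances_m, dtype=float)
    # 7 equal angular sectors spanning -87.5 .. +87.5 degrees
    edges = np.linspace(-87.5, 87.5, N_RAYS + 1)
    levels = np.zeros(N_RAYS, dtype=int)
    for i in range(N_RAYS):
        in_sector = (angles >= edges[i]) & (angles < edges[i + 1])
        if not in_sector.any():
            continue
        nearest = dists[in_sector].min()
        if nearest >= MAX_RANGE_M:
            continue  # nothing within the 7 m classes
        # Class 1 covers 0-1 m, class 2 covers 1-2 m, ... class 7 covers 6-7 m
        proximity_class = int(np.ceil(max(nearest, 1e-6)))
        # Closer obstacle -> stronger vibration level
        levels[i] = N_LEVELS + 1 - proximity_class
    return levels

# Example: an obstacle 1.4 m away, slightly left of centre -> level 6 on the centre ray
print(scan_to_haptic_rays([-10.0, 0.0, 30.0], [1.4, 6.5, 8.0]))
```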
Additionally, in hazardous situations an intervening procedure takes place, as follows:
• Case: Sudden obstacle. An audio alert message warns of the obstacle and gives short manoeuvring messages, in addition to vibration signals for right or left commands.
• Case: Device has fallen. When the user has fallen and remains in a tilted position for more than 1 minute, preassigned users are contacted, emergency services are contacted and requested, and an alarm siren is activated so that nearby people can help (a sketch of this escalation logic follows below).
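The fall-escalation case above can be sketched as a simple state machine. The tilt threshold and callback names here are illustrative assumptions; only the one-minute timeout comes from the description.

```python
import time

TILT_THRESHOLD_DEG = 60.0   # illustrative: treat the device as "tilted" past this angle
TIMEOUT_S = 60.0            # one minute in a tilted position triggers the escalation

class FallMonitor:
    """Escalates to contacts, emergency services and a siren after a sustained tilt."""
    def __init__(self, notify, call_emergency, sound_siren):
        self._notify, self._call, self._siren = notify, call_emergency, sound_siren
        self._tilt_since = None
        self._escalated = False

    def update(self, tilt_deg, now=None):
        now = time.monotonic() if now is None else now
        if tilt_deg < TILT_THRESHOLD_DEG:
            self._tilt_since, self._escalated = None, False     # user recovered
            return
        if self._tilt_since is None:
            self._tilt_since = now                              # tilt just started
        elif not self._escalated and now - self._tilt_since >= TIMEOUT_S:
            self._notify("Possible fall detected")              # preassigned users
            self._call()                                        # emergency services
            self._siren()                                       # alert people nearby
            self._escalated = True
```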
Normal imaging suffers from dimensional uncertainty; methods exist that use focal-lens calculations, but they are unable to recover or calculate the real-life scale of a mapped scene when complex camera images are used as inputs. Another common technique is to estimate depth from two or more offset photos of the same scene (stereo or multi-view matching). To resolve the scale, it is necessary either to specify a known length between two mapped points in the resulting image or to modify the algorithm to make use of IMU, GPS or ultrasound distance-sensor data in order to calculate real-world scale. Another collection of methods comprises time-of-flight and phase-difference methods, most often using light emitters and receivers. These methods offer many advantages over stereo and multi-view matching, but necessitate specialised, expensive and power-hungry equipment. Ultrasound, on the contrary, provides a cheaper and less power-hungry alternative, and ultrasound methods enable fast frame acquisition and accurate distance calculations. Typical ultrasound approaches make use of arrays of transducers while performing beamforming algorithms and sound localisation techniques. In this invention a single air-coupled ultrasound transducer acts as sender and receiver, scanning the field by means of a rotating mechanism performing 20 scans per second with a signal settling time of 0.01 seconds to damp out interference and to avoid performing processing-time-consuming beamforming algorithms.
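A minimal sketch of the single-transducer pulse-echo sweep described above. The function names and the fixed speed of sound are assumptions (temperature compensation is discussed further below); the 0.01 s settling time and per-beam scanning come from the description.

```python
SPEED_OF_SOUND_M_S = 343.0   # ~20 degrees C; see the temperature compensation sketch below
SETTLING_TIME_S = 0.01       # dead time after each ping to let echoes/interference decay

def echo_to_distance(round_trip_s):
    """Convert a pulse-echo round-trip time into a one-way distance in metres."""
    return 0.5 * round_trip_s * SPEED_OF_SOUND_M_S

def sweep(ping_fn, beam_angles_deg, rotate_fn, wait_fn):
    """One 175-degree sweep: rotate, ping, wait out the settling time, record distance.
    ping_fn returns the round-trip time (seconds) of the first echo, or None."""
    distances = {}
    for angle in beam_angles_deg:
        rotate_fn(angle)                       # servo yaw (pitch handled the same way)
        rtt = ping_fn()
        distances[angle] = None if rtt is None else echo_to_distance(rtt)
        wait_fn(SETTLING_TIME_S)               # damp out multi-path interference
    return distances
```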
The rotating-scanning ultrasound distance sensor (sonar) is built on a rotating mechanism of two servo motors providing rotation over two degrees of freedom (yaw and pitch). The radar images provide accurate depth measurements up to a 7 m radius over a 3D spherical sector of 175°.
A major problem associated with quantitative ultrasound measurement, particularly in remote applications, is the variability of environmental temperature, affecting the characteristics of the signals received.
In this invention, a temperature compensation function and frequency-scatter avoidance applicable to most environmental conditions, such as extreme temperatures (-20 °C to +60 °C), light rain, showers, snow, and dust, were developed, verified and implemented.
A weather-correction aspect is therefore included in this invention, making it usable during all seasons for robust depth imaging.
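The filing does not give the form of the temperature compensation; a common first-order correction (an assumption here, not quoted from the patent) scales the speed of sound in air linearly with temperature, roughly c ≈ 331.3 + 0.606·T m/s.

```python
def speed_of_sound(temp_c):
    """First-order speed of sound in dry air (m/s), valid around -20..+60 degrees C."""
    return 331.3 + 0.606 * temp_c

def compensated_distance(round_trip_s, temp_c):
    """Pulse-echo distance corrected with the on-board temperature sensor reading."""
    return 0.5 * round_trip_s * speed_of_sound(temp_c)

# At a 5 ms round trip, the uncorrected spread between -20 C and +60 C is roughly 12 cm:
print(compensated_distance(0.005, -20.0), compensated_distance(0.005, 60.0))
```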
The ultrasound scans produce quite sparse images with detailed depth information; to enhance this sparse information it is necessary to fuse the ultrasound depth data with another sensor.
The fusion of data from different sensory sources is the most promising method today for increasing the robustness and reliability of environmental perception. In this invention, a night-vision imaging sensor and a rotating-scanning ultrasound distance sensor are fused together to produce radar images.
The fusion sensor in this invention is a night-vision-enabled camera. Night-vision images are taken in low-light conditions using the infrared camera, and each image is enhanced on the processor to obtain higher contrast in pitch-dark conditions using Contrast Limited Adaptive Histogram Equalisation (CLAHE), a variant of adaptive histogram equalisation in which the contrast amplification is restricted so as to reduce noise amplification in IR images taken under poorly illuminated conditions. The enhanced image is then sent to the classification process. The classification is done using an efficient convolutional neural network followed by a fast fully connected layer of neurons.
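A minimal OpenCV sketch of the CLAHE step described above; the clip limit and tile-grid size are illustrative defaults, not values from the filing.

```python
import cv2

def enhance_night_image(ir_frame_gray, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply Contrast Limited Adaptive Histogram Equalisation to a grayscale IR frame.
    Clipping the histogram limits contrast amplification, which keeps sensor noise
    from being blown up in poorly illuminated regions."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(ir_frame_gray)

# frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
# enhanced = enhance_night_image(frame)
```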
In this invention, a fast Region Proposal Network (RPN) that shares full-image convolutional features with the detection network is implemented, simultaneously predicting object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals without region-proposal computation becoming a bottleneck. The conventional algorithm is further optimised to share convolutional features, so the Fast R-CNN stage achieves near-real-time rates using very deep networks. The detection system runs at 35 fps on the hardware provided, with an accuracy of 70.2% mAP (mean Average Precision for object detection).
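An RPN sharing full-image convolutional features with the detector corresponds to the Faster R-CNN family; below is a hedged sketch using torchvision's off-the-shelf model (recent torchvision versions). This is not the filed implementation and will not reproduce the quoted 35 fps / 70.2% mAP figures.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained Faster R-CNN: backbone + RPN sharing convolutional features with the detection head
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_rgb, score_threshold=0.5):
    """Return (boxes, labels, scores) above the confidence threshold for one RGB image."""
    with torch.no_grad():
        out = model([to_tensor(image_rgb)])[0]
    keep = out["scores"] >= score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```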
The obstacle avoidance algorithm is based upon a 19-DoF (degrees of freedom) lattice bounceback algorithm. The algorithm, drawn from particle dynamics, uses minimum-energy bounce-back: in other words, it determines which direction requires the least effort to bounce towards in order to avoid an obstacle and manoeuvre around it, e.g. East (right), West (left) or East-North (diagonally to the right-front). The algorithm has an extended option of finding the safest path in the sensed 7 m by 7 m space, providing audio and kinaesthetic guidance.
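The 19-direction lattice is only described abstractly in the filing; the sketch below reduces the idea to a 2-D slice with eight planar directions and picks the free direction that deviates least from the current heading, i.e. the minimum-effort bounce. All names are illustrative.

```python
import math

# Eight planar lattice directions (a 2-D slice of the 19-direction lattice in the filing)
DIRECTIONS = {
    "N": (0, 1), "NE": (1, 1), "E": (1, 0), "SE": (1, -1),
    "S": (0, -1), "SW": (-1, -1), "W": (-1, 0), "NW": (-1, 1),
}

def bounce_direction(heading, blocked):
    """Pick the unblocked lattice direction whose angle deviates least from 'heading'.
    Smaller deviation = smaller 'bounce energy', i.e. the least-effort manoeuvre."""
    hx, hy = DIRECTIONS[heading]
    h_ang = math.atan2(hy, hx)
    best, best_cost = None, math.inf
    for name, (dx, dy) in DIRECTIONS.items():
        if name in blocked:
            continue
        dev = abs(math.atan2(dy, dx) - h_ang)
        dev = min(dev, 2 * math.pi - dev)        # wrap to [0, pi]
        if dev < best_cost:
            best, best_cost = name, dev
    return best

# Heading north with north and north-east blocked -> step to the north-west
print(bounce_direction("N", blocked={"N", "NE"}))
```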
Furthermore, minimal path optimisation schemes for manoeuvring locally in a room or confined space are implemented in the processing unit. The scheme is based upon compact self-learning Graph Neural Networks. The self-learning part measures and collects the time the user takes to make certain manoeuvres and steps, feeds it into a database, and assigns different neuron weights according to the time taken by the user, thereby optimising the path choice according to each user's timing for different manoeuvres. The whole AI is programmed with compact vectorisation techniques for fast processing, providing inline guidance without delay.
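A hedged sketch of the self-learning weighting idea: measured manoeuvre times become edge costs in the navigation graph, so route choice adapts to the individual user. Plain Dijkstra is used here for clarity; the compact Graph Neural Network mentioned in the description is not reproduced, and all names are illustrative.

```python
import heapq
from collections import defaultdict

class UserTimedGraph:
    """Navigation graph whose edge costs are running averages of the user's own
    measured manoeuvre times, so route choice adapts to the individual."""
    def __init__(self):
        self.cost = defaultdict(dict)                     # cost[a][b] = mean seconds a -> b
        self.count = defaultdict(lambda: defaultdict(int))

    def record(self, a, b, seconds):
        """Feed one measured manoeuvre time into the running average for edge a -> b."""
        n = self.count[a][b] = self.count[a][b] + 1
        prev = self.cost[a].get(b, 0.0)
        self.cost[a][b] = prev + (seconds - prev) / n     # incremental mean

    def shortest_path(self, start, goal):
        """Dijkstra over the user-timed edge costs."""
        dist, prev, queue = {start: 0.0}, {}, [(0.0, start)]
        while queue:
            d, u = heapq.heappop(queue)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in self.cost[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(queue, (nd, v))
        path = [goal]
        while path[-1] in prev:
            path.append(prev[path[-1]])
        return list(reversed(path))
```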
As an additional output platform to the audio-transferred commands, a haptic platform is included in the invention. The platform acts as an output haptic display consisting of a 7x7 matrix of kinaesthetic 3D communication. This creates an experience of touch by applying vibrations and motions to the nodes on the interface, which are felt by the user's hands.
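A sketch of driving such a 7x7 matrix: each cell's vibration intensity is derived from the abstracted depth map, with the actuator hardware hidden behind an illustrative set_cell callback (not part of the filing).

```python
import numpy as np

def depth_map_to_intensity(depth_map_m, max_range_m=7.0):
    """Convert a 7x7 abstract depth map (metres, NaN = no return) into 0..1 drive
    intensities: nearer obstacles vibrate harder, empty cells stay off."""
    depth = np.asarray(depth_map_m, dtype=float)
    assert depth.shape == (7, 7), "haptic display is a 7x7 matrix"
    intensity = np.clip(1.0 - depth / max_range_m, 0.0, 1.0)
    return np.nan_to_num(intensity, nan=0.0)

def refresh_display(depth_map_m, set_cell):
    """Push intensities to the actuator matrix; set_cell(row, col, value) is the
    hardware-specific driver callback (illustrative, not part of the filing)."""
    for (r, c), val in np.ndenumerate(depth_map_to_intensity(depth_map_m)):
        set_cell(r, c, float(val))
```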
The device (ETA) comes with an add-on application providing help for the visually impaired through support from a plurality of followers or friends who have the app installed on their phones. The app has to be paired in advance with the ETA through authentication approved by the ETA owner. This app provides a plurality of features, such as:
• Ability to geo-locate and follow the visually impaired
• Ability to get live streams from the ETA’s camera and sonar sensor, and updates from the multitude of sensors (GPS, altimeter, pressure, temperature, humidity, acceleration, velocity, tilt and fall information)
• Ability to convey the same data analysis results done on the device to the followers or friends in real-time
• Ability to directly convey audio communications between both sides, either in group form or individually
• In cases of emergency, an alerting system is activated, notifying the previously decided list of emergency contacts to take action to help the visually impaired user or to alert them.
• Both sides have the option to choose in advance the size of data parsing levels to manage their data download/upload budget.

Claims

[Claim 1] A multi-sensory response system or an Electronic Travel and Communication Aid comprising: a wearable or placeable device having a sensor array configured to collect and scan three-dimensional sensory data associated with an environment, the field ahead of a person, or a travelling unit; the device complying with IP68W standards; the device interfacing with an audio output, a high-definition HMI for displaying emergency information to assist medical and emergency contacts, and a haptic device providing a 2D environmental representation; and an electronic processor constructed to: convert the three-dimensional sensory data into a two-dimensional abstract representation; locate obstacles within the scanned field ahead based upon the sensory data and neural-network-based object recognition/detection algorithms; map the obstacles onto the two-dimensional representation; and provide audio and tactile input to the person regarding the scanned field of obstacles ahead; wherein the device is configured as a phone with synchronized data and may act as a calling and messaging unit in emergencies and in normal conditions.
[Claim 2] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1, wherein an ultrasound radar scans the forthcoming three-dimensional field by means of two servo motors (two angles of rotation), providing field information up to 7 m in a spherical sector with an extent of 175° in each of the two angles.
[Claim 3] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the sensor array includes infrared sensors.
[Claim 4] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the sensor array includes fisheye 175° imaging sensors.
[Claim 5] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1, wherein the sensor array includes at least one of a gyroscope, an inertia-based sensor, an accelerometer, and a temperature, pressure, humidity, or altitude sensor.
[Claim 6] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the sensor array includes at least one velocity sensor.
[Claim 7] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the sensor array includes GPS sensors.
[Claim 8] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the sensor array includes pedometer sensors.
[Claim 9] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1, wherein the distance data are used to determine at least one of a position relative to a landmark and a heading relative to the landmark.
[Claim 10] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the sensor array includes tilt sensors.
[Claim 11] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1, wherein an electronic processor is further configured to acquire and filter data received from the multitude of sensors.
[Claim 12] The system of claims 1 & 11 further including: a memory coupled to the processor, an input port coupled to the memory, and further wherein map data can be received at the input port and stored in the memory.
[Claim 13] The system of claim 1 further including a housing having disposed thereon the array of sensors, the processor, and at least one of the output devices.
[Claim 14] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein an electronic processor is further configured to determine a number of the multitude of haptic 7x7 matrix and Braille of kinaesthetic 3D communication outputs to control.
[Claim 15] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein an electronic processor is further configured to determine a number of the variety of audio outputs to control.
[Claim 16] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1, wherein an electronic processor is further configured to determine the number of the multitude of outputs based upon the size of the obstacle.
[Claim 17] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the one or more electronic processors are further configured to control the one or more of the multitude of outputs to represent movement of the obstacle.
[Claim 18] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1 , wherein the one or more electronic processors are further configured to include a personal data assistant.
[Claim 19] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 10, wherein an electronic processor is further configured to control emergency actions based on the status of the device or the user, with the aid of the settling time of the tilt position.
[Claim 20] A method comprising: receiving, from a sensor array, three-dimensional sensory data associated with an environment or the field ahead of a person or a travelling unit; converting, using an electronic processor, the three-dimensional sensory data into a two-dimensional abstract representation; locating obstacles within the environment or the field ahead based upon the sensory data; mapping the obstacles onto the two-dimensional representation; mapping the two-dimensional representation onto a two-dimensional haptic interface, a Braille interface, a high-definition HMI for displaying emergency information to assist medical and emergency contacts, and an audio device comprising a multitude of outputs; and activating one or more of the multitude of outputs based upon the mapping of the two-dimensional representation onto the multitude of outputs to provide haptic-matrix, Braille, audio and visual input to the person representing a location of the obstacle, wherein the number of the multitude of outputs activated is correlated with the size and danger of the obstacles. The processor is in communication with the array of sensors: the field scanning provides distance data based on a distance to a surface; the gyroscope provides rotational data based on rotational movement; an accelerometer; temperature, pressure, humidity and altitude sensors; a pedometer provides travel data based on linear movement; a GPS combined with GSM-based triangulation using signal fingerprinting provides location and guidance to a destination both indoors and outdoors; instructions determine navigation data based on the distance data, the rotational data, and the travel data; and an output device in communication with the processor is configured to render the navigation data in a human-perceivable manner by means of a multitude of outputs: audio output, haptic matrix, Braille output and human-machine interface.
[Claim 21 ] The method of claim 15, further determining a number of the multitude of outputs to control.
[Claim 22] The method of claim 15, wherein the numbers of the multitude of outputs are determined based upon size of the surrounding obstacles.
[Claim 23] The method of claim 15, further comprising controlling the one or more of the multitude of outputs to represent static or dynamic movement of obstacles.
[Claim 24] A multi-sensory response system or an Electronic Travel and Communication Aid of claim 1, wherein the device comes with an installable application providing help for the visually impaired through support from a plurality of followers or friends who have the app installed on their phones.
[Claim 25] The method of claim 24, further comprising a plurality of features which are not limited explicitly to those expressed below:
• Ability to geo-locate and follow the visually impaired.
• Ability to get live streams from the ETA’s camera, sonar sensor and updates from the multitude of sensors (GPS, Altimeter, Pressure, Temperature, Humidity, Acceleration, Velocity, tilt and falling information)
• Ability to convey the same data analysis results done on the device to the followers or friends in real-time.
• Ability to directly convey audio communications between both sides, either in group form or individually.
• In cases of emergency, an alerting system is activated, notifying the previously decided list of emergency contacts to take action to help the visually impaired user or to alert them.
• Both sides have the option to choose in advance the size of data parsing levels to manage their data download/upload budget.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2023/056897 WO2024018310A1 (en) 2023-07-03 2023-07-03 WISE-i: AN ELECTRONIC TRAVEL AND COMMUNICATION AID DEVICE FOR THE VISUALLY IMPAIRED

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2023/056897 WO2024018310A1 (en) 2023-07-03 2023-07-03 WISE-i: AN ELECTRONIC TRAVEL AND COMMUNICATION AID DEVICE FOR THE VISUALLY IMPAIRED

Publications (1)

Publication Number Publication Date
WO2024018310A1 true WO2024018310A1 (en) 2024-01-25

Family

ID=89617266

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/056897 WO2024018310A1 (en) 2023-07-03 2023-07-03 WISE-i: AN ELECTRONIC TRAVEL AND COMMUNICATION AID DEVICE FOR THE VISUALLY IMPAIRED

Country Status (1)

Country Link
WO (1) WO2024018310A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060105301A1 (en) * 2004-11-02 2006-05-18 Custom Lab Software Systems, Inc. Assistive communication device
WO2012104626A1 (en) * 2011-01-31 2012-08-09 The University Of Sheffield Active sensory augmentation device
US20170224573A1 (en) * 2014-11-10 2017-08-10 Pranav Challa Assistive support systems and devices for automatic feedback
US20170239130A1 (en) * 2012-06-01 2017-08-24 New York University Somatosensory feedback wearable object
US20180079429A1 (en) * 2016-09-16 2018-03-22 Toyota Motor Engineering & Manufacturing North America, Inc. Human-machine interface device and method for sensory augmentation in a vehicle environment
US20190055835A1 (en) * 2017-08-18 2019-02-21 AquaSwift Inc. Method and System for Collecting and Managing Remote Sensor Data
US20200271446A1 (en) * 2018-01-12 2020-08-27 Trimble Ab Geodetic instrument with reduced drift
US20210137772A1 (en) * 2019-11-12 2021-05-13 Elnathan J. Washington Multi-Functional Guide Stick
WO2023061927A1 (en) * 2021-10-15 2023-04-20 Fusion Lab Technologies SARL Method for notifying a visually impaired user of the presence of object and/or obstacle

Similar Documents

Publication Publication Date Title
CN108673501B (en) Target following method and device for robot
EP3321888B1 (en) Projected image generation method and device, and method for mapping image pixels and depth values
US10437252B1 (en) High-precision multi-layer visual and semantic map for autonomous driving
US10794710B1 (en) High-precision multi-layer visual and semantic map by autonomous units
US11317079B2 (en) Self-supervised training of a depth estimation model using depth hints
US11427218B2 (en) Control apparatus, control method, program, and moving body
US11353891B2 (en) Target tracking method and apparatus
US20080118104A1 (en) High fidelity target identification and acquisition through image stabilization and image size regulation
WO2016031105A1 (en) Information-processing device, information processing method, and program
CN111226154B (en) Autofocus camera and system
CN112020411B (en) Mobile robot apparatus and method for providing service to user
JP6758068B2 (en) Autonomous mobile robot
CN113920258A (en) Map generation method, map generation device, map generation medium and electronic equipment
US20230103650A1 (en) System and method for providing scene information
WO2024018310A1 (en) WISE-i: AN ELECTRONIC TRAVEL AND COMMUNICATION AID DEVICE FOR THE VISUALLY IMPAIRED
Cai et al. Heads-up lidar imaging with sensor fusion
US20240098225A1 (en) System and method for providing scene information
US20220319016A1 (en) Panoptic segmentation forecasting for augmented reality
Tyagi et al. Sensor Based Wearable Devices for Road Navigation
CA3221072A1 (en) Image depth prediction with wavelet decomposition
Raju et al. Navigating By Means Of Electronic Perceptible Assistance System
Todd et al. EYESEE; AN ASSISTIVE DEVICE FOR BLIND NAVIGATION WITH MULTI-SENSORY AID
Raju et al. PERCEPTIBLE PATH ORGANISM IN SUPPORT OF VISUALLY IMPAIRED PEOPLE

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842516

Country of ref document: EP

Kind code of ref document: A1