US20130250078A1 - Visual aid - Google Patents
- Publication number
- US20130250078A1 (application Ser. No. 13/770,560)
- Authority
- US
- United States
- Prior art keywords
- objects
- knowledge database
- user
- new
- visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting-in contact lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/08—Devices or methods enabling eye-patients to replace direct visual perception by another kind of perception
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H3/00—Appliances for aiding patients or disabled persons to walk about
- A61H3/06—Walking aids for blind persons
- A61H3/061—Walking aids for blind persons with electronic detecting or guiding means
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H3/00—Appliances for aiding patients or disabled persons to walk about
- A61H3/06—Walking aids for blind persons
- A61H3/061—Walking aids for blind persons with electronic detecting or guiding means
- A61H2003/063—Walking aids for blind persons with electronic detecting or guiding means with tactile perception
Definitions
- Embodiments of the present invention relate to visual aid systems, devices and methods, for example, to aid visually impaired or blind users.
- Embodiments of the invention may provide a visual aid system, device and method.
- The visual aid system may include an imaging unit to capture images of a user's surroundings, a knowledge database to store object recognition information for a plurality of image objects, an object recognition module to match and identify an object in one or more captured images with an object in the knowledge database, and an output device to output a non-visual indication of the identified object.
- FIG. 1 is a schematic illustration of a visual aid system in accordance with embodiments of the invention.
- FIG. 2 is a flowchart of a method for using the visual aid system of FIG. 1 in accordance with embodiments of the invention.
- Embodiments of the present invention allow blind or visually impaired users to “see” with their other senses, for example, by hearing an oral description of their visual environment or by feeling a tactile stimulus.
- Embodiments of the present invention may include an imaging system to image a user's surroundings in real-time, an object recognition module to automatically recognize and identify visual objects in the user's path, and an output device to notify the user of those visual objects using non-visual (e.g., oral or tactile) descriptions to aid the visually impaired user.
- The object recognition module may access a knowledge database of stored image objects to compare to the captured image objects and, upon detecting a match, may identify the captured object as its matched database counterpart.
- Each matched database object may be associated with a data file of a non-visual description of the object, such as an audio file of a voice stating the name and features of the object or a tactile stimulus.
- The output device may output or play the non-visual description to the visually impaired user.
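The capture → match → output flow described above can be sketched as follows. This is a toy illustration only: the database entries, the object "signatures," and the `play_audio` callback are hypothetical stand-ins, not part of the patent's disclosure.

```python
# A toy sketch of the described pipeline: match a recognized object
# signature against a knowledge database and, on a match, output the
# stored non-visual description. All names and entries are hypothetical.

KNOWLEDGE_DB = {
    "chair": "a wooden chair, about one meter ahead",
    "pole": "a metal pole directly in your path",
}

def recognize(signature):
    """Return the stored description for a matched object, else None."""
    return KNOWLEDGE_DB.get(signature)

def output_description(signature, play_audio=print):
    """On a match, play (here: print) the non-visual description."""
    description = recognize(signature)
    if description is not None:
        play_audio(description)
    return description
```

In a real system the description would be an audio file or a tactile-stimulus command rather than a string, but the lookup-then-output structure is the same.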
- Such embodiments may report visual objects, people, movement and scenery in the user's field of view using non-visual descriptors.
- A user may use such systems, for example, to recognize people, places and objects while they walk outside, to find the correct drug label when they open a medicine cabinet, to find the correct street address from among a row of houses or businesses, to avoid collisions, etc.
- Embodiments of the invention may not only be a tool to avoid obstacles, but may actually mimic the sense of sight including both the functionality of the eyes (e.g., using a camera) and the visual recognition and cognition pathways in the brain (e.g., using an object recognition module).
- FIG. 1 schematically illustrates a visual aid system 100 in accordance with embodiments of the invention.
- System 100 may include an imaging system 102 to image light reflected from objects 104 in a field of view (FOV) 106 of imaging system 102.
- System 100 may include a distance measuring module 108 to measure the distance between module 108 and object 104, an object recognition module 110 to identify images of object 104, an output device 112 to output a non-visual indication of the identity of object 104 and a position module 114 to determine the position of system 100 or object 104.
- System 100 may include a transmitter or other communication module 120 to communicate with other devices, e.g., via a wireless network.
- System 100 may optionally include a recorder to record data gathered by the device, such as, the captured image data.
- Imaging system 102 may include an imager or camera 105 and an optical system including one or more lenses, prisms, or mirrors, etc., to capture images of physical objects 104 via the reflection of light waves therefrom in the imager's field of view 106.
- Camera 105 may capture individual images or a stream of images in rapid succession to generate a movie or moving image stream.
- Camera 105 may include, for example, a micro-camera, such as, a “camera on a chip” imager, a charge-coupled device (CCD) and/or complementary metal-oxide-semiconductor (CMOS) camera.
- The captured image data may be digital color image data, although other image formats may be used.
- Camera 105 may be worn by the user, such as, on a hat, a belt, the bridge of a pair of glasses or sunglasses, an accessory worn around the neck to suspend camera 105 near chest level, or worn near ground level attached to shoe laces or the tongue of a shoe.
- The camera's field of view 106 may be similar to that of the human eye system (e.g., approximately 160° in the horizontal direction and 140° in the vertical direction) or may be wider (e.g., approximately 180° or 360°) or narrower (e.g., approximately 90°) in the vertical and/or horizontal directions.
- Camera 105 may move or rotate to scan its surroundings for a dynamic field of view 106.
- Scanning may be initiated automatically or upon detecting a predetermined trigger, such as, a moving object.
- Imaging system 102 may capture images periodically. The periodicity may be set and/or adjusted by the programmer or user to be a predetermined time, for example, either in relatively fast succession (e.g., 10-100 frames per second (fps)) to resemble a movie or in relatively slow succession (e.g., 0.1-1 frames per second) to resemble individual images.
- Imaging system 102 may also capture images in response to a stimulus, such as, a change in visual background, a change in light levels, rapid motion, etc. Imaging system 102 may capture images in real-time.
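The two capture policies above (fixed frame rate versus stimulus-triggered) might be sketched as follows. The brightness-change trigger and its 0.2 threshold are illustrative assumptions; the patent only names the kinds of stimuli, not any values.

```python
def capture_times(fps, duration_s):
    """Timestamps (seconds) for periodic capture at a fixed frame rate,
    e.g., 10-100 fps for a movie-like stream, 0.1-1 fps for stills."""
    interval = 1.0 / fps
    return [i * interval for i in range(int(duration_s * fps))]

def stimulus_capture(prev_level, curr_level, threshold=0.2):
    """Stimulus-triggered capture: fire on a sharp change in light level.
    (A real trigger could also watch background changes or rapid motion.)"""
    return abs(curr_level - prev_level) >= threshold
```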
- Object recognition module 110 may analyze the image data collected by imaging system 102.
- Object recognition module 110 may include a processor 118 to execute object recognition logic including, for example, image recognition, pattern recognition, spatial perception, motion analysis and/or artificial intelligence (AI) functionalities.
- The logic may be used to compare features of the collected image data to known images or object recognition information stored in an image dictionary or knowledge database, e.g., located in a memory 116 or an external database.
- For example, object recognition module 110 may identify and extract a main, moving or new object in an image and compare it to known image objects represented in the knowledge database.
- Object recognition module 110 may compare the extracted captured object with dictionary objects using the actual images of the objects, metadata derived from the images or annotated or summary information associated with the images.
- Object recognition module 110 may compare images based on one or more features of the imaged object 104, such as, object name (e.g., an apple vs. a hammer), object type or category (e.g., plant vs. tool), color, size, shape, texture, pattern, brightness, distance to the object and direction or orientation of the object. Each feature may be determined using a separate comparison.
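One simple way to realize the per-feature comparison just described is a weighted vote over the features, with the best-scoring database entry taken as the identification. The features, weights and acceptance threshold below are illustrative assumptions, not values from the patent.

```python
def feature_score(captured, candidate, weights=None):
    """Each feature is compared separately; the per-feature results are
    combined into a weighted fraction of matching features in [0, 1]."""
    weights = weights or {"color": 1.0, "shape": 1.0, "size": 0.5}
    matched = sum(w for f, w in weights.items()
                  if captured.get(f) == candidate.get(f))
    return matched / sum(weights.values())

def identify(captured, knowledge_db, min_score=0.6):
    """Identify the captured object as its best database match, or None
    when no entry scores above the acceptance threshold."""
    score, name = max((feature_score(captured, obj), name)
                      for name, obj in knowledge_db.items())
    return name if score >= min_score else None
```

A production recognizer would derive these feature values from the image itself (e.g., via descriptor matching); the point here is only the separate-comparison-then-combine structure.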
- The knowledge database may also store a data file of a non-visual description of each database object and/or feature, such as an audio file reciting the name of the object and its associated features, a tactile stimulus signaling the presence of a new object or a near object likely to cause a collision, etc.
- When a match is found between the imaged object and a database object, output device 112 may output or play the associated non-visual description to the visually impaired user for recognition of his/her surroundings.
- Output device 112 may include headphones, speakers, etc., to output sound data files and a buzzer, micro-electromechanical systems (MEMS) switch or vibrator to output tactile stimuli.
- The knowledge database may be adaptive.
- An adaptive knowledge database may store object recognition information not only for generic objects, such as an apple or a chair, but also for individualized objects, providing user-specific recognition capabilities.
- The adaptive knowledge database may be used, for example, to recognize a user's family, friends and co-workers, identifying each individual by name, or to recognize the streets in the user's town, the office where the user works, etc.
- A machine-learning or “training” mode may be used to add objects to the knowledge database, for example, where the user may put the target object into field of view 106 of camera 105 and input (e.g., type or speak) the name or features of the new imaged object.
- The knowledge database may also be self-adaptive or self-taught.
- In one example, when an unknown object commonly appears in the user's path, object recognition module 110 may automatically access a secondary knowledge database, e.g., via communication module 120, to find the recognition information associated with that object and add it to the primary knowledge database.
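A minimal sketch of this adaptive behavior, combining the user-driven training mode with the self-taught secondary lookup. The class name, the sighting counter, and the promotion threshold are all hypothetical; the patent describes the behavior, not a data structure.

```python
class AdaptiveKnowledgeDB:
    """Toy adaptive knowledge database: a user-driven training mode adds
    individualized objects, and unknown objects seen repeatedly are looked
    up in a secondary database and promoted into the primary one."""

    def __init__(self, secondary_lookup=None, promote_after=3):
        self.primary = {}                      # signature -> description
        self.unknown_counts = {}               # sightings of unknown objects
        self.secondary_lookup = secondary_lookup or (lambda sig: None)
        self.promote_after = promote_after

    def train(self, signature, description):
        """Training mode: the user images an object and speaks/types its name."""
        self.primary[signature] = description

    def identify(self, signature):
        """Return the known description, or consult the secondary database
        once an unknown object has appeared often enough."""
        if signature in self.primary:
            return self.primary[signature]
        count = self.unknown_counts.get(signature, 0) + 1
        self.unknown_counts[signature] = count
        if count >= self.promote_after:
            info = self.secondary_lookup(signature)
            if info is not None:
                self.primary[signature] = info   # self-taught promotion
                return info
        return None
```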
- Distance measuring module 108 may measure the distance between module 108 and object 104.
- Distance measuring module 108 may include a transmitter/receiver 107 to transmit waves, such as, sonar, ultrasonic, and/or laser waves, and receive the reflections of those waves off of object 104 (and noise from other objects) to gauge the distance to object 104.
- Distance measuring module 108 may emit waves in a range 122 by scanning an area, for example, approximating field of view 106.
- The received wave information may be input into a microcontroller (e.g., in module 108) programmed to identify the distance to object 104.
- The distance measurement may be used for collision avoidance to alert the user with an alarm (e.g., an auditory or tactile stimulus) via output device 112 when a possible collision with object 104 is detected.
- A possible collision may be detected when the distance measured to object 104 is less than or equal to a predetermined threshold and/or when the user and/or object 104 is moving.
- Distance measuring module 108 may alert the user to avoid objects 104 (still or moving) which are pre-identified as threatening and/or may recommend that the user halt or change course (e.g., “turn left to avoid couch”).
- The distance measurement may also be used for size calculations, for example, scaling the size of object 104 in the image by a factor of the measured distance to determine the actual size of object 104, e.g., to describe object 104 as “large,” “medium” or “small” relative to a predefined standard size for that object.
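The collision test and the distance-scaled size estimate might look as follows under a simple pinhole-camera assumption. The 2-meter threshold, the calibration constant, and the 25% size bands are illustrative choices; the patent specifies none of these values.

```python
def possible_collision(distance_m, threshold_m=2.0):
    """Flag a possible collision when the measured distance is at or
    below a predetermined threshold."""
    return distance_m <= threshold_m

def actual_size_m(image_size_px, distance_m, px_per_meter_at_1m):
    """Pinhole model: image size falls off linearly with distance, so the
    actual size is the imaged size scaled by the measured distance."""
    return image_size_px * distance_m / px_per_meter_at_1m

def size_label(actual_m, standard_m):
    """Describe the object relative to a predefined standard size."""
    ratio = actual_m / standard_m
    if ratio > 1.25:
        return "large"
    if ratio < 0.75:
        return "small"
    return "medium"
```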
- Position module 114 may include a global positioning system (GPS), accelerometer, compass, gyroscope, etc., to determine the position, speed, orientation or other motion parameters of system 100 and/or object 104.
- Position module 114 may report a current location to the user and/or guide the user as a navigator device.
- Position module 114 may provide oral navigation directions that adapt to obstructive objects identified, for example, by object recognition module 110 using the captured images and/or by distance measuring module 108 using wave reflection data, giving real-time guidance responsive to the user's environment. For example, if object recognition module 110 detects an obstruction in a navigational path proposed by position module 114, such as a closed road or a pothole, position module 114 may re-route the navigational path around the obstruction.
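The re-routing behavior can be sketched as a shortest-path search that treats recognized obstructions as blocked nodes and plans around them. The street graph and node names below are invented for illustration; any shortest-path algorithm would serve.

```python
from collections import deque

def shortest_route(graph, start, goal, blocked=frozenset()):
    """Breadth-first route planner over a street graph; nodes reported as
    obstructed (e.g., by the object recognition module) are skipped, so a
    blocked node forces a re-route around the obstruction."""
    if start == goal:
        return [start]
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt in visited or nxt in blocked:
                continue
            if nxt == goal:
                return path + [nxt]
            visited.add(nxt)
            frontier.append(path + [nxt])
    return None  # no route avoids the obstruction
```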
- Communication module 120 may include a transmitter and receiver to allow system 100 to communicate with remote servers or databases over networks, such as the Internet, e.g., via wireless connection.
- Communication module 120, in conjunction with position module 114, may allow a remote server to track the user, communicate with the user via output device 112, and send information to the user, such as, auditory reports of street closings, risky situations, the news or the weather.
- Components of system 100 may have artificial intelligence logic installed, for example, to fully interact with the user based on non-visual cues, for example, by accepting and responding to vocal commands, vocal inquiries and other voice-activated triggers.
- The user may state a command, e.g., via a microphone or other input device, causing camera 105 to scan its surroundings or position module 114 to navigate the user to a requested destination.
- System 100 components may communicate with each other and/or external units via wired or wireless connections, such as, Bluetooth or the Internet.
- Components of system 100 may be integrated into a single unit (e.g., all-in-one) or may include multiple separate pieces or sub-units.
- One example of an integrated system 100 may include glasses or sunglasses in which camera 105 is placed at the nose bridge and/or earphone output devices 112 extend from the temple arms. Micro or lightweight components may be used for the comfort of the glasses system 100 .
- Another example of an integrated system 100 may include a headphone output device 112 with camera 105 attached at the top of the headphone bridge.
- System 100 may be configured to exclude some components, such as, communication module 120 , or include additional components, such as, a recorder. Other system 100 designs may be used.
- Embodiments of the present invention may describe visual aspects of the user's environment using non-visual descriptions, triggers or alarms. For example, sensory input related to a first sense (e.g., visual stimulus) may be translated or mapped to sensory information related to a second sense (e.g., auditory or tactile stimulus), for example, when the first sense is impaired. Such embodiments may convey details or features of the sensed objects, such as, the color or shape of the object extending beyond the capabilities of current collision avoidance mechanisms. Embodiments of the invention may use artificial intelligence to interpret images in the user's field of view 106 , similarly to the human visual recognition process, and to provide such information orally to the visually impaired user.
- Such descriptions may provide insight and detail beyond what a visually impaired user can recognize simply by feeling objects around them or listening to ambient sounds.
- Oral descriptions may also evoke memories and visual cues for users who previously had a functioning sense of sight.
- Auditory descriptions of visual objects may activate regions of the brain, such as the occipital lobe, designated for visual function, even without the function of the eyes.
- Embodiments of the invention may allow users to “visualize” the world through an oral description of the images captured by imaging system 102.
- FIG. 2 is a flowchart of a method for using a visual aid system (e.g., system 100 of FIG. 1) in accordance with embodiments of the invention.
- An imaging system (e.g., imaging system 102 of FIG. 1) may capture images of the user's surroundings, e.g., in operation 210.
- An object recognition module may compare the images captured in operation 210 with images or associated object recognition information for predefined objects stored in a knowledge database (e.g., in memory 116 of FIG. 1).
- The object recognition module may match the captured image objects (e.g., objects 104 of FIG. 1) with image objects represented in the knowledge database. When a match is found, the captured image object may be identified or recognized as the predefined database object.
- An output device may output or play a non-visual descriptive file associated with the matching database object (e.g., a sound file or a command to activate a tactile stimulus device).
- A distance measuring module may measure the distance between the imaging system and the imaged object.
- The distance measuring module may emit and receive waves to gauge the distance of the reflected wave path to/from the object.
- A position module (e.g., position module 114 of FIG. 1) may determine the position or motion parameters of the user and/or the imaged object.
- A communication module (e.g., communication module 120 of FIG. 1) may allow the user and the system components to communicate with other external devices.
- “Visually impaired” may refer to a full or partial loss of sight in humans (partially sighted, low vision, legally blind, totally blind, etc.) or to users whose visual field is obstructed, e.g., from viewing the rear or periphery in a plane or car, but who otherwise have acceptable vision.
- Embodiments of the invention may also be used in contexts where vision is not an issue, for example, for identifying individuals in a diplomatic meeting, identifying landmark structures as a tourist, identifying works of art in a museum, for teaching object recognition to children, etc.
- A soldier or a policeman may use the device in situations where they may be attacked from behind.
- An imaging system may include a probe or robot (e.g., detached from the user) that may enter areas restricted to humans, such as, dangerous areas during a war, areas with chemical or biological leaks, extra-planetary space missions, etc. If operated in darkness, the imaging system may use night-vision technology to detect objects, and the user (e.g., located remotely) may request oral descriptions of the images. In another example, if the camera is equipped with night-vision, a user may use it to visualize dimly lit streets or other dark places.
- Although embodiments of the invention are described herein as translating visual sensory input for sight into auditory sensory input for hearing, such embodiments may be generalized to translate sensory input from any first sense to any second sense, for example, when the first sense is impaired.
- Sound input may be translated to a visual stimulus, for example, to aid deaf or hearing-impaired people.
- A tactile stimulus may be used to convey the visual and/or auditory world to a blind and/or deaf person.
- In contrast, object recognition in robotics maps visual, auditory and all other sensory input to non-sensory data, since robots, unlike humans, do not have senses. Accordingly, the object recognition systems of robotics networks would not be modified to transcribe visual sensory data into auditory data, since the auditory output would be inoperable for commanding or communicating with a robot.
- Capturing images and recognizing and reporting imaged objects in “real-time” may refer to operations that occur near-instantly, e.g., at a small time delay of between 0.01 and 10 seconds, while the object is still in front of the viewer.
- Embodiments of the invention may include an article such as a computer or processor readable non-transitory storage medium, such as, for example, a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which when executed by a processor or controller (e.g., processor 118 of FIG. 1), cause the processor or controller to carry out methods disclosed herein.
Abstract
A visual aid system, device and method are provided. The visual aid system may include an imaging unit to capture images of a user's surroundings, a knowledge database to store object recognition information for a plurality of image objects, an object recognition module to match and identify an object imaged in one or more captured images with an object in the knowledge database, and an output device to output a non-visual indication of the identified object.
Description
- This application claims the benefit of prior U.S. Provisional Application Ser. No. 61/615,401, filed Mar. 26, 2012, which is incorporated by reference herein in its entirety.
- Embodiments of the present invention relate to visual aid systems, devices and methods, for example, to aid visually impaired or blind users.
- Visually impaired and blind people currently rely on dogs or canes to detect obstacles and move forward safely. Recent advances include a device referred to as a “virtual cane” that uses sonar technology to detect obstacles by transmission and reception of sonic waves.
- However, these solutions only detect the presence of an obstruction and are simply tools to avoid collision. They cannot identify the actual object that causes the obstruction, for example, distinguishing between a chair and a pole, or provide a spatial view of the landscape in front of the user.
- There is a long felt need in the art to provide visually impaired users with an understanding of their environment that mimics the visual sense.
- Embodiments of the invention may provide a visual aid system, device and method. The visual aid system may include an imaging unit to capture images of a user's surroundings, a knowledge database to store object recognition information for a plurality of image objects, an object recognition module to match and identify an object in one or more captured images with an object in the knowledge database, and an output device to output a non-visual indication of the identified object.
- The principles and operation of the system, apparatus, and method according to embodiments of the present invention may be better understood with reference to the drawings and the following description, it being understood that these drawings are given for illustrative purposes only and are not meant to be limiting.
- FIG. 1 is a schematic illustration of a visual aid system in accordance with embodiments of the invention; and
- FIG. 2 is a flowchart of a method for using the visual aid system of FIG. 1 in accordance with embodiments of the invention.
- For simplicity and clarity of illustration, elements shown in the drawings have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity.
- In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention.
- Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
- Embodiments of the present invention allow blind or visually impaired users to “see” with their other senses, for example, by hearing an oral description of their visual environment or by feeling a tactile stimulus. Embodiments of the present invention may include an imaging system to image a user's surroundings in real-time, an object recognition module to automatically recognize and identify visual objects in the user's path, and an output device to notify the user of those visual objects using non-visual (e.g., oral or tactile) descriptions to aid the visually impaired user. The object recognition module may access a knowledge database of stored image objects to compare to the captured image objects and, upon detecting a match, may identify the captured object as its matched database counterpart. Each matched database object may be associated with a data file of a non-visual description of the object, such as an audio file of a voice stating the name and features of the object or a tactile stimulus. The output device may output or play the non-visual description to the visually impaired user.
- Such embodiments may report visual objects, people, movement and scenery in the user's field of view using non-visual descriptors. A user may use such systems, for example, to recognize people, places and objects while they walk outside, to find the correct drug label when they open a medicine cabinet, to find the correct street address from among a row of houses or businesses, to avoid collisions, etc. Embodiments of the invention may not only be a tool to avoid obstacles, but may actually mimic the sense of sight including both the functionality of the eyes (e.g., using a camera) and the visual recognition and cognition pathways in the brain (e.g., using an object recognition module).
- Reference is made to
FIG. 1 , which schematically illustrates avisual aid system 100 in accordance with embodiments of the invention. -
System 100 may include animaging system 102 to image light reflected fromobjects 104 in a field of view (FOV) 106 ofimaging system 102.System 100 may include a distance measuringmodule 108 to measure the distance betweenmodule 108 andobject 104, anobject recognition module 110 to identify images ofobject 104, anoutput device 112 to output a non-visual indication of the identity ofobject 104 and aposition module 114 to determine the position ofsystem 100 orobject 104.System 100 may include a transmitter orother communication module 120 to communicate with other devices, e.g., via a wireless network.System 100 may optionally include a recorder to record data gathered by the device, such as, the captured image data. -
Imaging system 102 may include an imager orcamera 105 and an optical system including one or more lens(es), prisms, or mirrors, etc. to capture images ofphysical objects 104 via the reflection of light waves therefrom in the imager' s field ofview 106. Camera 105 may capture individual images or a stream of images in rapid succession to generate a movie or moving image stream.Camera 105 may include, for example, a micro-camera, such as, a “camera on a chip” imager, a charge-coupled device (CCD) and/or metal-oxide-semiconductor (CMOS) camera. The captured image data may be digital color image data, although other image formats may be used.Camera 105 may be worn by the user, such as, on a hat, a belt, the bridge of a pair of glasses or sunglasses, an accessory worn around the neck to suspendcamera 105 near chest level, or worn near ground level attached to shoe laces or the tongue of a shoe. The camera's field ofview 106 may be similar to that of the human eye system (e.g., approximately 160° in the horizontal direction and 140° in the vertical direction) or may be wider (e.g., approximately 180° or 360°) or narrower (e.g., approximately 90°) in the vertical and/or horizontal directions. In some embodiments,camera 105 may move or rotate to scan its surroundings for a dynamic field ofview 106. Scanning may be initiated automatically or upon detecting a predetermined trigger, such as, a moving object. In some embodiments, asingle camera 105 may be used, while in other embodiments,multiple cameras 105 may be used, for example, to assemble a panoramic view from the individual cameras.Imaging system 102 may capture images periodically. The periodicity may be set and/or adjusted by the programmer or user to be a predetermined time, for example, either in relatively fast succession (e.g., 10-100 frames per second (fps)) to resemble a movie or in relatively slow succession (e.g., 0.1-1 frames per second) to resemble individual images. 
In other embodiments,imaging system 102 may capture images in response to a stimulus, such as, a change in visual background, change in light levels, rapid motion, etc.Imaging system 102 may capture images in real-time. -
Object recognition module 110 may analyze the image data collected byimaging system 102.Object recognition module 110 may include aprocessor 118 to execute object recognition logic including, for example, image recognition, pattern recognition, spatial perception, motion analysis and/or artificial intelligence (AI) functionalities. The logic may be used to compare features of the collected image data to known images or object recognition information stored in an image dictionary or knowledge database, e.g., located in amemory 116 or an external database. For example,object recognition module 110 may identify and extract a main, moving or new object in an image and compare it to known image objects represented in the knowledge database.Object recognition module 110 may compare the captured extracted object and dictionary objects using the actual images of the objects, metadata derived from the images or annotated or summary information associated with the images. In some embodiments,object recognition module 110 may compare images based on one or more features of theimaged object 104, such as, object name (e.g., an apple vs. a hammer), object type or category (e.g., plant vs. tool), color, size, shape, texture, pattern, brightness, distance to the object and direction or orientation of the object. Each feature may be determined using a separate comparison. The knowledge database may also store a data file of a non-visual description of each database object and/or feature, such as an audio file reciting the name of the object and its associated features, a tactile stimulus defining the presence of a new object or a near object likely to cause a collision, etc. 
Accordingly, when a match is found between the imaged object and a database object, output device 112 may output or play the associated non-visual description to the visually impaired user for recognition of his/her surroundings. Output device 112 may include headphones, speakers, etc., to output sound data files, and a buzzer, micro-electromechanical systems (MEMS) switch or vibrator to output tactile stimuli. - In some embodiments, the knowledge database may be adaptive. An adaptive knowledge database may store object recognition information not only for generic objects, like an apple or a chair, but also individualized objects for user-specific recognition capabilities. The adaptive knowledge database may be used, for example, to recognize a user's family, friends and co-workers, identifying each individual by name, to recognize the streets in the user's town, the office where the user works, etc. A machine-learning or "training" mode may be used to add objects into the knowledge database, for example, where the user may put the target object into field of view 106 of camera 105 and input (e.g., type or speak) the name or features of the new imaged object. In other embodiments, the knowledge database may be self-adaptive or self-taught. In one example, when an unknown object commonly appears in the user's path, object recognition module 110 may automatically access a secondary knowledge database, e.g., via communication module 120, to find the recognition information associated with that object and add it to the primary knowledge database. - Distance measuring
module 108 may measure the distance between module 108 and object 104. Distance measuring module 108 may include a transmitter/receiver 107 to transmit waves, such as sonar, ultrasonic, and/or laser waves, and receive the reflections of those waves off of object 104 (and noise from other objects) to gauge the distance to object 104. Distance measuring module 108 may emit waves in a range 122 by scanning an area, for example, approximating field of view 106. The received wave information may be input into a microcontroller (e.g., in module 108) programmed to identify the distance to object 104. The distance measurement may be used for collision avoidance, alerting the user with an alarm (e.g., an auditory or tactile stimulus) via output device 112 when a possible collision with object 104 is detected. A possible collision may be detected when the distance measured to object 104 is less than or equal to a predetermined threshold and/or when the user and/or object 104 is moving. Distance measuring module 108 may alert the user to avoid objects 104 (still or moving) which are pre-identified as threatening and/or may recommend that the user halt or change course (e.g., "turn left to avoid couch"). The distance measurement may also be used for size calculations, for example, scaling the size of object 104 in the image by a factor of the distance measured, to determine the actual size of object 104, for example, to describe object 104 as "large," "medium" or "small" relative to a predefined standard size of the object. - A position module 114 may include a global positioning system (GPS), accelerometer, compass, gyroscope, etc., to determine the position, speed, orientation or other motion parameters of system 100 and/or object 104. Position module 114 may report a current location to the user and/or guide the user as a navigator device. For example, position module 114 may provide oral navigation directions that avoid obstructive objects identified, for example, by object recognition module 110 using the captured images and/or by distance measuring module 108 using wave reflection data, for real-time guidance adaptive to the user's environment. For example, if object recognition module 110 detects an obstruction in a navigational path proposed by position module 114, such as a closed road or a pothole, position module 114 may re-route the navigational path around the obstruction. -
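The threshold test and distance-scaled size estimate described above might look like the following sketch; the threshold, the camera calibration constant and the size bands are assumed values chosen purely for illustration.

```python
def possible_collision(distance_m, threshold_m=1.5, moving=False):
    # A collision is flagged at or below the threshold, and earlier
    # (an assumed 2x margin) when the user or the object is moving.
    return distance_m <= threshold_m or (moving and distance_m <= 2 * threshold_m)

def actual_size_m(image_size_px, distance_m, px_per_m_at_1m=500.0):
    # Scale the imaged size by a factor of the measured distance;
    # px_per_m_at_1m is a made-up calibration constant for the camera.
    return image_size_px * distance_m / px_per_m_at_1m

def size_label(size_m, standard_size_m):
    # Describe the object relative to a predefined standard size.
    ratio = size_m / standard_size_m
    if ratio < 0.75:
        return "small"
    if ratio > 1.25:
        return "large"
    return "medium"
```

A couch measured at 1.2 m while the user is walking would then trigger an alert such as "turn left to avoid couch".
-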
Communication module 120 may include a transmitter and receiver to allow system 100 to communicate with remote servers or databases over networks, such as the Internet, e.g., via wireless connection. Communication module 120, in conjunction with position module 114, may allow a remote server to track the user, communicate with the user via output device 112, and send information to the user, such as auditory reports of street closings, risky situations, the news or the weather. - Components of system 100 may have artificial intelligence logic installed, for example, to fully interact with the user based on non-visual cues, for example, by accepting and responding to vocal commands, vocal inquiries and other voice-activated triggers. For example, the user may state a command, e.g., via a microphone or other input device, causing camera 105 to scan its surroundings or position module 114 to navigate the user to a requested destination. -
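A minimal sketch of such voice-activated triggers, assuming speech has already been transcribed to text upstream; the command phrases, handler names and response strings are all invented for this example.

```python
def make_dispatcher(handlers):
    """Map recognized command phrases to component actions."""
    def dispatch(utterance):
        text = utterance.lower().strip()
        for phrase, action in handlers.items():
            if text.startswith(phrase):
                # Pass the remainder of the utterance as the argument.
                return action(text[len(phrase):].strip())
        return "command not recognized"
    return dispatch

# Hypothetical bindings to the camera and position module.
dispatch = make_dispatcher({
    "scan": lambda _arg: "camera scanning surroundings",
    "navigate to": lambda dest: "routing to " + dest,
    "where am i": lambda _arg: "reporting current location",
})
```
-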
System 100 components may communicate with each other and/or external units via wired or wireless connections, such as Bluetooth or the Internet. Components of system 100 may be integrated into a single unit (e.g., all-in-one) or may include multiple separate pieces or sub-units. One example of an integrated system 100 may include glasses or sunglasses in which camera 105 is placed at the nose bridge and/or earphone output devices 112 extend from the temple arms. Micro or lightweight components may be used for the comfort of the glasses system 100. Another example of an integrated system 100 may include a headphone output device 112 with camera 105 attached at the top of the headphone bridge. -
System 100 may be configured to exclude some components, such as communication module 120, or include additional components, such as a recorder. Other system 100 designs may be used. - Embodiments of the present invention may describe visual aspects of the user's environment using non-visual descriptions, triggers or alarms. For example, sensory input related to a first sense (e.g., visual stimulus) may be translated or mapped to sensory information related to a second sense (e.g., auditory or tactile stimulus), for example, when the first sense is impaired. Such embodiments may convey details or features of the sensed objects, such as the color or shape of the object, extending beyond the capabilities of current collision avoidance mechanisms. Embodiments of the invention may use artificial intelligence to interpret images in the user's field of view 106, similarly to the human visual recognition process, and to provide such information orally to the visually impaired user. Such description may provide insights and detail beyond what a visually impaired user can recognize simply by feeling objects around them or listening to ambient sounds. Such visual descriptions may evoke memories and visual cues present for users who previously had a functioning sense of sight. For example, auditory descriptions of visual objects may activate regions of the brain, such as the occipital lobe, designated for visual function, even without the function of the eyes. Embodiments of the invention may allow users to "visualize" the world through an oral description of the images captured by imaging system 102. - Reference is made to
FIG. 2, which is a flowchart of a method for using a visual aid system (e.g., system 100 of FIG. 1) in accordance with embodiments of the invention.
- In operation 210, an imaging system (e.g., imaging system 102 of FIG. 1) may capture images in the user's surroundings or field of view (e.g., field of view 106 of FIG. 1).
- In operation 220, an object recognition module (e.g., object recognition module 110 of FIG. 1) may compare the images captured in operation 210 with images or associated object recognition information for predefined objects stored in a knowledge database (e.g., memory 116 of FIG. 1).
- In operation 230, the object recognition module may match the captured image objects (e.g., objects 104 of FIG. 1) with image objects represented in the knowledge database. When a match is found, the captured image object may be identified or recognized as the predefined database object.
- In operation 240, an output device (e.g., output device 112 of FIG. 1) may output or play a non-visual descriptive file associated with the matching database object (e.g., a sound file or command to activate a tactile stimulus device).
- In operation 250, a distance measuring module (e.g., distance measuring module 108 of FIG. 1) may measure the distance between the imaging system and the imaged object. In one example, the distance measuring module may emit and receive waves to gauge the distance of the reflected wave path to/from the object.
- In operation 260, a position module (e.g., position module 114 of FIG. 1) may determine the position or motion parameters of the user and/or the imaged object.
- In operation 270, a communication module (e.g., communication module 120 of FIG. 1) may allow the user and the system components to communicate with other external devices.
- Other operations or orders of operations may be used.
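One pass through operations 210-250 can be sketched as a single loop body. Every callable below is a placeholder standing in for the corresponding module, and the wiring shown is an assumption for illustration rather than the claimed implementation.

```python
def visual_aid_step(capture, recognize, describe, measure_distance, alert,
                    collision_threshold_m=1.5):
    """Run one iteration: capture, match, report, then check for collisions."""
    image = capture()                          # operation 210
    match = recognize(image)                   # operations 220-230
    outputs = []
    if match is not None:
        outputs.append(describe(match))        # operation 240
    distance = measure_distance()              # operation 250
    if distance is not None and distance <= collision_threshold_m:
        outputs.append(alert(distance))        # collision warning
    return outputs
```

Position updates (operation 260) and server communication (operation 270) would run alongside this loop.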
- When used herein, "visually impaired" may refer to a full or partial loss of sight in humans (partially sighted, low vision, legally blind, totally blind, etc.) or may refer to users whose visual field is obstructed, e.g., from viewing the rear or periphery in a plane or car, but who otherwise have acceptable vision. Furthermore, embodiments of the invention may be used in other contexts where vision is not an issue, for example, for identifying individuals in a diplomatic meeting, identifying landmark structures as a tourist, identifying works of art in a museum, for teaching object recognition to children, etc. In one example, a soldier or a policeman may use the device in situations where they may be attacked from behind. Their device, e.g., worn on the back of a helmet or vest, may scan a field of view behind them and alert them orally of danger, thus allowing them to remain visually focused on events in front of them. In another example, an imaging system (e.g., imaging system 102 of FIG. 1) may include a probe or robot (e.g., detached from the user) that may enter areas restricted to humans, such as dangerous areas during a war, areas with chemical or biological leaks, extra-planetary space missions, etc. If the operation is executed in darkness, the imaging system may use night-vision technology to detect objects, where the user (e.g., located remotely) may request oral descriptions of the images due to the darkness. In another example, if the camera is equipped with night-vision, a user may be able to use it to visualize dimly lit streets or other dark places. - Although embodiments of the invention are described herein as translating visual sensory input for sight to auditory sensory input for hearing, such embodiments may be generalized to translate sensory input from any first sense to any second sense, for example, when the first sense is impaired. For example, sound input may be translated to visual stimulus, for example, to aid deaf or hearing-impaired people. In another example, a tactile stimulus may be used to convey the visual and/or auditory world to a blind and/or deaf person.
- It may be noted that robotic object recognition maps visual, auditory and all other sensory input to non-sensory data, since robots, unlike humans, do not have senses. Accordingly, the object recognition systems of robotics networks would not be modified to transcribe visual sensory data into auditory data, since the auditory output would be inoperable in commanding or communicating with a robot.
- It may be appreciated that capturing images and recognizing and reporting imaged objects in "real-time" may refer to operations that occur substantially instantly, e.g., at a small time delay of between 0.01 and 10 seconds, while the object is still in front of the viewer, etc.
- Different embodiments are disclosed herein. Features of certain embodiments may be combined with features of other embodiments; thus certain embodiments may be combinations of features of multiple embodiments.
- Embodiments of the invention may include an article such as a computer- or processor-readable non-transitory storage medium, such as, for example, a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller (e.g., processor 118 of FIG. 1), cause the processor or controller to carry out methods disclosed herein.
- The foregoing description of the embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be appreciated by persons skilled in the art that many modifications, variations, substitutions, changes, and equivalents are possible in light of the above teaching. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Claims (18)
1. A system comprising:
an imaging unit to capture images of a user's surroundings;
a knowledge database storing object recognition information for a plurality of image objects;
an object recognition module to match and identify an object in one or more captured images with an object in the knowledge database; and
an output device to output a non-visual indication of the identified object.
2. The system of claim 1, wherein the non-visual indication is an audio file reciting the name of the object.
3. The system of claim 1, wherein the non-visual indication is a tactile stimulus.
4. The system of claim 1, further comprising a collision avoidance module to detect obstructions by transmitting and receiving waves.
5. The system of claim 4, further comprising a positioning system to navigate the user using non-visual indications of directions that are responsive to avoid obstructive objects identified in the captured images or in the reflection of transmitted waves.
6. The system of claim 1, wherein the knowledge database is adaptive, enabling new image recognition information to be added to the knowledge database for recognizing new objects.
7. The system of claim 1, wherein the object in the captured images is matched to multiple objects in the knowledge database, each knowledge database object associated with a different feature of the captured image object.
8. The system of claim 7, wherein the features are selected from the group consisting of: object name, object type, color, size, shape, texture, pattern, brightness, distance to the object, direction to the object and orientation of the object.
9. The system of claim 7, comprising a plurality of modes for object recognition selected from the group consisting of: standard mode, indicating one or more features identified for each new object; quiet mode, indicating only the object type feature for each new object; motion mode, indicating new objects only when the environment changes; emergency mode, indicating objects only when a collision is anticipated; and scan mode, identifying a plurality of objects in a current environment.
10. A method comprising:
capturing images of a user's surroundings;
storing object recognition information for a plurality of image objects in a knowledge database;
identifying an object in one or more captured images that matches an object in the knowledge database; and
outputting a non-visual indication of the identified object.
11. The method of claim 10, wherein the non-visual indication is an audio file reciting the name of the object.
12. The method of claim 10, wherein the non-visual indication is a tactile stimulus.
13. The method of claim 10, further comprising detecting obstructions by transmitting and receiving waves.
14. The method of claim 13, further comprising navigating the user using non-visual indications of directions that are responsive to avoid obstructive objects identified in the captured images or in the reflection of transmitted waves.
15. The method of claim 10, comprising adapting the knowledge database by enabling new image recognition information to be added to the knowledge database for recognizing new objects.
16. The method of claim 10, comprising matching the object in the captured images to multiple objects in the knowledge database, each knowledge database object associated with a different feature of the captured image object.
17. The method of claim 16, wherein the features are selected from the group consisting of: object name, object type, color, size, shape, texture, pattern, brightness, distance to the object, direction to the object and orientation of the object.
18. The method of claim 16, comprising operating according to a selected one of a plurality of modes for object recognition selected from the group consisting of: standard mode, indicating one or more features identified for each new object; quiet mode, indicating only the object type feature for each new object; motion mode, indicating new objects only when the environment changes; emergency mode, indicating objects only when a collision is anticipated; and scan mode, identifying a plurality of objects in a current environment.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/770,560 US20130250078A1 (en) | 2012-03-26 | 2013-02-19 | Visual aid |
| IL224862A IL224862A0 (en) | 2012-03-26 | 2013-02-21 | Visual aid |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261615401P | 2012-03-26 | 2012-03-26 | |
| US13/770,560 US20130250078A1 (en) | 2012-03-26 | 2013-02-19 | Visual aid |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20130250078A1 (en) | 2013-09-26 |
Family
ID=49211425
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/770,560 US20130250078A1 (en) (Abandoned) | Visual aid | 2012-03-26 | 2013-02-19 |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US20130250078A1 (en) |
Cited By (64)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
US20140267649A1 (en) * | 2013-03-15 | 2014-09-18 | Orcam Technologies Ltd. | Apparatus and method for automatic action selection based on image context |
US20140266570A1 (en) * | 2013-03-12 | 2014-09-18 | Anirudh Sharma | System and method for haptic based interaction |
US20140266571A1 (en) * | 2013-03-12 | 2014-09-18 | Anirudh Sharma | System and method for haptic based interaction |
US20150112593A1 (en) * | 2013-10-23 | 2015-04-23 | Apple Inc. | Humanized Navigation Instructions for Mapping Applications |
US20150125831A1 (en) * | 2013-11-07 | 2015-05-07 | Srijit Chandrashekhar Nair | Tactile Pin Array Device |
US20150189071A1 (en) * | 2013-12-31 | 2015-07-02 | Sorenson Communications, Inc. | Visual assistance systems and related methods |
KR20150086840A (en) * | 2014-01-20 | 2015-07-29 | 삼성전자주식회사 | Apparatus and control method for mobile device using multiple cameras |
WO2015184299A1 (en) * | 2014-05-30 | 2015-12-03 | Frank Wilczek | Systems and methods for expanding human perception |
US20160033280A1 (en) * | 2014-08-01 | 2016-02-04 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable earpiece for providing social and environmental awareness |
US20160078278A1 (en) * | 2014-09-17 | 2016-03-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable eyeglasses for providing social and environmental awareness |
US9355316B2 (en) | 2014-05-22 | 2016-05-31 | International Business Machines Corporation | Identifying an obstacle in a route |
US9355547B2 (en) * | 2014-05-22 | 2016-05-31 | International Business Machines Corporation | Identifying a change in a home environment |
CN105701811A (en) * | 2016-01-12 | 2016-06-22 | 浙江大学 | Sound coding interaction method based on RGB-IR camera |
CN105686936A (en) * | 2016-01-12 | 2016-06-22 | 浙江大学 | Sound coding interaction system based on RGB-IR camera |
US20160265917A1 (en) * | 2015-03-10 | 2016-09-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing navigation instructions at optimal times |
USD768024S1 (en) | 2014-09-22 | 2016-10-04 | Toyota Motor Engineering & Manufacturing North America, Inc. | Necklace with a built in guidance device |
US9578307B2 (en) | 2014-01-14 | 2017-02-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US9576460B2 (en) | 2015-01-21 | 2017-02-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable smart device for hazard detection and warning based on image and audio data |
US9586318B2 (en) * | 2015-02-27 | 2017-03-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Modular robot with smart device |
US9629774B2 (en) | 2014-01-14 | 2017-04-25 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
WO2017156021A1 (en) * | 2016-03-07 | 2017-09-14 | Wicab, Inc. | Object detection, analysis, and alert system for use in providing visual information to the blind |
CN107157717A (en) * | 2016-03-07 | 2017-09-15 | 维看公司 | Object detection from visual information to blind person, analysis and prompt system for providing |
US9811752B2 (en) | 2015-03-10 | 2017-11-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable smart device and method for redundant object identification |
CN107402018A (en) * | 2017-09-21 | 2017-11-28 | 北京航空航天大学 | A kind of apparatus for guiding blind combinatorial path planing method based on successive frame |
US9898039B2 (en) | 2015-08-03 | 2018-02-20 | Toyota Motor Engineering & Manufacturing North America, Inc. | Modular smart necklace |
US9915545B2 (en) | 2014-01-14 | 2018-03-13 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
CN107820615A (en) * | 2017-08-23 | 2018-03-20 | 深圳前海达闼云端智能科技有限公司 | Send the method, apparatus and server of prompt message |
US9958275B2 (en) | 2016-05-31 | 2018-05-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for wearable smart device communications |
US20180125716A1 (en) * | 2016-11-10 | 2018-05-10 | Samsung Electronics Co., Ltd. | Visual aid display device and method of operating the same |
US9972216B2 (en) | 2015-03-20 | 2018-05-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for storing and playback of information for blind users |
US10012505B2 (en) | 2016-11-11 | 2018-07-03 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable system for providing walking directions |
US10024679B2 (en) | 2014-01-14 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US10024678B2 (en) * | 2014-09-17 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable clip for providing social and environmental awareness |
US10024680B2 (en) | 2016-03-11 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Step based guidance system |
US20180243157A1 (en) * | 2015-09-08 | 2018-08-30 | Sony Corporation | Information processing apparatus, information processing method, and program |
US10113877B1 (en) * | 2015-09-11 | 2018-10-30 | Philip Raymond Schaefer | System and method for providing directional information |
US10149101B2 (en) * | 2016-06-02 | 2018-12-04 | Chiun Mai Communication Systems, Inc. | Electronic device and reminder method |
US10172760B2 (en) | 2017-01-19 | 2019-01-08 | Jennifer Hendrix | Responsive route guidance and identification system |
US10248856B2 (en) | 2014-01-14 | 2019-04-02 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
WO2019059869A3 (en) * | 2017-09-20 | 2019-05-23 | Seyisco Bilisim Elektronik Danismalik Egitim Sanayi Ve Ticaret Anonim Sirketi | Rough road detection system and method |
US10299982B2 (en) * | 2017-07-21 | 2019-05-28 | David M Frankel | Systems and methods for blind and visually impaired person environment navigation assistance |
JPWO2018025531A1 (en) * | 2016-08-05 | 2019-05-30 | ソニー株式会社 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM |
US10360907B2 (en) | 2014-01-14 | 2019-07-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | Smart necklace with stereo vision and onboard processing |
US10432851B2 (en) | 2016-10-28 | 2019-10-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable computing device for detecting photography |
US10490102B2 (en) | 2015-02-10 | 2019-11-26 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for braille assistance |
US10521669B2 (en) | 2016-11-14 | 2019-12-31 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing guidance or feedback to a user |
GB2575165A (en) * | 2018-05-13 | 2020-01-01 | Oscar Thomas Wood Billy | Object identification system |
US10561519B2 (en) | 2016-07-20 | 2020-02-18 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable computing device having a curved back to reduce pressure on vertebrae |
US10617567B2 (en) * | 2017-06-10 | 2020-04-14 | Manjinder Saini | Intraocular implant device |
CN111161507A (en) * | 2019-12-26 | 2020-05-15 | 潘文娴 | Danger sensing method and device based on intelligent wearable equipment and intelligent wearable equipment |
US10841476B2 (en) | 2014-07-23 | 2020-11-17 | Orcam Technologies Ltd. | Wearable unit for selectively withholding actions based on recognized gestures |
US10900788B2 (en) * | 2018-12-03 | 2021-01-26 | Sidharth ANANTHA | Wearable navigation system for the visually impaired |
US10931916B2 (en) | 2019-04-24 | 2021-02-23 | Sorenson Ip Holdings, Llc | Apparatus, method and computer-readable medium for automatically adjusting the brightness of a videophone visual indicator |
US11017017B2 (en) * | 2019-06-04 | 2021-05-25 | International Business Machines Corporation | Real-time vision assistance |
US11036391B2 (en) | 2018-05-16 | 2021-06-15 | Universal Studios LLC | Haptic feedback systems and methods for an amusement park ride |
DE102014003331B4 (en) | 2014-03-08 | 2022-04-07 | Aissa Zouhri | Visual aid for blind or visually impaired people |
CN114404239A (en) * | 2022-01-21 | 2022-04-29 | 池浩 | Blind aid |
US11354907B1 (en) * | 2016-08-10 | 2022-06-07 | Vivint, Inc. | Sonic sensing |
US11416111B2 (en) * | 2018-04-06 | 2022-08-16 | Capital One Services, Llc | Dynamic design of user interface elements |
US20220262074A1 (en) * | 2019-07-19 | 2022-08-18 | Huawei Technologies Co., Ltd. | Interaction Method in Virtual Reality Scenario and Apparatus |
US11432989B2 (en) * | 2020-04-30 | 2022-09-06 | Toyota Jidosha Kabushiki Kaisha | Information processor |
US20220370283A1 (en) * | 2021-05-13 | 2022-11-24 | Toyota Jidosha Kabushiki Kaisha | Walking support system |
US11547608B2 (en) | 2017-06-10 | 2023-01-10 | Manjinder Saini | Comprehensive intraocular vision advancement |
US20230026575A1 (en) * | 2021-07-26 | 2023-01-26 | Google Llc | Augmented reality depth detection through object recognition |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
US20050208457A1 (en) * | 2004-01-05 | 2005-09-22 | Wolfgang Fink | Digital object recognition audio-assistant for the visually impaired |
US20100308999A1 (en) * | 2009-06-05 | 2010-12-09 | Chornenky Todd E | Security and monitoring apparatus |
- 2013-02-19: US application US13/770,560 filed (published as US20130250078A1); status: Abandoned
US9586318B2 (en) * | 2015-02-27 | 2017-03-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Modular robot with smart device |
US20160265917A1 (en) * | 2015-03-10 | 2016-09-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing navigation instructions at optimal times |
US9811752B2 (en) | 2015-03-10 | 2017-11-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable smart device and method for redundant object identification |
US9677901B2 (en) * | 2015-03-10 | 2017-06-13 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing navigation instructions at optimal times |
US9972216B2 (en) | 2015-03-20 | 2018-05-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for storing and playback of information for blind users |
US9898039B2 (en) | 2015-08-03 | 2018-02-20 | Toyota Motor Engineering & Manufacturing North America, Inc. | Modular smart necklace |
US20180243157A1 (en) * | 2015-09-08 | 2018-08-30 | Sony Corporation | Information processing apparatus, information processing method, and program |
US20220331193A1 (en) * | 2015-09-08 | 2022-10-20 | Sony Group Corporation | Information processing apparatus and information processing method |
US11801194B2 (en) * | 2015-09-08 | 2023-10-31 | Sony Group Corporation | Information processing apparatus and information processing method |
US10806658B2 (en) * | 2015-09-08 | 2020-10-20 | Sony Corporation | Information processing apparatus and information processing method |
US11406557B2 (en) * | 2015-09-08 | 2022-08-09 | Sony Corporation | Information processing apparatus and information processing method |
US10113877B1 (en) * | 2015-09-11 | 2018-10-30 | Philip Raymond Schaefer | System and method for providing directional information |
CN105701811A (en) * | 2016-01-12 | 2016-06-22 | 浙江大学 | Sound coding interaction method based on RGB-IR camera |
CN105686936A (en) * | 2016-01-12 | 2016-06-22 | 浙江大学 | Sound coding interaction system based on RGB-IR camera |
WO2017156021A1 (en) * | 2016-03-07 | 2017-09-14 | Wicab, Inc. | Object detection, analysis, and alert system for use in providing visual information to the blind |
CN107157717A (en) * | 2016-03-07 | 2017-09-15 | 维看公司 | Object detection, analysis, and alert system for providing visual information to the blind
EP3427255A4 (en) * | 2016-03-07 | 2019-11-20 | Wicab, INC. | Object detection, analysis, and alert system for use in providing visual information to the blind |
US10024680B2 (en) | 2016-03-11 | 2018-07-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Step based guidance system |
US9958275B2 (en) | 2016-05-31 | 2018-05-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for wearable smart device communications |
US10149101B2 (en) * | 2016-06-02 | 2018-12-04 | Chiun Mai Communication Systems, Inc. | Electronic device and reminder method |
US10561519B2 (en) | 2016-07-20 | 2020-02-18 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable computing device having a curved back to reduce pressure on vertebrae |
US10765588B2 (en) * | 2016-08-05 | 2020-09-08 | Sony Corporation | Information processing apparatus and information processing method |
US20190307632A1 (en) * | 2016-08-05 | 2019-10-10 | Sony Corporation | Information processing device, information processing method, and program |
JPWO2018025531A1 (en) * | 2016-08-05 | 2019-05-30 | Sony Corporation | Information processing apparatus, information processing method, and program
US20200368098A1 (en) * | 2016-08-05 | 2020-11-26 | Sony Corporation | Information processing apparatus, information processing method, and program |
US11744766B2 (en) * | 2016-08-05 | 2023-09-05 | Sony Corporation | Information processing apparatus and information processing method |
US11354907B1 (en) * | 2016-08-10 | 2022-06-07 | Vivint, Inc. | Sonic sensing |
US10432851B2 (en) | 2016-10-28 | 2019-10-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable computing device for detecting photography |
US20180125716A1 (en) * | 2016-11-10 | 2018-05-10 | Samsung Electronics Co., Ltd. | Visual aid display device and method of operating the same |
US11160688B2 (en) * | 2016-11-10 | 2021-11-02 | Samsung Electronics Co., Ltd. | Visual aid display device and method of operating the same |
US10012505B2 (en) | 2016-11-11 | 2018-07-03 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable system for providing walking directions |
US10521669B2 (en) | 2016-11-14 | 2019-12-31 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing guidance or feedback to a user |
US10172760B2 (en) | 2017-01-19 | 2019-01-08 | Jennifer Hendrix | Responsive route guidance and identification system |
US11696853B2 (en) | 2017-06-10 | 2023-07-11 | Manjinder Saini | Intraocular implant device method |
US11564840B2 (en) | 2017-06-10 | 2023-01-31 | Manjinder Saini | Artificial vision intraocular implant device |
US11547608B2 (en) | 2017-06-10 | 2023-01-10 | Manjinder Saini | Comprehensive intraocular vision advancement |
US10617567B2 (en) * | 2017-06-10 | 2020-04-14 | Manjinder Saini | Intraocular implant device |
US10624791B2 (en) * | 2017-06-10 | 2020-04-21 | Manjinder Saini | Artificial vision intraocular implant device |
US10299982B2 (en) * | 2017-07-21 | 2019-05-28 | David M Frankel | Systems and methods for blind and visually impaired person environment navigation assistance |
CN107820615A (en) * | 2017-08-23 | 2018-03-20 | 深圳前海达闼云端智能科技有限公司 | Method, apparatus, and server for sending a prompt message
WO2019059869A3 (en) * | 2017-09-20 | 2019-05-23 | Seyisco Bilisim Elektronik Danismalik Egitim Sanayi Ve Ticaret Anonim Sirketi | Rough road detection system and method |
CN107402018A (en) * | 2017-09-21 | 2017-11-28 | 北京航空航天大学 | Combined path planning method for a blind-guidance apparatus based on successive frames
US11416111B2 (en) * | 2018-04-06 | 2022-08-16 | Capital One Services, Llc | Dynamic design of user interface elements |
GB2575165B (en) * | 2018-05-13 | 2022-07-20 | Oscar Thomas Wood Billy | Object identification system |
GB2575165A (en) * | 2018-05-13 | 2020-01-01 | Oscar Thomas Wood Billy | Object identification system |
US11036391B2 (en) | 2018-05-16 | 2021-06-15 | Universal Studios LLC | Haptic feedback systems and methods for an amusement park ride |
US10900788B2 (en) * | 2018-12-03 | 2021-01-26 | Sidharth ANANTHA | Wearable navigation system for the visually impaired |
US10931916B2 (en) | 2019-04-24 | 2021-02-23 | Sorenson Ip Holdings, Llc | Apparatus, method and computer-readable medium for automatically adjusting the brightness of a videophone visual indicator |
US11017017B2 (en) * | 2019-06-04 | 2021-05-25 | International Business Machines Corporation | Real-time vision assistance |
US20220262074A1 (en) * | 2019-07-19 | 2022-08-18 | Huawei Technologies Co., Ltd. | Interaction Method in Virtual Reality Scenario and Apparatus |
US11798234B2 (en) * | 2019-07-19 | 2023-10-24 | Huawei Technologies Co., Ltd. | Interaction method in virtual reality scenario and apparatus |
CN111161507A (en) * | 2019-12-26 | 2020-05-15 | 潘文娴 | Danger sensing method and device based on a smart wearable device, and smart wearable device
US11432989B2 (en) * | 2020-04-30 | 2022-09-06 | Toyota Jidosha Kabushiki Kaisha | Information processor |
US11938083B2 (en) * | 2021-05-13 | 2024-03-26 | Toyota Jidosha Kabushiki Kaisha | Walking support system |
US20220370283A1 (en) * | 2021-05-13 | 2022-11-24 | Toyota Jidosha Kabushiki Kaisha | Walking support system |
US20230026575A1 (en) * | 2021-07-26 | 2023-01-26 | Google Llc | Augmented reality depth detection through object recognition |
US11935199B2 (en) * | 2021-07-26 | 2024-03-19 | Google Llc | Augmented reality depth detection through object recognition |
CN114404239A (en) * | 2022-01-21 | 2022-04-29 | 池浩 | Blind aid |
Similar Documents
Publication | Title
---|---
US20130250078A1 (en) | Visual aid
US20210081650A1 (en) | Command Processing Using Multimodal Signal Analysis
US9805619B2 (en) | Intelligent glasses for the visually impaired
US9035970B2 (en) | Constraint based information inference
US10571715B2 (en) | Adaptive visual assistive device
US9105210B2 (en) | Multi-node poster location
Tapu et al. | A survey on wearable devices used to assist the visual impaired user navigation in outdoor environments
US20130131985A1 (en) | Wearable electronic image acquisition and enhancement system and method for image acquisition and visual enhancement
US20130177296A1 (en) | Generating metadata for user experiences
CN105431763A (en) | Tracking head movement when wearing mobile device
Patel et al. | Multisensor-based object detection in indoor environment for visually impaired people
KR20190111262A (en) | Portable device for measuring distance from obstacle for blind person
US11670157B2 (en) | Augmented reality system
US9996730B2 (en) | Vision-assist systems adapted for inter-device communication session
Manjari et al. | CREATION: Computational constRained travEl aid for objecT detection in outdoor eNvironment
Madake et al. | A Qualitative and Quantitative Analysis of Research in Mobility Technologies for Visually Impaired People
KR102081193B1 (en) | Walking assistance device for the blind and walking system having it
Botezatu et al. | Development of a versatile assistive system for the visually impaired based on sensor fusion
US20200159318A1 (en) | Information processing device, information processing method, and computer program
Nguyen et al. | A vision aid for the visually impaired using commodity dual-rear-camera smartphones
US11816886B1 (en) | Apparatus, system, and method for machine perception
US20210232219A1 (en) | Information processing apparatus, information processing method, and program
US20240062548A1 (en) | Converting spatial information to haptic and auditory feedback
Udayagini et al. | Smart Cane For Blind People
EP3882894B1 (en) | Seeing aid for a visually impaired individual
Legal Events
Date | Code | Title | Description
---|---|---|---
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION