CN210091198U - Intelligent visual impairment auxiliary system based on distributed multi-source heterogeneous sensing technology


Info

Publication number
CN210091198U
Authority
CN
China
Prior art keywords: sensor, module, camera, RGB camera, distance measuring
Prior art date: 2019-07-09
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201921060939.1U
Other languages
Chinese (zh)
Inventor
徐书凝
胡孟晗
李庆利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2019-07-09
Filing date: 2019-07-09
Publication date: 2020-02-18
Application filed by East China Normal University
Priority to CN201921060939.1U
Application granted
Publication of CN210091198U
Legal status: Active

Landscapes

  • Rehabilitation Tools (AREA)
  • Manipulator (AREA)

Abstract

The utility model discloses an intelligent visual impairment auxiliary system based on a distributed multi-source heterogeneous sensing technology. The system comprises a head-mounted device, a vest, a waistband, a bracelet, a walking stick and a blind guiding robot dog which are arranged in a distributed manner and connected by bidirectional communication, and integrates sensing modules such as an infrared distance measuring sensor, far infrared imaging, GPS, an ultrasonic ranging sensor, a depth camera and an RGB camera. The multi-source heterogeneous sensors cooperate in data acquisition and jointly process the acquired data, so as to meet the different application demands of visually impaired people, such as indoor and outdoor positioning and obstacle avoidance. Compared with the prior art, the utility model provides indoor and outdoor positioning, interactive reading and other functions, and accurately and comprehensively transmits the effective information acquired in complex application scenes, thereby assisting visually impaired people in different environments and helping them avoid obstacles effectively; the structure is simple, the use is convenient, and the reliability is high.

Description

Intelligent visual impairment auxiliary system based on distributed multi-source heterogeneous sensing technology
Technical Field
The utility model belongs to the technical field of multi-mode positioning and text and image recognition processing, and specifically relates to an intelligent visual impairment auxiliary system based on a distributed multi-source heterogeneous sensing technology.
Background
People with visual impairment (including partially sighted and totally blind people) face many inconveniences in daily life because of their visual deficiency. Researchers have therefore worked on assistive devices to improve the quality of life of visually impaired people, and the design of intelligent visual impairment auxiliary systems has become a research focus; many achievements and applications already exist in this field.
Application No. 2013203817903 discloses a walking stick for visually impaired people. As a traditional assistive tool it has a limited detection distance: it can only detect obstacles near the feet and cannot reflect conditions farther away or in the air. In use, a visually impaired person must repeatedly sweep the stick to probe the area ahead, which is time-consuming and tiring.
Application No. 2015205123794 discloses an environment sensor for visually impaired people that combines ultrasonic, laser ranging, binocular vision, laser speckle coding, laser radar, millimeter-wave radar and thermal imaging technologies with the Global Positioning System (GPS). However, the ultrasonic and laser ranging techniques have a limited range and can only perform single-point ranging; the device provides little information, consumes much power, is heavy, offers only an alarm function, and is easily disturbed by the environment. In addition, its centralized sensing architecture brings limitations such as restricted sensing distance and redundant sensing information, while visually impaired people must distinguish their surroundings in changing scenes and therefore place high demands on information acquisition.
A guide dog can help visually impaired people, but the cost of training and keeping one is high and hard for an ordinary family to bear. In some places, such as buses and railway stations, guide dogs are not allowed to accompany visually impaired people, so their assistance is limited.
Most blind assisting tools in the prior art are based on single-source sensing. To solve the above problems, an intelligent visual impairment auxiliary system based on a distributed multi-source heterogeneous sensing technology that can cope with various environmental conditions is urgently needed. At present, no description or report of a similar technology has been found, and no similar data at home or abroad has been collected.
SUMMARY OF THE UTILITY MODEL
The purpose of the utility model is to design, in view of the deficiencies of the prior art, an intelligent visual impairment auxiliary system based on a distributed multi-source heterogeneous sensing technology. Using multi-source heterogeneous sensing in a distributed architecture, the information gathered by the different sensors allows effective information in complex application scenes to be acquired and transmitted accurately and comprehensively, so as to assist visually impaired people in different environments and realize functions such as indoor and outdoor positioning and obstacle avoidance; the structure is simple, the use is convenient, and the reliability is high.
The purpose of the utility model is realized as follows: an intelligent visual impairment auxiliary system based on a distributed multi-source heterogeneous sensing technology, characterized by comprising a head-mounted device, a vest, a waistband, a bracelet, a walking stick and a blind guiding robot dog which are arranged in a distributed manner and connected by bidirectional communication. The head-mounted device is provided with a voice feedback module, a depth camera and a first RGB camera; the vest is provided with a first GPU processor and a first vibration feedback module which are connected with a first CPU processor; the waistband is provided with an annular ultrasonic ranging sensor; the bracelet is provided with a second vibration feedback module and a GPS module which are connected with a Bluetooth module; the walking stick is provided with a wireless positioning module, a fence-shaped laser and an infrared distance measuring sensor; the blind guiding robot dog is composed of a second RGB camera, a far infrared camera, a light supplementing module, a second GPU processor and an ultrasonic ranging sensor which are connected with a second CPU processor. The first GPU processor is in communication connection with the voice feedback module, the depth camera, the first RGB camera and the annular ultrasonic ranging sensor; the Bluetooth module is in communication connection with the annular ultrasonic ranging sensor, the wireless positioning module, the fence-shaped laser and the infrared distance measuring sensor; the second CPU processor is in communication connection with the wireless positioning module, the fence-shaped laser and the infrared distance measuring sensor. The voice feedback module, the first vibration feedback module and the second vibration feedback module feed the obstacle avoidance information obtained by the head-mounted device, the waistband, the walking stick and the blind guiding robot dog back to the user through voice and touch.
The depth camera calculates pixel depths from the matching results to obtain distance information.
The first RGB camera and the second RGB camera acquire orientation information by calculating rotation angles.
The depth camera, the first RGB camera, the second RGB camera, the wireless positioning module and the Bluetooth module realize indoor positioning; the depth camera, the first RGB camera, the second RGB camera, the GPS module, the fence-shaped laser, the infrared distance measuring sensor, the far infrared camera and the ultrasonic ranging sensor realize outdoor positioning.
Compared with the prior art, the utility model provides indoor and outdoor positioning, interactive reading and other functions, accurately and comprehensively transmitting the effective information acquired in complex application scenes, so as to assist visually impaired people in different environments and help them avoid obstacles effectively; the structure is simple, the use is convenient, and the reliability is high.
Drawings
FIG. 1 is a schematic structural view of the utility model;
FIG. 2 is a schematic diagram of the utility model.
Detailed Description
Referring to FIG. 1, the utility model consists of a head-mounted device 1, a vest 2, a waistband 3, a bracelet 4, a walking stick 5 and a blind guiding robot dog 6 which are arranged in a distributed manner and connected by bidirectional communication. The head-mounted device 1 is provided with a voice feedback module 11, a depth camera 12 and a first RGB camera 13; the vest 2 is provided with a first GPU processor 22 and a first vibration feedback module 23 which are connected with a first CPU processor 21; the waistband 3 is provided with an annular ultrasonic ranging sensor 31; the bracelet 4 is provided with a second vibration feedback module 42 and a GPS module 43 which are connected with a Bluetooth module 41; the walking stick 5 is provided with a wireless positioning module 51, a fence-shaped laser 52 and an infrared distance measuring sensor 53; the blind guiding robot dog 6 consists of a second RGB camera 61, a far infrared camera 62, a light supplementing module 63, a second GPU processor 65 and an ultrasonic ranging sensor 66 which are connected with a second CPU processor 64. The first GPU processor 22 is in communication connection with the voice feedback module 11, the depth camera 12, the first RGB camera 13 and the annular ultrasonic ranging sensor 31; the Bluetooth module 41 is in communication connection with the annular ultrasonic ranging sensor 31, the wireless positioning module 51, the fence-shaped laser 52 and the infrared distance measuring sensor 53; the second CPU processor 64 is in communication connection with the wireless positioning module 51, the fence-shaped laser 52 and the infrared distance measuring sensor 53. The voice feedback module 11, the first vibration feedback module 23 and the second vibration feedback module 42 feed the obstacle avoidance information obtained by the head-mounted device 1, the waistband 3, the walking stick 5 and the blind guiding robot dog 6 back to the user through voice and touch. The blind guiding robot dog 6 inputs the acquired data into a depth network model for auxiliary obstacle avoidance. The first RGB camera 13 and the second RGB camera 61 acquire orientation information by calculating rotation angles. The depth camera 12, the first RGB camera 13, the second RGB camera 61, the wireless positioning module 51 and the Bluetooth module 41 realize indoor positioning; the depth camera 12, the first RGB camera 13, the second RGB camera 61, the GPS module 43, the fence-shaped laser 52, the infrared distance measuring sensor 53, the far infrared camera 62 and the ultrasonic ranging sensor 66 realize outdoor positioning.
Referring to FIG. 2, the utility model arranges the multi-source heterogeneous sensors in a distributed architecture: the head-mounted device 1, the vest 2, the waistband 3, the bracelet 4, the walking stick 5 and the blind guiding robot dog 6 are placed from top to bottom at the corresponding positions on the visually impaired person 7, with the sensors laid out among the components accordingly.
The utility model realizes indoor positioning, outdoor positioning, obstacle avoidance and other functions as follows:
(I) Indoor positioning
The visually impaired person 7 obtains a coarse position through the wireless positioning module 51 on the walking stick 5; the first RGB camera 13 acquires orientation information from two-dimensional codes, the depth camera 12 acquires distance information, and the Bluetooth module 41 realizes fine positioning. The acquired information is fed back to the visually impaired person by voice through the voice feedback module 11, thereby realizing indoor positioning.
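The patent leaves the Bluetooth fine-positioning step abstract. One plausible realization, sketched under the assumption of Bluetooth beacons at known indoor coordinates and a log-distance path-loss model (tx_power_dbm and path_loss_exp are assumed calibration constants, not values from the patent), converts RSSI readings to ranges and trilaterates:

```python
import numpy as np

def rssi_to_range(rssi_dbm: float,
                  tx_power_dbm: float = -59.0,
                  path_loss_exp: float = 2.0) -> float:
    """Log-distance path-loss model: estimated range (m) from one RSSI."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(beacons_xy: np.ndarray, ranges_m: np.ndarray) -> np.ndarray:
    """Least-squares position fix from >= 3 beacons at known (x, y).

    Subtracting the first circle equation from the others linearizes the
    system: 2(xi-x0)x + 2(yi-y0)y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2.
    """
    (x0, y0), r0 = beacons_xy[0], ranges_m[0]
    A = 2.0 * (beacons_xy[1:] - beacons_xy[0])
    b = (r0**2 - ranges_m[1:]**2
         + beacons_xy[1:, 0]**2 - x0**2
         + beacons_xy[1:, 1]**2 - y0**2)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy  # estimated user position (x, y)
```

In such a scheme the coarse fix from the wireless positioning module 51 would seed or sanity-check this finer estimate.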
(II) Outdoor positioning
Conventional objects are recognized by the second RGB camera 61; the depth camera 12 acquires two-dimensional distance information, the far infrared camera 62 recognizes heat-source objects, the GPS module 43 acquires position information, the fence-shaped laser 52 detects angular objects, and the infrared distance measuring sensor 53 and the ultrasonic ranging sensor 66 acquire one-dimensional distance information. After a fusion decision, the acquired information is fed back to the visually impaired person by voice through the voice feedback module 11, thereby realizing outdoor positioning.
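The "fusion decision" is not elaborated in the patent. A minimal sketch of one possible scheme, a confidence-weighted combination of overlapping range estimates (the Reading structure and its weights are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Reading:
    sensor: str        # e.g. "ultrasonic", "infrared", "far_infrared"
    distance_m: float  # estimated range to the nearest object
    confidence: float  # 0..1 reliability weight for this sensor and scene

def fuse_distances(readings: List[Reading]) -> Optional[float]:
    """Confidence-weighted average of overlapping range estimates:
    agreeing sensors reinforce each other, unreliable ones are damped."""
    total_w = sum(r.confidence for r in readings)
    if total_w == 0:
        return None
    return sum(r.distance_m * r.confidence for r in readings) / total_w
```

For example, fuse_distances([Reading("ultrasonic", 1.2, 0.9), Reading("infrared", 1.4, 0.6)]) returns 1.28 m, which would then be spoken through the voice feedback module 11.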
(III) Obstacle avoidance
The first RGB camera 13 and the depth camera 12 on the head-mounted device 1 recognize conventional objects and acquire two-dimensional distance information, the monitored height being determined by the field of view of the cameras, and the blind guiding robot dog 6 inputs the acquired data into a depth network model to assist obstacle avoidance. The annular ultrasonic ranging sensor 31 on the waistband 3 and the ultrasonic ranging sensor 66 on the blind guiding robot dog 6 acquire one-dimensional distance information; after threshold judgment of the data they acquire, obstacles at waist height are detected. The fence-shaped laser 52 on the walking stick 5 identifies angular objects and, through image processing, detects obstacles below 0.5 m. The acquired information is fed back to the visually impaired person by voice through the voice feedback module 11, thereby realizing the obstacle avoidance function.
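The threshold judgment itself is straightforward; a minimal sketch, assuming one reading per transducer ordered clockwise around the ring (the names and the 1 m alarm radius are assumptions, not values from the patent):

```python
from typing import List, Tuple

def waist_obstacle_sectors(ring_distances_m: List[float],
                           threshold_m: float = 1.0) -> List[Tuple[int, float]]:
    """Threshold judgment over the annular ultrasonic ring.

    Returns (sector index, bearing in degrees clockwise from front) for
    every sector whose nearest echo lies inside the alarm radius; these
    can be mapped directly to vibration or voice cues.
    """
    n = len(ring_distances_m)
    return [(i, 360.0 * i / n)
            for i, d in enumerate(ring_distances_m) if d < threshold_m]
```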
The first RGB camera 13 and the depth camera 12 on the head-mounted device 1 recognize conventional objects and acquire two-dimensional distance information, which enables obstacle avoidance; the detection height is determined by the field of view of the cameras, the voice feedback module 11 can feed back real-time environment information, and the depth camera 12 calculates pixel depths from the matching results to acquire distance information.
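The patent does not detail how the matching results yield depth. As a minimal sketch, assuming the depth camera is a stereo pair whose matching stage outputs a disparity map (all names and parameters below are illustrative, not from the patent), the triangulation step would be:

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_px: float,
                       baseline_m: float) -> np.ndarray:
    """Triangulate per-pixel depth from a stereo disparity map.

    depth = f * B / d, where f is the focal length in pixels,
    B the stereo baseline in meters and d the matched disparity.
    """
    depth = np.full(disparity_px.shape, np.inf, dtype=np.float32)
    valid = disparity_px > 0  # zero disparity means no match was found
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth
```

Pixels with no match keep an infinite depth, so downstream obstacle logic can treat them as "unknown" rather than as spurious nearby obstacles.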
The first vibration feedback module 23 on the vest 2 gives the visually impaired person vibration feedback during obstacle avoidance and recognition.
The annular ultrasonic ranging sensor 31 on the waistband 3 acquires one-dimensional distance information; after threshold judgment of the acquired data, obstacles at waist height are detected. The annular ultrasonic ranging sensor 31 has strong directivity and an adjustable angle, and acquires the one-dimensional distance information through sound-velocity measurement.
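Sound-velocity measurement here is time-of-flight ranging; a short worked sketch with the standard temperature correction for the speed of sound (the function name and default temperature are assumptions):

```python
def ultrasonic_distance_m(echo_time_s: float, air_temp_c: float = 20.0) -> float:
    """One-dimensional ultrasonic ranging by time of flight.

    The pulse travels out and back, so the one-way distance is v * t / 2,
    with the speed of sound approximated as v = 331.4 + 0.6 * T (m/s).
    """
    v_sound = 331.4 + 0.6 * air_temp_c
    return v_sound * echo_time_s / 2.0
```

At 20 degrees C, an echo arriving after 6 ms corresponds to roughly 1.03 m.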
The GPS module 43 on the bracelet 4 acquires geographical position information, and the second vibration feedback module 42 gives the visually impaired person vibration feedback during obstacle avoidance and recognition.
The infrared distance measuring sensor 53 on the walking stick 5 is a phase-type photoelectric rangefinder that uses infrared light as its source; the light intensity varies with the injected electrical signal, so the emitter serves both as light source and as modulator, and one-dimensional distance information is obtained. The fence-shaped laser 52 cooperates with the second RGB camera 61 to identify angular objects such as stairs and, through image processing, detects obstacles below 0.5 m, thereby realizing the obstacle avoidance function.
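In a phase-type rangefinder the distance follows from the phase lag the modulated light accumulates over the round trip; a short sketch of that relation (the modulation frequency is an assumed parameter, the patent specifies none):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s (vacuum value, close enough in air)

def phase_ranging_distance_m(phase_shift_rad: float,
                             mod_freq_hz: float) -> float:
    """Distance from the phase lag of intensity-modulated light.

    Round trip: delta_phi = 2*pi*f*(2d/c), hence d = c*delta_phi/(4*pi*f).
    The result is unambiguous only within half a modulation wavelength,
    c / (2*f).
    """
    return C_LIGHT * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

At a 10 MHz modulation frequency the unambiguous range is about 15 m, which comfortably covers walking-stick distances.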
The blind guiding robot dog 6 inputs the acquired data into a depth network model for auxiliary obstacle avoidance. The far infrared camera 62 can identify heat-source objects and can recognize objects at night, although the sharpness of the imaged objects is lower than under visible light. The ultrasonic ranging sensor 66 has strong directivity and an adjustable angle, and acquires one-dimensional distance information through sound-velocity measurement.
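The patent names a "depth network model" without specifying its architecture. As one illustrative stand-in (explicitly not the patent's model), a pretrained detector from torchvision could flag obstacles in the robot dog's RGB frames:

```python
import torch
import torchvision

# Illustrative stand-in for the unspecified "depth network model":
# a COCO-pretrained two-stage detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_obstacles(frame_chw: torch.Tensor, score_thresh: float = 0.6):
    """Return boxes and class labels of confident detections in one frame.

    frame_chw: float tensor, shape (3, H, W), values in [0, 1].
    """
    with torch.no_grad():
        out = model([frame_chw])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep]
```

Detections near the robot dog's path would then be merged with the ultrasonic readings before being voiced or vibrated to the user.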
The utility model is further described above but is not limited thereto; all equivalent implementations of the utility model are intended to be encompassed by the scope of the claims of this patent.

Claims (4)

1. An intelligent visual impairment auxiliary system based on a distributed multi-source heterogeneous sensing technology, characterized by comprising a head-mounted device, a vest, a waistband, a bracelet, a walking stick and a blind guiding robot dog which are arranged in a distributed manner and connected by bidirectional communication, wherein the head-mounted device is provided with a voice feedback module, a depth camera and a first RGB camera; the vest is provided with a first GPU processor and a first vibration feedback module which are connected with a first CPU processor; the waistband is provided with an annular ultrasonic ranging sensor; the bracelet is provided with a second vibration feedback module and a GPS module which are connected with a Bluetooth module; the walking stick is provided with a wireless positioning module, a fence-shaped laser and an infrared distance measuring sensor; the blind guiding robot dog is composed of a second RGB camera, a far infrared camera, a light supplementing module, a second GPU processor and an ultrasonic ranging sensor which are connected with a second CPU processor; the first GPU processor is in communication connection with the voice feedback module, the depth camera, the first RGB camera and the annular ultrasonic ranging sensor; the Bluetooth module is in communication connection with the annular ultrasonic ranging sensor, the wireless positioning module, the fence-shaped laser and the infrared distance measuring sensor; the second CPU processor is in communication connection with the wireless positioning module, the fence-shaped laser and the infrared distance measuring sensor; and the voice feedback module, the first vibration feedback module and the second vibration feedback module feed back the obstacle avoidance information obtained by the head-mounted device, the waistband, the walking stick and the blind guiding robot dog to the user through voice and touch.
2. The intelligent visual impairment auxiliary system based on the distributed multi-source heterogeneous sensing technology as claimed in claim 1, wherein the depth camera calculates pixel depths from the matching results to obtain distance information.
3. The intelligent visual impairment auxiliary system based on the distributed multi-source heterogeneous sensing technology as claimed in claim 1, wherein the first RGB camera and the second RGB camera acquire orientation information by calculating rotation angles.
4. The intelligent visual impairment auxiliary system based on the distributed multi-source heterogeneous sensing technology as claimed in any one of claims 1 to 3, wherein the depth camera, the first RGB camera, the second RGB camera, the wireless positioning module and the Bluetooth module realize indoor positioning, and the depth camera, the first RGB camera, the second RGB camera, the GPS module, the fence-shaped laser, the infrared distance measuring sensor, the far infrared camera and the ultrasonic ranging sensor realize outdoor positioning.
CN201921060939.1U | priority date 2019-07-09 | filing date 2019-07-09 | Intelligent visual impairment auxiliary system based on distributed multi-source heterogeneous sensing technology | Active | granted as CN210091198U (en)

Priority Applications (1)

Application Number: CN201921060939.1U | Priority Date: 2019-07-09 | Filing Date: 2019-07-09 | Title: Intelligent visual impairment auxiliary system based on distributed multi-source heterogeneous sensing technology


Publications (1)

Publication Number: CN210091198U | Publication Date: 2020-02-18

Family

ID=69485270

Family Applications (1)

Application Number: CN201921060939.1U | Status: Active | Publication: CN210091198U (en) | Title: Intelligent visual impairment auxiliary system based on distributed multi-source heterogeneous sensing technology

Country Status (1)

Country: CN | CN210091198U (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321860A (en) * 2019-07-09 2019-10-11 华东师范大学 A kind of intelligence auxiliary system visually impaired based on distributed multi-source heterogeneous sensing technology
CN113332110A (en) * 2021-06-02 2021-09-03 西京学院 Scene auditory perception-based blind-guiding flashlight and blind-guiding method
CN113332110B (en) * 2021-06-02 2023-06-27 西京学院 Blind guiding flashlight and blind guiding method based on scenery auditory perception

Similar Documents

Publication Publication Date Title
CN110321860A (en) A kind of intelligence auxiliary system visually impaired based on distributed multi-source heterogeneous sensing technology
CN106214437B (en) A kind of intelligent blind auxiliary eyeglasses
CN210091198U (en) Intelligent visual impairment auxiliary system based on distributed multi-source heterogeneous sensing technology
CN103135754B (en) Adopt interactive device to realize mutual method
CN110147106A (en) Has the intelligent Mobile Service robot of laser and vision fusion obstacle avoidance system
WO2017219751A1 (en) Mobile suitcase having automatic following and obstacle avoidance functions, and using method therefor
CN108536145A (en) A kind of robot system intelligently followed using machine vision and operation method
US7755744B1 (en) Environment sensor that conveys information about objects in the vicinity of the visually impaired user
CN208953962U (en) A kind of robot tracking control and robot
US11755030B2 (en) Mapping and tracking system for robots
CN106441306A (en) Intelligent life detecting robot with capabilities of independent positioning and map building
WO2018058936A1 (en) Blind guiding belt
CN105044754A (en) Mobile platform outdoor positioning method based on multi-sensor fusion
JPWO2019065045A1 (en) Object search system, object search device, and object search method
CN102636152B (en) Active visual ranging system of movable platform
CN112870033A (en) Intelligent blind guiding helmet system for unstructured road and navigation method
CN110584962A (en) Combined obstacle-detection intelligent blind-guiding system
CN106898249A (en) A kind of map structuring system and its construction method for earthquake-stricken area communication failure region
CN109998873A (en) A kind of wearable blindmen intelligent positioning and blind guiding system
CN106038183A (en) Wearable device and navigation system for blind people
CN107588780A (en) A kind of intelligent blind guiding system
CN110623820A (en) Blind device is led to wearable intelligence
CN111840016A (en) Flexible and configurable intelligent navigation device for blind people
CN107976685A (en) A kind of infrared sensor indoor human body Target Tracking System based on Internet of Things
CN111035543A (en) Intelligent blind guiding robot

Legal Events

Date Code Title Description
GR01 Patent grant