CN114712181B - Blind person obstacle avoidance navigation system based on visual SLAM - Google Patents

Blind person obstacle avoidance navigation system based on visual SLAM

Info

Publication number
CN114712181B
CN114712181B (application CN202210489717.1A)
Authority
CN
China
Prior art keywords
module, map, obstacle, planner, local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210489717.1A
Other languages
Chinese (zh)
Other versions
CN114712181A (en)
Inventor
李昭斌 (Li Zhaobin)
刘逸辰 (Liu Yichen)
芮楚梵 (Rui Chufan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN202210489717.1A priority Critical patent/CN114712181B/en
Publication of CN114712181A publication Critical patent/CN114712181A/en
Application granted granted Critical
Publication of CN114712181B publication Critical patent/CN114712181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 Walking aids for blind persons
    • A61H3/061 Walking aids for blind persons with electronic detecting or guiding means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 Walking aids for blind persons
    • A61H3/061 Walking aids for blind persons with electronic detecting or guiding means
    • A61H2003/063 Walking aids for blind persons with electronic detecting or guiding means with tactile perception

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Development Economics (AREA)
  • Biomedical Technology (AREA)
  • Game Theory and Decision Science (AREA)
  • Artificial Intelligence (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)

Abstract

The invention discloses a blind person obstacle avoidance navigation system based on visual SLAM, comprising a data collection module, an obstacle avoidance navigation module and a scene reproduction module. The data collection module comprises two RGBD sensors and a development board; the two RGBD sensors collect image information and distance information in front of and below the device. The obstacle avoidance navigation module comprises a local planner, a global planner and a vibration device; at each moment, the local obstacle map for the current moment is acquired from the data collection module and transmitted to the local planner and the global planner. The scene reproduction module comprises a receiver and an earphone, and finally outputs object information to the earphone. The invention enhances the convenience and functionality of the blind guiding device through multi-sensory tactile and auditory prompts, while improving the accuracy and speed of prediction.

Description

Blind person obstacle avoidance navigation system based on visual SLAM
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a blind person obstacle avoidance navigation system based on visual SLAM.
Background
With the progress of society, the amount of furniture and electronic equipment in people's homes has gradually increased, while outdoors, road infrastructure, vehicles of all kinds, and street furniture such as signboards, street lamps and trees are visible everywhere. For the average person this means a richer life; for the blind, these items can become "stumbling stones" during travel, implying danger and trouble. Owing to unbalanced development between regions, outdoor tactile paving is not universally available or fully used. Going out freely and "seeing" the scene ahead is thus difficult for people with visual disabilities.
The existing aids for blind travel are the traditional white cane and the newer electronic cane. The traditional cane is a long white stick consisting of a wrist strap, a handle, a shaft and a tip. By sweeping it left and right to scan for obstacles and tapping the ground to judge height differences ahead, the blind user can travel safely. The electronic cane augments the traditional cane by adding devices such as sensors.
CN107390703A discloses an intelligent blind guiding robot and a blind guiding method thereof, comprising a vehicle body, driving wheels, a radar module, a camera module and a guiding rod. The blind guiding method uses visual SLAM to build a map of the surrounding scene for navigation, and the robot pulls the blind user along via the guiding rod. However, the robot suffers from precision and safety problems when towing the user, and the user learns nothing about the surrounding environment and obstacles.
CN110623820A discloses a wearable intelligent blind guiding device composed of a 3D infrared imaging camera, an SoC chip system, an ultrasonic matrix radar, a GPS module and a natural-language interaction processor; it receives the user's instructions and prompts obstacle avoidance information by voice. However, voice cannot describe environmental obstacle information accurately in real time, and large angular errors arise when prompting the user which way to move forward.
On the basis of the traditional white cane, the invention adds the following functions: obstacle avoidance navigation and scene reproduction. First, the invention uses two RGBD sensors to detect obstacles in front and below for local obstacle avoidance, then builds a global map with a SLAM algorithm to guide the blind user, indicating the best direction through a vibration device. It uses the YOLO algorithm to identify object types and realizes scene reproduction for the user through 3D sound technology.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a visual SLAM-based blind person obstacle avoidance navigation system which can perform local obstacle avoidance for the blind user in complex indoor environments, navigate by building a three-dimensional environment map, and at the same time announce the type and direction of objects in the user's surroundings using target recognition and 3D sound technology.
The technical scheme is as follows: a blind person obstacle avoidance navigation system based on visual SLAM comprises a data collection module, an obstacle avoidance navigation module and a scene reproduction module;
the data collection module comprises two RGBD sensors and a development board, the two RGBD sensors being used for collecting image information and distance information in front of and below the device;
the obstacle avoidance navigation module comprises a local planner, a global planner and a vibration device; at each moment the local obstacle map for the current moment is acquired from the data collection module and transmitted to the local planner and the global planner;
the local planner applies filtering, point cloud projection and dilation to the local obstacle map and finally calculates the currently feasible advancing directions, from which the blind user can choose independently;
the global planner inputs the local obstacle map into a SLAM system; the system identifies the frame-by-frame images using feature vectors created from image key points, obtains mapping information in three-dimensional space through tracking, mapping and closed-loop detection, and maintains a global navigation map through the SLAM system; a path planning algorithm is then used on the global navigation map to calculate an optimal path from the current position to the destination, the initial direction of which is the optimal direction output by the global planner;
the scene reproduction module comprises a receiver and an earphone; the receiver receives the local obstacle map from the data collection module and inputs the image information into a YOLO system for target detection and direction calculation; the YOLO system uses a convolutional network to extract features from the image and identify target objects, outputting the type and position information of the objects in the image; finally the object information is output to the earphone through 3D sound technology.
The vibration device is used for outputting obstacle information and comprises six solenoid vibrators arranged in two rows and three columns: the first row corresponds to the forward sensing direction and the second row to the downward sensing direction, and within each row the three solenoids correspond to the obstacle conditions to the front-left, directly ahead and front-right respectively. When a solenoid vibrates, the obstacle in that direction cannot be passed or the direction is not optimal; the direction represented by a solenoid that does not vibrate is the optimal advancing direction obtained by fusing the results of the local planner and the global planner.
The local planner processes the local obstacle map data using filtering techniques, comprising the following steps:
a. pass-through filtering: points whose depth exceeds the obstacle-avoidance detection distance threshold are filtered out, the threshold being set to 3 m;
b. height filtering: points whose height lies outside [0, 0.5 m] are filtered out;
c. statistical filtering: when the point density at a location is below 50 points per cubic meter, the points there are invalidated;
in the local planner, the depth information detected by the RGBD sensor is specially processed: if obstacles below with a height greater than h are to be avoided, then, assuming the deflection angle of the camera is θ and the height of the cane above the ground is H, the depth upper limit is calculated as

depth_max = (H - h) / sin θ
This depth upper limit is the obstacle-avoidance detection distance threshold of step a. After the first-step pass-through filtering, a point cloud projection is obtained by mapping each point (x, y, depth) in three-dimensional space to the plane point

(u, v) = (x / scale, depth / scale)

where scale represents the resolution. A matrix (Mat) in OpenCV then stores the filtered point cloud projection: pixel values of mapped points are all set to 255 and unmapped points to 0, so the mapped points are obstacle points and the unmapped points are free points. The point cloud projection is dilated to prevent the blind user from colliding with obstacle boundaries, and angle sampling is then performed on it: from the sensor position, n rays in different directions are emitted as sampling rays; for each sampling ray the number of obstacle points on the ray is counted, and the rays are divided into feasible and infeasible rays according to the ratio of obstacle points to the total number of points on the ray. After sampling, control information is generated from the angle-sampling result. Since the two directions, front and below, are each divided into left, middle and right sub-directions, each direction is provided with three vibration motors, one per sub-direction; the 180° region of each direction is divided equally into 0-60°, 60-120° and 120-180° sectors corresponding to the three sub-directions of the vibration device. For the region in each direction, the ratio of the number of feasible sampling rays to the total number of sampling rays is calculated, and if the ratio is below a threshold the corresponding vibration motor vibrates.
The SLAM system comprises a tracking module, a mapping module and a closed-loop detection module. The tracking module extracts ORB features from the image, performs pose estimation by feature matching against the previous image frame detected by the RGBD sensor, and determines the pose information and key frames. The mapping module verifies and screens the most recently generated map feature points. The closed-loop module is divided into two parts, closed-loop detection and closed-loop correction, which eliminate accumulated error: closed-loop detection first detects candidate loops with DBoW2 and then computes a similarity transformation with the Sim3 algorithm; finally, a dense map is generated by drawing lines to the obstacle feature points with the Bresenham line-drawing algorithm.
The global planner uses a path planning algorithm to calculate the shortest path between two points on the dense map. The path planning algorithm obtains the map information of the current environment and the position of the camera in the map through the SLAM system, and specifically plans the path with the A* algorithm. A* uses the Manhattan distance from the current point to the goal as the key of a priority queue and preferentially visits the point with the smaller Manhattan distance; the path along which the goal is first reached is the final result. To avoid the problem of local optima, the path planning algorithm records the shortest path from the start to the current point and updates the shortest-path records of other points each time a point is expanded.
The scene reproduction module uses the YOLO algorithm to realize target detection. For each video frame, the image is first divided into an S × S grid; for each grid cell, a convolutional neural network (CNN) outputs a vector comprising B tuples (x, y, h, w, c) together with a one-hot coded classification representing the class of the object in the cell, where c denotes the confidence and (x, y, h, w) denote the positions and sizes of the B = 5 candidate boxes in the cell. The confidence is calculated as

c = Pr(Object) × IOU(truth, pred)

where Pr(Object) indicates whether an object is present in the cell, and IOU(truth, pred) is the intersection-over-union of the actual and predicted candidate boxes, i.e. the ratio of the intersection to the union of the two rectangular areas, which measures their similarity. The NMS (non-maximum suppression) algorithm then removes redundant candidate boxes, keeping the one with the higher confidence whenever two candidate boxes intersect. The target detection result is finally obtained and output to the earphone, realizing scene reproduction.
Beneficial effects: compared with the prior art, the invention enhances the convenience and functionality of the blind guiding device through multi-sensory tactile and auditory prompts, while improving the accuracy and speed of prediction. At every moment the blind user receives direction information from the hand-held vibration device and only needs to move toward a non-vibrating direction to reach the destination; at the same time, the user receives 3D sound information through the earphone giving the types and directions of surrounding objects. This solves the problems of poor blind-guiding effect and a single prompting channel.
Detailed Description
In practical use, the blind user turns on the device switch, inputs the navigation mode and destination by voice, then takes the cane and holds the vibration device, and navigation begins. During navigation, the user avoids the vibrating directions and advances in the non-vibrating direction. While walking, the user hears information about surrounding objects, which helps them become familiar with the environment.
A blind person obstacle avoidance navigation system based on visual SLAM comprises a data collection module, an obstacle avoidance navigation module and a scene reproduction module;
the data collection module comprises two RGBD sensors and a development board, the two RGBD sensors being used for collecting image information and distance information in front of and below the device;
the obstacle avoidance navigation module comprises a local planner, a global planner and a vibration device; at each moment the local obstacle map for the current moment is acquired from the data collection module and transmitted to the local planner and the global planner;
the local planner applies filtering, point cloud projection and dilation to the local obstacle map and finally calculates the currently feasible advancing directions, from which the blind user can choose independently;
the global planner inputs the local obstacle map into a SLAM system; the system identifies the frame-by-frame images using feature vectors created from image key points, obtains mapping information in three-dimensional space through tracking, mapping and closed-loop detection, and maintains a global navigation map through the SLAM system; a path planning algorithm is then used on the global navigation map to calculate an optimal path from the current position to the destination, the initial direction of which is the optimal direction output by the global planner;
the scene reproduction module comprises a receiver and an earphone; the receiver receives the local obstacle map from the data collection module and inputs the image information into a YOLO system for target detection and direction calculation; the YOLO system uses a convolutional network to extract features from the image and identify target objects, outputting the type and position information of the objects in the image; finally the object information is output to the earphone through 3D sound technology.
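The patent does not spell out how an object's direction is computed from the detection; one plausible sketch, assuming a pinhole camera model and an illustrative horizontal field of view (the function name and FOV value are assumptions, not from the patent), maps the bounding-box center to an azimuth for the 3D sound output:

```python
import math

def object_azimuth(cx, image_width, hfov_deg=69.0):
    """Horizontal angle of an object from the camera axis, in degrees.

    cx: x-coordinate of the detected bounding-box center in pixels.
    Pinhole model: the pixel offset from the image center maps to an
    angle through the focal length implied by the horizontal FOV."""
    f = (image_width / 2) / math.tan(math.radians(hfov_deg) / 2)
    return math.degrees(math.atan((cx - image_width / 2) / f))

# e.g. object_azimuth(960, 1280) > 0: the object lies to the right
# of straight ahead, and the 3D sound source would be panned there.
```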
The vibration device is used for outputting obstacle information and comprises six solenoid vibrators arranged in two rows and three columns: the first row corresponds to the forward sensing direction and the second row to the downward sensing direction, and within each row the three solenoids correspond to the obstacle conditions to the front-left, directly ahead and front-right respectively. When a solenoid vibrates, the obstacle in that direction cannot be passed or the direction is not optimal; the direction represented by a solenoid that does not vibrate is the optimal advancing direction obtained by fusing the results of the local planner and the global planner.
The local planner processes the local obstacle map data using filtering techniques, comprising the following steps:
a. pass-through filtering: points whose depth exceeds the obstacle-avoidance detection distance threshold are filtered out, the threshold being set to 3 m;
b. height filtering: points whose height lies outside [0, 0.5 m] are filtered out;
c. statistical filtering: when the point density at a location is below 50 points per cubic meter, the points there are invalidated (a minimal sketch of the three filters follows this list);
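A minimal NumPy sketch of steps a-c and of the depth limit derived below. The voxel size used for the density check, the illustrative values of H, h and θ, and the function name are assumptions for demonstration, not values from the patent:

```python
import math
import numpy as np

# Depth upper limit from the geometry described below, with illustrative
# values: cane height H = 1.0 m, obstacle height h = 0.10 m, tilt 30 deg.
H, h, theta = 1.0, 0.10, math.radians(30.0)
DEPTH_MAX = (H - h) / math.sin(theta)        # = 1.8 m here; patent uses 3 m

def filter_cloud(points, depth_max=3.0, height_range=(0.0, 0.5),
                 voxel=0.5, min_density=50.0):
    """Apply steps a-c to an (N, 3) array of (x, y, depth) in meters."""
    # a. Pass-through filter: drop points beyond the detection distance.
    points = points[points[:, 2] <= depth_max]
    # b. Height filter: keep only points whose height lies in [0, 0.5] m.
    keep = (points[:, 1] >= height_range[0]) & (points[:, 1] <= height_range[1])
    points = points[keep]
    # c. Statistical filter: drop points in voxels whose point density
    #    falls below 50 points per cubic meter.
    cells = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(cells, axis=0, return_inverse=True,
                               return_counts=True)
    density = counts[inv] / voxel ** 3       # points per cubic meter
    return points[density >= min_density]
```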
in the local planner, the depth information detected by the RGBD sensor is specially processed: if obstacles below with a height greater than h are to be avoided, then, assuming the deflection angle of the camera is θ and the height of the cane above the ground is H, the depth upper limit is calculated as

depth_max = (H - h) / sin θ
This depth upper limit is the obstacle-avoidance detection distance threshold of step a. After the first-step pass-through filtering, a point cloud projection is obtained by mapping each point (x, y, depth) in three-dimensional space to the plane point

(u, v) = (x / scale, depth / scale)

where scale represents the resolution. A matrix (Mat) in OpenCV then stores the filtered point cloud projection: pixel values of mapped points are all set to 255 and unmapped points to 0, so the mapped points are obstacle points and the unmapped points are free points. The point cloud projection is dilated to prevent the blind user from colliding with obstacle boundaries, and angle sampling is then performed on it: from the sensor position, n rays in different directions are emitted as sampling rays; for each sampling ray the number of obstacle points on the ray is counted, and the rays are divided into feasible and infeasible rays according to the ratio of obstacle points to the total number of points on the ray. After sampling, control information is generated from the angle-sampling result. Since the two directions, front and below, are each divided into left, middle and right sub-directions, each direction is provided with three vibration motors, one per sub-direction; the 180° region of each direction is divided equally into 0-60°, 60-120° and 120-180° sectors corresponding to the three sub-directions of the vibration device. For the region in each direction, the ratio of the number of feasible sampling rays to the total number of sampling rays is calculated, and if the ratio is below a threshold the corresponding vibration motor vibrates, as sketched below.
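Under the same assumptions, the projection, dilation and angle-sampling step might look as follows; the grid size, dilation kernel, ray count and the two ratio thresholds are illustrative choices, with OpenCV's dilate standing in for the expansion the patent describes:

```python
import numpy as np
import cv2

def feasible_sectors(points, scale=0.05, grid=128, n_rays=180,
                     ray_obstacle_ratio=0.2, sector_threshold=0.5):
    """Project filtered (x, y, depth) points to a grid and sample rays.

    Returns three booleans for the 0-60, 60-120 and 120-180 degree
    sectors; True means the sector is passable (its motor stays still)."""
    proj = np.zeros((grid, grid), dtype=np.uint8)
    # Map (x, y, depth) -> (x/scale, depth/scale); the sensor sits at
    # the bottom-center cell of the grid.
    col = (points[:, 0] / scale).astype(int) + grid // 2
    row = grid - 1 - (points[:, 2] / scale).astype(int)
    ok = (col >= 0) & (col < grid) & (row >= 0) & (row < grid)
    proj[row[ok], col[ok]] = 255             # mapped cells = obstacles

    # Dilation: grow obstacles so the user keeps a safety margin.
    proj = cv2.dilate(proj, np.ones((5, 5), np.uint8))

    origin = np.array([grid - 1.0, grid / 2.0])
    feasible = np.zeros(n_rays, dtype=bool)
    for i in range(n_rays):
        ang = np.pi * i / (n_rays - 1)       # 0 = right ... pi = left
        step = np.array([-np.sin(ang), np.cos(ang)])  # (row, col) per cell
        hits = total = 0
        for t in range(1, grid):
            r, c = (origin + t * step).astype(int)
            if not (0 <= r < grid and 0 <= c < grid):
                break
            total += 1
            hits += int(proj[r, c] > 0)
        # A ray is feasible when its share of obstacle cells is small.
        feasible[i] = total > 0 and hits / total < ray_obstacle_ratio

    # Each 60-degree sector maps to one vibration motor of this direction.
    return [sector.mean() >= sector_threshold
            for sector in np.array_split(feasible, 3)]
```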
The SLAM system comprises a tracking module, a mapping module and a closed-loop detection module. The tracking module extracts ORB features from the image, performs pose estimation by feature matching against the previous image frame detected by the RGBD sensor, and determines the pose information and key frames. The mapping module verifies and screens the most recently generated map feature points. The closed-loop module is divided into two parts, closed-loop detection and closed-loop correction, which eliminate accumulated error: closed-loop detection first detects candidate loops with DBoW2 and then computes a similarity transformation with the Sim3 algorithm; finally, a dense map is generated by drawing lines to the obstacle feature points with the Bresenham line-drawing algorithm, sketched below.
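The Bresenham step named above is the textbook integer line-rasterization algorithm. A sketch (the grid names in the usage comment are hypothetical) that yields every cell on the segment between the camera cell and an obstacle feature point:

```python
def bresenham(x0, y0, x1, y1):
    """Yield the integer grid cells on the segment (x0, y0)-(x1, y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:          # step in x
            err += dy
            x0 += sx
        if e2 <= dx:          # step in y
            err += dx
            y0 += sy

# Marking a line of cells in a dense map could then look like:
# for x, y in bresenham(cam_x, cam_y, obs_x, obs_y):
#     dense_map[y][x] = OCCUPIED
```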
The global planner uses a path planning algorithm to calculate the shortest path between two points on the dense map. The path planning algorithm obtains the map information of the current environment and the position of the camera in the map through the SLAM system, and specifically plans the path with the A* algorithm. A* uses the Manhattan distance from the current point to the goal as the key of a priority queue and preferentially visits the point with the smaller Manhattan distance; the path along which the goal is first reached is the final result. To avoid the problem of local optima, the path planning algorithm records the shortest path from the start to the current point and updates the shortest-path records of other points each time a point is expanded, as in the sketch below.
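A compact sketch of the described search on a 4-connected occupancy grid, using the Manhattan distance to the goal as the priority key and recording the best known cost from the start to each point; the grid encoding (0 = free, 1 = obstacle) is an assumption:

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    best = {start: 0}                  # shortest known cost from start
    parent = {}
    heap = [(h(start), 0, start)]      # (f = g + h, g, cell)
    while heap:
        f, g, cur = heapq.heappop(heap)
        if cur == goal:                # goal reached first -> final path
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        if g > best.get(cur, float("inf")):
            continue                   # stale priority-queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1      # update the shortest-path record
                parent[nxt] = cur
                heapq.heappush(heap, (g + 1 + h(nxt), g + 1, nxt))
    return None                        # no path exists
```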
The scene reproduction module uses the YOLO algorithm to realize target detection. For each video frame, the image is first divided into an S × S grid; for each grid cell, a convolutional neural network (CNN) outputs a vector comprising B tuples (x, y, h, w, c) together with a one-hot coded classification representing the class of the object in the cell, where c denotes the confidence and (x, y, h, w) denote the positions and sizes of the B = 5 candidate boxes in the cell. The confidence is calculated as

c = Pr(Object) × IOU(truth, pred)

where Pr(Object) indicates whether an object is present in the cell, and IOU(truth, pred) is the intersection-over-union of the actual and predicted candidate boxes, i.e. the ratio of the intersection to the union of the two rectangular areas, which measures their similarity. The NMS (non-maximum suppression) algorithm then removes redundant candidate boxes, keeping the one with the higher confidence whenever two candidate boxes intersect. The target detection result is finally obtained and output to the earphone, realizing scene reproduction. The IoU and NMS steps are sketched below.
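The intersection-over-union and non-maximum suppression can be sketched as follows, with boxes given as YOLO-style center tuples (x, y, w, h); the 0.5 IoU threshold is an illustrative choice:

```python
def iou(a, b):
    """IoU of two boxes given as (cx, cy, w, h) center tuples."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, confidences, thresh=0.5):
    """Keep the higher-confidence box whenever two boxes overlap."""
    order = sorted(range(len(boxes)), key=lambda i: -confidences[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```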

Claims (5)

1. A blind person obstacle avoidance navigation system based on visual SLAM, characterized by comprising a data collection module, an obstacle avoidance navigation module and a scene reproduction module;
the data collection module comprises two RGBD sensors and a development board, the two RGBD sensors being used for collecting image information and distance information in front of and below the device;
the obstacle avoidance navigation module comprises a local planner, a global planner and a vibration device; at each moment the local obstacle map for the current moment is acquired from the data collection module and transmitted to the local planner and the global planner;
the local planner applies filtering, point cloud projection and dilation to the local obstacle map and finally calculates the currently feasible advancing directions, from which the blind user can choose independently;
the local planner processes the local obstacle map data using filtering techniques, comprising the following steps:
a. pass-through filtering: points whose depth exceeds the obstacle-avoidance detection distance threshold are filtered out, the threshold being set to 3 m;
b. height filtering: points whose height lies outside [0, 0.5 m] are filtered out;
c. statistical filtering: when the point density at a location is below 50 points per cubic meter, the points there are invalidated;
in the local planner, the depth information detected by the RGBD sensor is specially processed: if obstacles below with a height greater than h are to be avoided, then, assuming the deflection angle of the camera is θ and the height of the cane above the ground is H, the depth upper limit is calculated as

depth_max = (H - h) / sin θ
this depth upper limit is the obstacle-avoidance detection distance threshold of step a; after the first-step pass-through filtering, a point cloud projection is obtained by mapping each point (x, y, depth) in three-dimensional space to the plane point

(u, v) = (x / scale, depth / scale)

where scale represents the resolution; a matrix (Mat) in OpenCV then stores the filtered point cloud projection: pixel values of mapped points are all set to 255 and unmapped points to 0, so the mapped points are obstacle points and the unmapped points are free points; the point cloud projection is dilated to prevent the blind user from colliding with obstacle boundaries, and angle sampling is then performed on it: from the sensor position, n rays in different directions are emitted as sampling rays; for each sampling ray the number of obstacle points on the ray is counted, and the rays are divided into feasible and infeasible rays according to the ratio of obstacle points to the total number of points on the ray; after sampling, control information is generated from the angle-sampling result; since the two directions, front and below, are each divided into left, middle and right sub-directions, each direction is provided with three vibration motors, one per sub-direction; the 180° region of each direction is divided equally into 0-60°, 60-120° and 120-180° sectors corresponding to the three sub-directions of the vibration device; for the region in each direction, the ratio of the number of feasible sampling rays to the total number of sampling rays is calculated, and if the ratio is below a threshold the corresponding vibration motor vibrates;
the global planner inputs the local obstacle map into a SLAM system; the system identifies the frame-by-frame images using feature vectors created from image key points, obtains mapping information in three-dimensional space through tracking, mapping and closed-loop detection, and maintains a global navigation map through the SLAM system; a path planning algorithm is then used on the global navigation map to calculate an optimal path from the current position to the destination, the initial direction of which is the optimal direction output by the global planner;
the scene reproduction module comprises a receiver and an earphone; the receiver receives the local obstacle map from the data collection module and inputs the image information into a YOLO system for target detection and direction calculation; the YOLO system uses a convolutional network to extract features from the image and identify target objects, outputting the type and position information of the objects in the image; finally the object information is output to the earphone through 3D sound technology.
2. The blind person obstacle avoidance navigation system based on visual SLAM as claimed in claim 1, wherein the vibration device is used for outputting obstacle information and comprises six solenoid vibrators arranged in two rows and three columns: the first row corresponds to the forward sensing direction and the second row to the downward sensing direction, and within each row the three solenoids correspond to the obstacle conditions to the front-left, directly ahead and front-right respectively; when a solenoid vibrates, the obstacle in that direction cannot be passed or the direction is not optimal, and the direction represented by a solenoid that does not vibrate is the optimal advancing direction obtained by fusing the results of the local planner and the global planner.
3. The blind person obstacle avoidance navigation system based on visual SLAM as claimed in claim 1, wherein the SLAM system comprises a tracking module, a mapping module and a closed-loop detection module; the tracking module extracts ORB features from the image, performs pose estimation by feature matching against the previous image frame detected by the RGBD sensor, and determines the pose information and key frames; the mapping module verifies and screens the most recently generated map feature points; the closed-loop module is divided into two parts, closed-loop detection and closed-loop correction, which eliminate accumulated error: closed-loop detection first detects candidate loops with DBoW2 and then computes a similarity transformation with the Sim3 algorithm; finally, a dense map is generated by drawing lines to the obstacle feature points with the Bresenham line-drawing algorithm.
4. The blind person obstacle avoidance navigation system based on visual SLAM as claimed in claim 1, wherein the global planner uses a path planning algorithm to calculate the shortest path between two points on the dense map; the path planning algorithm obtains the map information of the current environment and the position of the camera in the map through the SLAM system, and specifically plans the path with the A* algorithm; A* uses the Manhattan distance from the current point to the goal as the key of a priority queue and preferentially visits the point with the smaller Manhattan distance, and the path along which the goal is first reached is the final result.
5. The blind person obstacle avoidance navigation system based on visual SLAM as claimed in claim 1, wherein the scene reproduction module uses the YOLO algorithm to realize target detection; for each video frame, the image is first divided into an S × S grid; for each grid cell, a convolutional neural network (CNN) outputs a vector comprising B tuples (x, y, h, w, c) together with a one-hot coded classification representing the class of the object in the cell, where c denotes the confidence and (x, y, h, w) denote the positions and sizes of the B = 5 candidate boxes in the cell; the confidence is calculated as

c = Pr(Object) × IOU(truth, pred)

where Pr(Object) indicates whether an object is present in the cell, and IOU(truth, pred) is the intersection-over-union of the actual and predicted candidate boxes, i.e. the ratio of the intersection to the union of the two rectangular areas, which measures their similarity; the NMS (non-maximum suppression) algorithm then removes redundant candidate boxes, keeping the one with the higher confidence whenever two candidate boxes intersect; the target detection result is finally obtained and output to the earphone, realizing scene reproduction.
CN202210489717.1A 2022-05-07 2022-05-07 Blind person obstacle avoidance navigation system based on visual SLAM Active CN114712181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210489717.1A CN114712181B (en) 2022-05-07 2022-05-07 Blind person obstacle avoidance navigation system based on visual SLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210489717.1A CN114712181B (en) 2022-05-07 2022-05-07 Blind person obstacle avoidance navigation system based on visual SLAM

Publications (2)

Publication Number Publication Date
CN114712181A CN114712181A (en) 2022-07-08
CN114712181B (en) 2023-04-07

Family

ID=82230688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210489717.1A Active CN114712181B (en) 2022-05-07 2022-05-07 Blind person obstacle avoidance navigation system based on visual SLAM

Country Status (1)

Country Link
CN (1) CN114712181B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110538051A (en) * 2019-08-27 2019-12-06 华南理工大学 intelligent blind person auxiliary device capable of automatically finding way and method thereof
CN110955242A (en) * 2019-11-22 2020-04-03 深圳市优必选科技股份有限公司 Robot navigation method, system, robot and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144057A (en) * 2018-08-07 2019-01-04 上海大学 A kind of guide vehicle based on real time environment modeling and autonomous path planning
US20220015982A1 (en) * 2018-11-30 2022-01-20 University Of Southern California Double-blinded, randomized trial of augmented reality low-vision mobility and grasp aid
CN111035543A (en) * 2019-12-31 2020-04-21 北京新能源汽车技术创新中心有限公司 Intelligent blind guiding robot
CN113181015A (en) * 2021-04-27 2021-07-30 涵涡智航科技(玉溪)有限公司 Head-mounted multi-source auxiliary navigation device and method for blind person

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110538051A (en) * 2019-08-27 2019-12-06 华南理工大学 intelligent blind person auxiliary device capable of automatically finding way and method thereof
CN110955242A (en) * 2019-11-22 2020-04-03 深圳市优必选科技股份有限公司 Robot navigation method, system, robot and storage medium

Also Published As

Publication number Publication date
CN114712181A (en) 2022-07-08

Similar Documents

Publication Publication Date Title
US10794710B1 (en) High-precision multi-layer visual and semantic map by autonomous units
CN108802785B (en) Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN107144285B (en) Pose information determination method and device and movable equipment
JP6790417B2 (en) Information processing equipment and information processing server
EP3371671B1 (en) Method, device and assembly for map generation
CN103901895B (en) Target positioning method based on unscented FastSLAM algorithm and matching optimization and robot
CN112639882B (en) Positioning method, device and system
CN111461048B (en) Vision-based parking lot drivable area detection and local map construction method
US11657719B2 (en) System for sparsely representing and storing geographic and map data
KR102373492B1 (en) Method for correcting misalignment of camera by selectively using information generated by itself and information generated by other entities and device using the same
CN111077907A (en) Autonomous positioning method of outdoor unmanned aerial vehicle
CN109164802A (en) A kind of robot maze traveling method, device and robot
CN111258311A (en) Obstacle avoidance method of underground mobile robot based on intelligent vision
CN111168685B (en) Robot control method, robot, and readable storage medium
CN113112491A (en) Cliff detection method and device, robot and storage medium
CN114034299A (en) Navigation system based on active laser SLAM
Fernandes et al. Intelligent robotic car for autonomous navigation: Platform and system architecture
Rui et al. A multi-sensory blind guidance system based on YOLO and ORB-SLAM
CN114712181B (en) Blind person obstacle avoidance navigation system based on visual SLAM
Golovnin et al. Video processing method for high-definition maps generation
CN112869968B (en) Autonomous operation method and device based on electric wheelchair
CN112802095B (en) Positioning method, device and equipment, and automatic driving positioning system
US11915436B1 (en) System for aligning sensor data with maps comprising covariances
CN117346791B (en) Intelligent wheelchair path planning method and system based on visual images
Unnisa et al. Obstacle detection for self driving car in Pakistan's perspective

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant