CN115416047A - Blind assisting system and method based on multi-sensor quadruped robot - Google Patents
- Publication number
- CN115416047A (application CN202211074959.0A)
- Authority
- CN
- China
- Prior art keywords
- blind
- controller
- quadruped robot
- information
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02438—Detecting, measuring or recording pulse rate or heart rate with portable devices, e.g. worn by the patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H3/00—Appliances for aiding patients or disabled persons to walk about
- A61H3/06—Walking aids for blind persons
- A61H3/061—Walking aids for blind persons with electronic detecting or guiding means
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H3/00—Appliances for aiding patients or disabled persons to walk about
- A61H3/06—Walking aids for blind persons
- A61H3/061—Walking aids for blind persons with electronic detecting or guiding means
- A61H2003/063—Walking aids for blind persons with electronic detecting or guiding means with tactile perception
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H2201/00—Characteristics of apparatus not provided for in the preceding codes
- A61H2201/50—Control means thereof
- A61H2201/5058—Sensors or detectors
- A61H2201/5071—Pressure sensors
Abstract
The invention discloses a blind-assisting system and method based on a multi-sensor quadruped robot, comprising an environment perception module, a decision planning module and a human-machine interaction module. In the environment perception module, dual GPS receivers determine the system's position and heading, a multi-line lidar scans environmental point clouds, a front depth camera supplies input for semantic segmentation, fisheye binocular cameras capture images of ground obstacles, and pressure sensors measure the forces on the quadruped robot's soles. In the decision planning module, a controller fuses the multi-sensor data and plans the next action. In the human-machine interaction module, a vibrating bracelet signals direction through the orientation and amplitude of its vibration, an elastic traction rope provides auxiliary pulling force, and a rear depth camera on a pan-tilt tracks the blind user in real time, detecting body keypoints to infer changes in posture and calling for help through the voice module in the event of an accident such as a fall. The voice module also receives the user's voice commands and announces obstacle information, the user's physical state, the planned path and so on. The invention achieves accurate environment perception, sound decision planning and convenient human-machine interaction while ensuring the user's comfort.
Description
Technical Field
The invention relates to the field of mobile service robots, and in particular to a blind-assisting system and method based on a multi-sensor quadruped robot.
Background Art
China has the largest blind population in the world; according to 2021 statistics it numbers as many as 17.31 million people, and it will grow further as the population ages. Blind people face many difficulties in daily life and work that sighted people can hardly imagine, especially when travelling, and the existing medical aids, infrastructure and other conditions in China cannot yet meet their need to travel normally. One approach to assisted travel is the wearable blind-assisting device: the user wears a set of sensors that perceive the surroundings, a processor makes decisions, and an interaction module issues forward-motion instructions. However, wearing numerous sensors imposes a heavy physical burden, a problem that is especially acute on long trips. Moreover, most such devices ignore the user's own initiative: they merely supply a feasible route and guidance, and cannot infer the user's intention from voice or gesture so as to provide more convenient human-machine interaction and goal-directed navigation. The other common approach is the guide dog. A trained guide dog understands a variety of commands and helps its owner travel, but the number of guide dogs in China still falls far short of demand. In addition, guide-dog training takes a long time and is expensive, and the trained skills cannot be transferred on short notice, all of which makes guide dogs difficult to popularize.
Furthermore, although a guide dog solves many travel inconveniences, public acceptance of guide dogs remains low owing to hygiene and safety concerns, which further hinders blind people's mobility. Travel difficulties in turn create obstacles to education, social interaction and other aspects of life. A blind-assisting system that helps blind people travel conveniently and safely therefore has great practical significance.
Blind-assisting systems aimed at safe travel have already been explored by universities and enterprises at home and abroad. Although this research has produced some results, two problems remain: (1) most existing systems are wearable, and the large amount of sensor equipment reduces portability and comfort; (2) existing systems lack the ability to interpret the user's voice and motion, making effective safety assurance and bidirectional human-machine interaction difficult.
For example, the Institute of Computing Technology of the Chinese Academy of Sciences has built a wearable multi-sensor blind-guiding system consisting of a battery, a computing unit, a GPS receiver, a depth camera, ultrasonic sensors, a human-machine interaction module and so on; it fuses the multi-sensor information to generate motion instructions that assist the user. However, the large number of sensors increases the burden of travel and sacrifices comfort and portability. In addition, devices of this kind do not adequately consider the user's own actions and intentions. The interaction is one-way: the device only issues forward guidance and cannot understand the user's intention and then, via voice or gesture commands, carry out operations such as switching navigation maps or providing safety supervision.
Compared with a wearable system, a quadruped robot is physically independent of the user, ensuring comfort during travel, and can carry a variety of external sensors to perceive the environment accurately, assess the user's actions and intentions, and provide convenient, effective safety supervision and bidirectional human-machine interaction. Moreover, a quadruped robot is similar in shape and size to a pet dog, is more readily accepted than other blind-assisting systems, and avoids the hygiene problems and social disputes that can arise when an animal enters public places. As environment-perception and robot-control technologies mature, the potential of quadruped robots for assisting the blind continues to grow.
A blind-assisting system must combine environment perception, decision planning and human-machine interaction; to date, however, no system has achieved accurate perception, sound planning and bidirectional interaction while also ensuring convenience and comfort.
Disclosure of Invention
To take full account of the user's state and surrounding environment, and to realize safe, collision-free assisted navigation on the basis of bidirectional interaction between the user and the system, a multi-sensor quadruped-robot blind-assisting system and a concrete implementation method are provided as follows.
the invention relates to a multi-sensor quadruped robot-based blind assisting system and a method, which are realized by adopting the following technical means:
a blind assisting system based on a multi-sensor quadruped robot is matched with the quadruped robot for use, and the quadruped robot comprises a machine body 6 and a power supply 13. The blind assisting system comprises an environment sensing module, a decision planning module, a two-way man-machine interaction module and other parts, wherein the environment sensing module comprises four pressure sensors 5, a front depth camera 8, a multi-thread laser radar 9, a fisheye binocular camera 11, a rear GPS 20 and a front GPS 21; the decision planning module comprises: a controller 12; the bidirectional human-computer interaction module comprises a vibration bracelet 1, an elastic traction rope 2, a rear depth camera 3 and a voice module 7; the other parts also comprise a rear connecting piece 4, a two-degree-of-freedom tripod head 10 and a front connecting piece 19. The following is a detailed description:
the controller 12 is adjacent to the power supply 13 and fixed in the four-foot robot, the four pressure sensors 5 are respectively assembled on the soles of the four-foot robot, the front connecting piece 19 is installed on the back of the four-foot robot body 6 close to the front, and the front depth camera 8, the voice interaction module 7 and the multi-thread laser radar 9 are respectively fixed in front of, on two sides of and on the front connecting piece 19. The rear connecting piece 4 is installed at the back of the quadruped robot, the elastic traction rope 2 is connected with the rear connecting piece and the vibration bracelet 1, and the vibration bracelet 1 is worn on the wrist of the blind. The rear depth camera 3 is mounted on the two-degree-of-freedom pan/tilt 10, and a wide-range view is obtained by the rotation of the two-degree-of-freedom pan/tilt 10. The two-degree-of-freedom holder 10 is fixed in the middle of the back of the quadruped robot and is positioned between the front connecting piece 19 and the rear connecting piece 4. In addition, two pairs of fisheye binocular cameras 11 are respectively fixed on the front and rear trunks below the four-footed robot body 6.
First, each motor of the quadruped robot is set to position mode and receives angle and torque control signals from the controller 12. A deep neural network (VGG) is trained in advance to map the sole pressures to the motor angles and torques needed to keep the body 6 stable over various terrains. The sole forces measured by the pressure sensors 5 are fed back to the controller 12 inside the body, which outputs motor angles according to the trained weights to keep the body 6 stable, and the blind-assisting system is then started, ready to assist the user.
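The pressure-to-angle mapping above can be sketched as a small feed-forward pass. The patent specifies a VGG-style deep network trained offline; the tiny two-layer MLP below, with untrained illustrative weights and an assumed normalisation, only shows the data flow from the four sole-pressure readings to twelve joint targets.

```python
import numpy as np

# Minimal stand-in for the pretrained stabilisation network: a 2-layer MLP
# with illustrative (untrained) random weights. Shapes only mirror the text:
# 4 sole-pressure inputs, 12 joint-angle outputs (3 joints per leg assumed).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (16, 4)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (12, 16)), np.zeros(12)

def pressure_to_joint_angles(foot_pressure_n):
    """Map sole pressures (N) from the 4 pressure sensors to 12 joint targets (rad)."""
    x = np.asarray(foot_pressure_n, dtype=float) / 100.0  # crude normalisation (assumed)
    h = np.maximum(W1 @ x + b1, 0.0)                      # ReLU hidden layer
    return W2 @ h + b2

angles = pressure_to_joint_angles([55.0, 60.0, 52.0, 58.0])
```

In the real system the controller would feed these targets to the motors in position mode and close the loop on the next pressure reading.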
Second, a two-dimensional grid map is built in advance for places the user often visits: the controller 12 runs the Cartographer algorithm to map with the multi-line lidar 9. When the user goes out, he or she gives a target position, either as global coordinates or as an alias (for example, a particular supermarket), by voice or gesture. Based on the pre-built global static map and the position and heading obtained from the front and rear GPS receivers, the controller 12 runs a Dijkstra-variant algorithm for global path planning; meanwhile the multi-line lidar 9 scans the surroundings in real time to obtain point clouds of nearby dynamic obstacles, and the controller runs a dynamic-window algorithm for short-range planning, finely adjusting the global path in the local range. The planned three-dimensional path points in the world coordinate system are then transformed into the centroid frame of the body 6, the joint angles are determined by inverse kinematics of the quadruped robot, and the robot is driven forward along the planned path, realizing tracking. In an unknown environment there is no prior map: after the user gives a target position by voice or gesture, a vector path from the global start to the goal is formed, global localization relies on GPS alone, and the lidar 9 scans local obstacles in real time so that the planned path can be adjusted continuously, completing exploration and navigation.
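The global planning step on the pre-built grid map can be illustrated with plain Dijkstra. The patent uses a Dijkstra variant; this 4-connected, unit-cost version on a toy occupancy grid is only a sketch of the idea (it assumes the goal is reachable).

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 2-D occupancy grid (0 = free, 1 = obstacle), 4-connected."""
    rows, cols = len(grid), len(grid[0])
    dist, prev, pq = {start: 0}, {}, [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal  # walk back from goal (assumes it was reached)
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = dijkstra_grid(grid, (0, 0), (2, 0))
```

The dynamic-window replanning then perturbs this global route locally as the lidar reports moving obstacles.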
Meanwhile, the front depth camera 8 captures RGB images and depth information within the robot's forward field of view. After pixel alignment, the controller 12 feeds the RGB image to a semantic-segmentation neural network trained by supervised learning to obtain the semantics of the environment, including road surfaces, steps, signboards, pedestrians and the like. The robot decides whether to detour or to cross according to this terrain information, tells the user through the voice module 7, and reminds the user to be careful. The controller 12 runs multi-threaded PyAudio to receive and play sound through the voice module 7; the user's speech is received as control commands, and the controller drives the corresponding actuators, realizing interaction from the user to the system. For example, the robot can be commanded to stand still on the subway to avoid making noise, or, outdoors, to announce the global positioning information and acquire the position.
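The voice-command side of this interaction reduces to a rule table mapping recognized utterances to controller actions. The patent uses PyAudio for capture and playback; speech recognition itself is out of scope here, so this sketch starts from recognized text, and the command phrases and action names are assumptions rather than quotes from the patent.

```python
# Illustrative dispatcher from recognized speech to controller actions.
# Phrases and action identifiers are hypothetical examples.
COMMANDS = {
    "stand still": "hold_position",        # e.g. stay quiet on the subway
    "where am i": "broadcast_gps_fix",     # announce global position outdoors
    "go to supermarket": "navigate:supermarket",
}

def dispatch(recognized_text):
    """Map a recognized utterance to a controller action, or ask the user to repeat."""
    action = COMMANDS.get(recognized_text.strip().lower())
    return action if action else "speak:please_repeat"

result = dispatch("Where am I")
```

A production system would add confidence thresholds from the recognizer before acting on a command.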
The two-degree-of-freedom pan-tilt 10 rotates the rear depth camera 3, and the captured images are input to the controller 12, which applies the SiamFC++ target-tracking algorithm and a Q-learning-based servo control algorithm to make the pan-tilt follow the user, keeping him or her within the camera's field of view at all times. The user's limb movements are thus continuously captured, and according to rules preset in a knowledge base the system switches maps or executes actions, realizing information transfer from the user to the system. At the same time, the user's physical state can be assessed from posture in the images, and when a dangerous event such as a fall is detected, help is summoned through the vibrating bracelet 1 or the voice module 7.
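The follow-the-user behaviour can be pictured with a much simpler stand-in than the patent's SiamFC++ tracker plus Q-learning servo policy: a proportional controller that nudges the two axes until the tracked bounding box sits at the image centre. Gains and image size below are assumptions for illustration only.

```python
# Simplified proportional pan-tilt controller (stand-in for the learned servo
# policy): drive each axis toward zero pixel error of the tracked box centre.
IMG_W, IMG_H = 640, 480        # assumed image resolution
KP_PAN, KP_TILT = 0.05, 0.05   # assumed gains, degrees per pixel of error

def gimbal_step(bbox):
    """bbox = (x, y, w, h) from the tracker; returns (d_pan, d_tilt) in degrees."""
    x, y, w, h = bbox
    err_x = (x + w / 2) - IMG_W / 2   # positive: target right of centre
    err_y = (y + h / 2) - IMG_H / 2   # positive: target below centre (y grows down)
    return KP_PAN * err_x, -KP_TILT * err_y

d_pan, d_tilt = gimbal_step((400, 180, 80, 160))  # target right of and above centre line
```

A learned policy can outperform fixed gains under latency and gear backlash, which is presumably why the patent opts for Q-learning here.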
The two pairs of fisheye binocular cameras 11 underneath the body 6 perceive the road along the robot's path. After distortion correction, the images are transmitted to the controller 12, which analyses the depth information around the robot's feet, replans the local path points and commands the body motors to avoid obstacles, ensuring safe passage for both the robot and the user, while the voice module 7 reminds the user to keep clear.
The rear connector 4 joins the body 6 to the traction rope 2, whose other end is attached to the vibrating bracelet 1, establishing physical contact between the blind-assisting system and the user so that the user feels the robot's pulling force along its direction of motion in real time. In addition, the bracelet monitors indicators of the user's physical health such as heart rate, and in an emergency places a call for help and sends the GPS-derived position, so as to prevent accidents.
In addition, the power supply 13 powers the body 6, the front depth camera 8, the rear depth camera 3, the fisheye binocular cameras 11, the two-degree-of-freedom pan-tilt 10, the multi-line lidar 9, the controller 12, the voice module 7, the pressure sensors 5 and the other sensors.
Drawings
FIG. 1 is an overall schematic view of the present invention;
FIG. 2 is a side view of the present invention;
FIG. 3 is a top view of the present invention;
FIG. 4 is a front view of the present invention;
FIG. 5 is a rear view of the present invention;
FIG. 6 is an internal schematic view of the present invention;
FIG. 7 is a schematic view of a front end module of the present invention;
FIG. 8 is a schematic view of a rear two degree-of-freedom pan/tilt head and depth camera of the present invention;
FIG. 9 is a schematic view of the vibrating bracelet of the present invention;
FIG. 10 is a schematic representation of an elastic pull cord of the present invention;
In the figures: 1 is the vibrating bracelet, 2 the elastic traction rope, 3 the rear depth camera, 4 the rear connector, 5 a pressure sensor, 6 the quadruped robot body, 7 the voice module, 8 the front depth camera, 9 the multi-line lidar, 10 the two-degree-of-freedom pan-tilt, 11 a fisheye binocular camera, 12 the controller, 13 the power module, 14 the pan-tilt connector, 15 the silicone watchband, 16 the heart-rate detection module, 17 the rigid rope, 18 the spring module, 19 the front connector, 20 the rear GPS, and 21 the front GPS.
Detailed Description
The invention is further described below with reference to the accompanying drawings. Note that the aspects described below in connection with the figures and specific embodiments are only exemplary and should not be construed as limiting the scope of the invention in any way.
The invention is divided into an environment perception module, a decision planning module and a human-machine interaction module:
the environment perception module: the multi-thread lidar 9 emits laser beams at a fixed frequency and angular range to form a local range of laser point clouds. The dense point cloud detects the distance between all the positions around the blind where the laser can directly reach and the blind, and thus describes the surrounding environment for the outline. And the front depth camera 8 is used for acquiring an RGB image and a depth image of the environment in front of the quadruped robot. The RGB image is input into a pre-trained semantic segmentation neural network through basic data enhancement methods such as zooming and translation, feature extraction is carried out through a convolution pooling layer, and finally classification and regression are carried out through a full connection layer, so that objects in a front visual field range can be segmented in the RGB image, for example: road surfaces, wall surfaces, signboards, pedestrians, vehicles, etc. After the obtained depth image is aligned with the RGB image, distance information of an object and the front of the blind assisting system is obtained according to the result obtained by the semantic segmentation network, and data fusion is carried out on the distance information and the multi-thread laser radar 9, so that accurate distance information of obstacles in the front visual field range of the blind is guaranteed. The front GPS 21 and the rear GPS 20 acquire the positioning information and the course information of the robot in real time. And finally, fusing the semantic information and the depth information, and obtaining a next action instruction of the system and the blind after the processing of the controller 12, thereby realizing the information transmission from the blind assisting system to the blind. 
The two pairs of fisheye binocular cameras 11 mounted under the body obtain a wide field of view beneath the robot while avoiding occlusion, and are used to perceive obstacles under the body accurately: the captured images are rectified, depth is computed, and the controller 12 analyses and synthesizes the result and replans the path, realizing local obstacle avoidance at the feet.
Decision planning: the system can build a grid map of the places around the user in advance; the controller 12 runs the Cartographer algorithm to build a static global map of the large-scale environment, and when the user needs to travel, Dijkstra's algorithm performs global path planning on this map. Meanwhile, based on real-time localization by the multi-line lidar 9 and the acquired point-cloud map, dynamic obstacles absent from the static global map are taken into account: a dynamic-window algorithm follows the global path within the local range while replanning the local path, guaranteeing local obstacle avoidance in a changing environment. In an unknown environment, the start and goal points form a global vector path, the multi-line lidar 9 obtains local obstacle information, and the controller 12 runs the dynamic-window algorithm to plan locally and continually correct the global path, achieving autonomous exploration and goal navigation.
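The dynamic-window step can be sketched as sampling a few admissible velocity commands, rolling each out over a short horizon, rejecting those that collide, and keeping the one that ends nearest the local goal. The speed set, obstacle radius, horizon and cost are illustrative assumptions, and the scoring omits the heading and clearance terms a full implementation would include.

```python
import math

# Minimal dynamic-window-style local planner over a unicycle model.
def dwa_step(pose, goal, obstacles, dt=0.1, horizon=10):
    """pose = (x, y, theta); returns the best (v, w) command or None."""
    best, best_cost = None, float("inf")
    for v in (0.2, 0.4, 0.6):                  # sampled forward speeds (m/s)
        for w in (-0.6, -0.3, 0.0, 0.3, 0.6):  # sampled yaw rates (rad/s)
            x, y, th = pose
            ok = True
            for _ in range(horizon):           # short forward rollout
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                if any(math.hypot(x - ox, y - oy) < 0.3 for ox, oy in obstacles):
                    ok = False                 # rollout clips an obstacle
                    break
            if ok:
                cost = math.hypot(goal[0] - x, goal[1] - y)  # distance-to-goal cost
                if cost < best_cost:
                    best, best_cost = (v, w), cost
    return best

cmd = dwa_step(pose=(0.0, 0.0, 0.0), goal=(2.0, 0.0), obstacles=[(0.5, 0.5)])
```

Run at every control tick, this keeps the robot on the Dijkstra route while dodging obstacles the static map never saw.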
Bidirectional human-machine interaction: the RGB images from the rear depth camera 3 are input to the controller 12, where the preset SiamFC-series algorithm gives the user's relative position within each frame and the two-degree-of-freedom pan-tilt 10 is controlled to keep the user centred in every image, so that his or her state is supervised at all times. In parallel, a vision program on the controller 12 uses the AlphaPose algorithm to extract the joint keypoints of the user's body, from which the user's actions are reconstructed and his or her physical state monitored; for example, when the user falls or another accident occurs, a call for help is broadcast through the voice module 7. The voice module 7 is the robot's "mouth" and "ears": obstacles, pedestrians and signboards detected by the front depth camera 8 or the fisheye binocular cameras 11 are selectively announced, giving effective support to the user's own decisions, while the controller 12 processes the fused multi-sensor data in real time to guide the user's movement. The user can also issue voice or gesture commands directly: the voice module 7 passes voice commands to the controller 12 for motion planning, and the rear camera 3, on receiving a gesture command, switches the navigation map. The vibrating bracelet 1 links the user to the system through the elastic traction rope 2 and the connector 4; when the system perceives the surroundings, makes a navigation plan and executes the corresponding action, the rope transmits the force to the user's hand, guiding the next step of movement.
Moreover, if an obstacle is detected and the way ahead is impassable, the vibrating bracelet signals the user through different vibration directions and amplitudes, prompting him or her to issue control commands by voice or gesture, so that path exploration and navigation are carried out cooperatively by human and machine.
In addition, the blind-assisting system receives decision-planning information from the user, and completes voice control, action understanding and state supervision through the human-machine interaction module.
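The keypoint-based state supervision can be illustrated with a simple geometric rule on 2-D pose keypoints of the kind AlphaPose produces. A real system would use a learned classifier over the whole skeleton and a temporal window; the joint names, thresholds and the two rules below (torso closer to horizontal than vertical, or hip dropped near ankle height) are assumptions for demonstration.

```python
# Toy fall-detection heuristic on 2-D keypoints (pixel coords, y grows downward).
def looks_fallen(keypoints):
    """keypoints: dict joint -> (x, y); flags a fall by torso orientation or hip height."""
    hx, hy = keypoints["hip"]
    sx, sy = keypoints["shoulder"]
    ax, ay = keypoints["ankle"]
    torso_flat = abs(sy - hy) < abs(sx - hx)   # torso more horizontal than vertical
    hip_low = abs(hy - ay) < 0.3 * abs(sy - ay) if sy != ay else True
    return torso_flat or hip_low

upright = {"shoulder": (100, 50), "hip": (100, 150), "ankle": (100, 250)}
fallen  = {"shoulder": (60, 230), "hip": (160, 240), "ankle": (260, 250)}
alarm_upright = looks_fallen(upright)
alarm_fallen = looks_fallen(fallen)
```

On a positive detection the controller would trigger the voice module's call for help, as described above.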
In the description of the present invention, it is to be understood that terms indicating orientation or positional relationships, such as "middle", "inner", "upper", "lower", "front", "rear" and "centre", are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the invention.
In the description of the present invention, it should be noted that, unless explicitly stated or limited otherwise, the term "connected" is to be interpreted broadly, e.g., as a fixed connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection, a connection through an intermediate medium, or an internal connection between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art in specific cases.
The objects of the invention can be achieved by implementing the scheme described above.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (7)
1. A blind assisting system based on a multi-sensor quadruped robot, used in cooperation with the quadruped robot, the quadruped robot comprising a robot body (6) and a power supply (13), characterized in that the system comprises an environment perception module, a decision planning module, a bidirectional human-computer interaction module and other components, wherein the environment perception module comprises a pressure sensor (5), a front depth camera (8), a multi-line laser radar (9), a fisheye binocular camera (11), a rear GPS (20) and a front GPS (21); the decision planning module comprises a controller (12); the bidirectional human-computer interaction module comprises a vibration bracelet (1), an elastic traction rope (2), a rear depth camera (3) and a voice module (7); and the other components comprise a rear connecting piece (4), a two-degree-of-freedom pan-tilt head (10) and a front connecting piece (19);
the four pressure sensors 5 are respectively assembled on the soles of the robot, collect the stress of the four feet and transmit the stress to the controller (12) as a motion feedback signal, and the motion stability of the robot body (6) is kept after the stress is processed by the controller (12);
the front depth camera (8) acquires RGB images and depth information within the quadruped robot's forward field of view, and semantic information including road surfaces, steps, signboards and pedestrians is acquired after feature extraction by the controller (12);
the multi-line laser radar (9) scans surrounding obstacle information in real time; after the controller (12) performs noise reduction, it carries out path planning;
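The noise-reduction step can be illustrated with a statistical outlier filter: a return whose mean distance to its nearest neighbours is unusually large is treated as noise and dropped. This is a simplified 2-D sketch under assumed parameters (k, the cutoff ratio), not the claimed implementation:

```python
# Hypothetical sketch of lidar noise reduction: drop isolated returns whose
# mean distance to their k nearest neighbours exceeds mean + std_ratio * std
# over the scan. k and std_ratio are assumed values; O(n^2) for clarity.
import math

def filter_outliers(points, k=2, std_ratio=1.0):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Mean distance from each point to its k nearest neighbours.
    mean_d = []
    for p in points:
        ds = sorted(dist(p, q) for q in points if q is not p)
        mean_d.append(sum(ds[:k]) / k)
    mu = sum(mean_d) / len(mean_d)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_d) / len(mean_d))
    cutoff = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_d) if d <= cutoff]

scan = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.3, 0.0), (9.0, 9.0)]  # last is noise
clean = filter_outliers(scan)
```

Point-cloud libraries ship equivalent filters (e.g. statistical outlier removal in PCL or Open3D) with spatial indexing for real-time use.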
the front and rear pairs of fisheye binocular cameras (11) located below the robot body (6) sense the road environment while the quadruped robot advances; the controller (12) analyses it and judges whether an obstacle is present, and the voice module (7) and the vibration bracelet (1) remind the blind person to avoid it through the vibration direction and amplitude;
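How an obstacle's direction and distance might be encoded into a bracelet cue can be sketched as follows. The motor layout ("left"/"right"/"front") and the linear amplitude law are our assumptions; the claim only states that direction and amplitude vary:

```python
# Hypothetical sketch: map an obstacle's bearing and distance to a bracelet
# vibration cue. Motor names, sector boundaries, and the amplitude law are
# illustrative assumptions, not the patent's encoding.

def vibration_cue(bearing_deg, distance_m, max_range_m=3.0):
    """bearing: obstacle direction relative to heading (+ right, - left)."""
    if distance_m >= max_range_m:
        return None                       # obstacle too far: no cue
    if bearing_deg < -30.0:
        motor = "left"
    elif bearing_deg > 30.0:
        motor = "right"
    else:
        motor = "front"
    # Amplitude grows linearly as the obstacle gets closer, clipped to [0, 1].
    amplitude = min(1.0, max(0.0, 1.0 - distance_m / max_range_m))
    return motor, round(amplitude, 2)

cue = vibration_cue(-45.0, 1.5)   # obstacle to the left at 1.5 m
```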
the voice module (7) is used for receiving and playing sound: on one hand it receives the blind person's control instructions, which the controller (12) translates into actions of the corresponding execution units; on the other hand, the rear depth camera (3) captures the blind person's body actions so that the body state can be analysed, and if the blind person falls, help-seeking information is transmitted to the outside;
the rear connecting piece (4) attaches the traction rope (2) to the robot body (6); the other end of the traction rope (2) is connected to the vibration bracelet (1), which on one hand warns the blind person through vibration, and on the other hand monitors and evaluates the blind person's physical health to prevent accidents;
the rear depth camera (3) is fixed to the robot body (6) through the two-degree-of-freedom pan-tilt head (10), which drives it to rotate; the controller (12) analyses the captured images to determine the state of the blind person's body key points, supervises the blind person's health condition through this information, and, upon detecting a fall or other accident, assists the blind person through the vibration bracelet (1) or the voice module (7);
the front GPS (21) and the rear GPS (20) are respectively positioned above the front and rear of the quadruped robot body and acquire its longitude and latitude; the controller (12) computes the current position and heading angle from this information, assisting the quadruped robot's outdoor navigation task and providing its position in a global coordinate system.
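With two antennas mounted fore and aft, the heading angle can be obtained as the forward azimuth from the rear fix to the front fix. The sketch below uses the standard great-circle bearing formula; the function name and the example coordinates are ours:

```python
# Hypothetical sketch of the heading computation from the two GPS antennas:
# the forward azimuth from the rear fix to the front fix gives the course
# angle. Standard great-circle bearing formula; names are our own.
import math

def heading_deg(rear, front):
    """rear/front: (lat, lon) in degrees. Returns bearing in [0, 360), 0 = north."""
    lat1, lon1 = map(math.radians, rear)
    lat2, lon2 = map(math.radians, front)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0

h = heading_deg((31.0000, 121.0000), (31.0010, 121.0000))  # front antenna due north
```

With a short baseline the fix noise dominates, so in practice this would be fused with an IMU yaw estimate rather than used raw.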
2. The multi-sensor quadruped robot-based blind assisting system according to claim 1, characterized in that: the controller (12) predicts motor rotation angles and torques with a deep learning algorithm to ensure the stability of the quadruped robot body (6).
3. The multi-sensor quadruped robot-based blind assisting system according to claim 1, characterized in that: the controller (12) adopts a semantic segmentation algorithm to obtain semantic information of the front environment of the blind assisting system.
4. The multi-sensor quadruped robot-based blind assisting system according to claim 1, characterized in that: the controller (12) runs a planning algorithm according to the prior map and the local point cloud to realize path planning.
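The planning algorithm named in the claim above is not specified; as one common concrete choice, A* over an occupancy grid built from the prior map and local point cloud can be sketched as follows. The 4-connected moves and Manhattan heuristic are our simplifying assumptions:

```python
# Hypothetical sketch: A* path planning on an occupancy grid (0 = free,
# 1 = occupied), as one possible instance of the claimed planner.
# 4-connected motion and a Manhattan heuristic are assumed simplifications.
import heapq

def astar(grid, start, goal):
    """Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came, gscore = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue                          # already expanded via a better path
        came[cur] = parent
        if cur == goal:                       # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < gscore.get((nr, nc), float("inf")):
                    gscore[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None

occupancy = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
path = astar(occupancy, (0, 0), (2, 0))  # must detour around the occupied row
```

A global planner like this would run on the prior map, with the local point cloud updating the occupied cells for replanning.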
5. The multi-sensor quadruped robot-based blind assisting system according to claim 1, characterized in that: the controller (12) runs a local obstacle avoidance algorithm and a motion control algorithm to realize local obstacle avoidance.
6. The multi-sensor quadruped robot-based blind assisting system according to claim 1, characterized in that: the controller (12) captures the blind person's limb actions by means of a single-target tracking algorithm, a servo control algorithm and a pedestrian key point detection algorithm, and uses them to monitor the blind person's physical health condition.
7. A blind assisting method based on a multi-sensor quadruped robot, characterized by comprising the following steps: the blind assisting system is assembled on the quadruped robot, and a grid map is pre-built with the multi-line laser radar (9); when going out, the blind person issues a voice or action instruction to the voice module (7) to select a map and a destination; the controller (12) realizes global and local path planning from the prior map, GPS data and local point cloud; the fisheye binocular camera (11) acquires road-surface information beneath the feet for fine adjustment of the planned path, and the voice module (7) sends a forward instruction to the blind person; the front depth camera (8) acquires RGB images for semantic segmentation, and the quadruped robot is controlled to select a walking or obstacle-crossing gait according to the terrain; the two-degree-of-freedom pan-tilt head (10) cooperates with the rear depth camera (3) to visually track the blind person and infer body key points, obtaining the blind person's action information during movement and detecting whether the blind person has fallen, the voice module (7) sending out a help-seeking signal when an accident occurs; and the vibration bracelet (1) receives direction information from the controller (12) and adjusts its vibration direction and amplitude to guide the blind person around obstacles, while also detecting the blind person's heart rate, the voice module transmitting help-seeking information to the outside, together with the positioning information, when an accident occurs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211074959.0A CN115416047B (en) | 2022-09-02 | 2022-09-02 | Blind assisting system and method based on multi-sensor four-foot robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115416047A true CN115416047A (en) | 2022-12-02 |
CN115416047B CN115416047B (en) | 2024-06-25 |
Family
ID=84202252
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211074959.0A Active CN115416047B (en) | 2022-09-02 | 2022-09-02 | Blind assisting system and method based on multi-sensor four-foot robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115416047B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115993829A (en) * | 2023-03-21 | 2023-04-21 | 安徽大学 | Machine dog blind guiding movement control method based on blind road recognition |
CN116135497A (en) * | 2023-04-04 | 2023-05-19 | 长三角一体化示范区(江苏)中连智能教育科技有限公司 | Fault early warning system for industrial robot practical training platform |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279113A (en) * | 2013-06-27 | 2013-09-04 | 山东大学 | Distributed type control system of hydraulic quadruped robot and control method |
CN109144057A (en) * | 2018-08-07 | 2019-01-04 | 上海大学 | A kind of guide vehicle based on real time environment modeling and autonomous path planning |
CN111368755A (en) * | 2020-03-09 | 2020-07-03 | 山东大学 | Vision-based pedestrian autonomous following method for quadruped robot |
CN113885704A (en) * | 2021-09-30 | 2022-01-04 | 紫清智行科技(北京)有限公司 | Man-machine interaction method and system for blind guiding vehicle |
CN114104139A (en) * | 2021-09-28 | 2022-03-01 | 北京炎凌嘉业机电设备有限公司 | Bionic foot type robot walking platform fusion obstacle crossing and autonomous following system |
WO2022160430A1 (en) * | 2021-01-27 | 2022-08-04 | Dalian University Of Technology | Method for obstacle avoidance of robot in the complex indoor scene based on monocular camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||