CN112932911A - Blind guiding robot based on hybrid sensing system

Info

Publication number
CN112932911A
Authority
CN
China
Prior art keywords
unit
control unit
blind guiding
guiding robot
sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110360844.7A
Other languages
Chinese (zh)
Inventor
梁慧琰 (Liang Huiyan)
王诗瑶 (Wang Shiyao)
孙一晨 (Sun Yichen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaide College of Changzhou University
Original Assignee
Huaide College of Changzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaide College of Changzhou University filed Critical Huaide College of Changzhou University
Priority to CN202110360844.7A priority Critical patent/CN112932911A/en
Publication of CN112932911A publication Critical patent/CN112932911A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00: Appliances for aiding patients or disabled persons to walk about
    • A61H 3/06: Walking aids for blind persons
    • A61H 3/061: Walking aids for blind persons with electronic detecting or guiding means
    • A61H 2201/00: Characteristics of apparatus not provided for in the preceding codes
    • A61H 2201/50: Control means thereof
    • A61H 2201/5007: Control means thereof computer controlled
    • A61H 2201/5023: Interfaces to the user
    • A61H 2201/5048: Audio interfaces, e.g. voice or music controlled
    • A61H 2201/5058: Sensors or detectors
    • A61H 2201/5079: Velocity sensors
    • A61H 2201/5084: Acceleration sensors
    • A61H 2201/5092: Optical sensor

Abstract

The invention discloses a blind guiding robot based on a hybrid sensing system, comprising a sensing unit, a control unit and a motion unit. The sensing unit senses the surroundings of the blind guiding robot and transmits the collected sensing data to the control unit; the signal output end of the control unit is connected with the motion unit and sends instructions to it; the motion unit drives the blind guiding robot to execute actions according to the received driving signals. The sensing unit comprises a visual sensing unit, a laser sensing unit and a sound sensing unit, each connected to a signal input end of the control unit. The invention uses multiple sensors to detect the surroundings and give corresponding feedback: the visual perception function effectively avoids obstacles and distinguishes target objects; the laser perception function provides a correct path for the blind user's movement and realizes autonomous navigation; and the sound perception function realizes sound source localization and voice interaction, providing guidance for the movement of the visually impaired.

Description

Blind guiding robot based on hybrid sensing system
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a blind guiding robot based on a hybrid sensing system.
Background
According to World Health Organization data, as of 2010 there were about 8.248 million blind people in China, with roughly 450,000 new cases each year; at this rate, the blind population in China will remain above 10 million for a long time. Blindness and visual impairment are among the world's serious social and public health problems. Since more than 70% of human information is acquired through vision, a variety of blind guiding devices have appeared to meet the daily needs of blind people.
However, conventional blind guiding aids have the following problems:
1. Travel mainly depends on guide dogs and tactile paving, but many places lack tactile paving and similar accessibility facilities; and although public acceptance of guide dogs is gradually increasing, news of guide dogs being turned away from premises that prohibit their entry is still frequent.
2. Most blind guiding canes developed so far have a single function or incomplete functions; current intelligent canes with ranging or positioning lack voice broadcasting and richer functions such as route guidance to protect the user, and shaking of the cane also distorts the results of environment monitoring.
3. Wearable blind guiding devices are fitted to the user's coat, glasses, backpack and the like, but device weight, wearing comfort and user acceptance remain unsolved problems.
4. Handheld blind guiding instruments are notable for being light and portable, but such systems are strongly affected by occluding objects, reflective objects and other light sources, and are only suitable for simple, dim indoor environments.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a blind guiding robot based on a hybrid perception system that offers a better blind guiding service experience.
The technical scheme for realizing the invention is as follows:
The blind guiding robot based on the hybrid sensing system comprises a sensing unit, a control unit and a motion unit, wherein the sensing unit acquires the surrounding environment of the blind guiding robot in the traveling process and transmits the acquired sensing data of the surrounding environment to the control unit;
the control unit is responsible for processing and analyzing the received sensing data, translating the processing result into a driving signal of the blind guiding robot and sending an instruction to the motion unit;
and the motion unit drives the blind guiding robot to execute the action according to the received driving signal.
The sensing unit comprises a visual sensing unit, a laser sensing unit and a sound sensing unit;
the visual perception unit adopts a binocular depth visual camera, the depth range of the visual perception unit is 0.6-8 m, the depth resolution is 1280 x 1024, and the image definition is greater than 1080P;
the laser sensing unit comprises a laser radar sensor and a serial port conversion unit connected with the output end of the laser radar sensor; the range of the laser radar sensor is 0.15-20 m, the scanning angle is 360 degrees, and the measuring frequency is more than 8000 times/second; the serial port conversion unit converts a serial port protocol of the laser radar sensor into a USB protocol to form signal connection with the control unit;
the sound perception unit comprises a microphone array and a signal processing unit; the microphone array collects user voice instruction input signals, the microphone array is connected with the signal processing unit, and the signal processing unit converts voice signals collected by the microphone array into electric signals to be output.
The microphone array consists of 6 microphones arranged equidistantly on a ring, with adjacent microphones 60 degrees apart; the microphones are digital silicon microphones with a sampling rate of 16 kHz at 32 bit.
The control unit comprises a linux control unit and an embedded control unit; the linux control unit and the embedded control unit form data interaction in a UART serial port communication mode.
The linux control unit is a computer host running the linux ubuntu operating system, with at least 4 USB interfaces, more than 4 GB of memory and more than 64 GB of storage; the visual perception unit, the laser perception unit and the sound perception unit are respectively connected to signal input ends of the linux control unit through the USB interfaces; and the output end of the linux control unit is connected with a loudspeaker.
The embedded control unit comprises a single chip microcomputer, and a crystal oscillator circuit, a reset circuit, a display circuit and a serial port communication circuit which are connected with the single chip microcomputer;
the single chip microcomputer is of the type STM32F103VET6 and is provided with 64k of RAM,512k of FLASH and hardware interfaces of ADC, IIC, UART and TIMER;
the display circuit is an OLED liquid crystal display and an LED display.
The control unit further comprises a wireless communication unit supporting WIFI, BLE, 4G and 5G communication modes, and the wireless communication unit is communicated between the control unit and the cloud server to form data transmission.
The motion unit comprises a motor driving unit, a speed feedback unit, an attitude feedback unit, a direct current speed reduction motor and Mecanum wheels carrying the blind guiding robot;
the signal output end of the control unit is in signal connection with the motor driving unit, the motor driving unit drives the direct current speed reduction motor, and the direct current speed reduction motor drives the Mecanum wheels to rotate;
the speed feedback unit acquires the rotating speed of the direct current speed reducing motor and feeds back rotating speed data to the control unit;
the posture feedback unit collects the posture information of the blind guiding robot and feeds the posture information back to the control unit.
The motor driving unit is a two-channel MOSFET-based H-bridge integrated circuit with a rated driving current of 1.2 A and a peak current of 3.2 A; the driving signal is a PWM (pulse width modulation) signal;
the speed feedback unit consists of two 500-line photoelectric encoders; each encoder is coupled to a motor shaft, i.e. one shaft revolution produces 500 pulse signals, which are fed to the TIMER interface of the single chip microcomputer;
the attitude feedback unit comprises a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis electronic compass; the gyroscope ranges are ±250°/s, ±500°/s, ±1000°/s and ±2000°/s; the accelerometer ranges are ±2 g, ±4 g, ±8 g and ±16 g; the electronic compass range is ±4800 µT; the attitude feedback unit is connected to the IIC interface of the single chip microcomputer;
the rated voltage of the direct current speed reduction motor is DC 12 V and the reduction ratio is 1:30.
The invention uses multiple sensors to detect the surroundings and give corresponding feedback: the visual perception function effectively avoids obstacles and distinguishes target objects; the laser perception function provides a correct path for the blind user's movement and realizes autonomous navigation; and the sound perception function realizes sound source localization and voice interaction, providing guidance for the movement of the visually impaired. The invention has the following advantages:
1. Based on laser SLAM technology, the robot realizes long-distance navigation, provides path planning for the user, builds the map in real time, positions itself accurately, and can navigate to any destination within the map.
2. A voice input and output human-machine interaction mode is adopted: the information collected and output is rich, more comprehensive user information can be obtained as input for adjusting the system's navigation strategy, and the user receives feedback in multiple forms.
3. The robot has visual perception, laser perception and other functions; the visual perception function can distinguish the type, shape, color and other information of an object and judge the distance of a target through binocular vision, which helps blind users identify objects and perceive their surroundings.
Drawings
FIG. 1 is a schematic diagram of the system architecture of the present application;
FIG. 2 is a hardware framework diagram of the present application;
FIG. 3 is a schematic illustration of sound source localization in the present application;
in the drawing, 100 is a sensing unit, 101 is a control unit, 102 is a motion unit, 103 is a visual sensing unit, 104 is a laser sensing unit, and 105 is a sound sensing unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
Referring to fig. 1-3, the blind guiding robot based on the hybrid sensing system comprises a sensing unit 100, a control unit 101 and a motion unit 102, wherein the sensing unit acquires the surrounding environment of the blind guiding robot during the traveling process and transmits the sensing data of the acquired surrounding environment to the control unit.
The control unit is responsible for processing and analyzing the received sensing data and translating the processing result into a driving signal of the blind guiding robot so as to send an instruction to the motion unit.
And the motion unit drives the blind guiding robot to execute actions according to the received driving signals.
The perception unit comprises a visual perception unit 103, a laser perception unit 104 and a sound perception unit 105, so that the robot has visual perception, laser perception and acoustic perception functions.
The visual perception unit adopts a binocular depth vision camera with a depth range of 0.6-8 m, a depth resolution of 1280 x 1024 and an image definition greater than 1080P; the visual perception function can distinguish the type, shape, color and other information of objects and judge the distance of a target through binocular vision.
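As a minimal illustration of how binocular depth sensing recovers distance, the sketch below applies the standard pinhole stereo relation; the focal length and baseline are hypothetical values, since the patent specifies only the depth range and resolution.

```python
# Illustrative depth-from-disparity calculation for a binocular camera,
# using the pinhole stereo relation Z = f * B / d. The focal length and
# baseline are hypothetical values: the patent gives only the 0.6-8 m
# depth range and the 1280 x 1024 depth resolution.

def stereo_depth(disparity_px: float, focal_px: float = 700.0,
                 baseline_m: float = 0.06) -> float:
    """Depth in metres from disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these assumed parameters, a 7-pixel disparity maps to 6 m.
print(stereo_depth(7.0))  # -> 6.0
```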
The laser sensing unit comprises a laser radar sensor and a serial port conversion unit connected with the output end of the laser radar sensor; the range of the laser radar sensor is 0.15-20 m, the scanning angle is 360 degrees, and the measuring frequency is more than 8000 times/second; the serial port conversion unit converts a serial port protocol of the laser radar sensor into a USB protocol to form signal connection with the control unit; the laser perception function can construct an environment map through a laser radar sensor, and realize an autonomous navigation function through a series of software algorithms.
The sound perception unit comprises a microphone array and a signal processing unit; the microphone array collects the user's voice-command input, and the signal processing unit converts the sound collected by the array into electrical signals for output. The sound perception function algorithmically locates the sound source using the multi-microphone array and supports voice dialogue and voice control. The user can call the robot by voice command; the robot can converse with the user through the voice-intercom function to learn the user's needs; it can autonomously navigate and plan a path to lead the user to a target point; and it can recognize objects through its visual function and describe them to the user by voice broadcast.
The microphone array consists of 6 microphones arranged equidistantly on a ring, with adjacent microphones 60 degrees apart; the microphones are digital silicon microphones with a sampling rate of 16 kHz at 32 bit.
The control unit comprises a linux control unit and an embedded control unit; the linux control unit and the embedded control unit form data interaction in a UART serial port communication mode.
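A minimal sketch of what this UART link could look like on the Linux side, using pyserial; the port name, baud rate, and frame layout are assumptions, since the patent states only that the two controllers exchange data over a UART serial connection.

```python
# A sketch of the Linux-side UART link to the embedded controller using
# pyserial. Port name, baud rate, and frame layout are assumptions: the
# patent only states that the two controllers exchange data over UART.
import struct
from typing import Optional, Tuple

import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # hypothetical port/baud

def send_velocity(vx: float, vy: float, wz: float) -> None:
    """Send a hypothetical velocity command: header + three floats + checksum."""
    payload = struct.pack("<fff", vx, vy, wz)
    frame = b"\xAA" + payload
    frame += bytes([sum(frame) & 0xFF])  # one-byte additive checksum
    ser.write(frame)

def read_odometry() -> Optional[Tuple[float, float]]:
    """Read a hypothetical telemetry frame: header byte, wheel speed, yaw."""
    if ser.read(1) != b"\x55":
        return None
    data = ser.read(8)
    return struct.unpack("<ff", data) if len(data) == 8 else None
```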
The linux control unit is a computer host running the linux ubuntu operating system, with at least 4 USB 2.0 interfaces, more than 4 GB of memory and more than 64 GB of storage; the visual perception unit, the laser perception unit and the sound perception unit are respectively connected to signal input ends of the linux control unit through the USB interfaces, and the output end of the linux control unit is connected with a loudspeaker. The embedded control unit comprises a single chip microcomputer together with a crystal oscillator circuit, a reset circuit, a display circuit and a serial port communication circuit connected to it.
The single chip microcomputer is of the type STM32F103VET6, with 64 KB of RAM, 512 KB of FLASH, and ADC, IIC, UART and TIMER hardware interfaces; the display circuit comprises an OLED display screen and LED indicators.
The system further comprises a cloud server; the control unit further comprises a wireless communication unit supporting the WIFI, BLE, 4G and 5G communication modes, and the wireless communication unit communicates between the control unit and the cloud server to form data transmission.
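A sketch of how the control unit might hand a frame to the cloud server over this link follows; the endpoint and response schema are hypothetical, as the patent does not specify the cloud API.

```python
# A sketch of handing a camera frame to the cloud server for recognition.
# The endpoint URL and response schema are hypothetical: the patent only
# says that heavy algorithms run in the cloud and results return to the
# control unit's CPU.
import requests

def classify_in_cloud(jpeg_bytes: bytes) -> dict:
    resp = requests.post(
        "https://example-cloud-server/api/classify",  # hypothetical endpoint
        files={"image": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5.0,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"label": "trash can", "confidence": 0.93}
```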
The motion unit comprises a motor driving unit, a speed feedback unit, an attitude feedback unit, a direct current speed reduction motor and Mecanum wheels carrying the blind guiding robot;
the signal output end of the control unit is in signal connection with the motor driving unit, the motor driving unit drives the direct current speed reduction motor, and the direct current speed reduction motor drives the Mecanum wheels to rotate;
the speed feedback unit acquires the rotating speed of the direct current speed reducing motor and feeds back rotating speed data to the control unit;
the posture feedback unit collects the posture information of the blind guiding robot and feeds the posture information back to the control unit.
The motor driving unit is a two-channel MOSFET-based H-bridge integrated circuit with a rated driving current of 1.2 A and a peak current of 3.2 A; the driving signal is a PWM (pulse width modulation) signal;
the speed feedback unit consists of two 500-line photoelectric encoders; each encoder is coupled to a motor shaft, i.e. one shaft revolution produces 500 pulse signals, which are fed to the TIMER interface of the single chip microcomputer.
The attitude feedback unit comprises a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis electronic compass; the gyroscope ranges are ±250°/s, ±500°/s, ±1000°/s and ±2000°/s; the accelerometer ranges are ±2 g, ±4 g, ±8 g and ±16 g; the electronic compass range is ±4800 µT; the attitude feedback unit is connected to the IIC interface of the single chip microcomputer. The rated voltage of the direct current speed reduction motor is DC 12 V and the reduction ratio is 1:30.
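A short sketch of the speed computation these figures imply: with 500 pulses per motor-shaft revolution and a 1:30 reduction, one wheel revolution corresponds to 500 * 30 = 15000 pulses, assuming the encoder sits on the motor shaft as described. The sample period and wheel radius below are assumed values.

```python
# Wheel-speed computation from the encoder figures given in the text.
# 500 pulses per motor-shaft revolution and a 1:30 reduction give
# 15000 pulses per wheel revolution; dt and the wheel radius are assumed.
import math

PULSES_PER_MOTOR_REV = 500
GEAR_RATIO = 30
PULSES_PER_WHEEL_REV = PULSES_PER_MOTOR_REV * GEAR_RATIO  # 15000

def wheel_speed(pulse_count: int, dt_s: float = 0.01,
                wheel_radius_m: float = 0.04) -> float:
    """Linear wheel speed in m/s from pulses counted over dt_s seconds."""
    wheel_revs = pulse_count / PULSES_PER_WHEEL_REV
    return wheel_revs * 2 * math.pi * wheel_radius_m / dt_s

# 150 pulses in 10 ms -> 0.01 wheel revolution -> about 0.25 m/s.
print(round(wheel_speed(150), 3))  # -> 0.251
```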
Fig. 1 shows the system composition: a sensing unit, a control unit, an execution mechanism and a cloud end, with the sensing unit divided into voice, vision and laser parts.
When executing a task, the voice perception function identifies the direction from which the user is speaking. When the user says the wake word followed by a calling command, the robot drives toward the user and stops about 0.5 m from the sound source, waiting for the next instruction. If the user raises a navigation demand, for example by saying "navigate to point A", the robot's laser perception function starts: the laser sensor scans environmental feature points to match the robot's position within the map, and an algorithm plans the navigation path. While driving, the robot's visual perception function recognizes the surroundings and describes them to the user by voice; for example, if the user asks "what do you see", the robot may answer "there is a trash can at the front left and an elevator on the right".
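This interaction sequence can be viewed as a small state machine; the sketch below is illustrative only, with the 0.5 m stand-off taken from the text and everything else assumed.

```python
# A schematic state machine for the interaction flow described above:
# wake word -> approach the speaker -> stop ~0.5 m away -> await a command
# such as "navigate to point A". Only the 0.5 m stand-off comes from the
# text; the state names and event fields are illustrative.
STOP_DISTANCE_M = 0.5

def step(state: str, event: dict) -> str:
    if state == "idle" and event.get("wake_word"):
        return "approaching"
    if state == "approaching" and event.get("source_distance", float("inf")) <= STOP_DISTANCE_M:
        return "awaiting_command"
    if state == "awaiting_command" and event.get("command", "").startswith("navigate"):
        return "navigating"
    return state

state = "idle"
for event in [{"wake_word": True},
              {"source_distance": 0.4},
              {"command": "navigate to point A"}]:
    state = step(state, event)
print(state)  # -> navigating
```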
The control unit is responsible for computing and analyzing the sensing unit's data, recognizing the user's voice commands, analyzing environmental features and planning the navigation path, and translating the computation results into machine driving signals that control the action of the actuator (the wheels).
The cloud end is the cloud server at the robot's back end and mainly performs complex algorithmic analysis: because the robot's local computing power is limited, some complex algorithms run in the cloud, and the results are returned to the CPU of the robot's control unit. Use is also a continuous machine-learning process: new problems raised by users are solved, and newly recognized objects are added to a continuously expanding cloud material library.
The robot hardware architecture is shown in fig. 2. The system contains 2 processors, a Linux PC and an embedded MCU: the laser radar, vision camera, microphone array and loudspeaker are connected to the Linux PC, while the motor driver, encoders, attitude sensor and communication unit are connected to the embedded MCU.
First, the Ubuntu operating system is installed on the Linux PC and an ROS workspace is set up within it; after a static IP is configured for Ubuntu, the system can be logged into remotely for development via SSH from a development computer.
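A minimal rospy node consistent with this setup might look as follows; the /scan topic name is the usual ROS convention rather than a detail given in the patent.

```python
# A minimal rospy node consistent with the setup above: the Linux PC runs
# Ubuntu with a ROS workspace and consumes lidar data. The /scan topic name
# is the common ROS convention, not a value stated in the patent.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan: LaserScan) -> None:
    # Report the nearest valid return from the 360-degree sweep.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("nearest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("guide_robot_scan_monitor")
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()
```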
The laser navigation implementation method is as follows: the laser radar is mounted on top of the robot and communicates with the Linux PC over USB. The radar transmits a detection signal (a laser beam) toward a target, compares the received signal (the target echo) reflected from the target with the transmitted signal, and after suitable processing obtains the target's distance information. The embedded MCU is connected to the Linux PC through a serial port; it collects encoder data and attitude sensor data, i.e. it reports speed information and chassis attitude information to the Linux PC, which, having acquired the data, realizes the laser navigation function with a SLAM algorithm.
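One common reading of the echo-comparison principle described here is time of flight; a minimal sketch of that relation follows, with the 100 ns example chosen only to land near the sensor's stated 20 m maximum range.

```python
# Time-of-flight reading of the lidar ranging principle described above:
# range d = c * t / 2, where t is the round-trip time and the division by
# two accounts for the out-and-back path. This is an illustrative model;
# the patent does not name the ranging method.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_range(round_trip_s: float) -> float:
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

print(round(tof_range(100e-9), 2))  # 100 ns round trip -> ~14.99 m
```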
The sound source localization implementation method is as follows: a ring array of 6 digital microphones is used. Under the near-field model, at least three microphones are needed to determine the coordinates of a sound source on a two-dimensional plane, and the wavefront reaching the microphone array is treated as spherical. Let τ12 and τ13 be the time delays of the second and third microphones, respectively, relative to the first; then

τ12 = (L2 - L1) / C
τ13 = (L3 - L1) / C

where C is the speed of sound and Li is the distance from the sound source to microphone i. From the geometric relationship of the microphone array (microphone spacing D):

L2² = L1² + D² + 2·L1·D·cos θ1
L3² = L1² + 4·D² + 4·L1·D·cos θ1

With τ12 and τ13 obtained by time-delay estimation, this system of equations can be solved, and θ2 and θ3 then follow from the law of sines.
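A numerical sketch of solving this system is given below using scipy's fsolve; the microphone spacing and delay values are invented for illustration, and only the two geometric equations and the delay definitions come from the text.

```python
# Numerical sketch of the near-field localization above: given delay
# estimates tau12 and tau13 and microphone spacing D, solve the two
# geometric equations for L1 and theta1. The spacing and delays here
# are hypothetical.
import numpy as np
from scipy.optimize import fsolve

C = 343.0  # speed of sound in air, m/s
D = 0.05   # assumed microphone spacing, m

def residuals(x, tau12, tau13):
    L1, theta1 = x
    L2 = L1 + C * tau12  # from tau12 = (L2 - L1) / C
    L3 = L1 + C * tau13  # from tau13 = (L3 - L1) / C
    return [
        L2**2 - (L1**2 + D**2 + 2 * L1 * D * np.cos(theta1)),
        L3**2 - (L1**2 + 4 * D**2 + 4 * L1 * D * np.cos(theta1)),
    ]

tau12, tau13 = 1.0e-4, 2.1e-4  # hypothetical time-delay estimates, s
L1, theta1 = fsolve(residuals, x0=[1.0, np.pi / 4], args=(tau12, tau13))
print(f"L1 = {L1:.2f} m, theta1 = {np.degrees(theta1):.1f} deg")
```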
The visual recognition implementation method is as follows: the camera transmits the captured image to the Linux PC. After the image is obtained, its pixels are processed point by point: the image in the designated area is cropped and converted into the HSV and LAB color gamuts. Noise signals are filtered from the image and a clear outline of the object is extracted; the processed image is then compared against the object class library of a deep learning framework to obtain a similarity ratio, i.e. the similarity between the uploaded image and the objects in the system library.
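A sketch of these image-processing steps using OpenCV is shown below; the region of interest, blur kernel, and threshold method are illustrative choices, while the named steps (crop, HSV/LAB conversion, noise filtering, contour extraction) come from the text.

```python
# A sketch of the image-processing steps above using OpenCV. The ROI,
# blur kernel, and Otsu threshold are illustrative choices; the steps
# themselves (crop, HSV/LAB conversion, denoising, contour extraction)
# follow the description.
import cv2

frame = cv2.imread("frame.jpg")        # image received from the camera
roi = frame[100:500, 200:800]          # hypothetical designated area

hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)  # HSV color-gamut conversion
lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)  # LAB color-gamut conversion

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
denoised = cv2.GaussianBlur(gray, (5, 5), 0)  # filter noise signals
_, mask = cv2.threshold(denoised, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # object outlines
print(f"{len(contours)} contour(s) extracted")
# The extracted crop/contour would then be matched against the object class
# library of the deep-learning framework to obtain a similarity score.
```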

Claims (10)

1. The blind-guiding robot based on the hybrid sensing system comprises a sensing unit, a control unit and a motion unit, and is characterized in that the sensing unit acquires the surrounding environment of the blind-guiding robot and transmits the sensing data of the acquired surrounding environment to the control unit;
the signal output end of the control unit is connected with the motion unit and sends an instruction to the motion unit;
the motion unit drives the blind guiding robot to execute actions according to the received driving signals;
the sensing unit comprises a visual sensing unit, a laser sensing unit and a sound sensing unit which are respectively connected with the signal input end of the control unit.
2. The blind guiding robot based on the hybrid perception system as claimed in claim 1, wherein the vision perception unit employs binocular depth vision cameras, the depth range is 0.6-8 m, the depth resolution is 1280 x 1024, and the image definition is greater than 1080P.
3. The blind guiding robot based on the hybrid sensing system as claimed in claim 1, wherein the laser sensing unit comprises a laser radar sensor and a serial port conversion unit connected with the output end of the laser radar sensor; the range of the laser radar sensor is 0.15-20 m, the scanning angle is 360 degrees, and the measuring frequency is more than 8000 times/second; the serial port conversion unit converts the serial port protocol of the laser radar sensor into the USB protocol to form a signal connection with the control unit.
4. The blind guiding robot based on the hybrid perception system as claimed in claim 1, wherein the sound perception unit comprises a microphone array, a signal processing unit; the microphone array collects user voice instruction input signals, the microphone array is connected with the signal processing unit, and the signal processing unit converts voice signals collected by the microphone array into electric signals to be output;
the microphone array is in an annular equidistant arrangement mode of 6 microphones, and the difference between every two adjacent microphones is 60 degrees; the microphone is a digital silicon microphone type microphone, and the sampling frequency of the microphone is 16kHz and 32 bit.
5. The blind guiding robot based on the hybrid perception system according to claim 2, wherein the control unit comprises a linux control unit and an embedded control unit; the linux control unit and the embedded control unit form data interaction in a UART serial port communication mode.
6. The blind guiding robot based on the hybrid sensing system as claimed in claim 5, wherein the linux control unit is a computer host running the linux ubuntu operating system, with at least 4 USB interfaces, more than 4 GB of memory and more than 64 GB of storage; the visual perception unit, the laser perception unit and the sound perception unit are respectively connected to signal input ends of the linux control unit through the USB interfaces; and the output end of the linux control unit is connected with a loudspeaker.
7. The blind guiding robot based on the hybrid sensing system is characterized in that the embedded control unit comprises a single chip microcomputer, and a crystal oscillator circuit, a reset circuit, a display circuit and a serial port communication circuit which are connected with the single chip microcomputer;
the single chip microcomputer is of the type STM32F103VET6 and is provided with 64k of RAM,512k of FLASH and hardware interfaces of ADC, IIC, UART and TIMER;
the display circuit is an OLED liquid crystal display and an LED display.
8. The blind guiding robot based on the hybrid perception system according to any one of claims 1-7, further comprising a cloud server, wherein the control unit further comprises a wireless communication unit supporting WIFI, BLE, 4G and 5G communication modes, and the wireless communication unit is communicated with the cloud server to form data transmission.
9. The blind guiding robot based on the hybrid sensing system, characterized in that the motion unit comprises a motor driving unit, a speed feedback unit, an attitude feedback unit, a direct current speed reduction motor and Mecanum wheels carrying the blind guiding robot;
the signal output end of the control unit is in signal connection with the motor driving unit, the motor driving unit drives the direct current speed reduction motor, and the direct current speed reduction motor drives the Mecanum wheels to rotate;
the speed feedback unit acquires the rotating speed of the direct current speed reducing motor and feeds back rotating speed data to the control unit;
the posture feedback unit collects the posture information of the blind guiding robot and feeds the posture information back to the control unit.
10. The blind guiding robot based on the hybrid sensing system according to claim 9, wherein the motor driving unit is a two-channel MOSFET-based H-bridge integrated circuit with a rated driving current of 1.2 A and a peak current of 3.2 A, and the driving signal is a PWM (pulse width modulation) signal;
the speed feedback unit consists of two 500-line photoelectric encoders; each encoder is coupled to a motor shaft, i.e. one shaft revolution produces 500 pulse signals, which are fed to the TIMER interface of the single chip microcomputer;
the attitude feedback unit comprises a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis electronic compass; the gyroscope ranges are ±250°/s, ±500°/s, ±1000°/s and ±2000°/s; the accelerometer ranges are ±2 g, ±4 g, ±8 g and ±16 g; the electronic compass range is ±4800 µT; the attitude feedback unit is connected to the IIC interface of the single chip microcomputer;
the rated voltage of the direct current speed reduction motor is DC 12 V and the reduction ratio is 1:30.
CN202110360844.7A 2021-04-02 2021-04-02 Blind guiding robot based on hybrid sensing system Pending CN112932911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110360844.7A CN112932911A (en) 2021-04-02 2021-04-02 Blind guiding robot based on hybrid sensing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110360844.7A CN112932911A (en) 2021-04-02 2021-04-02 Blind guiding robot based on hybrid sensing system

Publications (1)

Publication Number Publication Date
CN112932911A true CN112932911A (en) 2021-06-11

Family

ID=76232319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110360844.7A Pending CN112932911A (en) 2021-04-02 2021-04-02 Blind guiding robot based on hybrid sensing system

Country Status (1)

Country Link
CN (1) CN112932911A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104934033A (en) * 2015-04-21 2015-09-23 深圳市锐曼智能装备有限公司 Control method of robot sound source positioning and awakening identification and control system of robot sound source positioning and awakening identification
CN105094136A (en) * 2015-09-14 2015-11-25 桂林电子科技大学 Adaptive microphone array sound positioning rescue robot and using method thereof
CN205375186U (en) * 2015-12-08 2016-07-06 广东德泷智能科技有限公司 Intelligence traveling system of robot
CN106214436A (en) * 2016-07-22 2016-12-14 上海师范大学 A kind of intelligent blind guiding system based on mobile phone terminal and blind-guiding method thereof
CN107390703A (en) * 2017-09-12 2017-11-24 北京创享高科科技有限公司 A kind of intelligent blind-guidance robot and its blind-guiding method
CN108710375A (en) * 2018-06-12 2018-10-26 芜湖乐创电子科技有限公司 A kind of blind-guidance robot control system based on navigation solution and sensor monitoring
CN109144057A (en) * 2018-08-07 2019-01-04 上海大学 A kind of guide vehicle based on real time environment modeling and autonomous path planning
CN111035543A (en) * 2019-12-31 2020-04-21 北京新能源汽车技术创新中心有限公司 Intelligent blind guiding robot

Similar Documents

Hartman et al. Human-machine interface for a smart wheelchair
Van den Bergh et al. Real-time 3D hand gesture interaction with a robot for understanding directions from humans
CN110605724B (en) Intelligence endowment robot that accompanies
CN205219101U (en) Service robot of family
JP2021522564A (en) Systems and methods for detecting human gaze and gestures in an unconstrained environment
CN105058389A (en) Robot system, robot control method, and robot
Gomez et al. RoboGuideDog: Guiding blind users through physical environments with laser range scanners
US11409295B1 (en) Dynamic positioning of an autonomous mobile device with respect to a user trajectory
US11372408B1 (en) Dynamic trajectory-based orientation of autonomous mobile device component
CN104287946A (en) Device and method for prompting blind persons to avoid obstacles
Xing et al. People-following system design for mobile robots using kinect sensor
US11433546B1 (en) Non-verbal cuing by autonomous mobile device
Grewal et al. Autonomous wheelchair navigation in unmapped indoor environments
Khanom et al. A comparative study of walking assistance tools developed for the visually impaired people
Wang et al. An environmental perception and navigational assistance system for visually impaired persons based on semantic stixels and sound interaction
Kaur et al. A scene perception system for visually impaired based on object detection and classification using multi-modal DCNN
CN115416047A (en) Blind assisting system and method based on multi-sensor quadruped robot
WO2022188333A1 (en) Walking method and apparatus, and computer storage medium
Hakim et al. Goal location prediction based on deep learning using RGB-D camera
Wang et al. A survey of 17 indoor travel assistance systems for blind and visually impaired people
Sun et al. “Watch your step”: precise obstacle detection and navigation for Mobile users through their Mobile service
CN112932911A (en) Blind guiding robot based on hybrid sensing system
Zatout et al. A Novel Output Device for visually impaired and blind people’s aid systems
Christian et al. Hand gesture recognition and infrared information system
Jin et al. Human-robot interaction for assisted object grasping by a wearable robotic object manipulation aid for the blind

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination