CN219595151U - Interactive blind guide device based on intelligent perception - Google Patents

Interactive blind guide device based on intelligent perception

Info

Publication number
CN219595151U
CN219595151U CN202320605455.0U
Authority
CN
China
Prior art keywords
control module
module
information
main control
blind
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202320605455.0U
Other languages
Chinese (zh)
Inventor
王嫄
武振华
刘雯
王慧
刘建征
史松云
张帅
陈亚瑞
赵婷婷
刘洁
于钊
何小栋
徐以鹏
魏洪浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Assistive Devices And Technology Centre For Persons With Disabilities
Tianjin University of Science and Technology
Original Assignee
China Assistive Devices And Technology Centre For Persons With Disabilities
Tianjin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Assistive Devices And Technology Centre For Persons With Disabilities, Tianjin University of Science and Technology filed Critical China Assistive Devices And Technology Centre For Persons With Disabilities
Priority to CN202320605455.0U priority Critical patent/CN219595151U/en
Application granted granted Critical
Publication of CN219595151U publication Critical patent/CN219595151U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B80/00Architectural or constructional elements improving the thermal performance of buildings

Landscapes

  • Blinds (AREA)

Abstract

The utility model discloses an interactive blind guide device based on intelligent perception, which comprises: a chassis, and a main control module, a power module, a traction control module, a voice recognition module, a motion control module and a sensor module which are arranged on the chassis. The traction control module comprises a telescopic pull rod and a button control unit, and the button control unit is used for sending a key instruction to the main control module and activating the voice recognition module; the voice recognition module is used for converting voice information into text information; the main control module is used for acquiring a scene mode and a target point according to the key instruction and the text information, controlling the sensor module to acquire surrounding image information, surrounding environment point cloud information and position information, planning a path according to the surrounding environment point cloud information and the target point, driving the motion control module to control the blind guiding device to move to the target point, performing tracking control according to the surrounding image information, and driving the motion control module to control the blind guiding device to move to the target point according to the position information.

Description

Interactive blind guide device based on intelligent perception
Technical Field
The utility model relates to the technical field of blind guiding, in particular to an interactive blind guiding device based on intelligent perception.
Background
According to statistics, there are about 17 million blind people in China, and visual impairment brings great difficulty to their daily lives. Traditional blind guiding devices, represented by the blind guiding cane, cannot meet the growing mobility needs of the blind, while biological guides, represented by guide dogs, are limited by long training periods and high cultivation costs, so they are scarce and cannot be popularized on a large scale. Therefore, an intelligent blind guide device that could replace a guide dog would be extremely beneficial to the blind. However, existing blind guiding equipment on the market cannot provide specific navigation information for the blind, has poor navigation accuracy, offers only a single interaction mode, cannot feed back accurate road surface information in real time, and applies sensors singly without synthesizing road surface conditions, which significantly increases the danger of travel for the blind.
Therefore, how to provide an interactive blind guide based on intelligent perception is a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the utility model provides an interactive blind guide based on intelligent perception to solve some of the technical problems mentioned in the background art.
In order to achieve the above purpose, the utility model adopts the following technical scheme:
an intelligent perception-based interactive blind guide device, comprising: a chassis, and a main control module, a power module, a traction control module, a voice recognition module, a motion control module and a sensor module which are arranged on the chassis, wherein the voice recognition module, the motion control module and the sensor module are all connected with the main control module, and the voice recognition module, the motion control module, the sensor module and the main control module are all connected with the power module;
the traction control module comprises a telescopic pull rod and a button control unit, and the button control unit is respectively connected with the main control module and the voice recognition module and is used for sending a key instruction to the main control module and activating the voice recognition module;
the voice recognition module is used for converting voice information into text information and transmitting the text information to the main control module;
the sensor module is used for receiving a scene control command of the main control module to acquire surrounding image information, surrounding environment point cloud information and position information, and transmitting the surrounding image information, the surrounding environment point cloud information and the position information to the main control module;
the main control module is used for acquiring a scene mode and a target point according to a key instruction and the text information, sending the scene mode and the target point to the sensor module, planning a path according to the surrounding environment point cloud information and the target point, driving the motion control module to control the blind guiding device to move to the target point, performing tracking control according to the surrounding image information, and driving the motion control module to control the blind guiding device to move to the target point according to the position information.
Preferably, the sensor module includes a lidar, a GPS and a depth camera;
the depth camera is used for collecting surrounding image information and transmitting the surrounding image information to the main control module;
the laser radar is used for collecting surrounding environment point cloud information and transmitting the surrounding environment point cloud information to the main control module.
Preferably, the interactive blind guide device based on intelligent perception further comprises a car body support frame, wherein the car body support frame is fixed on the upper surface of the chassis, the telescopic pull rod, the main control module, the laser radar and the depth camera are fixed on the support frame, and two layers of glass fiber boards are arranged on the support frame.
Preferably, the voice recognition module comprises a microphone and a loudspeaker, wherein the microphone is arranged at the handle of the telescopic pull rod, and the loudspeaker is fixed on the vehicle body support frame.
Preferably, the main control module comprises a scene control unit, a global path planner and a local path planner;
the scene control unit is respectively connected with the button control unit, the voice recognition module and the sensor module, and is used for acquiring a target point and a scene mode according to the key instruction and the text information and sending an acquisition command to the sensor module;
the global path planner is respectively connected with the laser radar and the local path planner, and is used for carrying out global path planning according to the surrounding environment point cloud information and the target point and sending planning path information to the local path planner;
the local path planner is connected with the laser radar, and is used for storing the planned path information, driving the motion control module to control the blind guiding device to move to the target point, and controlling the blind guiding device to avoid the obstacle according to the surrounding environment point cloud information of the laser radar.
Preferably, the sensor module further comprises a GPS control unit, wherein the GPS control unit is respectively connected with the scene control unit and the motion control module and is used for collecting the position of the blind guide and driving the motion control module to control the blind guide to move until the position of the blind guide coincides with the target point.
Preferably, the main control module further comprises an offset feedback controller, and the offset feedback controller is respectively connected with the depth camera and the motion control module;
and in the process in which the GPS control unit drives the motion control module to control the movement of the blind guide, the offset feedback controller controls the blind guide to move along the central line of the blind road according to the surrounding image information acquired by the depth camera.
Preferably, the button control unit comprises an Arduino control board and a key, the key is mounted at the handle of the telescopic pull rod, the Arduino control board is mounted on the upper surface of the chassis, and the key is connected with the main control module through the Arduino control board.
Preferably, the motion control module comprises a two-wheel differential driving motor, differential wheels and universal wheels, wherein the two-wheel differential driving motor is respectively connected with the main control module and the differential wheels, and the differential wheels and the universal wheels are fixed below the chassis.
Preferably, the main control module adopts an X86 architecture industrial personal computer.
Compared with the prior art, the intelligent perception-based interactive blind guide device provided by the utility model realizes man-machine interaction through both buttons and voice: the buttons quickly open common functions, while voice opens more complex ones. Multiple sensors collect surrounding image information, surrounding environment point cloud information and position information, so that the blind guide device can be driven under different control modes and meet the needs of users in different scenes. The length and angle of the traction controller can be adjusted to suit different users. By leading the blind person in a follow-the-robot navigation mode, the probability of injury is greatly reduced, the safety of indoor and outdoor walking is improved, and the blind person's confidence in traveling independently is enhanced.
Drawings
In order to more clearly illustrate the embodiments of the present utility model or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present utility model, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a front view of the overall structure of the blind guide hardware provided by the utility model;
FIG. 2 is an isometric view of a blind guide according to the present utility model;
FIG. 3 is an isometric view of a blind guide according to the present utility model;
FIG. 4 is a side view, partially in section, of a blind guide according to the present utility model;
FIG. 5 is a partial isometric view of a blind guide according to the present utility model;
FIG. 6 is a schematic diagram of a blind guide workflow provided by the present utility model;
the device comprises a 1-microphone, a 2-key, a 3-telescopic pull rod, a 4-liquid crystal display, a 5-depth camera, a 6-GPS control unit, a 7-vehicle body support frame, an 8-left driving wheel, a 9-right driving wheel, a 10-lithium battery, an 11-laser radar, a 12-USB hub, a 13-main control module, a 14-loudspeaker, a 15-Arduino development board and a 16-universal wheel.
Detailed Description
The following description of the embodiments of the present utility model will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present utility model, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the utility model without making any inventive effort, are intended to be within the scope of the utility model.
The embodiment of the utility model discloses an interactive blind guide device based on intelligent perception, which comprises: a chassis, and a main control module, a power module, a traction control module, a voice recognition module, a motion control module and a sensor module which are arranged on the chassis, wherein the voice recognition module, the motion control module and the sensor module are all connected with the main control module, and the voice recognition module, the motion control module, the sensor module and the main control module are all connected with the power module;
the traction control module comprises a telescopic pull rod 3 and a button control unit, wherein the button control unit is respectively connected with the main control module and the voice recognition module and is used for sending a key instruction to the main control module and activating the voice recognition module;
the voice recognition module is used for converting voice information into text information and transmitting the text information to the main control module;
the sensor module is used for receiving a scene control command of the main control module to acquire surrounding image information, surrounding environment point cloud information and position information and transmitting the surrounding image information, the surrounding environment point cloud information and the position information to the main control module;
the main control module is used for acquiring a scene mode and a target point according to the key instruction and the text information, sending the scene mode and the target point to the sensor module, planning a path according to the surrounding environment point cloud information and the target point, driving the motion control module to control the blind guiding device to move to the target point, performing tracking control according to the surrounding image information, and driving the motion control module to control the blind guiding device to move to the target point according to the position information.
In this embodiment, the main control module obtains the target point according to the key instruction and the text information, determines that the scene is "indoor" or "outdoor" according to the position of the target point, sends the scene control command and the target point to the sensor module, then performs path planning or tracking control according to the surrounding environment information of the sensor module and the position of the target point, avoids the obstacle in real time, and drives the motion control module to control the blind guide to move to the target point according to the position information.
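The scene decision described above can be illustrated with a minimal sketch. This is not the patent's implementation; the map bounds, coordinate convention, and function names are assumptions chosen for illustration, where the target point is classified as indoor when it falls inside the known indoor map region:

```python
def select_scene_mode(target_point, indoor_map_bounds):
    """Return "indoor" if the target point lies inside the indoor map's
    bounding region, otherwise "outdoor". Illustrative sketch only:
    a real system would test against the actual mapped floor plan."""
    x, y = target_point
    (xmin, ymin), (xmax, ymax) = indoor_map_bounds
    if xmin <= x <= xmax and ymin <= y <= ymax:
        return "indoor"
    return "outdoor"
```

With bounds ((0, 0), (5, 5)), a target at (1, 1) selects the indoor mode while (10, 1) selects the outdoor mode.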
In order to further implement the above technical solution, the sensor module comprises a lidar 11 and a depth camera 5;
the depth camera 5 is used for collecting surrounding image information and transmitting the surrounding image information to the main control module 13;
the laser radar 11 is used for collecting surrounding environment point cloud information and transmitting the surrounding environment point cloud information to the main control module 13.
In practical applications, the sensor module further comprises an IMU.
In order to further implement the technical scheme, the interactive blind guide based on intelligent perception further comprises a car body support frame 7, wherein the car body support frame 7 is fixed on the upper surface of the chassis, the telescopic pull rod 3, the main control module 13, the laser radar 11 and the depth camera 5 are fixed on the car body support frame 7, and two layers of glass fiber boards are arranged on the car body support frame 7.
In order to further implement the above technical solution, the voice recognition module includes a microphone 1 and a speaker 14, the microphone 1 is mounted at the handle of the telescopic pull rod 3, and the speaker 14 is fixed on the vehicle body support frame 7.
In order to further implement the technical scheme, the main control module comprises a scene control unit, a global path planner and a local path planner;
the scene control unit is respectively connected with the key control unit, the voice recognition module and the sensor module and is used for acquiring a target point and a scene mode according to the key instruction and the text information and sending an acquisition command to the sensor module;
the global path planner is respectively connected with the laser radar and the local path planner, and is used for carrying out global path planning according to surrounding environment point cloud information and target points and sending planning path information to the local path planner;
the local path planner is connected with the laser radar, and is used for storing planning path information, driving the motion control module to control the blind guiding device to move to the target point, and controlling the blind guiding device to avoid the obstacle according to surrounding environment point cloud information of the laser radar.
In this embodiment, the keys include one-key walk, one-key navigation, and one-key scram.
When one-key navigation is pressed, the voice recognition module is activated to collect voice information, convert it into text information, and upload the text to the scene control unit. The scene control unit obtains the target point from the text and judges whether the target scene calls for the indoor mode or the outdoor mode, then switches accordingly. In the indoor mode, the scene control unit sends an acquisition command to the laser radar, which collects surrounding environment point cloud information to construct an indoor map; the global path planner plans a path according to the indoor map and the target point and stores it in the local path planner, and the local path planner drives the blind guide device along the planned path, bypassing obstacles that block the way ahead but are not in the map according to the laser radar's surrounding environment point cloud information.
If the local path planner becomes stuck with no feasible local path, it requests the global path planner to plan a new path and then follows that new path.
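The interplay between the local follower and the global replanner can be sketched as follows. This is a schematic under assumed interfaces (the `is_blocked` and `replan` callables stand in for the laser-radar obstacle check and the global path planner, which the patent does not specify at this level):

```python
def follow_path(path, is_blocked, replan):
    """Follow a stored waypoint path; when a waypoint is blocked and no
    local detour exists, request a fresh global path from `replan` and
    restart following it. Returns the path that was ultimately followed."""
    i = 0
    while i < len(path):
        if is_blocked(path[i]):
            # Local planner is stuck: ask the global planner for a new
            # path starting from the blocked region, then re-follow it.
            path = replan(path[i])
            i = 0
            continue
        # ... drive the motion control module toward path[i] here ...
        i += 1
    return path
```

For example, if waypoint (2, 2) on the path [(1, 1), (2, 2), (3, 3)] is blocked and the replanner returns [(1, 2), (3, 3)], the device abandons the old path and follows the new one.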
In order to further implement the technical scheme, the sensor module further comprises a GPS control unit 6, the GPS control unit 6 is respectively connected with the scene control unit and the motion control module and is used for collecting the position of the blind guide, and the motion control module is driven to control the blind guide to move until the position of the blind guide coincides with the target point.
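The "move until the position coincides with the target point" condition amounts to repeatedly comparing the GPS fix against the goal. A minimal sketch, assuming latitude/longitude fixes and an arbitrary 1 m arrival tolerance (the patent does not state a tolerance):

```python
import math

def gps_distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def goal_reached(current, target, tol_m=1.0):
    """Treat the target point as reached once within tol_m metres."""
    return gps_distance_m(*current, *target) <= tol_m
```

The GPS control unit would keep driving the motion control module while `goal_reached` is false, then stop.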
In order to further implement the technical scheme, the main control module further comprises an offset feedback controller which is respectively connected with the depth camera and the motion control module;
and in the process of controlling the blind guide to move by the GPS control unit driving the motion control module, the offset feedback controller controls the blind guide to move along the central line of the blind road according to surrounding image information acquired by the depth camera.
When one-key navigation is pressed and the outdoor mode is judged, the scene control unit starts the depth camera, the laser radar and the GPS for tracking control. The GPS control unit acquires the position of the blind guide device in real time and drives the motion control module to move the device from its current position toward the target point until the two coincide. During the motion, the depth camera acquires surrounding image information in real time, and the offset feedback controller controls the device to track along the blind sidewalk according to the offset between the geometric center of the surrounding image and the central line of the blind sidewalk; the laser radar detects obstacles in front of the device in real time, and the device is driven to avoid any obstacle ahead.
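The offset feedback described above is, in essence, a steering correction proportional to the pixel offset between the image center and the detected blind-road centerline. The sketch below is an assumption about how such a controller could look; the gain `k_p` and the pixel-based interface are illustrative, not taken from the patent:

```python
def steering_correction(image_width, centerline_px, k_p=0.005):
    """Proportional offset feedback: return a yaw-rate command (rad/s)
    that steers the device back toward the blind-road centerline.
    centerline_px is the detected centerline's horizontal pixel position;
    a centerline right of the image center yields a negative (rightward-
    compensating) command under this sign convention."""
    offset = centerline_px - image_width / 2.0
    return -k_p * offset
```

With a 640-pixel-wide image, a centerline detected at pixel 320 yields zero correction; detections left or right of center yield corrections of opposite sign.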
When one-key walk is pressed, the scene control unit judges whether the scene is indoor or outdoor according to the current position information. If indoors, it sends an acquisition command to the laser radar to collect surrounding environment information, constructs an indoor map, and randomly selects four target points; the global path planner plans a path according to the map and the target points and stores it in the local path planner, which drives the blind guide device along the planned path while bypassing obstacles that block the way but are not in the map according to the laser radar's surrounding environment information. If outdoors, four target points are randomly selected, the depth camera, laser radar and GPS are started for tracking control while obstacles are avoided in real time, and the motion control module is driven according to the position information to move the blind guide device to the target points.
In order to further implement the above technical scheme, the button control unit includes an Arduino control board 15 and a key 2; the key 2 is mounted at the handle of the telescopic pull rod 3, the Arduino control board 15 is mounted on the upper surface of the chassis, and the key 2 is connected with the main control module 13 through the Arduino control board 15.
In order to further implement the technical scheme, the motion control module comprises two-wheel differential drive motors, differential wheels 8 (9) and universal wheels 15, wherein the two-wheel differential drive motors are respectively connected with the main control module 13 and the differential wheels, and the differential wheels and the universal wheels are fixed below the chassis.
In order to further implement the above technical scheme, the main control module 13 adopts an X86 architecture industrial personal computer.
In this embodiment, the depth camera 5 and the speaker 14 are disposed on the top layer of the vehicle body support frame 7, the main control module 13 is disposed on the middle layer of the vehicle body support frame 7, and the GPS control unit 6, the lithium battery 10, the laser radar 11, and the Arduino development board 15 are all disposed on the upper surface of the chassis.
In practical application, a liquid crystal display 4 is provided for workers to debug the device, and a USB hub 12 is provided to bundle the wiring harnesses of the components.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and identical or similar parts among the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present utility model. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the utility model. Thus, the present utility model is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An interactive blind guide based on intelligent perception, which is characterized by comprising: the device comprises a chassis, and a main control module, a power supply module, a traction control module, a voice recognition module, a motion control module and a sensor module which are arranged on the chassis, wherein the voice recognition module, the motion control module and the sensor module are all connected with the main control module, and the voice recognition module, the motion control module, the sensor module and the main control module are all connected with the power supply module;
the traction control module comprises a telescopic pull rod and a button control unit, and the button control unit is respectively connected with the main control module and the voice recognition module and is used for sending a key instruction to the main control module and activating the voice recognition module;
the voice recognition module is used for converting voice information into text information and transmitting the text information to the main control module;
the sensor module is used for receiving a scene control command of the main control module to acquire surrounding image information, surrounding environment point cloud information and position information, and transmitting the surrounding image information, the surrounding environment point cloud information and the position information to the main control module;
the main control module is used for acquiring a scene mode and a target point according to a key instruction and the text information, sending the scene mode and the target point to the sensor module, planning a path according to the surrounding environment point cloud information and the target point, driving the motion control module to control the blind guiding device to move to the target point, performing tracking control according to the surrounding image information, and driving the motion control module to control the blind guiding device to move to the target point according to the position information.
2. The intelligent perception based interactive blind guide of claim 1 wherein the sensor module comprises a lidar and a depth camera;
the depth camera is used for collecting surrounding image information and transmitting the surrounding image information to the main control module;
the laser radar is used for collecting surrounding environment point cloud information and transmitting the surrounding environment point cloud information to the main control module.
3. The interactive blind guide based on intelligent perception according to claim 2, further comprising a car body support frame, wherein the car body support frame is fixed on the upper surface of the chassis, the telescopic pull rod, the main control module, the laser radar and the depth camera are fixed on the support frame, and two layers of glass fiber boards are arranged on the support frame.
4. The interactive blind guide based on intelligent perception according to claim 3, wherein the voice recognition module comprises a microphone and a loudspeaker, the microphone is mounted at the handle of the telescopic pull rod, and the loudspeaker is fixed on the car body support frame.
5. The interactive blind guide based on intelligent perception according to claim 2, wherein the main control module comprises a scene control unit, a global path planner and a local path planner;
the scene control unit is respectively connected with the button control unit, the voice recognition module and the sensor module and is used for acquiring a target point and a scene mode according to the key instruction and the text information and sending an acquisition command to the sensor module;
the global path planner is respectively connected with the laser radar and the local path planner, and is used for carrying out global path planning according to the surrounding environment point cloud information and the target point and sending planning path information to the local path planner;
the local path planner is connected with the laser radar, and is used for storing the planned path information, driving the motion control module to control the blind guiding device to move to the target point, and controlling the blind guiding device to avoid the obstacle according to the surrounding environment point cloud information of the laser radar.
6. The interactive blind guide based on intelligent perception according to claim 5, wherein the sensor module further comprises a GPS control unit, the GPS control unit is respectively connected with the scene control unit and the motion control module and used for collecting the position of the blind guide, and the motion control module is driven to control the blind guide to move until the position of the blind guide coincides with a target point.
7. The intelligent perception based interactive blind guide according to claim 6, wherein the main control module further comprises an offset feedback controller, the offset feedback controller being respectively connected with the depth camera and the motion control module;
and in the process in which the GPS control unit drives the motion control module to control the movement of the blind guide, the offset feedback controller controls the blind guide to move along the central line of the blind road according to the surrounding image information acquired by the depth camera.
8. The interactive blind guide device based on intelligent perception according to claim 1, wherein the button control unit comprises an Arduino control board and keys, the keys are installed at the handles of the telescopic pull rods, the Arduino control board is installed on the upper surface of the chassis, and the keys are connected with the main control module through the Arduino control board.
9. The interactive blind guide based on intelligent perception according to claim 1, wherein the motion control module comprises a two-wheel differential driving motor, a differential wheel and universal wheels, the two-wheel differential driving motor is respectively connected with the main control module and the differential wheel, and the differential wheel and the universal wheels are fixed below the chassis.
10. The interactive blind guide based on intelligent perception according to claim 1, wherein,
the main control module adopts an X86 architecture industrial personal computer.
CN202320605455.0U 2023-03-24 2023-03-24 Interactive blind guide device based on intelligent perception Active CN219595151U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202320605455.0U CN219595151U (en) 2023-03-24 2023-03-24 Interactive blind guide device based on intelligent perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202320605455.0U CN219595151U (en) 2023-03-24 2023-03-24 Interactive blind guide device based on intelligent perception

Publications (1)

Publication Number Publication Date
CN219595151U true CN219595151U (en) 2023-08-29

Family

ID=87741805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202320605455.0U Active CN219595151U (en) 2023-03-24 2023-03-24 Interactive blind guide device based on intelligent perception

Country Status (1)

Country Link
CN (1) CN219595151U (en)

Similar Documents

Publication Publication Date Title
CN106340912B (en) A kind of charging pile system, control method and carport
CN206644887U (en) Acoustic control intelligent car based on myRIO platforms
CN203142835U (en) Mobile detecting cart
CN113140140A (en) Education training is with driving car with autopilot function
CN105816303A (en) GPS and visual navigation-based blind guiding system and method thereof
CN111035543A (en) Intelligent blind guiding robot
CN211364762U (en) Automatic drive manned trolley
WO2021114220A1 (en) Full-autonomous unmanned mowing system
CN219595151U (en) Interactive blind guide device based on intelligent perception
CN102240244A (en) Blind guide crutch device and blind guide implementation method thereof
CN105997447A (en) Blind guiding robot with wheel leg structure and utilization method for same
CN107221178A (en) A kind of traffic command control system based on unmanned plane
CN113160691A (en) Unmanned automatic driving vehicle for education and training
CN211906081U (en) Unmanned small-sized sweeping machine control system based on path tracking
CN209904906U (en) Parking area patrol robot based on license plate identification
CN209717726U (en) A kind of remote replacement robot used in grain depot garden
CN111367273A (en) Unmanned small-sized sweeping machine control system based on path tracking and control method thereof
CN113143610A (en) Intelligent wheelchair based on Mecanum wheel mechanism
CN217046427U (en) Intelligent blind guiding robot
CN216388538U (en) Unmanned automatic driving vehicle for education and training
CN103661303A (en) Intelligent electric car battery system capable of automatically moving
CN205801341U (en) Climb step electrodynamic balance car
CN212439313U (en) Intelligent blind guiding stick
CN211750566U (en) Intelligent wheelchair with autonomous calling function
CN209703242U (en) Novel intelligentized miniature laser paver

Legal Events

Date Code Title Description
GR01 Patent grant