CN114935340A - Indoor navigation robot, control system and method - Google Patents
- Publication number
- CN114935340A (application number CN202210559772.3A)
- Authority
- CN
- China
- Prior art keywords
- robot
- control system
- terminal control
- navigation
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Electromagnetism (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computer Networks & Wireless Communication (AREA)
- Manipulator (AREA)
- Navigation (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
An indoor navigation robot comprises a visual camera, a touch screen, a laser radar, support shafts, a bearing disc A, a bearing disc B, a main controller and a chassis. The touch screen and the visual camera are mounted horizontally on bearing disc A, the laser radar is mounted horizontally on bearing disc B, and the main controller is mounted horizontally on the chassis. Guided by advanced techniques from the artificial-intelligence industry, the invention explores several aspects of the development and application of navigation robot systems, addressing problems of conventional robots such as complex navigation structure, narrow application range, low working efficiency, cumbersome operation and high maintenance cost, and therefore has a degree of innovation and commercial value.
Description
Technical Field
The invention relates to the technical field of intelligent robots, in particular to an indoor navigation robot, a control system and a method.
Background
As modern material living standards rise, people pay increasing attention to spiritual enjoyment, and travelling during holidays has become an ever more popular choice. At present, however, many exhibition halls and scenic spots suffer from disordered queues, overcrowding and delayed service, which greatly worsens the visitor experience. After the epidemic in particular, concepts such as the contactless exhibition area and the unmanned museum became popular almost overnight, and against this backdrop of the artificial-intelligence era the navigation robot came into being.
Robot navigation systems have been widely studied at home and abroad. These studies achieve autonomous navigation through techniques such as path planning and cost maps, and obtain fast obstacle avoidance and accurate positioning by strengthening various sensors according to the surrounding environment. Nevertheless, their application scenarios are limited, and problems remain such as complex structure, cumbersome operation, a limited navigation range and high obstacle-avoidance risk.
Disclosure of Invention
In order to solve the above problems, the present invention provides the following technical solutions:
The control method of the indoor navigation robot involves an indoor navigation robot and a terminal control system connected through WIFI or 5G communication. The indoor navigation robot comprises a sending unit and a receiving unit for exchanging control information with the terminal control system. The terminal control system comprises a memory in which an environment map is stored. The method by which the terminal control system controls the indoor navigation of the robot comprises the following steps:
Step 8: repeat steps 3 to 7 until the planned walking route is finished.
Preferably, in step 1, information may be input either by keyboard or by voice.
Preferably, in step 5, the posture information of the robot includes a direction, a position, and a travel speed.
Preferably, in step 5, the navigation and obstacle avoidance strategy includes:
Step 51: the robot traverses the map in an obstacle-free environment. Its starting point is taken as the origin of coordinates; the laser radar scans the environment, the continuous environment information is sampled and discretized, and a two-dimensional occupancy grid map a of the environment is built with the Hector-SLAM algorithm. The final position is recorded as the robot's end coordinate b, and map a is saved and imported into the memory.
Step 53: the robot is placed at the end position, the initial coordinate of its current position (i.e. the end coordinate b) is given, and a target point coordinate is input. The system randomly generates a path 1 to that point; during the subsequent movement the laser radar scans the environment in real time and compares it with map a to obtain the robot's real-time coordinates.
Step 54: when an obstacle suddenly appears in the environment, the real-time map information scanned by the laser radar differs from map a. A signal is then fed back to the controller, and the robot randomly plans a new path 2 to the target point, under the principle that the edge of its shell must never intersect the obstacle boundary, so as to bypass the obstacle.
Step 55: if obstacles are encountered repeatedly, the obstacle-avoidance strategy is repeated until the target point is reached; if replanning occurs more than thirty times, the system reports that the target point cannot be reached.
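The sampling-and-discretization step in step 51 can be illustrated with a minimal sketch that converts one continuous lidar scan into occupied cells of a two-dimensional grid map. This is only the discretization step, not the patent's full Hector-SLAM pipeline (which would also align each scan to the map); all function and parameter names here are illustrative assumptions.

```python
import math

def scan_to_grid(ranges, angle_min, angle_step, pose, resolution=0.05, size=200):
    """Discretize one lidar scan into occupied cells of a 2D grid map.

    ranges: list of beam distances (m); pose: (x, y, theta) of the robot;
    resolution: metres per cell; the grid is size x size cells with the
    world origin at the grid centre.
    """
    x, y, theta = pose
    grid = [[0] * size for _ in range(size)]
    for i, r in enumerate(ranges):
        if not (0.0 < r < resolution * size / 2):
            continue  # discard out-of-range returns
        a = theta + angle_min + i * angle_step
        wx, wy = x + r * math.cos(a), y + r * math.sin(a)  # beam endpoint in world frame
        cx = int(wx / resolution) + size // 2
        cy = int(wy / resolution) + size // 2
        if 0 <= cx < size and 0 <= cy < size:
            grid[cy][cx] = 1  # mark the cell hit by the beam as occupied
    return grid
```

Repeating this for every scan along the robot's traverse, with poses estimated by the SLAM front end, accumulates the grid map a described in step 51.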
Preferably, the indoor intelligent navigation robot comprises a visual camera, a touch screen, a laser radar, support shafts, a bearing disc A, a bearing disc B, a main controller and a chassis. The touch screen and the visual camera are mounted horizontally on bearing disc A, the laser radar is mounted horizontally on bearing disc B, and the main controller is mounted horizontally on the chassis. Bearing disc A is connected vertically to bearing disc B through a plurality of support shafts, and bearing disc B is connected vertically to the chassis through a plurality of support shafts;
the laser radar, the visual camera and the touch screen are connected with the main controller to perform information interaction;
the main controller is internally provided with a voice recognition module, so that the functions of robot voice recognition and intelligent explanation can be realized.
Preferably, the chassis adopts a three-wheel omnidirectional moving structure comprising omnidirectional wheels, servo motors and connecting plates. The three omnidirectional wheels are each connected to a motor and then fixed to the chassis through a connecting plate, with the wheels mounted at 120 degrees to one another.
Preferably, the indoor navigation robot can also be controlled through a mobile phone APP, which provides functions such as path planning and environment scanning and allows various status indicators of the robot to be viewed, such as running speed, battery level and sensor status.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an indoor navigation robot of novel design and strong practicability that applies a variety of technical means, including sensor technology and kinematic analysis, and has high technological content.
2. Guided by advanced techniques from the artificial-intelligence industry, the invention explores several aspects of the development and application of navigation robot systems, addressing problems of conventional robots such as complex navigation structure, narrow application range, low working efficiency, cumbersome operation and high maintenance cost, and therefore has a degree of innovation and commercial value.
3. The invention has a positive effect on strengthening, supplementing and perfecting subject knowledge; future teams can further combine its advantages with the development of the robot engineering discipline to match the industry's leading service robot technology and empower the traditional service industry.
Drawings
FIG. 1 is a general structure diagram of an indoor navigation robot according to the present invention;
FIG. 2 is a block diagram of the exercise system of the present invention;
FIG. 3 is a schematic diagram of the motion formula of the motion system of the present invention;
FIG. 4 is a schematic view of a connection plate structure of the exercise system of the present invention;
FIG. 5 is a schematic view of a motor and an omni-directional wheel in the exercise system of the present invention;
FIG. 6 is an interface diagram of an operating system of the present invention;
FIG. 7 is a flow chart of the voice control of the system of the present invention;
FIG. 8 is a flowchart of a method routine of the present invention;
FIG. 9 is a schematic diagram of a map contour in an obstacle avoidance strategy according to the present invention;
FIG. 10 is a flow chart illustrating the principle of the navigation and obstacle avoidance strategy of the present invention.
In the figures: 1-visual camera; 2-touch screen; 3-bearing disc A; 4-bearing disc B; 5-chassis; 6-support shaft; 7-laser radar; 8-main controller; 9-omnidirectional wheel; 10-servo motor; 11-connecting plate.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As shown in fig. 1, an indoor navigation robot comprises a visual camera 1, a touch screen 2, a laser radar 7, support shafts 6, a bearing disc A3, a bearing disc B4, a main controller 8 and a chassis 5. The touch screen 2 and the visual camera 1 are mounted horizontally on bearing disc A3, the laser radar 7 is mounted horizontally on bearing disc B4, and the main controller 8 is mounted horizontally on the chassis 5. Bearing disc A3 is connected vertically to bearing disc B4 through six support shafts 6, and bearing disc B4 is connected vertically to the chassis 5 through six support shafts 6;
the laser radar 7, the vision camera 1 and the touch screen 2 are connected with the main controller 8 to carry out information interaction.
And a voice recognition module is installed in the main controller 8, so that the functions of robot voice recognition and intelligent explanation can be realized.
The visual camera 1 mounted at the head of the robot can obtain not only plane images but also the position and distance of the photographed object. By acquiring the three-dimensional size and spatial information of environmental objects in real time, it provides basic hardware support for motion capture, three-dimensional modelling, three-dimensional measurement, face recognition, gesture recognition, and navigation and positioning.
As shown in figs. 2-5, the motion system adopts a three-wheel omnidirectional moving structure mainly comprising omnidirectional wheels 9, servo motors 10, connecting plates 11 and the chassis 5. The circumference of the chassis 5 is provided with hollow grooves arranged at 120 degrees to ease installation of the omnidirectional wheels 9. The three omnidirectional wheels 9 are each connected to a motor and fixed to the chassis 5 through a connecting plate 11, with the wheels mounted at 120 degrees to one another, so that motion in any direction in the horizontal plane is possible without a steering structure. By establishing a kinematic model, the relationship between the rotating speeds of the three omnidirectional wheels and the velocity of the centre point of the mobile platform can be analysed quantitatively. Through velocity decomposition, let ω be the angular velocity of the robot, L the distance from the centre of each omnidirectional wheel to the centre of the chassis 5, VA, VB and VC the rim speeds of the three omnidirectional wheels, and Vx and Vy the moving speeds of the robot in the X and Y directions; the formula in fig. 3 is then obtained, enabling accurate control of the movement of the omnidirectional mobile platform.
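The relationship between the platform velocity (Vx, Vy, ω) and the three wheel speeds VA, VB, VC can be sketched as follows. This assumes one common mounting convention, with the wheel drive directions tangential and the wheels placed at 0°, 120° and 240°; fig. 3 of the patent may use a different zero direction, so treat the signs as an assumption.

```python
import math

# Wheel mounting angles, 120 degrees apart (one common convention).
WHEEL_ANGLES = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)

def wheel_speeds(vx, vy, omega, L):
    """Inverse kinematics of a three-wheel omnidirectional platform.

    vx, vy: platform velocity in its own frame (m/s); omega: angular
    velocity (rad/s); L: distance from each wheel to the chassis centre (m).
    Returns (VA, VB, VC), the rim speed of each omni wheel.
    """
    return tuple(-math.sin(a) * vx + math.cos(a) * vy + L * omega
                 for a in WHEEL_ANGLES)
```

Sanity checks under this convention: pure rotation gives all three wheels the same speed L·ω, and for pure translation the three wheel speeds sum to zero.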
The invention provides an indoor navigation robot control system, which comprises an indoor navigation robot and a terminal control system, wherein the indoor navigation robot and the terminal control system are connected through WIFI or 5G communication;
the indoor navigation robot comprises a sending unit and a receiving unit, and is used for receiving and sending control information with a terminal control system;
the terminal control system comprises a memory, and an environment map is stored in the memory.
As shown in fig. 6, on the software side the robot is controlled directly through the operating system of the touch screen 2, and it can also be controlled through a mobile phone APP, which provides functions such as path planning and environment scanning. Various status indicators of the robot, such as running speed, battery level and sensor status, can be observed through the control end. The black area in the figure is an obstacle detected by the laser radar 7, which the robot will avoid, and the robot cruises along a route around the planned exhibition points 0, 1 and 2.
As shown in fig. 7, when the user uses the voice function, the control process is as follows. A command triggering the voice wake-up function is first sent to the voice control module. The voice control module passes the audio collected by the microphone to the voice recognition module, which analyses it and extracts features to produce a recognition result. The result, received by the MCU as a pinyin string, is passed to the voice library for keyword analysis; a specific command is recognized and sent to the controller for execution. The voice library generates an answer as text, converts it into a voice file, and plays the file through the device, completing one round of dialogue.
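The keyword-analysis stage of this dialogue loop might look like the following sketch, mapping the recognized pinyin string to a command and a spoken reply. The pinyin keywords, command names and reply texts are illustrative placeholders, not the patent's actual voice library.

```python
# Illustrative keyword library: pinyin keyword -> (command, spoken reply).
COMMANDS = {
    "qian jin": ("MOVE_FORWARD", "Moving forward."),
    "ting zhi": ("STOP", "Stopping."),
    "jiang jie": ("EXPLAIN", "Starting the exhibit explanation."),
}

def handle_utterance(pinyin):
    """Map a recognized pinyin string to (command, spoken reply).

    Returns (None, fallback reply) when no keyword matches; in the
    patent's flow the reply text would then be synthesized to speech
    and played back to complete the dialogue round.
    """
    for keyword, (cmd, reply) in COMMANDS.items():
        if keyword in pinyin:
            return cmd, reply
    return None, "Sorry, I did not understand."
```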
As shown in fig. 8, the present invention further provides an indoor navigation robot control method, which is applied to the indoor navigation robot control system, and includes:
Step S1: start the terminal control system for initialization and establish a connection with the navigation robot, then input the current coordinate and the target point coordinate of the navigation robot.
Step S2: the navigation robot marks its own position and sends the marking information to the terminal control system through the sending unit; if a response message is successfully received, proceed to step S3, otherwise re-initialize the terminal control system.
Step S3: after successfully receiving the response message, the terminal control system begins navigation. The system randomly generates a planned route to the target point and divides it into a number of positioning points (grids); the laser radar 7 then collects environment information and compares it in real time with the planned-route model stored in the memory.
Step S4: the main controller 8 obtains the robot's action information, identifies the robot's next positioning point from the built-in planned route, compares the coordinates of that positioning point with the robot's current position, plans the direction and distance the robot needs to travel according to the straight-line walking rule, and passes the planned action to the sending unit.
Step S5: the sending unit sends the planned action to the terminal control system, which, from the robot's current attitude information and the navigation and obstacle-avoidance strategy, plans the direction and distance the robot must travel and converts them into a steering angle and travel speed as an operation instruction.
Step S6: the terminal control system sends the operation instruction to the receiving unit, which passes it to the main controller 8; the main controller 8 controls the robot's movement according to the instruction.
Step S7: repeat steps S3 to S6 until the obtained position coordinates of the robot coincide with the coordinates of the next positioning point, i.e. until the operation instructions received by the terminal control system are all 0, and send those instructions to the receiving unit.
Step S8: repeat steps S3 to S7 until the planned walking route is finished.
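Steps S3 to S8 amount to a waypoint-following loop: head straight toward each positioning point in turn and stop when the coordinates coincide. A simplified, simulated version is sketched below; the function and parameter names are assumptions, and the terminal-control round trip of steps S4-S6 is collapsed into direct simulated movement.

```python
import math

def follow_route(position, waypoints, step=0.1, tol=0.05, max_iters=10000):
    """Walk straight toward each positioning point until the route ends.

    position: (x, y) of the robot; waypoints: (x, y) positioning points of
    the planned route; step: distance moved per control cycle; tol: how
    close counts as "coordinates coincide". Returns the list of
    (heading, distance) operation instructions issued; a (0.0, 0.0)
    instruction marks arrival at a positioning point, as in step S7.
    """
    x, y = position
    instructions = []
    for wx, wy in waypoints:
        for _ in range(max_iters):
            dx, dy = wx - x, wy - y
            dist = math.hypot(dx, dy)
            if dist <= tol:
                instructions.append((0.0, 0.0))  # all-zero instruction: point reached
                break
            heading = math.atan2(dy, dx)         # straight-line walking rule
            move = min(step, dist)
            instructions.append((heading, move))
            x += move * math.cos(heading)        # simulate execution of the instruction
            y += move * math.sin(heading)
    return instructions
```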
In step S1, both keyboard input and voice input can be used to input information.
In step S5, the posture information of the robot includes a direction, a position, and a travel speed.
A laser radar 7 is mounted inside the robot and performs ring scanning of the complex surrounding environment, feeding the collected information into an embedded development board. The high-accuracy contour information of the site environment, together with the robot's own positioning, helps the machine perceive its surroundings in an unknown environment. The robot can connect to a WIFI network; after the two-dimensional grid map is built, the user controls the machine through the control end to cruise along a fixed map route, achieving convenient, intelligent control.
In the map contour information constructed from the laser radar 7, the red circle in fig. 9 is the preset contour of the robot, the region connecting the red points is the obstacle area scanned by the laser radar 7, and the blue part is the safe expansion area, whose width is the robot's safe moving radius. When planning a path, the robot therefore only needs to be treated as a particle in the environment and its own size can be ignored. A further advantage of the safe expansion is that adjacent small obstacles are merged into a single region, reducing the complexity of the map. To avoid obstacles accurately while moving, the edge of the robot's shell must never intersect a red area, and the centre of the robot must never intersect a blue area.
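The safe expansion area can be sketched as a dilation of the occupied grid cells by the robot's safe moving radius. This is a generic grid-inflation routine illustrating the idea, not the patent's exact procedure; note how inflating nearby obstacles naturally merges them into one region.

```python
def inflate(grid, radius_cells):
    """Dilate each occupied cell by the robot's safe moving radius.

    grid: 2D list of 0/1 occupancy values; radius_cells: safe radius in
    cells. Returns an equally sized grid in which 1 marks every cell the
    robot's centre must not enter, so the planner can treat the robot
    as a point.
    """
    rows, cols = len(grid), len(grid[0])
    r2 = radius_cells * radius_cells
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if not grid[i][j]:
                continue
            # Mark every cell within radius_cells of this obstacle cell.
            for di in range(-radius_cells, radius_cells + 1):
                for dj in range(-radius_cells, radius_cells + 1):
                    if di * di + dj * dj <= r2:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            out[ni][nj] = 1
    return out
```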
As shown in fig. 10, in step S5 the navigation and obstacle-avoidance strategy comprises:
Step S51: the robot traverses the map in an obstacle-free environment. Its starting point is taken as the origin of coordinates; the laser radar 7 scans the environment, the continuous environment information is sampled and discretized, and a two-dimensional occupancy grid map a of the environment is built with the Hector-SLAM algorithm. The final position is recorded as the robot's end coordinate b, and map a is saved and imported into the memory.
Step S52: the system is restarted, the grid map model a is imported from the map file, and the safe expansion and the landmark point set are extracted.
Step S53: the robot is placed at the end position, the initial coordinate of its current position (i.e. the end coordinate b) is given, and a target point coordinate is input. The system randomly generates a path 1 to the target point; during the subsequent movement the laser radar 7 scans the environment in real time and compares it with map a to obtain the robot's real-time coordinates.
Step S54: when an obstacle suddenly appears in the environment, the real-time map information scanned by the laser radar 7 differs from map a. A signal is then fed back to the controller, and the robot randomly plans a new path 2 to the target point, under the principle that the edge of its shell must never intersect the obstacle boundary, so as to bypass the obstacle.
Step S55: if obstacles are encountered repeatedly, the obstacle-avoidance strategy is repeated until the target point is reached; if replanning occurs more than thirty times, the system reports that the target point cannot be reached.
Based on the kinematic model and in combination with the algorithm, path planning proceeds as follows: a path is planned from the known environment map, the robot is controlled to move along this fixed path, and whenever the environment information detected by the sensors is inconsistent with the original environment information, the robot re-plans a path from its current position to the target point. This loops until the robot reaches the target point or the target point is detected as unreachable.
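The replan-until-reached-or-unreachable loop, with the thirty-replan limit from step S55, can be sketched as follows. `plan_path` and `path_blocked` are caller-supplied placeholders standing in for the random path generator and the lidar-versus-map comparison; the names are not from the patent.

```python
def navigate(plan_path, path_blocked, max_replans=30):
    """Replanning loop of fig. 10.

    plan_path(): returns a candidate path to the target point;
    path_blocked(path): True if the lidar's real-time map differs from
    the stored map along this path (an unexpected obstacle).
    Returns a usable path, or None after thirty replans to signal that
    the target point cannot be reached.
    """
    path = plan_path()
    for _ in range(max_replans):
        if not path_blocked(path):
            return path        # path is clear: follow it to the target point
        path = plan_path()     # obstacle detected: re-plan around it
    return None                # report: target point unreachable
```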
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (7)
1. An indoor navigation robot control method, characterized in that the indoor navigation robot comprises a sending unit and a receiving unit for sending control information to and receiving control information from a terminal control system; the terminal control system comprises a memory in which an environment map is stored, and the method by which the terminal control system controls the indoor navigation of the robot comprises the following steps:
step 1, firstly, starting a terminal control system for initialization, establishing connection with the navigation robot, and then inputting the current coordinate and the target point coordinate of the navigation robot;
step 2, the navigation robot marks its own position and sends the marking information to the terminal control system through the sending unit; if a response message is successfully received, proceed to step 3, and if no response message is received, re-initialize the terminal control system;
step 3, after the terminal control system successfully receives the response message, the terminal control system starts to implement navigation, the system randomly generates a planned route going to a target point, divides the route into a plurality of positioning points (grids), acquires environmental information through the laser radar and compares the environmental information with a planned route model built in a memory in real time;
step 4, the main controller acquires action information of the robot, identifies the next positioning point of the robot at the moment according to the built-in planned route, compares the coordinates of the positioning point with the position coordinates of the robot, plans the direction and distance of the robot to travel according to the straight-line walking rule, and transmits the planned robot action to the sending unit;
step 5, the sending unit sends the planned robot action to a terminal control system, and the terminal control system plans the direction and the distance of the robot to be traveled according to the attitude information of the robot at the moment and the navigation and obstacle avoidance strategies and converts the direction and the distance into an angle and a traveling speed of the robot to be steered as an operation instruction;
step 6, the terminal control system sends the operation instruction to a receiving unit, the receiving unit sends the operation instruction of the robot to a main controller, and the main controller controls the robot to walk according to the operation instruction;
step 7, repeating the steps 3 to 6 until the acquired position coordinate of the robot is consistent with the coordinate of the next positioning point, namely when the operation instruction received by the terminal control system is 0, sending the operation instruction to the receiving unit;
and 8, repeating the steps 3 to 7 until the planned walking route is finished.
2. The indoor navigation robot control method of claim 1, wherein in the step 1, both a keyboard input mode and a voice input mode can be used for inputting information.
3. The indoor navigation robot control method of claim 1, wherein in the step 5, the attitude information of the robot includes orientation, position and traveling speed.
4. The indoor navigation robot control method of claim 1, wherein in the step 5, the navigation and obstacle avoidance strategy includes:
step 51, the robot traverses the map in an obstacle-free environment; its starting point is taken as the origin of coordinates, the laser radar scans the environment, the continuous environment information is sampled and discretized, a two-dimensional occupancy grid map a of the environment is built with the Hector-SLAM algorithm, the final position is recorded as the robot's end coordinate b, and the map a is saved and imported into the memory;
step 52, the system is restarted, the grid map model a is imported from the map file, and the safe expansion and the landmark point set are extracted;
step 53, the robot is placed at the end position, the initial coordinate of its current position (namely the end coordinate b) is given, and a target point coordinate is input; the system randomly generates a path 1 to that point, and during the subsequent movement the laser radar scans the environment information in real time and compares it with the map a to obtain the robot's real-time coordinates;
step 54, when an obstacle suddenly appears in the environment, the real-time map information scanned by the laser radar differs from the map a; a signal is then fed back to the controller, and the robot randomly plans a new path 2 to the target point, under the principle that the edge of the shell must never intersect the obstacle boundary, so as to avoid the obstacle;
and step 55, if obstacles are encountered repeatedly, repeating the obstacle avoidance strategy until the target point is reached, and if replanning exceeds thirty times, outputting that the target point cannot be reached.
5. The indoor navigation robot control method of claim 1, wherein the navigation robot comprises: the device comprises a visual camera, a touch screen, a laser radar, a support shaft, a bearing disc A, a bearing disc B, a main controller and a chassis; the touch screen and the vision camera are horizontally arranged on the bearing plate A, the laser radar is horizontally arranged on the bearing plate B, and the main controller is horizontally arranged on the chassis; the bearing plate A is vertically connected with the bearing plate B through a plurality of support shafts, and the bearing plate B is vertically connected with the chassis through a plurality of support shafts;
the laser radar, the visual camera and the touch screen are connected to the main controller for information interaction;
the main controller has a built-in voice recognition module, enabling the robot's voice recognition and intelligent explanation functions.
6. The indoor navigation robot control method of claim 5, wherein the chassis adopts a three-wheel omnidirectional movement structure comprising omnidirectional wheels, servo motors and connecting plates; each of the three omnidirectional wheels is connected to a servo motor and then fixed to the chassis through a connecting plate, and the wheels are mounted 120 degrees apart from one another.
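For reference, the inverse kinematics of such a three-wheel omnidirectional base, a standard result for wheels mounted 120 degrees apart rather than anything specified in the patent, maps a desired chassis velocity to individual wheel rim speeds. The wheel distance from the chassis centre `R` is an assumed parameter.

```python
import math

def wheel_speeds(vx, vy, omega, R=0.15):
    """Rim speeds for three omni wheels mounted 120 degrees apart at
    distance R (metres) from the chassis centre, each wheel's drive
    axis tangent to that circle.

    (vx, vy) is the desired chassis translation velocity (m/s) and
    omega the desired rotation rate (rad/s); returns one rim speed
    per wheel for the servo motors to track.
    """
    mount_angles = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)  # 120 deg spacing
    return [-math.sin(a) * vx + math.cos(a) * vy + R * omega
            for a in mount_angles]
```

Two sanity checks follow from the geometry: pure rotation drives every wheel at the same rim speed `R * omega`, and for pure translation the three rim speeds sum to zero, since the sines and cosines of angles 120 degrees apart each cancel.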
7. The indoor navigation robot control method of claim 1, wherein the indoor navigation robot can additionally be controlled through a mobile phone APP, which provides functions such as path planning and environment scanning, and also displays the robot's status indicators, such as travel speed, battery level and sensor operating status.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210559772.3A CN114935340A (en) | 2022-05-23 | 2022-05-23 | Indoor navigation robot, control system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114935340A true CN114935340A (en) | 2022-08-23 |
Family
ID=82864543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210559772.3A Withdrawn CN114935340A (en) | 2022-05-23 | 2022-05-23 | Indoor navigation robot, control system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114935340A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116518969A (en) * | 2023-04-25 | 2023-08-01 | 南京艾小宝智能科技有限公司 | Visual positioning system and method under indoor scene |
CN116518969B (en) * | 2023-04-25 | 2023-10-20 | 南京艾小宝智能科技有限公司 | Visual positioning system and method under indoor scene |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11914369B2 (en) | Multi-sensor environmental mapping | |
US11900536B2 (en) | Visual-inertial positional awareness for autonomous and non-autonomous tracking | |
CN108673501B (en) | Target following method and device for robot | |
JP3994950B2 (en) | Environment recognition apparatus and method, path planning apparatus and method, and robot apparatus | |
US11422566B2 (en) | Artificial intelligence robot cleaner | |
US11269328B2 (en) | Method for entering mobile robot into moving walkway and mobile robot thereof | |
CN114474061B (en) | Cloud service-based multi-sensor fusion positioning navigation system and method for robot | |
Tian et al. | RGB-D based cognitive map building and navigation | |
WO2020226085A1 (en) | Information processing device, information processing method, and program | |
US11385643B2 (en) | Artificial intelligence moving agent | |
US11540690B2 (en) | Artificial intelligence robot cleaner | |
CN113085896B (en) | Auxiliary automatic driving system and method for modern rail cleaning vehicle | |
JP2020079997A (en) | Information processing apparatus, information processing method, and program | |
Cui et al. | Search and rescue using multiple drones in post-disaster situation | |
EP3992747A1 (en) | Mobile body, control method, and program | |
Tuvshinjargal et al. | Hybrid motion planning method for autonomous robots using kinect based sensor fusion and virtual plane approach in dynamic environments | |
CN114935340A (en) | Indoor navigation robot, control system and method | |
CN117406771B (en) | Efficient autonomous exploration method, system and equipment based on four-rotor unmanned aerial vehicle | |
CN113081525A (en) | Intelligent walking aid equipment and control method thereof | |
Lim et al. | Evolution of a reliable and extensible high-level control system for an autonomous car | |
Wang et al. | Smart seeing eye dog wheeled assistive robotics | |
CN115359222A (en) | Unmanned interaction control method and system based on augmented reality | |
Zhang et al. | A novel system for guiding unmanned vehicles based on human gesture recognition | |
CN114782639A (en) | Rapid differential latent AGV dense three-dimensional reconstruction method based on multi-sensor fusion | |
Piponidis et al. | Towards a Fully Autonomous UAV Controller for Moving Platform Detection and Landing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20220823 |