CN117908031A - Autonomous navigation system of robot - Google Patents


Info

Publication number
CN117908031A
CN117908031A (application CN202410085125.2A)
Authority
CN
China
Prior art keywords
image
obstacle avoidance
robot
obstacle
wid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410085125.2A
Other languages
Chinese (zh)
Inventor
王武东
雷宁
李晓萍
许剑铭
林绵峰
邱志豪
Current Assignee
Guangdong Turingzhi New Technology Co ltd
Original Assignee
Guangdong Turingzhi New Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Turingzhi New Technology Co ltd
Priority to CN202410085125.2A
Publication of CN117908031A
Legal status: Pending

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the field of navigation and discloses an autonomous navigation system for a robot, comprising an obstacle avoidance module and a moving module. The obstacle avoidance module comprises an ultrasonic ranging sensor, a distance judging device, a camera device, an image recognition device and an obstacle avoidance device. The ultrasonic ranging sensor acquires the distance between the robot and an object directly ahead in its moving direction; the distance judging device judges whether this distance is smaller than a set distance threshold; the camera device photographs the area directly ahead of the robot's moving direction to obtain an obstacle avoidance image; the image recognition device identifies the obstacle avoidance image and judges whether an obstacle is present in it. When the distance is smaller than the set distance threshold or an obstacle is present in the obstacle avoidance image, the obstacle avoidance device controls the moving module so that the robot bypasses the obstacle. The invention enables the robot to identify dynamic obstacles within the measurement blind zone of the ultrasonic ranging sensor.

Description

Autonomous navigation system of robot
Technical Field
The invention relates to the field of navigation, in particular to an autonomous navigation system of a robot.
Background
In autonomous navigation, a robot usually needs to identify obstacles on its travel path, bypass them using a preset obstacle avoidance algorithm, and then return to the planned path. In the prior art an ultrasonic ranging sensor is typically used for obstacle avoidance, but such a sensor has a measurement blind zone: the ultrasonic pulse is excited by a high-voltage drive pulse, and after the drive pulse ends the transducer undergoes damped oscillation that produces a tailing signal which cannot be effectively distinguished from the echo signal. To avoid the influence of the tailing signal, echo reception therefore begins only after a waiting period, and the distance D that the ultrasonic wave covers during this period constitutes the detection blind zone.
This creates difficulty for robot obstacle avoidance: the ultrasonic sensor on the robot can detect static obstacles at a relatively long distance, but a dynamic obstacle, such as one that suddenly intrudes into the travel path at a distance from the robot smaller than D, goes undetected.
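As a rough sketch of the blind-zone effect described above (the speed of sound and the blanking time below are illustrative assumptions, not values from the patent):

```python
def blind_zone_distance(blanking_time_s, speed_of_sound=343.0):
    """Distance D that an ultrasonic pulse covers during the post-pulse
    blanking window, i.e. the sensor's detection blind zone."""
    # Round trip: the echo travels to the target and back, so divide by 2.
    return speed_of_sound * blanking_time_s / 2.0

# A 1 ms blanking window leaves a blind zone of roughly 17 cm.
d = blind_zone_distance(0.001)
```

Any obstacle entering the path closer than this distance is invisible to the ultrasonic sensor, which is the gap the camera device fills.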
Disclosure of Invention
The invention aims to disclose a robot autonomous navigation system that solves the problem of identifying dynamic obstacles in time when a robot uses ultrasonic ranging for obstacle avoidance during autonomous navigation.
In order to achieve the above purpose, the present invention provides the following technical solutions:
The invention provides a robot autonomous navigation system, which comprises an obstacle avoidance module and a mobile module;
The obstacle avoidance module comprises an ultrasonic ranging sensor, a distance judging device, a camera device, an image recognition device and an obstacle avoidance device;
The ultrasonic ranging sensor is used for acquiring the distance between an object right in front of the moving direction of the robot and the robot;
the distance judging device is used for judging whether the distance obtained by the ultrasonic ranging sensor is smaller than a set distance threshold value or not;
the camera device is used for shooting the right front of the moving direction of the robot to obtain an obstacle avoidance image;
The image recognition device is used for recognizing the obstacle avoidance image and judging whether an obstacle exists in the obstacle avoidance image;
The obstacle avoidance device is used for controlling the mobile module based on a preset obstacle avoidance algorithm when the distance is smaller than a set distance threshold value or an obstacle exists in the obstacle avoidance image, so that the robot can bypass the obstacle.
Optionally, the system further comprises a positioning module and a path planning module;
the positioning module is used for re-acquiring the coordinates of the current position of the robot after the robot bypasses the obstacle;
the path planning module is used for re-planning the navigation path between the coordinates obtained by the positioning module and the coordinates of the end point.
Optionally, the moving module is further configured to control the robot to move according to the navigation path obtained by the path planning module.
Optionally, the image capturing apparatus includes a capturing interval determining unit and a capturing unit;
The shooting interval determining unit is used for determining the shooting interval of the shooting unit periodically according to the obstacle avoidance image obtained by the shooting unit;
The shooting unit is used for shooting the right front of the moving direction of the robot according to the shooting interval, and obstacle avoidance images are obtained.
Optionally, determining the shooting interval of the shooting unit according to the obstacle avoidance image obtained by the shooting unit includes:
identifying the obstacle avoidance image, and obtaining the pavement proportion in the obstacle avoidance image;
the photographing interval is calculated based on the road surface proportion.
Optionally, identifying the obstacle avoidance image, and acquiring the road surface proportion in the obstacle avoidance image includes:
Preprocessing the obstacle avoidance image to obtain an image to be identified;
inputting an image to be identified into a pre-trained neural network for identification, and obtaining a pavement area in the image to be identified;
acquiring the number of columns of pixel points in a road surface area;
Dividing the number of columns of the pixel points of the road surface area by the number of columns of the image to be identified to obtain the road surface proportion.
Optionally, identifying the obstacle avoidance image, and determining whether an obstacle exists in the obstacle avoidance image includes:
denote by k the serial number of the obstacle avoidance image most recently obtained by the camera device, and by p_k that image;
obtain the obstacle avoidance image p_{k-1} with serial number k-1;
acquire the moving speed v_{k-1} of the robot at the moment the camera device captured p_{k-1};
calculate the width wid_{k-1} of the edge comparison region based on v_{k-1};
obtain the comparison images cmp_k and cmp_{k-1} corresponding to p_k and p_{k-1}, respectively, based on wid_{k-1};
calculate the approximate proportion between cmp_k and cmp_{k-1};
if the approximate proportion is less than the set approximate proportion threshold, an obstacle is present in p_k.
Optionally, calculating the width wid_{k-1} of the edge comparison region based on v_{k-1} includes:
calculating wid_{k-1} from v_{k-1}, v_std and wid_std, where v_std denotes the highest moving speed of the robot and wid_std denotes a set width.
Optionally, the process of acquiring the comparison image cmp_k corresponding to p_k based on wid_{k-1} includes:
denoting by n and m the numbers of rows and columns, respectively, of pixels in the obstacle avoidance image;
storing the pixels in p_k whose abscissa is in the interval [1, wid_{k-1}] into a set pixuel_k;
storing the pixels in p_k whose abscissa is in the interval [m-wid_{k-1}, m] into the set pixuel_k;
storing the pixels in p_k whose ordinate is in the interval [1, wid_{k-1}] into the set pixuel_k;
storing the pixels in p_k whose ordinate is in the interval [n-wid_{k-1}, n] into the set pixuel_k;
the pixels in pixuel_k form the comparison image cmp_k corresponding to p_k.
Optionally, the process of acquiring the comparison image cmp_{k-1} corresponding to p_{k-1} based on wid_{k-1} includes:
denoting by n and m the numbers of rows and columns, respectively, of pixels in the obstacle avoidance image;
storing the pixels in p_{k-1} whose abscissa is in the interval [1, wid_{k-1}] into a set pixuel_{k-1};
storing the pixels in p_{k-1} whose abscissa is in the interval [m-wid_{k-1}, m] into the set pixuel_{k-1};
storing the pixels in p_{k-1} whose ordinate is in the interval [1, wid_{k-1}] into the set pixuel_{k-1};
storing the pixels in p_{k-1} whose ordinate is in the interval [n-wid_{k-1}, n] into the set pixuel_{k-1};
the pixels in pixuel_{k-1} form the comparison image cmp_{k-1} corresponding to p_{k-1}.
The beneficial effects are that:
Compared with the prior art, a camera device is added to the robot's obstacle avoidance process to obtain an obstacle avoidance image of the area ahead of the travel path. Combined with the ultrasonic ranging sensor, whenever the measured distance falls below the set distance threshold or an obstacle is detected in the obstacle avoidance image, the moving module is controlled by the preset obstacle avoidance algorithm, so that the robot can identify dynamic obstacles within the measurement blind zone of the ultrasonic ranging sensor.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a robot autonomous navigation system according to the present invention.
Fig. 2 is another schematic view of a robot autonomous navigation system according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides an autonomous navigation system of a robot, which is an embodiment shown in fig. 1, and comprises an obstacle avoidance module and a mobile module;
The obstacle avoidance module comprises an ultrasonic ranging sensor, a distance judging device, a camera device, an image recognition device and an obstacle avoidance device;
The ultrasonic ranging sensor is used for acquiring the distance between an object right in front of the moving direction of the robot and the robot;
the distance judging device is used for judging whether the distance obtained by the ultrasonic ranging sensor is smaller than a set distance threshold value or not;
the camera device is used for shooting the right front of the moving direction of the robot to obtain an obstacle avoidance image;
The image recognition device is used for recognizing the obstacle avoidance image and judging whether an obstacle exists in the obstacle avoidance image;
The obstacle avoidance device is used for controlling the mobile module based on a preset obstacle avoidance algorithm when the distance is smaller than a set distance threshold value or an obstacle exists in the obstacle avoidance image, so that the robot can bypass the obstacle.
According to the invention, a camera device is added to the robot's obstacle avoidance process to obtain an obstacle avoidance image of the area ahead of the travel path. Combined with the ultrasonic ranging sensor, whenever the measured distance falls below the set distance threshold or an obstacle is detected in the obstacle avoidance image, the moving module is controlled by the preset obstacle avoidance algorithm, so that the robot can identify dynamic obstacles within the measurement blind zone of the ultrasonic ranging sensor.
In addition, since two obstacle detection modes based on different principles are provided, the robot's obstacle avoidance capability has a degree of redundancy: when one means becomes unavailable, the robot can still avoid obstacles with the other. Operating both means simultaneously also improves the robot's ability to detect and recognize dynamic obstacles.
Further, the preset obstacle avoidance algorithm may be a Bug algorithm, a Bug1 algorithm, a Bug2 algorithm, a PFM algorithm, a VFH algorithm, or the like.
The basic idea of the Bug algorithm is that, once an obstacle is found, the robot walks around it to avoid it.
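The walk-around behavior can be sketched as a two-state controller; the state names and symbolic actions below are hypothetical illustrations, not taken from the patent:

```python
def bug_step(state, obstacle_ahead, back_on_path):
    """One decision step of a minimal Bug-style controller.
    state is 'GO_TO_GOAL' or 'FOLLOW_BOUNDARY'; returns (new_state, action).
    Actions are symbolic; a real controller would map them to wheel commands."""
    if state == "GO_TO_GOAL":
        if obstacle_ahead:
            return "FOLLOW_BOUNDARY", "turn_and_hug_obstacle"
        return "GO_TO_GOAL", "move_toward_goal"
    # FOLLOW_BOUNDARY: skirt the obstacle until the planned path is regained.
    if back_on_path:
        return "GO_TO_GOAL", "move_toward_goal"
    return "FOLLOW_BOUNDARY", "follow_obstacle_edge"
```

Variants such as Bug1 and Bug2 differ mainly in the condition used to decide when to leave the obstacle boundary.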
Further, the set distance threshold may be 1 meter.
Further, the mobile module includes a traveling structure provided at the bottom of the robot and a device for controlling the traveling structure.
The travel structure may be tracks, wheels, or the like.
Optionally, as shown in fig. 2, the system further comprises a positioning module and a path planning module;
the positioning module is used for re-acquiring the coordinates of the current position of the robot after the robot bypasses the obstacle;
the path planning module is used for re-planning the navigation path between the coordinates obtained by the positioning module and the coordinates of the end point.
Further, the positioning module may be a GPS positioning device, UWB positioning device, etc. The GPS positioning device is used for positioning outdoors, and the UWB positioning device is used for positioning indoors.
Furthermore, the path planning module can re-plan the path using Dijkstra's algorithm, the A* algorithm, the PRM algorithm, or other algorithms to obtain a navigation path.
Optionally, the moving module is further configured to control the robot to move according to the navigation path obtained by the path planning module.
Optionally, the image capturing apparatus includes a capturing interval determining unit and a capturing unit;
The shooting interval determining unit is used for determining the shooting interval of the shooting unit periodically according to the obstacle avoidance image obtained by the shooting unit;
The shooting unit is used for shooting the right front of the moving direction of the robot according to the shooting interval, and obstacle avoidance images are obtained.
Further, the period between two adjacent determinations of the shooting interval is 20 seconds.
After the photographing interval is determined, the photographing unit continuously photographs the right front of the moving direction of the robot according to the photographing interval.
Optionally, determining the shooting interval of the shooting unit according to the obstacle avoidance image obtained by the shooting unit includes:
identifying the obstacle avoidance image, and obtaining the pavement proportion in the obstacle avoidance image;
the photographing interval is calculated based on the road surface proportion.
Optionally, calculating the shooting interval based on the road surface proportion includes:
The shooting interval shtitr is calculated from the following quantities: v_std, the highest moving speed of the robot; v_rec, the moving speed of the robot when the calculation of the shooting interval starts; rodpro, the road surface proportion in the obstacle avoidance image; stdpro, the set road surface proportion; stditr, the standard shooting interval; and α_1 and α_2, the preset speed weight and proportion weight respectively.
In the invention, by adjusting the shooting interval periodically, the interval is suitably enlarged when the probability of encountering a dynamic obstacle is low. This slows the accumulation of shutter actuations and so extends the service life of the camera device, and it also reduces the power consumed by shooting and by identifying obstacle avoidance images, prolonging the robot's endurance.
During movement the robot may encounter illegally parked vehicles or traffic congestion; the width of the passable road ahead of the robot then decreases and the blind area of its field of view increases.
The invention expresses the probability of encountering a dynamic obstacle through the difference between the current and highest moving speeds together with the road surface proportion. The lower the road surface proportion, the larger the blind area directly ahead of the robot's moving direction and the lower the probability of identifying in time a moving obstacle rushing out from the roadside; if the robot is also moving fast, less reaction time remains, so the shooting interval is reduced to raise the probability of identifying dynamic obstacles in time. Conversely, when the road surface proportion is large and the moving speed low, the blind area is small and roadside obstacles are more likely to be recognized in time, so the shooting interval is correspondingly enlarged to reduce power consumption.
In addition, the magnitude of the change in the shooting interval adapts to the robot's moving speed and the road surface proportion, so the interval can be regulated more accurately, reducing power consumption while preserving the probability of discovering dynamic obstacles in time.
Further, the set road surface proportion stdpro is a preset value.
Further, the standard photographing interval is 0.1 seconds.
Further, the speed weight and the ratio weight are 0.4 and 0.6, respectively.
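Since the patent's exact interval formula is not reproduced above, the sketch below uses one plausible weighted form that matches the described behavior (the interval shrinks as speed rises and as the road proportion falls); the default values of v_std and stdpro are assumptions:

```python
def shooting_interval(v_rec, rodpro, v_std=1.0, stdpro=0.8, stditr=0.1,
                      alpha1=0.4, alpha2=0.6):
    """One plausible form of the interval update: a fast robot and a low
    visible road proportion both shrink the shooting interval, matching
    the behavior described in the text."""
    speed_term = alpha1 * (1.0 - v_rec / v_std)   # fast robot -> small term
    road_term = alpha2 * (rodpro / stdpro)        # narrow road -> small term
    return stditr * (speed_term + road_term)
```

With the default weights 0.4 and 0.6 and standard interval 0.1 s, a robot at top speed on a wide road shoots every 0.06 s, while a slow robot on the same road shoots less often.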
Optionally, identifying the obstacle avoidance image, and acquiring the road surface proportion in the obstacle avoidance image includes:
Preprocessing the obstacle avoidance image to obtain an image to be identified;
inputting an image to be identified into a pre-trained neural network for identification, and obtaining a pavement area in the image to be identified;
acquiring the number of columns of pixel points in a road surface area;
Dividing the number of columns of the pixel points of the road surface area by the number of columns of the image to be identified to obtain the road surface proportion.
The neural network can identify the obstacle avoidance image and obtain the road surface area within it. The neural network may be a CNN.
Optionally, preprocessing the obstacle avoidance image to obtain an image to be identified, including:
Filtering the obstacle avoidance image to obtain the image to be identified.
The filtering may use a Gaussian filtering algorithm, a bilateral filtering algorithm, or the like. Filtering the obstacle avoidance image reduces image noise and improves image quality, so that the road surface proportion can be identified more accurately.
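The column-based road proportion of the preceding steps can be sketched as follows, assuming the (hypothetical) segmentation network outputs a boolean road mask:

```python
import numpy as np

def road_proportion(road_mask):
    """road_mask: boolean H x W array marking the pixels the segmentation
    network labelled as road surface.  The proportion is the number of image
    columns containing any road pixel divided by the total column count."""
    road_columns = np.any(road_mask, axis=0)   # True for columns with road
    return road_columns.sum() / road_mask.shape[1]

# e.g. a 4 x 10 mask whose road region spans columns 2..7 gives 0.6
```

The resulting value rodpro feeds directly into the shooting-interval calculation described above.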
Optionally, identifying the obstacle avoidance image, and determining whether an obstacle exists in the obstacle avoidance image includes:
denote by k the serial number of the obstacle avoidance image most recently obtained by the camera device, and by p_k that image;
obtain the obstacle avoidance image p_{k-1} with serial number k-1;
acquire the moving speed v_{k-1} of the robot at the moment the camera device captured p_{k-1};
calculate the width wid_{k-1} of the edge comparison region based on v_{k-1};
obtain the comparison images cmp_k and cmp_{k-1} corresponding to p_k and p_{k-1}, respectively, based on wid_{k-1};
calculate the approximate proportion between cmp_k and cmp_{k-1};
if the approximate proportion is less than the set approximate proportion threshold, an obstacle is present in p_k.
Specifically, since the shooting interval of the invention is very small, the objects captured in two obstacle avoidance images with adjacent serial numbers are essentially the same. If a dynamic obstacle appears, the two images differ, so the dynamic obstacle can be identified by calculating the approximate proportion.
However, if the approximate proportion were calculated over the whole image, the very large number of pixels involved would make the computation too slow to meet the requirement of finding obstacles in time.
The camera of the invention identifies dynamic obstacles relatively close to the robot; more distant dynamic obstacles can be identified by the ultrasonic sensor. When an obstacle appears in front of the robot it therefore occupies a relatively large proportion of the obstacle avoidance image and, with high probability, intersects the image's edge. Accordingly, the invention calculates the width of the edge comparison region, extracts a comparison image composed of the border regions of the obstacle avoidance image, and computes the approximate proportion over it; the number of pixels involved in the calculation is greatly reduced, so whether an obstacle is present can be judged rapidly.
The larger the serial number is, the later the shooting time of the obstacle avoidance image is.
Further, the set approximate proportion threshold value is 0.85.
Optionally, calculating the width wid_{k-1} of the edge comparison region based on v_{k-1} includes:
calculating wid_{k-1} from v_{k-1}, v_std and wid_std, where v_std denotes the highest moving speed of the robot and wid_std denotes a set width.
Specifically, the width of the comparison region is inversely related to the moving speed: the faster the robot moves, the smaller the width, because a faster robot is more likely to be close to a dynamic obstacle, and the difference between the border regions of the obstacle avoidance images with serial numbers k and k-1 is then larger, so only a small region needs to be checked to determine whether a dynamic obstacle exists. When the moving speed is slower, a dynamic obstacle is more likely to occupy only a small proportion of the obstacle avoidance image, and effective detection then requires enlarging the width of the edge comparison region.
The width of the edge comparison region thus adapts to the robot's moving speed: the faster the robot moves, and hence the less reaction time remains for obstacle avoidance, the fewer pixels are used to calculate the approximate proportion, while the accuracy of the obtained obstacle avoidance result is maintained.
Optionally, the set width wid_std is 30. The set width may increase as the resolution of the obstacle avoidance image increases.
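The inverse relation between speed and strip width can be sketched as below; since the exact formula is not reproduced in this text, the form chosen here is only one plausible monotone candidate that never lets the strip collapse to zero width:

```python
def edge_width(v, v_std=1.0, wid_std=30):
    """One plausible inverse relation between robot speed v and the edge
    comparison strip width: full set width when stationary, half the set
    width at top speed, never below one pixel."""
    return max(1, round(wid_std * v_std / (v_std + v)))
```

At v = 0 the strip is the full 30 pixels; at the assumed top speed of 1.0 it narrows to 15.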
Optionally, the process of acquiring the comparison image cmp_k corresponding to p_k based on wid_{k-1} includes:
denoting by n and m the numbers of rows and columns, respectively, of pixels in the obstacle avoidance image;
storing the pixels in p_k whose abscissa is in the interval [1, wid_{k-1}] into a set pixuel_k;
storing the pixels in p_k whose abscissa is in the interval [m-wid_{k-1}, m] into the set pixuel_k;
storing the pixels in p_k whose ordinate is in the interval [1, wid_{k-1}] into the set pixuel_k;
storing the pixels in p_k whose ordinate is in the interval [n-wid_{k-1}, n] into the set pixuel_k;
the pixels in pixuel_k form the comparison image cmp_k corresponding to p_k.
This acquisition procedure removes the central region of p_k and keeps only its border region.
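Extracting the border strips (0-indexed here, whereas the patent's intervals are 1-indexed) might look like:

```python
import numpy as np

def edge_comparison_image(img, wid):
    """Keep only the four border strips of width `wid` (left, right, top,
    bottom) of a grayscale image and return their pixels as a flat array;
    this plays the role of the set pixuel_k forming cmp_k."""
    n, m = img.shape[:2]
    mask = np.zeros((n, m), dtype=bool)
    mask[:, :wid] = True        # left strip
    mask[:, m - wid:] = True    # right strip
    mask[:wid, :] = True        # top strip
    mask[n - wid:, :] = True    # bottom strip
    return img[mask]
```

On a 6 x 6 image with wid = 1 this keeps the 20 border pixels and discards the 16 interior ones.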
Optionally, the process of acquiring the comparison image cmp_{k-1} corresponding to p_{k-1} based on wid_{k-1} includes:
denoting by n and m the numbers of rows and columns, respectively, of pixels in the obstacle avoidance image;
storing the pixels in p_{k-1} whose abscissa is in the interval [1, wid_{k-1}] into a set pixuel_{k-1};
storing the pixels in p_{k-1} whose abscissa is in the interval [m-wid_{k-1}, m] into the set pixuel_{k-1};
storing the pixels in p_{k-1} whose ordinate is in the interval [1, wid_{k-1}] into the set pixuel_{k-1};
storing the pixels in p_{k-1} whose ordinate is in the interval [n-wid_{k-1}, n] into the set pixuel_{k-1};
the pixels in pixuel_{k-1} form the comparison image cmp_{k-1} corresponding to p_{k-1}.
Optionally, calculating the approximate proportion between cmp_k and cmp_{k-1} includes:
performing graying on cmp_k and cmp_{k-1} respectively to obtain images grp_k and grp_{k-1};
acquiring the pixel pix_k with coordinates (x, y) in grp_k;
acquiring the pixel pix_{k-1} with coordinates (x, y) in grp_{k-1};
calculating the absolute value dif_{k,k-1} of the gray value difference between pix_k and pix_{k-1};
if dif_{k,k-1} is larger than the set gray value, storing pix_k into a comparison set;
calculating the approximate proportion as simpro = 1 - numofcmp / numofgrp_k, where simpro denotes the approximate proportion, numofgrp_k denotes the total number of pixels in grp_k, and numofcmp denotes the total number of pixels in the comparison set.
The approximate proportion of the invention is determined from the absolute difference of the gray values of pixels at the same position: the larger the absolute value, the greater the difference between the two pixels and the higher the probability that an obstacle is present. Hence, the smaller the total number of pixels in the comparison set, the larger the approximate proportion, the more similar the two obstacle avoidance images, and the smaller the probability of an obstacle.
Alternatively, the set gray value is 5.
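Putting the differencing and thresholding steps together (the formula simpro = 1 - numofcmp/numofgrp_k is inferred from the description above, and the inputs are assumed to be grayscale already):

```python
import numpy as np

def approximate_proportion(cmp_k, cmp_k1, gray_threshold=5):
    """Fraction of border pixels whose grayscale difference stays within
    the set gray value.  Pixels whose absolute difference exceeds the
    threshold go into the comparison set; simpro = 1 - numofcmp/numofgrp_k."""
    diff = np.abs(cmp_k.astype(np.int32) - cmp_k1.astype(np.int32))
    numofcmp = int((diff > gray_threshold).sum())  # pixels judged "changed"
    return 1.0 - numofcmp / cmp_k.size

def obstacle_present(cmp_k, cmp_k1, simpro_threshold=0.85):
    """An obstacle is reported when the border images are too dissimilar."""
    return approximate_proportion(cmp_k, cmp_k1) < simpro_threshold
```

With the set threshold of 0.85, a frame pair whose border pixels differ in more than 15% of positions is flagged as containing an obstacle.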
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components of a system according to an embodiment of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A robot autonomous navigation system, characterized by comprising an obstacle avoidance module and a moving module;
The obstacle avoidance module comprises an ultrasonic ranging sensor, a distance judging device, a camera device, an image recognition device and an obstacle avoidance device;
the ultrasonic ranging sensor is used for acquiring the distance between the robot and an object directly in front of it in its moving direction;
the distance judging device is used for judging whether the distance obtained by the ultrasonic ranging sensor is smaller than a set distance threshold;
the camera device is used for shooting the area directly in front of the robot in its moving direction to obtain an obstacle avoidance image;
the image recognition device is used for recognizing the obstacle avoidance image and judging whether an obstacle is present in the obstacle avoidance image;
the obstacle avoidance device is used for controlling the moving module based on a preset obstacle avoidance algorithm when the distance is smaller than the set distance threshold or an obstacle is present in the obstacle avoidance image, so that the robot bypasses the obstacle.
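Claim 1's trigger condition (ultrasonic distance below the set threshold, or an obstacle detected in the image) can be sketched as a single predicate. This is a minimal illustration; the function name and parameters are assumptions, not the patent's implementation:

```python
def should_avoid(distance: float, dist_threshold: float,
                 obstacle_in_image: bool) -> bool:
    """Trigger obstacle avoidance when the ultrasonic range reading falls
    below the set threshold OR image recognition reports an obstacle."""
    return distance < dist_threshold or obstacle_in_image
```

The OR of the two sensors is what lets the camera cover the ultrasonic sensor's blind zone for dynamic obstacles.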
2. The autonomous navigation system of claim 1, further comprising a positioning module and a path planning module;
the positioning module is used for re-acquiring the coordinates of the current position of the robot after the robot bypasses the obstacle;
the path planning module is used for re-planning the navigation path between the coordinates obtained by the positioning module and the coordinates of the end point.
3. The autonomous navigation system of claim 2, wherein the moving module is further configured to control movement of the robot based on the navigation path obtained by the path planning module.
4. The autonomous navigation system of claim 1, wherein the image pickup device includes a photographing interval determining unit and a photographing unit;
the shooting interval determining unit is used for periodically determining the shooting interval of the shooting unit according to the obstacle avoidance images obtained by the shooting unit;
the shooting unit is used for shooting the area directly in front of the robot in its moving direction at the determined shooting interval, to obtain obstacle avoidance images.
5. The autonomous navigation system of claim 4, wherein determining a photographing interval of the photographing unit from the obstacle avoidance image obtained by the photographing unit comprises:
identifying the obstacle avoidance image and obtaining the road surface proportion in the obstacle avoidance image;
the photographing interval is calculated based on the road surface proportion.
6. The autonomous navigation system of claim 5, wherein identifying the obstacle avoidance image and obtaining the road surface proportion in the obstacle avoidance image comprises:
Preprocessing the obstacle avoidance image to obtain an image to be identified;
inputting the image to be identified into a pre-trained neural network for identification, to obtain the road surface area in the image to be identified;
acquiring the number of columns of pixel points in a road surface area;
Dividing the number of columns of the pixel points of the road surface area by the number of columns of the image to be identified to obtain the road surface proportion.
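The column-ratio computation of claim 6 can be sketched as follows. The representation is an assumption: the pre-trained network's output is taken to be a boolean mask stored as a list of rows, with 1 marking road-surface pixels; the patent does not specify the mask format.

```python
def road_surface_proportion(road_mask) -> float:
    """Claim 6: divide the number of columns containing road-surface pixels
    by the total number of columns in the image to be identified.

    road_mask: list of rows (lists of 0/1), 1 where the (hypothetical)
    neural network labelled the pixel as road surface.
    """
    if not road_mask:
        return 0.0
    n_cols = len(road_mask[0])
    # A column counts as "road surface" if any pixel in it is road.
    road_cols = sum(1 for c in range(n_cols) if any(row[c] for row in road_mask))
    return road_cols / n_cols
```

A larger proportion suggests open road ahead, which claim 5 would translate into a longer shooting interval.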
7. The autonomous navigation system of claim 1, wherein the identifying the obstacle avoidance image to determine whether an obstacle is present in the obstacle avoidance image comprises:
the serial number of the obstacle avoidance image most recently obtained by the camera device is denoted as k, and that obstacle avoidance image is denoted as p_k;
obtaining the obstacle avoidance image p_{k-1} with serial number k-1;
acquiring the moving speed v_{k-1} of the robot at the moment the camera device shot p_{k-1};
calculating the width wid_{k-1} of the edge comparison region based on v_{k-1};
obtaining the comparison images cmp_k and cmp_{k-1} corresponding to p_k and p_{k-1}, respectively, based on wid_{k-1};
calculating the approximate ratio between cmp_k and cmp_{k-1};
if the approximate ratio is less than the set approximate ratio threshold, an obstacle is present in p_k.
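A sketch of claim 7's comparison step. Two assumptions are made here, since the patent does not define them: the comparison images are flat lists of grayscale values in a fixed scan order, and the "approximate ratio" is the fraction of corresponding pixels whose values agree within a tolerance.

```python
def approximate_ratio(cmp_a, cmp_b, tol: int = 10) -> float:
    """Fraction of corresponding pixels whose grayscale values differ by at
    most `tol`. Both the per-pixel tolerance and this definition of the
    ratio are assumptions, not taken from the patent text."""
    matches = sum(1 for a, b in zip(cmp_a, cmp_b) if abs(a - b) <= tol)
    return matches / len(cmp_a)

def obstacle_in_frame(cmp_k, cmp_k1, threshold: float = 0.9) -> bool:
    """Claim 7: an approximate ratio below the set threshold indicates that
    something entered the edge comparison region between the two shots."""
    return approximate_ratio(cmp_k, cmp_k1) < threshold
```

Comparing only the edge band keeps the check cheap while still catching dynamic obstacles entering the frame from the sides.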
8. The autonomous navigation system of claim 7, wherein calculating the width wid_{k-1} of the edge comparison region based on v_{k-1} comprises:
calculating wid_{k-1} using the following formula, in which
v_std denotes the highest moving speed of the robot and wid_std denotes a set width.
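The formula itself was not reproduced in this text extraction. One reading consistent with the surrounding claims (a faster robot sweeps a wider band of edge pixels between consecutive shots, so the comparison width should grow with speed) is a linear scaling; the form below is purely an assumption, not the patent's formula:

```python
def edge_band_width(v_prev: float, v_std: float, wid_std: int) -> int:
    """Assumed linear form wid_{k-1} = wid_std * v_{k-1} / v_std.
    v_std: the robot's highest moving speed; wid_std: the set width.
    The patent's actual formula is not available in this extraction."""
    return max(1, round(wid_std * v_prev / v_std))
```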
9. The autonomous navigation system of claim 8, wherein the process of obtaining the comparison image cmp_k corresponding to p_k based on wid_{k-1} comprises:
denoting by n and m, respectively, the number of rows and the number of columns of pixels in the obstacle avoidance image;
storing the pixels of p_k whose abscissa is in the interval [1, wid_{k-1}] into a set pixel_k;
storing the pixels of p_k whose abscissa is in the interval [m - wid_{k-1}, m] into the set pixel_k;
storing the pixels of p_k whose ordinate is in the interval [1, wid_{k-1}] into the set pixel_k;
storing the pixels of p_k whose ordinate is in the interval [n - wid_{k-1}, n] into the set pixel_k;
and forming the comparison image cmp_k corresponding to p_k from the pixels in pixel_k.
10. The autonomous navigation system of claim 8, wherein the process of obtaining the comparison image cmp_{k-1} corresponding to p_{k-1} based on wid_{k-1} comprises:
denoting by n and m, respectively, the number of rows and the number of columns of pixels in the obstacle avoidance image;
storing the pixels of p_{k-1} whose abscissa is in the interval [1, wid_{k-1}] into a set pixel_{k-1};
storing the pixels of p_{k-1} whose abscissa is in the interval [m - wid_{k-1}, m] into the set pixel_{k-1};
storing the pixels of p_{k-1} whose ordinate is in the interval [1, wid_{k-1}] into the set pixel_{k-1};
storing the pixels of p_{k-1} whose ordinate is in the interval [n - wid_{k-1}, n] into the set pixel_{k-1};
and forming the comparison image cmp_{k-1} corresponding to p_{k-1} from the pixels in pixel_{k-1}.
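Claims 9 and 10 build the same border band of width wid_{k-1} for the two frames. A sketch, assuming a grayscale image stored as a list of rows (the storage format and scan order are assumptions):

```python
def edge_comparison_image(img, wid: int):
    """Collect the pixels whose column index falls in the first or last
    `wid` columns, or whose row index falls in the first or last `wid`
    rows (claims 9 and 10). Returns them in a fixed row-major scan order
    so two frames' border bands can be compared pixel by pixel.

    img: list of n rows, each a list of m grayscale values.
    """
    n, m = len(img), len(img[0])
    border = []
    for r in range(n):
        for c in range(m):
            if c < wid or c >= m - wid or r < wid or r >= n - wid:
                border.append(img[r][c])
    return border
```

For a 4x4 image with wid = 1 this keeps the 12 outer pixels and drops the 2x2 interior, which is exactly the region a dynamic obstacle would cross first when entering the frame.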
CN202410085125.2A 2024-01-20 2024-01-20 Autonomous navigation system of robot Pending CN117908031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410085125.2A CN117908031A (en) 2024-01-20 2024-01-20 Autonomous navigation system of robot


Publications (1)

Publication Number Publication Date
CN117908031A true CN117908031A (en) 2024-04-19

Family

ID=90689261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410085125.2A Pending CN117908031A (en) 2024-01-20 2024-01-20 Autonomous navigation system of robot

Country Status (1)

Country Link
CN (1) CN117908031A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004133846A (en) * 2002-10-15 2004-04-30 Matsushita Electric Ind Co Ltd Vehicle
JP2005329779A (en) * 2004-05-19 2005-12-02 Daihatsu Motor Co Ltd Method and device for recognizing obstacle
CN102621986A (en) * 2012-04-13 2012-08-01 西北农林科技大学 Navigation control system based on vision and ultrasonic waves
CN104238566A (en) * 2014-09-27 2014-12-24 江阴润玛电子材料股份有限公司 Image-recognition-based line patrolling robot control system for electronic circuit
CN106383518A (en) * 2016-09-29 2017-02-08 国网重庆市电力公司电力科学研究院 Multi-sensor tunnel robot obstacle avoidance control system and method
CN110069057A (en) * 2018-01-24 2019-07-30 南京机器人研究院有限公司 A kind of obstacle sensing method based on robot
CN111142524A (en) * 2019-12-27 2020-05-12 广州番禺职业技术学院 Garbage picking robot, method and device and storage medium
KR102313115B1 (en) * 2021-06-10 2021-10-18 도브텍 주식회사 Autonomous flying drone using artificial intelligence neural network
CN113532461A (en) * 2021-07-08 2021-10-22 山东新一代信息产业技术研究院有限公司 Robot autonomous obstacle avoidance navigation method, equipment and storage medium
KR20220086391A (en) * 2020-12-16 2022-06-23 현대모비스 주식회사 Ground recognition based ultrasonic sensor sensing distance control system and method
CN115599119A (en) * 2022-10-25 2023-01-13 南通亿思特机器人科技有限公司(Cn) Unmanned aerial vehicle keeps away barrier system
CN117130389A (en) * 2023-10-08 2023-11-28 河南航投通用航空投资有限公司 High-reliability tilting rotor wing bimodal logistics unmanned aerial vehicle


Similar Documents

Publication Publication Date Title
KR100377067B1 (en) Method and apparatus for detecting object movement within an image sequence
US7327855B1 (en) Vision-based highway overhead structure detection system
JP7147420B2 (en) OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD AND COMPUTER PROGRAM FOR OBJECT DETECTION
JP5297078B2 (en) Method for detecting moving object in blind spot of vehicle, and blind spot detection device
US11010622B2 (en) Infrastructure-free NLoS obstacle detection for autonomous cars
CN105182320A (en) Depth measurement-based vehicle distance detection method
US20140146176A1 (en) Moving body detection device and moving body detection method
US20060111841A1 (en) Method and apparatus for obstacle avoidance with camera vision
CN109682388B (en) Method for determining following path
JPH06124340A (en) Image processor for vehicle
CN112947419B (en) Obstacle avoidance method, device and equipment
KR101667835B1 (en) Object localization using vertical symmetry
CN110949257A (en) Auxiliary parking device and method for motor vehicle
CN114359714A (en) Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body
WO2020191978A1 (en) Sar imaging method and imaging system thereof
JP7035272B2 (en) Shooting system
CN110824495B (en) Laser radar-based drosophila visual inspired three-dimensional moving target detection method
CN105741284A (en) Multi-beam forward-looking sonar target detection method
JP5375249B2 (en) Moving path planning device, moving body control device, and moving body
US20220245831A1 (en) Speed estimation systems and methods without camera calibration
JP3925285B2 (en) Road environment detection device
CN117908031A (en) Autonomous navigation system of robot
CN110949255A (en) Auxiliary parking device and method for motor vehicle
CN116434156A (en) Target detection method, storage medium, road side equipment and automatic driving system
Michalke et al. A self-adaptive approach for curbstone/roadside detection based on human-like signal processing and multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination