Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art from the present disclosure without undue burden fall within the scope of the present disclosure.
The terms "first," "second," "third," and the like in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature qualified by "first," "second," or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two (for example, two or three), unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, and back) in the embodiments of the present application are merely used to explain the relative positional relationship, movement, and the like between components in a particular posture (as shown in the drawings); if the particular posture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will appreciate, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a flow chart of a hybrid navigation method for a mobile robot according to a first embodiment of the invention. It should be noted that, provided substantially the same results are obtained, the method of the present invention is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the method comprises the following steps:
Step S101: acquiring an environment map, wherein the environment map comprises point cloud information of the running environment of the mobile robot and a mode switching point, and the mode switching point is used for instructing the mobile robot to switch its navigation mode.
Laser navigation uses a 2D or 3D lidar (also called a single-line or multi-line lidar) for distance measurement. The object information acquired by the lidar is a series of discrete points with accurate angle and distance information, referred to as a point cloud. In general, a laser SLAM system calculates the change in distance and posture of the lidar's relative motion by matching and comparing two point clouds at different moments, thereby completing the positioning of the robot. Lidar ranging is accurate, its error model is simple, its operation is stable in any environment other than direct strong light, and its point clouds are easy to process; moreover, the point cloud information itself contains direct geometric relationships, which makes path planning and navigation of the robot intuitive. However, a laser navigation scheme cannot adapt to scenes with uneven ground or long corridors, especially environments with uphill and downhill sections: the lidar then obtains environment information on different planes with large differences, and matching between point cloud data is likely to fail or to produce erroneous results, so the mobile robot cannot accurately locate itself. Visual navigation, by contrast, employs visual sensors to obtain environmental information. The visual sensor acquires a surface-characteristic image of the measured object, dedicated high-speed image hardware completes the digital image processing, image coordinates of the characteristic information are extracted, and parameters such as the relative motion distance and posture of the visual sensor are obtained by matching the characteristic information between frames, thereby positioning the robot.
An existing visual navigation scheme generally adopts a color camera as the sensing unit. Different illumination conditions have a large influence on the environmental information acquired by such a sensing unit, so matching between images can produce erroneous results. In particular, for a mobile robot that must work at night, normal operation of the visual navigation system is difficult to guarantee under night illumination, which limits the application scenarios of the visual navigation scheme.
In this embodiment, the environment map includes point cloud information of the mobile robot's operating environment and a mode switching point, where the mode switching point is a positioning point marked in the environment map for instructing the mobile robot to switch its navigation mode. The whole application scene therefore needs to be scanned by the lidar to complete a mapping process, and the environment information can be represented by a grid map. Corresponding navigation mode switching points are then set on the map; a preset range, which can be set manually, controls the extent of each mode switching point. The navigation modes include a laser navigation mode and a visual navigation mode. The mobile robot is provided with an imaging element, such as a camera, for visual navigation, and with a lidar for laser navigation.
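By way of illustration only, rasterizing the scanned point cloud into such a grid map, and annotating a mode switching point with a manually set preset range, could be sketched as follows; the function, cell values, and `switch_points` fields are hypothetical names introduced here, not part of the embodiment:

```python
import numpy as np

def rasterize_scan(points_xy, resolution=0.05, size=(200, 200)):
    """Mark lidar points (world x, y in metres) as occupied cells in a
    grid whose origin is at the grid centre. Purely illustrative."""
    grid = np.zeros(size, dtype=np.uint8)       # 0 = free, 1 = occupied
    cx, cy = size[0] // 2, size[1] // 2
    for x, y in points_xy:
        i = int(round(x / resolution)) + cx
        j = int(round(y / resolution)) + cy
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i, j] = 1
    return grid

# A mode switching point is an annotated position plus a preset range.
switch_points = [{"xy": (2.0, 0.0), "radius": 0.5}]  # radius set manually

grid = rasterize_scan([(0.1, 0.0), (1.0, 1.0)])
```

A real mapping process would accumulate many scans and track free space as well, but the data structure, a grid plus a list of annotated switching points, is the essential idea.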
Step S102: presetting a lane line in the running environment of the mobile robot, wherein the lane line provides a visual navigation reference for the mobile robot.
Unlike laser navigation, visual navigation acquires environmental information using visual sensors such as cameras. The visual sensor acquires a surface-characteristic image of the detected object, dedicated high-speed image hardware completes the digital image processing, image coordinates of the characteristic information are extracted, and parameters such as the relative movement distance and posture of the visual sensor are obtained by matching the characteristic information between frames, thereby positioning the mobile robot. In this embodiment, lane lines serve as the characteristic images that provide the visual navigation reference; they are simple to lay and convenient to maintain.
Step S103: determining whether the current position of the mobile robot is within a preset range of the mode switching point in the environment map; if not, step S104 is executed; if yes, step S105 is executed.
Specifically, the mobile robot may determine, during navigation, whether its current position is within the preset range of the mode switching point in the environment map. When the current position is not within the preset range, the robot navigates in the laser navigation mode using lidar scanning; when the current position is within the preset range, the robot navigates in the visual navigation mode by photographing and identifying the lane line with the imaging element.
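The decision in step S103 reduces to a distance test against each mode switching point; a minimal sketch follows, with hypothetical names and data layout:

```python
import math

def select_mode(position, switch_points):
    """Return 'visual' when the current position lies within the preset
    range of any mode switching point, otherwise 'laser'.
    Names and structure are illustrative, not the embodiment's interface."""
    for pt in switch_points:
        dx = position[0] - pt["xy"][0]
        dy = position[1] - pt["xy"][1]
        if math.hypot(dx, dy) <= pt["radius"]:  # inside the preset range
            return "visual"
    return "laser"

# One switching point at (2, 0) with a manually set 0.5 m preset range.
switch_points = [{"xy": (2.0, 0.0), "radius": 0.5}]
mode = select_mode((2.2, 0.1), switch_points)   # robot near the point
```

The same check is evaluated continuously during navigation, so the robot switches modes as soon as it enters or leaves a preset range.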
Step S104: if the current position of the mobile robot is not within the preset range of the mode switching point, navigating in the laser navigation mode using lidar scanning.
Referring to fig. 2, fig. 2 is a flow chart of a laser navigation mode according to a first embodiment of the invention.
Specifically, in step S104, the mobile robot is provided with a lidar, and when the mobile robot uses the laser navigation mode, navigation by lidar scanning includes the following steps:
Step S104a: acquiring an initial pose at the current position of the mobile robot.
in the laser navigation mode, the mobile robot first needs to acquire an initial pose of the current position. In this embodiment, an odometer is disposed on the mobile robot, in the running process of the mobile robot, the odometer may record the action mileage of the mobile robot, the recording result of the odometer may be directly used as the initial pose, in another embodiment, an Inertial Measurement Unit (IMU) may also be disposed on the mobile robot, and then the initial pose may be obtained by fusion according to the poses obtained by matching the odometer with the Inertial Measurement Unit (IMU) and the point cloud scanned by the laser radar, and a self-adaptive monte carlo method (AMCL) may also be adopted to obtain the initial pose.
Step S104b: acquiring a virtual point cloud according to the initial pose and the environment map;
and then, the mobile robot generates a virtual point cloud according to the initial pose and the environment map, namely, the calculated virtual point cloud information is obtained in the environment map through the initial pose.
Step S104c: acquiring a first pose of the mobile robot according to the virtual point cloud and the point cloud actually scanned by the lidar, wherein the first pose is the accurate pose of the current position of the mobile robot.
Finally, the iterative closest point (ICP) method is used to find, through multiple iterations, the rotation and translation that bring the virtual point cloud closest to the point cloud actually scanned by the lidar; applying this rotation and translation to the initial pose yields the first pose of the mobile robot.
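The alignment can be illustrated with a minimal 2-D ICP sketch using brute-force nearest-neighbour matching and an SVD-based rigid solve; this is a simplified illustration, not the embodiment's actual implementation:

```python
import numpy as np

def icp_2d(virtual, scanned, iters=20):
    """Minimal 2-D ICP: repeatedly match each virtual point to its
    nearest scanned point, then solve for the rigid transform (R, t)
    that best maps the virtual cloud onto the scanned cloud (Kabsch)."""
    src = np.asarray(virtual, float).copy()
    dst = np.asarray(scanned, float)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences, for clarity
        d = np.linalg.norm(src[:, None] - dst[None, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # keep a proper rotation
            Vt[1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t                 # apply this iteration's fix
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Composing the recovered (R, t) with the initial pose gives the corrected first pose; a production system would use a k-d tree for the correspondences and a convergence threshold instead of a fixed iteration count.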
After obtaining the first pose, the mobile robot can navigate on the environment map accordingly.
Step S105: if the current position of the mobile robot is within the preset range of the mode switching point, navigating in the visual navigation mode by photographing and identifying the lane line with the imaging element.
It should be noted that, unlike the laser navigation scheme, visual navigation uses the visual sensor to obtain characteristic information from the environment and navigates according to that information, so the characteristic information must be set in advance.
In this embodiment, the characteristic information for visual navigation is a lane line. A lane line can serve as visual navigation characteristic information in both indoor and outdoor scenes and is convenient to maintain. The working environment of the mobile robot therefore needs to be modified: a lane line for navigation is laid in advance wherever visual navigation is needed.
Referring to fig. 3, fig. 3 is a flow chart of a visual navigation mode according to a first embodiment of the present invention.
Specifically, in step S105, the mobile robot is provided with a camera, and when the mobile robot switches to the visual navigation mode, navigation by photographing with the imaging element includes the following steps:
step S105a: a first frame image is acquired.
In this embodiment, the camera photographs an environmental image at a preset time interval, which may be set manually; the first frame image is the environmental image photographed by the camera at the current time.
Step S105b: extracting a first lane line according to the first frame image, wherein the first lane line is the lane line captured in the first frame image.
In step S105b, a predicted lane line must first be obtained; the predicted lane line is an estimate of the lane line at the current time. In this embodiment, it is predicted from the lane line in the environmental image captured one preset time interval before the current time.
Specifically, assume the current time is t and the preset time interval is Δt, so the previous capture time is t − Δt. The environment image shot at time t − Δt is converted from a color image to a grayscale image, Gaussian blur is applied, and a perspective transform then yields a ground top view of the image. The lane line is extracted from this top view and converted into the world coordinate system, so that the distance and angle between the mobile robot and the lane line can be calculated. In the visual navigation mode, the mobile robot advances in the direction that reduces its distance to the lane line, so the angle and position of the lane line in the top view change as the robot moves along it. Two parameters, the angle change and the position change, are therefore added to compensate for the displacement of the lane line in the image to be shot at the next interval, i.e., at time t. The lane line position at time t predicted with these two parameters is the predicted lane line.
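The prediction step can be illustrated with a deliberately simple model in which a lane line in the top view is represented by an angle and a lateral offset, and the two compensation parameters are simply added; this representation and the names are assumptions made for illustration, not the embodiment's actual state model:

```python
def predict_lane(prev_line, d_angle, d_offset):
    """Predict the lane line for time t from the line extracted at
    t - dt. A line is modelled as (angle_deg, lateral_offset_m) in the
    top view; d_angle and d_offset are the angle and position changes
    observed while the robot moved along the lane. Illustrative only."""
    angle, offset = prev_line
    return (angle + d_angle, offset + d_offset)

prev = (2.0, 0.10)                              # extracted at t - dt
predicted = predict_lane(prev, d_angle=-0.5, d_offset=-0.02)
```

A richer implementation might estimate the deltas from the commanded motion or filter them over several frames, but the compensation itself remains this additive correction.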
Further, a lane line meeting preset conditions is selected from the first frame image according to the predicted lane line and serves as the first lane line. Specifically, the processing of the first frame image is similar to that of the environment image used for the predicted lane line: the first frame image shot at the current moment is first converted from a color image to a grayscale image, giving a first grayscale image; Gaussian blur is applied to the first grayscale image, and a perspective transform then yields a first top view of the first frame image, which is a ground top view. The identified line segments are extracted from the first top view as a first line segment set; each segment in the set is compared with the predicted lane line, and the segments meeting the preset conditions are screened to obtain the first lane line.
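The perspective transform to a ground top view can be illustrated by applying a homography to segment endpoints; the calibration matrix below is an invented example, since the embodiment does not specify one (it would come from an offline camera-to-ground calibration):

```python
import numpy as np

def to_top_view(points_px, H):
    """Map image pixel coordinates to ground top-view coordinates with a
    3x3 homography H, as done after the grayscale conversion and
    Gaussian blur. H here is a hypothetical calibration, not real data."""
    pts = np.asarray(points_px, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Euclidean

H = np.array([[0.01, 0.0,   -3.2],    # illustrative calibration values
              [0.0,  0.012, -1.5],
              [0.0,  0.0,    1.0]])
ground = to_top_view([[320, 240], [320, 400]], H)      # a segment's ends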
In this embodiment, the preset conditions are:
(1) The angle difference between the line segment and the predicted lane line must not exceed 10 degrees, and the real distance between the line segment and either the left contour line or the right contour line of the predicted lane line must not exceed 0.05 m;
(2) The energy value of the line segment must be greater than a threshold, which may be set manually.
If no line segment meets the preset conditions, the first lane line is considered not found at the current moment. When the first lane line cannot be acquired from a preset number of consecutive frame images, the mobile robot is considered to have reached the end of the lane line; it decelerates, stops, and exits the visual navigation mode. The preset number can be set manually.
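The screening against the preset conditions, together with the no-match case, could be sketched as follows; the segment and prediction fields are hypothetical, and the energy threshold is an invented value:

```python
def pick_first_lane(segments, predicted, energy_threshold=50.0):
    """Select the first segment matching the preset conditions: angle
    within 10 degrees of the predicted lane line, within 0.05 m of its
    left or right contour line, and energy above a manually set
    threshold. Returns None when no segment qualifies."""
    for seg in segments:
        angle_ok = abs(seg["angle"] - predicted["angle"]) <= 10.0
        dist_ok = (abs(seg["offset"] - predicted["left"]) <= 0.05
                   or abs(seg["offset"] - predicted["right"]) <= 0.05)
        if angle_ok and dist_ok and seg["energy"] > energy_threshold:
            return seg
    return None                      # no first lane line in this frame

predicted = {"angle": 0.0, "left": -0.06, "right": 0.06}
segments = [{"angle": 25.0, "offset": 0.0,  "energy": 90.0},  # too steep
            {"angle": 3.0,  "offset": 0.05, "energy": 80.0}]  # qualifies
first = pick_first_lane(segments, predicted)
```

In practice a counter of consecutive frames returning `None` would trigger the end-of-lane deceleration and mode exit described above.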
Step S105c: tracking and navigating according to the first lane line.
After the first lane line is screened out, the posture of the mobile robot is adjusted according to its distance and angle to the first lane line, so that the robot advances in the direction that reduces this distance, ensuring that it always runs along the lane line preset in the running environment.
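The posture adjustment can be illustrated with a simple proportional steering law that drives both the lateral distance and the heading error toward zero; the gains and speed below are illustrative values, not parameters from the embodiment:

```python
def steer(distance_m, angle_rad, k_d=1.0, k_a=2.0, v=0.3):
    """Proportional lane following: command an angular velocity that
    reduces the lateral distance and heading error to the first lane
    line. Returns a (linear, angular) velocity pair. Illustrative only."""
    omega = -(k_d * distance_m + k_a * angle_rad)
    return v, omega

v, omega = steer(distance_m=0.04, angle_rad=0.1)  # slightly right of line
```

With both errors positive the commanded angular velocity is negative, turning the robot back toward the lane line; repeated every frame, this keeps the robot tracking the line.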
When the mobile robot is in the visual navigation mode, the odometer continues to record mileage. On exiting the visual navigation mode, the mobile robot computes an initial pose from the recorded mileage, calculates the first pose from this initial pose and the environment map, and can thus re-enter the laser navigation mode.
By combining a laser navigation mode with a visual navigation mode, the hybrid navigation method of the first embodiment of the invention enables the mobile robot to adapt to more navigation environments, in particular cross-plane uphill and downhill sections, long corridors, and night environments.
Furthermore, in the visual navigation mode, lane lines are used as the visual navigation characteristic information; they are suitable for both indoor and outdoor environments, simple to lay, and convenient to maintain, which ensures the stability of visual navigation.
Further, switching the navigation mode via mode switching points set on the map improves the stability and accuracy of navigation.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a hybrid navigation device for a mobile robot according to an embodiment of the invention. As shown in fig. 4, the apparatus includes an acquisition module 41, a visual navigation module 42, and a laser navigation module 43.
The acquisition module 41 is configured to acquire an environment map, where the environment map includes point cloud information of the mobile robot's operating environment, a mode switching point, and lane line information;
optionally, the acquiring module 41 may be further configured to acquire a first frame image and an environmental image;
the visual navigation module 42 is used for navigating by utilizing a shooting mode of the image pickup element;
optionally, the visual navigation module 42 may be further configured to extract a first lane line according to the first frame image, and track navigation according to the first lane line;
alternatively, the visual navigation module 42 may be further configured to obtain a predicted lane line, and select a lane line that meets a preset condition from the first frame image according to the predicted lane line as the first lane line.
Optionally, the visual navigation module 42 may be further configured to: acquire the lane line in an environmental image captured one preset time interval before the current time; acquire the distance and angle between the mobile robot and that lane line while the robot navigates along it; and acquire the predicted lane line according to the change in the distance and the change in the angle.
Optionally, the visual navigation module 42 may be further configured to: obtain a first grayscale image from the first frame image; obtain a first top view of the first frame image according to the first grayscale image; and extract a first line segment set from the first top view, screening the segments that meet the preset conditions relative to the predicted lane line as the first lane line.
The laser navigation module 43 is configured to navigate by lidar scanning.
Optionally, the laser navigation module 43 may be further configured to: obtain an initial pose at the current position of the mobile robot; acquire a virtual point cloud according to the initial pose and the environment map; and acquire a first pose of the mobile robot according to the virtual point cloud and the point cloud actually scanned by the lidar, wherein the first pose is the accurate pose of the current position of the mobile robot.
By combining a laser navigation mode with a visual navigation mode, the mobile robot hybrid navigation device of the first embodiment of the invention enables the mobile robot to adapt to more navigation environments, in particular cross-plane uphill and downhill sections, long corridors, and night environments.
Furthermore, in the visual navigation mode, lane lines are used as the visual navigation characteristic information; they are suitable for both indoor and outdoor environments, simple to lay, and convenient to maintain, which ensures the stability of visual navigation.
Further, switching the navigation mode via mode switching points set on the map improves the stability and accuracy of navigation.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a hybrid navigation device according to an embodiment of the invention. As shown in fig. 5, the hybrid navigation device 60 includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 stores program instructions for implementing the hybrid mobile robot navigation method according to any of the embodiments described above.
The processor 61 is configured to execute the program instructions stored in the memory 62 to perform hybrid navigation of the mobile robot.
The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip with signal processing capabilities. The processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a storage device according to an embodiment of the invention. The storage device of the embodiment of the present invention stores a program file 71 capable of implementing all of the above hybrid navigation methods for a mobile robot. The program file 71 may be stored in the storage device in the form of a software product and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage device includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, server, mobile phone, or tablet.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a logical functional division, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or as software functional units.

The foregoing describes only embodiments of the present application and does not thereby limit its patent scope; all equivalent structures or equivalent processes made using the contents of the present application and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the patent protection scope of the present application.