CN111780744B - Mobile robot hybrid navigation method, equipment and storage device - Google Patents

Mobile robot hybrid navigation method, equipment and storage device

Info

Publication number
CN111780744B
CN111780744B (application CN202010592745.7A)
Authority
CN
China
Prior art keywords
mobile robot
navigation
lane line
mode
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010592745.7A
Other languages
Chinese (zh)
Other versions
CN111780744A (en)
Inventor
王坤
林辉
卢维
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huaray Technology Co Ltd
Original Assignee
Zhejiang Huaray Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huaray Technology Co Ltd filed Critical Zhejiang Huaray Technology Co Ltd
Priority to CN202010592745.7A
Publication of CN111780744A
Application granted
Publication of CN111780744B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a mobile robot hybrid navigation method, device, and storage device. The method comprises: obtaining an environment map; presetting a lane line in the running environment of the mobile robot; and judging whether the current position of the mobile robot is within a preset range of a mode switching point in the environment map. If the current position is not within the preset range, navigation is performed in a laser navigation mode using lidar scanning; if it is within the preset range, navigation is performed in a visual navigation mode in which an image pickup element captures and identifies the lane line. In this way, the robot can adapt to a variety of navigation environments, and in particular can still navigate accurately on up- and down-slopes, in long corridors, and at night.

Description

Mobile robot hybrid navigation method, equipment and storage device
Technical Field
The present disclosure relates to the field of mobile robot navigation, and in particular, to a mobile robot hybrid navigation method, apparatus, and storage device.
Background
A mobile robot is a robot with strong self-planning, self-organizing, and self-adapting capabilities. Within mobile robot research, navigation technology is the core of the field and the key to realizing robot intelligence.
Existing mobile robot navigation schemes are mainly laser navigation or visual navigation: laser navigation uses a lidar to measure distance, while visual navigation uses a visual sensor to acquire environmental information.
In engineering practice, laser navigation schemes mostly use a 2D lidar. As a result, the mobile robot cannot adapt to scenes with uneven ground or long corridors, especially environments with up- and down-slopes: the environmental information a 2D lidar obtains on different planes may differ substantially, so matching between point cloud frames fails with high probability or yields erroneous results, and the robot cannot localize accurately.
Visual navigation schemes generally use a color camera as the sensing unit, and different illumination conditions strongly affect the environmental information it acquires, causing erroneous results when matching between images. In particular, for a robot that must work at night, it is difficult to guarantee normal operation of a visual navigation system under night-time illumination, which limits the application scenarios of visual navigation.
Therefore, it is necessary to provide a hybrid navigation method, apparatus and storage device for mobile robots to solve the above technical problems.
Disclosure of Invention
The application provides a mobile robot hybrid navigation method, device, and storage device that can adapt to a variety of navigation environments and, in particular, can still navigate accurately on up- and down-slopes, in long corridors, and at night.
To solve the above technical problems, one technical solution adopted by the application is a mobile robot hybrid navigation method comprising the following steps:
acquiring an environment map, wherein the environment map comprises point cloud information of the running environment of the mobile robot and a mode switching point, the mode switching point is used for indicating the mobile robot to switch a navigation mode, and the navigation mode comprises a laser navigation mode and a visual navigation mode;
presetting a lane line in the running environment of the mobile robot, wherein the lane line is used for providing visual navigation reference for the mobile robot;
judging whether the current position of the mobile robot is within a preset range of the mode switching point in the environment map;
and if the current position is judged not to be within the preset range of the mode switching point, navigating in the laser navigation mode using lidar scanning, or, if it is judged to be within the preset range, navigating in the visual navigation mode by capturing and identifying the lane line with an image pickup element.
To solve the above technical problems, another technical solution adopted by the application is a hybrid navigation device comprising a processor and a memory coupled to the processor, wherein the memory stores program instructions for implementing the mobile robot hybrid navigation method described above, and the processor is configured to execute the program instructions stored in the memory to perform hybrid navigation of a mobile robot.
To solve the above technical problems, a further technical solution adopted by the application is a storage device storing a program file capable of implementing the mobile robot hybrid navigation method described above.
The beneficial effects of this application are:
according to the mobile robot hybrid navigation method, the mobile robot hybrid navigation device and the storage device, the mobile robot can adapt to various environments through the laser navigation mode and the visual navigation mode, and particularly, the mobile robot can still perform accurate navigation in the environments of ascending and descending slopes, long corridor, night and the like.
Drawings
Fig. 1 is a flow chart of a mobile robot hybrid navigation method according to a first embodiment of the present invention;
Fig. 2 is a flow chart of a laser navigation mode according to the first embodiment of the present invention;
Fig. 3 is a flow chart of a visual navigation mode according to the first embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a mobile robot hybrid navigation device according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a hybrid navigation device according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a storage device according to an embodiment of the present invention.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first," "second," "third," and the like in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first", "second", or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back…) in the embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between components in a particular orientation (as shown in the drawings); if that orientation changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to Fig. 1, which is a flow chart of a mobile robot hybrid navigation method according to a first embodiment of the invention. It should be noted that the method is not limited to the flow sequence shown in Fig. 1 if substantially the same result is obtained. As shown in Fig. 1, the method comprises the following steps:
step S101: and acquiring an environment map, wherein the environment map comprises point cloud information of the running environment of the mobile robot and a mode switching point, and the mode switching point is used for indicating the mobile robot to switch a navigation mode.
Laser navigation uses a 2D or 3D lidar (also called a single-line or multi-line lidar) for distance measurement. The object information acquired by the lidar forms a series of discrete points with accurate angle and distance information, referred to as a point cloud. In general, a laser SLAM system estimates the relative change in the lidar's distance and attitude by matching and comparing two point clouds taken at different moments, thereby localizing the robot. Lidar ranging is accurate, its error model is simple, it operates stably in any environment except under direct strong light, and its point clouds are easy to process; moreover, the point cloud itself encodes direct geometric relationships, which makes path planning and navigation intuitive. However, with a 2D lidar the mobile robot cannot adapt to uneven ground or long corridors, especially environments with up- and down-slopes: the environmental information obtained on different planes may differ substantially, so point cloud matching fails with high probability or returns erroneous results, and the robot cannot localize accurately. Visual navigation employs visual sensors to obtain environmental information: the visual sensor acquires surface feature images of the observed objects, dedicated high-speed image hardware performs the digital image processing, image coordinates of feature information are extracted, and parameters such as the relative motion distance and attitude of the sensor are obtained by matching feature information between frames, thereby localizing the robot. Existing visual navigation schemes generally use a color camera as the sensing unit, and varying illumination strongly affects the acquired environmental information, causing erroneous matches between images; in particular, for a mobile robot that must work at night, it is difficult to guarantee normal operation of a visual navigation system under night-time illumination, which limits the applicable scenarios.
In this embodiment, the environment map contains point cloud information of the mobile robot's running environment together with mode switching points, where a mode switching point is a location marked in the environment map that instructs the mobile robot to switch its navigation mode. The whole application scene therefore first needs to be scanned by the lidar to complete a mapping process; the environmental information can be represented as a grid map, and the corresponding navigation mode switching points are then set on that map. The preset range around each switching point can be set manually and controls the extent over which the switching point takes effect. The navigation modes comprise a laser navigation mode and a visual navigation mode. The mobile robot is equipped with an image pickup element, such as a camera, for visual navigation, and with a lidar for laser navigation.
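The patent itself does not prescribe a data structure for the environment map. As a rough illustration only, the grid map plus mode switching points described above might be held in a structure like the following Python sketch (all names and fields here are assumptions, not taken from the patent):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ModeSwitchPoint:
    x: float       # world coordinates of the switching point (m)
    y: float
    radius: float  # the manually set "preset range" around the point (m)

@dataclass
class EnvironmentMap:
    grid: np.ndarray   # occupancy grid: 1 = occupied, 0 = free
    resolution: float  # metres per grid cell
    origin: tuple      # world coordinates (x, y) of grid cell (0, 0)
    switch_points: list[ModeSwitchPoint] = field(default_factory=list)
```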
Step S102: and presetting a lane line in the running environment of the mobile robot, wherein the lane line is used for providing visual navigation reference for the mobile robot.
Unlike laser navigation, visual navigation acquires environmental information with visual sensors such as cameras: the sensor captures surface feature images of the observed objects, dedicated high-speed image hardware performs the digital image processing, image coordinates of feature information are extracted, and parameters such as the relative motion distance and attitude of the sensor are obtained by matching feature information between frames, thereby localizing the mobile robot. In this embodiment, lane lines serve as the feature images that provide the visual navigation reference for the mobile robot; they are simple to lay down and convenient to maintain.
Step S103: judging whether the current position of the mobile robot is within a preset range of the mode switching point in the environment map; if not, step S104 is executed; if so, step S105 is executed.
Specifically, during navigation the mobile robot may check whether its current position lies within the preset range of a mode switching point in the environment map. When the current position is not within the preset range, the robot navigates in the laser navigation mode using lidar scanning; when it is within the preset range, the robot navigates in the visual navigation mode by capturing and identifying the lane line with the image pickup element.
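Continuing the hypothetical sketch above, the decision in step S103 reduces to a distance test against each switching point's preset range:

```python
import math

def select_navigation_mode(x, y, env_map):
    """Step S103 (sketch): return 'visual' when (x, y) lies inside the
    preset range of any mode switching point, otherwise 'laser'."""
    for sp in env_map.switch_points:
        if math.hypot(x - sp.x, y - sp.y) <= sp.radius:
            return "visual"
    return "laser"
```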
Step S104: if the current position of the mobile robot is not within the preset range of the mode switching point, navigating in the laser navigation mode using lidar scanning.
Referring to Fig. 2, which is a flow chart of the laser navigation mode according to the first embodiment of the invention.
Specifically, in step S104 the mobile robot is provided with a lidar, and navigating in the laser navigation mode using lidar scanning comprises the following steps:
step S104a: acquiring an initial pose of the current position of the mobile robot;
in the laser navigation mode, the mobile robot first needs to acquire an initial pose of the current position. In this embodiment, an odometer is disposed on the mobile robot, in the running process of the mobile robot, the odometer may record the action mileage of the mobile robot, the recording result of the odometer may be directly used as the initial pose, in another embodiment, an Inertial Measurement Unit (IMU) may also be disposed on the mobile robot, and then the initial pose may be obtained by fusion according to the poses obtained by matching the odometer with the Inertial Measurement Unit (IMU) and the point cloud scanned by the laser radar, and a self-adaptive monte carlo method (AMCL) may also be adopted to obtain the initial pose.
Step S104b: acquiring a virtual point cloud according to the initial pose and the environment map;
and then, the mobile robot generates a virtual point cloud according to the initial pose and the environment map, namely, the calculated virtual point cloud information is obtained in the environment map through the initial pose.
Step S104c: and acquiring a first pose of the mobile robot according to the virtual point cloud and the point cloud obtained by actual scanning of the laser radar, wherein the first pose is an accurate pose of the current position of the mobile robot.
Finally, the iterative closest point (ICP) method is used: over multiple iterations it finds the rotation and translation that bring the virtual point cloud closest to the point cloud actually scanned by the lidar, and the first pose of the mobile robot is obtained by applying this rotation and translation to the initial pose.
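For concreteness, a bare-bones 2D point-to-point ICP between the virtual point cloud and the actual scan could look like the sketch below; production implementations (e.g. in PCL or Open3D) add outlier rejection and convergence tests that are omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iters=30):
    """Align `source` (virtual cloud, Nx2) to `target` (actual scan, Mx2).
    Returns (R, t) with source @ R.T + t approximately matching target."""
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        # 1. pair every source point with its nearest target point
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. best rigid transform for these pairs (Kabsch / SVD)
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        # 3. apply this step and accumulate the total transform
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

A call like `icp_2d(virtual_scan(env_map, pose), actual_scan)` would then yield the correction to apply to the initial pose.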
After the first pose has been obtained, the mobile robot can navigate on the environment map according to it.
Step S105: if the current position of the mobile robot is within the preset range of the mode switching point, navigating in the visual navigation mode by capturing and identifying the lane line with the image pickup element.
It should be noted that, unlike the laser navigation scheme, visual navigation uses a visual sensor to extract feature information from the environment and navigates according to that feature information, so the feature information has to be set up in advance.
In this embodiment, the feature information for visual navigation is the lane line. Lane lines can be applied in both indoor and outdoor scenes and are convenient to maintain; the working environment of the mobile robot is therefore modified, with lane lines for navigation laid down in advance wherever visual navigation is needed.
Referring to Fig. 3, which is a flow chart of the visual navigation mode according to the first embodiment of the present invention.
Specifically, in step S105 the mobile robot is provided with a camera, and when the robot switches to the visual navigation mode, navigating with images captured by the image pickup element comprises the following steps:
step S105a: a first frame image is acquired.
In this embodiment, the camera captures an environment image at a preset time interval, which may be set manually; the first frame image is the environment image captured by the camera at the current moment.
Step S105b: and extracting a first lane line according to the first frame image, wherein the first lane line is a lane line shot in the first frame image.
In step S105b, a predicted lane line first needs to be obtained. The predicted lane line is a forecast of the lane line at the current moment; in this embodiment, it is predicted from the lane line in the environment image captured one preset time interval before the current moment.
Specifically, let the current moment be t and the preset time interval Δt, so the previous capture moment is t−Δt. The environment image captured at t−Δt is converted from a color image to a gray image, Gaussian blur is applied, and a perspective transform then yields a ground top view, from which the lane line is extracted and converted into the world coordinate system; the distance and angle between the mobile robot and that lane line can then be computed. In the visual navigation mode, the robot advances in the direction that reduces its distance to the lane line, so during motion along the line the lane line's angle and position in the top view keep changing. Two parameters, the angle change and the position change, are therefore added to compensate for the lane line's displacement in the image captured one interval later, at time t; that is, the lane line position at time t is extrapolated from these two parameters, and the result is the predicted lane line.
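Parameterizing the lane line in the top view by its orientation and lateral offset, the prediction step amounts to a constant-velocity extrapolation; this parameterization is an assumption made for illustration, not the patent's own:

```python
def lane_line_deltas(line_prev2, line_prev1):
    """Per-interval angle and offset change from the two most recent
    observations, each given as (angle, offset) in the ground top view."""
    return (line_prev1[0] - line_prev2[0], line_prev1[1] - line_prev2[1])

def predict_lane_line(line_prev1, d_angle, d_offset):
    """Extrapolate the lane line one preset interval forward; the result is
    the predicted lane line used to validate candidates at time t."""
    angle, offset = line_prev1
    return (angle + d_angle, offset + d_offset)
```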
Further, a lane line satisfying preset conditions is selected from the first frame image, guided by the predicted lane line, to serve as the first lane line. The processing of the first frame image mirrors the processing used for prediction: the first frame image captured at the current moment is converted from a color image to a gray image, giving the first gray image; Gaussian blur is applied and a perspective transform yields the first top view of the first frame image, which is a ground top view. The line segments identified in the first top view form the first line segment set; each segment is compared with the predicted lane line, and the segments satisfying the preset conditions are screened out to obtain the first lane line.
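The preprocessing chain just described (color to gray, Gaussian blur, perspective transform to a ground top view, line segment extraction) maps naturally onto standard OpenCV calls. In the sketch below the homography and the Hough parameters are placeholders that would have to come from camera calibration and tuning; none of the values are from the patent:

```python
import cv2
import numpy as np

def extract_line_segments(frame_bgr, H, out_size=(400, 600)):
    """Gray -> blur -> top view -> line segments (sketch of step S105b).

    frame_bgr -- environment image from the camera (BGR)
    H         -- 3x3 homography mapping image pixels onto the ground plane,
                 assumed to be known from calibration
    Returns the "first line segment set" as an (N, 4) array of
    (x1, y1, x2, y2) endpoints in top-view coordinates.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # first gray image
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)         # suppress noise
    top = cv2.warpPerspective(blurred, H, out_size)     # first top view
    edges = cv2.Canny(top, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=10)
    return np.empty((0, 4)) if segs is None else segs[:, 0, :]
```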
In this embodiment, the preset conditions are:
(1) The angle difference between the line segment and the predicted lane line must not exceed 10 degrees, and the real distance between the line segment and the left contour line of the predicted lane line must not exceed 0.05 m, or the real distance between the line segment and the right contour line must not exceed 0.05 m;
(2) The energy value of the line segment must be greater than a threshold, which may be set manually.
If no line segment satisfies the preset conditions, the first lane line is considered not found at the current moment. When the first lane line cannot be acquired from a preset number of consecutive frame images (the preset number may be set manually), the mobile robot is considered to have reached the end of the lane line; it decelerates to a stop and exits the visual navigation mode.
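Taken together, the preset conditions and the end-of-line check might be expressed as in the following sketch. The patent does not define how a segment's energy value is computed; here it is simply assumed to be available for each candidate, and the preset number of frames is likewise a placeholder:

```python
MAX_ANGLE_DIFF_DEG = 10.0  # condition (1): angle difference bound (degrees)
MAX_CONTOUR_DIST_M = 0.05  # condition (1): distance to either contour line (m)
ENERGY_THRESHOLD = 100.0   # condition (2): manually set threshold (assumed)
LOST_FRAME_LIMIT = 5       # "preset number" of frames before stopping (assumed)

def passes_preset_conditions(angle_deg, dist_left_m, dist_right_m, energy,
                             pred_angle_deg):
    """Check one candidate segment against the predicted lane line."""
    angle_ok = abs(angle_deg - pred_angle_deg) <= MAX_ANGLE_DIFF_DEG
    dist_ok = (dist_left_m <= MAX_CONTOUR_DIST_M
               or dist_right_m <= MAX_CONTOUR_DIST_M)
    return angle_ok and dist_ok and energy > ENERGY_THRESHOLD

def end_of_lane(consecutive_lost_frames):
    """True when the robot should decelerate, stop, and exit visual mode."""
    return consecutive_lost_frames >= LOST_FRAME_LIMIT
```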
Step S105c: tracking and navigating according to the first lane line.
After the first lane line has been screened out, the attitude of the mobile robot is adjusted according to its distance and angle to the first lane line, so that the robot advances in the direction that reduces that distance; this ensures that the robot always travels along the lane line preset in its running environment.
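A minimal way to realize this adjustment is a proportional steering law that drives both the lateral distance and the heading error toward zero; the gains, forward speed, and sign conventions below are illustrative assumptions, not values from the patent:

```python
def track_lane_line(dist_m, angle_rad, v_forward=0.3, k_dist=1.5, k_angle=0.8):
    """Velocity command that shrinks the distance and angle to the first
    lane line. dist_m is the signed lateral offset and angle_rad the signed
    heading error; returns (linear velocity m/s, angular velocity rad/s)."""
    omega = -(k_dist * dist_m + k_angle * angle_rad)
    return v_forward, omega
```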
While the mobile robot is in the visual navigation mode, the odometer keeps recording mileage. When the visual navigation mode is exited, the robot computes the first pose from the odometer-derived initial pose and the environment map, and can then re-enter the laser navigation mode.
By combining the laser navigation mode and the visual navigation mode, the mobile robot hybrid navigation method of the first embodiment of the invention enables the mobile robot to adapt to more navigation environments, in particular cross-plane up- and down-slopes, long corridors, and night-time environments.
Furthermore, in the visual navigation mode the lane lines serve as the visual navigation feature information; they are suitable for indoor and outdoor environments, simple to lay down, and convenient to maintain, which ensures the stability of visual navigation.
Further, switching the navigation mode via mode switching points set on the map improves the stability and accuracy of navigation.
Referring to Fig. 4, which is a schematic structural diagram of a mobile robot hybrid navigation device according to an embodiment of the invention. As shown in Fig. 4, the device includes an acquisition module 41, a visual navigation module 42, and a laser navigation module 43.
The acquisition module 41 is configured to acquire an environment map, where the environment map includes point cloud information of the mobile robot's running environment, mode switching points, and lane line information;
Optionally, the acquisition module 41 may be further configured to acquire the first frame image and the environment image;
the visual navigation module 42 is used for navigating by utilizing a shooting mode of the image pickup element;
Optionally, the visual navigation module 42 may be further configured to extract a first lane line from the first frame image and perform tracking navigation according to the first lane line;
Optionally, the visual navigation module 42 may be further configured to obtain a predicted lane line and select, from the first frame image according to the predicted lane line, a lane line satisfying a preset condition as the first lane line.
Optionally, the visual navigation module 42 may be further configured to acquire the lane line in an environment image captured one preset time interval before the current moment; to acquire the distance and angle between the mobile robot and the lane line while the robot navigates along the lane line; and to obtain the predicted lane line according to the change in the distance and the change in the angle.
Optionally, the visual navigation module 42 may be further configured to obtain a first gray image from the first frame image; to obtain a first top view of the first frame image from the first gray image; and to extract a first line segment set from the first top view and screen out, as the first lane line, the line segments that satisfy the preset conditions when compared with the predicted lane line.
The laser navigation module 43 is configured to navigate using lidar scanning.
Optionally, the laser navigation module 43 may be further configured to acquire an initial pose of the current position of the mobile robot; to acquire a virtual point cloud from the initial pose and the environment map; and to acquire a first pose of the mobile robot from the virtual point cloud and the point cloud actually scanned by the lidar, where the first pose is the accurate pose of the mobile robot's current position.
By combining the laser navigation mode and the visual navigation mode, the mobile robot hybrid navigation device of this embodiment of the invention enables the mobile robot to adapt to more navigation environments, in particular cross-plane up- and down-slopes, long corridors, and night-time environments.
Furthermore, in the visual navigation mode the lane lines serve as the visual navigation feature information; they are suitable for indoor and outdoor environments, simple to lay down, and convenient to maintain, which ensures the stability of visual navigation.
Further, switching the navigation mode via mode switching points set on the map improves the stability and accuracy of navigation.
Referring to Fig. 5, which is a schematic structural diagram of a hybrid navigation device according to an embodiment of the invention. As shown in Fig. 5, the hybrid navigation device 60 includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 stores program instructions for implementing the mobile robot hybrid navigation method according to any of the embodiments described above.
The processor 61 is adapted to execute program instructions stored by the memory 62 for hybrid navigation of the mobile robot.
The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip with signal processing capability. The processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Referring to Fig. 6, which is a schematic structural diagram of a storage device according to an embodiment of the invention. The storage device of this embodiment stores a program file 71 capable of implementing all of the mobile robot hybrid navigation methods described above; the program file 71 may be stored in the storage device in the form of a software product and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage device includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a terminal device such as a computer, server, mobile phone, or tablet.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units. The foregoing describes only embodiments of the present application and does not thereby limit its patent scope; all equivalent structures or equivalent process transformations made using the contents of the specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the patent protection scope of the present application.

Claims (8)

1. A mobile robot hybrid navigation method, characterized by comprising the following steps:
acquiring an environment map, wherein the environment map comprises point cloud information of the running environment of the mobile robot and a mode switching point, the mode switching point is used for indicating the mobile robot to switch a navigation mode, and the navigation mode comprises a laser navigation mode and a visual navigation mode;
presetting a lane line in the running environment of the mobile robot, wherein the lane line is used for providing visual navigation reference for the mobile robot;
judging whether the current position of the mobile robot is within a preset range of the mode switching point in the environment map;
if the current position is judged not to be within the preset range of the mode switching point, navigating in the laser navigation mode using lidar scanning, and if it is judged to be within the preset range, navigating in the visual navigation mode by capturing and identifying the lane line with an image pickup element;
wherein navigating in the visual navigation mode by capturing and identifying the lane line with the image pickup element comprises the following steps:
acquiring a first frame image, wherein the first frame image is an environment image shot by the image pickup element at the current moment;
acquiring the lane line in an environment image captured one preset time interval before the current moment;
acquiring the distance and the angle between the mobile robot and the lane line when the mobile robot navigates along the lane line;
obtaining a predicted lane line according to the change in the distance and the change in the angle, wherein the predicted lane line is a predicted estimate of the lane line at the current moment;
extracting a first lane line from the first frame image according to the predicted lane line, wherein the first lane line is the lane line shot in the first frame image;
tracking and navigating according to the first lane line.
2. The mobile robot hybrid navigation method of claim 1, wherein extracting the first lane line from the first frame image according to the predicted lane line comprises:
selecting, from the first frame image according to the predicted lane line, a lane line satisfying a preset condition as the first lane line.
3. The mobile robot hybrid navigation method of claim 2, wherein the selecting, from the first frame image according to the predicted lane line, the lane line satisfying the preset condition as the first lane line comprises:
obtaining a first gray level image according to the first frame image, wherein the first gray level image is an image obtained after gray level processing of the first frame image;
obtaining a first top view of the first frame image according to the first gray level image, wherein the first top view is a ground top view;
and extracting a first line segment set from the first top view, wherein the first line segment set is the set of all line segments in the first top view, and screening out from the first line segment set, as the first lane line, a line segment that satisfies the preset condition when compared with the predicted lane line.
4. The mobile robot hybrid navigation method of claim 1, wherein the visual navigation mode is exited when the first lane line cannot be acquired from a preset number of consecutive frame images.
5. The mobile robot hybrid navigation method of claim 1, wherein navigating in the laser navigation mode using lidar scanning comprises:
acquiring an initial pose of the current position of the mobile robot;
acquiring a virtual point cloud according to the initial pose and the environment map;
and acquiring a first pose of the mobile robot according to the virtual point cloud and the point cloud obtained by actual scanning of the laser radar, wherein the first pose is an accurate pose of the current position of the mobile robot.
6. The mobile robot hybrid navigation method according to claim 1, wherein an odometer is further provided on the mobile robot, and the odometer records mileage while the mobile robot operates.
7. A hybrid navigation device, comprising a processor and a memory coupled to the processor, wherein,
the memory stores program instructions for implementing the mobile robot hybrid navigation method of any one of claims 1-6;
the processor is configured to execute the program instructions stored by the memory to hybrid navigate a mobile robot.
8. A storage device, characterized in that a program file capable of implementing the mobile robot hybrid navigation method according to any one of claims 1 to 6 is stored therein.
CN202010592745.7A 2020-06-24 2020-06-24 Mobile robot hybrid navigation method, equipment and storage device Active CN111780744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010592745.7A CN111780744B (en) 2020-06-24 2020-06-24 Mobile robot hybrid navigation method, equipment and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010592745.7A CN111780744B (en) 2020-06-24 2020-06-24 Mobile robot hybrid navigation method, equipment and storage device

Publications (2)

Publication Number Publication Date
CN111780744A CN111780744A (en) 2020-10-16
CN111780744B true CN111780744B (en) 2023-12-29

Family

ID=72761141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010592745.7A Active CN111780744B (en) 2020-06-24 2020-06-24 Mobile robot hybrid navigation method, equipment and storage device

Country Status (1)

Country Link
CN (1) CN111780744B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113777615B (en) * 2021-07-19 2024-03-29 派特纳(上海)机器人科技有限公司 Positioning method and system of indoor robot and cleaning robot
CN115080676B (en) * 2022-06-16 2024-06-18 重庆邮电大学 Dynamic management method for combined map of mobile robot

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9081383B1 (en) * 2014-01-22 2015-07-14 Google Inc. Enhancing basic roadway-intersection models using high intensity image data
CN106918830A (en) * 2017-03-23 2017-07-04 安科机器人有限公司 A kind of localization method and mobile robot based on many navigation modules
CN107421540A (en) * 2017-05-05 2017-12-01 华南理工大学 A kind of Mobile Robotics Navigation method and system of view-based access control model
CN109460739A (en) * 2018-11-13 2019-03-12 广州小鹏汽车科技有限公司 Method for detecting lane lines and device
CN109785667A (en) * 2019-03-11 2019-05-21 百度在线网络技术(北京)有限公司 Deviation recognition methods, device, equipment and storage medium
WO2019140745A1 (en) * 2018-01-16 2019-07-25 广东省智能制造研究所 Robot positioning method and device
WO2020011025A1 (en) * 2018-07-12 2020-01-16 广州小鹏汽车科技有限公司 Automated vehicular lane changing method and apparatus
CN110986920A (en) * 2019-12-26 2020-04-10 武汉万集信息技术有限公司 Positioning navigation method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019232806A1 (en) * 2018-06-08 2019-12-12 珊口(深圳)智能科技有限公司 Navigation method, navigation system, mobile control system, and mobile robot

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9081383B1 (en) * 2014-01-22 2015-07-14 Google Inc. Enhancing basic roadway-intersection models using high intensity image data
CN106918830A (en) * 2017-03-23 2017-07-04 安科机器人有限公司 A kind of localization method and mobile robot based on many navigation modules
CN107421540A (en) * 2017-05-05 2017-12-01 华南理工大学 A kind of Mobile Robotics Navigation method and system of view-based access control model
WO2019140745A1 (en) * 2018-01-16 2019-07-25 广东省智能制造研究所 Robot positioning method and device
WO2020011025A1 (en) * 2018-07-12 2020-01-16 广州小鹏汽车科技有限公司 Automated vehicular lane changing method and apparatus
CN109460739A (en) * 2018-11-13 2019-03-12 广州小鹏汽车科技有限公司 Method for detecting lane lines and device
CN109785667A (en) * 2019-03-11 2019-05-21 百度在线网络技术(北京)有限公司 Deviation recognition methods, device, equipment and storage medium
CN110986920A (en) * 2019-12-26 2020-04-10 武汉万集信息技术有限公司 Positioning navigation method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of laser ranging in autonomous navigation of mobile robots; Sui Jinxue; Yang Li; He Yongqiang; Transducer and Microsystem Technologies (传感器与微系统), No. 07; 114-117 *

Also Published As

Publication number Publication date
CN111780744A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN112567201B (en) Distance measuring method and device
WO2021233029A1 (en) Simultaneous localization and mapping method, device, system and storage medium
US10964054B2 (en) Method and device for positioning
US10192113B1 (en) Quadocular sensor design in autonomous platforms
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
US10496104B1 (en) Positional awareness with quadocular sensor in autonomous platforms
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
Held et al. Precision tracking with sparse 3d and dense color 2d data
JP6095018B2 (en) Detection and tracking of moving objects
KR101776621B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
US8180107B2 (en) Active coordinated tracking for multi-camera systems
Senior et al. Acquiring multi-scale images by pan-tilt-zoom control and automatic multi-camera calibration
WO2014114923A1 (en) A method of detecting structural parts of a scene
JP2012529691A (en) 3D image generation
JP2008298685A (en) Measuring device and program
JP2011022157A (en) Position detection apparatus, position detection method and position detection program
CN111780744B (en) Mobile robot hybrid navigation method, equipment and storage device
CN110597265A (en) Recharging method and device for sweeping robot
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN108362205B (en) Space distance measuring method based on fringe projection
CN113160327A (en) Method and system for realizing point cloud completion
Konrad et al. Localization in digital maps for road course estimation using grid maps
CN113034586B (en) Road inclination angle detection method and detection system
CN112950696A (en) Navigation map generation method and generation device and electronic equipment
CN115218906A (en) Indoor SLAM-oriented visual inertial fusion positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201230

Address after: C10, 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Applicant after: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd.

Address before: No.1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
CB02 Change of applicant information

Address after: 310051 8 / F, building a, 1181 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Huarui Technology Co.,Ltd.

Address before: C10, 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Applicant before: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant