CN111427363B - Robot navigation control method and system - Google Patents


Info

Publication number
CN111427363B
CN111427363B (application CN202010334371.9A)
Authority
CN
China
Prior art keywords
obstacle
robot
map
scanning
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010334371.9A
Other languages
Chinese (zh)
Other versions
CN111427363A (en)
Inventor
史超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guoxin Taifu Technology Co ltd
Original Assignee
Shenzhen Guoxin Taifu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guoxin Taifu Technology Co ltd
Priority to CN202010334371.9A
Publication of CN111427363A
Application granted
Publication of CN111427363B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot navigation control method and system, and relates to the field of robot control. The method comprises: step S1, acquiring real-time position information of the robot with a positioning device; step S2, processing the wide-baseline image, the narrow-baseline image and the real-time position information to obtain a navigation map; step S3, scanning in real time to obtain a scanning map containing a number of obstacles; step S4, measuring the obstacles to obtain obstacle height information and marking it into the scanning map to obtain an obstacle map; step S5, fusing the obstacle map and the navigation map to establish a topographic map; step S6, outputting a first control instruction if the obstacle height information in the topographic map is smaller than a preset obstacle threshold, and a second control instruction otherwise; step S7, crossing the obstacle according to the first control instruction; and step S8, steering to avoid the obstacle according to the second control instruction. Beneficial effects: accurate navigation of the robot and effective avoidance of obstacles are realized.

Description

Robot navigation control method and system
Technical Field
The invention relates to the field of robot control, in particular to a robot navigation control method and system.
Background
With the development of artificial intelligence technology and the improvement of people's living standards, robots are widely applied in fields such as manufacturing, agriculture, medical treatment, service and smart home.
In the prior art, robot navigation mostly relies on a camera sensor probe to collect images, which are then analyzed and processed to guide the robot. However, collecting images with a single camera sensor probe yields only one kind of image, so the position accuracy obtained in the subsequent image analysis is low, and the robot is often mispositioned or its navigation route deviates greatly.
For a robot, automatically identifying the advancing direction and adaptively avoiding an obstacle when one appears is a problem the industry has found difficult to handle effectively. To overcome the poor accuracy of single-image analysis in the prior art, the invention provides a method that establishes a topographic map by processing data from a wide-baseline stereo camera, a narrow-baseline stereo camera and a panoramic radar, through which the robot realizes accurate navigation and effective avoidance of obstacles.
Disclosure of Invention
In order to solve the above problems, the present invention provides a robot navigation control method, wherein a wide-baseline stereo camera, a narrow-baseline stereo camera, an environment sensing device and a positioning device are arranged on the head of the robot, and the method comprises the following steps:
step S1, acquiring real-time position information of the robot by adopting the positioning device;
step S2, matching the wide baseline image obtained by the real-time shooting of the wide baseline stereo camera with the narrow baseline image obtained by the real-time shooting of the narrow baseline stereo camera, and processing according to the matching result and the real-time position information to obtain a navigation map;
step S3, the environment sensing device scans in real time to obtain a scanning map, wherein the scanning map comprises a plurality of obstacles;
s4, measuring the obstacle in the scanning map to obtain obstacle height information, and marking the obstacle height information in the scanning map to obtain an obstacle map;
s5, fusing the obstacle map and the navigation map to establish a topographic map;
step S6, comparing the obstacle height information in the topographic map with a preset obstacle threshold value:
if the obstacle height information is smaller than the obstacle threshold, outputting a first control instruction, and turning to the step S7;
if the obstacle height information is not smaller than the obstacle threshold, outputting a second control instruction, and turning to the step S8;
step S7, controlling the robot to cross the obstacle according to the first control instruction;
and step S8, controlling the robot to turn to avoid the obstacle according to the second control instruction.
Preferably, the positioning device comprises a remote satellite positioning unit and a near field positioning unit.
Preferably, the environment sensing device comprises a panoramic camera and a laser radar,
the step S2 includes:
step S21, the panoramic camera performs rotary segmented scanning on the surrounding environment to obtain a plurality of segmented images, and simultaneously the laser radar detects each obstacle in the surrounding environment in real time;
and step S22, stitching the segmented images according to the obstacles to obtain the scanning map containing the obstacles.
Preferably, the step S4 includes:
step S41, carrying out integral scanning on the obstacle, and searching to obtain the highest point of the obstacle;
and step S42, carrying out height measurement on the obstacle to obtain obstacle height information.
Preferably, the step S7 includes:
step S71, adjusting the center of gravity of the robot according to the obstacle height information to obtain a robot posture;
and step S72, maintaining the robot posture, and crossing the obstacle according to the first control instruction.
A robot navigation control system applied to the robot navigation control method described in any one of the above, comprising:
the positioning device is used for acquiring real-time position information of the robot;
the matching module is connected with the positioning device and used for matching the stereoscopic distance image obtained by the real-time shooting of the wide-baseline stereoscopic camera with the front environment image obtained by the real-time shooting of the narrow-baseline stereoscopic camera, and obtaining a navigation map according to the matching result and the real-time position information;
the scanning module is used for obtaining a scanning map by adopting the environment sensing device to scan in real time, and the scanning map comprises a plurality of obstacles;
the marking module is connected with the scanning module and used for measuring the obstacle in the scanning map to obtain obstacle height information and marking the obstacle height information into the scanning map to obtain an obstacle map;
the building module is connected with the matching module and the marking module and is used for fusing the obstacle map and the navigation map to establish a topographic map;
the comparison module is connected with the establishment module and is used for comparing the obstacle height information in the topographic map with a preset obstacle threshold value, outputting a first control instruction when the obstacle height information is smaller than the obstacle threshold value, and outputting a second control instruction when the obstacle height information is not smaller than the obstacle threshold value;
the first execution module is connected with the comparison module and used for controlling the robot to cross the obstacle according to the first control instruction;
and the second execution module is connected with the comparison module and used for controlling the robot to turn to avoid the obstacle according to the second control instruction.
Preferably, the positioning device comprises a remote satellite positioning unit and a near field positioning unit, and the near field positioning unit further comprises an inertial measurement module and a visual odometer.
Preferably, the environment sensing device comprises a panoramic camera and a laser radar,
the scanning module comprises:
the segmentation unit is used for carrying out rotary segmentation scanning on the surrounding environment by adopting the panoramic camera to obtain a plurality of segmented images, and simultaneously detecting each obstacle in the surrounding environment in real time by adopting the laser radar;
and the stitching unit is connected with the segmentation unit and is used for stitching the segmented images according to the obstacles to obtain the scanning map containing the obstacles.
Preferably, the marking module includes:
the searching unit is used for carrying out integral scanning on the obstacle and searching to obtain the highest point of the obstacle;
and the measuring unit is connected with the searching unit and is used for measuring the height of the obstacle to obtain the height information of the obstacle.
Preferably, the first execution module includes:
the adjusting unit is used for adjusting the center of gravity of the robot according to the obstacle height information to obtain a robot posture;
and the crossing unit is connected with the adjusting unit and is used for maintaining the robot posture and crossing the obstacle according to the first control instruction.
Has the following beneficial effects:
the invention establishes the topographic map by combining the wide-baseline stereo camera, the narrow-baseline stereo camera, the environment sensing device and the positioning device, thereby realizing accurate navigation of the robot and effective avoidance of obstacles.
Drawings
FIG. 1 is a flow chart of a method for controlling navigation of a robot according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart of the scan image acquisition in the preferred embodiment of the invention;
FIG. 3 is a flow chart of the obstacle image acquisition in the preferred embodiment of the invention;
FIG. 4 is a schematic diagram of a robot traversing an obstacle according to a preferred embodiment of the invention;
fig. 5 is a schematic structural diagram of a robot navigation control system according to a preferred embodiment of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
In order to solve the above problems, the present invention provides a robot navigation control method, wherein a wide-baseline stereo camera, a narrow-baseline stereo camera, an environment sensing device and a positioning device are arranged on the head of a robot, as shown in fig. 1, comprising the following steps:
step S1, acquiring real-time position information of a robot by adopting a positioning device;
step S2, matching a wide baseline image obtained by shooting a wide baseline stereo camera in real time with a narrow baseline image obtained by shooting a narrow baseline stereo camera in real time, and processing according to a matching result and real-time position information to obtain a navigation map;
step S3, the environment sensing device scans in real time to obtain a scanning map, wherein the scanning map comprises a plurality of obstacles;
s4, measuring the obstacle in the scanned map to obtain obstacle height information, and marking the obstacle height information in the scanned map to obtain an obstacle map;
s5, fusing the obstacle map and the navigation map to establish a topographic map;
step S6, comparing the obstacle height information in the topographic map with a preset obstacle threshold value:
if the obstacle height information is smaller than the obstacle threshold, outputting a first control instruction, and turning to the step S7;
if the obstacle height information is not smaller than the obstacle threshold, outputting a second control instruction, and turning to the step S8;
step S7, controlling the robot to cross the obstacle according to the first control instruction;
and step S8, controlling the robot to turn to avoid the obstacle according to the second control instruction.
Specifically, in this embodiment, before the navigation map is acquired, the positioning device must acquire the real-time position information of the robot as the navigation starting position. The wide-baseline image, shot in real time by the wide-baseline stereo camera, captures the distant environment of the robot; the narrow-baseline image, shot in real time by the narrow-baseline stereo camera, captures the environment near the robot's feet and mainly serves the robot's walking. The wide-baseline image data and the narrow-baseline image data are then matched, and a navigation map taking the real-time position information as the navigation starting point is obtained by processing the matching result together with the real-time position information of the robot. The wide-baseline stereo camera preferably has a 120-degree viewing angle and the narrow-baseline stereo camera a 70-degree viewing angle. The environment sensing device comprises a panoramic camera and a laser radar: the panoramic camera obtains a panoramic image, and the laser radar scans the obstacles to obtain a scanning map containing a number of obstacles. The obstacles in the scanning map are measured to obtain obstacle height information, which is marked into the scanning map to obtain an obstacle map. The obstacle map and the navigation map are then fused into a topographic map; the topographic map contains the obstacle height information used to judge, during robot navigation, whether to cross an obstacle or steer to avoid it.
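To illustrate the fusion of step S5, the sketch below merges a height-annotated obstacle grid into a navigation grid. The grid representation, the function name and the infinite-height marking for already-blocked cells are assumptions made for illustration, not the patent's actual implementation.

```python
import numpy as np

def build_terrain_map(nav_map: np.ndarray, obstacle_map: np.ndarray) -> np.ndarray:
    """Fuse an obstacle-height grid into a navigation grid (step S5).

    nav_map:      2-D array, 0 = traversable, 1 = blocked (from step S2)
    obstacle_map: 2-D array of obstacle heights in metres, 0 = no obstacle
                  (from step S4)
    Returns a terrain map holding the obstacle height per cell; cells the
    navigation map already rules out are marked with infinite height.
    """
    terrain = obstacle_map.astype(float)
    terrain[nav_map == 1] = np.inf  # untraversable regardless of height
    return terrain

nav = np.zeros((4, 4))
nav[0, 0] = 1                 # blocked in the navigation map
obs = np.zeros((4, 4))
obs[2, 3] = 0.12              # a 12 cm obstacle found by the scan (step S4)
terrain = build_terrain_map(nav, obs)
```

A per-cell height map like this is one simple way the fused topographic map could feed the threshold comparison of step S6.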
When the obstacle height information is smaller than the preset obstacle threshold, the obstacle is low enough for the robot to climb over, so the robot is made to cross the obstacle; when the obstacle height information is not smaller than the preset obstacle threshold, the robot cannot climb over an obstacle of that height, so it is made to steer around the obstacle and thereby avoid it.
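The decision of step S6 can be sketched as a simple comparison; the function name and string return values are illustrative assumptions only.

```python
def navigation_decision(obstacle_height: float, obstacle_threshold: float) -> str:
    """Step S6: compare the obstacle height with the preset threshold.

    A height below the threshold yields the first control instruction
    (cross the obstacle, step S7); otherwise the second control
    instruction is issued (steer to avoid, step S8).
    """
    if obstacle_height < obstacle_threshold:
        return "cross"    # first control instruction
    return "avoid"        # second control instruction

low = navigation_decision(0.10, 0.25)    # low obstacle: robot climbs over
high = navigation_decision(0.30, 0.25)   # tall obstacle: robot steers around
```

Note that a height exactly equal to the threshold counts as "not smaller" and therefore triggers avoidance, matching the patent's wording.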
In a preferred embodiment of the present invention, the positioning device includes a remote satellite positioning unit and a near field positioning unit, and the near field positioning unit further includes an inertial measurement module and a visual odometer.
Specifically, in the present embodiment, the real-time position information of the robot is acquired by using the remote satellite positioning unit 100 and the near-field positioning unit 101 in combination. The remote satellite positioning unit is arranged on the head of the robot, and the near-field positioning unit on its trunk. Used alone, the remote satellite positioning unit may fail to position because of signal shielding and similar problems, while the near-field positioning unit used alone accumulates errors; combining the two therefore effectively guarantees both the timeliness and the accuracy of the robot's real-time position information.
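A minimal sketch of how the two positioning sources could complement each other follows. The blend weight, function name and fallback logic are assumptions; a real system would typically use a proper estimator such as a Kalman filter rather than this fixed-weight average.

```python
def fuse_position(satellite_fix, dead_reckoning, sat_ok, alpha=0.8):
    """Blend the remote satellite positioning unit with the near-field
    unit (inertial measurement + visual odometry).

    When the satellite signal is shielded (sat_ok is False) the robot
    falls back on dead reckoning alone; otherwise the satellite fix is
    blended in so it bounds the odometry's accumulated drift.
    alpha is an assumed blend weight, not a value from the patent.
    """
    if not sat_ok:
        return dead_reckoning
    return tuple(alpha * s + (1.0 - alpha) * d
                 for s, d in zip(satellite_fix, dead_reckoning))

pos = fuse_position((10.0, 20.0), (10.4, 19.8), sat_ok=True)
pos_shielded = fuse_position((0.0, 0.0), (10.4, 19.8), sat_ok=False)
```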
Further, the inertial measurement module (IMU) includes a tri-axis accelerometer, a tri-axis gyroscope and a tri-axis magnetometer. The tri-axis accelerometer and tri-axis gyroscope measure the pose of the robot relative to the direction of gravity, and the tri-axis magnetometer provides a complete measurement relative to the direction of gravity and the direction of the earth's magnetic field. Specifically, the tri-axis accelerometer accurately measures the pitch angle and roll angle of the robot; the tri-axis gyroscope detects the angular velocity of the robot; and the tri-axis magnetometer provides the magnetic field components experienced by the robot along the X, Y and Z axes so as to yield a heading angle relative to magnetic north. With this information the robot can determine its geographic bearing while moving and thus acquire its real-time position information.
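The standard tilt-compensation equations show how these three sensors combine: the accelerometer gives pitch and roll, which are then used to project the magnetometer reading onto the horizontal plane for a heading. The axis convention and function name below are assumptions of this sketch, not the patent's.

```python
import math

def attitude_and_heading(acc, mag):
    """Estimate pitch and roll from the tri-axis accelerometer and a
    tilt-compensated magnetic heading from the tri-axis magnetometer.

    acc, mag: (x, y, z) readings in the robot body frame; the formulas
    are the usual tilt-compensation equations, with an assumed
    x-forward, y-left, z-up axis convention.
    Returns (pitch_deg, roll_deg, heading_deg).
    """
    ax, ay, az = acc
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    mx, my, mz = mag
    # Project the magnetometer reading onto the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    heading = math.degrees(math.atan2(-yh, xh)) % 360.0
    return math.degrees(pitch), math.degrees(roll), heading

# A level robot whose magnetometer sees magnetic north straight ahead:
pitch, roll, heading = attitude_and_heading((0.0, 0.0, 9.81), (0.3, 0.0, 0.5))
```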
In a preferred embodiment of the present invention, the environment sensing device comprises a panoramic camera and a laser radar,
step S2 comprises:
s21, performing rotary sectional scanning on the surrounding environment by using a panoramic camera to obtain a plurality of sectional images, and simultaneously detecting various obstacles in the surrounding environment in real time by using a laser radar;
and S22, splicing the segmented images by combining the obstacles to obtain a scanning image containing the obstacles.
Specifically, in this embodiment, the panoramic camera rotationally scans the surrounding environment in real time to obtain segmented images, the laser radar obtains the obstacles of the surrounding environment by scanning, and the segmented images are stitched according to the obstacles to obtain a panoramic scanning map.
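The stitching of steps S21/S22 can be sketched as ordering the rotationally captured slices by their capture angle and joining them. The data layout and function name are assumptions; a real implementation would also blend overlaps and align on the detected obstacles.

```python
import numpy as np

def stitch_segments(segments):
    """Stitch rotationally captured segments (steps S21/S22) into one
    panorama.

    segments: list of (start_angle_deg, image) pairs, each image an
    H x W x 3 array covering an equal angular slice.  This sketch only
    orders the slices by angle and concatenates them side by side.
    """
    ordered = sorted(segments, key=lambda s: s[0])
    return np.concatenate([img for _, img in ordered], axis=1)

# Four 90-degree slices, 2 pixels wide each, captured out of order:
slices = [(180, np.full((4, 2, 3), 2)), (0, np.full((4, 2, 3), 0)),
          (270, np.full((4, 2, 3), 3)), (90, np.full((4, 2, 3), 1))]
panorama = stitch_segments(slices)    # slices rearranged into angle order
```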
In a preferred embodiment of the present invention, as shown in fig. 3, step S4 includes:
step S41, performing integral scanning on the obstacle, and searching to obtain the highest point of the obstacle;
step S42, measuring the height of the obstacle to obtain obstacle height information.
Specifically, in this embodiment, the highest point of the obstacle is obtained by scanning the obstacle, and the obstacle height information is obtained by measuring that highest point; this information is used to decide whether the robot crosses the obstacle or steers to avoid it during navigation.
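Steps S41/S42 amount to a maximum search over the obstacle's scanned points followed by a height measurement. The point format, flat-ground assumption and function name below are simplifications for illustration.

```python
def obstacle_height(points, ground_z=0.0):
    """Steps S41/S42: scan an obstacle's laser-radar returns, find the
    highest point, and measure its height above the ground plane.

    points: iterable of (x, y, z) returns that fall on one obstacle.
    Returns (highest_point, height).
    """
    highest = max(points, key=lambda p: p[2])  # step S41: highest point
    return highest, highest[2] - ground_z      # step S42: height measurement

pts = [(1.0, 0.2, 0.05), (1.1, 0.1, 0.18), (0.9, 0.3, 0.11)]
top, height = obstacle_height(pts)    # the 0.18 m point is the highest
```

The resulting height is what step S6 compares against the preset obstacle threshold.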
In a preferred embodiment of the present invention, as shown in fig. 4, step S7 includes:
step S71, adjusting the gravity center of the robot according to the obstacle height information to obtain a robot gesture;
step S72, maintaining the attitude of the robot, and crossing the obstacle according to the first control instruction.
Specifically, in this embodiment, since the obstacle height information is smaller than the obstacle threshold, the robot lowers its center of gravity after receiving the first control instruction in preparation for crossing the obstacle, so as to cross it safely.
A robot navigation control system, applied to the robot navigation control method above, as shown in fig. 5, includes:
a positioning device 1 for acquiring real-time position information of a robot;
the matching module 2 is connected with the positioning device 1 and is used for matching a stereoscopic distance image obtained by real-time shooting of the wide-baseline stereoscopic camera with a front environment image obtained by real-time shooting of the narrow-baseline stereoscopic camera, and obtaining a navigation map according to a matching result and real-time position information processing;
the scanning module 3 is used for obtaining a scanning map by adopting the environment sensing device to scan in real time, wherein the scanning map comprises a plurality of obstacles;
the marking module 4 is connected with the scanning module 3 and is used for measuring the obstacle in the scanned map to obtain obstacle height information and marking the obstacle height information into the scanned map to obtain an obstacle map;
the building module 5 is connected with the matching module 2 and the marking module 4 and is used for fusing the obstacle map and the navigation map to build a topographic map;
the comparison module 6 is connected with the building module 5 and is used for comparing the obstacle height information in the topographic map with a preset obstacle threshold, outputting a first control instruction when the obstacle height information is smaller than the obstacle threshold, and outputting a second control instruction when the obstacle height information is not smaller than the obstacle threshold;
the first execution module 7 is connected with the comparison module 6 and is used for controlling the robot to cross the obstacle according to the first control instruction;
the second execution module 8 is connected with the comparison module 6 and is used for controlling the robot to turn to avoid the obstacle according to the second control instruction.
Specifically, in this embodiment, the positioning device 1 obtains the real-time position information of the robot as the starting point of navigation; the matching module 2 matches the far-environment image shot in real time by the wide-baseline stereo camera with the near-environment image shot in real time by the narrow-baseline stereo camera, and obtains, in combination with the real-time position information, the navigation map that serves as the basic map for robot navigation. The scanning module 3 scans through the environment sensing device to obtain the scanning map; the marking module 4 measures the obstacles in the scanning map to obtain obstacle height information and marks it into the scanning map to obtain the obstacle map; and the building module 5 fuses the obstacle map and the navigation map to build the topographic map, forming the complete map required for robot navigation. The comparison module 6 judges whether the robot crosses an obstacle or steers to avoid it: the robot crosses through the first execution module 7 and steers to avoid through the second execution module 8, thereby realizing intelligent judgment by the robot.
In a preferred embodiment of the invention, the environment sensing device comprises a panoramic camera and a laser radar,
the scanning module 3 comprises:
the segmentation unit 31 is used for performing rotary segmentation scanning on the surrounding environment by adopting a panoramic camera to obtain a plurality of segmented images, and simultaneously detecting various obstacles in the surrounding environment in real time by adopting a laser radar;
and the stitching unit 32 is connected with the segmentation unit 31 and is used for stitching the segmented images according to the obstacles to obtain the scanning map containing the obstacles.
In a preferred embodiment of the present invention, the marking module 4 includes:
a searching unit 41, configured to perform overall scanning on the obstacle, and search for a highest point of the obstacle;
and a measuring unit 42 connected with the searching unit 41 for measuring the height of the obstacle to obtain the obstacle height information.
In a preferred embodiment of the present invention, the first execution module 7 includes:
an adjusting unit 71 for adjusting the center of gravity of the robot according to the obstacle height information to obtain a robot pose;
a crossing unit 72 connected to the adjusting unit 71 for maintaining the robot posture, crossing the obstacle according to the first control instruction.
The foregoing description covers only the preferred embodiments of the present invention and does not limit the scope of the invention. Equivalent substitutions and obvious variations made by those skilled in the art using the description and drawings of the present invention are intended to fall within the scope of the present invention.

Claims (10)

1. The robot navigation control method is characterized in that a wide-baseline stereo camera, a narrow-baseline stereo camera, an environment sensing device and a positioning device are arranged on the head of the robot, and the method comprises the following steps:
step S1, acquiring real-time position information of the robot by adopting the positioning device;
step S2, matching the wide baseline image obtained by the real-time shooting of the wide baseline stereo camera with the narrow baseline image obtained by the real-time shooting of the narrow baseline stereo camera, and processing according to the matching result and the real-time position information to obtain a navigation map;
step S3, the environment sensing device scans in real time to obtain a scanning map, wherein the scanning map comprises a plurality of obstacles;
s4, measuring the obstacle in the scanning map to obtain obstacle height information, and marking the obstacle height information in the scanning map to obtain an obstacle map;
s5, fusing the obstacle map and the navigation map to establish a topographic map;
step S6, comparing the obstacle height information in the topographic map with a preset obstacle threshold value:
if the obstacle height information is smaller than the obstacle threshold, outputting a first control instruction, and turning to the step S7;
if the obstacle height information is not smaller than the obstacle threshold, outputting a second control instruction, and turning to the step S8;
step S7, controlling the robot to cross the obstacle according to the first control instruction;
and S8, controlling the robot to turn to avoid the obstacle according to the second control instruction.
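The threshold decision in steps S6 to S8 can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation; all identifiers (`Obstacle`, `Command`, `decide`) and the example threshold are hypothetical:

```python
# Illustrative sketch of steps S6-S8 of claim 1: compare the obstacle height
# marked in the topographic map against a preset threshold and emit the
# first (cross) or second (avoid) control instruction. All names hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class Command(Enum):
    CROSS = auto()  # first control instruction: traverse the obstacle (step S7)
    AVOID = auto()  # second control instruction: turn away (step S8)

@dataclass
class Obstacle:
    x: float
    y: float
    height: float  # obstacle height information from the topographic map

def decide(obstacle: Obstacle, obstacle_threshold: float) -> Command:
    """Step S6: height below the threshold -> cross, otherwise avoid."""
    if obstacle.height < obstacle_threshold:
        return Command.CROSS
    return Command.AVOID

# e.g. with a 0.25 m threshold, a 0.10 m kerb is crossed, a 0.40 m crate avoided
assert decide(Obstacle(1.0, 2.0, 0.10), 0.25) is Command.CROSS
assert decide(Obstacle(3.0, 1.0, 0.40), 0.25) is Command.AVOID
```

Note that an obstacle exactly at the threshold triggers the second instruction, matching the claim's "not smaller than" wording.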
2. The method of claim 1, wherein the positioning device comprises a remote satellite positioning unit and a near field positioning unit, the near field positioning unit comprising an inertial measurement module and a visual odometer.
3. The method of claim 1, wherein the environment sensing device comprises a panoramic camera and a lidar,
the step S3 includes:
step S31, performing a rotary segmented scan of the surrounding environment with the panoramic camera to obtain a plurality of segmented images, while the lidar detects each obstacle in the surrounding environment in real time;
step S32, stitching the segmented images in combination with the detected obstacles to obtain the scanning map containing the obstacles.
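Claim 3's fusion of the rotary segmented scan with lidar obstacle detections can be sketched minimally as below. The per-degree grid representation and the name `build_scanning_map` are assumptions for illustration only:

```python
# Minimal sketch of claim 3: detections gathered while the panoramic camera
# sweeps its segments are stitched into one 360-degree scanning map, with
# lidar obstacle returns marked per bearing. The per-degree grid and all
# names here are illustrative assumptions, not the patented implementation.

def build_scanning_map(lidar_hits):
    """lidar_hits: iterable of (bearing_deg, range_m) obstacle detections.
    Returns a 360-cell map: None = free bearing, float = obstacle range."""
    scanning_map = [None] * 360
    for bearing, rng in lidar_hits:
        cell = int(bearing) % 360
        # keep the nearest return per bearing, as stitching segments would
        if scanning_map[cell] is None or rng < scanning_map[cell]:
            scanning_map[cell] = rng
    return scanning_map

scan = build_scanning_map([(10.2, 3.5), (10.9, 2.8), (200.0, 5.0)])
assert scan[10] == 2.8 and scan[200] == 5.0 and scan[0] is None
```

A real implementation would stitch the segmented camera images themselves (e.g. by feature matching) and register the lidar returns into the resulting panorama; the grid above only illustrates the fused result.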
4. The robot navigation control method according to claim 1, wherein the step S4 includes:
step S41, performing a full scan of the obstacle and searching for the highest point of the obstacle;
step S42, measuring the height of the obstacle to obtain the obstacle height information.
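Steps S41 and S42 read as a highest-point search over the obstacle's scan returns. The following is a minimal sketch under the assumption that the scan yields 3-D points and a known ground height; `obstacle_height` and `ground_z` are made-up names:

```python
# Minimal sketch of steps S41-S42: scan the whole obstacle, find its highest
# point, and report its height above an assumed ground plane. Names are
# illustrative, not the patented implementation.
def obstacle_height(points, ground_z=0.0):
    """points: iterable of (x, y, z) returns from a full scan of one obstacle."""
    highest_z = max(p[2] for p in points)  # step S41: find the highest point
    return max(highest_z - ground_z, 0.0)  # step S42: height above ground

pts = [(0.10, 0.00, 0.05), (0.20, 0.10, 0.31), (0.15, 0.05, 0.18)]
assert abs(obstacle_height(pts) - 0.31) < 1e-9
```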
5. The robot navigation control method according to claim 1, wherein the step S7 includes:
step S71, adjusting the center of gravity of the robot according to the obstacle height information to obtain a robot posture;
step S72, maintaining the robot posture and crossing the obstacle according to the first control instruction.
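One plausible reading of steps S71 and S72 is lowering the center of gravity in proportion to the obstacle height and holding that posture while crossing. The gain and height limits below are invented for illustration and do not come from the patent:

```python
# Hypothetical sketch of steps S71-S72: derive a center-of-gravity height
# from the obstacle height, clamped to a stable range, then hold the
# resulting posture while crossing. All numbers are illustrative assumptions.
def crossing_posture(obstacle_height_m, nominal_cog=0.50, min_cog=0.30, gain=0.5):
    """Return the center-of-gravity height (m) to hold while crossing."""
    return max(min_cog, nominal_cog - gain * obstacle_height_m)

assert abs(crossing_posture(0.10) - 0.45) < 1e-9  # lowered for a small step
assert crossing_posture(0.60) == 0.30             # clamped at the stable minimum
```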
6. A robot navigation control system, applying the robot navigation control method according to any one of claims 1 to 5, the system comprising:
the positioning device is used for acquiring real-time position information of the robot;
the matching module is connected with the positioning device and is used for matching the wide-baseline image captured in real time by the wide-baseline stereo camera with the narrow-baseline image captured in real time by the narrow-baseline stereo camera, and for obtaining a navigation map according to the matching result and the real-time position information;
the scanning module is used for obtaining a scanning map by adopting the environment sensing device to scan in real time, and the scanning map comprises a plurality of obstacles;
the marking module is connected with the scanning module and used for measuring the obstacle in the scanning map to obtain obstacle height information and marking the obstacle height information into the scanning map to obtain an obstacle map;
the building module is connected with the matching module and the marking module and is used for fusing the obstacle map and the navigation map to build a topographic map;
the comparison module is connected with the establishment module and is used for comparing the obstacle height information in the topographic map with a preset obstacle threshold value, outputting a first control instruction when the obstacle height information is smaller than the obstacle threshold value, and outputting a second control instruction when the obstacle height information is not smaller than the obstacle threshold value;
the first execution module is connected with the comparison module and used for controlling the robot to cross the obstacle according to the first control instruction;
and the second execution module is connected with the comparison module and used for controlling the robot to turn to avoid the obstacle according to the second control instruction.
7. The robotic navigation control system of claim 6, wherein the positioning device includes a remote satellite positioning unit and a near field positioning unit, the near field positioning unit including an inertial measurement module and a visual odometer.
8. The robotic navigation control system of claim 6, wherein the environment sensing device comprises a panoramic camera and a lidar,
the scanning module comprises:
a segmentation unit for performing a rotary segmented scan of the surrounding environment with the panoramic camera to obtain a plurality of segmented images, while detecting each obstacle in the surrounding environment in real time with the lidar;
a stitching unit, connected to the segmentation unit, for stitching the segmented images in combination with the detected obstacles to obtain the scanning map containing the obstacles.
9. The robotic navigation control system of claim 6, wherein the marking module comprises:
a searching unit for performing a full scan of the obstacle and searching for the highest point of the obstacle;
a measuring unit, connected to the searching unit, for measuring the height of the obstacle to obtain the obstacle height information.
10. The robotic navigation control system of claim 6, wherein the first execution module comprises:
an adjusting unit for adjusting the center of gravity of the robot according to the obstacle height information to obtain a robot posture;
a crossing unit, connected to the adjusting unit, for maintaining the robot posture and crossing the obstacle according to the first control instruction.
CN202010334371.9A 2020-04-24 2020-04-24 Robot navigation control method and system Active CN111427363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334371.9A CN111427363B (en) 2020-04-24 2020-04-24 Robot navigation control method and system

Publications (2)

Publication Number Publication Date
CN111427363A CN111427363A (en) 2020-07-17
CN111427363B true CN111427363B (en) 2023-05-05

Family

ID=71554666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334371.9A Active CN111427363B (en) 2020-04-24 2020-04-24 Robot navigation control method and system

Country Status (1)

Country Link
CN (1) CN111427363B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111752285A (en) * 2020-08-18 2020-10-09 广州市优普科技有限公司 Autonomous navigation method and device for quadruped robot, computer equipment and storage medium
CN111966109B (en) * 2020-09-07 2021-08-17 中国南方电网有限责任公司超高压输电公司天生桥局 Inspection robot positioning method and device based on flexible direct current converter station valve hall
CN112629520A (en) * 2020-11-25 2021-04-09 北京集光通达科技股份有限公司 Robot navigation and positioning method, system, equipment and storage medium
CN112716401B (en) * 2020-12-30 2022-11-04 北京奇虎科技有限公司 Obstacle-detouring cleaning method, device, equipment and computer-readable storage medium
CN114911223B (en) * 2021-02-09 2023-05-05 北京盈迪曼德科技有限公司 Robot navigation method, device, robot and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2017025521A1 (en) * 2015-08-07 2017-02-16 Institut De Recherche Technologique Jules Verne Device and method for detecting obstacles suitable for a mobile robot
CN109900280A (en) * 2019-03-27 2019-06-18 浙江大学 A kind of livestock and poultry information Perception robot and map constructing method based on independent navigation
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and its system for sweeping robot

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR20160002178A (en) * 2014-06-30 2016-01-07 현대자동차주식회사 Apparatus and method for self-localization of vehicle

Non-Patent Citations (2)

Title
Qi Erjiang; Peng Daogang; Guan Xinlei; Wang Lili; Mei Lan. Research on the navigation system of an inspection robot based on RTK BeiDou and lidar. Instrument Technique and Sensor. 2018, (06), full text. *
Li Junqin; Xiao Nanfeng. Research on a multi-sensor-based local navigation method for home service robots. Microcomputer Information. 2006, (20), full text. *

Also Published As

Publication number Publication date
CN111427363A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111427363B (en) Robot navigation control method and system
CN107145578B (en) Map construction method, device, equipment and system
JP7082545B2 (en) Information processing methods, information processing equipment and programs
Brenner Extraction of features from mobile laser scanning data for future driver assistance systems
WO2020038285A1 (en) Lane line positioning method and device, storage medium and electronic device
JP5227065B2 (en) 3D machine map, 3D machine map generation device, navigation device and automatic driving device
CN109086277B (en) Method, system, mobile terminal and storage medium for constructing map in overlapping area
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
WO2016077703A1 (en) Gyroscope assisted scalable visual simultaneous localization and mapping
EP3680616A1 (en) Localization method and apparatus, mobile terminal and computer-readable storage medium
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
CN111768489B (en) Indoor navigation map construction method and system
WO2016059904A1 (en) Moving body
JP2009110250A (en) Map creation device and method for determining traveling path of autonomous traveling object
JP2016188806A (en) Mobile entity and system
CN105116886A (en) Robot autonomous walking method
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
KR101356644B1 (en) System for localization and method thereof
Ruotsalainen Visual gyroscope and odometer for pedestrian indoor navigation with a smartphone
JPH1185981A (en) Distance measuring origin recognition device for mobile object
Xie et al. A bio-inspired multi-sensor system for robust orientation and position estimation
US11992961B2 (en) Pose determination method, robot using the same, and computer readable storage medium
CN115588036A (en) Image acquisition method and device and robot
Wang et al. Pedestrian positioning in urban city with the aid of Google maps street view
KR101376536B1 (en) Position Recognition Method for mobile object using convergence of sensors and Apparatus thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant