CN111958597B - Method for controlling autonomous obstacle crossing process of mobile robot - Google Patents

Method for controlling autonomous obstacle crossing process of mobile robot

Info

Publication number
CN111958597B
Authority
CN
China
Prior art keywords
scanning
obstacle
robot
data
points
Prior art date
Legal status
Active
Application number
CN202010821915.4A
Other languages
Chinese (zh)
Other versions
CN111958597A (en
Inventor
王伟东
杜志江
张权
王韩
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202010821915.4A priority Critical patent/CN111958597B/en
Publication of CN111958597A publication Critical patent/CN111958597A/en
Application granted granted Critical
Publication of CN111958597B publication Critical patent/CN111958597B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 Avoiding collision or forbidden zones

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method for controlling the autonomous obstacle crossing process of a mobile robot belongs to the technical field of autonomous obstacle crossing of mobile robots. The specific scheme is as follows: a method for autonomous obstacle crossing process control of a mobile robot, comprising the following steps: step one, mounting data scanning sensors on the left and right sides of the robot, the sensors scanning along the longitudinal direction to obtain scan line data; step two, extracting obstacle-related feature information from the scan line data to determine the spatial position relation between the robot and the obstacle; and step three, switching the obstacle crossing steps according to the real-time spatial position relation between the robot and the obstacle. The invention obtains the real-time position of the robot relative to the obstacle by extracting features from the scan line data of the data scanning sensors and controls the obstacle crossing process on that basis, giving high real-time performance and reliability; at the same time, this obstacle crossing control mode is convenient and quick, improving obstacle crossing efficiency.

Description

Method for controlling autonomous obstacle crossing process of mobile robot
Technical Field
The invention belongs to the technical field of autonomous obstacle crossing of mobile robots, and particularly relates to a method for controlling an autonomous obstacle crossing process of a mobile robot.
Background
With the continuous development of science and technology, robot functions are becoming richer and richer. Autonomous obstacle crossing refers to the process in which a robot acquires environment information through its on-board sensing equipment, determines its own position, identifies obstacle information in the environment, and then, combined with its own obstacle crossing capability, crosses the obstacle without human intervention. As robots are increasingly expected to be intelligent and autonomous, robot behavior that is truly free of manual operation and intervention is the development goal and direction, and in recent years the appearance of high-precision sensors, the improvement of hardware computing power and the rapid development of perception technology have made this possible.
The autonomy is the development direction and the technical difficulty of the robot obstacle surmounting, and the development of the mobile robot autonomous obstacle surmounting research in special environments has important significance and great application value.
When a mobile robot adopts a step-type obstacle crossing method, it completes a full obstacle crossing process by executing each step in sequence, and the robot's motion control and step switching are critical during this process. On the one hand, the robot must always face the obstacle squarely, and timely correction is needed when slipping causes its heading to deviate from the obstacle; on the other hand, the obstacle crossing step must be switched according to the robot's current real-time position.
The traditional obstacle crossing control mode is manual remote control: an operator manually teleoperates the robot, or calls an obstacle crossing control program and inputs corresponding parameters according to the robot's current state and the obstacle information in the environment. All of this presupposes that the operator knows the condition of the robot and its surroundings, whether that information is obtained at the obstacle crossing site or remotely through sensing equipment such as a camera. This remote control mode places high demands on the operator's skill, has low execution efficiency, and suffers from problems such as operation delay. Moreover, wireless remote control signals are greatly attenuated when the path is blocked by a conductor such as metal, since a metal shell shields wireless signals; in most obstacle crossing scenes the robot must work alone without manual intervention, and environments such as nuclear radiation limit signal transmission, all of which makes the traditional obstacle crossing mode difficult.
Disclosure of Invention
The invention aims to obtain the real-time position relation of a robot relative to an obstacle and control an obstacle crossing process based on the real-time position relation, and provides a method for controlling an autonomous obstacle crossing process of a mobile robot.
In order to realize the purpose, the technical scheme adopted by the invention is as follows:
a method for autonomous obstacle crossing process control of a mobile robot, comprising the steps of:
step one, mounting data scanning sensors on the left side and the right side of a robot, and scanning the data scanning sensors along the longitudinal direction to obtain scanning line data;
secondly, extracting characteristic information related to the obstacle from the scanning line data so as to determine the spatial position relation between the robot and the obstacle;
and step three, switching the obstacle crossing step according to the real-time space position relationship between the robot and the obstacle.
Further, in the first step, the data scanning sensor includes a two-dimensional laser radar, a three-dimensional laser radar, a depth camera, or a binocular stereo camera.
Further, in the second step, the feature information related to the obstacle includes edge point information of the obstacle.
Further, the edge point information includes corner point feature information and breakpoint feature information.
Further, the process of extracting the corner feature information includes the following steps:
step 1, scanning line data preprocessing:
scan_process = f_r(f_a(scan))    (1)
where scan and scan_process are the scan line data before and after processing, respectively; f_a(·) denotes an angle filter that can extract the data within a specified angular range from the raw scan data by setting a start-stop angle; and f_r(·) denotes a range filter that filters out useless data beyond a specified distance from the data scanning sensor;
step 2, scanning line data segmentation: dividing the scanning line data into different data segments by searching for a breakpoint in the scanning line, and simultaneously filtering out noise points in the dividing process;
step 3, angular point extraction: and (4) carrying out corner extraction on each data segment obtained by segmentation by adopting a connecting line projection method.
Further, in step 2, an adaptive threshold method is used to find breakpoints in the scan line, specifically: the Euclidean distance between each pair of adjacent scanning points is calculated in scanning order; when the distance is smaller than the adaptive threshold, it is judged that no breakpoint exists between the two points; otherwise, the two adjacent scanning points are judged to be breakpoints in the scan line.
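As an illustrative sketch only (not the patent's implementation), the adaptive-threshold breakpoint search described above might be written as follows; the scan points are assumed to be (x, y) pairs in scanning order, and the base threshold thd at a range of 1 m is an assumed example value:

```python
import math

def find_breakpoints(points, thd=0.05):
    """Return indices i where a breakpoint lies between points[i] and points[i+1].

    points: list of (x, y) scan points in scanning order.
    thd:    base threshold (metres) at a range of 1 m; the threshold is
            scaled by the range of the scan point (adaptive threshold).
    """
    breaks = []
    for i in range(len(points) - 1):
        x0, y0 = points[i]
        x1, y1 = points[i + 1]
        gap = math.hypot(x1 - x0, y1 - y0)   # Euclidean distance between neighbours
        r = math.hypot(x0, y0)               # range of the scan point from the sensor
        if gap >= r * thd:                   # adaptive threshold taken as Thd = r * thd
            breaks.append(i)
    return breaks
```

Here the adaptive threshold is taken as Thd = r × thd, one concrete reading of the proportionality stated in the text.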
Further, in step 3, the extracted corner points are screened: a sliding window of size 3 is set and slid over the extracted corner point list in scanning order, advancing by a step of one corner point each time; the three corner points in the window are examined, and when the three corner points form a straight line, the middle corner point is a pseudo corner point and is removed.
Further, the method for extracting the breakpoint feature information is to search for breakpoints in the scan line with an adaptive threshold method, specifically: the Euclidean distance between each pair of adjacent scanning points is calculated in scanning order; when the distance is smaller than the adaptive threshold, it is judged that no breakpoint exists between the two points; otherwise, the two adjacent scanning points are judged to be breakpoints in the scan line.
Further, in the third step, the position of the robot for switching the steps is determined through obstacle crossing teaching or analysis and calculation.
Obstacle crossing teaching means that a human operator drives the robot to stop at one or more positions and records the distance value, scanned by the data scanning sensor at that moment, from the sensor to a position point on the obstacle, as the basis for stop control in the subsequent autonomous obstacle crossing process.
Analysis calculation means calculating, through geometric analysis, the distance between the data scanning sensor located at the center of the vehicle body and a certain position point of the obstacle when the middle wheel approaches that point, as the basis for stop control when the robot subsequently crosses the obstacle autonomously.
Furthermore, in the third step, the adjustment of the robot direction is realized by comparing the magnitude relation of the distance values of the obstacle edge points in the left and right data scanning sensors.
Compared with the prior art, the invention has the beneficial effects that:
the invention obtains the real-time position relation of the robot relative to the obstacle by extracting the scanning line data characteristics of the data scanning sensor, and controls the obstacle crossing process based on the real-time position relation of the robot relative to the obstacle, thereby having higher real-time performance and reliability; meanwhile, the obstacle crossing control mode is convenient and quick, and the obstacle crossing efficiency is improved.
Drawings
FIG. 1 is a schematic side view of a data scanning sensor mounted on a robot, where A is an angular point;
FIG. 2 is a schematic front view of a data scanning sensor mounted on a robot;
FIG. 3 is a schematic diagram of characteristic points of a two-dimensional laser radar scanning line, wherein B is an angular point, C is a breakpoint, and D is a noise point;
FIG. 4 is a schematic diagram of a line projection method, wherein E is an angular point, a and b are two end points of a scanning range, respectively, and l is a line connecting the two end points in the scanning range;
FIG. 5 is a schematic diagram of a second line projection method, in which c and d are two end points of the scanning range, respectively, e and f are both corner points, and g is a pseudo corner point;
FIG. 6 is a schematic view of the planned movement of the robot across the trench;
fig. 7 shows the two-dimensional lidar scanning line feature point extraction result, and F is the trench back edge corner point.
Detailed Description
The technical solution of the present invention is explained in detail below with reference to the accompanying fig. 1 to 7 and the detailed description.
Detailed description of the invention
A method for autonomous obstacle crossing process control of a mobile robot, comprising the steps of:
step one, mounting two-dimensional laser radars in the middle positions of the left side and the right side of the robot, wherein the mounting positions are shown in square blocks in figures 1 and 2, and the two-dimensional laser radars scan along the longitudinal direction to obtain radar scanning line data, which are shown as dotted scanning lines in figure 1;
extracting feature information related to the obstacle from the radar scanning line data, and determining the spatial position relation between the robot and the obstacle;
and step three, switching the obstacle crossing step according to the real-time space position relationship between the robot and the obstacle.
Further, in the second step, the feature information related to the obstacle includes edge point information of the obstacle, and the edge point information includes corner point feature information and breakpoint feature information.
Several common features of a two-dimensional lidar scan line are discussed as examples: breakpoints, corner points, straight lines and noise points, as shown in fig. 3. Breakpoints are adjacent scanning points separated by a large distance in the scan line, such as point C in the figure; breakpoints divide the scan line into different data segments. Corner points usually appear together with straight lines; the first derivative of the scan line is largest at a corner point, which can be understood as the turning point of line segments, such as point B in the figure. Noise points are individual discrete points that deviate from the main body of the scan line, such as point D in the figure; noise points are usually filtered out during data processing. This embodiment aims to find the edge point information of the obstacle in the radar scan line; for example, the edge point A of the obstacle in fig. 1 corresponds to a corner point in the radar scan line. Extracting corner features from the two-dimensional lidar scan line involves three steps, namely data preprocessing, scan line segmentation and corner extraction, as follows:
step 1, scanning line data preprocessing:
the scanning visual angle of the two-dimensional laser radar is 270 degrees or 360 degrees, and only the radar scanning line data in a smaller range related to the obstacle below the robot is concerned in the edge point extraction process, so that the scanning line data of the laser radar needs to be preprocessed:
scan_process = f_r(f_a(scan))    (1)
where scan and scan_process are the radar scan line data before and after processing, respectively; f_a(·) denotes an angle filter that extracts the data within a specified angular range from the raw scan data by setting a start-stop angle; and f_r(·) denotes a range filter that removes useless data beyond a specified distance from the sensor; in this application scenario, data more than 2 m from the sensor are filtered out as useless;
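A minimal sketch of the preprocessing of equation (1), assuming the raw scan is given in polar form; the 2 m cut-off follows the text, while the function and parameter names are illustrative assumptions:

```python
def preprocess_scan(ranges, angles, a_min, a_max, r_max=2.0):
    """Sketch of equation (1): scan_process = f_r(f_a(scan)).

    ranges, angles: raw polar scan (metres, radians), in scanning order.
    a_min, a_max:   start/stop angles of the angle filter f_a.
    r_max:          cut-off distance of the range filter f_r (2 m in the text).
    Returns the kept (range, angle) pairs.
    """
    kept = []
    for r, a in zip(ranges, angles):
        if a_min <= a <= a_max and r <= r_max:   # f_a (angle window), then f_r (range cut)
            kept.append((r, a))
    return kept
```

In practice the start-stop angles would be chosen so that only the region around the obstacle below the robot remains in the scan.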
step 2, scan line data segmentation: the scan line data are divided into different data segments by searching for breakpoints in the scan line, and noise points are filtered out during the division. An adaptive threshold method is used to find the breakpoints, specifically: the Euclidean distance between each pair of adjacent scanning points is calculated in scanning order; when the distance is smaller than the adaptive threshold, no breakpoint exists between the two points; otherwise, the two adjacent scanning points are breakpoints in the scan line. The scan line can thus be divided into different data segments in a single traversal.
Due to the characteristics of the two-dimensional lidar, scanning points close to the radar are dense while points far from the radar are relatively sparse, so an adaptive threshold is adopted when judging the distance between adjacent scanning points. Taking the threshold at a scanning-point range of 1 m as thd, the adaptive threshold Thd is defined as:
Thd ∝ r × thd    (2)
where r is the distance of the scanning point from the radar.
When a data segment contains too few points, its points are regarded as noise points and discarded; if the head and tail points of the two data segments before and after the noise are closer than the adaptive threshold, the two segments are merged. The pseudocode of the scan line segmentation algorithm can be summarized as follows:
(pseudocode figure omitted from the source text)
where scan_points is the two-dimensional coordinate form of the radar scanning points, has_erase is a flag bit indicating whether a noise point exists, Distance() is a function calculating the Euclidean distance between two scanning points, and N_min is the upper limit on the number of points in a noise segment; the result of the scan line segmentation is stored in broken_scan_points as a sequence of points.
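The pseudocode figure itself is not reproduced in this text; the following Python sketch is a hedged reconstruction based only on the variable names described above (scan_points, Distance(), N_min, broken_scan_points), with assumed parameter values:

```python
import math

def distance(p, q):
    """Euclidean distance between two scan points (the Distance() of the text)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def segment_scan(scan_points, thd=0.05, n_min=3):
    """Split scan_points into segments at adaptive-threshold breakpoints,
    discarding segments with fewer than n_min points as noise.
    A sketch of the segmentation pseudocode described in the text."""
    segments, current = [], [scan_points[0]]
    for p, q in zip(scan_points, scan_points[1:]):
        r = math.hypot(p[0], p[1])        # range of the scan point from the radar
        if distance(p, q) < r * thd:      # no breakpoint: same segment
            current.append(q)
        else:                             # breakpoint: start a new segment
            segments.append(current)
            current = [q]
    segments.append(current)
    # segments shorter than n_min are treated as noise and dropped
    broken_scan_points = [s for s in segments if len(s) >= n_min]
    return broken_scan_points
```

The text additionally merges the two data segments on either side of a discarded noise segment when their head and tail points fall within the adaptive threshold; that merge step is omitted from this sketch for brevity.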
Step 3, angular point extraction: and extracting angular points of each data segment obtained by segmentation by adopting a connecting line projection method.
As shown in fig. 4, connecting the start point a and the end point b of the scan line to obtain a straight line l, and calculating the distance from other points in the scan line to the straight line l, where the farthest point is a potential corner point, such as the point E in fig. 4; when a plurality of angular points exist in the scanning line, the scanning line can be divided into a-E and E-b sections by the point E, and each section is continuously subjected to recursive solution by using the method until no angular point exists in the scanning line section. The distance from point to line can be derived from the nature of the vector cross product, taking the distance d from point E to line l in fig. 4 as an example:
d = |ab × aE| / |ab|    (3)
where ab denotes the vector from point a to point b, aE the vector from point a to point E, and × the two-dimensional vector cross product.
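A sketch of the connecting-line projection recursion described above (a split step in the spirit of the Ramer-Douglas-Peucker family), using the cross-product point-to-line distance; the stopping threshold d_min is an assumed parameter not given in the text:

```python
import math

def point_line_distance(p, a, b):
    """Distance from point p to the line through a and b, via the cross product."""
    cross = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    return cross / math.hypot(b[0] - a[0], b[1] - a[1])

def extract_corners(points, d_min=0.05):
    """Recursively split a segment at the point farthest from the chord a-b;
    stop when no interior point lies farther than d_min from the chord."""
    if len(points) < 3:
        return []
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    k = max(range(len(dists)), key=dists.__getitem__) + 1   # index into points
    if dists[k - 1] < d_min:
        return []                                           # no corner in this segment
    # points[k] is a corner; recurse on the sub-segments a..k and k..b
    return (extract_corners(points[:k + 1], d_min)
            + [points[k]]
            + extract_corners(points[k:], d_min))
```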
the method has the disadvantage that when a line segment parallel to a connecting line of a start point and a stop point exists in a scanning line, a pseudo corner point is easy to extract. As shown in fig. 5, the e-f segment scan lines are approximately parallel to the connecting line cd, and when calculating the first corner point, it is easy to obtain points other than the real corner point on the e-f segment. Therefore, after the corner recursive solution is performed on each scan line data segment, a pseudo corner screening process needs to be performed on the potential corners. The screening method comprises the following steps: setting a sliding window with the size of 3, sliding on the potential corner point list according to the scanning sequence, judging three potential corner points in the window by sliding the step length of one corner point each time, and when three points can form a straight line, explaining that the middle potential corner point is a pseudo corner point and rejecting the pseudo corner point. Whether the straight line is formed is also judged by the formula (3).
The overall algorithm framework for corner extraction is as follows:
(pseudocode figure omitted from the source text)
the corner index of each group of scan lines is stored in the corner _ index in descending order, and the corner _ index _ list is a sequence of corner indexes of each group of scan lines.
CornerPointExtraction () is a function that extracts the corner index from the scan line, with the pseudo code as follows:
(pseudocode figure omitted from the source text)
wherein the DistanceToLine () function calculates the distance of a point to a straight line according to a given parameter.
Cornerpoinxection () is a corner filtering function to remove false corners, whose pseudo code is as follows:
(pseudocode figure omitted from the source text)
the IsoNeLine () function is used to determine whether the point corresponding to a given index and the three points before and after the point form a straight line.
For each specific type of obstacle, one edge of the obstacle can always be detected by the radar during obstacle crossing and necessarily corresponds to a breakpoint or corner point in the scan line; by tracking and monitoring these points in real time during obstacle crossing, the relative position of the robot and the obstacle can be obtained. Once this relative position is known, on the one hand the obstacle crossing step can be switched according to the real-time position relation, with the switching positions determined by obstacle crossing teaching or analysis calculation; on the other hand, by comparing the distance values of the obstacle edge points in the left and right radars, the robot's heading can be adjusted, solving the heading deviation between robot and obstacle caused by tire slip, idling and the like.
Example 1
Fig. 6 shows the process of the robot crossing a trench, comprising steps a), b), c) and d). In obstacle crossing step b), when the middle wheel approaches the rear edge of the trench, the robot needs to stop moving to perform the corresponding posture transformation. The rear edge of the trench can always be detected by the radar during obstacle crossing and necessarily corresponds to a corner point in the scan line before the robot stops moving, as shown in fig. 7, where the two dense point sets are the scan lines of the two-dimensional lidars and the start point of each arrow is a breakpoint or corner point of a scan line.
During feature extraction, a suitable angle filter ensures that this edge point is always the first corner point from right to left in the scanned data, as shown in fig. 7; by tracking and monitoring this point in real time during obstacle crossing, the relative position of the robot and the obstacle can be obtained.
On the one hand, obstacle crossing teaching or analysis calculation can determine the value of the radar scan data corresponding to the edge point when the robot stops in step b) of fig. 6, and stop control of the robot can be realized on this basis. On the other hand, when the robot squarely faces the obstacle, the edge point values in the left and right radars are similar; when they differ greatly, the robot is considered to deviate in heading from the obstacle, and a corresponding steering control command is output to the robot according to the relation between the values, so that the heading can be corrected in time.
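A toy sketch of this steering logic; the dead band tol and the turn-direction convention are assumptions, since the text only states that a steering command is output according to the relation between the left and right values:

```python
def steering_command(left_dist, right_dist, tol=0.02):
    """Compare the obstacle-edge distance seen by the left and right sensors
    and return a simple steering command. tol (metres) is an assumed dead band.
    When the robot faces the obstacle squarely, the two readings are similar."""
    if abs(left_dist - right_dist) <= tol:
        return "straight"
    # assumed convention: turn toward the side that reads the larger distance
    return "turn_left" if left_dist > right_dist else "turn_right"
```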
According to the illustration in fig. 6, the obstacle crossing teaching specifically comprises the following steps:
Step 1), the robot is teleoperated to the trench. Before obstacle crossing the robot is in the neutral position state; first the suspension is lowered to lower the robot's center of gravity, then the middle axle is moved forward so that the front wheelbase is smaller than the rear wheelbase;
and 2), the remote control robot starts to advance, the robot is ensured not to tip over when the current wheel is suspended, the front wheel has contacted the rear edge of the trench when the middle wheel is separated from the front edge of the trench, the robot stops moving when the middle wheel is close to the rear edge of the trench, and the distance value between the radar scanned by the radar and the rear edge angular point is recorded.
Step 3), the middle axle of the robot is moved backward so that the front wheelbase is larger than the rear wheelbase.
Step 4), the robot is teleoperated forward again, ensuring that the middle wheels have contacted the rear edge of the trench by the time the rear wheels leave the front edge and that the robot does not tip over. After the robot has completely crossed the trench, the suspension and wheelbase are reset and the robot returns to the neutral position state, ending the teaching process.
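The taught stop distance can then drive step switching during autonomous repetition; the following sketch is purely illustrative, with step names and tolerance that do not come from the patent:

```python
def obstacle_crossing_step(edge_dist, taught_stop_dist, step, tol=0.01):
    """Toy step switcher for the trench-crossing teach-and-repeat sequence:
    advance from the taught stop of step b) to the axle-shift step when the
    live scanned distance to the trench rear-edge corner reaches the value
    recorded during teaching. Step names and tol are illustrative only."""
    if step == "b_advance" and edge_dist <= taught_stop_dist + tol:
        return "c_shift_axle"   # middle wheel has reached the trench rear edge
    return step
```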
The running result of the algorithm is shown in fig. 7: the two dense point sets in the figure are the scan lines of the two-dimensional lidars, the start point of each arrow is a breakpoint or corner point of a scan line, and it can be seen that the algorithm correctly extracts two corner points and one breakpoint from the scan lines.
In the above specific embodiments, the obstacle feature points are not limited to corner points and breakpoints; for example, determining the position relation between the robot and the obstacle by extracting obstacle-related features such as arcs to realize obstacle crossing process control also belongs to the protection scope of the present invention. Likewise, the extraction of features such as corner points and breakpoints is not limited to the methods set forth in the present invention; the feature points in the scan line data may be obtained by other similar methods, which also fall within the scope of protection.
The control process is not limited to stopping once to record a distance value and adjust the middle axle position; stopping twice, three times or any number of times in the control process also belongs to the protection scope of the invention, so that the achievement of the invention cannot be circumvented merely by changing the number of stops within the same method. The teaching process above is only an example with a single stop.
The above-described embodiments are merely illustrative and not restrictive of the scope of the present invention, and various modifications may be made in the details within the scope and range of equivalents of the present invention without departing from the spirit of the present invention.

Claims (6)

1. A method for autonomous obstacle crossing process control of a mobile robot, comprising the steps of:
firstly, mounting data scanning sensors on the left side and the right side of a robot, and scanning the data scanning sensors along the longitudinal direction to obtain scanning line data;
extracting feature information related to the obstacle from the scanning line data so as to determine the spatial position relationship between the robot and the obstacle;
step three, switching of obstacle crossing steps is achieved according to the real-time space position relation between the robot and the obstacle;
in the second step, the feature information related to the obstacle includes edge point information of the obstacle, and the edge point information includes corner point feature information and breakpoint feature information;
the process of extracting the angular point feature information comprises the following steps:
step 1, scanning line data preprocessing:
scan_process = f_r(f_a(scan))    (1)
in the above formula, scan and scan_process are the scan line data before and after processing, respectively; f_a(·) represents an angle filter that can extract data within a specified angular range from the raw scan data by setting a start-stop angle; f_r(·) represents a range filter to filter out unwanted data beyond a specified distance from the data scanning sensor;
step 2, scanning line data segmentation: dividing the scanning line data into different data segments by searching for a breakpoint in the scanning line, and simultaneously filtering out noise points in the dividing process;
step 3, angular point extraction: extracting angular points of each data segment obtained by segmentation by adopting a connecting line projection method;
the method for extracting the breakpoint characteristic information is to search a breakpoint in a scanning line by adopting a self-adaptive threshold method, and comprises the following specific steps: sequentially calculating the Euclidean distance between two adjacent scanning points according to the scanning sequence, when the Euclidean distance is smaller than the self-adaptive threshold, judging that no breakpoint exists between the two adjacent scanning points, and otherwise, judging that the two adjacent scanning points are the breakpoints in the scanning line.
2. A method for autonomous obstacle detouring process control of a mobile robot according to claim 1, characterized in that: in the first step, the data scanning sensor comprises a two-dimensional laser radar, a three-dimensional laser radar, a depth camera or a binocular stereo camera.
3. A method for autonomous obstacle detouring process control of a mobile robot according to claim 1, characterized in that: in the step 2, a self-adaptive threshold method is adopted to search for a breakpoint in a scanning line, and the specific method is as follows: sequentially calculating the Euclidean distance between two adjacent scanning points according to the scanning sequence, when the Euclidean distance is smaller than the self-adaptive threshold, judging that no breakpoint exists between the two adjacent scanning points, and otherwise, judging that the two adjacent scanning points are the breakpoints in the scanning line.
4. The method for controlling the autonomous obstacle crossing process of a mobile robot according to claim 1, characterized in that: in step 3, the extracted corner points are screened and pseudo corner points are removed; the screening method sets a sliding window of size 3 that slides over the extracted corner point list in scanning order, advancing one corner point per step and judging the three corner points inside the window each time: when the three corner points lie on a straight line, the middle corner point is a pseudo corner point and is removed.
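A sketch of the size-3 sliding-window screening, assuming a cross-product collinearity test; the tolerance `tol` and the (x, y) point representation are illustrative choices, not specified in the claim:

```python
def collinear(p, q, r, tol=1e-3):
    """Three 2-D points are collinear when the cross product of the
    vectors p->q and p->r (twice the triangle area) is near zero."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(cross) < tol

def prune_pseudo_corners(corners, tol=1e-3):
    """Slide a window of 3 corners over the list in scanning order,
    one corner per step; when the three corners in the window form a
    straight line, the middle one is a pseudo corner and is dropped."""
    keep = [True] * len(corners)
    for i in range(1, len(corners) - 1):
        if collinear(corners[i - 1], corners[i], corners[i + 1], tol):
            keep[i] = False
    return [c for c, k in zip(corners, keep) if k]
```

For example, a corner detected on the straight run between (0, 0) and (2, 2) is removed, while a genuine bend at (3, 1) survives.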
5. The method for controlling the autonomous obstacle crossing process of a mobile robot according to claim 1, characterized in that: in the third step, the position at which the robot switches between steps is determined through obstacle crossing teaching or through analysis and calculation.
6. The method for controlling the autonomous obstacle crossing process of a mobile robot according to claim 1, characterized in that: in the third step, the robot's heading is adjusted by comparing the distance values of the obstacle edge points measured by the left and right data scanning sensors.
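The left/right distance comparison of claim 6 could be realized as a simple proportional heading controller; the gain, deadband, and sign convention below are assumptions for illustration only:

```python
def adjust_heading(d_left, d_right, k_p=1.5, deadband=0.005):
    """Angular-velocity command (rad/s) that turns the robot until the
    left and right data scanning sensors report equal distances to the
    obstacle edge, i.e. the robot squarely faces the obstacle.
    Assumed convention: positive = counter-clockwise yaw, commanded when
    the left edge point reads farther than the right; `k_p` and
    `deadband` are illustrative tuning values, not from the patent."""
    err = d_left - d_right
    if abs(err) <= deadband:
        return 0.0  # distances match: robot is aligned, hold heading
    return k_p * err
```

Equal readings produce no correction; a farther left edge yields a positive (counter-clockwise) command, and vice versa.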
CN202010821915.4A 2020-08-15 2020-08-15 Method for controlling autonomous obstacle crossing process of mobile robot Active CN111958597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010821915.4A CN111958597B (en) 2020-08-15 2020-08-15 Method for controlling autonomous obstacle crossing process of mobile robot

Publications (2)

Publication Number Publication Date
CN111958597A CN111958597A (en) 2020-11-20
CN111958597B true CN111958597B (en) 2022-10-21

Family

ID=73388811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010821915.4A Active CN111958597B (en) 2020-08-15 2020-08-15 Method for controlling autonomous obstacle crossing process of mobile robot

Country Status (1)

Country Link
CN (1) CN111958597B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014011068A3 (en) * 2012-07-11 2014-10-16 Introsys - Integration For Robotic Systems-Integração De Sistemas Robóticos, S.A. Autonomous all terrain robotics vehicle
US10012996B1 (en) * 2017-09-15 2018-07-03 Savioke, Inc. Route planning for a mobile robot using configuration-based preferences
CN108398672A (en) * 2018-03-06 2018-08-14 厦门大学 Road surface based on the 2D laser radar motion scans that lean forward and disorder detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426858A (en) * 2015-11-26 2016-03-23 哈尔滨工业大学 Vision and vibration information fusion based ground type identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A path generation algorithm of autonomous robot vehicle through scanning of a sensor platform; Tong-Jin Park et al.; Proceedings 2001 ICRA, IEEE International Conference on Robotics and Automation; 2006-04-18; full text *
Research on Simultaneous Localization and Mapping of Robots Based on Straight-Line Maps; Ji Dingyao; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15; full text *

Also Published As

Publication number Publication date
CN111958597A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN210046133U (en) Welding seam visual tracking system based on laser structured light
CN110253570B (en) Vision-based man-machine safety system of industrial mechanical arm
CA2950791C (en) Binocular visual navigation system and method based on power robot
CN107315410B (en) Automatic obstacle removing method for robot
CN104626206B (en) The posture information measuring method of robot manipulating task under a kind of non-structure environment
CN109872372B (en) Global visual positioning method and system for small quadruped robot
CN113485375B (en) Indoor environment robot exploration method based on heuristic bias sampling
CN111368607A (en) Robot, obstacle detection method and detection device
CN101976079A (en) Intelligent navigation control system and method
CN111521195B (en) Intelligent robot
CN108600620B (en) Target tracking method of mobile robot based on electro-hydraulic adjustable-focus lens
CN110163963B (en) Mapping device and mapping method based on SLAM
CN114407030B (en) Autonomous navigation distribution network live working robot and working method thereof
CN107831760A (en) Robot barrier thing processing system and method
CN110262487B (en) Obstacle detection method, terminal and computer readable storage medium
CN112066994B (en) Local autonomous navigation method and system for fire-fighting robot
CN116879870B (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN110780670B (en) Robot obstacle avoidance control method based on fuzzy control algorithm
CN113822251B (en) Ground reconnaissance robot gesture control system and control method based on binocular vision
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN111958597B (en) Method for controlling autonomous obstacle crossing process of mobile robot
CN104268551A (en) Steering angle control method based on visual feature points
CN117470241A (en) Self-adaptive map-free navigation method and system for non-flat terrain
CN116466724A (en) Mobile positioning method and device of robot and robot
CN116405778A (en) Active visual tracking method and device for moving target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant