CN112674646B - Self-adaptive welting operation method based on multi-algorithm fusion and robot

Info

Publication number
CN112674646B
Authority
CN
China
Prior art keywords
line laser
road edge
robot
welting
point cloud
Prior art date
Legal status
Active
Application number
CN202011479112.1A
Other languages
Chinese (zh)
Other versions
CN112674646A (en)
Inventor
张岁寒
张斌
陈凯
罗新亮
李亮
龚建球
邵将
廖国勇
晋亚超
刘琛
周诚远
李晶
Current Assignee
Shenzhen Yingfeng Environmental Robot Co ltd
Changsha Zoomlion Environmental Industry Co Ltd
Original Assignee
Shenzhen Yingfeng Environmental Robot Co ltd
Guangdong Yingfeng Intelligent Environmental Sanitation Technology Co ltd
Changsha Zoomlion Environmental Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yingfeng Environmental Robot Co ltd, Guangdong Yingfeng Intelligent Environmental Sanitation Technology Co ltd, Changsha Zoomlion Environmental Industry Co Ltd
Priority to CN202011479112.1A
Publication of CN112674646A
Application granted
Publication of CN112674646B

Abstract

The invention discloses a self-adaptive welting (curb-following) operation method based on multi-algorithm fusion, and a robot. The method comprises the steps of: respectively acquiring the point clouds scanned by the front and rear single-line laser radars of the robot; processing the point clouds scanned by the two single-line laser radars based on the geometric characteristics of the road-edge side and on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values; obtaining the final road edge distance detection values of the front and rear single-line laser radars from the averages of their respective two groups of detection values; and sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed. The invention offers high detection precision, wide applicability, strong environmental adaptability and anti-interference capability, safety, reliability and low cost, and greatly reduces the labor intensity of operating personnel.

Description

Self-adaptive welting operation method based on multi-algorithm fusion and robot
Technical Field
The invention relates to the technical field of robot control, in particular to a self-adaptive welting operation method based on multi-algorithm fusion and a robot.
Background
Traditionally, when sanitation equipment sweeps a road surface, the driver keeps the distance between the sweeping disc and the road edge by watching the curb and steering accordingly, which gives poor accuracy and strongly affects the operation result. The driver must simultaneously watch the road edge and the road conditions ahead to keep driving safely, so the labor intensity is high, and handling several tasks at once compromises driving safety. Therefore, to reduce the driver's labor intensity and improve driving safety, some sanitation equipment is now fitted with a driver-assistance system that detects the curb by fusing multiple sensors, so as to reduce the reliance on the driver. Commonly used sensors include GPS positioning, wheel speed meters, IMU sensors, dual 16-line laser radars, cameras and so on. Engineering practice, however, shows that these sensors have the following shortcomings:
1. GPS positioning, wheel speed meters and IMU sensors accumulate their own errors as the robot moves, so the precision of the welting detection information is relatively poor.
2. GPS positioning accuracy is poor, and visual lane-line recognition is easily affected by the surrounding environment and illumination.
3. Dual 16-line laser radars place a high computing load on the main controller, so the hardware requirements and the cost are high.
4. The cost of the existing systems is too high, which is unfavorable for commercial promotion.
Disclosure of Invention
The invention provides a self-adaptive welting operation method based on multi-algorithm fusion, aiming to solve the technical problems of existing driver-assistance systems: poor detection precision, susceptibility to environmental influence, poor recognition stability and excessive cost.
The technical scheme adopted by the invention is as follows:
a self-adaptive welting operation method based on multi-algorithm fusion comprises the following steps:
respectively acquiring the point clouds ξ_i scanned by the front and rear single-line laser radars of the robot;
processing the point clouds ξ_i scanned by the two single-line laser radars based on the geometric characteristics of the road-edge side and on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values;
respectively obtaining the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars from the averages of their respective two groups of road edge distance detection values;
and sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed.
Further, processing the point clouds ξ_i scanned by the two single-line laser radars based on the geometric characteristics of the road-edge side to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values, specifically comprises the steps of:
performing a second-order difference operation on the point clouds ξ to obtain a set A;
in set A, marking the positions of the point cloud data with ξ_i < 0 as 1 and deleting the points with ξ_i > 0, resulting in set A1;
performing DBSCAN clustering on the point clouds in set A1 to obtain a plurality of cluster sets Φ_i;
using the geometric features of the road-edge side point cloud, finding the road-edge side point cloud cluster among the cluster sets {Φ_i}, and obtaining the current road edge detection result according to the first point cloud ξ_1 in the road-edge side point cloud cluster.
Further, using the geometric features of the road-edge side point cloud to find the road-edge side point cloud cluster among the cluster sets {Φ_i}, and obtaining the current road edge detection result according to the first point cloud ξ_1 in that cluster, specifically comprises the steps of:
decomposing the point clouds ξ_i in each cluster set Φ_i onto the corresponding XY plane to obtain the X and Y components ξ_i_X and ξ_i_Y of each point cloud ξ_i, wherein the XY plane is coplanar with the scanning surface of the corresponding single-line laser radar, its origin is the center of the corresponding single-line laser radar, the Y axis is perpendicular to the ground, and the X axis points toward the road edge;
judging, for each cluster set Φ_i, whether the point clouds in it satisfy (ξ_i_Y - ξ_(i-1)_Y < δ) && (ξ_max_Y - ξ_min_Y > σ) && (ξ_max_X - ξ_min_X < λ) && (ξ_i > m), wherein ξ_max_Y is the maximum Y component of the point clouds in the cluster set Φ_i, ξ_min_Y the minimum Y component, ξ_max_X the maximum X component, ξ_min_X the minimum X component, and δ, σ, λ and m are all set thresholds;
if so, taking the X component ξ_1_X of the first point cloud ξ_1 in the road-edge side point cloud cluster as the road edge distance detection value ε_1 of the current frame.
Further, processing the point clouds ξ_i scanned by the two single-line laser radars based on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values, specifically comprises the steps of:
decomposing the point clouds ξ_i scanned by the two single-line laser radars onto the two corresponding XY planes to obtain the X and Y components ξ_i_X and ξ_i_Y of each point cloud ξ_i, wherein the two XY planes are coplanar with the scanning surfaces of the corresponding single-line laser radars, each has the center of the corresponding single-line laser radar as its origin, the Y axis is perpendicular to the ground, and the X axis points toward the road edge;
dividing the point cloud data to be processed into a plurality of equal intervals on the X axis;
calculating the maximum value ξ_n_Ymax and the minimum value ξ_n_Ymin on the Y axis in the nth interval, and the average ξ_n_Ymean of ξ_n_Ymax and ξ_n_Ymin;
traversing each interval; if the average ξ_n_Ymean of the nth interval and the average ξ_(n-1)_Ymean of the (n-1)th interval do not satisfy ξ_n_Ymean - ξ_(n-1)_Ymean > ψ, deleting all the point cloud data of the nth interval, wherein ψ is a set threshold;
after denoising preprocessing of all the point clouds ξ_i in set A, counting the total number t of all the point clouds ξ_i in set A; if |ξ_i_X - Σξ_i_X/t| < Ω, storing ξ_i_X in a queue Q_i and incrementing the point cloud count m = m + 1, wherein Ω is a set threshold; if |ξ_i_X - Σξ_i_X/t| > Ω, deleting ξ_i_X, setting n = 0, and keeping the queue Q_(i-1); this process is repeated until the whole set A has been traversed, finally obtaining a set Q of a plurality of queues Q_i;
traversing the set Q; for each point cloud ξ in the queue Q_i, if (ξ_i_X > θ) && (|ξ_i_Ymax - ξ_i_Ymin| > ω), returning Q_i and calculating the average ξ_i_Xmean in Q_i as the current road edge distance detection value ε_2, wherein θ and ω are set thresholds.
Further, obtaining the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars from the averages of their respective two groups of road edge distance detection values specifically comprises the steps of:
at time i, if (|ε_(1,i-1) - ε_(1,i)| < η_1) && (|ε_(2,i-1) - ε_(2,i)| < η_1) && (|ε_(1,i) - ε_(2,i)| < η_2), storing ε_(1,i) and ε_(2,i) in a queue L, wherein ε_(1,i) and ε_(1,i-1) respectively denote the current and previous road edge distance detection values obtained from the road-edge side geometric features, ε_(2,i) and ε_(2,i-1) respectively denote the current and previous road edge distance detection values obtained from the grid principle, and η_1 and η_2 are set thresholds;
taking the average of the values in the queue L as the final road edge distance detection value d1 or d2 of the corresponding front or rear single-line laser radar.
Further, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed, specifically comprises the step of:
sending a driving instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, and controlling the robot steering system to perform the welting motion so that e < d1 < f, wherein e and f are both set thresholds.
Further, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed, specifically comprises the steps of:
acquiring the robot's initial position coordinate point, and confirming the type of the target intersection ahead of the robot according to the robot's initial position coordinate point (E0, N0) and a route database, wherein the target intersection types include: straight ahead along a road edge, right turn, and long-distance straight ahead without a road edge;
sending an instruction to the vehicle control unit according to the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars and the type of the target intersection ahead of the robot, so that the robot performs the welting motion at the set welting distance and vehicle speed.
Further, acquiring the initial position coordinate point of the robot and confirming the type of the target intersection ahead of the robot according to the robot's initial position coordinate point (E0, N0) and the route database specifically comprises the steps of:
storing the coordinate points (E_i, N_i) of each position on the robot's working path and the corresponding intersection types into a route database, wherein i denotes the serial number of the position coordinate point stored in the route database;
acquiring the robot's initial position coordinate point (E0, N0) through a GNSS module;
searching the route database, according to the initial position coordinate point (E0, N0), for the corresponding position coordinate point (E_i, N_i) and the intersection types corresponding to a plurality of consecutive position coordinate points within a set range ahead;
determining the type of the target intersection ahead of the robot's initial position coordinate point (E0, N0) according to those intersection types.
Further, before searching the route database for the corresponding position coordinate point (E_i, N_i) and the intersection types corresponding to a plurality of consecutive position coordinate points within the set range ahead according to the initial position coordinate point (E0, N0), the method further comprises the steps of:
acquiring the manually selected current operation road section, and finding the matching road section in the route database;
matching, by longitude and latitude, the robot's initial position coordinate point (E0, N0) acquired by the GNSS module against the corresponding position coordinate points (E_i, N_i) in the route database;
if the condition sqrt((E0 - E_i)^2 + (N0 - N_i)^2) < d is satisfied, the matching is successful and i is saved, wherein d is a set threshold.
Further, determining the type of the target intersection ahead of the robot's initial position coordinate point (E0, N0) according to the intersection types specifically comprises the steps of:
searching the route database for the intersection type corresponding to the position coordinate point (E_i, N_i) that corresponds to the initial position coordinate point (E0, N0), and the intersection types corresponding to a plurality of consecutive position coordinate points within a set range ahead of that position coordinate point (E_i, N_i);
if the proportion of one intersection type is larger than a set threshold, taking that intersection type as the type of the target intersection ahead of the robot.
Further, sending an instruction to the vehicle control unit according to the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars and the type of the target intersection ahead of the robot, so that the robot performs the welting motion at the set welting distance and vehicle speed, specifically comprises the steps of:
when the target intersection type is straight ahead along a road edge, sending a driving instruction to the vehicle control unit, controlling the robot steering system to perform the welting motion so that e < d1 < f, and controlling the vehicle speed to be a1, wherein e and f are both set thresholds;
when the target intersection type is right turn, sending a driving instruction to the vehicle control unit, controlling the robot steering system to perform the welting motion so that s < d1 < j, where e < s and f < j, and controlling the vehicle speed to be a2, with a1 > a2;
when the target intersection type is long-distance straight ahead without a road edge, performing a linear fit on the initial position coordinate point (E0, N0) and a plurality of consecutive position coordinate points within the set range ahead to obtain y = kx + b; acquiring the robot's current position coordinate point (E, N) through the GNSS module; if k·E + b > N, sending a driving instruction to the vehicle control unit to control the steering system to turn left by a set angle; if k·E + b < N, sending a driving instruction to the vehicle control unit to control the steering system to turn right by a set angle; and controlling the vehicle speed to be a2, with a1 > a2.
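For illustration only, the following Python sketch condenses the control dispatch above into one function; the steering labels, the command dictionary and the way the thresholds are passed are assumptions rather than the patent's actual interface to the vehicle control unit.

```python
import numpy as np

def welting_command(intersection_type, d1, e, f, s, j, a1, a2, path_pts=None, pose=None):
    """Returns an illustrative command dict for the vehicle control unit."""
    if intersection_type == "straight_with_curb":
        if e < d1 < f:
            steer = "hold"
        elif d1 >= f:
            steer = "toward_curb"                  # too far from the curb
        else:
            steer = "away_from_curb"               # too close to the curb
        return {"steer": steer, "speed": a1}
    if intersection_type == "right_turn":
        steer = "hold" if s < d1 < j else ("toward_curb" if d1 >= j else "away_from_curb")
        return {"steer": steer, "speed": a2}       # a2 < a1: slow down for the turn
    if intersection_type == "straight_no_curb":
        # Fit y = k*x + b through the taught coordinate points ahead of the start point.
        k, b = np.polyfit(path_pts[:, 0], path_pts[:, 1], 1)
        E, N = pose                                # current GNSS position of the robot
        steer = "turn_left_by_set_angle" if k * E + b > N else "turn_right_by_set_angle"
        return {"steer": steer, "speed": a2}
    raise ValueError(intersection_type)
```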
Further, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed, specifically comprises the steps of:
acquiring, in real time, the road surface gradient and the road curve type of the road ahead of the robot;
sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, the road surface gradient and the road curve type, so that the robot performs the welting motion at the set welting distance and vehicle speed.
Further, acquiring the road surface gradient and the road curve type of the road ahead of the robot in real time specifically comprises the steps of:
acquiring, in real time, road images corresponding to a first distance and a second distance ahead of the road section to be identified, and acquiring the pitch angle σ of the road section to be identified in real time through an IMU (inertial measurement unit) to obtain the road surface gradient, wherein the second distance from the robot is greater than the first distance from the robot;
obtaining, from the road images, a first road curvature and a second road curvature corresponding respectively to the first distance and the second distance ahead of the road section to be identified, and determining the road curve type of the road section to be identified according to the second road curvature, wherein the road curve types include straight road, gentle curve and right-angle curve.
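As a hedged illustration of this step, the sketch below derives the road surface gradient from the IMU pitch angle and thresholds the second (farther) road curvature into the three curve types; the threshold values and the curvature units are assumptions, not values from the patent.

```python
import math

def classify_road(pitch_rad, curvature_far, gentle_thresh=0.02, sharp_thresh=0.10):
    """Gradient from the IMU pitch angle; curve type from the farther (second) road curvature."""
    gradient_percent = math.tan(pitch_rad) * 100.0
    kappa = abs(curvature_far)
    if kappa < gentle_thresh:
        curve_type = "straight"
    elif kappa < sharp_thresh:
        curve_type = "gentle_curve"
    else:
        curve_type = "right_angle_curve"
    return gradient_percent, curve_type
```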
Further, after obtaining the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars, and before sending an instruction to the vehicle control unit according to them so that the robot performs the welting motion at the set welting distance and vehicle speed, the method further comprises the step of:
adjusting the vehicle body posture according to the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars, so that the vehicle body direction is consistent with the direction of travel.
Further, adjusting the vehicle body posture according to the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars, so that the vehicle body direction is consistent with the direction of travel, specifically comprises the steps of:
if α < |d1 - d2| < β && (e < d1 < f), the vehicle body direction is consistent with the direction of travel; a driving instruction is sent to the vehicle control unit to return the steering system to center, wherein α, β, e and f are all set thresholds;
if (|d1 - d2| > β) && (d1 > d2), a driving instruction is sent to the vehicle control unit to turn the steering system left by a set angle, so that the vehicle body direction becomes consistent with the direction of travel and the posture adjustment is completed;
if (|d1 - d2| > β) && (d1 < d2), a driving instruction is sent to the vehicle control unit to turn the steering system right by a set angle, so that the vehicle body direction becomes consistent with the direction of travel and the posture adjustment is completed.
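The posture-adjustment logic above can be summarised in a small decision function; the following sketch is illustrative only, and the returned command strings and the default threshold values are assumptions.

```python
def adjust_posture(d1, d2, alpha=0.02, beta=0.05, e=0.25, f=0.45):
    """d1/d2: final curb distances of the front/rear lidar; returns an illustrative command string."""
    diff = abs(d1 - d2)
    if alpha < diff < beta and e < d1 < f:
        return "center_steering"                   # body already aligned with the travel direction
    if diff > beta and d1 > d2:
        return "turn_left_by_set_angle"            # front end farther from the curb than the rear
    if diff > beta and d1 < d2:
        return "turn_right_by_set_angle"           # front end closer to the curb than the rear
    return "no_action"
```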
The invention also provides a self-adaptive welting operation device based on multi-algorithm fusion, comprising:
a point cloud acquisition module, configured to respectively acquire the point clouds ξ_i scanned by the front and rear single-line laser radars of the robot;
a road edge distance detection module, configured to process the point clouds ξ_i scanned by the two single-line laser radars based on the geometric characteristics of the road-edge side and on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values;
a final road edge distance calculation module, configured to respectively obtain the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars from the averages of their respective two groups of road edge distance detection values;
and a welting control module, configured to send an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed.
In another aspect, the present invention provides a robot, including:
the two single-line laser radars, respectively arranged on the front and rear sides of the robot body, with their scanning surfaces perpendicular to the ground and intersecting the road edge at a set included angle;
the vehicle control unit, used for controlling the lateral and longitudinal movement and braking of the robot according to the instructions;
and the industrial personal computer, in signal connection with the two single-line laser radars and the vehicle control unit respectively, used for implementing the above self-adaptive welting operation method based on multi-algorithm fusion.
Further, the robot includes:
the road condition information detection device is used for detecting the road condition information in front of the robot;
and the industrial personal computer is respectively in signal connection with the two single-line laser radars, the vehicle control unit and the road condition information detection device and is used for realizing the self-adaptive welting operation method based on multi-algorithm fusion.
Further, the traffic information detecting device includes:
the GNSS module is arranged on the body of the robot and used for acquiring the current position coordinate of the robot;
the industrial personal computer is respectively in signal connection with the two single-line laser radars, the GNSS module and the vehicle control unit and is used for realizing the self-adaptive welting operation method based on multi-algorithm fusion.
Further, the traffic information detecting device includes:
the camera is arranged on the front side of the robot body and used for acquiring road image information of the robot at a certain distance in front of a road section to be identified;
the depth camera is arranged on the front side of the robot body and used for acquiring original depth frame data which are collected at a certain distance in front of the road section to be identified;
the IMU is arranged on the robot body and used for acquiring the pitch angle of the road section to be identified in real time to obtain the road gradient;
the industrial personal computer is in signal connection with the two single-line laser radars, the camera, the depth camera, the IMU and the vehicle control unit respectively, and is used for implementing the above self-adaptive welting operation method based on multi-algorithm fusion.
The invention has the following beneficial effects:
the invention discloses a multi-algorithm fusion-based self-adaptive welting operation method, which comprises the steps of firstly respectively obtaining point clouds scanned by a front single-line laser radar and a rear single-line laser radar of a robotξ i (ii) a The receiver respectively scans point cloud xi obtained by scanning two single line laser radars based on the geometric characteristics of road edges and the grid principle i Processing and extracting road edge information to enable each single line laser radar to return two groups of road edge distance detection values; then respectively obtaining final road edge distance detection values d1 and d2 of the front single line laser radar and the rear single line laser radar according to the average value of the two groups of road edge distance detection values of the front single line laser radar and the rear single line laser radar; and finally, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front single-line laser radar and the rear single-line laser radar, so that the robot carries out welting movement according to the set welting distance and the vehicle speed. On one hand, the single-line laser radar is used for accurately identifying the road edge, and the single-line laser radar is high in angular resolution and is not influenced by ambient light, so that the single-line laser radar has the advantages of wide detection distance, high detection precision, low requirement on ambient light, applicability in both day and night, capability of effectively filtering a well cover, applicability to uneven ground, plant isolation belts and partially shielded road edge scenes, and strong adaptability and anti-interference capability to the environment; on the other hand, the front and the rear single-line laser radars of the invention both use a dual-algorithm fusion detection mode, such as a detection mode based on the geometrical characteristics of the road edge, fully excavate the geometrical shape of the road edge for constraint, have high identification accuracy, small calculation force, are sensitive to the identification of the road edge, and are beneficial to preventing the environmental sanitation robot from colliding with the road edge; when point clouds are segmented by a detection mode based on a grid principle, geometric feature constraints of road edges are added, interferences of ground grid well covers, plant belts and the like can be effectively filtered, an aggregate average value is selected after clustering, and the identification stability is high; finally, the road edge distance detection value is obtained according to the two algorithms, the accuracy and stability of road edge detection in partial scenes are improved, and the road edge detection accuracy is improved from 92.8% to 99.2%. Because the single-line laser radar is adopted, the required computing power is small, the hardware cost is low, the road edge information can be quickly detected, the intelligent self-adaptive road edge detection system can be used as a robot auxiliary driving system to detect the road edge, the robot edge attaching operation is controlled, the safety and the reliability are realized, and the labor intensity of sanitation workers can be greatly reduced.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages. The present invention is described in further detail below with reference to the accompanying drawings.
Drawings
The exemplary embodiments and descriptions of the present invention are provided to explain the present invention and not to limit the present invention. In the drawings:
fig. 1 is a schematic flow chart of an adaptive welting operation method based on multi-algorithm fusion according to a preferred embodiment of the present invention.
FIG. 2 is a schematic diagram illustrating the effect of the point cloud after being preprocessed.
Fig. 3 is a schematic diagram of distribution of the road edge point cloud after clustering on the XY plane.
Fig. 4 is a schematic diagram of an adaptive welting device module based on multi-algorithm fusion according to a preferred embodiment of the present invention.
Fig. 5 is a schematic block diagram of an electronic device entity according to a preferred embodiment of the present invention.
Fig. 6 is a schematic diagram of the robot composition according to the preferred embodiment of the present invention.
Fig. 7 is a schematic top view of a robot scanning system according to a preferred embodiment of the present invention.
Fig. 8 is a schematic diagram of the robot according to another preferred embodiment of the present invention.
Fig. 9 is a schematic diagram of the robot according to another preferred embodiment of the present invention.
Fig. 10 is a schematic top view of a robot according to another preferred embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the embodiments and the accompanying drawings.
As shown in fig. 1, a preferred embodiment of the present invention provides a method for adaptive welting based on multi-algorithm fusion, which includes the steps of:
S1, respectively acquiring the point clouds ξ_i scanned by the front and rear single-line laser radars of the robot;
S2, processing the point clouds ξ_i scanned by the two single-line laser radars based on the geometric characteristics of the road-edge side and on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values;
S3, respectively obtaining the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars from the averages of their respective two groups of road edge distance detection values;
S4, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed.
In this self-adaptive welting operation method based on multi-algorithm fusion, the point clouds ξ_i scanned by the front and rear single-line laser radars of the robot are first respectively acquired; the point clouds ξ_i scanned by the two single-line laser radars are then processed based on the geometric characteristics of the road-edge side and on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values; the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars are then obtained from the averages of their respective two groups of detection values; finally, an instruction is sent to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed. On the one hand, the single-line laser radar accurately identifies the road edge: its angular resolution is high and it is unaffected by ambient light, so it offers a wide detection range, high detection precision, a low requirement on ambient illumination, works both day and night, effectively filters out manhole covers, and is applicable to uneven ground, plant isolation belts and partially occluded road edge scenes, giving strong environmental adaptability and anti-interference capability. On the other hand, both the front and rear single-line laser radars use a dual-algorithm fused detection mode: the detection mode based on the geometric characteristics of the road-edge side fully exploits the geometric shape of the road edge as a constraint, with high recognition accuracy, low computing load and sensitive curb recognition, helping to prevent the sanitation robot from colliding with the road edge; the detection mode based on the grid principle adds the geometric feature constraints of the road edge when segmenting the point cloud, effectively filtering interference such as manhole covers and plant belts, and the set average taken after clustering gives high recognition stability. Finally, the road edge distance detection value is obtained from both algorithms, which reduces the false detection rate and improves the detection stability of road edge detection in some scenes, raising the road edge detection accuracy from 92.8% to 99.2%.
This embodiment detects the curbstone with single-line laser radars. The single-line laser radar has a high angular resolution of 0.33°, is unaffected by ambient light, and one full scan yields 811 point clouds, whose serial numbers ID are set to 0-810. It is applicable to curbstones higher than 5 cm and to low cement walls from 20 cm to 2 m high; within a 3 m range the detection precision is better than 2 cm. Manhole covers can be effectively filtered out, partially occluded road edges can be recognized, the requirement on ambient illumination is low, the method is usable both day and night, and the environmental adaptability is good. The detection precision is high, with a static detection error of less than 1 cm and a dynamic detection error of less than 2 cm. Because single-line laser radars are used, the computing load is small, the hardware cost is low, the road edge information can be detected quickly, and the solution is well suited to commercial promotion. It can serve as the curb-detection component of a robot driver-assistance system to control the robot's welting operation safely and reliably, greatly reducing the labor intensity of sanitation workers and improving the safety and reliability of the welting operation.
In another embodiment of the invention, processing the point clouds ξ_i scanned by the two single-line laser radars based on the geometric characteristics of the road-edge side to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values, specifically comprises the steps of:
S204, performing a second-order difference operation on the point clouds ξ to obtain a set A;
S205, in set A, marking the positions of the point cloud data with ξ_i < 0 as 1 and deleting the points with ξ_i > 0, resulting in set A1;
S206, performing DBSCAN clustering on the point clouds in set A1 to obtain a plurality of cluster sets Φ_i;
S207, using the geometric features of the road-edge side point cloud, finding the road-edge side point cloud cluster among the cluster sets {Φ_i}, and obtaining the current road edge detection result according to the first point cloud ξ_1 in the road-edge side point cloud cluster.
This embodiment performs a second-order difference operation on the point clouds ξ and, in the resulting set, filters out and deletes the point cloud data with ξ_i > 0, keeping only the point cloud data with ξ_i < 0 and marking their positions as 1; the point clouds with ξ_i < 0 correspond to the side of the curbstone, so other irrelevant point clouds are excluded during the welting detection, which ensures the accuracy and reliability of the subsequent road edge detection.
In another embodiment of the present invention, before performing the second-order difference operation on the point clouds ξ to obtain the set A, the method further comprises the steps of:
S201, preprocessing the point cloud data scanned by the single-line laser radar to remove noise points and abnormal points, ensuring the efficiency and precision of the subsequent point cloud processing;
S202, performing pass-through filtering on the preprocessed point clouds and keeping the point clouds whose serial numbers ID lie in a set range, for example ID 90-540; selecting the point clouds in this range helps retain the key road edge information while filtering out some invalid features, saving the computing resources used by the subsequent algorithms;
S203, performing curve smoothing and down-sampling on the pass-through-filtered point clouds while keeping the trend characteristics of the curve, so that upward or downward spikes formed on the curve by a few high or low points are eliminated; the smoothing makes the new curve relatively stable and reflects the overall trend of the curve more accurately, as shown in FIG. 2.
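As an illustration of steps S201 to S203, the following Python sketch applies outlier removal, an ID-range pass-through filter, moving-average smoothing and down-sampling to one scan; the array layout, the window size and the down-sampling factor are assumptions, not values from the patent.

```python
import numpy as np

def preprocess_scan(points, id_min=90, id_max=540, smooth_win=5, downsample=2):
    """points: (811, 2) array of (x, y) values indexed by beam ID 0-810 (assumed layout)."""
    # Remove NaN / non-finite returns (noise and outlier rejection, S201).
    valid = np.isfinite(points).all(axis=1)
    ids = np.arange(points.shape[0])
    # Keep only the beam IDs that can contain curb information (pass-through filter, S202).
    keep = valid & (ids >= id_min) & (ids <= id_max)
    pts = points[keep]
    # Moving-average smoothing of the curve to suppress isolated spikes (S203).
    kernel = np.ones(smooth_win) / smooth_win
    pts_s = np.column_stack([np.convolve(pts[:, k], kernel, mode="same") for k in (0, 1)])
    # Down-sample to reduce the load of the later stages.
    return pts_s[::downsample]
```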
In another embodiment of the invention, using the geometric features of the road-edge side point cloud to find the road-edge side point cloud cluster among the cluster sets {Φ_i}, and obtaining the current road edge detection result according to the first point cloud ξ_1 in that cluster, specifically comprises the steps of:
S2071, decomposing the point clouds ξ_i in each cluster set Φ_i onto the corresponding XY plane to obtain the X and Y components ξ_i_X and ξ_i_Y of each point cloud ξ_i, wherein the XY plane is coplanar with the scanning surface of the corresponding single-line laser radar, its origin is the center of the corresponding single-line laser radar, the Y axis is perpendicular to the ground, and the X axis points toward the road edge;
S2072, judging, for each cluster set Φ_i, whether the point clouds in it satisfy (ξ_i_Y - ξ_(i-1)_Y < δ) && (ξ_max_Y - ξ_min_Y > σ) && (ξ_max_X - ξ_min_X < λ) && (ξ_i > m), wherein ξ_max_Y is the maximum Y component of the point clouds in the cluster set Φ_i, ξ_min_Y the minimum Y component, ξ_max_X the maximum X component, ξ_min_X the minimum X component, and δ, σ, λ and m are all set thresholds; in this embodiment δ = 1.5 cm, σ = 8 cm, λ = 5 cm and m = 10, where m represents the number of point clouds; requiring a sufficient number of point clouds avoids false detections and improves the detection accuracy;
S2073, if so, taking the X component ξ_1_X of the first point cloud ξ_1 in the road-edge side point cloud cluster as the road edge distance detection value ε_1 of the current frame.
As shown in fig. 3, the vertical dashed line in the figure is the road-edge side, which is perpendicular to the ground, so the above judgment formula of this embodiment expresses the geometric features of the point cloud obtained when the laser radar scans the road-edge side. The projection of the road-edge side point cloud on the Y axis of the XY plane, i.e. the Y components, is distributed vertically along the Y axis. On the one hand, the height difference between the maximum Y component ξ_max_Y and the minimum Y component ξ_min_Y of the point cloud is large and satisfies ξ_max_Y - ξ_min_Y > σ. On the other hand, the projection of the road-edge side point cloud on the X axis of the XY plane is concentrated, so the distance difference between the maximum X component ξ_max_X and the minimum X component ξ_min_X is small and satisfies ξ_max_X - ξ_min_X < λ. Because the road-edge side is a continuous surface, the difference in the Y components of adjacent point clouds in the road-edge side cluster is small and satisfies ξ_i_Y - ξ_(i-1)_Y < δ. To further judge accurately whether the point cloud belongs to the road-edge side, the constraint ξ_i > m is added, which reduces the algorithm's misrecognition of roadside obstacles (such as leaves and small stones on the ground).
In this embodiment, the geometric features of the road-edge side point cloud are used to accurately find the road-edge side point cloud cluster among the cluster sets Φ_i, and the X component ξ_1_X of the first point cloud ξ_1 in that cluster, i.e. the X component of the point cloud closest to the ground in the road-edge side cluster, is taken as the current road edge distance detection value, which ensures that the obtained value reflects the real distance between the robot and the road edge.
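One possible reading of steps S204 to S207 and S2071 to S2073 is sketched below in Python, using scikit-learn's DBSCAN for the clustering step. The second-order difference is taken here on the height (Y) profile, the eps and min_samples values are placeholders, and the thresholds mirror the example values given above (δ = 1.5 cm, σ = 8 cm, λ = 5 cm, m = 10); this is a sketch under these assumptions, not a definitive implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_curb_geometric(xy, delta=0.015, sigma=0.08, lam=0.05, m=10):
    """xy: (n, 2) points in the lidar XY plane (X toward the curb, Y up), ordered by beam ID."""
    y = xy[:, 1]
    d2 = np.diff(y, n=2)                      # second-order difference of the height profile (set A)
    cand = xy[2:][d2 < 0]                     # keep only points with a negative second difference (set A1)
    if len(cand) <= m:
        return None
    labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(cand)
    for lab in set(labels) - {-1}:            # each label is one cluster set Phi_i
        c = cand[labels == lab]
        dy = np.abs(np.diff(c[:, 1]))
        if (len(c) > m                                         # enough points (the xi_i > m constraint)
                and np.all(dy < delta)                         # adjacent heights change slowly (continuous face)
                and (c[:, 1].max() - c[:, 1].min()) > sigma    # tall enough along Y
                and (c[:, 0].max() - c[:, 0].min()) < lam):    # concentrated along X
            return float(c[0, 0])             # X of the first (lowest) point -> epsilon_1
    return None
```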
In another embodiment of the invention, processing the point clouds ξ_i scanned by the two single-line laser radars based on the grid principle to extract road edge information, so that each single-line laser radar returns two groups of road edge distance detection values, specifically comprises the steps of:
S211, decomposing the point clouds ξ_i scanned by the two single-line laser radars onto the two corresponding XY planes to obtain the X and Y components ξ_i_X and ξ_i_Y of each point cloud ξ_i, wherein the two XY planes are coplanar with the scanning surfaces of the corresponding single-line laser radars, each has the center of the corresponding single-line laser radar as its origin, the Y axis is perpendicular to the ground, and the X axis points toward the road edge;
S212, dividing the point cloud data to be processed into a plurality of equal intervals on the X axis;
S213, calculating the maximum value ξ_n_Ymax and the minimum value ξ_n_Ymin on the Y axis in the nth interval, and the average ξ_n_Ymean of ξ_n_Ymax and ξ_n_Ymin;
S214, traversing each interval; if the average ξ_n_Ymean of the nth interval and the average ξ_(n-1)_Ymean of the (n-1)th interval do not satisfy ξ_n_Ymean - ξ_(n-1)_Ymean > ψ, deleting all the point cloud data of the nth interval, wherein ψ is a set threshold;
S215, after denoising preprocessing of all the point clouds ξ_i in set B, counting the total number t of all the point clouds ξ_i in set B; if |ξ_i_X - Σξ_i_X/t| < Ω, storing ξ_i_X in a queue Q_i and incrementing the point cloud count m = m + 1, wherein Ω is a set threshold; if |ξ_i_X - Σξ_i_X/t| > Ω, deleting ξ_i_X, setting n = 0, and keeping the queue Q_(i-1); this process is repeated until the whole set has been traversed, finally obtaining a set Q of a plurality of queues Q_i;
S216, traversing the set Q; for each point cloud ξ in the queue Q_i, if (ξ_i_X > θ) && (|ξ_i_Ymax - ξ_i_Ymin| > ω), returning Q_i and calculating the average ξ_i_Xmean in Q_i as the current road edge distance detection value ε_2, wherein θ and ω are set thresholds.
This embodiment processes the point cloud based on the grid principle while adding the geometric feature constraints of the road edge, which effectively filters interference such as manhole covers and plant belts; the set average is taken after clustering, so the recognition stability is high.
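A simplified sketch of the grid-principle detector of steps S211 to S216 follows; the queue construction around the running mean is condensed here into a single filter around the mean X value, and every threshold and bin count is a placeholder assumption rather than a value from the patent.

```python
import numpy as np

def detect_curb_grid(xy, n_bins=40, psi=0.05, omega=0.10, theta=0.3, w=0.05):
    """xy: (n, 2) points in the lidar XY plane; returns epsilon_2 or None."""
    x, y = xy[:, 0], xy[:, 1]
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    kept, prev_mean = [], None
    for n in range(n_bins):                                   # equal intervals along X
        sel = (x >= edges[n]) & (x < edges[n + 1])
        if not sel.any():
            continue
        y_mean = 0.5 * (y[sel].max() + y[sel].min())          # xi_n_Ymean of this interval
        if prev_mean is not None and (y_mean - prev_mean) > psi:
            kept.append(xy[sel])                              # height jump -> candidate curb interval
        prev_mean = y_mean
    if not kept:
        return None
    pts = np.vstack(kept)
    q = pts[np.abs(pts[:, 0] - pts[:, 0].mean()) < omega]     # the queue Q_i, condensed to one filter
    if len(q) and q[:, 0].mean() > theta and (q[:, 1].max() - q[:, 1].min()) > w:
        return float(q[:, 0].mean())                          # epsilon_2
    return None
```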
In another embodiment of the present invention, obtaining the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars from the averages of their respective two groups of road edge distance detection values specifically comprises the steps of:
S31, at time i, if (|ε_(1,i-1) - ε_(1,i)| < η_1) && (|ε_(2,i-1) - ε_(2,i)| < η_1) && (|ε_(1,i) - ε_(2,i)| < η_2), storing ε_(1,i) and ε_(2,i) in a queue L, wherein ε_(1,i) and ε_(1,i-1) respectively denote the current and previous road edge distance detection values obtained from the road-edge side geometric features, ε_(2,i) and ε_(2,i-1) respectively denote the current and previous road edge distance detection values obtained from the grid principle, and η_1 and η_2 are set thresholds;
S32, taking the average of the values in the queue L as the final road edge distance detection value d1 or d2 of the corresponding front or rear single-line laser radar.
In this embodiment, (|ε_(1,i-1) - ε_(1,i)| < η_1) compares the value computed for the current frame by the road-edge side geometric feature detection with the value of the previous frame, and (|ε_(2,i-1) - ε_(2,i)| < η_1) compares the current and previous values computed under the grid principle. When the road edge is normal the detection value does not jump abruptly, so the detection values of either method at adjacent times should not change greatly. (|ε_(1,i) - ε_(2,i)| < η_2) expresses that, for the same road edge at the same time, the distance values calculated by the two detection methods should under normal conditions be close; if the fluctuation stays within the operating range the data are valid and meet the precision requirement, otherwise the data are certainly abnormal. Therefore, when selecting the final result, this embodiment compares, for each method, the current detection value with its own history, and also compares the values obtained by the two methods at the same time; the two methods are thus organically fused, combining longitudinal (temporal) comparison with transverse (cross-method) comparison to jointly guarantee the reliability and accuracy of the computed values. After these comparisons, the corresponding averages are taken as the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars, which improves the accuracy and stability of road edge detection in some scenes, raises the road edge detection accuracy from 92.8% to 99.2%, and reduces errors.
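The gating-and-averaging fusion of steps S31 and S32 can be written compactly as a small stateful class, one instance per lidar; the sketch below is illustrative, with assumed values for η_1, η_2 and the queue length.

```python
from collections import deque

class CurbFusion:
    """Fuses the geometric-feature value eps1 and the grid value eps2 for one lidar."""
    def __init__(self, eta1=0.05, eta2=0.08, maxlen=20):
        self.eta1, self.eta2 = eta1, eta2
        self.prev1 = self.prev2 = None
        self.queue = deque(maxlen=maxlen)          # the queue L

    def update(self, eps1, eps2):
        stable1 = self.prev1 is None or abs(self.prev1 - eps1) < self.eta1
        stable2 = self.prev2 is None or abs(self.prev2 - eps2) < self.eta1
        agree = abs(eps1 - eps2) < self.eta2
        if stable1 and stable2 and agree:          # longitudinal and transverse checks
            self.queue.extend((eps1, eps2))
        self.prev1, self.prev2 = eps1, eps2
        # Final distance d1 (or d2) for this lidar: the average of the accepted samples.
        return sum(self.queue) / len(self.queue) if self.queue else None
```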
In another embodiment of the invention, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed, specifically comprises the step of:
S401, sending a driving instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, and controlling the robot steering system to perform the welting motion so that e < d1 < f, wherein e and f are both set thresholds.
In another embodiment of the invention, sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed, specifically comprises the steps of:
S411, acquiring the robot's initial position coordinate point, and confirming the type of the target intersection ahead of the robot according to the robot's initial position coordinate point (E0, N0) and a route database, wherein the target intersection types include: straight ahead along a road edge, right turn, and long-distance straight ahead without a road edge;
S412, sending an instruction to the vehicle control unit according to the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars and the type of the target intersection ahead of the robot, so that the robot performs the welting motion at the set welting distance and vehicle speed.
In this embodiment, for the sweeping operation of the sanitation robot on service roads, along isolation belts, on long stretches without a road edge and in similar environments, the road sections the robot passes through contain a complex mix of straight segments, curves and sections partly without a road edge. For different road conditions the robot must adjust accordingly to perform the welting motion safely and reliably; in particular, on a long stretch without a road edge the front and rear single-line laser radars cannot obtain the final road edge distance detection values d1 and d2 by the method of the foregoing embodiments. Therefore, unlike the foregoing embodiments, this embodiment considers not only the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars but also the type of the target intersection ahead of the robot when sending instructions to the vehicle control unit; the two sources of information compensate for each other's shortcomings and jointly control the robot to perform the welting motion at the set welting distance and vehicle speed, so as to satisfy the welting sweeping operation of the sanitation robot on service roads, along isolation belts, on long stretches without a road edge and so on, and to improve the robot's range of application, robustness and reliability.
In another embodiment of the present invention, acquiring the initial position coordinate point of the robot and confirming the type of the target intersection ahead of the robot according to the robot's initial position coordinate point (E0, N0) and the route database specifically comprises the steps of:
S4111, storing the coordinate points (E_i, N_i) of each position on the robot's working path and the corresponding intersection types into a route database, wherein i denotes the serial number of the position coordinate point stored in the route database;
S4112, acquiring the robot's initial position coordinate point (E0, N0) through the GNSS module;
S4113, searching the route database, according to the initial position coordinate point (E0, N0), for the corresponding position coordinate point (E_i, N_i) and the intersection types corresponding to a plurality of consecutive position coordinate points within the set range ahead;
S4114, determining the type of the target intersection ahead of the robot's initial position coordinate point (E0, N0) according to those intersection types.
In this embodiment, in order to confirm the type of the target intersection ahead of the robot, the GNSS module acquires the robot's initial position coordinate point (E0, N0), and the corresponding intersection type is looked up in the route database using the initial position coordinate point (E0, N0) to determine the type of the target intersection ahead of it. The route database is established as follows:
Before the robot performs the welting sweeping operation, a path must be recorded by manual teaching, and the longitude and latitude information of the sweeping path is stored in the GNSSDATA field of the database. At the same time, when a different road condition is encountered, the current intersection is marked by manually pressing a physical button and stored in the UTMFLAG field of the database; the UTMFLAG field is stored in the GNSS table and corresponds one-to-one with the GNSSDATA field. UTMFLAG is the intersection type mark: UTMFLAG = 0 indicates straight ahead along a road edge, UTMFLAG = 1 indicates a right turn, and UTMFLAG = 2 indicates straight ahead without a road edge. In the UTM coordinate system, which is an international standard, the horizontal axis is denoted by E and the vertical axis by N, analogous to the x and y axes of a Cartesian coordinate system.
In this embodiment, the coordinates of the road section and the corresponding intersection types are stored in the route database in advance by manual teaching; after the robot's initial position coordinate point (E0, N0) is acquired through the GNSS module, the corresponding position coordinate point (E_i, N_i) and the intersection types corresponding to a plurality of consecutive position coordinate points within the set range ahead can be found by searching the route database, and the type of the target intersection ahead of the robot's initial position coordinate point (E0, N0) is finally determined from the obtained intersection types.
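For illustration, the route database described above might be held in a small SQLite table; the schema, the E/N columns standing in for the GNSSDATA record, and the use of SQLite itself are assumptions, since the patent only names the GNSSDATA and UTMFLAG fields.

```python
import sqlite3

def create_route_db(path="route.db"):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS GNSS (
                       id INTEGER PRIMARY KEY,     -- serial number i along the taught path
                       E  REAL NOT NULL,           -- UTM easting  (stands in for GNSSDATA)
                       N  REAL NOT NULL,           -- UTM northing (stands in for GNSSDATA)
                       UTMFLAG INTEGER NOT NULL)   -- 0 straight with curb, 1 right turn, 2 no curb
                """)
    return con

def record_waypoint(con, i, easting, northing, utmflag):
    """Called during manual teaching when the operator presses the marking button."""
    con.execute("INSERT OR REPLACE INTO GNSS VALUES (?, ?, ?, ?)",
                (i, easting, northing, utmflag))
    con.commit()
```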
In another embodiment of the present invention, before the route database is searched according to the initial position coordinate point (E0, N0) for the intersection type corresponding to the matched position coordinate point (Ei, Ni) and the intersection types corresponding to a plurality of continuous position coordinate points within the set range ahead of it, the method further includes the following steps:
S4112a, acquiring the manually selected current operation road section, and finding the road section matched with it in the route database;
S4112b, performing longitude and latitude matching between the initial position coordinate point (E0, N0) of the robot obtained by the GNSS module and the position coordinate points (Ei, Ni) in the route database;
S4112c, if the following condition is satisfied:
√((E0 − Ei)² + (N0 − Ni)²) < d
the matching is successful, and the serial number i is saved, d being a set threshold.
In this embodiment, before the route database is searched according to the initial position coordinate point (E0, N0) for the intersection types corresponding to the matched position coordinate point (Ei, Ni) and to the plurality of continuous position coordinate points within the set range ahead of it, longitude and latitude matching is first performed between the robot's initial position coordinate point (E0, N0) obtained by the GNSS module and the position coordinate points (Ei, Ni) in the route database. Because the coordinate points and the intersection types are stored in correspondence, when the robot's current coordinate point is used as the key to look up the corresponding intersection type in the database, the position error between (E0, N0) and the matched (Ei, Ni) must stay within a certain bound to guarantee the accuracy of the retrieved intersection type; otherwise the actual intersection type may not agree with the pre-stored one.
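A minimal sketch of this matching step follows, assuming the matching condition is the planar Euclidean distance between (E0, N0) and (Ei, Ni) being smaller than the set threshold d; the function name and data layout are illustrative.

```python
import math

def match_start_index(e0, n0, route_points, d):
    """Return the index i of the stored point (Ei, Ni) lying within the
    set threshold d of the robot's initial position (E0, N0), or None."""
    for i, (ei, ni) in enumerate(route_points):
        if math.hypot(e0 - ei, n0 - ni) < d:
            return i
    return None
```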
In another embodiment of the present invention, determining the target intersection type ahead of the robot's initial position coordinate point (E0, N0) according to the intersection types specifically includes the following steps:
S41141, searching the route database for the intersection type corresponding to the position coordinate point (Ei, Ni) matched with the initial position coordinate point (E0, N0), and the intersection types corresponding to a plurality of continuous position coordinate points within the set range ahead of (Ei, Ni);
S41142, if the proportion of one intersection type exceeds a set threshold, taking that intersection type as the target intersection type ahead of the robot.
In this embodiment, when the target intersection type ahead of the robot's initial position coordinate point (E0, N0) is determined from the intersection types, a statistical method is adopted: the intersection type corresponding to the matched position coordinate point (Ei, Ni) in the database and the intersection types corresponding to the plurality of continuous position coordinate points within the set range ahead of it are obtained, and the intersection type whose proportion exceeds the set threshold is taken as the target intersection type ahead of the robot. For example, the intersection types corresponding to (Ei, Ni) and to the 60 position coordinate points ahead of it are obtained, i.e. the intersection marks for serial numbers i to i + 60 in the database.
If the proportion of points with UTMFLAG = 0 among serial numbers i to i + 60 exceeds the set threshold, the corresponding target intersection type is straight-ahead under the road-edge working condition;
if the proportion of points with UTMFLAG = 1 exceeds the set threshold, the corresponding target intersection type is a right turn;
if the proportion of points with UTMFLAG = 2 exceeds the set threshold, the corresponding target intersection type is long-distance straight-ahead under the no-road-edge working condition.
The GNSS module of this embodiment runs at 10 Hz, i.e. 10 data points per second; at the given vehicle speed of 5 km/h the robot covers roughly 0.8 m per second, so 60 data points correspond to roughly 5 m. The trend of the road section about 5 m ahead can therefore be judged in advance, which balances operation efficiency against computational demand: judging the road trend too far ahead would force the system to slow down, and the operation efficiency could not be guaranteed.
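The statistical decision described above might look like the sketch below, where the 60-point window follows the text and the 0.8 proportion is only a placeholder for the unspecified set threshold.

```python
from collections import Counter

def target_intersection_type(utm_flags, i, window=60, ratio=0.8):
    """Majority vote over the UTMFLAG marks of points i..i+window; returns
    the flag whose proportion exceeds the set threshold, else None."""
    segment = utm_flags[i:i + window + 1]
    flag, count = Counter(segment).most_common(1)[0]
    if count / len(segment) > ratio:
        return flag
    return None
```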
In another embodiment of the invention, an instruction is sent to a vehicle control unit according to final road edge distance detection values d1 and d2 corresponding to front and rear single line laser radars and a target intersection type in front of the robot, so that the robot carries out welting motion according to a set welting distance and vehicle speed, and the method specifically comprises the following steps:
S4121, when the target intersection type is straight-ahead under the road-edge working condition, sending a driving instruction to the vehicle control unit, controlling the robot steering system to perform the welting motion so that e < d1 < f, and controlling the vehicle speed to be a1, where e and f are set thresholds;
S4122, when the target intersection type is a right turn, sending a driving instruction to the vehicle control unit, controlling the robot steering system to perform the welting motion so that s < d1 < j, and controlling the vehicle speed to be a2, where e < s, f < j and a1 > a2;
S4123, when the target intersection type is long-distance straight-ahead under the no-road-edge working condition, performing linear fitting on the initial position coordinate point (E0, N0) and the plurality of continuous position coordinate points within the set range ahead of it to obtain y = kx + b; acquiring the current position coordinate point (E, N) of the robot through the GNSS module; if k·E + b > N, sending a driving instruction to the vehicle control unit to control the steering system to turn left by a set angle; if k·E + b < N, sending a driving instruction to the vehicle control unit to control the steering system to turn right by a set angle; and controlling the vehicle speed to be a2, where a1 > a2.
In this embodiment, the instruction sent to the vehicle control unit is based jointly on the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars and the target intersection type ahead of the robot, so that the robot performs the welting motion at the set welting distance and vehicle speed. Appropriate driving instructions can be issued for straight-ahead travel along a road edge, for right turns and for long-distance straight travel without a road edge, allowing the robot to combine d1, d2 and the target intersection type into the best possible operation. This satisfies the welting operation requirements under different working conditions, greatly improves the adaptability, safety and reliability of the robot's welting operation, and greatly reduces the labor intensity of sanitation workers.
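A compact sketch of S4121–S4123 is given below; the threshold names e, f, s, j and the speeds a1, a2 follow the text, while the dictionary-style output and the use of numpy.polyfit for the unspecified line-fitting routine are assumptions for illustration.

```python
import numpy as np

def welting_command(d1, d2, intersection_type, pose_en, fit_points, params):
    """Choose a target corridor for d1 and a speed, or steer toward a fitted
    reference line when there is no road edge (sketch of S4121-S4123)."""
    e, f, s, j, a1, a2 = (params[k] for k in ("e", "f", "s", "j", "a1", "a2"))
    if intersection_type == 0:          # straight-ahead, road edge present
        return {"corridor": (e, f), "speed": a1}
    if intersection_type == 1:          # right turn
        return {"corridor": (s, j), "speed": a2}
    # long straight section with no road edge: follow the taught line y = k*x + b
    xs, ys = zip(*fit_points)           # (E, N) points from the route database
    k, b = np.polyfit(xs, ys, 1)
    E, N = pose_en
    steer = "left" if k * E + b > N else "right"
    return {"steer": steer, "speed": a2}
```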
In another embodiment of the invention, an instruction is sent to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot carries out welting movement according to the set welting distance and the vehicle speed, and the method specifically comprises the following steps:
s421, acquiring the road surface gradient and the road curve type of the road in front of the robot in real time;
and S422, sending an instruction to the vehicle control unit according to the final road edge distance detection value, the road slope and the road curve and straight type of the front and rear single-line laser radars, so that the robot carries out welting motion according to the set welting distance and the vehicle speed.
When the road section the robot travels contains a mixture of curve types and gradients, the robot must adjust accordingly to keep the welting motion safe and reliable: road surfaces of different curvature require different driving speeds and safe welting distances, and different gradients likewise require corresponding driving speeds. Therefore, in this embodiment, when the instruction is sent to the vehicle control unit so that the robot performs the welting motion at the set welting distance and vehicle speed, the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars are used to control the safe distance to the curb, while the road surface gradient and the road curve type ahead of the robot are also taken into account. Together these control the robot's welting motion at the set welting distance and vehicle speed, so that the welting cleaning operation of the sanitation robot can be carried out in complex environments with different curve types, different gradients and so on.
In another embodiment of the present invention, the acquiring the road surface gradient and the road curvature type of the road in front of the robot in real time specifically includes the steps of:
s4211, acquiring road images corresponding to a first distance position and a second distance position in front of a road section to be identified of the robot in real time, and acquiring a pitch angle sigma of the road section to be identified in real time through an IMU to obtain a road surface gradient, wherein the distance between the second distance position and the robot is larger than the distance between the first distance position and the robot;
s4212, respectively obtaining a first road curvature and a second road curvature corresponding to a first distance and a second distance in front of a road section to be identified of the robot according to the road image, and determining a road curve type of the road section to be identified according to the second road curvature, wherein the road curve type comprises a straight road, a gentle curve and a right-angle curve.
In this embodiment, the first road curvature corresponding to the first distance is identified by the depth camera, so as to obtain the road curvature τ 2m ahead of the robot, which includes the following specific processes:
1. acquiring original depth frame data with the depth camera, converting the depth map into a point cloud with a depth conversion function, performing pass-through filtering on the point cloud data, selecting a plurality of sheet-shaped point clouds at the specified Y-axis sections, converting the point clouds from the XYZ format into the XYZRGB format, removing empty points in the sheet-shaped point clouds, and calibrating the parameters of the depth camera;
2. sequencing the sheet-shaped point clouds in each frame of point cloud, adding a neighborhood threshold condition to the sheet-shaped point clouds according to the boundary characteristics of the road edge, adding a height difference filtering condition, and detecting the boundary points of the road edge;
3. after the boundary points of the road edge are extracted, they need to be converted from the local coordinate system of the depth camera to the world coordinate system, so the rotation and translation matrices between the two coordinate systems must be calculated; the optimal rotation matrix R and translation matrix T are obtained from the coordinates of three points expressed in both coordinate systems;
4. assuming the local coordinate system is A and the world coordinate system is B, the transformation between A and B can be expressed as B = R × A + T, where R and T are the matrices transforming the local coordinate system into the world coordinate system B; solving for the optimal R and T can be decomposed into the following steps:
(1) solving the centroid of each data set:
centroid_A = (1/N) Σ Ai, centroid_B = (1/N) Σ Bi,
obtaining the geometric center (centroid) of each group of points;
(2) translating the two data sets to the origin and then solving the optimal rotation matrix between the two coordinate systems, the rotation matrix being obtained by singular value decomposition (SVD):
H = Σ (Ai − centroid_A)(Bi − centroid_B)ᵀ
[U, S, V] = SVD(H), which decomposes the covariance matrix H into the U, S, V matrices;
(3) solving the rotation matrix R and the translation matrix T, where R = V·Uᵀ and T = −R·centroid_A + centroid_B (a numpy sketch follows after this list);
5. projecting the boundary points converted into the world coordinate system onto the XOZ plane, then performing least-squares curve fitting on the road edge 2 m ahead to form a curve, obtaining the curvature information τ of the road and providing a judgment basis for whole-vehicle control.
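Steps (1)–(3) correspond to the standard SVD-based (Kabsch-style) rigid alignment; a sketch in numpy is given below, with a reflection guard added as a common safeguard that is not mentioned in the text.

```python
import numpy as np

def rigid_transform(A, B):
    """Optimal rotation R and translation T with B ≈ R @ A + T, solved by SVD
    of the covariance matrix. A and B are 3xN arrays of corresponding points
    (e.g. three curb boundary points in the camera and world frames)."""
    centroid_A = A.mean(axis=1, keepdims=True)
    centroid_B = B.mean(axis=1, keepdims=True)
    H = (A - centroid_A) @ (B - centroid_B).T        # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = -R @ centroid_A + centroid_B
    return R, T
```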
The corresponding second road curvature at the second distance is identified by the camera, so as to obtain the road curvature τ at the position 8m in front of the robot, and the specific process is as follows:
1. collecting road images of the road section to be identified, labeling objects to be identified such as the road and zebra crossings, processing the label data, and training a semantic segmentation model with the processed label data; the semantic segmentation model used in this embodiment is Fast-SCNN, which performs well in both recognition speed and accuracy;
2. while the sweeping robot is operating, collecting images of the road ahead in real time and using the trained model to perform recognition inference on them to identify the road area;
3. processing the identified road area with morphological operations such as erosion and dilation to remove noise and obtain a complete road area, and extracting the edge line on the right side of the road;
4. performing Hough line detection on the road edge lines (a sketch follows below): when one line, or several lines with small angle differences, is detected, the road ahead is judged to be straight; when two or more lines whose angle difference exceeds a set threshold are detected, the road ahead is judged to contain a gentle curve; when two or more lines whose angle difference exceeds another, larger set threshold are detected, the road ahead is judged to contain a right-angle curve. The road curve type is thus identified, which facilitates subsequent early warning of the road condition. In this embodiment, the road image is recognized and inferred from the RGB information of the camera, and the road area is identified to obtain the edge line information, so that the road shape can be effectively distinguished 8 m in advance.
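The classification rule of step 4 could be sketched as follows with OpenCV's probabilistic Hough transform; the two angle thresholds and the Hough parameters are illustrative values, not the set thresholds of the patent.

```python
import cv2
import numpy as np

def classify_road_shape(edge_mask, gentle_deg=15.0, sharp_deg=60.0):
    """Hough detection on the binary right-edge mask, then classification
    by the spread of detected line angles (sketch of step 4)."""
    lines = cv2.HoughLinesP(edge_mask, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return "unknown"
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    spread = max(angles) - min(angles)
    if spread < gentle_deg:
        return "straight"
    if spread < sharp_deg:
        return "gentle_curve"
    return "right_angle_curve"
```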
In the preferred embodiment of the invention, an instruction is sent to a vehicle control unit according to the final road edge distance detection value, the road surface gradient and the road curve and straight type of the front and rear single-line laser radars, so that the robot carries out welting motion according to the set welting distance and the vehicle speed, and the method specifically comprises the following steps:
S4221, if 8° < σ ≤ 15°, the corresponding vehicle speed is v = 150; if 0° < σ ≤ 8°, the corresponding vehicle speed is v = 90; if −8° < σ ≤ 0°, the corresponding vehicle speed is v = 60; if −15° < σ ≤ −8°, the corresponding vehicle speed is v = 30 (a negative angle indicates a downhill slope);
S4222, if the road curve type is a straight road, sending a driving instruction to the vehicle control unit and controlling the robot steering system to perform the welting motion so that e < d1 < f, with a corresponding vehicle speed of v = 150, where e and f are set thresholds;
S4223, if the road curve type is a gentle curve, sending a driving instruction to the vehicle control unit and controlling the robot steering system to perform the curve welting motion so that s < d1 < j, with a corresponding vehicle speed of v = 90, where j and s are set thresholds, e < s, f < j and a > b;
S4224, if the road curve type is a right-angle curve, sending a driving instruction to the vehicle control unit and controlling the robot steering system to prepare for the curve so that r < d1 < t, with a corresponding vehicle speed of v = 60; then reading the first road curvature τ at 2 m in front of the robot, and if (τ > λ1) && (d1 < p) && (d2 < p), controlling the robot steering system to perform the curve welting motion so that g < d1 < h, where λ1, r, t, g, h and p are set thresholds, s < r < g, j < t < h and b > c; in this embodiment p = 250 cm;
S4225, determining the final vehicle speed as the smaller of the vehicle speeds corresponding to the road curve type and to the road surface gradient of the road section to be identified.
This embodiment sets the corresponding vehicle speed and welting distance for different road surface gradients and road curve types, ensuring the safety and stability of the robot when welting along roads with different gradients and curve types. When the robot performs the welting motion at the set welting distance and vehicle speed, longitudinal control follows the road surface gradient; since under most road conditions a gradient extends over a whole section, the vehicle body state can be maintained for a long time. At moments when the road both has a gradient and requires a turn, the smaller of the speeds corresponding to the gradient and to the curve type is used for longitudinal control, ensuring driving safety. In addition, in this embodiment the curve type of the road 8 m ahead is obtained in advance while the robot is travelling, so the robot has enough time to adjust the vehicle speed and the welting distance and the adjustment process remains stable and safe; adjusting too early, however, would reduce the robot's working efficiency. When facing a right-angle curve, where the road turns sharply, the robot's speed and welting distance are adjusted 8 m in advance; as the robot continues forward to about 2 m from the right-angle curve, the welting distance is further adjusted in advance according to the first road curvature τ at 2 m ahead together with d1 and d2. Compared with welting along a straight road or a gentle curve, the welting motion in a right-angle curve needs a larger safety space, so the adjustment at this point generally increases the current welting distance; this keeps the adjustment process stable and safe, maintains a sufficient safety distance between the robot and the road edge during the right-angle welting motion, and ensures that the right-angle turn is completed smoothly. By fusing the information of the three sensors, this embodiment can therefore effectively anticipate and detect the road, carry out intelligent self-adaptive welting operation, and keep the average deviation of the system below 2 cm. In the above embodiment the speed v is not literally 150, 90, 60 or 30; these values merely express the ratio of vehicle speeds under the different road conditions, and the actual vehicle speed is adjusted as needed on that basis, for example v = 25 km/h, 20 km/h, 15 km/h, 10 km/h or 5 km/h.
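Combining S4221–S4225, the speed selection reduces to taking the smaller of the slope-based and curve-type-based coefficients, roughly as sketched below; the gradient intervals follow the reconstruction above and the curve-type names are illustrative.

```python
def final_speed(slope_deg, curve_type):
    """Smaller of the slope-based and curve-type-based speed coefficients (S4225)."""
    if 8 < slope_deg <= 15:
        v_slope = 150
    elif 0 < slope_deg <= 8:
        v_slope = 90
    elif -8 < slope_deg <= 0:
        v_slope = 60
    else:                                  # -15 < slope_deg <= -8, downhill
        v_slope = 30
    v_curve = {"straight": 150, "gentle_curve": 90, "right_angle_curve": 60}[curve_type]
    return min(v_slope, v_curve)
```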
In the preferred embodiment of the present invention, after the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars are obtained, and before the instruction is sent to the vehicle control unit so that the robot performs the welting motion at the set welting distance and vehicle speed, the method further includes the following steps:
S3a, adjusting the vehicle body posture according to the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars so that the vehicle body direction is consistent with the traveling direction, which specifically includes the following steps:
S3a-1, if (α < |d1 − d2| < β) && (e < d1 < f), the vehicle body direction is consistent with the traveling direction; a driving instruction is sent to the vehicle control unit to control the steering system to return to center, where α, β, e and f are all set thresholds;
S3a-2, if (|d1 − d2| > β) && (d1 > d2), a driving instruction is sent to the vehicle control unit to control the steering system to turn left by a set angle, making the vehicle body direction consistent with the traveling direction and completing the vehicle body posture adjustment;
S3a-3, if (|d1 − d2| > β) && (d1 < d2), a driving instruction is sent to the vehicle control unit to control the steering system to turn right by a set angle, making the vehicle body direction consistent with the traveling direction and completing the vehicle body posture adjustment.
In this embodiment, once the final road edge distance detection values d1 and d2 are obtained and before the robot starts moving, the current vehicle body posture must first be checked to ensure that the vehicle body and the traveling direction are consistent before the welting operation begins. The current vehicle body posture is judged from the relationship between d1, d2 and the set thresholds α, β, e and f: if (α < |d1 − d2| < β) && (e < d1 < f), the vehicle body direction is consistent with the traveling direction and the vehicle control unit commands the steering system to return to center; if (|d1 − d2| > β) && (d1 > d2) or (|d1 − d2| > β) && (d1 < d2), the vehicle control unit commands the steering system to turn left or right by a certain angle, so that the vehicle body is first aligned with the traveling direction and the posture adjustment is completed. Only then is the instruction sent to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot performs the welting motion at the set welting distance and vehicle speed, which improves the stability and reliability of the robot's welting motion.
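A minimal sketch of the posture check S3a-1 to S3a-3 follows; the returned command strings are placeholders for the driving instructions sent to the vehicle control unit.

```python
def posture_command(d1, d2, alpha, beta, e, f):
    """Decide whether the body is already parallel to the curb or needs a
    small left/right correction before the welting motion starts."""
    if alpha < abs(d1 - d2) < beta and e < d1 < f:
        return "center_steering"          # body already aligned with travel direction
    if abs(d1 - d2) > beta and d1 > d2:
        return "turn_left_by_set_angle"
    if abs(d1 - d2) > beta and d1 < d2:
        return "turn_right_by_set_angle"
    return "hold"
```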
As shown in fig. 4, another preferred embodiment of the present invention further provides an adaptive welting device based on multi-algorithm fusion, including:
a point cloud acquisition module for respectively acquiring point cloud xi scanned by the front and the back single line laser radars of the robot i
A road edge distance detection module for respectively scanning the point clouds xi obtained by the two single line laser radars based on the road edge side edge geometric characteristics and the grid principle i Processing and extracting road edge information to enable each single line laser radar to return two groups of road edge distance detection values;
the final road edge distance calculation module is used for respectively obtaining final road edge distance detection values d1 and d2 of the front single-line laser radar and the rear single-line laser radar according to the average value of the two groups of road edge distance detection values of the front single-line laser radar and the rear single-line laser radar;
and the welting control module is used for sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars so that the robot carries out welting motion according to the set welting distance and the vehicle speed.
As shown in fig. 5, another embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the adaptive welting operation method based on multi-algorithm fusion.
Another embodiment of the present invention further provides a storage medium, where the storage medium includes a stored program, and when the program runs, the apparatus on which the storage medium is located is controlled to execute the adaptive welting operation method based on multi-algorithm fusion.
As shown in fig. 5 and 7, another preferred embodiment of the present invention also provides a robot including:
the single-line laser radar system comprises two single-line laser radars, a fixed mounting support and a scanning device, wherein the two single-line laser radars are respectively mounted on the front side and the rear side of a robot body, the fixed mounting support is mounted on the robot body and used for preventing vibration, specifically, the ground clearance of the single-line laser radar is set to be 1m, the scanning surface of the single-line laser radar on the front side is perpendicular to the ground and forms an included angle of 45 degrees with a road edge, and the scanning surface of the single-line laser radar on the rear side is perpendicular to the ground and forms an included angle of 90 degrees with the road edge;
the vehicle control unit is used for controlling the transverse and longitudinal movement and braking of the robot according to the instruction;
and the industrial personal computer is respectively in signal connection with the two single-line laser radars and the vehicle control unit and is used for realizing the self-adaptive welting operation method based on multi-algorithm fusion.
As shown in fig. 8, another preferred embodiment of the present invention also provides a robot including:
the single-line laser radar system comprises two single-line laser radars, a fixed mounting support and a scanning device, wherein the two single-line laser radars are respectively mounted on the front side and the rear side of a robot body, the fixed mounting support is mounted on the robot body and used for preventing vibration, specifically, the ground clearance of the single-line laser radar is set to be 1m, the scanning surface of the single-line laser radar on the front side is perpendicular to the ground and forms an included angle of 45 degrees with a road edge, and the scanning surface of the single-line laser radar on the rear side is perpendicular to the ground and forms an included angle of 90 degrees with the road edge;
the GNSS module is arranged on the body of the robot and is used for acquiring the current position coordinates of the robot;
the vehicle control unit is used for controlling the transverse and longitudinal movement and braking of the robot according to the instruction;
and the industrial personal computer is respectively in signal connection with the two single-line laser radars, the GNSS module and the vehicle control unit and is used for realizing the self-adaptive welting operation method based on multi-algorithm fusion.
As shown in fig. 9 and 10, another embodiment of the present invention also provides a robot including:
the single-line laser radar system comprises two single-line laser radars, a fixed mounting support and a scanning device, wherein the two single-line laser radars are respectively mounted on the front side and the rear side of a robot body, the fixed mounting support is mounted on the robot body and used for preventing vibration, specifically, the ground clearance of the single-line laser radar is set to be 1m, the scanning surface of the single-line laser radar on the front side is perpendicular to the ground and forms an included angle of 45 degrees with a road edge, and the scanning surface of the single-line laser radar on the rear side is perpendicular to the ground and forms an included angle of 90 degrees with the road edge;
the camera is arranged on the front side of the robot body and used for acquiring road image information at a certain distance in front of a road section to be identified of the robot;
the depth camera is arranged on the front side of the robot body and used for acquiring original depth frame data which are collected at a certain distance in front of the road section to be identified;
the IMU is arranged on the robot body and used for acquiring the pitch angle of the road section to be identified in real time to obtain the road gradient;
the vehicle control unit is used for controlling the transverse and longitudinal movement and braking of the robot according to the instruction;
and the industrial personal computer is respectively in signal connection with the two single-line laser radars, the camera, the depth camera, the IMU and the vehicle control unit and is used for realizing the self-adaptive edge pasting operation method based on multi-algorithm fusion.
By fusing the information of the single-line laser radars, the camera and the depth camera, this embodiment can effectively perform self-adaptive welting control of the whole vehicle with high welting accuracy and very low computing power requirements; road edge information can be detected quickly, the cost is only about 1/10 of that of existing unmanned driving systems, and the solution is well suited to commercial deployment.
The robot of the above embodiment is further provided with ultrasonic radars as an obstacle-stop system. Dual-probe ultrasonic radars are used, with two groups arranged on each of the front, left and right sides of the robot, 12 probes in total, to detect obstacles around the robot. When an obstacle comes too close to the robot, an obstacle-stop signal is sent and the robot stops automatically. Specifically, if the detection distance of an ultrasonic radar satisfies d_ultra < dv, where dv is a set value, the industrial personal computer controls the vehicle control unit to stop the vehicle and issue an audible and visual alarm; the vehicle control unit comprises the steering system, the braking system, the transmission system and the robot's top-mounted operation device.
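The obstacle-stop rule reduces to a simple threshold test over the probe readings, roughly as sketched below; dv and the returned command dictionary are illustrative, not values from the patent.

```python
def obstacle_stop(d_ultra_list, dv):
    """If any ultrasonic probe reports a distance below the set value dv,
    command the vehicle control unit to brake and raise the audible/visual alarm."""
    if min(d_ultra_list) < dv:
        return {"brake": True, "alarm": True}
    return {"brake": False, "alarm": False}
```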
The functionality of the methods of the above embodiments, if implemented in software functional units and sold or used as a stand-alone product, may be stored in one or more computing device readable storage media. Based on such understanding, part of the contribution of the embodiments of the present invention to the prior art or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the method described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A self-adaptive welt operation method based on multi-algorithm fusion is characterized by comprising the following steps:
respectively acquiring the point clouds ξi scanned by the front and rear single-line laser radars of the robot;
processing the point clouds ξi scanned by the two single-line laser radars based on the geometric features of the road edge side and on the grid principle respectively, and extracting road edge information, so that each single-line laser radar returns two groups of road edge distance detection values;
respectively obtaining final road edge distance detection values d1 and d2 of the front single line laser radar and the rear single line laser radar according to the average value of the two groups of road edge distance detection values of the front single line laser radar and the rear single line laser radar;
and sending an instruction to the vehicle control unit according to the final road edge distance detection values of the front and rear single-line laser radars, so that the robot carries out welting movement according to the set welting distance and the vehicle speed.
2. The adaptive welting operation method based on multi-algorithm fusion according to claim 1, wherein processing the point clouds ξi scanned by the two single-line laser radars based on the geometric features of the road edge side, and extracting road edge information so that each single-line laser radar returns two groups of road edge distance detection values, specifically comprises the following steps:
performing a second-order difference operation on the point cloud ξi to obtain a set A;
in the set A, marking the point cloud data with ξi < 0 as 1 and deleting the points with ξi > 0, obtaining a set A1;
performing DBSCAN clustering on the point cloud in the set A1 to obtain a plurality of cluster sets Φi;
finding the road edge side point cloud cluster among the cluster sets Φi by using the geometric features of the road edge side point cloud, and obtaining the current road edge detection result according to the first point cloud ξ1 in the road edge side point cloud cluster.
3. The adaptive welting operation method based on multi-algorithm fusion according to claim 2, wherein finding the road edge side point cloud cluster among the cluster sets Φi by using the geometric features of the road edge side point cloud, and obtaining the current road edge detection result according to the first point cloud ξ1 in the road edge side point cloud cluster, specifically comprises the following steps:
decomposing each point cloud ξi in each cluster set Φi onto the corresponding XY plane, obtaining the X and Y components ξi_X and ξi_Y of each point cloud ξi, wherein the XY plane is coplanar with the scanning surface of the corresponding single-line laser radar, the center of the corresponding single-line laser radar is the origin, the Y axis is perpendicular to the ground, and the X axis points toward the road edge;
judging, for each cluster set Φi, whether its point cloud satisfies (ξi_Y − ξi-1_Y < δ) && (ξmax_Y − ξmin_Y > σ) && (ξmax_X − ξmin_X < λ) && (ξi > m), wherein ξmax_Y is the maximum Y component of the point cloud in the cluster set Φi, ξmin_Y is the minimum Y component, ξmax_X is the maximum X component, ξmin_X is the minimum X component, and δ, σ, λ and m are all set thresholds;
if so, taking the X component ξ1_X of the first point cloud ξ1 in the road edge side point cloud cluster as the road edge distance detection value ε1 of the current frame.
4. The adaptive welting operation method based on multi-algorithm fusion according to claim 1, wherein processing the point clouds ξi scanned by the two single-line laser radars based on the grid principle, and extracting road edge information so that each single-line laser radar returns two groups of road edge distance detection values, specifically comprises the following steps:
decomposing the point clouds ξi scanned by the two single-line laser radars onto the two corresponding XY planes respectively, obtaining the X and Y components ξi_X and ξi_Y of each point cloud ξi, wherein the two XY planes are coplanar with the scanning surfaces of the corresponding single-line laser radars, the center of the corresponding single-line laser radar is the origin, the Y axis is perpendicular to the ground, and the X axis points toward the road edge;
dividing the point cloud data to be processed into a plurality of equal intervals along the X axis;
calculating the maximum Y value ξn_Ymax and the minimum Y value ξn_Ymin of the n-th interval, and the average value ξn_Ymean of ξn_Ymax and ξn_Ymin;
traversing each equal interval; if the average value ξn_Ymean of the n-th interval and the average value ξn-1_Ymean of the (n−1)-th interval satisfy ξn_Ymean − ξn-1_Ymean > ψ, storing all the point cloud data of the n-th interval into a set A, otherwise deleting all the point cloud data of the n-th interval, ψ being a set threshold;
after denoising preprocessing of all point clouds ξi in the set A, counting the total number t of point clouds ξi in the set A; if |ξi_X − Σξi_X/t| < ω, storing ξi_X into a queue Qi and incrementing the point cloud count m = m + 1; if |ξi_X − Σξi_X/t| > ω, deleting ξi_X, setting n = 0 and keeping the queue Qi-1; repeating this process until the whole set A has been traversed, finally obtaining a set Q of a plurality of queues Qi, ω being a set threshold;
traversing the set Q; for each point cloud ξi in a queue Qi, if (ξi_X > θ) && (|ξi_Ymax − ξi_Ymin| > ω), returning Qi and calculating the average value ξi_Xmean in Qi as the current road edge distance detection value ε2, θ and ω being set thresholds.
5. The adaptive welting operation method based on multi-algorithm fusion according to claim 1, wherein obtaining the final road edge distance detection values d1 and d2 of the front and rear single-line laser radars according to the average value of the two groups of road edge distance detection values of the front and rear single-line laser radars comprises the following steps:
at time i, if (|ε1i-1 − ε1i| < η1) && (|ε2i-1 − ε2i| < η1) && (|ε1i − ε2i| < η2), storing ε1i and ε2i into a queue L, wherein ε1i and ε1i-1 respectively denote the current and previous road edge distance detection values obtained based on the geometric features of the road edge side, ε2i and ε2i-1 respectively denote the current and previous road edge distance detection values obtained based on the grid principle, and η1 and η2 are set thresholds;
taking the average value in the queue L as the final road edge distance detection value d1 or d2 of the corresponding front or rear single-line laser radar.
6. The self-adaptive welting operation method based on multi-algorithm fusion according to any one of claims 1 to 5, wherein a command is sent to a vehicle control unit according to final road edge distance detection values of front and rear single line laser radars, so that a robot carries out welting motion according to a set welting distance and a vehicle speed, and the method specifically comprises the following steps:
sending, according to the final road edge distance detection values of the front and rear single-line laser radars, a driving instruction to the vehicle control unit, and controlling the robot steering system to perform the welting motion so that e < d1 < f, wherein e and f are both set thresholds.
7. The self-adaptive welting operation method based on multi-algorithm fusion according to any one of claims 1 to 5, wherein a command is sent to a vehicle control unit according to final road edge distance detection values of front and rear single line laser radars, so that a robot carries out welting motion according to a set welting distance and a vehicle speed, and the method specifically comprises the following steps:
acquiring the initial position coordinate point of the robot, and confirming the type of the target intersection ahead of the robot according to the initial position coordinate point (E0, N0) of the robot and a route database, wherein the target intersection type comprises: straight-ahead under the road-edge working condition, right turn, and long-distance straight-ahead under the no-road-edge working condition;
and sending an instruction to the vehicle control unit according to the final road edge distance detection values d1 and d2 corresponding to the front and rear single-line laser radars and the type of the target intersection in front of the robot, so that the robot carries out welting motion according to the set welting distance and the speed of the vehicle.
8. The self-adaptive welting operation method based on multi-algorithm fusion as claimed in claim 7, wherein an instruction is sent to a vehicle control unit according to final road edge distance detection values d1 and d2 corresponding to front and rear single line laser radars and a target intersection type in front of the robot, so that the robot carries out welting motion according to a set welting distance and vehicle speed, and the method specifically comprises the following steps:
when the target intersection type is straight-ahead under the road-edge working condition, sending a driving instruction to the vehicle control unit, controlling the robot steering system to perform the welting motion so that e < d1 < f, and controlling the vehicle speed to be a1, wherein e and f are both set thresholds;
when the target intersection type is a right turn, sending a driving instruction to the vehicle control unit, controlling the robot steering system to perform the welting motion so that s < d1 < j, and controlling the vehicle speed to be a2, wherein e < s, f < j and a1 > a2;
when the target intersection type is long-distance straight-ahead under the no-road-edge working condition, performing linear fitting on the initial position coordinate point (E0, N0) and a plurality of continuous position coordinate points within the set range ahead of it to obtain y = kx + b; acquiring the current position coordinate point (E, N) of the robot through the GNSS module; if k·E + b > N, sending a driving instruction to the vehicle control unit to control the steering system to turn left by a set angle; if k·E + b < N, sending a driving instruction to the vehicle control unit to control the steering system to turn right by a set angle; and controlling the vehicle speed to be a2, wherein a1 > a2.
9. The self-adaptive welting operation method based on multi-algorithm fusion according to any one of claims 1 to 5, wherein a command is sent to a vehicle control unit according to final road edge distance detection values of front and rear single line laser radars, so that a robot carries out welting motion according to a set welting distance and a vehicle speed, and the method specifically comprises the following steps:
acquiring the road surface gradient and the road curve type of a road in front of the robot in real time;
and sending an instruction to the vehicle control unit according to the final road edge distance detection value, the road surface gradient and the road curve and straight type of the front and rear single-line laser radars, so that the robot carries out welting motion according to the set welting distance and the vehicle speed.
10. A robot, comprising:
the two single-line laser radars are respectively arranged on the front side and the rear side of the robot body, and the scanning surfaces of the two single-line laser radars are perpendicular to the ground and form set included angles with the road edge;
the vehicle control unit is used for controlling the transverse and longitudinal movement and braking of the robot according to the instruction;
the industrial personal computer is respectively in signal connection with the two single-line laser radars and the vehicle control unit and is used for realizing the self-adaptive welting operation method based on multi-algorithm fusion as claimed in any one of claims 1-9.
CN202011479112.1A 2020-12-15 2020-12-15 Self-adaptive welting operation method based on multi-algorithm fusion and robot Active CN112674646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011479112.1A CN112674646B (en) 2020-12-15 2020-12-15 Self-adaptive welting operation method based on multi-algorithm fusion and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011479112.1A CN112674646B (en) 2020-12-15 2020-12-15 Self-adaptive welting operation method based on multi-algorithm fusion and robot

Publications (2)

Publication Number Publication Date
CN112674646A CN112674646A (en) 2021-04-20
CN112674646B true CN112674646B (en) 2022-08-09

Family

ID=75447891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011479112.1A Active CN112674646B (en) 2020-12-15 2020-12-15 Self-adaptive welting operation method based on multi-algorithm fusion and robot

Country Status (1)

Country Link
CN (1) CN112674646B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113320518B (en) * 2021-06-25 2022-06-03 东风汽车集团股份有限公司 Method and system for preventing vehicle from sliding after parking on ramp
CN114137964A (en) * 2021-11-09 2022-03-04 长沙中联重科环境产业有限公司 Vehicle attitude and sweeping disc and road edge following control method, device and equipment and special vehicle
CN114425774B (en) * 2022-01-21 2023-11-03 深圳优地科技有限公司 Robot walking road recognition method, robot walking road recognition device, and storage medium
CN115251765A (en) * 2022-07-04 2022-11-01 麦岩智能科技(北京)有限公司 Cleaning robot edge sweeping control method based on multiple sensors

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103760569B (en) * 2013-12-31 2016-03-30 西安交通大学 A kind of drivable region detection method based on laser radar
CN106004659B (en) * 2016-08-03 2017-08-04 安徽工程大学 Vehicle-periphery sensory perceptual system and its control method
CN107356933B (en) * 2017-06-23 2020-09-18 南京理工大学 Unstructured road detection method based on four-line laser radar
CN108460416A (en) * 2018-02-28 2018-08-28 武汉理工大学 A kind of structured road feasible zone extracting method based on three-dimensional laser radar
WO2019216469A1 (en) * 2018-05-11 2019-11-14 서울대학교 산학협력단 Method and device for clustering detected targets in vehicle radar system
US11079488B2 (en) * 2018-05-14 2021-08-03 GM Global Technology Operations LLC DBSCAN parameters as function of sensor suite configuration
CN110068836B (en) * 2019-03-20 2024-02-02 同济大学 Laser radar road edge sensing system of intelligent driving electric sweeper
CN110673107B (en) * 2019-08-09 2022-03-08 北京智行者科技有限公司 Road edge detection method and device based on multi-line laser radar
CN111538338B (en) * 2020-05-28 2023-05-26 长沙中联重科环境产业有限公司 Robot welt motion control system and method
CN111796299A (en) * 2020-06-10 2020-10-20 东风汽车集团有限公司 Obstacle sensing method and device and unmanned sweeper

Also Published As

Publication number Publication date
CN112674646A (en) 2021-04-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220510

Address after: 528300 one of No.8 Huishang Road, Jiangcun village, Leliu street, Shunde District, Foshan City, Guangdong Province

Applicant after: Guangdong Yingfeng intelligent environmental sanitation Technology Co.,Ltd.

Applicant after: ZOOMLION ENVIRONMENTAL INDUSTRY Co.,Ltd.

Applicant after: Shenzhen Yingfeng environmental robot Co.,Ltd.

Address before: 528300 one of No.8 Huishang Road, Jiangcun village, Leliu street, Shunde District, Foshan City, Guangdong Province

Applicant before: Guangdong Yingfeng intelligent environmental sanitation Technology Co.,Ltd.

Applicant before: ZOOMLION ENVIRONMENTAL INDUSTRY Co.,Ltd.

GR01 Patent grant