CN117331093A - Unmanned loader obstacle sensing method based on bucket position rejection - Google Patents

Unmanned loader obstacle sensing method based on bucket position rejection

Info

Publication number
CN117331093A
Authority
CN
China
Prior art keywords
bucket
laser radar
information
loader
connecting rod
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311617595.0A
Other languages
Chinese (zh)
Other versions
CN117331093B (en)
Inventor
刘翼
陈畅
黄冠富
范晶晶
张晓明
黄烟平
姜敏玉
孟祥林
闫鹏翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Intelligent Unmanned Equipment Industry Innovation Center Co ltd
Original Assignee
Jiangsu Intelligent Unmanned Equipment Industry Innovation Center Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Intelligent Unmanned Equipment Industry Innovation Center Co ltd filed Critical Jiangsu Intelligent Unmanned Equipment Industry Innovation Center Co ltd
Priority to CN202311617595.0A priority Critical patent/CN117331093B/en
Publication of CN117331093A publication Critical patent/CN117331093A/en
Application granted granted Critical
Publication of CN117331093B publication Critical patent/CN117331093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 21/00 Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B 21/22 Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant, for measuring angles or tapers; for testing the alignment of axes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/87 Combinations of systems using electromagnetic waves other than radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Abstract

The invention discloses an obstacle sensing method for an unmanned loader based on bucket position rejection. The method calculates real-time three-dimensional spatial information of the bucket from the position information and angle information of an upper laser radar, a lower laser radar, a proximal angle sensor and a distal angle sensor, together with the size information of the connecting rod and the three-dimensional structure information of the bucket; it calculates the field-of-view blind zones from the position information and the real-time three-dimensional spatial information, and completes the point cloud data of the upper and lower laser radars on the basis of those blind zones; in the stitched and completed point cloud data, the point clouds of the bucket and its attachments are rejected on the basis of the real-time three-dimensional spatial information of the bucket; finally, the occupancy of the corresponding three-dimensional spatial grid is calculated and a three-dimensional grid image is generated for the unmanned decision and control unit. By monitoring the bucket position in real time and combining it with the laser radar measurements, the invention rejects the bucket and its attachments from the spatial three-dimensional grid in real time while retaining the ability to perceive other obstacles in the same area.

Description

Unmanned loader obstacle sensing method based on bucket position rejection
Technical Field
The invention relates to the technical field of obstacle recognition, and in particular to an obstacle sensing method for an unmanned loader based on bucket position rejection, applied to sensing obstacles in front of construction machinery.
Background
Obstacle recognition is currently one of the most common and vital tasks in unmanned systems, helping to ensure that the vehicle can operate safely. Because driving conditions are highly complex, improving the accuracy of detection and recognition results is imperative.
In the prior art, many obstacle sensing and recognition methods have been proposed, including edge detection, machine learning, deep learning and graph-based methods. For these methods, the main factors that interfere with obstacle detection include lighting conditions and weather such as rain, snow, fog, dust and particulate matter. For an unmanned loader, because of its structural particularity, directly adopting a traditional obstacle recognition method means that, while a laser radar is used to sense obstacles, the bucket and the attached rod system at the front of the loader, whose positions change frequently, are easily treated as obstacles, so that the sensing result is wrong; if instead all point clouds in the area the bucket may reach are simply removed, other obstacles in that area will be missed.
In summary, current methods for sensing obstacles in front of an unmanned loader focus only on generating the loader's surroundings and ignore the false detections that arise in laser radar obstacle detection when loader parts change position, while the existing false-detection rejection methods cause missed detections. Traditional obstacle recognition methods not only reduce the working efficiency of the unmanned loader but, more seriously, increase the possibility of safety accidents; an obstacle sensing method is therefore needed that can accurately measure the real-time position of the bucket and effectively reject the point cloud of the bucket and its attachments.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides an obstacle sensing method for an unmanned loader based on bucket position rejection, so as to solve the problems of false detection and missed detection that arise in the prior art when obstacles are sensed with a laser radar.
In order to solve the technical problems, the specific technical scheme of the invention is as follows:
the invention provides an obstacle sensing method for an unmanned loader based on bucket position rejection, which comprises the following four main steps:
firstly, configuring the laser radars, the angle sensors and the computer processor according to the structure of the unmanned loader;
secondly, calculating the spatial position of the loader bucket in real time;
thirdly, completing the observation blind zones of the laser radars and rejecting the bucket position from the perception result;
fourthly, generating a three-dimensional grid map of the area in front of the loader for the decision and control unit of the unmanned loader.
Each of these four steps has a more specific operating flow, described as follows:
regarding the first step: in order to provide more comprehensive and multi-angle obstacle point cloud information, an upper laser radar is arranged at a position above the loader, and a lower laser radar is arranged at a position below the loader. For convenience of explanation of the embodiment of the present invention, the position of the lidar is limited to the upper and lower positions of the loader, but the present invention is not limited to this, and any position may be used as long as it can acquire obstacle information in front of the vehicle body. In the following description, although the upper and lower lidars are defined, the present invention is not limited to this, and two or more lidars may be set according to the loader structure.
The upper laser radar provides long-range obstacle point cloud information in front of the vehicle, and the lower laser radar provides short-range obstacle point cloud information in front of the vehicle. The two laser radars are fixed in position and sense jointly, which facilitates the subsequent bucket position determination and point cloud rejection. As the bucket moves to different positions it occludes the upper and lower laser radars to different degrees; working together, the two radars provide omnidirectional obstacle point cloud information at different distances and angles, effectively reducing occlusion and achieving a more comprehensive and detailed perception of the environment in front of the loader.
In order to determine the three-dimensional position of the bucket relative to the loader body, angle sensors are arranged at the connections of the linkage. In the following description a six-bar loader is taken as the example for convenience of explanation, but the invention is not limited to this; it may be applied to loaders with three-bar, four-bar, five-bar, six-bar, seven-bar, eight-bar, nine-bar and similar linkages, and when the number of bars changes, the number of angle sensors can be adjusted accordingly.
The configuration of the angle sensors is explained as follows: a connecting rod is connected between the bucket and the loader; a proximal angle sensor is arranged at the connection of the loader and the connecting rod, and a distal angle sensor is arranged at the connection of the bucket and the connecting rod. The proximal angle sensor measures the angle between the connecting rod and the loader, and the distal angle sensor measures the angle between the connecting rod and the bucket. Relative to the three-dimensional spatial information of the laser radars, the two angle sensors are fixed in the lateral direction of the vehicle, so only a two-dimensional positional relationship needs to be considered, and the three-dimensional position of the bucket relative to the vehicle body can be determined from the two angle sensors. For any determined bucket position, the upper and lower laser radars can then be combined to complete the portions of the perceived field of view occluded by the bucket.
In order to acquire and process data in real time, a computer processor is provided; it receives and processes the point cloud information returned by the upper and lower laser radars and runs the algorithms in combination with the angle sensor data. In the subsequent steps, the computer processor calculates the spatial position information of the bucket from the angle sensors and combines it with the point cloud data transmitted by the two laser radars to generate the three-dimensional grid map.
Regarding the second step: following the first step, the position information of the upper laser radar, the lower laser radar and the proximal angle sensor is acquired; the angle information of the proximal and distal angle sensors is acquired; the size information of the connecting rod and the three-dimensional structure information of the bucket are acquired; and the real-time three-dimensional spatial information of the bucket is calculated from the position information, the angle information, the size information of the connecting rod and the three-dimensional structure information of the bucket.
In the second step, since the positions of the upper and lower laser radars relative to the loader are fixed and the loader bucket can move only up and down and forward and backward, not left and right, calculating the three-dimensional position of the bucket reduces to calculating the bucket position in a two-dimensional plane.
More specifically, the upper laser radar is taken as the origin of coordinates, with the positive x half-axis pointing horizontally to the left and the positive y half-axis pointing vertically downward, and a two-dimensional coordinate system is established. Because the positions of the upper laser radar, the lower laser radar and the proximal angle sensor are fixed, their coordinates are known: the coordinates of the upper laser radar are set as (x1, y1), the coordinates of the lower laser radar as (x2, y2), and the coordinates of the proximal angle sensor as (x3, y3).
After the coordinates are set, the coordinates of the connection of the connecting rod and the bucket are calculated from the coordinates of the proximal angle sensor, the size information of the connecting rod and the angle between the connecting rod and the loader. The specific steps are: the horizontal distance from the upper laser radar to the proximal angle sensor is d1 = x3 - x1 and the y-axis distance is d2 = y3 - y1; let the measurement of the proximal angle sensor be α, and let the distance from the proximal angle sensor to the connection of the connecting rod and the bucket, i.e. the length of the connecting rod, be L37. The relative x-axis distance between the upper laser radar and the connection of the connecting rod and the bucket is therefore dx1 = (x3 - x1) + L37 sin α and the relative y-axis distance is dy1 = (y3 - y1) + L37 cos α, so the x coordinate of the connection of the connecting rod and the bucket is x7 = x3 + L37 sin α and its y coordinate is y7 = y3 + L37 cos α.
After the coordinates of the connection of the connecting rod and the bucket are obtained, the real-time three-dimensional spatial information of the bucket is determined from those coordinates, the angle between the connecting rod and the bucket, and the three-dimensional structure information of the bucket. The specific steps are: a reference point is set on the bucket, and the distal angle sensor measures the angle between the connecting rod and this reference point; the position of the reference point relative to the connection of the connecting rod and the bucket is fixed, and the distance between them is set as L67. The coordinates of the reference point are calculated from the coordinates of the connection of the connecting rod and the bucket, the angle between the connecting rod and the reference point, and the distance L67: with the measurement of the distal angle sensor denoted β, the x coordinate of the reference point is x6 = x7 - L67 cos(90° - α + β) and its y coordinate is y6 = y7 - L67 sin(90° - α + β).
Because one directional coordinate of the bucket is fixed, the coordinates of the connection of the connecting rod and the bucket and of the reference point on the bucket can be confirmed from the angle sensors; on this basis, combined with the three-dimensional structure information of the bucket, the three-dimensional spatial information of the bucket can be located, and these data feed the subsequent third step.
Regarding the third step: following the second step, the field-of-view blind zones of the upper and lower laser radars are calculated from the position information of the two laser radars and the real-time three-dimensional spatial information of the bucket; the point cloud data of the upper and lower laser radars are acquired, and the point cloud data are completed on the basis of the blind zones.
The upper laser radar has an upper field-of-view blind zone and the lower laser radar has a lower field-of-view blind zone. The exact position of each blind zone is determined simply by drawing, from each laser radar, the two tangent lines to the upper and lower boundaries of the bucket: the blind zone of the upper laser radar spans from α1 to β1 degrees below the horizontal, and the blind zone of the lower laser radar spans from α2 to β2 degrees above the horizontal.
Seen from the upper laser radar, the blind zone of the lower laser radar lies within the range from α3 degrees above the horizontal to α4 degrees below the horizontal; seen from the lower laser radar, the blind zone of the upper laser radar lies within the range from α5 degrees above the horizontal to α6 degrees below the horizontal. The blind zone of the lower laser radar is therefore completed by stitching in the upper laser radar's data between α3 degrees above and α4 degrees below its horizontal, and the blind zone of the upper laser radar is completed by stitching in the lower laser radar's data between α5 degrees above and α6 degrees below its horizontal.
Regarding the fourth step: following the third step, the relative positional relationship between the upper and lower laser radars is calculated from their position information, and the completed upper and lower laser radar point clouds are stitched and fused on the basis of this relationship to obtain the stitched point cloud data; the point cloud of the bucket is rejected from the stitched point cloud data on the basis of the real-time three-dimensional spatial information of the bucket, giving the integrated point cloud data; the occupancy of the corresponding three-dimensional spatial grid is calculated from the integrated point cloud data, and a three-dimensional grid image of the area in front of the loader, with the bucket removed, is generated from that occupancy.
More specifically, a three-dimensional grid map abstracts real-world space into a grid space based on a three-dimensional rectangular coordinate system (x, y, z). In this grid space a universal set Ω is defined, each element of which is called a voxel, with C(x,y,z) denoting its three-dimensional coordinates. Each voxel is regarded as a cube of side length λ whose edges are parallel to the spatial coordinate axes. The map reflects the presence or absence of objects in the actual environment by assigning each voxel an occupancy value, either deterministically or probabilistically: depending on the sensing data and the environment model, each voxel may be given a value indicating whether it is occupied by a real object. On this abstraction a three-dimensional grid map can be created, with the resolution parameter λ as its basis. The map can be used to describe the position, shape and distribution of objects in the environment.
To describe the generation of the three-dimensional grid image more clearly: the three-dimensional grid model is generated by a filling method. First, the defining boundary of the grid model is generated; the three-dimensional grid model is then filled inside this boundary. The boundary-based generation of discrete points comprises the following specific steps:
Defining the size of the generated model: first, the three-dimensional spatial center coordinates (x, y, z) of the model to be generated and the spatial dimensions of the model are defined.
Generating a structural model of the reflecting objects in front of the laser radar: using the data points from the surfaces of the objects in front, a surface bounding-box model of those objects is generated. This model carries no attribute values yet; it can be created with different surface-generation algorithms, or the boundaries of an existing three-dimensional vector data model can be used.
Defining the filling grid size: the dimensions (a, b, c) of the grid are determined and dynamically adjusted according to the size of the model and the accuracy requirements. If the object in front is large but the required accuracy relatively low, the grid size can be increased to speed up the computation; conversely, if the object is small but high precision is required, the grid size can be reduced to meet the precision requirement.
Determining the grid attribute values: the attribute values of the grid are determined from the influence of the known attribute points on the grid points.
Filling the grid: the grid is filled layer by layer, using the generated object structure surface as the boundary.
The technical scheme of the invention has the following beneficial effects:
With the loader obstacle sensing method based on bucket position rejection, when the loader is in unmanned mode the spatial position of the bucket is calculated in real time, and the bucket is rejected from the obstacle point cloud and the three-dimensional obstacle grid using that spatial position, yielding a more refined result for sensing obstacles in front of the loader. The invention effectively eliminates the influence of the loader's own parts on laser radar obstacle detection, prevents missed detections while those parts are being removed, overcomes the shortcomings of the prior art, and has very high application value.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a flow diagram of the loader obstacle sensing method based on bucket position rejection according to the invention;
FIG. 2 is a schematic view of the installation of the laser radars, the angle sensors and the computer processor in the loader obstacle sensing method based on bucket position rejection according to the invention;
FIG. 3 is a schematic view of the laser radar field-of-view blind zones in the loader obstacle sensing method based on bucket position rejection according to the invention;
FIG. 4 is a schematic view of the completion of the laser radar field-of-view blind zones in the loader obstacle sensing method based on bucket position rejection according to the invention;
FIG. 5 is a schematic view of the three-dimensional grid map generation process in the loader obstacle sensing method based on bucket position rejection according to the invention.
The reference labels in the drawings are as follows: 1. upper laser radar; 2. lower laser radar; 3. proximal angle sensor; 4. distal angle sensor; 5. computer processor; 6. reference point on the bucket; 7. connection of the connecting rod and the bucket.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of protection of the invention is defined more clearly.
In the description of the present invention, it should be noted that the described embodiments are some, but not all, embodiments of the invention; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
The embodiment provides a method for sensing obstacles of an unmanned loader based on bucket position rejection, as shown in figs. 1 to 5, comprising the following steps:
firstly, configuring the laser radars, the angle sensors and the computer processor according to the structure of the unmanned loader;
secondly, calculating the spatial position of the loader bucket in real time;
thirdly, completing the observation blind zones of the laser radars and rejecting the bucket position from the perception result;
fourthly, generating a three-dimensional grid map of the area in front of the loader for the decision and control unit of the unmanned loader.
Each of these four steps has a more specific operating flow, described as follows:
regarding the first step: in order to be able to provide more comprehensive, multi-angle obstacle point cloud information, a lidar 1 is arranged in a position above the loader and a lidar 2 is arranged in a position below the loader, see fig. 2. For convenience of explanation of the embodiment of the present invention, the position of the lidar is limited to the upper and lower positions of the loader, but the present invention is not limited to this, and any position may be used as long as it can acquire obstacle information in front of the vehicle body. In the following description, although the upper and lower lidars are defined, the present invention is not limited to this, and two or more lidars may be set according to the loader structure.
The upper laser radar 1 provides long-range obstacle point cloud information in front of the vehicle, and the lower laser radar 2 provides short-range obstacle point cloud information in front of the vehicle. The two laser radars are fixed in position and sense jointly, which facilitates the subsequent bucket position determination and point cloud rejection. As the bucket moves to different positions it occludes the upper and lower laser radars to different degrees; working together, the two radars provide omnidirectional laser point cloud information at different distances and angles, effectively reducing occlusion and achieving a more comprehensive and detailed perception of the environment in front of the loader.
In order to determine the three-dimensional position of the bucket relative to the loader body, angle sensors are arranged at the connections of the linkage. In the following description a six-bar loader is taken as the example for convenience of explanation, but the invention is not limited to this; it may be applied to loaders with three-bar, four-bar, five-bar, six-bar, seven-bar, eight-bar, nine-bar and similar linkages, and when the number of bars changes, the number of angle sensors can be adjusted accordingly.
The configuration of the angle sensors is described in detail below, see fig. 2. A connecting rod is connected between the bucket and the loader; a proximal angle sensor 3 is arranged at the connection of the loader and the connecting rod, and a distal angle sensor 4 is arranged at the connection of the bucket and the connecting rod. The proximal angle sensor 3 measures the angle between the connecting rod and the loader, and the distal angle sensor 4 measures the angle between the connecting rod and the bucket. Relative to the three-dimensional spatial information of the laser radars, the two angle sensors are fixed in the lateral direction of the vehicle, so only a two-dimensional positional relationship needs to be considered, and the three-dimensional position of the bucket relative to the vehicle body can be determined from the two angle sensors. For any determined bucket position, the upper and lower laser radars can then be combined to complete the portions of the perceived field of view occluded by the bucket.
In order to acquire and process data in real time, a computer processor 5 is provided; it receives and processes the point cloud information returned by the upper and lower laser radars and runs the algorithms in combination with the angle sensor data. In the subsequent steps, the computer processor calculates the spatial position information of the bucket from the angle sensors and combines it with the point cloud data transmitted by the two laser radars to generate the three-dimensional grid map.
Regarding the second step: following the first step, the position information of the upper laser radar 1, the lower laser radar 2 and the proximal angle sensor 3 is acquired; the angle information of the proximal angle sensor 3 and the distal angle sensor 4 is acquired; the size information of the connecting rod and the three-dimensional structure information of the bucket are acquired; and the real-time three-dimensional spatial information of the bucket is calculated from the position information, the angle information, the size information of the connecting rod and the three-dimensional structure information of the bucket.
In the second step, as shown in fig. 2, since the positions of the upper and lower laser radars relative to the loader are fixed and the loader bucket can move only up and down and forward and backward, not left and right, predicting the bucket position reduces to calculating the bucket position in a two-dimensional plane.
More specifically, the upper laser radar 1 is taken as the origin of coordinates, with the positive x half-axis pointing horizontally to the left and the positive y half-axis pointing vertically downward, and the two-dimensional coordinate system shown in fig. 2 is established. Because the positions of the upper laser radar 1, the lower laser radar 2 and the proximal angle sensor 3 are fixed, their coordinates are known: the coordinates of the upper laser radar 1 are set as (x1, y1), the coordinates of the lower laser radar 2 as (x2, y2), and the coordinates of the proximal angle sensor 3 as (x3, y3).
After the coordinates are set, the coordinates of the connection 7 of the connecting rod and the bucket are calculated from the coordinates of the proximal angle sensor 3, the size information of the connecting rod and the angle between the connecting rod and the loader. The specific steps are: the horizontal distance from the upper laser radar 1 to the proximal angle sensor 3 is d1 = x3 - x1 and the y-axis distance is d2 = y3 - y1; let the measurement of the proximal angle sensor 3 be α, and let the distance from the proximal angle sensor 3 to the connection 7 of the connecting rod and the bucket, i.e. the length of the connecting rod, be L37. The relative x-axis distance between the upper laser radar 1 and the connection 7 is dx1 = (x3 - x1) + L37 sin α and the relative y-axis distance is dy1 = (y3 - y1) + L37 cos α, so the x coordinate of the connection 7 of the connecting rod and the bucket is x7 = x3 + L37 sin α and its y coordinate is y7 = y3 + L37 cos α.
After the coordinates of the connection 7 of the connecting rod and the bucket are obtained, the real-time three-dimensional spatial information of the bucket is determined from those coordinates, the angle between the connecting rod and the bucket, and the three-dimensional structure information of the bucket. The specific steps are: a reference point 6 is set on the bucket, and the distal angle sensor 4 measures the angle between the connecting rod and this reference point; the position of the reference point 6 relative to the connection 7 of the connecting rod and the bucket is fixed, and the distance between them is set as L67. The coordinates of the reference point are calculated from the coordinates of the connection 7, the angle between the connecting rod and the reference point, and the distance L67: with the measurement of the distal angle sensor 4 denoted β, the x coordinate of the reference point 6 is x6 = x7 - L67 cos(90° - α + β) and its y coordinate is y6 = y7 - L67 sin(90° - α + β).
Because one directional coordinate of the bucket is fixed, the coordinates of the connection 7 of the connecting rod and the bucket and of the reference point 6 on the bucket can be confirmed from the angle sensors; on this basis, combined with the three-dimensional structure information of the bucket, the three-dimensional spatial information of the bucket can be located, and these data feed the subsequent third step.
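For illustration only, the following minimal Python sketch reproduces the coordinate calculation of this second step under the conventions above (origin at the upper laser radar 1, y positive downward). The function name and the numeric values in the usage line are assumptions of this sketch, not part of the disclosed method.

# Illustrative sketch (not part of the claims): recover the connection 7 of the
# connecting rod and the bucket, and the reference point 6 on the bucket, from
# the two angle-sensor readings, in the upper-laser-radar frame (y positive down).
import math

def bucket_pose_2d(x3, y3, alpha_deg, beta_deg, L37, L67):
    """Return ((x7, y7), (x6, y6)) in the upper-laser-radar frame.

    x3, y3    -- fixed coordinates of the proximal angle sensor 3
    alpha_deg -- proximal sensor reading (connecting rod vs. loader), degrees
    beta_deg  -- distal sensor reading (connecting rod vs. reference point), degrees
    L37       -- length of the connecting rod (sensor 3 to connection 7)
    L67       -- distance from connection 7 to reference point 6
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    # Connection of the connecting rod and the bucket (point 7):
    x7 = x3 + L37 * math.sin(a)
    y7 = y3 + L37 * math.cos(a)
    # Reference point on the bucket (point 6), using the angle 90 deg - alpha + beta:
    phi = math.radians(90.0) - a + b
    x6 = x7 - L67 * math.cos(phi)
    y6 = y7 - L67 * math.sin(phi)
    return (x7, y7), (x6, y6)

# Usage with placeholder geometry: sensor 3 at (1.2, 0.9) m, alpha = 35 deg, beta = 20 deg.
joint7, ref6 = bucket_pose_2d(1.2, 0.9, 35.0, 20.0, L37=1.8, L67=0.6)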
Regarding the third step: following the second step, the field-of-view blind zones of the upper laser radar 1 and the lower laser radar 2 are calculated from the position information of the two laser radars and the real-time three-dimensional spatial information of the bucket; the point cloud data of the upper laser radar 1 and the lower laser radar 2 are acquired, and the point cloud data are completed on the basis of the blind zones.
As shown in fig. 3, the upper laser radar 1 has an upper field-of-view blind zone and the lower laser radar 2 has a lower field-of-view blind zone. The exact position of each blind zone is determined simply by drawing, from each laser radar, the two tangent lines to the upper and lower boundaries of the bucket: as can be seen in fig. 3, the blind zone of the upper laser radar spans from α1 to β1 degrees below the horizontal, and the blind zone of the lower laser radar spans from α2 to β2 degrees above the horizontal.
As shown in fig. 4, seen from the upper laser radar 1, the blind zone of the lower laser radar 2 lies within the range from α3 degrees above the horizontal to α4 degrees below the horizontal; seen from the lower laser radar 2, the blind zone of the upper laser radar 1 lies within the range from α5 degrees above the horizontal to α6 degrees below the horizontal. The blind zone of the lower laser radar 2 is therefore completed by stitching in the upper laser radar 1's data between α3 degrees above and α4 degrees below its horizontal, and the blind zone of the upper laser radar 1 is completed by stitching in the lower laser radar 2's data between α5 degrees above and α6 degrees below its horizontal.
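For illustration of the tangent-line construction, the sketch below estimates the angular interval occluded by the bucket as seen from one laser radar. It assumes the bucket silhouette is available as a set of 2D outline points in the radar's vertical plane; the function name and the toy coordinates are assumptions of the sketch.

import math

def blind_zone_deg(lidar_xy, bucket_outline):
    """Occluded angular interval (low_deg, high_deg) seen from one laser radar.

    Angles are measured from the horizontal; with the embodiment's y-down
    convention a positive angle means below the horizontal. bucket_outline is a
    sequence of (x, y) silhouette points of the bucket at its current pose,
    assumed to lie in front of the radar.
    """
    lx, ly = lidar_xy
    angles = [math.degrees(math.atan2(py - ly, px - lx)) for px, py in bucket_outline]
    # The two extreme viewing rays are the two tangent lines to the bucket boundary.
    return min(angles), max(angles)

# Usage: blind zone of the upper laser radar at the origin for a toy silhouette.
zone = blind_zone_deg((0.0, 0.0), [(2.0, 0.5), (2.5, 1.2), (3.0, 0.8)])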
Regarding the fourth step: following the third step, the relative positional relationship between the upper laser radar 1 and the lower laser radar 2 is calculated from their position information, and the completed upper and lower laser radar point clouds are stitched and fused on the basis of this relationship to obtain the stitched point cloud data; the point cloud of the bucket is rejected from the stitched point cloud data on the basis of the real-time three-dimensional spatial information of the bucket, giving the integrated point cloud data; the occupancy of the corresponding three-dimensional spatial grid is calculated from the integrated point cloud data, and a three-dimensional grid image of the area in front of the loader, with the bucket removed, is generated from that occupancy. The aim of the invention is to determine the real-time position of the bucket accurately, to reject the point cloud of the bucket and its attachments effectively, and thereby to generate the three-dimensional grid map of the area in front of the loader; how the decision and control unit of the unmanned loader performs obstacle avoidance and path planning from the three-dimensional grid map is a general technique in the field, and the invention is not limited in this respect, as long as obstacle avoidance and path planning can be realized from the three-dimensional grid map.
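For illustration, a minimal numpy sketch of the stitching and rejection of this fourth step follows. It assumes the fixed relative pose of the two radars is given as a rotation R and a translation t, and it simplifies the bucket's real-time three-dimensional structure information to an axis-aligned bounding box; both are assumptions of the sketch, not of the disclosed method.

import numpy as np

def stitch_and_reject(cloud_upper, cloud_lower, R, t, box_min, box_max):
    """Fuse the two completed clouds in the upper-radar frame and drop bucket points.

    cloud_upper, cloud_lower -- (N, 3) arrays in each radar's own frame
    R (3x3), t (3,)          -- fixed pose of the lower radar in the upper frame
    box_min, box_max         -- axis-aligned box enclosing the bucket and its
                                linkage at the current pose (a simplification)
    """
    lower_in_upper = cloud_lower @ R.T + t             # relative-position transform
    merged = np.vstack([cloud_upper, lower_in_upper])  # stitched point cloud data
    mn = np.asarray(box_min, float)
    mx = np.asarray(box_max, float)
    inside = np.all((merged >= mn) & (merged <= mx), axis=1)
    return merged[~inside]                             # integrated point cloud data

# Usage with toy data and an identity relative pose:
up, low = np.random.rand(100, 3), np.random.rand(80, 3)
out = stitch_and_reject(up, low, np.eye(3), np.zeros(3), (0.4, 0.4, 0.4), (0.6, 0.6, 0.6))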
More specifically, a three-dimensional grid map abstracts real-world space into a grid space based on a three-dimensional rectangular coordinate system (x, y, z). In this grid space a universal set Ω is defined, each element of which is called a voxel, with C(x,y,z) denoting its three-dimensional coordinates. Each voxel is regarded as a cube of side length λ whose edges are parallel to the spatial coordinate axes. The map reflects the presence or absence of objects in the actual environment by assigning each voxel an occupancy value, either deterministically or probabilistically: depending on the sensing data and the environment model, each voxel may be given a value indicating whether it is occupied by a real object. On this abstraction a three-dimensional grid map can be created, with the resolution parameter λ as its basis. The map can be used to describe the position, shape and distribution of objects in the environment; the implementation is shown in fig. 5.
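For illustration, the deterministic variant of this occupancy assignment can be sketched as follows; the boundary-filling method actually described in the next paragraphs is more elaborate, so this simple point-voxelization is only an assumed stand-in for computing the three-dimensional grid occupancy of the integrated point cloud.

import numpy as np

def occupancy_grid(points, center, size, lam):
    """Deterministic occupancy: a voxel is marked occupied if any point falls in it.

    points -- (N, 3) integrated point cloud in the upper-radar frame
    center -- (x, y, z) center of the generated model
    size   -- (sx, sy, sz) spatial extent of the model
    lam    -- voxel side length (the resolution parameter lambda)
    """
    center = np.asarray(center, float)
    size = np.asarray(size, float)
    origin = center - size / 2.0
    dims = np.ceil(size / lam).astype(int)
    grid = np.zeros(dims, dtype=bool)
    idx = np.floor((np.asarray(points, float) - origin) / lam).astype(int)
    ok = np.all((idx >= 0) & (idx < dims), axis=1)  # keep points inside the model bounds
    grid[tuple(idx[ok].T)] = True
    return grid

# Usage: a 10 m cube around (5, 5, 5) at 0.5 m resolution.
g = occupancy_grid(np.random.rand(200, 3) * 10.0, (5.0, 5.0, 5.0), (10.0, 10.0, 10.0), 0.5)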
To describe the generation of the three-dimensional grid image more clearly: the three-dimensional grid model is generated by a filling method. First, the defining boundary of the grid model is generated; the three-dimensional grid model is then filled inside this boundary. The boundary-based generation of discrete points comprises the following specific steps:
Defining the size of the generated model: first, the three-dimensional spatial center coordinates (x, y, z) of the model to be generated and the spatial dimensions of the model are defined.
Generating a structural model of the reflecting objects in front of the laser radar: using the data points from the surfaces of the objects in front, a surface bounding-box model of those objects is generated. This model carries no attribute values yet; it can be created with different surface-generation algorithms, or the boundaries of an existing three-dimensional vector data model can be used.
Defining the filling grid size: the dimensions (a, b, c) of the grid are determined and dynamically adjusted according to the size of the model and the accuracy requirements. If the object in front is large but the required accuracy relatively low, the grid size can be increased to speed up the computation; conversely, if the object is small but high precision is required, the grid size can be reduced to meet the precision requirement.
Determining the grid attribute values: the attribute values of the grid are determined from the influence of the known attribute points on the grid points.
Filling the grid: the grid is filled layer by layer, using the generated object structure surface as the boundary.
Compared with the prior art, the unmanned loader obstacle sensing method based on bucket position rejection calculates the three-dimensional spatial position of the bucket in real time from the angle sensors; with this three-dimensional information and that of the data acquisition equipment, the data can be processed better, the bucket and its rod system rejected, the obstacle points in front accurately obtained and removed, and the three-dimensional obstacle grid map generated. Obstacle perception is thus unaffected by the position changes of the bucket and its rod system during operation, the problems of false and missed detection are avoided, and the working efficiency and safety of the unmanned loader are improved; the method overcomes the shortcomings of the prior art and has very high application value.
It should be understood that, in the various embodiments herein, the sequence number of each process described above does not mean the sequence of execution, and the execution sequence of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments herein.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present invention.

Claims (10)

1. An obstacle sensing method of an unmanned loader based on bucket position rejection, which is applied to the unmanned loader, is characterized by comprising the following steps:
an upper laser radar and a lower laser radar are arranged on a loader, a near-end angle sensor is arranged at the joint of the loader and a connecting rod, and a far-end angle sensor is arranged at the joint of a bucket and the connecting rod;
acquiring position information of the upper laser radar, the lower laser radar and the near-end angle sensor; acquiring angle information of the near-end angle sensor and the far-end angle sensor; acquiring size information of the connecting rod and three-dimensional structure information of the bucket; calculating real-time three-dimensional space information of the bucket based on the position information, the angle information, the size information of the connecting rod, and the three-dimensional structure information of the bucket;
calculating a vision blind area of the upper laser radar and the lower laser radar based on the position information of the upper laser radar, the position information of the lower laser radar and the real-time three-dimensional space information of the bucket; acquiring point cloud data of the upper laser radar and the lower laser radar, and complementing the point cloud data based on the visual field blind area;
splicing, fusing and complementing the point cloud data to obtain splicing point cloud data; removing point clouds of the bucket from the splicing point cloud data based on the real-time three-dimensional space information of the bucket to obtain integrated point cloud data; and generating a three-dimensional grid image based on the integrated point cloud data for use by a decision and control unit of the unmanned loader.
2. The unmanned loader obstacle sensing method based on bucket position rejection according to claim 1, wherein: the upper laser radar is fixed above the loader and is used for acquiring long-range obstacle point cloud information in front of the loader; the lower laser radar is fixed below the loader and is used for acquiring short-range obstacle point cloud information in front of the loader.
3. A method of unmanned loader obstacle sensing based on bucket position rejection as in claim 2, wherein: the connecting rod is connected between the bucket and the loader; the proximal angle sensor is used for measuring angle information between the connecting rod and the loader, and the distal angle sensor is used for measuring angle information between the connecting rod and the bucket.
4. A method of unmanned loader obstacle sensing based on bucket position rejection as defined in claim 3, wherein: the calculating of real-time three-dimensional space information of the bucket based on the position information, the angle information, the size information of the connecting rod, and the three-dimensional structure information of the bucket further includes:
establishing a two-dimensional coordinate system by taking the upper laser radar as a coordinate origin, and determining coordinates of the upper laser radar, the lower laser radar and the near-end angle sensor in the two-dimensional coordinate system according to the position information;
calculating coordinates of a joint of the connecting rod and the bucket according to coordinates of the near-end angle sensor, size information of the connecting rod and angle information between the connecting rod and the loader;
real-time three-dimensional space information of the bucket is determined based on coordinates of a connecting position of the connecting rod and the bucket, angle information between the connecting rod and the bucket and three-dimensional structure information of the bucket.
5. The unmanned loader obstacle sensing method based on bucket position rejection according to claim 4, wherein: the determining real-time three-dimensional space information of the bucket based on coordinates of a connection point of the connecting rod and the bucket, angle information between the connecting rod and the bucket, and three-dimensional structure information of the bucket, further includes:
the bucket is provided with a reference point, and the remote angle sensor is used for measuring angle information between the connecting rod and the reference point on the bucket;
obtaining the distance from the reference point on the bucket to the joint of the connecting rod and the bucket;
calculating coordinates of a reference point on the bucket based on coordinates of a connecting position of the connecting rod and the bucket, angle information between the connecting rod and the reference point on the bucket and distance from the reference point on the bucket to the connecting position of the connecting rod and the bucket;
and positioning real-time three-dimensional space information of the bucket based on the coordinates of the connecting rod and the bucket, the coordinates of the reference point on the bucket and the three-dimensional structure information of the bucket.
6. The unmanned loader obstacle sensing method based on bucket position rejection according to claim 5, wherein: the size information of the connecting rod is the length of the connecting rod.
7. The unmanned loader obstacle sensing method based on bucket position rejection according to claim 5, wherein: the calculating of the field-of-view blind zones of the upper laser radar and the lower laser radar based on the position information of the upper laser radar and the lower laser radar and the real-time three-dimensional space information of the bucket further comprises:
taking the upper laser radar as a starting point to make two tangential lines with the upper boundary and the lower boundary of the bucket, and determining a visual field blind area of the upper laser radar;
and taking the lower laser radar as a starting point to make two tangential lines with the upper boundary and the lower boundary of the bucket, and determining a visual field blind area of the lower laser radar.
8. The unmanned loader obstacle sensing method based on bucket position rejection according to claim 7, wherein: the acquiring of the point cloud data of the upper laser radar and the lower laser radar and the completing of the point cloud data based on the field-of-view blind zones further comprises:
the method comprises the steps that a field blind area of an upper laser radar is complemented by an area corresponding to the field blind area of the upper laser radar in point cloud data of the lower laser radar, so that complemented point cloud data of the upper laser radar is obtained;
and complementing the visual field blind area of the lower laser radar by adopting an area corresponding to the visual field blind area of the lower laser radar in the point cloud data of the upper laser radar to obtain the complemented point cloud data of the lower laser radar.
9. The unmanned loader obstacle sensing method based on bucket position rejection of claim 8, wherein: the stitching and fusing of the completed point cloud data to obtain the stitched point cloud data further comprises:
and calculating the relative position relation between the upper laser radar and the lower laser radar based on the position information of the upper laser radar and the lower laser radar, and carrying out splicing and fusion on the completed upper laser radar point cloud data and the completed lower laser radar point cloud data based on the relative position relation to obtain the spliced point cloud data.
10. A method of unmanned loader obstacle sensing based on bucket position rejection as in claim 9, wherein: the generating a three-dimensional grid image based on the integrated point cloud data further comprises:
and calculating the three-dimensional space grid occupation situation corresponding to the integrated point cloud data, and generating a loader front three-dimensional grid image with the bucket removed based on the three-dimensional space grid occupation situation.
CN202311617595.0A 2023-11-30 2023-11-30 Unmanned loader obstacle sensing method based on bucket position rejection Active CN117331093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311617595.0A CN117331093B (en) 2023-11-30 2023-11-30 Unmanned loader obstacle sensing method based on bucket position rejection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311617595.0A CN117331093B (en) 2023-11-30 2023-11-30 Unmanned loader obstacle sensing method based on bucket position rejection

Publications (2)

Publication Number Publication Date
CN117331093A 2024-01-02
CN117331093B 2024-01-26

Family

ID=89279574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311617595.0A Active CN117331093B (en) 2023-11-30 2023-11-30 Unmanned loader obstacle sensing method based on bucket position rejection

Country Status (1)

Country Link
CN (1) CN117331093B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902857A (en) * 2019-01-22 2019-06-18 江苏徐工工程机械研究院有限公司 A kind of haulage vehicle gatehead automatic planning and system
CN109948189A (en) * 2019-02-19 2019-06-28 江苏徐工工程机械研究院有限公司 A kind of excavator bucket material volume and weight measuring system
CN110306622A (en) * 2019-06-18 2019-10-08 江苏徐工工程机械研究院有限公司 A kind of working device of loader lift height autocontrol method, apparatus and system
CN111099504A (en) * 2019-12-17 2020-05-05 北汽福田汽车股份有限公司 Crane control method and device and vehicle
CN111364549A (en) * 2020-02-28 2020-07-03 江苏徐工工程机械研究院有限公司 Synchronous drawing and automatic operation method and system based on laser radar
CN111771032A (en) * 2018-03-30 2020-10-13 株式会社小松制作所 Control device for working machine, control device for excavating machine, and control method for working machine
CN112837482A (en) * 2021-01-06 2021-05-25 上海三一重机股份有限公司 Electronic enclosure system for excavator, control method and electronic equipment
CN113107044A (en) * 2021-04-21 2021-07-13 立澈(上海)自动化有限公司 Method and device for determining position of bucket of excavator and electronic equipment
US20210223400A1 (en) * 2020-01-20 2021-07-22 Doosan Infracore Co., Ltd. System and method of controlling wheel loader
CN214335536U (en) * 2021-03-02 2021-10-01 浙江省交通集团检测科技有限公司 Multi-sensor integrated intelligent driving system of asphalt concrete mixing plant loader
WO2022215898A1 (en) * 2021-04-09 2022-10-13 현대두산인프라코어(주) Sensor fusion system and sensing method for construction equipment


Also Published As

Publication number Publication date
CN117331093B (en) 2024-01-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant