CN116220141A - Auxiliary sensing method and device for excavator and excavator - Google Patents

Auxiliary sensing method and device for excavator and excavator

Info

Publication number
CN116220141A
CN116220141A (application CN202310125845.2A)
Authority
CN
China
Prior art keywords
data
excavator
vehicle body
bucket
dimensional
Prior art date
Legal status
Pending
Application number
CN202310125845.2A
Other languages
Chinese (zh)
Inventor
何勇
董洋
崔帅
Current Assignee
Sany Heavy Machinery Ltd
Original Assignee
Sany Heavy Machinery Ltd
Priority date
Filing date
Publication date
Application filed by Sany Heavy Machinery Ltd filed Critical Sany Heavy Machinery Ltd
Priority to CN202310125845.2A
Publication of CN116220141A

Classifications

    • E FIXED CONSTRUCTIONS
        • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
            • E02F DREDGING; SOIL-SHIFTING
                • E02F3/00 Dredgers; Soil-shifting machines
                    • E02F3/04 Dredgers; Soil-shifting machines mechanically-driven
                        • E02F3/28 Dredgers; Soil-shifting machines mechanically-driven with digging tools mounted on a dipper- or bucket-arm, i.e. there is either one arm or a pair of arms, e.g. dippers, buckets
                            • E02F3/36 Component parts
                                • E02F3/42 Drives for dippers, buckets, dipper-arms or bucket-arms
                                    • E02F3/43 Control of dipper or bucket position; Control of sequence of drive operations
                • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
                    • E02F9/20 Drives; Control devices
                        • E02F9/2025 Particular purposes of control systems not otherwise provided for
                        • E02F9/2058 Electric or electro-mechanical or mechanical control devices of vehicle sub-units
    • G PHYSICS
        • G01 MEASURING; TESTING
            • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
                • G01B11/00 Measuring arrangements characterised by the use of optical techniques
                    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
            • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
                • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
                    • G01S17/88 Lidar systems specially adapted for specific applications
                        • G01S17/93 Lidar systems specially adapted for anti-collision purposes
                            • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T10/00 Road transport of goods or passengers
                    • Y02T10/10 Internal combustion engine [ICE] based vehicles
                        • Y02T10/40 Engine management systems

Abstract

The application relates to the technical field of excavators, and in particular to an auxiliary sensing method and device for an excavator, and an excavator. When the method and device are applied, after the three-dimensional scene data of the current working area where the excavator is located are acquired, the scene data are clipped to obtain simplified effective three-dimensional scene data, which reduces the data volume, improves the overall execution efficiency of the auxiliary sensing method and saves computing power. A first spatial position of the bucket relative to the fixed vehicle body in the current state is then determined from the current attitude data and structural parameters of the excavator, and a second spatial position of the bucket relative to the current working area is determined from the effective three-dimensional scene data and the first spatial position. This second spatial position gives the specific position of the bucket in the current state, so that precise control can be executed and collision accidents avoided.

Description

Auxiliary sensing method and device for excavator and excavator
Technical Field
The present application relates to the technical field of excavators, and in particular to an auxiliary sensing method and device for an excavator, and to an excavator.
Background
A remotely controlled excavator removes the physical space-time constraints on operators, improves the flexibility and efficiency of operation, and avoids the risks operators face in harsh environments or dangerous working conditions. Moreover, since the machine itself suffers no physical fatigue and faces no personal-safety issues, working efficiency, working hours and working range can all be greatly increased. With the development of computer vision technology in recent years, its application in three-dimensional reconstruction and three-dimensional measurement has become increasingly widespread.
When remotely controlling the excavator, the operator can only refer to the two-dimensional plane image of the site captured by a camera and lacks three-dimensional perception of the site, so empty-bucket strokes easily occur during excavation. With the rapid development of computer technology and artificial intelligence, automatic real-time detection and tracking of an observed target by machine-learning techniques has become an inevitable trend. How to know, through three-dimensional sensing technology, the accurate position of the excavator bucket in the current working area during operation is a technical problem to be solved in this field.
Disclosure of Invention
In view of the above, the present application provides an auxiliary sensing method and device for an excavator, and an excavator, by which the accurate position of the excavator bucket in the current working area can be known, making excavation work more accurate.
In a first aspect, the present application provides an excavator auxiliary sensing method applied to an excavator, wherein the excavator comprises a fixed vehicle body and a movable vehicle body, and the movable vehicle body comprises a movable arm, a bucket rod and a bucket. The excavator auxiliary sensing method comprises the following steps: acquiring three-dimensional scene data of the current working area where the excavator is located, the three-dimensional scene data being obtained by scanning with the fixed vehicle body as the reference system; clipping the three-dimensional scene data to obtain effective three-dimensional scene data within a preset area range; acquiring current attitude data and structural parameters of the excavator; determining a first spatial position of the bucket in the reference system of the fixed vehicle body in the current state according to the current attitude data and the structural parameters; and determining a second spatial position of the bucket relative to the current working area according to the effective three-dimensional scene data and the first spatial position.
When the method is used, after the three-dimensional scene data of the current working area where the excavator is located are acquired, the scene data are clipped to obtain simplified effective three-dimensional scene data, which reduces the data volume, improves the overall execution efficiency of the auxiliary sensing method and saves computing power. The first spatial position of the bucket relative to the fixed vehicle body in the current state is then obtained from the current attitude data and structural parameters of the excavator, and the second spatial position of the bucket relative to the current working area is obtained from the effective three-dimensional scene data and the first spatial position, that is, the specific position of the bucket in the current state, according to which precise control can be executed and collision accidents avoided.
With reference to the first aspect, in one possible implementation, a laser radar is provided on the fixed vehicle body, and acquiring the three-dimensional scene data of the current working area of the excavator, obtained by scanning with the fixed vehicle body as the reference system, comprises: acquiring first radar point cloud data of the ground of the current working area detected by the laser radar, the first radar point cloud data taking the laser radar as the reference point; acquiring the structural parameters of the excavator; determining, according to the structural parameters, a third spatial position on the fixed vehicle body of the hinge point between the movable arm and the fixed vehicle body; acquiring a fourth spatial position of the laser radar on the fixed vehicle body; and converting, according to the third spatial position and the fourth spatial position, the first radar point cloud data into second radar point cloud data taking the hinge point as the reference point, and taking the second radar point cloud data as the three-dimensional scene data.
With reference to the first aspect, in one possible implementation, a depth-sensing camera is arranged on the fixed vehicle body, and acquiring the three-dimensional scene data of the current working area of the excavator, obtained by scanning with the fixed vehicle body as the reference system, comprises: acquiring first depth image data of the ground of the current working area detected by the depth-sensing camera, the first depth image data taking the depth-sensing camera as the reference point; acquiring the structural parameters of the excavator; determining, according to the structural parameters, a third spatial position on the fixed vehicle body of the hinge point between the movable arm and the fixed vehicle body; acquiring a fifth spatial position of the depth-sensing camera on the fixed vehicle body; and converting, according to the third spatial position and the fifth spatial position, the first depth image data into second depth image data taking the hinge point as the reference point, and taking the second depth image data as the three-dimensional scene data.
With reference to the first aspect, in one possible implementation, the method further comprises: acquiring current control instruction data; and obtaining a vertical working area of the bucket in the vertical direction according to the current control instruction data. Clipping the three-dimensional scene data to obtain the effective three-dimensional scene data within the preset area range then comprises: obtaining a clipping area on the ground of the current working area according to the vertical working area; and clipping the three-dimensional scene data according to the clipping area to obtain the effective three-dimensional scene data.
With reference to the first aspect, in one possible implementation, acquiring the current attitude data and structural parameters of the excavator comprises: acquiring first attitude data and first structural data of the movable arm; acquiring second attitude data and second structural data of the bucket rod; acquiring third attitude data and third structural data of the bucket; and acquiring fourth structural data of the fixed vehicle body. Determining the first spatial position of the bucket in the reference system of the fixed vehicle body in the current state according to the current attitude data and the structural parameters comprises: determining the first spatial position according to the first, second, third and fourth structural data and the first, second and third attitude data.
With reference to the first aspect, in one possible implementation, determining the first spatial position of the bucket in the reference system of the fixed vehicle body in the current state according to the current attitude data and the structural parameters further comprises: determining a center tooth tip on the bucket and its first tooth-tip spatial position in the reference system of the fixed vehicle body according to the first, second, third and fourth structural data and the first, second and third attitude data. Determining the second spatial position of the bucket relative to the current working area according to the effective three-dimensional scene data and the first spatial position then comprises: obtaining a first ground clearance of the center tooth tip according to the effective three-dimensional scene data and the first tooth-tip spatial position.
With reference to the first aspect, in one possible implementation, determining the first spatial position of the bucket in the reference system of the fixed vehicle body in the current state according to the current attitude data and the structural parameters further comprises: determining each tooth tip on the bucket and its second tooth-tip spatial position in the reference system of the fixed vehicle body according to the first, second, third and fourth structural data and the first, second and third attitude data. Determining the second spatial position of the bucket relative to the current working area according to the effective three-dimensional scene data and the first spatial position then comprises: obtaining a second ground clearance of each tooth tip according to the effective three-dimensional scene data and the second tooth-tip spatial positions.
With reference to the first aspect, in one possible implementation, after acquiring the three-dimensional scene data of the current working area where the excavator is located with the fixed vehicle body as the reference system, the method further comprises: detecting whether the data points in the three-dimensional scene data are mutually continuous; and, if discontinuous unknown points exist in the three-dimensional scene data, calculating the data points corresponding to the unknown points from the data points adjacent to the unknown points.
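The patent does not specify how the unknown points are computed from their neighbours; one plausible realisation, sketched below purely as an illustration, treats the scene data as a 2-D height grid and fills each unknown cell with the average of its valid 4-neighbours (the grid layout and function name are assumptions, not the patent's actual data structure):

```python
def fill_unknown_points(grid):
    """Fill None entries in a 2-D height/depth grid by averaging the
    valid 4-neighbours, one possible realisation of the neighbour-based
    completion step described above (illustrative only)."""
    rows, cols = len(grid), len(grid[0])
    filled = [row[:] for row in grid]  # do not modify the input in place
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None:
                neighbours = [
                    grid[nr][nc]
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] is not None
                ]
                if neighbours:
                    filled[r][c] = sum(neighbours) / len(neighbours)
    return filled
```

A cell whose neighbours are all unknown is left unfilled; a real implementation might iterate until convergence or fall back to a larger neighbourhood.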
In a second aspect, the present application provides an excavator auxiliary sensing device applied to an excavator, the excavator comprising a fixed body and a movable body, the movable body comprising a boom, an arm and a bucket. The excavator auxiliary sensing device comprises: a scene data acquisition module configured to acquire three-dimensional scene data of the current working area where the excavator is located, obtained by scanning with the fixed vehicle body as the reference system, and to clip the three-dimensional scene data to obtain effective three-dimensional scene data within a preset area range; an excavator parameter acquisition module configured to acquire current attitude data and structural parameters of the excavator; and a bucket position calculation module in communication with the scene data acquisition module and the excavator parameter acquisition module, configured to determine a first spatial position of the bucket in the reference system of the fixed vehicle body in the current state according to the current attitude data and the structural parameters, and to obtain a second spatial position of the bucket relative to the current working area according to the effective three-dimensional scene data and the first spatial position.
Since the second aspect is the device corresponding to the first aspect, its technical effects are not repeated here.
In a third aspect, the present application provides an excavator comprising a fixed body and a movable body, the movable body comprising a boom, an arm and a bucket. The excavator further comprises: a scene data detection module arranged on the fixed vehicle body and configured to scan, with the fixed vehicle body as the reference system, the three-dimensional scene data of the current working area where the excavator is located; and the excavator auxiliary sensing device described above, wherein the scene data acquisition module is communicatively connected to the scene data detection module.
Since the third aspect includes the second aspect, its technical effects are not repeated here.
Drawings
Fig. 1 is a schematic diagram of method steps of an excavator auxiliary sensing method according to an embodiment of the present application.
Fig. 2 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 3 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 4 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 5 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 6 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 7 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 8 is a schematic diagram illustrating steps of an excavator auxiliary sensing method according to another embodiment of the present application.
Fig. 9 is a schematic structural diagram of an auxiliary sensing device for an excavator according to an embodiment of the present application.
Fig. 10 is a view illustrating a use state of an excavator according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are merely some, rather than all, of the embodiments of the present application; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of protection of the present application.
Exemplary excavator auxiliary sensing method
The application provides an excavator auxiliary sensing method which is applied to an excavator.
In one embodiment, as shown in FIG. 1, the excavator auxiliary sensing method comprises:
and 110, acquiring three-dimensional scene data of a current working area where the excavator is located, wherein the fixed vehicle body is used as a reference system.
In this step, the fixed vehicle body is taken as the reference coordinate system, and the current working area where the excavator is located is scanned by a detection device to obtain the three-dimensional scene data of the current working area.
Step 120, clipping the three-dimensional scene data to obtain the effective three-dimensional scene data within the preset area range.
In this step, since the volume of scene data detected by the detection device is large while the movable arm, bucket rod and bucket of the excavator work only within a small range, the three-dimensional scene data are clipped to reduce the data volume. The working range of the movable arm, bucket rod and bucket, or a range slightly larger than it, is preset as the preset area range, and the three-dimensional scene data are then clipped to this range to obtain the effective three-dimensional scene data. This step improves the overall execution efficiency of the auxiliary sensing method and saves computing power.
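As a minimal sketch of this clipping step, one can assume the scene data are a list of (x, y, z) points in the fixed-body frame and the preset area range is an axis-aligned rectangle on the ground plane; both assumptions are illustrative, since the patent does not fix a data layout:

```python
def clip_scene(points, x_range, y_range):
    """Keep only scene points whose ground-plane coordinates fall inside
    the preset working region. Coordinates are taken to be in the
    fixed-body frame; names and ranges are illustrative assumptions."""
    (xmin, xmax), (ymin, ymax) = x_range, y_range
    return [
        (x, y, z) for (x, y, z) in points
        if xmin <= x <= xmax and ymin <= y <= ymax
    ]
```

Discarding out-of-range points before any further processing is what yields the efficiency gain the paragraph above describes.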
Step 130, acquiring current attitude data and structural parameters of the excavator.
In this step, the current attitude of the movable body of the excavator can be obtained through an IMU (Inertial Measurement Unit) or another type of sensor on the excavator. The structural parameters of the excavator are then acquired, from which the specific shape and size of the fixed body and the movable body can be known. The structural parameters are stored in the on-board system of the excavator in advance.
Step 140, determining a first spatial position of the bucket in the reference system of the fixed vehicle body in the current state according to the current attitude data and the structural parameters.
In this step, the position of the bucket relative to the fixed vehicle body can be calculated from the current attitude and the shape and size; that is, with the fixed vehicle body as the reference coordinate system, the position of the bucket in that coordinate system is obtained, and this position is the first spatial position.
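This calculation can be pictured as planar forward kinematics over the boom, arm and bucket links. The sketch below assumes a simplified planar chain with each link's absolute angle measured from the horizontal in the fixed-body frame; the parameterisation is an illustrative assumption, not the patent's actual formulation:

```python
import math

def bucket_tip_position(boom_len, arm_len, bucket_len,
                        boom_angle, arm_angle, bucket_angle):
    """Planar forward kinematics for a boom-arm-bucket chain.
    Link lengths come from the structural parameters, angles (radians,
    absolute, from the horizontal) from the attitude data; returns the
    bucket tip (x, z) relative to the boom hinge point. Convention and
    names are illustrative assumptions."""
    x = (boom_len * math.cos(boom_angle)
         + arm_len * math.cos(arm_angle)
         + bucket_len * math.cos(bucket_angle))
    z = (boom_len * math.sin(boom_angle)
         + arm_len * math.sin(arm_angle)
         + bucket_len * math.sin(bucket_angle))
    return x, z
```

A full implementation would work in 3-D, include the swing azimuth and the hinge-point offset on the fixed body, and use the actual linkage geometry.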
Step 150, determining a second spatial position of the bucket relative to the current working area according to the effective three-dimensional scene data and the first spatial position.
In this step, since the effective three-dimensional scene data and the first spatial position both take the fixed vehicle body as the reference coordinate system, the position of the bucket relative to the current working area, that is, the second spatial position, can be obtained by coordinate conversion. Obtaining the second spatial position means knowing the specific position of the bucket in the current working area, so that precise excavator control can be executed and collision between the movable body of the excavator and obstacles avoided.
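One hedged way to realise this comparison, assuming both inputs share the fixed-body frame, is to estimate the bucket point's height above the terrain from the effective scene points directly beneath it; the search radius is an invented parameter and the averaging strategy is an assumption:

```python
def ground_clearance(bucket_xyz, terrain_points, radius=0.5):
    """Estimate the bucket point's height above the terrain by averaging
    the z of effective scene points within `radius` of it in the ground
    plane. Both inputs are assumed to be in the fixed-body frame; the
    radius and averaging are illustrative assumptions."""
    bx, by, bz = bucket_xyz
    below = [z for (x, y, z) in terrain_points
             if (x - bx) ** 2 + (y - by) ** 2 <= radius ** 2]
    if not below:
        return None  # no terrain data under the bucket
    return bz - sum(below) / len(below)
```

The same query, applied per tooth tip, yields the per-tip ground clearances described in the implementations of the first aspect.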
When the method is used, after the three-dimensional scene data of the current working area where the excavator is located are acquired, the scene data are clipped to obtain simplified effective three-dimensional scene data, which reduces the data volume, improves the overall execution efficiency of the auxiliary sensing method and saves computing power. The first spatial position of the bucket relative to the fixed vehicle body in the current state is then obtained from the current attitude data and structural parameters of the excavator, and the second spatial position of the bucket relative to the current working area is obtained from the effective three-dimensional scene data and the first spatial position, that is, the specific position of the bucket in the current state, according to which precise control can be executed and collision accidents avoided. Furthermore, all reference coordinate systems in the present application follow the same convention, e.g. the right-handed coordinate system convention.
In one embodiment, a laser radar is arranged on the fixed vehicle body; that is, in this embodiment the three-dimensional scene data are obtained by laser radar scanning. As shown in fig. 2, step 110 includes:
and 111, acquiring first radar point cloud data of the ground of the current working area, which is detected by the laser radar.
In this step, the first radar point cloud data take the laser radar as the reference point; specifically, the laser radar serves as the coordinate origin of their reference system.
Step 112, acquiring the structural parameters of the excavator.
In this step, by acquiring the structural parameters of the excavator, the specific shape and size of the fixed body and the movable body can be known. The structural parameters are stored in the on-board system of the excavator in advance.
Step 113, determining a third spatial position, on the fixed vehicle body, of the hinge point between the movable arm and the fixed vehicle body according to the structural parameters.
In this step, the specific position of the hinge point on the fixed vehicle body, that is, the third spatial position, can be known from the structural parameters.
Step 114, obtaining a fourth spatial position of the laser radar on the fixed vehicle body.
In this step, the installation position of the laser radar on the fixed vehicle body, that is, the fourth spatial position, is known. The installation position is stored in the on-board system of the excavator in advance.
Step 115, converting the first radar point cloud data into second radar point cloud data taking the hinge point as the reference point according to the third spatial position and the fourth spatial position, and taking the second radar point cloud data as the three-dimensional scene data.
In this step, since the third and fourth spatial positions are known, the first radar point cloud data are translated in the coordinate system so that the coordinate origin of their reference system moves to the hinge point; the second radar point cloud data, with the hinge point as the coordinate origin, are then used as the three-dimensional scene data. In practice, the laser radar is mounted on the upper carriage of the excavator, its pose is adjusted so that its field of view covers the working area, and the terrain point cloud is clipped by a preset filtering algorithm to obtain the second radar point cloud data, which provides the remote operator with reduced terrain information in all weather conditions without imposing a large transmission burden.
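Because both mounting positions are expressed in the fixed-body frame, the re-referencing reduces to a translation when the sensor axes are assumed aligned with the body axes (a simplification for illustration; a real installation would also apply a rotation from extrinsic calibration):

```python
def to_hinge_frame(points, lidar_pos, hinge_pos):
    """Translate lidar-frame points into the hinge-point frame.
    lidar_pos is the fourth spatial position and hinge_pos the third,
    both in the fixed-body frame. Assumes sensor axes are aligned with
    the body axes, so a pure translation suffices (illustrative)."""
    dx, dy, dz = (l - h for l, h in zip(lidar_pos, hinge_pos))
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]
```

A point's position relative to the hinge is its position relative to the lidar plus the lidar's offset from the hinge, which is exactly what the offset computes.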
In one embodiment, a depth-sensing camera is arranged on the fixed vehicle body; that is, in this embodiment the three-dimensional scene data are obtained by scanning with the depth-sensing camera. As shown in fig. 3, step 110 includes:
Step 116, acquiring first depth image data of the ground of the current working area detected by the depth-sensing camera.
In this step, the first depth image data take the depth-sensing camera as the reference point; specifically, the depth-sensing camera serves as the coordinate origin of their reference system.
Step 112, acquiring the structural parameters of the excavator.
In this step, by acquiring the structural parameters of the excavator, the specific shape and size of the fixed body and the movable body can be known. The structural parameters are stored in the on-board system of the excavator in advance.
Step 113, determining a third spatial position, on the fixed vehicle body, of the hinge point between the movable arm and the fixed vehicle body according to the structural parameters.
In this step, the specific position of the hinge point on the fixed vehicle body, that is, the third spatial position, can be known from the structural parameters.
Step 117, obtaining a fifth spatial position of the depth-sensing camera on the fixed vehicle body.
In this step, the installation position of the depth-sensing camera on the fixed vehicle body, that is, the fifth spatial position, is known. The installation position is stored in the on-board system of the excavator in advance.
Step 118, converting the first depth image data into second depth image data taking the hinge point as the reference point according to the third spatial position and the fifth spatial position, and taking the second depth image data as the three-dimensional scene data.
In this step, since the third and fifth spatial positions are known, the first depth image data are translated in the coordinate system so that the coordinate origin of their reference system moves to the hinge point; the second depth image data, with the hinge point as the coordinate origin, are then used as the three-dimensional scene data.
In one embodiment, as shown in fig. 4, the excavator auxiliary sensing method further comprises:
step 160, obtaining current control instruction data.
Step 170, obtaining a vertical work area of the bucket in the vertical direction according to the current control instruction data.
In application, the upper carriage of the excavator may execute a swing action, and the movable arm, the bucket rod, and the bucket rotate with it. The current swing azimuth of the upper carriage can be determined from the current control instruction data, and from it the current azimuth of the movable arm, the bucket rod, and the bucket. Since the movable arm, the bucket rod, and the bucket all move within a vertical plane, the bucket can only reach the vertical region at the current azimuth; that region is the vertical work area.
Wherein step 120 comprises:
Step 121, obtaining a clipping region on the ground of the current work area according to the vertical work area.
In this step, the projection of the vertical work area on the ground can be used as the clipping region. Alternatively, a region slightly larger than the projection is used, with the amount by which it exceeds the projection preset.
Step 122, clipping the three-dimensional scene data according to the clipping region to obtain the effective three-dimensional scene data.
In this step, the three-dimensional scene data are clipped so that only the data inside the clipping region obtained in step 121 are retained; these data serve as the effective three-dimensional scene data. This simplifies the scene data and confines the data set precisely to the ground area the bucket can reach. Since the bucket can only reach the ground area corresponding to the clipping region, only the scene data within that area are needed; the bucket cannot reach other ground areas, so their data need not be known. The amount of computation is thus greatly reduced, and the overall efficiency of the method is improved.
In one embodiment, as shown in FIG. 5, step 130 includes:
Step 131, acquiring first attitude data and first structure data of the movable arm.
Step 132, acquiring second attitude data and second structure data of the bucket rod.
Step 133, acquiring third attitude data and third structure data of the bucket.
Step 134, acquiring fourth structure data of the fixed vehicle body.
Wherein step 140 includes:
Step 141, determining the first spatial position according to the first structure data, the second structure data, the third structure data, the fourth structure data, the first attitude data, the second attitude data, and the third attitude data.
In application of this embodiment, the movable arm, the bucket rod, and the bucket can be regarded as a multi-axis robot arm, and their attitudes can be obtained in real time from sensors such as IMUs mounted at different positions, according to a robot kinematics model. In step 141, a DH (Denavit-Hartenberg) model of the movable arm, the bucket rod, and the bucket is created from the design model of the excavator; feeding their attitude and structure data into the DH model yields the current first spatial position of the bucket relative to the fixed vehicle body.
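The DH model itself is not given in the patent; a simplified planar forward-kinematics sketch conveys the idea, treating the movable arm, bucket rod, and bucket as a three-link chain in a vertical plane. Link lengths stand in for the structure data and joint angles for the IMU attitude data; all names are illustrative assumptions.

```python
import math

def bucket_tip_position(lengths, angles):
    """Planar forward kinematics for a three-link boom/arm/bucket chain.

    lengths: link lengths (structure data), base to tip.
    angles:  joint angles in radians (attitude data), each measured relative
             to the previous link.
    Returns (x, z) of the chain tip in the fixed-body reference frame.
    """
    x = z = total = 0.0
    for length, angle in zip(lengths, angles):
        total += angle            # accumulate the absolute link angle
        x += length * math.cos(total)
        z += length * math.sin(total)
    return x, z
```

A full DH model would add the lateral offsets and twist angles of the real linkage; the planar form is enough to show how attitude and structure data combine into a bucket position.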
In one embodiment, as shown in fig. 6, step 140 further includes:
Step 142, determining a first tooth tip spatial position, in the reference frame of the fixed vehicle body, of the center tooth tip on the bucket according to the first structure data, the second structure data, the third structure data, the fourth structure data, the first attitude data, the second attitude data, and the third attitude data.
The third structure data include the specific structural parameters of each part of the bucket, including the specific position on the bucket of the center tooth tip in the middle of the bucket. In this step, when the first spatial position of the bucket relative to the fixed vehicle body is calculated, the position of the center tooth tip in the reference coordinate system of the fixed vehicle body, that is, the first tooth tip spatial position, can be obtained by conversion from the third structure data.
After step 142, step 150 includes:
Step 151, obtaining a first ground clearance of the center tooth tip according to the effective three-dimensional scene data and the first tooth tip spatial position.
In step 151, since the effective three-dimensional scene data and the first tooth tip spatial position both use the fixed vehicle body as the reference coordinate system, the height of the bucket's center tooth tip relative to the ground of the current work area, that is, the first ground clearance, can be obtained by coordinate conversion. Once the first ground clearance is obtained, the specific height of the bucket above the ground of the current work area is known, and accurate excavation can be performed accordingly. For work scenes with uneven pits and bumps, the vertical height of the bucket tooth tips can be fed back in real time in all weather conditions, reducing the empty-bucket rate and improving work efficiency.
Specifically, in the reference coordinate system of the fixed vehicle body, the horizontal directions are taken as the X and Y axes and the vertical direction as the Z axis. From the first tooth tip spatial position, the value Z1 of the center tooth tip on the Z axis is obtained. Because the effective three-dimensional scene data also use the fixed vehicle body as the reference coordinate system, the Z-axis value Z2 of the ground directly below the center tooth tip can be read from them; the difference between Z1 and Z2 is the first ground clearance.
In one embodiment, as shown in fig. 7, step 140 further includes:
Step 143, determining a second tooth tip spatial position, in the reference frame of the fixed vehicle body, of each tooth tip on the bucket according to the first structure data, the second structure data, the third structure data, the fourth structure data, the first attitude data, the second attitude data, and the third attitude data.
The third structure data include the specific structural parameters of each part of the bucket, including the specific position of each tooth tip on the bucket. In this step, when the first spatial position of the bucket relative to the fixed vehicle body is calculated, the position of each tooth tip in the reference coordinate system of the fixed vehicle body can be obtained by conversion from the third structure data, giving the second tooth tip spatial position corresponding to each tooth tip.
Wherein step 150 comprises:
and 152, respectively obtaining the second ground clearance height of each tooth point according to the three-dimensional data of the effective scene and the space position of the second tooth point.
In step 152, the height of each tooth tip relative to the ground of the current working area can be obtained through coordinate conversion, that is, the second ground clearance of each tooth tip is obtained. The second ground clearance is obtained, so that the specific position of the bucket in the current working area can be known more accurately and precisely, and the excavating work can be executed more precisely. For example, in an uneven working scene, the second ground clearance height of all the tooth points is controlled to be smaller than or equal to zero, so that all the tooth points of the bucket can be ensured to be excavated to the ground, namely, the whole bucket can be ensured to be put into excavation work, and the working efficiency is further improved.
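The engagement criterion in the example above reduces to a simple predicate over the per-tip clearances (a hypothetical helper, not from the patent):

```python
def full_bucket_engaged(second_clearances):
    """True when every tooth tip's second ground clearance is at most zero,
    i.e. the whole cutting edge has reached the ground."""
    return all(c <= 0.0 for c in second_clearances)
```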
In one embodiment, as shown in FIG. 8, after step 110, the excavator auxiliary sensing method further comprises:
step 180, detecting whether data points in the three-dimensional data of the scene are continuous with each other.
If the result of step 180 is no, that is, when discontinuous unknown points exist in the three-dimensional scene data, step 190 is executed: the data point corresponding to each unknown point is calculated from its adjacent data points.
In this embodiment, discontinuous unknown points in the three-dimensional scene data indicate places that were not scanned; the data for such points are estimated from adjacent data points, for example by interpolation. Specifically, the coordinates of the points adjacent to an unknown point on its left and right along the X-axis direction are read from the scene data: (X1, Y1, Z1) and (X2, Y2, Z2). Half of the sum of X1 and X2 is then taken as the X coordinate of the unknown point, half of the sum of Y1 and Y2 as its Y coordinate, and half of the sum of Z1 and Z2 as its Z coordinate.
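The midpoint interpolation in the example is, in effect, a component-wise average of the two neighbours; a sketch with illustrative names:

```python
def fill_unknown_point(left, right):
    """Estimate an unscanned point as the component-wise midpoint of its two
    neighbours along the X-axis direction, (X1, Y1, Z1) and (X2, Y2, Z2)."""
    return tuple((a + b) / 2.0 for a, b in zip(left, right))
```

More elaborate schemes (bilinear or spline interpolation over the terrain grid) would fit the same slot; the midpoint rule is simply the one the text describes.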
Exemplary excavator auxiliary awareness apparatus
The application also provides an excavator auxiliary sensing device, applied to an excavator comprising a fixed vehicle body and a movable vehicle body, the movable vehicle body comprising a movable arm, a bucket rod, and a bucket. As shown in fig. 9, the excavator auxiliary sensing device includes: a scene data acquisition module 901, an excavator parameter acquisition module 902, and a bucket position calculation module 903.
The scene data acquisition module 901 is configured to: acquiring three-dimensional scene data of a current working area where the excavator is located, wherein the three-dimensional scene data are obtained by scanning by taking a fixed vehicle body as a reference system; and cutting the three-dimensional scene data to obtain the three-dimensional effective scene data in the range of the preset area.
The excavator parameter acquisition module 902 is configured to: and acquiring current attitude data and structural parameters of the excavator.
The bucket position calculation module 903 is communicatively connected to the scene data acquisition module 901 and the excavator parameter acquisition module 902, respectively, the bucket position calculation module 903 being configured to: determining a first space position of the bucket in a reference system of the fixed vehicle body in the current state according to the current posture data and the structural parameters; and obtaining a second spatial position of the bucket relative to the current working area according to the effective scene three-dimensional data and the first spatial position.
Exemplary excavator
The application also provides an excavator, which comprises a fixed vehicle body, a movable vehicle body, a scene data detection module, and the excavator auxiliary sensing device described above. The movable vehicle body comprises a movable arm, a bucket rod, and a bucket. The scene data detection module is arranged on the fixed vehicle body and is configured to: scan, with the fixed vehicle body as the reference frame, three-dimensional scene data of the current work area where the excavator is located. The scene data detection module can be any of various types of detectors, such as a lidar or a depth sensing camera, and the scene data acquisition module is communicatively connected to it. As shown in fig. 10, the scene data detection module 101 is provided on the fixed vehicle body 100 of the excavator, and its detection direction is inclined downward, so as to detect the three-dimensional scene data of the ground.
The basic principles of the present application have been described above in connection with specific embodiments. However, the advantages, benefits, and effects mentioned in the present application are merely examples, not limitations, and are not to be regarded as necessarily possessed by every embodiment of the present application. The specific details disclosed above are for purposes of illustration and understanding only; the application is not limited to them.
The block diagrams of the devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", and "having" are open-ended, mean "including but not limited to", and are used interchangeably. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It is also noted that in the apparatuses, devices, and methods of the present application, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be considered equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. An excavator auxiliary sensing method is applied to an excavator, and the excavator comprises a fixed vehicle body and a movable vehicle body, wherein the movable vehicle body comprises a movable arm, a bucket rod and a bucket; the auxiliary sensing method for the excavator is characterized by comprising the following steps of:
acquiring three-dimensional scene data of a current working area where the excavator is located, wherein the three-dimensional scene data are obtained by scanning by taking the fixed vehicle body as a reference system;
cutting the three-dimensional scene data to obtain effective three-dimensional scene data in a preset area range;
acquiring current attitude data and structural parameters of the excavator;
determining a first spatial position of the bucket in a reference system of the fixed vehicle body in a current state according to the current attitude data and the structural parameters; and
and determining a second spatial position of the bucket relative to the current working area according to the three-dimensional data of the effective scene and the first spatial position.
2. The excavator auxiliary sensing method according to claim 1, wherein the fixed vehicle body is provided with a laser radar;
the obtaining the three-dimensional scene data of the current working area of the excavator, which is obtained by scanning by taking the fixed vehicle body as a reference system, comprises the following steps:
acquiring first radar point cloud data of the ground of the current operation area, which is detected by the laser radar, wherein the first radar point cloud data is based on the laser radar as a reference point;
obtaining structural parameters of the excavator;
determining a third spatial position of a hinge point of the movable arm and the fixed vehicle body on the fixed vehicle body according to the structural parameter;
acquiring a fourth space position of the laser radar on the fixed vehicle body; and
and according to the third space position and the fourth space position, converting the first radar point cloud data into second radar point cloud data taking the hinge point as a reference point, and taking the second radar point cloud data as the scene three-dimensional data.
3. The excavator auxiliary sensing method according to claim 1, wherein a depth sensing camera is provided on the fixed vehicle body;
the obtaining the three-dimensional scene data of the current working area of the excavator, which is obtained by scanning by taking the fixed vehicle body as a reference system, comprises the following steps:
acquiring first depth image data of the ground of the current operation area detected by the depth sensing camera, wherein the first depth image data is based on the depth sensing camera as a reference point;
obtaining structural parameters of the excavator;
determining a third spatial position of a hinge point of the movable arm and the fixed vehicle body on the fixed vehicle body according to the structural parameter;
acquiring a fifth spatial position of the depth sensing camera on the fixed vehicle body; and
and according to the third space position and the fifth space position, converting the first depth image data into second depth image data taking the hinge point as a reference point, and taking the second depth image data as the three-dimensional scene data.
4. The excavator assistance awareness method of claim 1 further comprising:
acquiring current control instruction data; and
obtaining a vertical working area of the bucket in the vertical direction according to the current control instruction data;
the step of clipping the three-dimensional scene data to obtain the three-dimensional effective scene data in the preset area range comprises the following steps:
obtaining a cutting area on the ground of the current operation area according to the vertical operation area; and
and clipping the three-dimensional scene data according to the clipping region to obtain the three-dimensional effective scene data.
5. The method of claim 1, wherein the obtaining current attitude data and structural parameters of the excavator comprises:
acquiring first attitude data and first structure data of the movable arm;
acquiring second attitude data and second structure data of the bucket rod;
acquiring third attitude data and third structure data of the bucket; and
acquiring fourth structural data of the fixed vehicle body;
wherein, according to the current gesture data and the structural parameters, determining the first spatial position of the bucket in the reference frame of the fixed vehicle body in the current state includes:
and determining the first spatial position according to the first structure data, the second structure data, the third structure data, the fourth structure data, the first attitude data, the second attitude data and the third attitude data.
6. The method of excavator-assisted sensing of claim 5 wherein,
the determining, according to the current attitude data and the structural parameter, a first spatial position of the bucket in the reference frame of the fixed vehicle body in the current state further includes:
determining a first tooth point spatial position, in a reference system of the fixed vehicle body, of a center tooth point on the bucket according to the first structure data, the second structure data, the third structure data, the fourth structure data, the first attitude data, the second attitude data and the third attitude data;
wherein said determining a second spatial position of the bucket relative to the current working area according to the effective scene three-dimensional data and the first spatial position comprises:
obtaining a first ground clearance of the center tooth point according to the effective scene three-dimensional data and the first tooth point spatial position.
7. The method of excavator-assisted sensing of claim 5 wherein,
the determining, according to the current attitude data and the structural parameter, a first spatial position of the bucket in the reference frame of the fixed vehicle body in the current state further includes:
determining a second tooth tip spatial position, in a reference system of the fixed vehicle body, of each tooth tip on the bucket according to the first structure data, the second structure data, the third structure data, the fourth structure data, the first attitude data, the second attitude data and the third attitude data;
wherein said determining a second spatial position of the bucket relative to the current working area according to the effective scene three-dimensional data and the first spatial position comprises:
obtaining a second ground clearance of each tooth tip respectively according to the effective scene three-dimensional data and the second tooth tip spatial position.
8. The excavator assistance-aware method of any one of claims 1 to 7 wherein after the acquiring of the three-dimensional data of the scene of the current work area in which the excavator is located with the fixed vehicle body as a frame of reference, the method further comprises:
detecting whether data points in the three-dimensional scene data are continuous with each other; and
if discontinuous unknown points exist in the three-dimensional scene data, calculating the data points corresponding to the unknown points according to the data points adjacent to the unknown points.
9. An excavator auxiliary sensing device is applied to an excavator, and the excavator comprises a fixed vehicle body and a movable vehicle body, wherein the movable vehicle body comprises a movable arm, a bucket rod and a bucket; the excavator auxiliary sensing device is characterized by comprising:
a scene data acquisition module configured to: acquiring three-dimensional scene data of a current working area where the excavator is located, wherein the three-dimensional scene data are obtained by scanning by taking the fixed vehicle body as a reference system; cutting the three-dimensional scene data to obtain effective three-dimensional scene data in a preset area range;
the excavator parameter acquisition module is configured to: acquiring current attitude data and structural parameters of the excavator; and
the bucket position calculation module is respectively in communication connection with the scene data acquisition module and the excavator parameter acquisition module, and is configured to: determining a first spatial position of the bucket in a reference system of the fixed vehicle body in a current state according to the current attitude data and the structural parameters; and obtaining a second spatial position of the bucket relative to the current working area according to the effective scene three-dimensional data and the first spatial position.
10. An excavator comprises a fixed body and a movable body, wherein the movable body comprises a movable arm, a bucket rod and a bucket; the excavator is characterized by further comprising:
the scene data detection module is arranged on the fixed car body and is configured to: scanning scene three-dimensional data of a current working area where the excavator is located by taking the fixed vehicle body as a reference system; and
the excavator auxiliary awareness apparatus of claim 9, the scene data acquisition module being communicatively coupled to the scene data detection module.
CN202310125845.2A 2023-02-16 2023-02-16 Auxiliary sensing method and device for excavator and excavator Pending CN116220141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310125845.2A CN116220141A (en) 2023-02-16 2023-02-16 Auxiliary sensing method and device for excavator and excavator


Publications (1)

Publication Number Publication Date
CN116220141A true CN116220141A (en) 2023-06-06

Family

ID=86586811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310125845.2A Pending CN116220141A (en) 2023-02-16 2023-02-16 Auxiliary sensing method and device for excavator and excavator

Country Status (1)

Country Link
CN (1) CN116220141A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination