CN116088365A - Visual-based automatic excavation operation unloading point positioning system and method - Google Patents

Visual-based automatic excavation operation unloading point positioning system and method Download PDF

Info

Publication number
CN116088365A
CN116088365A (application CN202211531654.8A)
Authority
CN
China
Prior art keywords
camera
target
coordinate system
coordinates
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211531654.8A
Other languages
Chinese (zh)
Inventor
胡永彪
赵江营
谭鹏
夏晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN202211531654.8A
Publication of CN116088365A
Legal status: Pending

Links

Images

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 - Programme-control systems
    • G05B 19/02 - Programme-control systems electric
    • G05B 19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B 19/042 - Programme control other than numerical control, using digital processors
    • G05B 19/0423 - Input/output
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 - Program-control systems
    • G05B 2219/20 - Pc systems
    • G05B 2219/25 - Pc structure of the system
    • G05B 2219/25257 - Microcontroller
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Operation Control Of Excavators (AREA)

Abstract

The invention provides a vision-based unloading point positioning system and method for autonomous excavation, relating to the technical field of autonomous excavation. The system comprises a camera, a computer and a target. The target is fixed at the unloading point of the excavator. The camera is mounted above the excavator cab at a fixed angle to the cab roof, so that the target stays within the camera's field of view, and collects the position information of the target. The computer is electrically connected to the camera and acquires the target position information collected by the camera in order to control the excavator to unload accurately.

Description

Visual-based automatic excavation operation unloading point positioning system and method
Technical Field
The invention relates to the technical field of autonomous excavation, in particular to a vision-based autonomous excavation operation unloading point positioning system and method.
Background
The hydraulic excavator is among the most widely used construction machines and plays an important role in fields such as transportation construction and mining. Raising its degree of automation and realizing autonomous operation therefore has great social and economic value. During the unloading phase of the excavation cycle, subsequent motion planning and trajectory control are possible only if the key working areas are perceived, in particular if the unloading point is detected and its 3D position determined.
At present, unloading points in excavator operation are located either by manually constructed feature points or by marker-free methods. Neither approach provides accurate environment information, and the resulting positioning deviations mean that the material cannot be placed reliably at the specified unloading point.
Disclosure of Invention
To address these technical problems, the invention provides a vision-based unloading point positioning system and method for autonomous excavation. A target is fixed at the unloading point of the excavator; a camera is mounted above the excavator cab at a fixed angle to the cab roof, so that the target stays within the camera's field of view, and collects the position information of the target; a computer, electrically connected to the camera, acquires the target position information collected by the camera and uses it to control the excavator to unload accurately.
In order to achieve the above purpose, the invention is realized by the following technical scheme:
an autonomous excavation operation unloading point positioning system based on vision comprises a camera, a computer and a target;
the target is fixed at the unloading point of the excavator;
the camera is arranged above a cab of the excavator and forms a fixed angle with the top of the cab so as to ensure that the target is in the visual field range of the camera, and the camera is used for collecting the position information of the target;
the computer is electrically connected with the camera and is used for acquiring the position information of the target acquired by the camera so as to control the excavator to accurately discharge.
Preferably, the camera is a monocular camera.
The vision-based unloading point positioning method for autonomous excavation is applied to the above positioning system and comprises the following steps:
determining a coordinate system of the camera;
determining world coordinates of the target under a camera coordinate system according to the camera coordinate system;
transforming the world coordinates of the target from the camera coordinate system to the global excavator coordinate system;
and determining coordinates of the unloading point according to the target in the global excavator coordinate system.
Preferably, the method for determining world coordinates of the target in the camera coordinate system specifically comprises the following steps:
detecting saddle points on the target with the camera, determining the position of the target from the geometric relations between the saddle points, and computing the world coordinates of the target in the camera coordinate system.
Preferably, the world coordinates of the target in the camera coordinate system are calculated as follows:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} \quad (4.1) $$

where $(u, v)$ are the pixel coordinates of the target image point, $(X_C, Y_C, Z_C)$ are the world coordinates in the camera coordinate system, $f_x$ and $f_y$ are the focal lengths of the camera in the x and y directions, $(u_0, v_0)$ are the coordinates of the principal point where the optical axis intersects the image plane, and $s$ is a scale factor;

formula (4.1) may also be written as

$$ s\,p = K[R \mid t]P \quad (4.2) $$

where $K$ is the camera's intrinsic parameter matrix (usually calibrated before optimization), $R$ is the rotation matrix, $t$ is the translation vector, $P$ is the coordinate vector of a 3D spatial point, and $p$ is the coordinate vector of the corresponding 2D image point;

pose solving is accomplished by minimizing the reprojection error of (4.2), i.e.

$$ \{R, t\} = \arg\min_{R,\,t} \sum_i \sum_j \left\| p_{ij} - \hat{p}_{ij} \right\|^2 \quad (4.3) $$

where $\hat{p}_{ij}$ is the reprojection of the spatial point $P_j$ onto image $i$ according to equation (4.2), and $p_{ij}$ is the image point of the $j$-th point in the $i$-th image;

equation (4.3) is a nonlinear least-squares problem; it is solved by first initializing with the EPnP method and then completing an optimization iteration with the Levenberg-Marquardt algorithm, yielding the world coordinates of the target in the camera coordinate system.
Preferably, transforming the world coordinates of the target from the camera coordinate system to the global excavator coordinate system specifically comprises:

$$ \begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & a_1\cos\theta_1 \\ \sin\theta_1 & \cos\theta_1 & 0 & a_1\sin\theta_1 \\ 0 & 0 & 1 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_C + c_x \\ y_C + c_y \\ z_C + c_z \\ 1 \end{bmatrix} \quad (4.4) $$

where $(x_0, y_0, z_0)$ are the coordinates of a point in the global base coordinate system, $(x_C, y_C, z_C)$ are the coordinates of the point in the camera coordinate system, $(c_x, c_y, c_z)$ are the coordinates of the camera coordinate system origin $O_C$ in the working device coordinate system $O_1\text{-}X_1Y_1Z_1$, $\theta_1$ is the swing angle of the upper structure about the axis $Z_0$, $a_1$ is the link length between the axes $Z_0$ and $Z_1$, and $d_1$ is the link offset from the axis $X_0$ to the axis $X_1$.
Compared with the prior art, the invention has the following beneficial effects. A target is fixed at the excavator unloading point; a camera mounted above the excavator cab at a fixed angle to the cab roof keeps the target within its field of view and collects the target's position information; a computer electrically connected to the camera acquires this position information and controls the excavator to unload accurately. The target-based visual pose calculation method provides rich and accurate environment information and thereby enables more precise positioning.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of discharge point detection and positioning;
FIG. 2 is a schematic diagram of a stereo camera and an excavation coordinate system;
FIG. 3 is a schematic view of a camera secured to the top of an excavator cab.
In the figure: 1. target, 2, camera.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
An autonomous excavation operation unloading point positioning system based on vision comprises a camera 2, a computer and a target 1;
the target 1 is fixed at the unloading point of the excavator;
the camera 2 is arranged above the cab of the excavator at a fixed angle to the cab roof, so that the target 1 stays within the field of view of the camera 2; the angle must be adjusted before use so that the trench, the unloading area and the corresponding area in front of them are completely covered during the excavation motion; the camera 2 is used for collecting the position information of the target 1;
the computer is electrically connected with the camera 2, and is used for acquiring the position information of the target 1 acquired by the camera 2, so as to control the excavator to accurately discharge.
Further, the camera 2 is a monocular camera.
The vision-based unloading point positioning method for autonomous excavation is applied to the above positioning system and comprises the following steps:
determining a coordinate system of the camera 2;
determining world coordinates of the target 1 under a camera 2 coordinate system according to the camera 2 coordinate system;
transforming the world coordinates of the target 1 from the camera 2 coordinate system to the global excavator coordinate system;
and determining the coordinates of the unloading point according to the target 1 in the global excavator coordinate system.
Further, determining the world coordinates of the target 1 in the camera 2 coordinate system specifically comprises: detecting saddle points on the target 1 with the camera 2, determining the position of the target 1 from the geometric relations between the saddle points, and computing the world coordinates of the target 1 in the camera 2 coordinate system.
In this embodiment, the visual pose calculation problem refers to solving the position coordinates and attitude angles of an object in three-dimensional space. Since the position of the unloading point is represented by the target 1, the coordinates of the unloading point can be calculated once the coordinates of the target 1 are determined, completing the detection and positioning of the unloading point in autonomous excavation.
The overall pose calculation method is shown in FIG. 1. First, the pose of the target 1 is estimated: the calibrated camera 2 detects the saddle points on the target 1, and the position of the target 1 is determined from the geometric relations between the saddle points. Then, based on the relative mounting position of the camera 2 on the excavator, the saddle-point image coordinates are converted into 3D coordinates in the excavator coordinate system. Finally, taking the lower-left corner of the target 1 as the origin, the 3D coordinates are offset to obtain the coordinates of the unloading point, completing its positioning.
Furthermore, this embodiment adopts a CALTag target 1. CALTag is a self-identifying target that can be detected in an image accurately and automatically, and compared with other target-based methods it is more robust to occlusion, so CALTag is chosen for the pose calculation of the excavator.
The world coordinates of the target 1 in the camera 2 coordinate system are calculated as follows:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} \quad (4.1) $$

where $(u, v)$ are the pixel coordinates of the target 1 image point, $(X_C, Y_C, Z_C)$ are the world coordinates in the camera 2 coordinate system, $f_x$ and $f_y$ are the focal lengths of the camera 2 in the x and y directions, $(u_0, v_0)$ are the coordinates of the principal point where the optical axis intersects the image plane, and $s$ is a scale factor.

Formula (4.1) may also be written as

$$ s\,p = K[R \mid t]P \quad (4.2) $$

where $K$ is the intrinsic parameter matrix of the camera 2 (usually calibrated before optimization), $R$ is the rotation matrix, $t$ is the translation vector, $P$ is the coordinate vector of a 3D spatial point, and $p$ is the coordinate vector of the corresponding 2D image point.

Pose solving is accomplished by minimizing the reprojection error of (4.2), i.e.

$$ \{R, t\} = \arg\min_{R,\,t} \sum_i \sum_j \left\| p_{ij} - \hat{p}_{ij} \right\|^2 \quad (4.3) $$

where $\hat{p}_{ij}$ is the reprojection of the spatial point $P_j$ onto image $i$ according to equation (4.2), and $p_{ij}$ is the image point of the $j$-th point in the $i$-th image, as shown in FIG. 2.

Equation (4.3) is a nonlinear least-squares problem: it is solved by first initializing with the EPnP method and then completing an optimization iteration with the Levenberg-Marquardt algorithm, yielding the world coordinates of the target 1 in the camera 2 coordinate system.
Further, the transformation relations between the coordinate systems of the excavator are as follows. The base coordinate system $O_0\text{-}X_0Y_0Z_0$ of the excavator is the global coordinate system whose origin is the intersection of the rotation center of the excavator chassis with the ground plane of the crawler; this coordinate system is fixed to the ground and remains unchanged throughout the excavation operation. The working device coordinate system $O_1\text{-}X_1Y_1Z_1$ of the excavator is a local coordinate system whose origin is the hinge point between the boom rotation center and the vehicle body; it changes as the vehicle swings during the excavation operation. The camera coordinate system $O_C\text{-}X_CY_CZ_C$ of the excavator likewise changes as the vehicle swings during the excavation operation. In the base coordinate system $O_0\text{-}X_0Y_0Z_0$, the $X_0$ axis points straight ahead of the crawler, and the $Y_0$ axis is perpendicular to the length direction of the crawler, pointing outward.
The transformation from the camera 2 coordinate system to the excavator coordinate systems varies with the stage of the excavation cycle, of which there are mainly three. During the digging stage the excavator is excavating, so no unloading point detection is needed. During the swing stage, in order to calculate the swing angle of the unloading point relative to the global base coordinate system, the unloading point coordinates in the camera 2 coordinate system $O_C\text{-}X_CY_CZ_C$ must be transformed through the working device coordinate system $O_1\text{-}X_1Y_1Z_1$ into the global base coordinate system $O_0\text{-}X_0Y_0Z_0$. During the unloading stage, in order to calculate the joint-space parameters of the working device at the unloading point, the unloading point coordinates in the camera 2 coordinate system are transformed into the working device coordinate system. From this analysis, the coordinate transformations of the three stages can be written uniformly in the following homogeneous coordinate form:
transforming world coordinates of the target 1 in the camera 2 coordinate system to the target 1 in the global excavator coordinate system, specifically including:
Figure SMS_8
in (x) 0 ,y 0 ,z 0 ) Representing coordinates of points in the global base coordinate system, (x) C ,y C ,z C ) Representing coordinates of a point in the camera 2 coordinate system, (c) x ,c y ,c z ) Origin O representing camera 2 coordinate system C In the working device coordinate system O 1 -X 1 Y 1 Z 1 Lower coordinates, a 1 Indicating the axis Z of the rotary joint 0 With axis Z 1 Length of connecting rod between d 1 Representing an axis X 0 To axis X 1 The connecting rod is offset.
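The homogeneous transformation from the camera frame to the global base frame can be sketched as below. This is a hedged reconstruction under stated assumptions, not the patent's code: the swing-angle symbol `theta1` and the translation-only alignment between the camera frame and the working device frame are assumptions made for illustration.

```python
import numpy as np

def camera_to_base(p_cam, c, a1, d1, theta1):
    """Map a point from the camera frame O_C to the global base frame O_0.
    p_cam:  (3,) point in camera coordinates (x_C, y_C, z_C)
    c:      (3,) camera origin (c_x, c_y, c_z) in the working-device frame O_1
    a1, d1: link length and link offset between the base and O_1 frames
    theta1: swing angle of the upper structure (assumed symbol)."""
    # O_1 <- O_C: translation only (camera assumed axis-aligned with O_1).
    T_1C = np.eye(4)
    T_1C[:3, 3] = np.asarray(c, float)
    # O_0 <- O_1: rotation by theta1 about Z_0 plus the offsets a1, d1.
    ct, st = np.cos(theta1), np.sin(theta1)
    T_01 = np.array([[ct, -st, 0.0, a1 * ct],
                     [st,  ct, 0.0, a1 * st],
                     [0.0, 0.0, 1.0, d1],
                     [0.0, 0.0, 0.0, 1.0]])
    p = np.append(np.asarray(p_cam, float), 1.0)
    return (T_01 @ T_1C @ p)[:3]
```

With `theta1 = 0` and zero offsets the mapping is the identity, which is a quick sanity check that the matrix composition is in the right order.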
To obtain the pose of the unloading point of the excavation operation, the target 1 must be fixed at the unloading point of the excavator. First, the size of the CALTag must be determined. It has to meet the detection requirements of the excavator positioning system: for a narrow dump area a large CALTag is inconvenient to fix on site, while a CALTag that is too small may not be detectable by the positioning system at long measurement distances, so an appropriate target 1 is selected according to the specific working conditions. The CALTag is then arranged in the excavation work area so that it is not buried by earth material during the work, i.e. at some distance from the maximum dump area, while still remaining within the field of view of the camera 2 so that the target 1 can be detected during every unloading pass.
Accordingly, for the target 1 positioning calculation the camera 2 is best fixed on the excavator, as shown in FIG. 3. In this mounting arrangement the camera 2 is fixed above the excavator cab, which ensures that the target 1 is always within the field of view of the camera 2, so that the target 1 can be detected and its pose measured in real time, determining the position of the unloading point of the excavation operation.
To verify the effectiveness of the proposed positioning system, tests were carried out under good lighting conditions; the target 1 used in the tests measured 228 × 228 mm. First, the furthest effective measurement distance of the pose estimation system was determined.
The pitch angle of the target 1 affects the maximum distance at which the camera 2 can measure it, so the CALTag was tested at pitch angles of 0° and 45°. At a pitch angle of 0°, the furthest measurable distance of the positioning system was close to 11 m; at a pitch angle of 45°, it was approximately 8 m.
In the unloading point positioning test, the camera 2 was mounted on the excavator to collect images of the target 1 at the unloading point; the main image area shows the unloading area marked by the target 1. To match the number of subsequent excavation motion cycles, 7 trials were carried out in total, with one image acquired each time the excavator swung to the unloading point. To eliminate errors caused by variations in the swing angle during visual positioning, the target 1 was kept fixed throughout the image-based positioning of the unloading point and the swing angle of the excavator was kept identical in every trial: all trials used a swing angle of 90°, measured and controlled by an inclination sensor.
During the slewing and unloading of the excavator, the positioning algorithm identifies the target 1 in the image area in real time. When the number and positions of the saddle points of the target 1 (shown by red circles) satisfy the recognition threshold conditions, the detection algorithm marks further candidate saddle points (shown by purple circles) according to the geometric distribution of the saddle points. Finally, all marked circles (red and purple) are used to solve for the position of the target 1; conversely, if no red circle appears, positioning has failed. The red circle at the lower-left corner of the target 1 is taken as the origin of the target 1 coordinate system.
This section compares the image-based positioning of the unloading point, mapped to the corresponding point in the real environment, against manual measurements to obtain the positioning error. For ease of analysis, the tests compare the visually measured and manually measured distances between the camera 2 and the origin of the target 1; since equation (4.3) shows that the error distribution is largest along the positioning distance, this yields the maximum positioning error of the method. With the target 1 held at a fixed position, the distance between the target 1 and the lens of the camera 2 was set in advance to 5578 mm, and the actual distance was compared with the visually determined distance. As shown in Table 1, the maximum positioning error is within 48 mm, and the average positioning error over the 7 test images is 37.1 mm; allowing for manual measurement error, this positioning accuracy meets everyday application requirements.
TABLE 1 monocular absolute positioning error of discharge points
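The error analysis above can be reproduced with a few lines of NumPy. Note that the per-frame distances below are made-up illustrative values, not the patent's measured data; only the 5578 mm ground-truth distance comes from the text.

```python
import numpy as np

def positioning_errors(visual_mm, actual_mm=5578.0):
    """Absolute positioning error of each frame's visually measured
    camera-to-target distance against the tape-measured ground truth,
    plus the maximum and mean errors (all values in millimetres)."""
    err = np.abs(np.asarray(visual_mm, dtype=float) - actual_mm)
    return err, float(err.max()), float(err.mean())

# Hypothetical per-frame visual distances for 7 swings (illustration only):
visual = [5590.0, 5612.0, 5560.0, 5601.0, 5545.0, 5620.0, 5586.0]
per_frame, max_err, mean_err = positioning_errors(visual)
```

With the patent's real measurements this computation would yield the reported 48 mm maximum and 37.1 mm mean errors.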
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit and scope of the invention, and it is intended that the invention encompass such modifications and variations as fall within the scope of the appended claims and their equivalents.

Claims (6)

1. The vision-based automatic excavating operation unloading point positioning system is characterized by comprising a camera, a computer and a target;
the target is fixed at the unloading point of the excavator;
the camera is arranged above a cab of the excavator and forms a fixed angle with the top of the cab so as to ensure that the target is in the visual field range of the camera, and the camera is used for collecting the position information of the target;
the computer is electrically connected with the camera and is used for acquiring the position information of the target acquired by the camera so as to control the excavator to accurately discharge.
2. The vision-based autonomous excavation work dump point positioning system of claim 1, wherein the camera employs a monocular camera.
3. A vision-based autonomous excavation work unloading point positioning method applied to the vision-based autonomous excavation work unloading point positioning system according to any one of claims 1 to 2, characterized by comprising the steps of:
determining a coordinate system of the camera;
determining world coordinates of the target under a camera coordinate system according to the camera coordinate system;
transforming world coordinates of the target under the camera coordinate system to the target under the global excavator coordinate system;
and determining coordinates of the unloading point according to the target in the global excavator coordinate system.
4. A method of vision-based autonomous excavation work dump point positioning as defined by claim 3 wherein the method of determining world coordinates of the target in a camera coordinate system specifically comprises:
and detecting saddle points on the target through the camera, determining the position of the target according to the geometric relation between the saddle points, and determining the world coordinates of the target under a camera coordinate system according to the camera coordinate system.
5. The vision-based autonomous excavation operation unloading point positioning method according to claim 3, wherein the world coordinates of the target in the camera coordinate system are calculated as follows:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} \quad (4.1) $$

where $(u, v)$ are the pixel coordinates of the target image point, $(X_C, Y_C, Z_C)$ are the world coordinates in the camera coordinate system, $f_x$ and $f_y$ are the focal lengths of the camera in the x- and y-axis directions, $(u_0, v_0)$ are the coordinates of the principal point where the optical axis intersects the image plane, and $s$ is a scale factor;

formula (4.1) may also be written as

$$ s\,p = K[R \mid t]P \quad (4.2) $$

where $K$ is the intrinsic parameter matrix of the camera (usually calibrated before optimization), $R$ is the rotation matrix, $t$ is the translation vector, $P$ is the coordinate vector of a 3D spatial point, and $p$ is the coordinate vector of the corresponding 2D image point;

pose solving is accomplished by minimizing the reprojection error of (4.2), i.e.

$$ \{R, t\} = \arg\min_{R,\,t} \sum_i \sum_j \left\| p_{ij} - \hat{p}_{ij} \right\|^2 \quad (4.3) $$

where $\hat{p}_{ij}$ is the reprojection of the spatial point $P_j$ onto image $i$ according to equation (4.2), and $p_{ij}$ is the image point of the $j$-th point in the $i$-th image;

equation (4.3) is a nonlinear least-squares problem, solved by first initializing with the EPnP method and then completing an optimization iteration with the Levenberg-Marquardt algorithm, yielding the world coordinates of the target in the camera coordinate system.
6. The vision-based autonomous excavation operation unloading point positioning method according to claim 4, wherein transforming the world coordinates of the target from the camera coordinate system to the global excavator coordinate system specifically comprises:

$$ \begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & a_1\cos\theta_1 \\ \sin\theta_1 & \cos\theta_1 & 0 & a_1\sin\theta_1 \\ 0 & 0 & 1 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_C + c_x \\ y_C + c_y \\ z_C + c_z \\ 1 \end{bmatrix} \quad (4.4) $$

where $(x_0, y_0, z_0)$ are the coordinates of a point in the global base coordinate system, $(x_C, y_C, z_C)$ are the coordinates of the point in the camera coordinate system, $(c_x, c_y, c_z)$ are the coordinates of the camera coordinate system origin $O_C$ in the working device coordinate system $O_1\text{-}X_1Y_1Z_1$, $\theta_1$ is the swing angle of the upper structure, $a_1$ is the link length between the axes $Z_0$ and $Z_1$, and $d_1$ is the link offset from the axis $X_0$ to the axis $X_1$.
CN202211531654.8A 2022-12-01 2022-12-01 Visual-based automatic excavation operation unloading point positioning system and method Pending CN116088365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211531654.8A CN116088365A (en) 2022-12-01 2022-12-01 Visual-based automatic excavation operation unloading point positioning system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211531654.8A CN116088365A (en) 2022-12-01 2022-12-01 Visual-based automatic excavation operation unloading point positioning system and method

Publications (1)

Publication Number Publication Date
CN116088365A (en) 2023-05-09

Family

ID=86200005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211531654.8A Pending CN116088365A (en) 2022-12-01 2022-12-01 Visual-based automatic excavation operation unloading point positioning system and method

Country Status (1)

Country Link
CN (1) CN116088365A (en)

Similar Documents

Publication Publication Date Title
CN110954067B (en) Monocular vision excavator pose measurement system and method based on target
FI74556B (en) FOERFARANDE FOER TREDIMENSIONELL OEVERVAKNING AV ETT MAOLUTRYMME.
AU2004282274B2 (en) Method and device for determining the actual position of a geodetic instrument
CN102798350B (en) Method, device and system for measuring deflection of arm support
CN109115173B (en) Monocular vision measuring method for position and attitude of heading machine body based on linear positioning model
Olson et al. Maximum likelihood rover localization by matching range maps
CN108873904B (en) Unmanned parking method and device for mining vehicle and readable storage medium
CN112050732B (en) Method and system for automatically detecting spatial pose of cantilever type heading machine
JP2509357B2 (en) Work position detector
US11348322B1 (en) Tracking an ongoing construction by using fiducial markers
CN103175512B (en) Shooting measurement method of attitude of tail end of boom of concrete pump truck
Annusewicz et al. Marker detection algorithm for the navigation of a mobile robot
Roos-Hoefgeest et al. Mobile robot localization in industrial environments using a ring of cameras and ArUco markers
Peng et al. A measuring method for large antenna assembly using laser and vision guiding technology
Roshchin Application of a Machine Vision System for Controlling the Spatial Position of Construction Equipment
CN110244717B (en) Port crane climbing robot automatic path finding method based on existing three-dimensional model
CN116088365A (en) Visual-based automatic excavation operation unloading point positioning system and method
JPH08254409A (en) Three-dimensional shape measuring and analyzing method
CN116704019A (en) Drilling and anchoring robot monocular vision positioning method based on anchor rod network
US11905675B2 (en) Vision-based blade positioning
CN114119752A (en) Indoor and outdoor linked robot positioning method based on GNSS and vision
Goll et al. Testing of the system for estimation of mobile robotic platform displacements by the method of a marker triangle
Zhao et al. Dumping Point Localization of Autonomous Excavation Based on Vision in Trenching Tasks
JPH09178447A (en) Target for measuring three-dimensional shape
Weckesser et al. Position correction of a mobile robot using predictive vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination