CN111553948A - Heading machine cutting head positioning system and method based on double tracers - Google Patents


Info

Publication number
CN111553948A
CN111553948A
Authority
CN
China
Prior art keywords
points
camera
cutting head
tracers
tracer
Prior art date
Legal status
Granted
Application number
CN202010345702.9A
Other languages
Chinese (zh)
Other versions
CN111553948B (en)
Inventor
徐昌盛
王华英
张步勤
王学
张雷
郭海军
李佳俊
Current Assignee
Jizhong Energy Fengfeng Group Co ltd
Hebei University of Engineering
Original Assignee
Jizhong Energy Fengfeng Group Co ltd
Hebei University of Engineering
Priority date
Filing date
Publication date
Application filed by Jizhong Energy Fengfeng Group Co ltd, Hebei University of Engineering filed Critical Jizhong Energy Fengfeng Group Co ltd
Priority to CN202010345702.9A priority Critical patent/CN111553948B/en
Publication of CN111553948A publication Critical patent/CN111553948A/en
Application granted granted Critical
Publication of CN111553948B publication Critical patent/CN111553948B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/13 Edge detection
    • G06T 2207/10048 Image acquisition modality: infrared image
    • G06T 2207/30164 Subject of image: workpiece; machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for positioning the cutting head of a heading machine. Two infrared cameras photograph two tracers mounted on the cutting head, and the acquired tracer images undergo contour extraction: one or more optimal regions are segmented from each image according to a set characteristic value, the boundary curve of each region is extracted as a direction chain code, and the perimeter and area of the target region are calculated. From the perimeter and area, the minimum and maximum radii of a circle are computed and used as radial constraint conditions to select qualified boundary points, yielding the regions of the dual tracers on the heading machine. The three-dimensional information of the tracers is then recovered from their two-dimensional image information and combined with the heading machine's own parameters to obtain the three-dimensional coordinates of the cutting head center, thereby positioning the cutting head of the heading machine.

Description

Heading machine cutting head positioning system and method based on double tracers
Technical Field
The invention belongs to the technical field of stereoscopic vision positioning, and relates to a heading machine cutting head positioning system and method based on double tracers.
Background
Coal is one of China's main primary energy sources and the cornerstone of its energy supply. As science and technology progress, the level of fully mechanized mining technology continues to improve, and safe, reliable, economical, and efficient automated coal-mining technology is becoming ever more important.
In recent years various solutions have been proposed at home and abroad, but many problems remain in positioning the cutting head of a heading machine. The invention therefore provides a method for positioning the cutting head of a heading machine.
Most current cutting-head positioning relies on ordinary optical cameras. For example, in May 2014 Shaanxi Huang Ling Mining Co., Ltd. completed a research and application project on intelligent unmanned fully mechanized mining with domestic equipment for 1.4 m to 2.2 m coal seams. Its cutting-head positioning method used an ordinary optical camera without any tracer, so visualization and the accuracy of the cutting head's three-dimensional information were low, and such a method is poorly suited to future digitization and informatization.
A monocular, single-tracer method has also been used for accurate cutting-head positioning, but in actual experiments the positioning proved inaccurate: the three-dimensional coordinates of the cutting head frequently drifted, which could cause major accidents if the method were applied in production.
Disclosure of Invention
The invention provides a double-tracer positioning method which is used for positioning a cutting head of a heading machine, so that the problem of inaccurate positioning of the cutting head in the prior art is solved.
The invention adopts the following technical scheme:
the utility model provides a positioning system of entry driving machine cutterhead which characterized in that: the system comprises a constant-temperature spherical double tracer, a left camera, a right camera and a ground service station; the left camera and the right camera are both infrared cameras; the constant-temperature spherical double tracers are positioned at the front end of the tunneling machine, and the left camera and the right camera are arranged on the tunneling machine and positioned behind the double tracers and used for shooting images of the double tracers; the ground service station is in wireless communication with the left camera and the right camera and is used for processing the images of the double tracers acquired by the left camera and the right camera so as to obtain the accurate position of the cutting head of the heading machine.
A positioning method of a heading machine cutting head comprises the following steps:
1) acquiring images with the left and right infrared cameras, binarizing them, and performing contour extraction on the binarized images;
2) the contour extraction process includes: segmenting the optimal one or more regions in the heading machine scene according to a set characteristic value, extracting the boundary curve of each region as a Freeman chain code, and normalizing it; calculating the perimeter Z and area S of the target region; and calculating the minimum and maximum radii of a circle from Z and S:
Rmin = 2S/Z
Rmax = Z/(2π);
3) determining the circle center from three non-collinear points following a clustering idea, using Rmin and Rmax as radial constraint conditions for preprocessing, and solving for the circle center by least squares; three non-collinear points are selected in turn from the chain code, and by geometry three non-collinear points uniquely determine a circumscribed circle, giving the center coordinates and radius.
4) selecting qualified boundary points according to the radial constraint conditions; if Rmin < R < Rmax is not satisfied, three new non-collinear points are taken in turn and the center and radius R recalculated, until three points satisfying the constraint are fixed as reference points; keeping the first two of the three reference points, the next point is selected from the chain code and the center coordinates and radius R recalculated from the new triple; if Rmin < R < Rmax holds, the point is saved and the next point selected in turn, otherwise the point is deleted from the chain code; this continues until the whole chain code has been traversed, removing invalid boundary points and those with excessive error;
5) fitting the circle center coordinates to the qualified boundary points by least squares, then obtaining the radius R from the distances between the qualified boundary points and the center.
6) if step 2) segmented several optimal regions in the heading machine scene according to the set characteristic value, selecting the region with the largest radius R as the region of the heading machine's dual tracers in the scene.
7) converting the two-dimensional information of the dual tracers in the region obtained in step 6) into three-dimensional information: obtaining the three-dimensional coordinates of the first tracer and the second tracer on the heading machine (the first and second tracers being the dual tracers) from the imaging formulas of the left and right cameras;
8) from the three-dimensional coordinates of the first and second tracers, combined with the diameter of the cutting arm, the contour shape of the cutting head, and its centroid parameters, obtaining the three-dimensional coordinates of the cutting head center, thereby positioning the cutting head.
In this method, dual tracers are used for positioning: the infrared image of the working cutting head is binarized and contour-extracted to obtain the cutting head's contour, and the three-dimensional coordinates of the cutting head center are then obtained from the image coordinates of the dual tracers combined with the parameters of the heading machine and the cutting head, so that the cutting head is accurately positioned.
Drawings
FIG. 1 is an overall schematic view of the cutting head positioning system;
FIG. 2 shows the three coordinate systems in binocular vision;
FIG. 3 shows the camera geometry in binocular vision;
FIG. 4 shows the three-dimensional coordinates of the cutting head center point while the heading machine operates;
FIG. 5 is a schematic diagram of the dual-tracer positioning procedure.
Detailed Description
As shown in fig. 1, a heading machine cutting head positioning system is provided, comprising three parts: constant-temperature spherical dual tracers 1, an infrared camera 2 for identifying the dual tracers, and a ground service station 3 for performing the processing. The constant-temperature spherical dual tracers are located at the front end of the heading machine. The infrared camera comprises a left camera and a right camera, mounted on the heading machine behind the dual tracers to photograph them. The ground service station communicates with the infrared cameras wirelessly or by wire and processes the dual-tracer images they acquire to obtain the accurate position of the cutting head. How the exact position of the cutting head is obtained from the dual-tracer images is described in detail below.
Computer vision is a biomimetic simulation of biological vision: a computer and image-capture devices are used to recover the three-dimensional information of a tracer object from a set of one or more images. This three-dimensional information includes the tracer objects' three-dimensional geometric coordinates, the occlusion and intersection relations between them, their three-dimensional motion information, and so on. Binocular vision is a computer vision algorithm that recovers a tracer object's three-dimensional geometric information from digital images.
Binocular stereo vision is an important form of machine vision. Based on the parallax principle, it acquires two images of the object under measurement from different positions with imaging equipment and obtains the object's three-dimensional geometric information by calculating the positional deviation between corresponding image points. Positioning by binocular stereo vision is called binocular positioning for short.
As shown in fig. 2, the principles of binocular vision involve three coordinate systems: the world coordinate system, the camera coordinate system, and the image coordinate system. A point's coordinates are written P(xw, yw, zw) in the world coordinate system and (xc, yc, zc) in the camera coordinate system. The image coordinates are the two-dimensional coordinates of the image captured by the camera and come in two forms: (u, v), image coordinates in pixels, and (x, y), image coordinates in millimetres. The millimetre image coordinates are needed because the (u, v) coordinates only give a pixel's row and column in the digital image, not its physical position.
The camera imaging geometry can be represented by fig. 1. The point O is called the camera optical center; the xc and yc axes are parallel to the image's x and y axes, and the zc axis is the camera's optical axis, perpendicular to the image plane. The point O together with the axes xc, yc, zc forms a rectangular coordinate system, the camera coordinate system. OO1 is the camera focal length, written f, whose value is determined by the camera itself. The intersection of the optical axis with the image plane is called the principal point O1 of the image coordinate system, with coordinates written (u0, v0), generally at the center of the image. The pixel coordinates of the discrete digital image captured by the camera are written (u, v), with the coordinate origin at the upper-left corner of the image. In general the camera imaging model is the pinhole model, and the imaging transformation matrix is a perspective transformation matrix.
According to the camera imaging principle, a point P(xw, yw, zw) in space images at the point p(u, v), the intersection of the line through P and the camera optical center O with the image plane. If the image points of a spatial point in the left and right cameras are P1(u1, v1) and P2(u2, v2) respectively, then the original spatial point P lies at the intersection of the extensions of the spatial rays O1P1 and O2P2, where O1 and O2 are the optical centers of the left and right cameras; intersecting the rays O1P1 and O2P2 therefore solves for the three-dimensional coordinates of the point P, as shown in fig. 3. Binocular vision can thus be seen as a process of mapping image coordinates to three-dimensional world coordinates.
In order to implement the above three-dimensional reconstruction process, the following two problems must be solved:
(1) To obtain the equations of the spatial rays O1P1 and O2P2, the imaging transformation matrices of the left and right cameras must first be determined and every element in them solved for; this process is called camera calibration.
(2) The pixel points P1(u1, v1) and P2(u2, v2) must correspond to the same spatial point P(xw, yw, zw); only then are the rays O1P1 and O2P2 guaranteed to intersect. The pixels P1 and P2 are called a stereo pair; given one of them, the process of finding the other is called stereo matching.
Based on the above discussion, the basic principle of binocular vision can be described as follows: first calibrate the left and right cameras, determining their respective imaging transformation matrices; then, for a designated pixel P1(u1, v1) in the left camera image, perform stereo matching in the right camera image to find the matching pixel P2(u2, v2); finally, intersect the rays O1P1 and O2P2 to determine the three-dimensional coordinates of the spatial point to which P1 and P2 jointly correspond.
Next, the coordinate calibration of the camera is described. (The terms camera and video camera are used interchangeably here.) The principle of camera calibration is as follows:
The three coordinate systems of the binocular vision principle are: the world coordinate system, in which point coordinates are written P(xw, yw, zw); the camera coordinate system, with P(xc, yc, zc); and the image coordinate system (x, y). The relationship between the three is shown in fig. 2.
1. Camera calibration
Camera calibration is a numerical process for determining the camera projection matrix. Let the principal point O1 have coordinates (u0, v0) in the (u, v) coordinate system, and let dx and dy be the physical size of one pixel in the x and y directions. Then pixel coordinates and physical coordinates (x, y) are related by:
x = dx(u − u0)
y = dy(v − v0)
Written in matrix form:
[u, v, 1]ᵀ = [1/dx 0 u0; 0 1/dy v0; 0 0 1] [x, y, 1]ᵀ   (1)
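The pixel-to-physical conversion above is simple enough to sketch directly. The functions below are a minimal illustration of x = dx(u − u0), y = dy(v − v0) and its inverse; the pixel pitch and principal-point values are example assumptions, not calibrated data.

```python
def pixel_to_mm(u, v, dx, dy, u0, v0):
    """Map pixel indices (u, v) to physical image-plane coordinates (mm)."""
    return dx * (u - u0), dy * (v - v0)

def mm_to_pixel(x, y, dx, dy, u0, v0):
    """Inverse mapping: physical coordinates back to pixel indices."""
    return x / dx + u0, y / dy + v0

# Example: 5 µm square pixels, principal point at the centre of an
# assumed 1280x1024 sensor.
dx = dy = 0.005          # mm per pixel (assumed)
u0, v0 = 640.0, 512.0    # principal point in pixels (assumed)

x, y = pixel_to_mm(800, 600, dx, dy, u0, v0)
print(x, y)              # physical offset from the principal point, in mm
```

The round trip mm_to_pixel(pixel_to_mm(u, v)) returns the original pixel, which is a quick sanity check on the matrix in (1).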
Since the camera may image a scene in the world coordinate system from any angle and position, the relationship between the camera coordinate system and the world coordinate system is described by a rotation matrix R and a translation vector t. The transformation between the coordinates P(xw, yw, zw) and P(xc, yc, zc) of a point P can be written as:
[xc, yc, zc, 1]ᵀ = [R t; 0 1] [xw, yw, zw, 1]ᵀ = M [xw, yw, zw, 1]ᵀ   (2)
where R is a 3 × 3 orthogonal rotation matrix, t is a 3 × 1 translation vector, 0 = (0, 0, 0), and M is a 4 × 4 matrix.
As can be seen from fig. 2, any point P(xc, yc, zc) in the camera coordinate system images on the image plane at P(x, y), the intersection of the line OP through the optical center O and the point P with the image plane. By the perspective formula:
x=fxc/zc
y=fyc/zc
where f is the focal length of the camera and zc is the depth coordinate of the point P in the camera coordinate system. The perspective projection relationship between the camera coordinates and the image coordinates of an object can thus be expressed as
zc [x, y, 1]ᵀ = [f 0 0 0; 0 f 0 0; 0 0 1 0] [xc, yc, zc, 1]ᵀ   (3)
Substituting expressions (1) and (2) into expression (3) gives the formula by which a point P(xw, yw, zw) in the world coordinate system maps to its image p(u, v) on the camera screen. Without loss of generality, a skew parameter γ may be included to represent the skew between the two pixel coordinate axes:
zc [u, v, 1]ᵀ = [f/dx γ u0 0; 0 f/dy v0 0; 0 0 1 0] M2 [xw, yw, zw, 1]ᵀ = M1M2 [xw, yw, zw, 1]ᵀ   (4)
where f/dx, f/dy, γ, u0, v0 depend only on the internal structure of the camera and are called its intrinsic parameters; M1 is called the intrinsic parameter matrix. In the derivation below γ is taken to be 0, since it can be regarded as 0 for most standard cameras. M2 depends on the choice of world coordinate system and the placement of the camera and is called the extrinsic parameter matrix. M = M1M2 is called the 3 × 4 imaging transformation matrix, so that:
zc [u, v, 1]ᵀ = M [xw, yw, zw, 1]ᵀ   (5)
the calibration of the camera is to determine the value of 3 × 4-12 components in the M matrix, and the process can be completed by man-machine interaction and least square method, firstly selecting a standard test object (such as a standard cylindrical surface and a standard inclined plane), selecting a proper world coordinate system, and marking a plurality of test points on the standard test object by pi(i ═ 1,2,3, …, n) three-dimensional coordinates P (x) of the test points are determinedw,yw,zw) Determining the pixel coordinate (u) of each test point in the image by a man-machine interaction methodi,vi) And substituting the equation (5) to obtain N equations about 12 components in the matrix, and solving by using the minimum two multiplication to obtain each component in the matrix, thereby completing the calibration of the camera.
2. Determination of target center point pixel coordinates
According to fig. 3, the center points p1(u1, v1) and p2(u2, v2) at which the target images in the left and right cameras are required; they form a stereo pair and map back to the original target point. When the heading machine works, the cutting unit, which rubs continuously against rock or coal and generates a great deal of heat, is far hotter than surrounding objects and has the largest imaging area of any infrared heat source the infrared camera can capture. The image of the heading machine's cutting unit in the infrared camera can therefore be conveniently obtained by threshold segmentation and area screening.
The basic principle of threshold segmentation is: pixels are classified into several classes by setting a characteristic threshold. Commonly used features include gray-scale or color features taken directly from the original image, as well as features obtained by transforming the original gray or color values. Because the infrared camera's image is presented directly as a gray-scale image, the original image can be segmented by gray-scale range to obtain the image region corresponding to the cutting unit of the heading machine in the real scene.
Let the original image be I(x, y). Eroding and then dilating I(x, y) yields an image I0(x, y) with much of the noise removed. A characteristic value T is found in I0(x, y) according to some rule, and the image is divided into two parts:
g(x, y) = b1 if I0(x, y) ≥ T, and g(x, y) = b0 otherwise   (6)
Taking b0 = 0 and b1 = 255, the thresholded image is directly binarized. Because the cutting unit of a working heading machine is far hotter than the environment and than any other heat source likely to appear in the infrared image, the characteristic value T is simple to choose; experimentally, a value between 230 and 255 generally segments well the one or more image regions corresponding to the cutting unit of the heading machine in the real scene.
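The open-then-threshold step can be sketched with plain numpy: a 3x3 minimum filter (erosion) followed by a 3x3 maximum filter (dilation) suppresses isolated hot pixels, and the result is binarized with T = 230 from the text's suggested range. The tiny test image, the 3x3 structuring element, and the function names are illustrative assumptions.

```python
import numpy as np

def filt3(img, reduce_fn):
    """Apply a 3x3 min- or max-filter (erosion/dilation) with edge padding."""
    p = np.pad(img, 1, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return reduce_fn(np.stack(stack), axis=0)

def segment(img, T=230, b0=0, b1=255):
    """Morphological opening followed by thresholding, as in formula (6)."""
    opened = filt3(filt3(img, np.min), np.max)   # erode, then dilate
    return np.where(opened >= T, b1, b0).astype(np.uint8)

gray = np.full((6, 6), 40, np.uint8)
gray[2:5, 2:5] = 250          # hot region standing in for the cutting unit
gray[0, 0] = 255              # isolated hot pixel = noise
mask = segment(gray)
print(mask)
```

The opening removes the lone bright pixel while the 3x3 hot block survives, so only the cutting-unit region remains at 255 in the binary mask.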
After the one or more image regions corresponding to the cutting unit of the heading machine in the real scene have been determined, the boundary curve of each region is extracted as a Freeman direction chain code and normalized. A point on the tracer boundary curve is selected as the starting point and its coordinates recorded; the next boundary pixel is then searched for clockwise in 8-connectivity, starting from the direction with code 1 (45°). Once found, its direction code is recorded and the procedure repeats from the found pixel, yielding a chain code sequence.
Let the normalized Freeman direction chain code of any region be A(a1, a2, …, an), with corresponding pixel coordinate sequence B(b1, b2, …, bn), where bi(ui, vi) is the pixel coordinate corresponding to chain code ai.
The chain code uses the eight numbers 0 to 7 to represent the direction from a pixel to each of its 8 neighbours: even codes are horizontal or vertical links of length 1, and odd codes are diagonal links of length √2.
The target region perimeter Z can then be expressed as:
Z = ne + √2 (n − ne)
where n is the total number of codes in the chain code sequence and ne the number of even codes. The target region area S can be expressed as the integral of the boundary chain code over the x-axis:
S = Σ ai0 (vi−1 + ai2/2), summed over i = 1, …, n
where vi = vi−1 + ai2, v0 is the ordinate of the initial point, and ai0 and ai2 are the components of the i-th chain code in the k = 0 (horizontal) and k = 2 (vertical) directions. Because the boundary chain codes of the image regions corresponding to the cutting unit of the heading machine are closed, v0 can be chosen arbitrarily.
Following the clustering idea combined with the principle that three non-collinear points determine a circle center, the method uses
Rmin = 2S/Z and Rmax = Z/(2π)
as radial constraint conditions for preprocessing and computes the circle center by least squares.
Three non-collinear points A(ua, va), B(ub, vb), C(uc, vc) are selected in turn from the boundary chain code. By geometry, three non-collinear points uniquely determine a circumscribed circle; writing its center as (u0, v0), we have:
(ua − u0)² + (va − v0)² = (ub − u0)² + (vb − v0)²
(ua − u0)² + (va − v0)² = (uc − u0)² + (vc − v0)²   (7)
simplifying to obtain:
(ua − ub)u0 + (va − vb)v0 = [(ua² − ub²) + (va² − vb²)]/2
(ua − uc)u0 + (va − vc)v0 = [(ua² − uc²) + (va² − vc²)]/2   (8)
Solving by Cramer's rule, with A1 = ua − ub, B1 = va − vb, C1 = [(ua² − ub²) + (va² − vb²)]/2 and A2 = ua − uc, B2 = va − vc, C2 = [(ua² − uc²) + (va² − vc²)]/2:
u0 = (C1B2 − C2B1)/(A1B2 − A2B1)
v0 = (A1C2 − A2C1)/(A1B2 − A2B1)   (9)
From u0 and v0 the radius R can be further determined. If R does not satisfy Rmin < R < Rmax, three new non-collinear points are taken in turn and the center and radius R recalculated until Rmin < R < Rmax holds; the three points satisfying the constraint are fixed as the reference triple. Keeping the first two points of the reference triple, the next point is taken from the chain code and the center coordinates and radius R recalculated from the new triple; if Rmin < R < Rmax holds, the point is saved and the next point taken in turn, otherwise the point is deleted from the chain code. This loop runs until the whole chain code has been traversed, removing invalid boundary points and those with excessive error.
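The three-point circle of equations (7)-(9) can be sketched directly; the denominator A1B2 − A2B1 vanishes exactly when the points are collinear, which gives the degeneracy check for free. The filtering helper below is a simplified variant of the traversal described above (it slides a window of three consecutive points rather than maintaining a reference triple), and the sample points are synthetic.

```python
import math

def circle_from_3pts(a, b, c):
    """Return (u0, v0, R) of the circle through a, b, c, or None if collinear."""
    A1, B1 = a[0] - b[0], a[1] - b[1]
    C1 = ((a[0]**2 - b[0]**2) + (a[1]**2 - b[1]**2)) / 2
    A2, B2 = a[0] - c[0], a[1] - c[1]
    C2 = ((a[0]**2 - c[0]**2) + (a[1]**2 - c[1]**2)) / 2
    det = A1 * B2 - A2 * B1          # zero iff the three points are collinear
    if abs(det) < 1e-12:
        return None
    u0 = (C1 * B2 - C2 * B1) / det   # Cramer's rule, equation (9)
    v0 = (A1 * C2 - A2 * C1) / det
    return u0, v0, math.hypot(a[0] - u0, a[1] - v0)

def radially_valid(pts, rmin, rmax):
    """Keep boundary points whose 3-point circle satisfies rmin < R < rmax."""
    kept = []
    for i in range(len(pts) - 2):
        res = circle_from_3pts(pts[i], pts[i + 1], pts[i + 2])
        if res and rmin < res[2] < rmax:
            kept.append(pts[i])
    return kept

print(circle_from_3pts((1, 0), (0, 1), (-1, 0)))   # points on the unit circle
```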
The circle center is then fitted by least squares to the boundary point coordinates retained in the boundary chain code table. Let the extracted boundary point set be (ui, vi) (i = 1, 2, …, n); the circle minimizing the sum of squared distances to the data points satisfies:
(u − u0)² + (v − v0)² = R²   (10)
Let a1 = Σui, b1 = Σvi, a2 = Σui², b2 = Σvi², a3 = Σui³, b3 = Σvi³, c11 = Σuivi, c12 = Σuivi², c21 = Σui²vi, and let f(u0, v0, R) = Σ((ui − u0)² + (vi − v0)² − R²)². Setting
∂f/∂u0 = ∂f/∂v0 = ∂f/∂R = 0
namely:
−4Σ((ui − u0)² + (vi − v0)² − R²)(ui − u0) = 0
−4Σ((ui − u0)² + (vi − v0)² − R²)(vi − v0) = 0
−4Σ((ui − u0)² + (vi − v0)² − R²)R = 0   (11)
a is to1,b1,a2,b2,a3,b3,c11,c12,c21And (3) carrying out formula (11) and finishing to obtain:
a1(u0 2+v0 2)-2a2u0-2c11v0+a3+c12-a1R2=0
b1(u0 2+v0 2)-2c11u0-2b2v0+c21+b3-b1R2=0
n(u0 2+v0 2)-2a1u0-2b1v0+a2+b2-nR2=0 (12)
Eliminating the quadratic terms by combining the first two equations of (12) with the third yields the linear system
(a2 − a1²/n)u0 + (c11 − a1b1/n)v0 = [a3 + c12 − a1(a2 + b2)/n]/2
(c11 − a1b1/n)u0 + (b2 − b1²/n)v0 = [b3 + c21 − b1(a2 + b2)/n]/2
from which u0 and v0 follow; the radius is then obtained from the third equation of (12) as
R² = u0² + v0² + (a2 + b2 − 2a1u0 − 2b1v0)/n   (13)
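Rather than the closed-form expressions, the same least-squares circle can be obtained by solving the equivalent linear system u² + v² = 2u0·u + 2v0·v + (R² − u0² − v0²) (the Kasa formulation), which is algebraically the same minimization. A sketch with synthetic, noise-free points:

```python
import math
import numpy as np

def fit_circle(pts):
    """Least-squares circle fit: returns (u0, v0, R)."""
    pts = np.asarray(pts, float)
    # Unknowns: u0, v0, and c = R^2 - u0^2 - v0^2.
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (u0, v0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u0, v0, math.sqrt(c + u0**2 + v0**2)

# Points on a circle of radius 5 centred at (2, 3) (synthetic check).
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([2 + 5 * np.cos(angles), 3 + 5 * np.sin(angles)])
print(fit_circle(pts))
```

With the outlier boundary points already removed by the radial constraint, this linear fit is well conditioned and needs no iteration.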
because the cutting part of the heading machine in work is the infrared heat source with the largest area in the shooting range of the infrared camera, the area corresponding to the cutting part of the heading machine in the real scene in the image can be determined by comparing the radius R of all target areas in each image and taking the area with the largest radius R.
3. Determination of the spatial coordinates of the target points
From the imaging formula (5), when the two-dimensional image coordinates (u, v) of a spatial point are known, equation (5) is a system of algebraic equations in xw, yw, zw and zc. For two cameras, two epipolar line equations in xw, yw, zw can be found; eliminating zc and intersecting the epipolar lines yields the spatial point P(xw, yw, zw). The imaging formulas of the left and right cameras are respectively:
zc1 [u1, v1, 1]ᵀ = M [xw, yw, zw, 1]ᵀ
and
zc2 [u2, v2, 1]ᵀ = M′ [xw, yw, zw, 1]ᵀ
where M = (mij) and M′ = (m′ij) are the 3 × 4 imaging matrices of the left and right cameras.
Eliminating zc1 and zc2 from the two imaging equations gives the system:
(u1m31 − m11)xw + (u1m32 − m12)yw + (u1m33 − m13)zw = m14 − u1m34
(v1m31 − m21)xw + (v1m32 − m22)yw + (v1m33 − m23)zw = m24 − v1m34
(u2m′31 − m′11)xw + (u2m′32 − m′12)yw + (u2m′33 − m′13)zw = m′14 − u2m′34
(v2m′31 − m′21)xw + (v2m′32 − m′22)yw + (v2m′33 − m′23)zw = m′24 − v2m′34   (14)
Writing this system as DP = H, where D is the 4 × 3 coefficient matrix, H the right-hand 4-vector, and P = (xw, yw, zw)ᵀ,
then
P = (DᵀD)⁻¹DᵀH   (15)
Equation (15) gives the spatial three-dimensional coordinates xw, yw, zw of the point P.
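Formula (15) is a direct least-squares intersection of the two rays, and translates almost line-for-line into code. The two projection matrices and the test point below are synthetic stand-ins for the calibrated left and right cameras.

```python
import numpy as np

def triangulate(M1, M2, p1, p2):
    """M1, M2: 3x4 imaging matrices; p1, p2: (u, v) image points.
    Stacks the four equations of (14) into D*P = H and applies (15)."""
    rows, rhs = [], []
    for M, (u, v) in ((M1, p1), (M2, p2)):
        rows.append(u * M[2, :3] - M[0, :3]); rhs.append(M[0, 3] - u * M[2, 3])
        rows.append(v * M[2, :3] - M[1, :3]); rhs.append(M[1, 3] - v * M[2, 3])
    D, H = np.asarray(rows), np.asarray(rhs)
    return np.linalg.inv(D.T @ D) @ D.T @ H   # P = (D^T D)^-1 D^T H

def project(M, P):
    h = M @ np.append(P, 1.0)
    return h[:2] / h[2]

# Two cameras with identical intrinsics; the right one shifted 1 unit along x.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
M1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

P_true = np.array([0.3, -0.2, 5.0])
P_est = triangulate(M1, M2, project(M1, P_true), project(M2, P_true))
print(np.round(P_est, 6))
```

With noisy pixel measurements the two rays no longer intersect exactly, and (15) returns the point minimizing the algebraic residual of the four stacked equations.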
4. Heading machine cutting part positioning based on double tracers
As shown in FIG. 3, the binocular positioning method described above determines the three-dimensional coordinates obj1(xobj1, yobj1, zobj1) and obj2(xobj2, yobj2, zobj2) of the tracers tp1 and tp2. The dual tracers define a straight line parallel to the cutting arm, offset by a distance d from the center line of the arm; dropping a perpendicular from tp2 onto the arm axis gives the foot point md2, whose distance to the section center is known. The vector from obj1 to obj2 (with origin O = (0, 0, 0)) is normalized to a unit vector giving the direction of the arm. Applying the Pythagorean theorem and the geometric relationships among d, the arm and the cutting-head profile, the center of the cutting head is obtained by extending from the tracers along this direction; since the angle θ affects the value of sin(θ − π/2), the expression is extended to hold over the whole range π > θ > 0, yielding formula (16) for the cutting-head center.
The accurate three-dimensional position of the cutting head of the heading machine is thus obtained from formula (16).
The cutting unit of the heading machine is thereby accurately identified and positioned from the positioning information of the tracers.
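The recoverable core of the dual-tracer construction can be sketched as follows: the two tracer centers fix the direction of the cutting arm, and the cutting-head center lies at a machine-specific offset along (and, if the tracers are mounted off-axis, perpendicular to) that direction. The parameters offset_along and offset_radial stand in for the arm-diameter, profile, and centroid data, which the text references but does not give numerically; they and the sample coordinates are assumptions.

```python
import numpy as np

def cutting_head_center(obj1, obj2, offset_along, offset_radial=0.0,
                        radial_dir=None):
    """Extrapolate from tracer obj2 along the obj1 -> obj2 unit vector;
    optionally add a perpendicular offset toward radial_dir."""
    obj1, obj2 = np.asarray(obj1, float), np.asarray(obj2, float)
    axis = obj2 - obj1
    e = axis / np.linalg.norm(axis)          # unit vector along the arm
    center = obj2 + offset_along * e
    if radial_dir is not None:
        r = np.asarray(radial_dir, float)
        r = r - (r @ e) * e                  # component perpendicular to the arm
        center = center + offset_radial * r / np.linalg.norm(r)
    return center

# Tracers 2 units apart along z; head center assumed 1.5 units past tp2.
print(cutting_head_center([0, 0, 0], [0, 0, 2], offset_along=1.5))
```

This reproduces only the extrapolation step; the full formula (16) additionally folds in the angle θ of the arm, which the image-rendered equations in the original encode.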
The method steps for positioning the cutting head of the heading machine based on the dual tracers are now described in detail:
1. acquiring images by using a left infrared camera and a right infrared camera, carrying out binarization processing on the images to obtain a binarized image, and carrying out contour extraction processing on the binarized image;
2. The contour extraction process includes: segmenting the optimal region or regions in the heading-machine scene according to a set characteristic value, extracting the boundary curve of each region as a Freeman direction chain code, and normalizing the chain codes; calculating the perimeter Z and the area S of the target region; and computing the minimum and maximum circle radii from Z and S (Rmin being the radius of a circle of area S):
Rmin = √(S/π)
Rmax = Z/2π
3. According to the clustering idea and the principle that three non-collinear points determine a circle, Rmin and Rmax are adopted as a radial constraint condition for preprocessing, and the circle center is solved by the least-squares method; three non-collinear points are selected in turn from the Freeman direction chain code, and since by geometry three non-collinear points uniquely determine a circumscribed circle, the center coordinates and radius are obtained;
4. Qualified boundary points are selected according to the radial constraint condition. If Rmin < R < Rmax is not satisfied, three non-collinear points are taken in turn and the center and radius R are recalculated until three points satisfying the constraint are determined as reference points. Keeping the first two of the three reference points, the next point is selected from the Freeman direction chain code and the center coordinates and radius R are computed from the new three points; if Rmin < R < Rmax is satisfied the point is kept, otherwise it is deleted from the Freeman direction chain code. This continues until the whole chain code has been traversed and invalid boundary points and those with excessive error have been removed;
5. The circle-center coordinates are obtained from the qualified boundary points by least-squares fitting; the radius R is then obtained as the root-mean-square of the distances between the qualified boundary points and the center;
6. If step 2 segmented several optimal regions in the heading-machine scene, the region with the largest radius R is selected as the region of the heading machine's double tracers in the scene.
7. Using the double-tracer region obtained in step 6, the two-dimensional information of the double tracers is converted into three-dimensional information; the three-dimensional coordinates of the first and second tracers on the heading machine are obtained from the imaging formulas of the left and right cameras;
8. From the three-dimensional coordinates of the first and second tracers, combined with the diameter of the heading machine's arm, the contour shape of the cutting head and the centroid parameter, the three-dimensional coordinates of the cutting-head center are obtained, thereby positioning the cutting head.
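Steps 2–5 amount to a constrained least-squares circle fit. The sketch below is a simplified stand-in, not the patented procedure: instead of iterating over chain-code triples as in steps 3–4, it makes an initial Kåsa least-squares fit, discards boundary points whose distance to the center falls outside a band derived from Rmin = √(S/π) and Rmax = Z/2π, and refits; the function names and the tolerance factors are arbitrary choices.

```python
import numpy as np

def fit_circle(points):
    """Kasa least-squares circle fit; returns (cx, cy, R)."""
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve  x^2 + y^2 = 2*cx*x + 2*cy*y + (R^2 - cx^2 - cy^2)  in least squares.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def fit_tracer_circle(boundary, Z, S):
    """Refit using only boundary points satisfying the radial constraint
    derived from the perimeter Z and area S of the candidate region."""
    r_min, r_max = np.sqrt(S / np.pi), Z / (2 * np.pi)
    cx, cy, _ = fit_circle(boundary)
    r = np.hypot(boundary[:, 0] - cx, boundary[:, 1] - cy)
    good = boundary[(r > 0.9 * r_min) & (r < 1.1 * r_max)]  # drop outlier points
    return fit_circle(good)

# Synthetic boundary of a circle with center (1, 1) and radius 2.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
boundary = np.column_stack([1 + 2 * np.cos(t), 1 + 2 * np.sin(t)])
cx, cy, R = fit_tracer_circle(boundary, Z=4 * np.pi, S=4 * np.pi)
```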
A schematic flow chart of the above method steps is shown in fig. 5.
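Step 7's conversion from 2-D tracer centers to 3-D coordinates depends on the left/right camera imaging formulas developed earlier in the patent. As a stand-in, the sketch below uses the textbook rectified-stereo model (focal length f in pixels, baseline B); the function name, parameter names, and numeric values are assumptions, not the patent's notation or calibration.

```python
def triangulate(xl, yl, xr, f, B):
    """Triangulate one tracer center from rectified stereo images.

    (xl, yl) -- tracer-center pixel coordinates in the left image
                (relative to the principal point)
    xr       -- x pixel coordinate of the same tracer in the right image
    f        -- focal length in pixels
    B        -- stereo baseline (same unit as the returned coordinates)
    """
    disparity = xl - xr          # horizontal shift between the two views
    Z = f * B / disparity        # depth from the disparity relation
    return xl * Z / f, yl * Z / f, Z

# Example: f = 700 px, B = 0.1 m, 35 px disparity -> about 2 m depth.
X, Y, Z = triangulate(70.0, 35.0, 35.0, f=700.0, B=0.1)
```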
It is noted that the disclosed embodiments are intended to aid further understanding of the invention, but those skilled in the art will appreciate that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. Therefore, the invention should not be limited to the disclosed embodiments; its scope is defined by the appended claims.

Claims (2)

1. A positioning system for a cutting head of a heading machine, characterized in that: the system comprises constant-temperature spherical double tracers, a left camera, a right camera and a ground service station; the left camera and the right camera are both infrared cameras; the constant-temperature spherical double tracers are positioned at the front end of the heading machine, and the left camera and the right camera are mounted on the heading machine behind the double tracers and are used for capturing images of the double tracers; the ground service station is in wireless communication with the left camera and the right camera and processes the images of the double tracers acquired by the two cameras to obtain the accurate position of the cutting head of the heading machine.
2. A method of positioning a cutting head of a heading machine using the positioning system of claim 1, characterized in that the method comprises the following steps:
1) acquiring images by using a left infrared camera and a right infrared camera, carrying out binarization processing on the images to obtain a binarized image, and carrying out contour extraction processing on the binarized image;
2) the contour extraction process includes: segmenting the optimal region or regions in the heading-machine scene according to the set characteristic value, extracting the boundary curve of the region or regions as a Freeman chain code and normalizing it; calculating the perimeter Z and the area S of the target region; calculating the minimum and maximum radii of the circle from Z and S:
Rmin = √(S/π);
Rmax = Z/2π;
3) determining the circle center according to the clustering idea and three non-collinear points: Rmin and Rmax are adopted as a radial constraint condition for preprocessing, and the circle center is solved by the least-squares method; three non-collinear points are selected in turn from the chain code, and since by geometry three non-collinear points uniquely determine a circumscribed circle, the center coordinates and radius are obtained;
4) selecting qualified boundary points according to the radial constraint condition: if Rmin < R < Rmax is not satisfied, three non-collinear points are taken in turn and the center and radius R are calculated until three points satisfying the constraint are determined as reference points; keeping the first two of the three reference points, the next point is selected from the chain code and the center coordinates and radius R are computed from the new three points; if Rmin < R < Rmax is satisfied the point is kept, otherwise it is deleted from the chain code; this continues until the whole chain code has been traversed and invalid or excessive-error boundary points have been removed;
5) obtaining the circle-center coordinates from the qualified boundary points by least-squares fitting; then obtaining the radius R as the root-mean-square of the distances between the qualified boundary points and the circle center;
6) And if the optimal one or more areas in the scene of the heading machine are divided into a plurality of areas according to the set characteristic values in the step 2), selecting one area with the largest radius R as the area of the double-tracer of the heading machine in the scene.
7) Converting the two-dimensional information of the double tracers into three-dimensional information of the double tracers according to the areas of the double tracers obtained in the step 6); obtaining the three-dimensional coordinates of a first tracer and the three-dimensional coordinates of a second tracer on the heading machine according to imaging formulas of a left camera and a right camera (the first tracer and the second tracer are the double tracers);
8) according to the three-dimensional coordinates of the first tracer and the second tracer, combining the diameter of the heading machine's arm, the contour shape of the cutting head and the centroid parameter to obtain the three-dimensional coordinates of the center of the cutting head, thereby positioning the cutting head.
CN202010345702.9A 2020-04-27 2020-04-27 Heading machine cutting head positioning method based on double tracers Active CN111553948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010345702.9A CN111553948B (en) 2020-04-27 2020-04-27 Heading machine cutting head positioning method based on double tracers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010345702.9A CN111553948B (en) 2020-04-27 2020-04-27 Heading machine cutting head positioning method based on double tracers

Publications (2)

Publication Number Publication Date
CN111553948A true CN111553948A (en) 2020-08-18
CN111553948B CN111553948B (en) 2023-01-17

Family

ID=72007690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010345702.9A Active CN111553948B (en) 2020-04-27 2020-04-27 Heading machine cutting head positioning method based on double tracers

Country Status (1)

Country Link
CN (1) CN111553948B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109555521A (en) * 2019-01-29 2019-04-02 冀中能源峰峰集团有限公司 A kind of cutting head of roadheader combined positioning method
CN109767477A (en) * 2019-01-14 2019-05-17 冀中能源峰峰集团有限公司 A kind of Precise Position System and method
CN110532995A (en) * 2019-09-04 2019-12-03 精英数智科技股份有限公司 Tunnelling monitoring method based on computer vision, apparatus and system
CN110568448A (en) * 2019-07-29 2019-12-13 浙江大学 Device and method for identifying accumulated slag at bottom of tunnel of hard rock tunnel boring machine


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WENJUAN YANG et al.: "Infrared LEDs-Based Pose Estimation With Underground Camera Model for Boom-Type Roadheader in Coal Mining", ACCESS *
张梦奇: "Research on parameterized positioning technology of the cutting-head pick holder", Coal Science and Technology *
杜雨馨: "Research on pose sensing and positioning methods for mine boom-type roadheaders", China Doctoral Dissertations Full-text Database, Engineering Science & Technology I *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112161567A (en) * 2020-09-28 2021-01-01 北京天地玛珂电液控制系统有限公司 Positioning method and system for fully mechanized coal mining face
CN112161567B (en) * 2020-09-28 2022-05-03 北京天玛智控科技股份有限公司 Positioning method and system for fully mechanized coal mining face
CN113222907A (en) * 2021-04-23 2021-08-06 杭州申昊科技股份有限公司 Detection robot based on bend rail
CN113701633A (en) * 2021-09-06 2021-11-26 安徽深核信息技术有限公司 Position and posture monitoring equipment of development machine
CN115962783A (en) * 2023-03-16 2023-04-14 太原理工大学 Positioning method of cutting head of heading machine and heading machine

Also Published As

Publication number Publication date
CN111553948B (en) 2023-01-17

Similar Documents

Publication Publication Date Title
CN111553948B (en) Heading machine cutting head positioning method based on double tracers
CN111524195B (en) Camera calibration method in positioning of cutting head of heading machine
CN111473739B (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
CN107945267B (en) Method and equipment for fusing textures of three-dimensional model of human face
CN109598714B (en) Tunnel super-underexcavation detection method based on image three-dimensional reconstruction and grid curved surface
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
CN106650701B (en) Binocular vision-based obstacle detection method and device in indoor shadow environment
CN104463969B (en) A kind of method for building up of the model of geographical photo to aviation tilt
CN110176032B (en) Three-dimensional reconstruction method and device
JP2004213332A (en) Calibration device, calibration method, program for calibration, and calibration mending tool
CN105931234A (en) Ground three-dimensional laser scanning point cloud and image fusion and registration method
CN113052903B (en) Vision and radar fusion positioning method for mobile robot
CN106651942A (en) Three-dimensional rotation and motion detecting and rotation axis positioning method based on feature points
CN109685855B (en) Camera calibration optimization method under road cloud monitoring platform
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
CN110223351B (en) Depth camera positioning method based on convolutional neural network
CN108227929A (en) Augmented reality setting-out system and implementation method based on BIM technology
Frohlich et al. Absolute pose estimation of central cameras using planar regions
Erickson et al. The accuracy of photo-based three-dimensional scanning for collision reconstruction using 123D catch
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
Su et al. A novel camera calibration method based on multilevel-edge-fitting ellipse-shaped analytical model
TWM565860U (en) Smart civil engineering information system
CN111161334A (en) Semantic map construction method based on deep learning
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
CN116563377A (en) Mars rock measurement method based on hemispherical projection model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant