CN114219801A - Intelligent stone identification method and device for unmanned crushing of excavator - Google Patents
- Publication number
- CN114219801A (application number CN202111646639.3A)
- Authority
- CN
- China
- Prior art keywords
- ellipse
- image
- center
- crushing
- unmanned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
-
- G06T5/90—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30132—Masonry; Concrete
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Abstract
The invention relates to an intelligent stone identification method and device for unmanned crushing of an excavator, comprising the following steps: A. acquiring an image of the surrounding environment of the excavator; B. carrying out ellipse detection on the image, classifying the detected ellipses to determine the ellipses used for pose detection, and obtaining the projection ellipse of the cylinder; C. carrying out ellipse gravity center shift correction according to the actual contour of the stone block; D. calculating the coordinates of the stone block in the camera coordinate system from the corrected ellipse. In this way, stones in the surrounding environment of the excavator can be accurately identified, in preparation for subsequent automatic crushing work.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to an intelligent stone recognition method and device for unmanned crushing of an excavator.
Background
The excavator is widely used on construction sites for mining, building construction, road and bridge construction, and similar projects. In the prior art, an excavator is generally driven by a hydraulic system and operated manually; however, the harsh environment of a construction site makes the work uncomfortable for operators. Moreover, as labor costs rise, and because the excavator cannot work around the clock owing to operator fatigue, external weather and other factors, considerable resources are wasted for enterprises and owners. For the crushing operation of the excavator, machine-vision recognition of the stone blocks to be crushed is an essential part of automating the breaking hammer: only when the stone to be crushed is accurately identified can the breaking hammer be guided to crush it and the construction be completed automatically.
In the prior art, a target recognition algorithm or a template matching algorithm is usually adopted to recognize stones. Target recognition generally relies on feature point extraction or template matching; SURF and SIFT are currently the most common local feature detectors and are robust to illumination, noise and small viewing-angle changes. However, both algorithms are prone to false detection of feature points and, moreover, a stone offers few distinct feature points; when the number of local feature points is small, the performance of feature point detection drops markedly. Template matching searches for an optimal pose that maximizes a similarity function between the template and the current image, or that drives such a function to an extreme value. However, this approach has high computational and storage costs, and common similarity measures such as SCV, NCC and SSD are not robust to occlusion; considering the complexity of the excavator's working environment, where foreign objects easily cause occlusion, the template matching algorithm is unsuitable.
Disclosure of Invention
The purpose of the invention is as follows:
in order to overcome the shortcomings pointed out in the background art, embodiments of the invention provide an intelligent stone identification method and device for unmanned crushing of an excavator, which can effectively solve the problems described above.
The technical scheme is as follows:
the intelligent stone identification method for unmanned crushing of the excavator comprises the following steps: A. acquiring an image of the surrounding environment of the excavator; B. carrying out ellipse detection on the image, classifying the detected ellipses to determine the ellipses used for pose detection, and obtaining the projection ellipse of the cylinder; C. carrying out ellipse gravity center shift correction according to the actual contour of the stone block; and D. calculating the coordinates of the stone block in the camera coordinate system from the corrected ellipse.
As a preferred embodiment of the present invention, the step B includes:
carrying out Canny edge detection on the image to convert it into an edge image, constraining and detecting arc segments through the gradient direction, and aggregating the arc segments according to their curvature to generate ellipses.
As a preferred embodiment of the present invention, step a includes:
the exposure parameters during image acquisition are automatically adjusted, and the adjusting process is as follows:
indexing in a lookup table according to the ambient brightness during image shooting to query the image target brightness, wherein the lookup table is pre-established according to the optimal matching relationship between the ambient brightness and the image target brightness;
and calculating an exposure target output value according to the queried image target brightness, using the formula: Input = 255 × (Output / 255)^Gamma, where Input is the image target brightness, Gamma = 2.2, and Output is the exposure target output value, i.e., the exposure brightness of 18% gray in the zone exposure method;
evaluating each parameter from multiple angles according to an optimized parameter algorithm, wherein the formula of the optimized parameter algorithm is as follows:
where p_i is the adjustment amount of the exposure time within the ambient-brightness interval T_ab; the remaining terms are, respectively, the exposure level that needs to be adjusted within T_ab, the effect of the gray-scale adjustment proportion within T_ab, and the negative effect of exposure adjustment within T_ab.
As a preferred embodiment of the present invention, step C includes:
carrying out ellipse gravity center shift correction by adopting an ellipse gravity center shift correction algorithm;
the ellipse gravity center shift correction algorithm divides the first quadrant into two regions: in region 1, where the absolute value of the slope is smaller than 1, unit steps are taken in the x direction; in region 2, where the absolute value of the slope is larger than 1, unit steps are taken in the y direction;

taking (x_c, y_c) = (0, 0), the elliptic function is defined as:

f_ellipse(x, y) = r_y²x² + r_x²y² − r_x²r_y²

and f_ellipse evaluated at the candidate midpoints serves as the decision parameter;

starting from (0, r_y), unit steps are taken in the x direction up to the boundary between region 1 and region 2, then unit steps are taken in the y direction to cover the remaining curve segment in the first quadrant, and the slope of the curve is checked at each step;

the slope equation is:

dy/dx = −(2r_y²x) / (2r_x²y)

at the boundary between region 1 and region 2, dy/dx = −1 and 2r_y²x = 2r_x²y, from which the condition for leaving region 1 is obtained:

2r_y²x ≥ 2r_x²y

the decision function is evaluated at the gravity-center-shifted midpoint to determine the next position along the elliptical trajectory:

p1_k = f_ellipse(x_k + 1, y_k − 1/2)

at the next sampling position (x_{k+1} = x_k + 1), the decision parameter for region 1 can be evaluated as:

p1_{k+1} = p1_k + 2r_y²x_{k+1} + r_y², if p1_k < 0; otherwise p1_{k+1} = p1_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_y²

where y_{k+1} is y_k or y_k − 1 according to the sign of p1_k;

in region 2, sampling proceeds in unit steps in the negative y direction;

at the next position (y_{k+1} = y_k − 1) the elliptic function is evaluated at the shifted midpoint:

p2_k = f_ellipse(x_k + 1/2, y_k − 1)

and updated as p2_{k+1} = p2_k − 2r_x²y_{k+1} + r_x², if p2_k > 0; otherwise p2_{k+1} = p2_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_x²

where x_{k+1} takes the value x_k or x_k + 1 according to the sign of p2_k.
As a preferred aspect of the present invention, the ellipse gravity center shift correction using the above algorithm includes:

inputting r_x, r_y and the ellipse center (x_c, y_c), and obtaining the first point on the ellipse (centered on the origin): (x_0, y_0) = (0, r_y);

at each x_k position in region 1, starting from k = 0: if p1_k < 0, the next point of the ellipse centered at (0, 0) is (x_k + 1, y_k) and p1_{k+1} = p1_k + 2r_y²x_{k+1} + r_y²; otherwise, the next point along the ellipse is (x_k + 1, y_k − 1) and p1_{k+1} = p1_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_y²;

using the last point (x_0, y_0) calculated in region 1, the initial value of the decision parameter in region 2 is calculated: p2_0 = f_ellipse(x_0 + 1/2, y_0 − 1);

at each y_k position in region 2, starting from k = 0: if p2_k > 0, the next point along the ellipse centered at (0, 0) is (x_k, y_k − 1) and p2_{k+1} = p2_k − 2r_x²y_{k+1} + r_x²; otherwise, the next point is (x_k + 1, y_k − 1) and p2_{k+1} = p2_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_x²;

the calculation is performed using the same x and y increments as in region 1 until y = 0;

the symmetric points in the other three quadrants are determined;

each calculated pixel position (x, y) is shifted onto the elliptical trajectory centered at (x_c, y_c) and the points are plotted at the coordinate values:

x = x + x_c, y = y + y_c;

the redrawn ellipse is the ellipse after gravity center shift correction.
As a preferred embodiment of the present invention, the step B includes:
utilizing the constraint obtained from the fact that the planes of the spatial circles fitted to the stone are parallel to each other, specifically: calculating the normal vector of the plane in which the spatial circle lies; the cone formed by the spatial circle and the camera center intersects that plane in a circle, from which the parallelism constraint condition is obtained.
In a preferred embodiment of the present invention, the center coordinates of the spatial circle [x'_0, y'_0, z'_0] and the normal vector of its plane [n'_x, n'_y, n'_z] are respectively:
as a preferred embodiment of the present invention, the step B includes:
calculating two sets of solutions for the position and attitude of the spatial circle feature from its elliptical projection on the camera plane, the position in each set of solutions corresponding to an attitude;
if the true normal vector of the spatial circle feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0; substituting n1 into the formula yields the attitude parameters of the spatial circle feature.
The invention realizes the following beneficial effects:
1. According to the invention, the stone block is classified under the ellipse category: ellipse detection is carried out on the image, the detected ellipses are classified to determine the ellipses used for pose detection, the projection ellipse of the cylinder is obtained, ellipse gravity center shift correction is carried out according to the actual contour of the stone block, and finally the coordinates of the stone block in the camera coordinate system are calculated from the corrected ellipse. Stones in the surrounding environment of the excavator can thus be accurately identified, in preparation for subsequent automatic crushing work.
2. According to the invention, when the camera acquires the image of the surrounding environment of the excavator, a camera-exposure automatic adjustment algorithm adapts the exposure parameters to the ambient brightness, improving the contrast and clarity of the image; quality is thus ensured at the source of the picture, which benefits the subsequent recognition accuracy.
3. The invention adopts the adjacent-circle coplanarity constraint to eliminate the ambiguity of the single-circle pose, so that the attitude of the stone is determined uniquely.
Drawings
Fig. 1 is a schematic flow chart of a method for intelligently identifying a stone block according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the rock fitting circle detection pose estimation provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a single-circle attitude measurement according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a smart stone identification device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an excavator according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to fig. 1 to 3, the present embodiment provides an intelligent stone identification method for unmanned crushing of an excavator, including the following steps:
A. acquiring an image of the surrounding environment of the excavator;
B. carrying out ellipse detection on the image, classifying the detected ellipse to determine an ellipse for gesture detection, and obtaining a projection ellipse of the cylinder;
C. carrying out elliptical gravity center shift correction according to the actual contour of the stone block; and
D. and calculating the coordinates of the stone block under the coordinate system of the camera according to the corrected ellipse.
In one implementation, a camera is used in step A to capture an image of the surrounding environment of the excavator; the captured image is then transmitted to an image processor, which performs steps B, C and D above on it.
The camera used in this embodiment may be installed on the excavator, or installed independently at a spatial distance from it, and image transmission between the camera and the image processor may be wired or wireless. In a preferred arrangement, the cameras used in this embodiment consist of a left camera and a right camera, each of which acquires an image and transmits it to the image processor. The camera may be a standalone camera, the camera of a mobile device, the camera of a dome camera, the camera of an Augmented Reality (AR) / Virtual Reality (VR) device, or the like, which is not limited in this application.
In some embodiments, when the camera acquires the image of the surrounding environment of the excavator, the camera exposure automatic adjustment algorithm is adopted to automatically adjust the exposure parameters during image acquisition, so that automatic adjustment of different exposure parameters can be performed according to the environment brightness.
In one way, the specific adjustment process described for automatic adjustment of different exposure parameters according to ambient brightness is:
(1) and indexing in a lookup table according to the ambient brightness during image shooting to query the target brightness of the image, wherein the lookup table is pre-established according to the optimal matching relationship between the ambient brightness and the target brightness of the image.
When the camera shoots an image, the current ambient brightness, i.e. the illumination value, is detected by an ambient-brightness detection device or sensor, and the detected ambient brightness is then used as an index into the lookup table to query the image target brightness; the queried image target brightness is the optimal image brightness for that ambient brightness. The relationship between ambient brightness and image target brightness in the lookup table may be a correspondence between intervals: for example, an ambient-brightness interval may correspond to an image-target-brightness interval, an ambient-brightness interval to a single image target brightness, or a single ambient brightness to an image-target-brightness interval. A non-interval, i.e. single-value, correspondence is also possible.
(2) And calculating an exposure target output value according to the queried image target brightness, using the formula: Input = 255 × (Output / 255)^Gamma, where Input is the image target brightness, Gamma = 2.2, and Output is the exposure target output value, i.e., the exposure brightness of 18% gray in the zone exposure method.
In the zone exposure method, the tonal gradation from black to white is divided into 11 zones (0, I, II, III, IV, V, VI, VII, VIII, IX, X; all black is zone 0 and all white is zone X), and the middle zone V is regarded as moderate exposure, called middle gray. The light reflectance of zone V is 18%, i.e., 18% gray by definition.
(3) Evaluating each parameter from multiple angles according to an optimized parameter algorithm, wherein the formula of the optimized parameter algorithm is as follows:
where p_i is the adjustment amount of the exposure time within the ambient-brightness interval T_ab; the remaining terms are, respectively, the exposure level that needs to be adjusted within T_ab, the effect of the gray-scale adjustment proportion within T_ab, and the negative effect of exposure adjustment within T_ab.
The real-time exposure is adjusted according to the formula, so that the contrast and definition of the image are improved, the quality is guaranteed from the source of the picture, and the subsequent identification accuracy is improved.
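The lookup-and-gamma computation above can be sketched as follows. Only Gamma = 2.2 and the Input/Output relation come from the text; the interval bounds and target-brightness values in the table are illustrative assumptions, not the calibrated table of the patent:

```python
import bisect

GAMMA = 2.2

# Hypothetical lookup table: ambient-brightness interval upper bounds (lux)
# mapped to image target brightness (0-255). Values are illustrative only.
AMBIENT_BOUNDS = [50, 200, 1000, 5000, 20000]
TARGET_BRIGHTNESS = [40, 70, 110, 140, 160, 180]

def query_target_brightness(ambient_lux):
    """Index the lookup table by ambient brightness to get the image target brightness."""
    i = bisect.bisect_right(AMBIENT_BOUNDS, ambient_lux)
    return TARGET_BRIGHTNESS[i]

def exposure_target_output(target_brightness):
    """Invert Input = 255 * (Output / 255) ** Gamma to solve for Output,
    the exposure target value (18% gray in the zone exposure method)."""
    return 255.0 * (target_brightness / 255.0) ** (1.0 / GAMMA)
```

A brighter target brightness always maps to a larger exposure target output, so the camera can step its exposure toward the queried value in real time.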
In practical application, a stone may have many irregular edge features, and false edges may also arise from noise, illumination, shooting angle and other factors. The edges in the image therefore need to be classified, and a reasonable edge feature selected for establishing the object coordinate system, so as to estimate the target attitude. In the present invention, the stone block is fitted as a solid of revolution, a structure in which the planes of the spatial circles are parallel to one another and the straight line through their centers is perpendicular to those planes. On this basis, the problem belongs to the ellipse category, and the parallelism and perpendicularity constraints of the projection ellipses are used to obtain the projection of the upper spatial circle, yielding a spatial multi-ellipse pose estimation method. The implementation of steps B, C and D above is used to identify the stone in the image, namely: first carry out ellipse detection on the image, then classify the ellipses with a random sample consensus algorithm to obtain the projection ellipse of the cylinder, then carry out ellipse gravity center shift correction according to the actual contour of the stone, and finally calculate the coordinates of the stone in the camera coordinate system from the corrected ellipses.
The ellipses are detected with an algorithm based on arc-segment extraction, which combines arc-segment extraction and classification; the computation is simple and very fast. In step B, Canny edge detection is first performed on the target image to convert it into an edge image, arc segments are then constrained and detected through the gradient direction, and finally the arc segments are aggregated by their curvature to generate ellipses. The ellipses obtained by detection on the image then need to be classified, so that reasonable ellipses usable for pose detection are distinguished from the many interfering or pseudo ellipses.
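The arc-segment detector itself is not reproduced here, but the ellipse-fitting step it feeds can be illustrated with a minimal direct least-squares conic fit; this is a generic stand-in under stated assumptions, not the patent's algorithm:

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of a conic a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0
    to an (N, 2) array of points; returns the unit-norm coefficient vector."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The singular vector for the smallest singular value minimizes |D @ coeffs|.
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

def is_ellipse(coeffs):
    """A conic is an ellipse when its discriminant b^2 - 4ac is negative."""
    a, b, c = coeffs[0], coeffs[1], coeffs[2]
    return b * b - 4 * a * c < 0
```

The discriminant test is one simple way to discard non-elliptical fits before the classification stage distinguishes the cylinder's projection ellipse from pseudo ellipses.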
In some embodiments, step C comprises: performing ellipse gravity center shift correction with an ellipse gravity center shift correction algorithm. The algorithm divides the first quadrant into two regions: in region 1, where the absolute value of the slope is smaller than 1, unit steps are taken in the x direction; in region 2, where the absolute value of the slope is larger than 1, unit steps are taken in the y direction;

taking (x_c, y_c) = (0, 0), the elliptic function is defined as:

f_ellipse(x, y) = r_y²x² + r_x²y² − r_x²r_y²

and f_ellipse evaluated at the candidate midpoints serves as the decision parameter;

starting from (0, r_y), unit steps are taken in the x direction up to the boundary between region 1 and region 2, then unit steps are taken in the y direction to cover the remaining curve segment in the first quadrant, and the slope of the curve is checked at each step;

the slope equation is:

dy/dx = −(2r_y²x) / (2r_x²y)

at the boundary between region 1 and region 2, dy/dx = −1 and 2r_y²x = 2r_x²y, so the condition for leaving region 1 is:

2r_y²x ≥ 2r_x²y

the decision function is evaluated at the gravity-center-shifted midpoint to determine the next position along the elliptical trajectory:

p1_k = f_ellipse(x_k + 1, y_k − 1/2)

at the next sampling position (x_{k+1} = x_k + 1), the decision parameter for region 1 can be evaluated as:

p1_{k+1} = p1_k + 2r_y²x_{k+1} + r_y², if p1_k < 0; otherwise p1_{k+1} = p1_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_y²

where y_{k+1} is y_k or y_k − 1 according to the sign of p1_k;

in region 2, sampling proceeds in unit steps in the negative y direction;

at the next position (y_{k+1} = y_k − 1) the elliptic function is evaluated at the shifted midpoint:

p2_k = f_ellipse(x_k + 1/2, y_k − 1)

and updated as p2_{k+1} = p2_k − 2r_x²y_{k+1} + r_x², if p2_k > 0; otherwise p2_{k+1} = p2_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_x²

where x_{k+1} takes the value x_k or x_k + 1 according to the sign of p2_k.
The specific process of performing the ellipse gravity center shift correction with this algorithm is as follows:

(1) Input r_x, r_y and the ellipse center (x_c, y_c), and obtain the first point on the ellipse (centered on the origin): (x_0, y_0) = (0, r_y).

(2) Calculate the initial value of the decision parameter in region 1: p1_0 = r_y² − r_x²r_y + r_x²/4.

(3) At each x_k position in region 1, starting from k = 0: if p1_k < 0, the next point of the ellipse centered at (0, 0) is (x_k + 1, y_k) and p1_{k+1} = p1_k + 2r_y²x_{k+1} + r_y²; otherwise, the next point along the ellipse is (x_k + 1, y_k − 1) and p1_{k+1} = p1_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_y².

(4) Using the last point (x_0, y_0) calculated in region 1, calculate the initial value of the decision parameter in region 2: p2_0 = f_ellipse(x_0 + 1/2, y_0 − 1).

(5) At each y_k position in region 2, starting from k = 0: if p2_k > 0, the next point along the ellipse centered at (0, 0) is (x_k, y_k − 1) and p2_{k+1} = p2_k − 2r_x²y_{k+1} + r_x²; otherwise, the next point is (x_k + 1, y_k − 1) and p2_{k+1} = p2_k + 2r_y²x_{k+1} − 2r_x²y_{k+1} + r_x². The calculation is performed using the same x and y increments as in region 1 until y = 0.

(6) Determine the symmetric points in the other three quadrants.

(7) Shift each calculated pixel position (x, y) onto the trajectory centered at (x_c, y_c) and plot the points at the coordinate values:

x = x + x_c, y = y + y_c.

(8) The redrawn ellipse is the ellipse after gravity center shift correction.
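The eight steps above amount to a two-region midpoint traversal of the first quadrant followed by symmetry and recentering; a compact sketch (function and variable names assumed):

```python
def midpoint_ellipse(rx, ry, xc=0, yc=0):
    """Rasterize an ellipse with the two-region midpoint traversal described
    in steps (1)-(8); returns the set of integer pixel coordinates."""
    points = set()

    def plot(x, y):
        # steps (6)-(7): four-quadrant symmetry, shifted to center (xc, yc)
        for sx in (1, -1):
            for sy in (1, -1):
                points.add((xc + sx * x, yc + sy * y))

    x, y = 0, ry                                   # step (1): first point (0, ry)
    p1 = ry * ry - rx * rx * ry + rx * rx / 4.0    # step (2): region-1 initial value
    while 2 * ry * ry * x < 2 * rx * rx * y:       # region 1: |slope| < 1
        plot(x, y)
        x += 1                                     # unit step in x
        if p1 < 0:
            p1 += 2 * ry * ry * x + ry * ry
        else:
            y -= 1
            p1 += 2 * ry * ry * x - 2 * rx * rx * y + ry * ry
    # step (4): region-2 initial value from the last region-1 point
    p2 = ry * ry * (x + 0.5) ** 2 + rx * rx * (y - 1) ** 2 - rx * rx * ry * ry
    while y >= 0:                                  # region 2: unit steps in -y
        plot(x, y)
        y -= 1
        if p2 > 0:
            p2 += -2 * rx * rx * y + rx * rx
        else:
            x += 1
            p2 += 2 * ry * ry * x - 2 * rx * rx * y + rx * rx
    return points
```

For example, `midpoint_ellipse(8, 6)` produces the axis extremes (8, 0), (0, 6), (-8, 0) and (0, -6) together with the intermediate boundary pixels.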
In some embodiments, step B comprises: utilizing the constraint obtained from the fact that the planes of the spatial circles fitted to the stone are parallel to each other, specifically: calculating the normal vector of the plane in which the spatial circle lies; the cone formed by the spatial circle and the camera center intersects that plane in a circle, from which the parallelism constraint condition is obtained.
The projection of a spatial circular feature on the camera imaging plane is an ellipse; assume the radius of the circular feature is R. When a cutting plane intersects the elliptic cone in a circle of radius R, the center coordinate of that section is the position of the spatial circular feature, and the normal vector of the section circle contains its attitude information. However, under a monocular camera the solution of single-circle feature pose measurement is not unique, as shown in fig. 3, where E is the elliptical projection of the spatial circular feature on the imaging plane and C1 and C2 are two spatial circular features consistent with that projection; without constraint conditions, the correspondence between E and C1 or C2 cannot be determined. The center coordinates of the spatial circle [x'_0, y'_0, z'_0] and the normal vector of its plane [n'_x, n'_y, n'_z] can be obtained respectively as:
In some embodiments, in order to eliminate the ambiguity of the single-circle pose, the adjacent-circle coplanarity constraint is used to determine the stone attitude uniquely; that is, step B further includes: calculating two sets of solutions for the position and attitude of the spatial circle feature from its elliptical projection on the camera plane, the position in each set of solutions corresponding to an attitude; if the true normal vector of the spatial circle feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0; substituting n1 into the formula yields the attitude parameters of the spatial circle feature.
Specifically, two sets of solutions for the position and attitude of the spatial circular feature can be calculated from the elliptical projection of a single circular feature on the camera plane, the position in each set corresponding to an attitude, so the true pose solution of the circular-feature object can be obtained by eliminating the false attitude. Assume the normal vectors of the corresponding circle-feature planes in the camera coordinate system are n1 and n2. Because the plane of the adjacent circle is coplanar with, and therefore parallel to, the plane of the circle feature, the true normal vector of the circle-feature plane is parallel, in the camera coordinate system, to the plane normal obtained from the rectangular constraint. If the true normal vector of the circle-feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0. With this constraint, the false pose solution of the circular feature can be effectively eliminated, and substituting n1 solves for the pose parameters of the circular feature. From this derivation, under minimal constraint conditions, the false solution of the circular pose can be eliminated without knowing the geometric dimensions of the rectangle or its coordinates in the world coordinate system.
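The disambiguation test n1 ≈ kN can be sketched as choosing, among the candidate normals, the one most nearly parallel to the reference normal supplied by the adjacent-circle constraint; function and variable names here are illustrative, not from the patent:

```python
import numpy as np

def pick_true_pose(candidate_normals, reference_normal):
    """Resolve the two-fold ambiguity of single-circle pose estimation:
    return the candidate normal most nearly parallel to the reference
    normal N from the coplanar adjacent-circle constraint (n1 ~ k*N, k != 0),
    together with |cos angle| between them (1.0 means exactly parallel)."""
    ref = np.asarray(reference_normal, dtype=float)
    ref = ref / np.linalg.norm(ref)
    best, best_score = None, -1.0
    for n in candidate_normals:
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        score = abs(np.dot(n, ref))  # sign-invariant, since k may be negative
        if score > best_score:
            best, best_score = n, score
    return best, best_score
```

Taking the absolute dot product makes the test independent of the sign of k, which matches the condition n1 ≈ kN with k ≠ 0.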
Under varying illumination and viewing angles, the key positioning target position is thus locked accurately, and the exact pose can be obtained from the mutual-relationship information.
In some embodiments, the coordinates of the stone in the camera coordinate system are acquired with a binocular vision system, through the steps of camera calibration, stereo rectification, stereo matching, and three-dimensional reconstruction.
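The three-dimensional reconstruction step ultimately converts a matched disparity into depth via the standard triangulation relation Z = f·B/d (focal length f in pixels, baseline B, disparity d). The patent does not spell this relation out, but it underlies any rectified binocular pipeline; a hedged sketch:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Triangulation step of a binocular pipeline: after calibration,
    rectification and stereo matching, depth follows Z = f * B / d.
    Zero or negative disparities are treated as invalid (infinite depth)."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

For example, a 50-pixel disparity with f = 1000 px and B = 0.1 m yields a depth of 2 m.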
In some embodiments, referring to fig. 4, there is provided a stone intelligent recognition device 10, comprising: a memory 101 and a processor 102, wherein the memory 101 is used for storing a computer program, and the processor 102 is used for calling the computer program to execute the corresponding steps of the intelligent stone identification method provided by the embodiment.
In some embodiments, referring to fig. 5, an excavator 20 is provided that includes a central controller 201 and a vision module 202. The vision module 202 is used for providing image signals for the central controller 201, and comprises an image processor 203 connected with the central controller 201 and cameras 204 and 205 arranged on the excavator 20, wherein the cameras 204 and 205 are connected with the image processor 203.
In some embodiments, the excavator 20 further includes a supplementary lighting source 206 connected to the central controller 201. The supplementary lighting source 206 illuminates areas of low illuminance, and the central controller 201 can control its operation according to the image information, thereby improving the illumination intensity of the working area, facilitating the operation of the excavator 20, and helping the vision module 202 obtain a clear image.
The image processor 203, the left and right cameras 204 and 205, and the supplementary lighting source 206 of the excavator together form the vision module 202 of the present invention, which connects to the central controller 201 to identify stones around the excavator 20, determine the working surface and the target stones, and measure the distance to the target stones.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored, which, when running on a computer, performs the respective steps of the intelligent stone identification method provided by the present embodiment.
Referring to FIG. 6, the computer-readable storage medium contains a computer program for executing a computer process on a computing device.
In some embodiments, the computer-readable storage medium is provided using a signal bearing medium 300. The signal bearing medium 300 may comprise one or more program instructions that, when executed by one or more processors, provide the functions, or portions of the functions, described above with respect to FIG. 1. Thus, for example, one or more features described with reference to FIGS. 1A-D may be undertaken by one or more instructions associated with the signal bearing medium 300. Further, FIG. 6 also depicts example program instructions. In some examples, the signal bearing medium 300 may comprise a computer-readable medium 301, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), a random-access memory (RAM), or the like.
In some implementations, the signal bearing medium 300 may comprise a computer recordable medium 302, such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and so forth.
In some implementations, the signal bearing medium 300 may include a communication medium 303 such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
The signal bearing medium 300 may be conveyed by a wireless form of the communication medium 303, such as a wireless communication medium conforming to the IEEE 802.11 standard or another transmission protocol. The one or more program instructions may be, for example, computer-executable instructions or logic-implementing instructions.
It should be understood that the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and that some elements may be omitted altogether depending upon the desired results. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (10)
1. An intelligent stone identification method for unmanned crushing of an excavator, characterized by comprising the following steps:
A. acquiring an image of the surrounding environment of the excavator;
B. carrying out ellipse detection on the image, classifying the detected ellipses to determine an ellipse for attitude detection, and obtaining the projection ellipse of the cylinder;
C. carrying out elliptical gravity center shift correction according to the actual contour of the stone block; and
D. and calculating the coordinates of the stone block under the coordinate system of the camera according to the corrected ellipse.
2. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step B comprises:
carrying out Canny edge detection on the image to convert it into an edge image, constraining the detection of arc segments by the gradient direction, and aggregating the arc segments according to their curvature to generate an ellipse.
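The arc-aggregation step is not specified in detail in the claim. As a simplified stand-in for the final ellipse-generation stage, a least-squares conic fit to the collected edge points recovers the ellipse center (the function name and this fitting choice are illustrative, not the patent's method):

```python
import numpy as np

def fit_ellipse_center(pts):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to edge
    points by least squares; the ellipse center is the stationary point
    of the conic, i.e. the solution of [2a b; b 2c][x y]^T = [-d -e]^T."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # the right singular vector of the smallest singular value holds the
    # conic coefficients (null space of the design matrix)
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, _f = Vt[-1]
    return np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                           -np.array([d, e]))
```

With points sampled from an ellipse centered at (3, 2), the fit returns that center to numerical precision.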
3. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step a comprises:
the exposure parameters during image acquisition are automatically adjusted, and the adjusting process is as follows:
indexing in a lookup table according to the ambient brightness during image shooting to query the image target brightness, wherein the lookup table is pre-established according to the optimal matching relationship between the ambient brightness and the image target brightness;
and calculating an exposure target output value according to the inquired image target brightness, wherein the calculation formula is as follows:
Input = 255 × (Output/255)^Gamma, where Input is the image target brightness, Gamma = 2.2, and Output is the exposure target output value, i.e., the exposure brightness of 18% gray in the zone exposure method;
evaluating each parameter from multiple angles according to an optimized-parameter algorithm, the formula of which appears as an image in the original (not reproduced here); its terms are, within each ambient-brightness interval Tab: n_Tab, the adjustment amount of the exposure time; the exposure level that needs to be adjusted; the effect of the gray-scale adjustment proportion; and the negative effect of the exposure adjustment.
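Inverting the claim's relation Input = 255 × (Output/255)^Gamma gives the exposure target output value from the looked-up image target brightness; a minimal sketch (the function name is illustrative):

```python
def exposure_target_output(target_brightness, gamma=2.2):
    """Invert Input = 255 * (Output/255)**gamma to obtain the exposure
    target output value (Output) from the image target brightness (Input)
    queried in the ambient-brightness lookup table."""
    return 255.0 * (target_brightness / 255.0) ** (1.0 / gamma)
```

Round-tripping through the forward relation recovers the original target brightness, which is the consistency check one would apply to a lookup-table entry.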
4. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step C comprises:
carrying out ellipse gravity center shift correction by adopting an ellipse gravity center shift correction algorithm;
the ellipse center-of-gravity shift correction algorithm divides the first quadrant into two regions: a unit step is taken in the x direction in region 1, where the absolute value of the slope is less than 1, and in the y direction in region 2, where the absolute value of the slope is greater than 1;
taking (x_c, y_c) = (0, 0), the elliptic function is defined as:
f_ellipse(x, y) = ry^2·x^2 + rx^2·y^2 − rx^2·ry^2,
where f_ellipse(x, y) serves as the decision parameter: it is negative inside the ellipse, zero on the ellipse, and positive outside;
starting from (0, ry), unit steps are taken in the x direction until the boundary between region 1 and region 2, then switching to unit steps in the y direction to cover the remaining curve segment in the first quadrant, with the curve slope evaluated at each step;
the slope equation is: dy/dx = −(2·ry^2·x)/(2·rx^2·y);
it follows that the condition for leaving region 1 is: 2·ry^2·x ≥ 2·rx^2·y;
the decision function, evaluated at the center-of-gravity-shifted midpoint, determines the next position along the elliptical trajectory: p1_k = f_ellipse(x_k + 1, y_k − 1/2);
at the next sampling position (x_{k+1} + 1 = x_k + 2), the decision parameter for region 1 is evaluated as: p1_{k+1} = p1_k + 2·ry^2·x_{k+1} + ry^2 if p1_k < 0, or p1_{k+1} = p1_k + 2·ry^2·x_{k+1} − 2·rx^2·y_{k+1} + ry^2 otherwise; wherein y_{k+1} is y_k or y_k − 1 according to the sign of p1_k;
in region 2, sampling proceeds in unit steps in the negative y direction;
the elliptic function is evaluated at the next position (y_{k+1} − 1 = y_k − 2): p2_k = f_ellipse(x_k + 1/2, y_k − 1), and p2_{k+1} = p2_k − 2·rx^2·y_{k+1} + rx^2 if p2_k > 0, or p2_{k+1} = p2_k + 2·ry^2·x_{k+1} − 2·rx^2·y_{k+1} + rx^2 otherwise; wherein x_{k+1} takes the value x_k or x_k + 1 according to the sign of p2_k.
5. The intelligent stone identification method for unmanned crushing of excavators according to claim 4, characterized in that the correction of the center of gravity shift of the ellipse by the correction algorithm of the center of gravity shift of the ellipse comprises:
inputting rx, ry and the ellipse center (x_c, y_c), and obtaining the first point on the ellipse: (x_0, y_0) = (0, ry);
at each x_k position in region 1, starting from k = 0: if p1_k < 0, the next point of the ellipse centered at (0, 0) is (x_k + 1, y_k) and p1_{k+1} = p1_k + 2·ry^2·x_{k+1} + ry^2; otherwise, the next point along the ellipse is (x_k + 1, y_k − 1) and p1_{k+1} = p1_k + 2·ry^2·x_{k+1} − 2·rx^2·y_{k+1} + ry^2;
using the last point (x_0, y_0) calculated in region 1 to compute the initial value of the decision parameter in region 2: p2_0 = ry^2·(x_0 + 1/2)^2 + rx^2·(y_0 − 1)^2 − rx^2·ry^2;
at each y_k position in region 2, starting from k = 0: if p2_k > 0, the next point of the ellipse centered at (0, 0) is (x_k, y_k − 1) and p2_{k+1} = p2_k − 2·rx^2·y_{k+1} + rx^2; otherwise, the next point along the ellipse is (x_k + 1, y_k − 1) and p2_{k+1} = p2_k + 2·ry^2·x_{k+1} − 2·rx^2·y_{k+1} + rx^2;
performing the calculation using the same x and y increments as in region 1 until y = 0;
determining symmetry points in the other three quadrants;
moving each calculated pixel position (x, y) onto the elliptical trajectory centered at (x_c, y_c) and plotting the points according to the coordinate values:
x = x + x_c, y = y + y_c;
the redrawn ellipse is the ellipse after the center of gravity shift correction.
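The stepping scheme of claims 4 and 5 matches the classic midpoint ellipse rasterisation; a runnable sketch under that interpretation (names are illustrative, and the decision-parameter updates follow the formulas reconstructed above):

```python
def midpoint_ellipse(rx, ry, xc=0, yc=0):
    """Midpoint ellipse stepping: region 1 takes unit steps in x while
    |slope| < 1, region 2 takes unit steps in y; points are reflected
    into the other three quadrants and translated to (xc, yc)."""
    pts = set()

    def plot(x, y):
        for sx, sy in ((1, 1), (-1, 1), (1, -1), (-1, -1)):
            pts.add((xc + sx * x, yc + sy * y))

    x, y = 0, ry
    # region 1: p1_0 = f_ellipse(1, ry - 1/2) = ry^2 - rx^2*ry + rx^2/4
    p1 = ry * ry - rx * rx * ry + rx * rx / 4.0
    while 2 * ry * ry * x < 2 * rx * rx * y:
        plot(x, y)
        x += 1
        if p1 < 0:
            p1 += 2 * ry * ry * x + ry * ry
        else:
            y -= 1
            p1 += 2 * ry * ry * x - 2 * rx * rx * y + ry * ry
    # region 2: p2_0 = ry^2*(x + 1/2)^2 + rx^2*(y - 1)^2 - rx^2*ry^2
    p2 = ry * ry * (x + 0.5) ** 2 + rx * rx * (y - 1) ** 2 - rx * rx * ry * ry
    while y >= 0:
        plot(x, y)
        y -= 1
        if p2 > 0:
            p2 += rx * rx - 2 * rx * rx * y
        else:
            x += 1
            p2 += 2 * ry * ry * x - 2 * rx * rx * y + rx * rx
    return pts
```

For rx = 8, ry = 6, the generated point set includes the four axis extremes (±8, 0) and (0, ±6), as expected of a correct rasterisation.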
6. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step B comprises:
obtaining a constraint condition from the fact that the planes of the spatial circles fitted to the stone are parallel to each other, specifically: calculating the normal vector of the plane of the spatial circle; the cone formed by the spatial circle and the camera center intersects the plane of the spatial circle in a circle on that plane, yielding the parallelism constraint condition.
8. the intelligent stone identification method for unmanned crushing of excavators according to claim 7, characterized in that step B comprises:
calculating two sets of solutions for the position and attitude of the spatial circle feature from its elliptical projection in the camera image plane, wherein the position in each set of solutions corresponds to one attitude;
if the true normal vector of the spatial circle feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0; substituting n1 into the formula yields the attitude parameters of the spatial circle feature.
9. An intelligent stone recognition device, comprising: a memory for storing a computer program and a processor for invoking the computer program to perform the method of any of claims 1-8.
10. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111646639.3A CN114219801A (en) | 2021-12-30 | 2021-12-30 | Intelligent stone identification method and device for unmanned crushing of excavator |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111646639.3A CN114219801A (en) | 2021-12-30 | 2021-12-30 | Intelligent stone identification method and device for unmanned crushing of excavator |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114219801A true CN114219801A (en) | 2022-03-22 |
Family
ID=80706942
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111646639.3A Pending CN114219801A (en) | 2021-12-30 | 2021-12-30 | Intelligent stone identification method and device for unmanned crushing of excavator |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114219801A (en) |
-
2021
- 2021-12-30 CN CN202111646639.3A patent/CN114219801A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109270534B (en) | Intelligent vehicle laser sensor and camera online calibration method | |
CN106650708B (en) | Automatic driving obstacle vision detection method and system | |
JP6997066B2 (en) | Human detection system | |
CN109211207B (en) | Screw identification and positioning device based on machine vision | |
CN106650701B (en) | Binocular vision-based obstacle detection method and device in indoor shadow environment | |
CN112734765B (en) | Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors | |
CN105447853A (en) | Flight device, flight control system and flight control method | |
JP4275378B2 (en) | Stereo image processing apparatus and stereo image processing method | |
CN104484648A (en) | Variable-viewing angle obstacle detection method for robot based on outline recognition | |
WO2020182176A1 (en) | Method and apparatus for controlling linkage between ball camera and gun camera, and medium | |
JP6544257B2 (en) | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM | |
CN114241298A (en) | Tower crane environment target detection method and system based on laser radar and image fusion | |
CN113658241B (en) | Monocular structured light depth recovery method, electronic device and storage medium | |
CN110130987B (en) | Tunnel convergence deformation monitoring method based on image analysis | |
EP3154024A1 (en) | Human detection system for construction machine | |
CN114495068B (en) | Pavement health detection method based on human-computer interaction and deep learning | |
CN112132874A (en) | Calibration-board-free different-source image registration method and device, electronic equipment and storage medium | |
CN110634138A (en) | Bridge deformation monitoring method, device and equipment based on visual perception | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN113643345A (en) | Multi-view road intelligent identification method based on double-light fusion | |
CN112652020A (en) | Visual SLAM method based on AdaLAM algorithm | |
CN110499802A (en) | A kind of image-recognizing method and equipment for excavator | |
CN113701750A (en) | Fusion positioning system of underground multi-sensor | |
CN114219801A (en) | Intelligent stone identification method and device for unmanned crushing of excavator | |
CN116958218A (en) | Point cloud and image registration method and equipment based on calibration plate corner alignment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||