CN114219801A - Intelligent stone identification method and device for unmanned crushing of excavator - Google Patents

Intelligent stone identification method and device for unmanned crushing of excavator

Info

Publication number
CN114219801A
CN114219801A
Authority
CN
China
Prior art keywords
ellipse
image
center
crushing
unmanned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111646639.3A
Other languages
Chinese (zh)
Inventor
张彦群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Axwell Intelligent Technology Suzhou Co ltd
Original Assignee
Axwell Intelligent Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Axwell Intelligent Technology Suzhou Co ltd filed Critical Axwell Intelligent Technology Suzhou Co ltd
Priority to CN202111646639.3A priority Critical patent/CN114219801A/en
Publication of CN114219801A publication Critical patent/CN114219801A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30132 Masonry; Concrete
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Abstract

The invention relates to an intelligent stone identification method and device for unmanned crushing of an excavator, comprising the following steps: A. acquiring an image of the surrounding environment of the excavator; B. carrying out ellipse detection on the image, classifying the detected ellipses to determine the ellipse used for attitude detection, and obtaining the projection ellipse of the cylinder; C. carrying out ellipse center-of-gravity shift correction according to the actual contour of the stone block; D. calculating the coordinates of the stone block in the camera coordinate system according to the corrected ellipse. Stones in the surrounding environment of the excavator can thus be accurately identified, in preparation for subsequent automatic crushing work.

Description

Intelligent stone identification method and device for unmanned crushing of excavator
Technical Field
The invention relates to the technical field of image recognition, in particular to an intelligent stone recognition method and device for unmanned crushing of an excavator.
Background
Excavators are widely used on construction sites for mining, building construction, road and bridge construction and similar projects. In the prior art, an excavator is generally driven by a hydraulic system and operated manually; however, the harsh environment of a construction site makes the work uncomfortable for operators, and, with rising labor costs and the limits imposed by operator stamina, weather and other factors, the excavator cannot work around the clock, which causes a large waste of resources for enterprises and owners. For the crushing operation of an excavator, machine-vision recognition of the stone blocks to be crushed is an essential part of automating the breaking hammer: only when the stone to be crushed is accurately identified can the breaking hammer be guided smoothly to crush it and complete the automatic construction.
In the prior art, a target recognition algorithm or a template matching algorithm is usually adopted to recognize stones. Target recognition generally relies on feature-point extraction or template matching; SURF and SIFT are currently the most common local feature detection algorithms and are robust to illumination, noise and small viewing-angle changes. However, these two algorithms are prone to false detection of feature points, and a stone does not offer many distinctive feature points; when the number of local feature points is small, the performance of a feature-point detection algorithm drops markedly. A template matching algorithm searches for an optimal pose at which a similarity function between the template and the current image reaches an extremum; however, this kind of detection has a high computational and storage cost, and commonly used measures such as SCV, NCC and SSD are not robust to occlusion. Considering the complexity of the excavator working environment, where foreign objects easily cause occlusion, the template matching algorithm is not suitable.
Disclosure of Invention
The purpose of the invention is as follows:
in order to overcome the disadvantages pointed out in the background art, embodiments of the invention provide an intelligent stone identification method and device for unmanned crushing of an excavator, which effectively address the problems described above.
The technical scheme is as follows:
the intelligent stone identification method for unmanned crushing of the excavator comprises the following steps: A. acquiring an image of the surrounding environment of the excavator; B. carrying out ellipse detection on the image, classifying the detected ellipses to determine the ellipse used for attitude detection, and obtaining the projection ellipse of the cylinder; C. carrying out ellipse center-of-gravity shift correction according to the actual contour of the stone block; and D. calculating the coordinates of the stone block in the camera coordinate system according to the corrected ellipse.
As a preferred embodiment of the present invention, the step B includes:
carrying out Canny edge detection on the image to convert it into an edge image, constraining and detecting arc segments by the gradient direction, and aggregating the arc segments according to their curvature to generate ellipses.
As a preferred embodiment of the present invention, step a includes:
the exposure parameters during image acquisition are automatically adjusted, and the adjusting process is as follows:
indexing in a lookup table according to the ambient brightness during image shooting to query the image target brightness, wherein the lookup table is pre-established according to the optimal matching relationship between the ambient brightness and the image target brightness;
and calculating an exposure target output value according to the queried image target brightness, wherein the calculation formula is: Input = 255 × (Output/255)^Gamma, where Input is the image target brightness, Gamma = 2.2, and Output is the exposure target output value, i.e., the exposure brightness of 18% grey in the zone exposure method;
evaluating each parameter from multiple angles according to an optimized parameter algorithm, wherein the formula of the optimized parameter algorithm appears only as an image in the original filing; in that formula, p_i is the adjustment amount of the exposure time within the ambient-brightness interval T_ab, and the remaining image-only symbols denote, respectively, the exposure level that needs to be adjusted within T_ab, the effect of the grey-scale adjustment proportion within T_ab, and the negative effect of exposure adjustment within T_ab.
As a preferred embodiment of the present invention, step C includes:
carrying out ellipse gravity center shift correction by adopting an ellipse gravity center shift correction algorithm;
the ellipse center-of-gravity shift correction algorithm is divided into two areas and applied to a first quadrant: taking unit step length in the x direction in a first area with the slope absolute value smaller than 1, and taking unit step length in the y direction in a second area with the slope absolute value larger than 1;
take (x_c, y_c) = (0, 0) and define an elliptic function f_ellipse(x, y) (the expression appears only as an image in the original filing); f_ellipse(x, y) is the decision parameter;
starting from (0, r_y), take unit steps in the x direction until the boundary between the first and second regions, then switch to unit steps in the y direction to cover the remaining curve segment of the first quadrant, and test the slope of the curve at every step;
the slope equation and the relations that hold in the boundary area between the two regions are given as formula images, from which the condition for leaving the first region is obtained;
the decision function is evaluated at the shifted center of gravity to determine the next position along the elliptical trajectory;
at the next sampling position (x_{k+1} = x_k + 1), the decision parameter of the first region can be evaluated from the corresponding image-only expression, where y_{k+1} is y_k or y_k - 1 depending on the sign of p1_k: if p1_k < 0, one increment (given as an image) is applied; if p1_k ≥ 0, the other increment (given as an image) is applied;
in the second region, sampling proceeds in unit steps in the negative y direction, and the elliptic function is evaluated at the next position (y_{k+1} = y_k - 1) to obtain the second-region decision parameter, where x_{k+1} is x_k or x_k + 1 depending on the sign of p2_k.
As a preferred aspect of the present invention, the ellipse center-of-gravity shift correction performed with the ellipse center-of-gravity shift correction algorithm includes:
inputting r_x, r_y and the ellipse center (x_c, y_c), and obtaining the first point on the ellipse: (x_0, y_0) = (0, r_y);
calculating the initial value of the center-of-gravity shift decision parameter in the first region (the expression appears only as an image in the original filing);
at each x_k position in the first region, starting from k = 0: if p1_k < 0, the next point of the ellipse centered at (0, 0) is (x_{k+1}, y_k) and the decision parameter is updated with the corresponding image-only expression; otherwise, the next point along the ellipse is (x_{k+1}, y_k - 1) and the parameter is updated with the alternative image-only expression, in which the auxiliary increments are likewise given as images; this continues until the region-boundary condition (formula image) is met;
using the last point (x_0, y_0) calculated in the first region, calculating the initial value of the decision parameter in the second region (formula image);
at each y_k position in the second region, starting from k = 0: if p2_k > 0, the next point along the ellipse centered at (0, 0) is (x_k, y_k - 1) and the parameter is updated accordingly (formula image); otherwise, the next point along the ellipse is (x_{k+1}, y_k - 1) and the parameter is updated with the other image-only expression;
performing the calculation using the same x and y increments as in the first region until y = 0;
determining the symmetric points in the other three quadrants;
shifting each calculated pixel position (x, y) onto the elliptical path centered at (x_c, y_c) and plotting the points at the coordinate values:
x = x + x_c, y = y + y_c;
the redrawn ellipse is the ellipse after the center-of-gravity shift correction.
As a preferred embodiment of the present invention, the step B includes:
obtaining the constraint condition from the fact that the planes containing the spatial circles fitted to the stone are parallel to one another; specifically: calculating the normal vector of the plane containing the spatial circle, the cone formed by the spatial circle and the camera center intersecting that plane in a circle on the plane, thereby obtaining the parallelism constraint condition.
In a preferred embodiment of the present invention, the center coordinates [x'_0 y'_0 z'_0] of the spatial circle and the normal vector [n'_x n'_y n'_z] of the plane containing the spatial circle are given, respectively, by two expressions that appear only as images in the original filing.
as a preferred embodiment of the present invention, the step B includes:
calculating two sets of solutions for the position and attitude of the spatial circular feature from its elliptical projection on the camera imaging plane, where the position in each set of solutions corresponds to an attitude;
if the true normal vector of the spatial-circle feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0; substituting n1 into the formula yields the attitude parameters of the spatial circular feature.
The invention realizes the following beneficial effects:
1. According to the invention, the stone is treated as belonging to the ellipse class: ellipse detection is performed on the image, the detected ellipses are classified to determine the ellipse used for attitude detection and to obtain the projection ellipse of the cylinder, center-of-gravity shift correction of the ellipse is performed according to the actual contour of the stone, and the coordinates of the stone in the camera coordinate system are finally calculated from the corrected ellipse; stones in the surrounding environment of the excavator can thus be accurately identified, in preparation for the subsequent automatic crushing work.
2. According to the invention, when the camera acquires an image of the surrounding environment of the excavator, a camera-exposure automatic adjustment algorithm automatically adjusts the exposure parameters during image acquisition, so that different exposure parameters can be set automatically according to the ambient brightness and the contrast and sharpness of the image are improved; quality is thus ensured at the source of the picture, which helps to improve the subsequent recognition accuracy.
3. The invention adopts an adjacent-circle coplanarity constraint technique to eliminate the ambiguity of the single-circle pose, so that the stone attitude is determined uniquely.
Drawings
Fig. 1 is a schematic flow chart of a method for intelligently identifying a stone block according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the rock fitting circle detection pose estimation provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a single-circle attitude measurement according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a smart stone identification device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an excavator according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Referring to fig. 1 to 3, the present embodiment provides an intelligent stone identification method for unmanned crushing of an excavator, including the following steps:
A. acquiring an image of the surrounding environment of the excavator;
B. carrying out ellipse detection on the image, classifying the detected ellipses to determine the ellipse used for attitude detection, and obtaining the projection ellipse of the cylinder;
C. carrying out elliptical gravity center shift correction according to the actual contour of the stone block; and
D. calculating the coordinates of the stone block in the camera coordinate system according to the corrected ellipse.
In one mode, a camera is used in step A to capture an image of the surrounding environment of the excavator; the captured image is then transmitted to an image processor, which performs steps B, C and D above on it.
The camera used in this embodiment may be installed on the excavator, or may be installed independently at a spatial distance from the excavator, and the image transmission between the camera and the image processor may be wired or wireless. In a preferred manner, the cameras used in this embodiment consist of a left camera and a right camera, each of which acquires an image and transmits it to the image processor. The camera used in this embodiment may be an independent camera, a camera carried by a mobile device, the camera of a dome camera, the camera of an Augmented Reality (AR)/Virtual Reality (VR) device, or the like, which is not limited in this application.
In some embodiments, when the camera acquires the image of the surrounding environment of the excavator, the camera exposure automatic adjustment algorithm is adopted to automatically adjust the exposure parameters during image acquisition, so that automatic adjustment of different exposure parameters can be performed according to the environment brightness.
In one way, the specific adjustment process described for automatic adjustment of different exposure parameters according to ambient brightness is:
(1) and indexing in a lookup table according to the ambient brightness during image shooting to query the target brightness of the image, wherein the lookup table is pre-established according to the optimal matching relationship between the ambient brightness and the target brightness of the image.
When the camera shoots an image, the current ambient brightness, i.e., the illumination value, is detected by an ambient-brightness detection device or sensor, and the detected ambient brightness is then used to index the lookup table and query the image target brightness; the queried image target brightness is the optimal image brightness corresponding to that ambient brightness. The relationship between ambient brightness and image target brightness in the lookup table may be an interval correspondence, for example an ambient-brightness interval mapped to an image-target-brightness interval, an ambient-brightness interval mapped to a single image target brightness, or a single ambient brightness mapped to an image-target-brightness interval; a non-interval, i.e., single-value, correspondence is also possible.
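As an illustration of the interval-style lookup described above, a minimal sketch follows; the brightness intervals and target values are hypothetical placeholders, since the patent's table is built from a pre-measured optimal matching relationship between ambient brightness and image target brightness.

```python
import bisect

# Hypothetical lookup table: upper bounds of ambient-brightness intervals (lux)
# and the image target brightness (0-255) assigned to each interval.
AMBIENT_UPPER_BOUNDS = [50, 200, 800, 3200, 12800]    # lux (illustrative)
TARGET_BRIGHTNESS    = [60, 80, 100, 118, 128, 140]   # one more entry than bounds

def query_target_brightness(ambient_lux: float) -> int:
    """Index the lookup table with the measured ambient brightness."""
    idx = bisect.bisect_right(AMBIENT_UPPER_BOUNDS, ambient_lux)
    return TARGET_BRIGHTNESS[idx]

print(query_target_brightness(500.0))  # falls in the 200-800 lux interval -> 100
```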
(2) And calculating an exposure target output value according to the queried image target brightness, wherein the calculation formula is: Input = 255 × (Output/255)^Gamma, where Input is the image target brightness, Gamma = 2.2, and Output is the exposure target output value, i.e., the exposure brightness of 18% grey in the zone exposure method.
In the zone exposure method, the gradation from black to white is divided into 11 zones (0, I, II, III, IV, V, VI, VII, VIII, IX, X, where pure black is zone 0 and pure white is zone X), and the middle zone V is regarded as a moderate exposure intensity, called middle grey. The light reflectance of the zone V patch is 18%, i.e., 18% grey by definition.
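A minimal sketch of the conversion in step (2), obtained by solving Input = 255 × (Output/255)^Gamma for Output with Gamma = 2.2; the function name and the example value are illustrative, not from the patent.

```python
def exposure_target_output(image_target_brightness: float, gamma: float = 2.2) -> float:
    """Invert Input = 255 * (Output / 255) ** gamma to get the exposure target
    output value, i.e. the exposure brightness of 18% grey in the zone exposure method."""
    return 255.0 * (image_target_brightness / 255.0) ** (1.0 / gamma)

# Example: an image target brightness of 118 gives an exposure target output of about 179.7
print(round(exposure_target_output(118.0), 1))
```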
(3) Evaluating each parameter from multiple angles according to an optimized parameter algorithm; the formula of the optimized parameter algorithm appears only as an image in the original filing. In that formula, p_i is the adjustment amount of the exposure time within the ambient-brightness interval T_ab, and the remaining image-only symbols denote, respectively, the exposure level that needs to be adjusted within T_ab, the effect of the grey-scale adjustment proportion within T_ab, and the negative effect of exposure adjustment within T_ab.
The real-time exposure is adjusted according to this formula, so that the contrast and sharpness of the image are improved; quality is guaranteed at the source of the picture, which helps the subsequent identification accuracy.
In practical applications the stone may have many irregular edge features, and false edges may also arise from noise, illumination, shooting angle and other factors. The edges in the image therefore need to be classified, and a reasonable edge feature is selected to establish the object coordinate system and estimate the target attitude. The stone concerned in the invention is classified as a solid of revolution, a structure on which the planes of the spatial circles are parallel to one another and the straight line through the centers of those circles is perpendicular to the planes. On this basis the problem belongs to the ellipse class: the parallelism and perpendicularity constraints of the projected ellipses are used to obtain the projection of the upper spatial circle, which leads to a spatial multi-ellipse attitude estimation method. Steps B, C and D above are implemented to identify the stone in the image, namely: first, ellipse detection is performed on the image; the detected ellipses are then classified with a random sample consensus (RANSAC) algorithm to obtain the projection ellipse of the cylinder; center-of-gravity shift correction of the ellipse is then performed according to the actual contour of the stone; and finally the coordinates of the stone in the camera coordinate system are calculated from the corrected ellipses.
The ellipse is detected with an algorithm based on arc-segment extraction; the algorithm combines arc-segment extraction and classification, is computationally simple and very fast. In step B, Canny edge detection is first performed on the target image to convert it into an edge image, arc segments are then constrained and detected by the gradient direction, and finally the arc segments are aggregated according to their curvature to generate ellipses. Once the ellipses in the image have been obtained by detection, they need to be classified so that the reasonable ellipses used for attitude detection are distinguished from the many interfering or pseudo ellipses.
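The patent's detector extracts arc segments and aggregates them by curvature; as a rough, simplified stand-in (not the patent's exact algorithm), the sketch below runs Canny edge detection with OpenCV and fits ellipses to the resulting contours, discarding degenerate fits. The threshold values and the minimum contour length are illustrative assumptions.

```python
import cv2

def detect_candidate_ellipses(image_bgr, canny_low=50, canny_high=150, min_points=20):
    """Return a list of ((cx, cy), (w, h), angle) ellipse candidates from an edge image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)               # edge image
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    ellipses = []
    for c in contours:
        if len(c) < min_points:                                  # fitEllipse needs >= 5 points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        if min(w, h) > 1e-3 and max(w, h) / min(w, h) < 6.0:     # reject needle-thin fits
            ellipses.append(((cx, cy), (w, h), angle))
    return ellipses
```

The returned candidates would still have to be classified (e.g. with RANSAC, as described above) before attitude detection.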
In some embodiments, step C comprises: performing ellipse center-of-gravity shift correction with an ellipse center-of-gravity shift correction algorithm. The algorithm divides the first quadrant into two regions: in the first region, where the absolute value of the slope is smaller than 1, unit steps are taken in the x direction; in the second region, where the absolute value of the slope is larger than 1, unit steps are taken in the y direction;
take (x_c, y_c) = (0, 0) and define an elliptic function f_ellipse(x, y) (the expression appears only as an image in the original filing); f_ellipse(x, y) is the decision parameter;
starting from (0, r_y), take unit steps in the x direction until the boundary between the first and second regions, then switch to unit steps in the y direction to cover the remaining curve segment of the first quadrant, and test the slope of the curve at every step;
the slope equation and the relations that hold in the boundary area between the two regions are given as formula images, and it follows that the condition for leaving the first region is the image-only inequality;
the decision function is evaluated at the shifted center of gravity to determine the next position along the elliptical trajectory;
at the next sampling position (x_{k+1} = x_k + 1), the decision parameter of the first region can be evaluated from the corresponding image-only expression, where y_{k+1} is y_k or y_k - 1 depending on the sign of p1_k: if p1_k < 0, one increment (given as an image) is applied; if p1_k ≥ 0, the other increment (given as an image) is applied;
in the second region, sampling proceeds in unit steps in the negative y direction, and the elliptic function is evaluated at the next position (y_{k+1} = y_k - 1) to obtain the second-region decision parameter, where x_{k+1} is x_k or x_k + 1 depending on the sign of p2_k.
The specific process of performing the ellipse center-of-gravity shift correction with the ellipse center-of-gravity shift correction algorithm is as follows:
(1) Input r_x, r_y and the ellipse center (x_c, y_c), and obtain the first point on the ellipse: (x_0, y_0) = (0, r_y).
(2) Calculate the initial value of the center-of-gravity shift decision parameter in the first region (the expression appears only as an image in the original filing).
(3) At each x_k position in the first region, starting from k = 0: if p1_k < 0, the next point of the ellipse centered at (0, 0) is (x_{k+1}, y_k) and the decision parameter is updated with the corresponding image-only expression; otherwise, the next point along the ellipse is (x_{k+1}, y_k - 1) and the parameter is updated with the alternative image-only expression, in which the auxiliary increments are likewise given as images; this continues until the region-boundary condition (formula image) is met.
(4) Using the last point (x_0, y_0) calculated in the first region, calculate the initial value of the decision parameter in the second region (formula image).
(5) At each y_k position in the second region, starting from k = 0: if p2_k > 0, the next point along the ellipse centered at (0, 0) is (x_k, y_k - 1) and the parameter is updated accordingly (formula image); otherwise, the next point along the ellipse is (x_{k+1}, y_k - 1) and the parameter is updated with the other image-only expression. The calculation uses the same x and y increments as in the first region and continues until y = 0.
(6) Determine the symmetric points in the other three quadrants.
(7) Shift each calculated pixel position (x, y) onto the elliptical path centered at (x_c, y_c) and plot the points at the coordinate values: x = x + x_c, y = y + y_c.
(8) The redrawn ellipse is the ellipse after the center-of-gravity shift correction.
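The decision-parameter expressions referenced in steps (2) to (5) appear only as images in the original filing. As a plausibility sketch, the Python below follows the two-region, unit-step structure described above and assumes the standard midpoint-ellipse update rules for the decision parameters; it is not claimed to be the patent's exact correction, and the function name and example values are illustrative.

```python
def raster_ellipse(rx: int, ry: int, xc: int, yc: int):
    """Plot an ellipse with semi-axes rx, ry centred at (xc, yc), two-region unit-step style."""
    points = set()

    def plot_quadrants(x, y):
        # Symmetric points in all four quadrants, shifted to the centre (xc, yc).
        for sx in (1, -1):
            for sy in (1, -1):
                points.add((xc + sx * x, yc + sy * y))

    x, y = 0, ry
    # Region 1: |slope| < 1, unit steps in x.
    # Assumed standard initial decision parameter: p1_0 = ry^2 - rx^2*ry + rx^2/4.
    p1 = ry * ry - rx * rx * ry + 0.25 * rx * rx
    dx, dy = 2 * ry * ry * x, 2 * rx * rx * y
    while dx < dy:                       # region boundary: 2*ry^2*x >= 2*rx^2*y
        plot_quadrants(x, y)
        x += 1
        dx += 2 * ry * ry
        if p1 < 0:
            p1 += dx + ry * ry
        else:
            y -= 1
            dy -= 2 * rx * rx
            p1 += dx - dy + ry * ry
    # Region 2: |slope| > 1, unit steps in the negative y direction.
    # Assumed standard initial parameter: p2_0 = ry^2*(x+1/2)^2 + rx^2*(y-1)^2 - rx^2*ry^2.
    p2 = ry * ry * (x + 0.5) ** 2 + rx * rx * (y - 1) ** 2 - rx * rx * ry * ry
    while y >= 0:
        plot_quadrants(x, y)
        y -= 1
        dy -= 2 * rx * rx
        if p2 > 0:
            p2 += rx * rx - dy
        else:
            x += 1
            dx += 2 * ry * ry
            p2 += dx - dy + rx * rx
    return points

# Example: rasterise a corrected ellipse with rx = 8, ry = 6 centred at (20, 30)
pts = raster_ellipse(8, 6, 20, 30)
```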
In some embodiments, step B comprises: obtaining the constraint condition from the fact that the planes containing the spatial circles fitted to the stone are parallel to one another; specifically, the normal vector of the plane containing the spatial circle is calculated, and the cone formed by the spatial circle and the camera center intersects that plane in a circle on the plane, which yields the parallelism constraint condition.
The projection of a spatial circular feature on the camera imaging plane is an ellipse; assume that the radius of the circular feature is R. When a plane cuts the elliptic cone in a circle of radius R, the center coordinate of that section is the position of the spatial circular feature, and the normal vector of the section circle contains the attitude information of the spatial circular feature. Under a monocular camera, however, the solution of single-circle-feature pose measurement is not unique, as shown in fig. 3, where E is the elliptical projection of the spatial circular feature on the imaging plane and C1 and C2 are two spatial circular features corresponding to that projection; without constraint conditions, the correspondence between E and C1 or C2 cannot be determined. The center coordinates [x'_0 y'_0 z'_0] of the spatial circle and the normal vector [n'_x n'_y n'_z] of the plane containing it can be obtained from two expressions that appear only as images in the original filing.
in some embodiments, in order to eliminate ambiguity of the single-circle pose, a close-proximity circle-to-plane constraint technique is used to achieve certainty of the stone pose, i.e. step B further includes: calculating two groups of solutions of the space circle feature position and the attitude according to the elliptical projection of the space circle feature under the plane of the camera, wherein the position in each group of solutions corresponds to the attitude; if the real normal vector of the space circle feature plane in the camera coordinate system is n1, n1 is approximately equal to kN, and k is not equal to 0; substituting n1 into the formula calculates the attitude parameters of the space circle feature.
Specifically, two sets of solutions for the position and attitude of the spatial circular feature can be calculated from the elliptical projection of the single circular feature on the camera imaging plane, where the position in each set corresponds to an attitude, so the true pose solution of the circular-feature object can be obtained by eliminating the false attitude. Let the normal vectors of the corresponding circle-feature planes in the camera coordinate system be n1 and n2. Because the plane of the adjacent circle is coplanar with and parallel to the plane of the circle feature, the true normal vector of the circle-feature plane is parallel, in the camera coordinate system, to the plane normal obtained from the rectangular constraint. If the true normal vector of the circle-feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0. With this constraint, the false pose solution of the circular feature can be effectively eliminated, and substituting n1 yields the pose parameters of the circular feature. From this derivation, under this minimal constraint condition, the false solution of the circular pose can be eliminated without knowing the geometric dimensions of the rectangle or its coordinates in the world coordinate system.
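As an illustration of how this constraint could resolve the two-fold ambiguity, the sketch below picks, from the two candidate normals, the one most nearly parallel to a reference normal N supplied by the adjacent coplanar circle (n1 ≈ kN, k ≠ 0). The function name, angular tolerance and use of NumPy are assumptions, not part of the patent.

```python
import numpy as np

def pick_true_normal(n_candidates, n_reference, tol_deg: float = 10.0):
    """Return the candidate normal most nearly parallel to the reference normal N,
    provided it lies within the angular tolerance; otherwise return None."""
    n_ref = np.asarray(n_reference, dtype=float)
    n_ref = n_ref / np.linalg.norm(n_ref)
    best, best_angle = None, None
    for n in n_candidates:
        n = np.asarray(n, dtype=float)
        n_unit = n / np.linalg.norm(n)
        # Angle between the lines spanned by the vectors (the sign of k is irrelevant).
        angle = np.degrees(np.arccos(np.clip(abs(n_unit @ n_ref), -1.0, 1.0)))
        if best_angle is None or angle < best_angle:
            best, best_angle = n, angle
    return best if best_angle is not None and best_angle <= tol_deg else None
```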
Accurate locking of the key positioning target position is thus achieved under different illumination and viewing-angle changes, and the accurate pose can be obtained from this mutual-relation information.
In some embodiments, the coordinates of the stone in the camera coordinate system are acquired with a binocular vision system, whose application includes the steps of camera calibration, stereo rectification, stereo matching and three-dimensional reconstruction.
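A condensed sketch of that binocular pipeline using OpenCV (an assumption; the patent does not name a library). The calibration results K1, D1, K2, D2, R, T are taken as already available from camera calibration, and the matcher parameters are illustrative.

```python
import cv2
import numpy as np

def stone_point_cloud(left_img, right_img, K1, D1, K2, D2, R, T):
    """Rectify a calibrated stereo pair, match it, and reproject to 3-D (left-camera frame)."""
    size = left_img.shape[1], left_img.shape[0]
    # Stereo rectification from the calibration results.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)
    # Semi-global block matching; numDisparities and blockSize are illustrative values.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(cv2.cvtColor(rect_l, cv2.COLOR_BGR2GRAY),
                                cv2.cvtColor(rect_r, cv2.COLOR_BGR2GRAY)).astype(np.float32) / 16.0
    # Reproject the disparity map to 3-D points in the left camera coordinate system.
    return cv2.reprojectImageTo3D(disparity, Q)
```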
In some embodiments, referring to fig. 4, there is provided a stone intelligent recognition device 10, comprising: a memory 101 and a processor 102, wherein the memory 101 is used for storing a computer program, and the processor 102 is used for calling the computer program to execute the corresponding steps of the intelligent stone identification method provided by the embodiment.
In some embodiments, referring to fig. 5, an excavator 20 is provided that includes a central controller 201 and a vision module 202. The vision module 202 is used for providing image signals for the central controller 201, and comprises an image processor 203 connected with the central controller 201 and cameras 204 and 205 arranged on the excavator 20, wherein the cameras 204 and 205 are connected with the image processor 203.
In some embodiments, the excavator 20 further includes a supplementary lighting source 206 connected to the central controller 201. The supplementary lighting source 206 is used for areas with low illuminance, and the central controller 201 can control it according to the image information, so as to improve the illumination intensity of the working area, facilitate the operation of the excavator 20, and help the vision module 202 obtain a clear image.
The image processor 203, the left and right cameras 204, 205 and the supplementary lighting source 206 in the excavator are combined to form the vision module 202 of the present invention, which is used for connecting with the central controller 201, realizing the identification of stones around the excavator 20, further determining the working surface and the operation target stones, and realizing the distance measurement of the operation target stones.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored, which, when running on a computer, performs the respective steps of the intelligent stone identification method provided by the present embodiment.
Referring to FIG. 6, a computer readable storage medium includes a computer program for executing a computer process on a computing device.
In some embodiments, the computer-readable storage medium is provided using a signal bearing medium 300. The signal bearing medium 300 may comprise one or more program instructions which, when executed by one or more processors, may provide the functions, or portions of the functions, described above with respect to fig. 1. Thus, for example, one or more of the steps A to D described with reference to fig. 1 may be undertaken by one or more instructions associated with the signal bearing medium 300; the program instructions in fig. 6 likewise describe example instructions. In some examples, the signal bearing medium 300 may comprise a computer-readable medium 301 such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disc (DVD), a digital tape, a memory, a read-only memory (ROM), a random-access memory (RAM), or the like.
In some implementations, the signal bearing medium 300 may comprise a computer-recordable medium 302 such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and so forth.
In some implementations, the signal bearing medium 300 may include a communication medium 303 such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
The signal bearing medium 300 may be conveyed by a wireless form of the communication medium 303, such as a wireless communication medium conforming to the IEEE 802.11 standard or another transmission protocol. The one or more program instructions may be, for example, computer-executable instructions or logic-implementing instructions.
It should be understood that the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and that some elements may be omitted altogether depending upon the desired results. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. The intelligent stone identification method for unmanned crushing of the excavator is characterized by comprising the following steps of:
A. acquiring an image of the surrounding environment of the excavator;
B. carrying out ellipse detection on the image, classifying the detected ellipses to determine the ellipse used for attitude detection, and obtaining the projection ellipse of the cylinder;
C. carrying out elliptical gravity center shift correction according to the actual contour of the stone block; and
D. calculating the coordinates of the stone block in the camera coordinate system according to the corrected ellipse.
2. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step B comprises:
carrying out Canny edge detection on the image to convert it into an edge image, constraining and detecting arc segments by the gradient direction, and aggregating the arc segments according to their curvature to generate an ellipse.
3. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step a comprises:
the exposure parameters during image acquisition are automatically adjusted, and the adjusting process is as follows:
indexing in a lookup table according to the ambient brightness during image shooting to query the image target brightness, wherein the lookup table is pre-established according to the optimal matching relationship between the ambient brightness and the image target brightness;
and calculating an exposure target output value according to the inquired image target brightness, wherein the calculation formula is as follows:
Input = 255 × (Output/255)^Gamma, where Input is the image target brightness, Gamma = 2.2, and Output is the exposure target output value, i.e., the exposure brightness of 18% grey in the zone exposure method;
evaluating each parameter from multiple angles according to an optimized parameter algorithm, wherein the formula of the optimized parameter algorithm appears only as an image in the original filing; in that formula, p_i is the adjustment amount of the exposure time within the ambient-brightness interval T_ab, and the remaining image-only symbols denote, respectively, the exposure level that needs to be adjusted within T_ab, the effect of the grey-scale adjustment proportion within T_ab, and the negative effect of exposure adjustment within T_ab.
4. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step C comprises:
carrying out ellipse gravity center shift correction by adopting an ellipse gravity center shift correction algorithm;
the ellipse center-of-gravity shift correction algorithm is divided into two areas and applied to a first quadrant: taking unit step length in the x direction in a first area with the slope absolute value smaller than 1, and taking unit step length in the y direction in a second area with the slope absolute value larger than 1;
take (x_c, y_c) = (0, 0) and define an elliptic function f_ellipse(x, y) (the expression appears only as an image in the original filing); f_ellipse(x, y) is the decision parameter;
starting from (0, r_y), take unit steps in the x direction until the boundary between the first and second regions, then switch to unit steps in the y direction to cover the remaining curve segment of the first quadrant, and test the slope of the curve at every step;
the slope equation and the relations that hold in the boundary area between the two regions are given as formula images, and it follows that the condition for leaving the first region is the image-only inequality;
the decision function is evaluated at the shifted center of gravity to determine the next position along the elliptical trajectory;
at the next sampling position (x_{k+1} = x_k + 1), the decision parameter of the first region can be evaluated from the corresponding image-only expression, where y_{k+1} is y_k or y_k - 1 depending on the sign of p1_k: if p1_k < 0, one increment (given as an image) is applied; if p1_k ≥ 0, the other increment (given as an image) is applied;
in the second region, sampling proceeds in unit steps in the negative y direction, and the elliptic function is evaluated at the next position (y_{k+1} = y_k - 1) to obtain the second-region decision parameter, where x_{k+1} is x_k or x_k + 1 depending on the sign of p2_k.
5. The intelligent stone identification method for unmanned crushing of excavators according to claim 4, characterized in that the correction of the center of gravity shift of the ellipse by the correction algorithm of the center of gravity shift of the ellipse comprises:
inputting r_x, r_y and the ellipse center (x_c, y_c), and obtaining the first point on the ellipse: (x_0, y_0) = (0, r_y);
calculating the initial value of the center-of-gravity shift decision parameter in the first region (the expression appears only as an image in the original filing);
at each x_k position in the first region, starting from k = 0: if p1_k < 0, the next point of the ellipse centered at (0, 0) is (x_{k+1}, y_k) and the decision parameter is updated with the corresponding image-only expression; otherwise, the next point along the ellipse is (x_{k+1}, y_k - 1) and the parameter is updated with the alternative image-only expression, in which the auxiliary increments are likewise given as images; this continues until the region-boundary condition (formula image) is met;
using the last point (x_0, y_0) calculated in the first region, calculating the initial value of the decision parameter in the second region (formula image);
at each y_k position in the second region, starting from k = 0: if p2_k > 0, the next point along the ellipse centered at (0, 0) is (x_k, y_k - 1) and the parameter is updated accordingly (formula image); otherwise, the next point along the ellipse is (x_{k+1}, y_k - 1) and the parameter is updated with the other image-only expression;
performing the calculation using the same x and y increments as in the first region until y = 0;
determining the symmetric points in the other three quadrants;
shifting each calculated pixel position (x, y) onto the elliptical path centered at (x_c, y_c) and plotting the points at the coordinate values:
x = x + x_c, y = y + y_c;
the redrawn ellipse is the ellipse after the center-of-gravity shift correction.
6. The intelligent stone identification method for unmanned crushing of excavators according to claim 1, characterized in that step B comprises:
obtaining the constraint condition from the fact that the planes containing the spatial circles fitted to the stone are parallel to one another; specifically: calculating the normal vector of the plane containing the spatial circle, the cone formed by the spatial circle and the camera center intersecting that plane in a circle on the plane, thereby obtaining the parallelism constraint condition.
7. The intelligent stone identification method for unmanned crushing of excavators according to claim 6, characterized in that the center coordinates [x'_0 y'_0 z'_0] of the spatial circle and the normal vector [n'_x n'_y n'_z] of the plane containing the spatial circle are given, respectively, by two expressions that appear only as images in the original filing.
8. the intelligent stone identification method for unmanned crushing of excavators according to claim 7, characterized in that step B comprises:
calculating two sets of solutions for the position and attitude of the spatial circular feature from its elliptical projection on the camera imaging plane, where the position in each set of solutions corresponds to an attitude;
if the true normal vector of the spatial-circle feature plane in the camera coordinate system is n1, then n1 ≈ kN with k ≠ 0; substituting n1 into the formula yields the attitude parameters of the spatial circular feature.
9. An intelligent stone recognition device, comprising: a memory for storing a computer program and a processor for invoking the computer program to perform the method of any of claims 1-8.
10. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method of any one of claims 1 to 8.
CN202111646639.3A 2021-12-30 2021-12-30 Intelligent stone identification method and device for unmanned crushing of excavator Pending CN114219801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111646639.3A CN114219801A (en) 2021-12-30 2021-12-30 Intelligent stone identification method and device for unmanned crushing of excavator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111646639.3A CN114219801A (en) 2021-12-30 2021-12-30 Intelligent stone identification method and device for unmanned crushing of excavator

Publications (1)

Publication Number Publication Date
CN114219801A true CN114219801A (en) 2022-03-22

Family

ID=80706942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111646639.3A Pending CN114219801A (en) 2021-12-30 2021-12-30 Intelligent stone identification method and device for unmanned crushing of excavator

Country Status (1)

Country Link
CN (1) CN114219801A (en)

Similar Documents

Publication Publication Date Title
CN109270534B (en) Intelligent vehicle laser sensor and camera online calibration method
CN106650708B (en) Automatic driving obstacle vision detection method and system
JP6997066B2 (en) Human detection system
CN109211207B (en) Screw identification and positioning device based on machine vision
CN106650701B (en) Binocular vision-based obstacle detection method and device in indoor shadow environment
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
CN105447853A (en) Flight device, flight control system and flight control method
JP4275378B2 (en) Stereo image processing apparatus and stereo image processing method
CN104484648A (en) Variable-viewing angle obstacle detection method for robot based on outline recognition
WO2020182176A1 (en) Method and apparatus for controlling linkage between ball camera and gun camera, and medium
JP6544257B2 (en) INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
CN114241298A (en) Tower crane environment target detection method and system based on laser radar and image fusion
CN113658241B (en) Monocular structured light depth recovery method, electronic device and storage medium
CN110130987B (en) Tunnel convergence deformation monitoring method based on image analysis
EP3154024A1 (en) Human detection system for construction machine
CN114495068B (en) Pavement health detection method based on human-computer interaction and deep learning
CN112132874A (en) Calibration-board-free different-source image registration method and device, electronic equipment and storage medium
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN111998862A (en) Dense binocular SLAM method based on BNN
CN113643345A (en) Multi-view road intelligent identification method based on double-light fusion
CN112652020A (en) Visual SLAM method based on AdaLAM algorithm
CN110499802A (en) A kind of image-recognizing method and equipment for excavator
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN114219801A (en) Intelligent stone identification method and device for unmanned crushing of excavator
CN116958218A (en) Point cloud and image registration method and equipment based on calibration plate corner alignment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination