CN117893966A - Vision-based crane swing identification method and device, and computing device
- Publication number: CN117893966A
- Application number: CN202311764162.8A
- Authority: CN (China)
- Prior art keywords: image, target plate, swing, camera, target
- Legal status: Pending
Classifications
- G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V10/12: Details of acquisition arrangements; Constructional details thereof
- G06V10/761: Proximity, similarity or dissimilarity measures
Abstract
The application relates to the technical field of cranes, and in particular to a vision-based crane swing identification method and device, and a computing device. The method comprises the following steps: acquiring a first image of a target plate through a camera; establishing a coordinate system with the target plate center point in the first image as the origin, and determining a first vector corresponding to a reference pattern on the target plate; acquiring a second image of the target plate through the camera; determining a second pixel coordinate of the center point of the target plate in the second image and a second vector corresponding to the reference pattern; and carrying out swing identification according to the second pixel coordinates, the first vector, the second vector, the camera parameters and the preset duration. By introducing a target plate and acquiring images of it, the method identifies the swing of the crane; the equipment is simple to install and maintain, the identification accuracy is high, various data can be detected and identified, and detection and identification efficiency is improved.
Description
Technical Field
The invention belongs to the technical field of cranes, and particularly relates to a vision-based crane swing identification method and device, and a computing device.
Background
With the continuous development of industrial intelligence, unmanned crane operation is becoming increasingly common, and anti-sway control is one of the key technologies in an unmanned crane control system. A closed-loop anti-sway system places very high demands on safety and control precision; within it, sway identification and detection are particularly important and affect the overall operating performance of the crane.
The result of sway identification and detection influences the control stage of the entire anti-sway control system. However, anti-sway systems are mostly applied in complex environments where installation and maintenance are difficult and interference is abundant, which makes crane swing identification challenging.
Disclosure of Invention
In view of the above problems in the prior art, the present application provides a vision-based crane swing identification method, device and computing device, which are efficient and accurate in identification, simple to operate, easy to install and maintain, and able to adapt to harsh on-site working conditions.
In order to achieve the above purpose, a first aspect of the present application provides a vision-based crane swing identification method, where the crane includes a lifting rope, a lifting hook located at the lower end of the lifting rope, a lifting rope support device located at the upper end of the lifting rope, a camera is disposed on the support device, the field of view of the camera is vertically downward, and a target plate containing a reference pattern is fixedly disposed at the lower end of the lifting rope; the crane swing identification method comprises the following steps:
Acquiring a first image of the target plate through the camera, wherein the first image comprises an image of the target plate when the target plate is static;
establishing a coordinate system by taking a target plate center point in the first image as an origin, and determining a first vector corresponding to the reference pattern on the target plate;
acquiring a second image of the target plate through the camera, wherein the second image comprises an image of the target plate after a preset time period;
determining a second pixel coordinate of a center point of the target plate in the second image and a second vector corresponding to the reference pattern;
swing identification is carried out according to the second pixel coordinates, the first vector, the second vector, parameters of the camera and preset duration; the swing identification includes determining a swing value of the target plate according to the second pixel coordinates, determining a rotation angle of the target plate according to the first vector and the second vector, determining a swing angle of the target plate according to the swing value and a parameter of the camera, and determining a swing speed of the target plate according to the swing value and the preset duration.
In this embodiment, by introducing a target plate and acquiring images of it, the swing of the crane hook is identified; the equipment is simple to install and maintain, the identification accuracy is high, multiple kinds of data can be detected and identified, and detection and identification efficiency is improved.
As a possible implementation manner of the first aspect, determining a swing value of the target board according to the second pixel coordinate includes:
n1: determining a distance between the camera and the target plate; satisfies the following formula:
z=focal×length_side/(d_pix×side)
wherein z is the distance between the camera and the target plate, focal is the focal length of the camera lens, length_side is the actual length of the target plate, d_pix is the pixel size of the camera, and side is the pixel length of the target plate in the standard image;
n2: determining the swing value; the method comprises the steps of X-axis swing value X and Y-axis swing value Y, and the following formula is satisfied:
x=(Xc-frame_width/2)/side
y=(Yc-frame_height/2)/side
wherein, (Xc, Yc) is the second pixel coordinate, and frame_width and frame_height are the X-axis direction pixel length and Y-axis direction pixel length of the target plate.
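For illustration only (this is not part of the claimed method), the distance and swing-value formulas above can be sketched in Python as follows; the function and parameter names are assumptions chosen to mirror the symbols in the formulas.

```python
def swing_values(xc, yc, focal, length_side, d_pix, side, frame_width, frame_height):
    """Illustrative sketch of steps N1-N2 (all names are assumptions).

    xc, yc      -- second pixel coordinate of the target-plate center point
    focal       -- focal length of the camera lens
    length_side -- actual length of the target plate
    d_pix       -- pixel size of the camera sensor
    side        -- pixel length of the target plate in the standard image
    frame_width, frame_height -- X- and Y-direction pixel lengths of the target plate
    """
    # N1: distance between the camera and the target plate
    z = focal * length_side / (d_pix * side)
    # N2: X-axis and Y-axis swing values
    x = (xc - frame_width / 2) / side
    y = (yc - frame_height / 2) / side
    return z, x, y
```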
In this embodiment, not only can the swing value be identified, but the detailed state of the crane can also be grasped by identifying the swing of the target plate separately along the different coordinate directions; in addition, this can improve the accuracy of anti-sway adjustment during the anti-sway process.
As a possible implementation manner of the first aspect, determining the rotation angle of the target plate according to the first vector and the second vector includes:
And determining an included angle between the first vector and the second vector as a rotation angle of the target plate.
In the embodiment, the rotation angle of the target plate is obtained by introducing the vector corresponding to the reference pattern on the target plate and calculating the vector change, so that the method is simple, convenient and easy to realize and has high accuracy.
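A minimal sketch of the included-angle calculation described above, assuming the two reference-pattern vectors are given as 2-D coordinate pairs; the function name is hypothetical.

```python
import numpy as np

def rotation_angle(first_vec, second_vec):
    """Included angle, in radians, between the first and second vectors."""
    a = np.asarray(first_vec, dtype=float)
    b = np.asarray(second_vec, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```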
As a possible implementation manner of the first aspect, the swing angle of the target board is determined according to the swing value and the camera parameter, including an X-axis swing angle and a Y-axis swing angle, and the following formula is satisfied:
θ_x = arcsin(x/z)
θ_y = arcsin(y/z)
wherein θ_x represents the X-axis swing angle, θ_y represents the Y-axis swing angle, x represents the X-axis swing value, y represents the Y-axis swing value, and z represents the distance between the camera and the target plate.
In this embodiment, not only can the swing angle be identified, but the detailed state of the crane can also be grasped by identifying the swing angle of the target plate separately along the different coordinate directions; in addition, this can improve the accuracy of anti-sway adjustment during the anti-sway process.
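A corresponding sketch of the swing-angle formulas, assuming x, y and z are expressed in consistent units so that x/z and y/z stay within [-1, 1]:

```python
import math

def swing_angles(x, y, z):
    """X-axis and Y-axis swing angles (radians) from the swing values and the distance z."""
    theta_x = math.asin(x / z)
    theta_y = math.asin(y / z)
    return theta_x, theta_y
```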
As one possible implementation manner of the first aspect, determining the swing speed of the target plate according to the swing value and the preset duration includes:
X-axis swing speed: v_x = x/Δt
Y-axis swing speed: v_y = y/Δt
wherein Δt represents the preset duration, v_x represents the X-axis swing speed, and v_y represents the Y-axis swing speed.
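A short sketch of the speed calculation, under the assumption stated above that the swing values are divided by the preset duration; the resultant speed is included as well, mirroring the actual swing speed discussed later in the description.

```python
import math

def swing_speeds(x, y, dt):
    """X-axis and Y-axis swing speeds over the preset duration dt, plus their resultant."""
    v_x = x / dt
    v_y = y / dt
    v = math.hypot(v_x, v_y)  # actual (resultant) swing speed
    return v_x, v_y, v
```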
As a possible implementation manner of the first aspect, acquiring a detection image of the target board includes:
the acquired image is adjusted through morphological operation;
extracting a target image in the adjusted image;
and screening the target image most similar to the standard image to obtain the detection image.
In the present embodiment, the accuracy of recognition can be effectively improved by performing morphological closing operation, similarity screening, and the like on the acquired image.
As a possible implementation manner of the first aspect, screening out the target image that is most similar to the standard image to obtain the detection image further includes:
detecting the Hu invariant moment distance scale of the target image and the standard image;
and determining the target image with the minimum distance scale as the target image most similar to the standard image, and obtaining the detection image.
In the embodiment, the accuracy of identification can be further improved by introducing Hu invariant moment screening and determining the detection image.
As a possible implementation manner of the first aspect, acquiring, by a camera, a standard image of the target plate including a reference pattern, further includes:
arranging the field-of-view plane of the camera horizontally, fixing the target plate containing the reference pattern to the lifting rope, and making the central axis of the camera field of view coincide with the center line of the target plate.
In this embodiment, the equipment is simple to install, operate and maintain; because the central axis of the camera field of view coincides with the center line of the target plate, complex calculation caused by rotation of the target plate can be avoided, and recognition speed and accuracy can be improved.
The second aspect of the application provides a vision-based crane swing identification device. The crane includes a lifting rope, a lifting hook located at the lower end of the lifting rope, and a lifting rope support device located at the upper end of the lifting rope; a target plate containing a reference pattern is fixedly arranged at the lower end of the lifting rope. The device includes:
the camera module is arranged on the supporting device, and the view field of the camera module is vertically downwards arranged and is used for acquiring a first image of the target plate, wherein the first image comprises an image of the target plate when the target plate is static;
the data processing module is used for establishing a coordinate system by taking a target plate center point in the first image as an origin, and determining a first vector corresponding to the reference pattern;
the camera module is further configured to obtain a second image of the target board, where the second image includes an image of the target board after a preset period of time;
The data processing module is further configured to determine a second pixel coordinate of a center point of the target plate in the second image and a second vector corresponding to the reference pattern;
the data processing module is further configured to determine a swing value of the target plate according to the second pixel coordinate, determine a rotation angle of the target plate according to the first vector and the second vector, determine a swing angle of the target plate according to the swing value and a camera parameter, and determine a swing speed of the target plate according to the swing value and the preset duration;
wherein, the camera module is electrically connected with the data processing module.
A third aspect of the present application provides a computing device comprising:
processor and method for controlling the same
A memory having stored thereon program instructions that when executed by the processor cause the processor to perform a vision-based crane swing identification method as described above.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a computer, cause the computer to perform a vision-based crane swing identification method as described above.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
The various features of the invention and the connections between them are further described below with reference to the figures. The figures are exemplary: some features are not shown to actual scale, some features that are conventional in the art and not essential to the application may be omitted from some figures, and features that are not essential to the application may additionally be shown; the combinations of features shown in the figures are not meant to limit the application. In addition, throughout the specification, the same reference numerals refer to the same elements. The specific drawings are as follows:
fig. 1 is a schematic flow chart of a vision-based crane swing identification method according to an embodiment of the present application;
FIG. 2 is an example diagram of a camera and target board arrangement provided by an embodiment of the present application;
fig. 3 is a schematic flow chart of a vision-based crane swing identification method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of the operation of the camera and image processing device provided in an embodiment of the present application;
fig. 5 is a schematic flow chart of a detection image of a target board obtained by processing by the image processing device according to the embodiment of the present application;
Fig. 6 is a schematic structural diagram of a vision-based crane swing identification device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
Detailed Description
The technical scheme provided by the application is further described below by referring to the accompanying drawings and examples. It should be understood that the system structures and service scenarios provided in the embodiments of the present application are mainly for illustrating possible implementations of the technical solutions of the present application, and should not be construed as the only limitation of the technical solutions of the present application. As one of ordinary skill in the art can know, with the evolution of the system structure and the appearance of new service scenarios, the technical scheme provided in the application is applicable to similar technical problems.
It should be understood that the vision-based crane swing identification scheme provided in the embodiments of the present application includes a vision-based crane swing identification method, apparatus, computing device, etc. Because the principles of solving the problems in these technical solutions are the same or similar, in the following description of the specific embodiments, some repetition is not described in detail, but it should be considered that these specific embodiments have mutual references and can be combined with each other.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. If there is a discrepancy, the meaning described in the present specification or the meaning obtained from the content described in the present specification is used. In addition, the terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application. For the purpose of accurately describing the technical content of the present application, and for the purpose of accurately understanding the present invention, the terms used in the present specification are given the following explanation or definition before the explanation of the specific embodiments:
1) Hu invariant moment: Hu invariant moments have the characteristics of rotation invariance, translation invariance, scaling invariance and the like. In this application they can be used, for example, for matching the detection image against the standard image.
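Purely as an illustration of this terminology (not the application's own implementation), OpenCV exposes Hu invariant moments directly; the file names in the commented usage are placeholders.

```python
import cv2

def hu_moments(gray_image):
    """Seven Hu invariant moments of a grayscale (or binary) image."""
    return cv2.HuMoments(cv2.moments(gray_image)).flatten()

# Hypothetical usage:
# h0 = hu_moments(cv2.imread("standard.png", cv2.IMREAD_GRAYSCALE))
# h1 = hu_moments(cv2.imread("detected.png", cv2.IMREAD_GRAYSCALE))
# Because Hu moments are invariant to rotation, translation and scaling,
# h0 and h1 remain comparable even when the target plate moves in the frame.
```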
Based on the defects existing in the prior art, the application provides a vision-based scheme for identifying the swing of a crane. The technology is mainly applied in the technical field of cranes, and in particular identifies the swing of the crane when it lifts an object. Swing identification and detection are very important for anti-sway control and help ensure the overall operating performance and safety of the crane. For example, the crane mainly comprises a bridge, a cart, a trolley, and a lifting mechanism including a lifting hook and a lifting rope; a camera is arranged on the lifting rope support device at the upper end of the lifting rope, with its field of view arranged vertically downward, and a target plate containing a reference pattern is fixedly arranged at the lower end of the lifting rope, above the lifting hook.
The method and the device acquire the image of the target plate mounted at the lifting rope through the camera, and judge the swinging state and information of the crane when the crane lifts the object through identifying the swinging of the target plate. In the application, for bridge cranes and trolley luffing tower cranes, the camera can be arranged on the crane trolley; for a luffing jib luffing tower crane, the camera in the application can be arranged at the luffing pulley block. The camera arrangement in the present application includes, but is not limited to, the above-mentioned modes.
Fig. 1 is a schematic flow chart of a vision-based crane swing identification method according to an embodiment of the present application. As shown in fig. 1, the vision-based crane swing identification method provided in the present embodiment includes the following steps:
s101: a first image of the target plate is acquired by a camera.
The camera may be an imaging device that meets the requirements, such as an industrial camera, an infrared multispectral camera, etc., and can continuously and in real time acquire images of the lifting rope, the target plate, the lifting hook, and the suspended object of the trolley. As shown in fig. 2, in this embodiment, a target plate is introduced, and the target plate is fixedly disposed below the lifting rope, and a reflective sticker or the like may be disposed around the target plate, so that image acquisition and recognition are easy. The target plate can be further provided with a reference pattern, such as an L-shaped reflective sticker, which is beneficial to judging whether the target plate rotates or not. In some embodiments, the central axis of the camera field of view and the central axis of the target plate are coincident, so that the swing amplitude of the target plate can be calculated conveniently, the recognition efficiency is improved, complex geometric calculation due to the rotation of the target plate can be reduced, and the detection recognition speed can be improved.
In this embodiment, the first image may be an image captured with the camera and the target plate arranged as described above, and, when the target plate is stationary, the first image may serve as the standard image for subsequent swing identification.
S102: and establishing a coordinate system by taking the center point of the target plate in the first image as an origin, and determining a first vector corresponding to the reference pattern.
The established coordinate system may be a rectangular coordinate system. When the first image is captured, the positions of the target plate and the camera are relatively fixed; that is, a rectangular coordinate system is established under the camera field of view with the center point of the target plate as the origin, the central axis of the camera field of view serving as the vertical axis (Z-axis), while the X-axis and Y-axis lie in the horizontal plane of the target plate.
Based on the established coordinate system, the vector contained in the reference pattern in the first image is determined or a vector is selected according to the requirement.
The coordinates of the points in the coordinate system are coordinates of the corresponding pixels in the image captured by the camera, and are referred to as pixel coordinates in this application.
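As a small illustrative sketch (the helper names are assumptions), expressing a camera pixel coordinate in the target-plate coordinate system and forming a reference-pattern vector could look like this:

```python
import numpy as np

def to_target_frame(pixel_pt, center_px):
    """Express an image pixel coordinate in the coordinate system whose origin
    is the target-plate center point detected in the first image."""
    return np.asarray(pixel_pt, dtype=float) - np.asarray(center_px, dtype=float)

def reference_vector(pt_a, pt_b):
    """A vector spanning the reference pattern, e.g. from one end of the L-shaped
    marker to the other; a vector between two points is translation invariant,
    so it can be computed directly from the pixel coordinates."""
    return np.asarray(pt_b, dtype=float) - np.asarray(pt_a, dtype=float)
```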
S103: a second image of the target plate is acquired by the camera.
The second image may include an image of the target board obtained by the camera after a preset period of time, that is, a detection image when detection is performed. The preset duration can be a time period of continuous shooting, and the real-time change of the target plate can be mastered through continuous shooting and corresponding processing, so that the real-time performance of swing identification is improved.
In some embodiments, when determining the second image, the acquired image may be adjusted by morphological operations, such as dilation, erosion, closing operations, etc., on the image acquired by the camera; a target image within the adjusted image, such as a contour image of the target plate and a contour image of the reference pattern, is then extracted.
In the present embodiment, the accuracy of recognition can be effectively improved by performing morphological closing operation, similarity screening, and the like on the acquired image.
And screening out the target image most similar to the first image to obtain a second image. Specifically, the area of each target image can be calculated, and compared with the first image, so that the target image which is most similar to the first image, namely the second image, can be determined.
Screening out the target image most similar to the first image to obtain the second image may include: detecting the Hu-invariant-moment distance scale between each target image and the first image; and, taking advantage of the rotation, translation and scaling invariance of Hu invariant moments, determining the target image with the smallest distance scale as the target image most similar to the first image, thereby obtaining the second image.
Here the distance scale is computed from the Hu invariant moment h_0 of the first image and the Hu invariant moment h_1 of the target image.
In addition, whether the Hu-invariant-moment detection result D meets a preset threshold can be judged. If D meets the preset threshold, the detection state is determined to be normal and the subsequent steps continue to be executed; if D does not meet the preset threshold, the contour detection and area calculation performed on the image after the morphological closing operation are judged to be abnormal and need to be executed again.
In the embodiment, the accuracy of identification can be further improved by introducing Hu invariant moment screening and determining the detection image.
S104: and determining a second pixel coordinate of the center point of the target plate in the second image and a second vector corresponding to the reference pattern.
Assuming that the target board swings at this time, the coordinate positions of the points in the second image captured by the camera also change. And determining a second pixel coordinate of the center point of the target plate after the target plate swings and a second vector corresponding to the reference pattern according to the established coordinate system.
S105: swing identification is carried out according to the second pixel coordinates, the first vector, the second vector, the camera parameters and the preset duration.
The swing identification comprises determining a swing value of the target plate according to the second pixel coordinates, determining a rotation angle of the target plate according to the first vector and the second vector, determining a swing angle of the target plate according to the swing value and the camera parameters, and determining a swing speed of the target plate according to the swing value and the preset duration.
Specifically, determining the swing value of the target plate according to the second pixel coordinate includes:
n1: determining the distance between the camera and the target plate; satisfies the following formula:
z=focal×length_side/(d_pix×side)
wherein z is the distance between the camera and the target plate, focal is the focal length of the camera lens, length_side is the actual length of the target plate, d_pix is the pixel size of the camera, and side is the pixel length of the target plate in the standard image;
n2: determining an amplitude value; the method comprises the steps of X-axis swing value X and Y-axis swing value Y, and the following formula is satisfied:
x=(Xc-frame_width/2)/side
y=(Yc-frame_height/2)/side
where (Xc, Yc) is the second pixel coordinate, and frame_width and frame_height are the X-axis direction pixel length and Y-axis direction pixel length of the target plate.
Determining a rotation angle from the first vector and the second vector, comprising:
and determining the included angle between the first vector and the second vector as the rotation angle of the target plate. Assume that the first vector is a and the second vector is b; the rotation angle θ of the target plate then satisfies cos θ = (a·b)/(|a||b|).
according to the swing value and the camera parameters, determining the swing angle of the target plate, including the X-axis swing angle and the Y-axis swing angle, and satisfying the following formula:
θ_x = arcsin(x/z)
θ_y = arcsin(y/z)
wherein θ_x represents the X-axis swing angle, θ_y represents the Y-axis swing angle, x represents the X-axis swing value, y represents the Y-axis swing value, and z represents the distance between the camera and the target plate.
Determining the swing speed of the target plate according to the swing value and the preset duration comprises the following steps:
X-axis swing speed: v_x = x/Δt
Y-axis swing speed: v_y = y/Δt
wherein Δt represents the preset duration, v_x represents the X-axis swing speed, and v_y represents the Y-axis swing speed.
The actual swing speed v of the target plate can also be determined, satisfying v = √(v_x² + v_y²).
in some embodiments, the difference between the actual swing speed of the target plate and the X-axis swing speed or the Y-axis swing speed is negligible, so in order to reduce the amount of calculation and shorten the calculation time, the swing speed of the target plate may be directly expressed by the X-axis swing speed or the Y-axis swing speed.
By introducing a target plate and acquiring images of it, the application identifies the swing of the crane; the equipment is simple to install and maintain, the identification accuracy is high, multiple swing parameters can be detected and identified, and detection and identification efficiency is improved.
The method for identifying the swing of the crane based on vision provided by the application is further described below with reference to a specific embodiment. Fig. 3 is a schematic flow chart of a crane swing identification method based on vision according to the present embodiment. The bridge crane mainly comprises a bridge, a cart, a trolley, a lifting mechanism comprising a lifting hook and a lifting rope and the like, wherein a camera is arranged on the trolley, and the view field of the camera is vertically downwards arranged; a target plate containing a reference pattern is fixedly arranged at the lower end of the lifting rope, and the target plate is arranged above the lifting hook.
In this embodiment, an image processing apparatus may be further included, as shown in fig. 4, where the image processing apparatus may be a separate device, or may be integrated with a camera, and a related algorithm may be built in the image processing apparatus, and may be used for image processing, data calculation, and the like in this application.
As shown in fig. 3, the vision-based crane swing identification method provided in the present embodiment includes the following steps:
s201: a standard image of the target plate is acquired by the camera.
The camera is fixedly arranged in the middle of the crane trolley with its field-of-view plane kept horizontal (i.e., looking vertically downward), and it continuously acquires, in real time, images of the target plate fixed to the lifting rope or lifting hook, based on a preset shooting interval or shooting frame rate.
In the present embodiment, the standard image refers to an image of the target plate at rest acquired by the camera.
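A minimal capture sketch, assuming an OpenCV-compatible camera; the device index and frame rate below are placeholder values, not values specified by this application.

```python
import cv2

cap = cv2.VideoCapture(0)          # hypothetical camera index
cap.set(cv2.CAP_PROP_FPS, 30)      # preset shooting frame rate (assumed value)

ok, frame = cap.read()             # standard image: target plate at rest
if ok:
    standard_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cap.release()
```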
S202: the camera passes the standard image into the image processing device.
After receiving the standard image, the image processing device establishes a rectangular coordinate system by taking the center point of the target plate as the origin under the view field of the camera.
For subsequent calculations, the image processing device may also determine the reference pattern, and the first vector corresponding to the reference pattern, automatically or manually based on the established coordinate system.
S203: the image processing device processes the acquired image to obtain a detection image of the target plate.
In this embodiment, the detection image refers to an image obtained by a camera of the target board after a preset period of time.
Specifically, an image acquired at a preset acquisition time of the camera needs to be identified to obtain a detection image of the target plate at the moment. As shown in fig. 5, the flow chart of the image processing device for processing the detected image of the target plate includes the following steps M1-M4:
m1: and binarizing the acquired image.
Binarizing the acquired image converts the pixel values in the image into a binary image having only two values (typically 0 and 1). This simplifies the representation and processing of the image and highlights the target object or specific features in it, which is beneficial to feature extraction, noise elimination, image processing efficiency and data compression, among other things.
M2: and performing morphological closing operation on the binarized image.
Performing a morphological closing operation on the binarized image fills and repairs holes, cracks and missing parts in the image, and smooths and connects the edges of the target object. A morphological closing operation generally consists of a dilation and an erosion, namely a dilation operation followed by an erosion operation.
M3: and carrying out contour detection and area calculation on the image after morphological closing operation.
Then, the target images in the image after the morphological closing operation, for example the contour image of the target plate and the contour image of the reference pattern, are extracted.
The target image most similar to the standard image of the target plate is then screened out to obtain the detection image of the target plate. To this end, the area of each target image is calculated and compared with that of the standard image, so that the target images whose area is close to that of the standard image are determined.
M4: determining, among the target images whose area is similar to that of the standard image, the detection image.
Taking advantage of the rotation, translation and scaling invariance of Hu invariant moments, the Hu-invariant-moment distance scale between each area-similar target image and the standard image is determined; the smaller the distance scale, the better the target image matches the standard image, so the target image with the smallest distance scale is selected as the detection image.
Here the distance scale is computed from the Hu invariant moment h_0 of the standard image and the Hu invariant moment h_1 of the target image.
In addition, whether the Hu-invariant-moment detection result D meets a preset threshold can be judged. If D meets the preset threshold, the detection state is determined to be normal and the subsequent steps continue to be executed; if D does not meet the preset threshold, the contour detection and area calculation performed on the image after the morphological closing operation are judged to be abnormal and need to be executed again.
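Steps M1-M4 can be sketched with OpenCV roughly as below. The Otsu thresholding, the 5x5 kernel, the area band and the threshold d_max are illustrative assumptions, and cv2.matchShapes is used here as one possible realization of the Hu-invariant-moment distance scale.

```python
import cv2

def detect_target_image(frame_gray, standard_contour, d_max=0.1):
    """Illustrative sketch of steps M1-M4 (parameter names and thresholds are assumptions).

    frame_gray       -- grayscale camera frame
    standard_contour -- contour of the target plate taken from the standard image
    Returns the contour judged most similar to the standard image, or None if
    the Hu-moment distance exceeds the preset threshold (detection abnormal).
    """
    # M1: binarize the acquired image
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # M2: morphological closing (dilation followed by erosion)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # M3: contour detection and area calculation; keep candidates whose area
    # is close to that of the standard target-plate contour
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ref_area = cv2.contourArea(standard_contour)
    candidates = [c for c in contours
                  if 0.5 * ref_area < cv2.contourArea(c) < 2.0 * ref_area]
    if not candidates:
        return None  # redo detection, as described above

    # M4: Hu-invariant-moment distance to the standard contour; the smallest
    # distance wins, and it must also satisfy the preset threshold
    dist = lambda c: cv2.matchShapes(standard_contour, c, cv2.CONTOURS_MATCH_I1, 0.0)
    best = min(candidates, key=dist)
    return best if dist(best) <= d_max else None
```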
S204: the image processing device determines a second pixel coordinate of the center point of the target plate in the detection image and a second vector corresponding to the reference pattern.
The image processing device obtains a second vector corresponding to the reference pattern and a second pixel coordinate corresponding to the pixel of the center point of the target plate in the coordinate system based on the established coordinate system and the data of the detection image.
S205: the image processing apparatus performs wobble recognition.
The swing identification includes determining a swing value of the target plate, determining a rotation angle of the target plate, determining a swing angle of the target plate, determining a swing speed of the target plate, and the like.
Specifically, determining the swing value of the target board includes:
p1: determining the distance between the camera and the target plate; satisfies the following formula:
z=focal×length_side/(d_pix×side)
wherein z is the distance between the camera and the target plate, focal is the focal length of the camera lens, length_side is the actual length of the target plate, d_pix is the pixel size of the camera, and side is the pixel length of the target plate in the standard image;
p2: determining an amplitude value; the method comprises the steps of X-axis swing value X and Y-axis swing value Y, and the following formula is satisfied:
x=(Xc-frame_width/2)/side
y=(Yc-frame_height/2)/side
where (Xc, Yc) is the second pixel coordinate, and frame_width and frame_height are the X-axis direction pixel length and Y-axis direction pixel length of the target plate.
Determining a rotation angle of the target plate includes:
and determining the included angle between the first vector and the second vector as the rotation angle of the target plate. Assume that the first vector is a and the second vector is b; the rotation angle θ of the target plate then satisfies cos θ = (a·b)/(|a||b|).
determining the swing angle of the target plate, including an X-axis swing angle and a Y-axis swing angle, satisfying the following formula:
θ_x = arcsin(x/z)
θ_y = arcsin(y/z)
wherein θ_x represents the X-axis swing angle, θ_y represents the Y-axis swing angle, x represents the X-axis swing value, y represents the Y-axis swing value, and z represents the distance between the camera and the target plate.
Determining the swing speed of the target plate according to the swing value and the preset duration comprises the following steps:
X-axis swing speed: v_x = x/Δt
Y-axis swing speed: v_y = y/Δt
wherein Δt represents the preset duration, v_x represents the X-axis swing speed, and v_y represents the Y-axis swing speed.
The actual swing speed v of the target plate can also be determined, satisfying v = √(v_x² + v_y²).
the current swing speed of the target plate can be calculated by collecting swing values and collecting intervals in continuous images.
In some embodiments, the difference between the actual swing speed of the target plate and the X-axis swing speed or the Y-axis swing speed is negligible, so in order to reduce the amount of calculation and shorten the calculation time, the swing speed of the target plate may be directly expressed by the X-axis swing speed or the Y-axis swing speed.
According to the vision-based crane swing identification scheme of the application, by introducing a target plate and detecting its motion state, a user can know the motion-state data of the object lifted by the crane in real time and find operating problems in time, which has important guiding significance for the optimization of remote anti-sway control and for the safe, efficient and stable operation of the system. In addition, the equipment is simple to install and maintain, the identification accuracy is high, and multiple kinds of data can be detected and identified, which improves detection and identification efficiency.
Based on an inventive concept, the present application further provides a vision-based crane swing identification device, as shown in fig. 6, and fig. 6 is a schematic structural diagram of a vision-based crane swing identification device 500 according to an embodiment of the present application. The vision-based crane swing recognition device 500 of this embodiment is specifically used to perform the above steps S101-S105 and any of the optional examples. Reference may be made in particular to the detailed description of the method embodiments, which are briefly described here below:
The vision-based crane swing identification device is applied to swing identification of a crane and its lifted object. The crane may comprise a lifting rope, a lifting hook located at the lower end of the lifting rope, and a lifting rope support device located at the upper end of the lifting rope; a target plate containing a reference pattern is fixedly arranged at the lower end of the lifting rope. The device comprises:
A camera module 501, disposed on the supporting device, wherein a field of view of the camera module is vertically downward, and is configured to obtain a first image of the target board, where the first image includes an image of the target board when the target board is stationary;
the data processing module 502 is configured to establish a coordinate system with a center point of a target board in the first image as an origin, and determine a first vector corresponding to the reference pattern;
the camera module 501 is further configured to obtain a second image of the target board, where the second image includes an image of the target board after a preset period of time;
the data processing module 502 is further configured to determine a second pixel coordinate of a center point of the target board in the second image and a second vector corresponding to the reference pattern;
the data processing module 502 is further configured to determine a swing value of the target plate according to the second pixel coordinate, determine a rotation angle of the target plate according to the first vector and the second vector, determine a swing angle of the target plate according to the swing value and a parameter of the camera module, and determine a swing speed of the target plate according to the swing value and the preset duration.
Based on an inventive concept, the present application also provides a computing device, and fig. 7 is a schematic structural diagram of a computing device 900 provided in an embodiment of the present application. The computing device may be used as a vision-based crane swing recognition device to perform the various alternative embodiments of the vision-based crane swing recognition method described above, and the computing device may be a terminal, or may be a chip or a chip system within the terminal. As shown in fig. 7, the computing device 900 includes: processor 910, memory 920, and communication interface 930.
It should be appreciated that the communication interface 930 in the computing device 900 shown in fig. 7 may be used to communicate with other devices and may include, in particular, one or more transceiver circuits or interface circuits.
Wherein the processor 910 may be coupled to a memory 920. The memory 920 may be used to store the program codes and data. Accordingly, the memory 920 may be a storage unit internal to the processor 910, an external storage unit independent of the processor 910, or a component including a storage unit internal to the processor 910 and an external storage unit independent of the processor 910.
Optionally, computing device 900 may also include a bus. The memory 920 and the communication interface 930 may be connected to the processor 910 through a bus. The bus may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only a single line is shown in FIG. 7, but this does not mean that there is only one bus or only one type of bus.
It should be appreciated that in embodiments of the present application, the processor 910 may employ a central processing unit (central processing unit, CPU). The processor may also be another general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Alternatively, the processor 910 may employ one or more integrated circuits for executing associated programs to implement the techniques provided in the embodiments of the present application.
The memory 920 may include read only memory and random access memory and provide instructions and data to the processor 910. A portion of the processor 910 may also include nonvolatile random access memory. For example, the processor 910 may also store information of the device type.
When the computing device 900 is running, the processor 910 executes computer-executable instructions in the memory 920 to perform any of the operational steps of the methods described above, as well as any of the alternative embodiments.
It should be understood that the computing device 900 according to the embodiments of the present application may correspond to a respective subject performing the methods according to the embodiments of the present application, and that the foregoing and other operations and/or functions of the respective modules in the computing device 900 are respectively for implementing the respective flows of the methods of the embodiments, and are not described herein for brevity.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program for performing the above-described method when executed by a processor, the method comprising at least one of the aspects described in the above-described embodiments.
Any combination of one or more computer readable media may be employed as the computer storage media of the embodiments herein. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In addition, the terms "first, second, third, etc." or module A, module B, module C, etc. in the description and the claims are used solely for distinguishing between similar objects and do not necessarily imply a specific ordering of objects; it should be understood that, where permitted, a specific order or sequence may be interchanged so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
In the above description, reference numerals indicating steps such as S110, S120, … …, etc. do not necessarily indicate that the steps are performed in this order, and the order of the steps may be interchanged or performed simultaneously as the case may be.
The term "comprising" as used in the description and claims should not be interpreted as being limited to what is listed thereafter; it does not exclude other elements or steps. Thus, it should be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the expression "a device comprising means a and B" should not be limited to a device consisting of only components a and B.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but they may be. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments, as would be apparent to one of ordinary skill in the art from this disclosure.
Note that the above is only a preferred embodiment of the present application and the technical principle applied. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Thus, while the present application has been described in terms of the foregoing embodiments, the present application is not limited to the foregoing embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, all of which fall within the scope of the present application.
Claims (10)
1. A vision-based crane swing identification method, characterized in that the crane comprises a lifting rope, a hook located at the lower end of the lifting rope, and a lifting rope supporting device located at the upper end of the lifting rope, wherein a camera is arranged on the supporting device with its lens oriented vertically downward, and a target plate containing a reference pattern is fixedly arranged at the lower end of the lifting rope; the method comprises:
acquiring a first image of the target plate through the camera, wherein the first image comprises an image of the target plate when the target plate is static;
establishing a coordinate system by taking a target plate center point in the first image as an origin, and determining a first vector corresponding to the reference pattern on the target plate;
acquiring a second image of the target plate through the camera, wherein the second image comprises an image of the target plate after a preset time period;
determining a second pixel coordinate of a center point of the target plate in the second image and a second vector corresponding to the reference pattern;
performing swing identification according to the second pixel coordinates, the first vector, the second vector, the parameters of the camera and the preset duration; the swing identification includes determining a swing value of the target plate according to the second pixel coordinates, determining a rotation angle of the target plate according to the first vector and the second vector, determining a swing angle of the target plate according to the swing value and the parameters of the camera, and determining a swing speed of the target plate according to the swing value and the preset duration.
2. The method of claim 1, wherein determining the swing value of the target plate from the second pixel coordinates comprises:
n1: determining a distance between the camera and the target plate; satisfies the following formula:
Z=focla×length_side(d_pix×side)
wherein z is the distance between the camera and the target plate, focal is the focal length of the camera lens, length_side is the actual length of the target plate, d_pix is the pixel size of the camera, and side is the pixel length of the target plate in the first image;
N2: determining the swing value; comprising the following steps: the X-axis swing value X and the Y-axis swing value Y satisfy the following formula:
x=(Xc-frame_width/2)/side
y=(Yc-frame_height/2)/side
wherein (Xc, Yc) is the second pixel coordinate, and frame_width and frame_height are the pixel lengths of the target plate in the X-axis direction and the Y-axis direction, respectively.
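As an illustration only (not part of the claims), the two formulas of claim 2 can be sketched in Python. The function names, the unit choices (focal and d_pix in millimetres, length_side in metres, side in pixels) and the reading of frame_width/frame_height as pixel dimensions are assumptions made for this sketch.

```python
def distance_to_plate(focal, length_side, d_pix, side):
    # Claim 2, step N1: z = focal * length_side / (d_pix * side).
    # Assumed units: focal and d_pix in mm, length_side in m, side in px -> z in m.
    return focal * length_side / (d_pix * side)

def swing_value(Xc, Yc, frame_width, frame_height, side):
    # Claim 2, step N2: offset of the plate centre (Xc, Yc) from the frame
    # centre, normalised by the plate's pixel length `side`.
    x = (Xc - frame_width / 2) / side
    y = (Yc - frame_height / 2) / side
    return x, y

# Illustrative numbers only (not taken from the patent):
z = distance_to_plate(focal=8.0, length_side=0.5, d_pix=0.003, side=200)            # about 6.67
x, y = swing_value(Xc=1010, Yc=520, frame_width=1920, frame_height=1080, side=200)  # 0.25, -0.1
```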
3. The method of claim 1, wherein determining the rotation angle of the target plate from the first vector and the second vector comprises:
determining the included angle between the first vector and the second vector as the rotation angle of the target plate.
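As a sketch rather than part of the claims, the included angle of claim 3 can be computed from the two reference-pattern vectors with standard vector algebra; NumPy is assumed here for convenience.

```python
import numpy as np

def rotation_angle(first_vector, second_vector):
    # Included angle between the two vectors, in radians.
    v1 = np.asarray(first_vector, dtype=float)
    v2 = np.asarray(second_vector, dtype=float)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example: the reference pattern rotated by a quarter turn.
print(np.degrees(rotation_angle((1.0, 0.0), (0.0, 1.0))))  # 90.0
```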
4. The method of claim 1, wherein determining the swing angle of the target plate according to the swing value and the parameters of the camera comprises: the X-axis swing angle and the Y-axis swing angle satisfy the following formulas:
θx = arcsin(x/z)
θy = arcsin(y/z)
wherein θx represents the X-axis swing angle, θy represents the Y-axis swing angle, x represents the X-axis swing value, y represents the Y-axis swing value, and z represents the distance between the camera and the target plate.
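A minimal sketch of the claim 4 formulas (not part of the claims); it assumes x, y and z are expressed in the same length unit so that the ratios passed to arcsin stay within [-1, 1].

```python
import math

def swing_angle(x, y, z):
    # theta_x = arcsin(x / z), theta_y = arcsin(y / z); results in radians.
    return math.asin(x / z), math.asin(y / z)

# Illustrative values only: an offset of 0.25 at a distance of 6.67
# corresponds to roughly 2.1 degrees.
theta_x, theta_y = swing_angle(0.25, -0.10, 6.67)
print(math.degrees(theta_x), math.degrees(theta_y))
```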
5. The method according to claim 1, wherein determining the swing speed of the target plate according to the swing value and the preset time period includes:
X-axis swing speed: vx = x/Δt;
Y-axis swing speed: vy = y/Δt;
wherein Δt represents the preset time period, vx represents the X-axis swing speed, and vy represents the Y-axis swing speed.
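A minimal sketch of claim 5 (not part of the claims), under the reading that the swing speed is the swing value averaged over the preset duration; this reading is an assumption made for the sketch.

```python
def swing_speed(x, y, dt):
    # Average swing speed over the preset duration dt (assumed: v = value / dt).
    return x / dt, y / dt

# Illustrative values only: a swing value of 0.25 over 0.5 s gives 0.5 per second.
v_x, v_y = swing_speed(0.25, -0.10, 0.5)
```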
6. The method of claim 1, wherein acquiring a detection image of the target plate comprises:
adjusting the acquired image through a morphological closing operation;
extracting target images from the adjusted image; and
screening the target image most similar to the first image to obtain the detection image.
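As an illustration only (not part of the claims), the pre-processing of claim 6 maps naturally onto OpenCV primitives; the grey-level threshold and kernel size below are arbitrary assumptions.

```python
import cv2
import numpy as np

def candidate_targets(frame, kernel_size=5, thresh=127):
    # Binarise the frame, then apply a morphological closing (dilation followed
    # by erosion) to fill small gaps before extracting candidate regions.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Each external contour is one candidate target image to be screened
    # against the first image.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```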
7. The method of claim 6, wherein screening the target image most similar to the first image to obtain the detection image further comprises:
calculating a Hu invariant moment distance measure between each target image and the first image; and
determining the target image with the minimum distance measure as the target image most similar to the first image, thereby obtaining the detection image.
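One possible realisation of the claim 7 screening step (not part of the claims): OpenCV's cv2.matchShapes compares two contours via their Hu invariant moments, and the candidate with the smallest distance is kept as the detection image. In practice the reference contour would come from the first (static) image and the candidates from the extraction step of claim 6.

```python
import cv2

def most_similar(reference_contour, candidate_contours):
    # cv2.matchShapes returns a Hu-moment based distance; a smaller value
    # means a closer shape match, so the minimum gives the detection candidate.
    return min(
        candidate_contours,
        key=lambda c: cv2.matchShapes(reference_contour, c,
                                      cv2.CONTOURS_MATCH_I1, 0.0),
    )
```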
8. A vision-based crane swing recognition device, characterized in that the crane comprises a lifting rope, a hook located at the lower end of the lifting rope, and a lifting rope supporting device located at the upper end of the lifting rope, wherein a target plate containing a reference pattern is fixedly arranged at the lower end of the lifting rope; the device comprises:
a camera module arranged on the supporting device, with its field of view oriented vertically downward, configured to acquire a first image of the target plate, wherein the first image comprises an image of the target plate when the target plate is static;
a data processing module configured to establish a coordinate system with the center point of the target plate in the first image as the origin, and to determine a first vector corresponding to the reference pattern;
the camera module is further configured to acquire a second image of the target plate, wherein the second image comprises an image of the target plate after a preset time period;
the data processing module is further configured to determine a second pixel coordinate of a center point of the target plate in the second image and a second vector corresponding to the reference pattern;
the data processing module is further configured to determine a swing value of the target plate according to the second pixel coordinate, determine a rotation angle of the target plate according to the first vector and the second vector, determine a swing angle of the target plate according to the swing value and a parameter of the camera module, and determine a swing speed of the target plate according to the swing value and the preset duration.
wherein the camera module is electrically connected to the data processing module.
9. A computing device, comprising:
processor and method for controlling the same
A memory having stored thereon program instructions that, when executed by the processor, cause the processor to perform the vision-based crane swing identification method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that it has stored thereon program instructions which, when executed by a computer, cause the computer to perform the vision-based crane swing identification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311764162.8A CN117893966A (en) | 2023-12-20 | 2023-12-20 | Crane swing identification method and device based on vision and computing equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117893966A true CN117893966A (en) | 2024-04-16 |
Family
ID=90638670
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311764162.8A Pending CN117893966A (en) | 2023-12-20 | 2023-12-20 | Crane swing identification method and device based on vision and computing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117893966A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||