CN114581857A - Intelligent crown block control method based on image analysis - Google Patents


Info

Publication number
CN114581857A
Authority
CN
China
Prior art keywords
color
degree
target object
image
connected domain
Prior art date
Legal status
Granted
Application number
CN202210483980.XA
Other languages
Chinese (zh)
Other versions
CN114581857B (en)
Inventor
肖家宏
Current Assignee
Wuhan Duxin Technology Industry Co ltd
Original Assignee
Wuhan Duxin Technology Industry Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Duxin Technology Industry Co., Ltd.
Priority: CN202210483980.XA
Publication of CN114581857A
Application granted
Publication of CN114581857B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00 Other constructional features or details
    • B66C13/18 Control systems or devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/73
    • G06T5/90
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Abstract

The invention relates to the technical field of image data processing, and in particular to an intelligent crown block control method based on image analysis. The method comprises the following steps: collecting a target object image and converting it into the Lab color space to obtain a plurality of color connected domains; obtaining the degree of engagement and the color difference balance degree between adjacent color connected domains, and from these the blur degree between each pair of adjacent domains; classifying the blur degrees into a high-blur group and a low-blur group, and fitting a circle to the coordinates of the color connected domains of the low-blur group to obtain the clear range; adjusting the camera aperture so that the minimum circumscribed circle of the target object coincides with the clear range, yielding a clear image; and acquiring the corner points of the target object and the hook position in the clear image, judging the shaking degree, and controlling the speed of the crown block accordingly. The embodiment can adaptively adjust the aperture to obtain a clear image, judge the running state of the crown block, control its running speed, reduce background interference, and obtain a more accurate judgment result.

Description

Intelligent crown block control method based on image analysis
Technical Field
The invention relates to the technical field of image data processing, in particular to an intelligent crown block control method based on image analysis.
Background
In industrial production, hoisting crown blocks (overhead traveling cranes) are widely used to transport raw materials and finished products; they are essential industrial equipment and play an important role in industrial development. However, many workshop safety accidents occur while an overhead crane is hoisting goods, causing casualties and large property losses, so safe and reliable crane operation is critical to an enterprise's material dispatching and transportation.
Because the operating environment of an overhead crane is complex, recognizing the hook and the surroundings of the target object, judging whether the hook and the target object are stable, and checking whether the minimum distance between the target object and any obstacle is a safe distance are the keys to guaranteeing transport safety.
Therefore, a clear image is needed for safety analysis during crane operation. When the crane lifts an object, however, the object's height changes; if the camera is not adjusted, the captured image becomes blurred, which can create a safety problem.
Disclosure of Invention
In order to solve the above technical problem, an object of the present invention is to provide an intelligent crown block control method based on image analysis, which adopts the following technical solutions:
an embodiment of the invention provides an intelligent crown block control method based on image analysis, which comprises the following steps:
acquiring the target object transported by the crown block and collecting a top-down image of it, the image containing the hook pulley structure, the lifting rope and the target object; converting the image into the Lab color space and grouping the different colors to obtain a plurality of color connected domains;
obtaining the degree of engagement between two adjacent color connected domains from the proportion of their shared edge on each domain's edge; obtaining the color difference balance degree between the two domains; combining the degree of engagement and the color difference balance degree into the blur degree between two adjacent color connected domains;
classifying all blur degrees into a high-blur group and a low-blur group, and fitting a circle to the coordinates of the color connected domains of the low-blur group to obtain the clear range; adjusting the camera aperture so that the minimum circumscribed circle of the target object coincides with the clear range, obtaining a clear image;
obtaining the corner points of the target object and the hook position in the clear image, obtaining the first moving distance of the corner points and the second moving distance of the hook between adjacent moments, judging the shaking degree from the two distances, and controlling the speed of the crown block accordingly.
Preferably, the method for acquiring the color connected domains comprises:
obtaining the color components of each pixel, clustering the color components into a plurality of color classes, and extracting the region corresponding to each color class in the target object image to obtain the corresponding color connected domain.
Preferably, the method for obtaining the degree of engagement is as follows:
extracting the edge of each color connected domain, establishing a sliding window that slides along the edge to obtain the length of the adjacent (shared) edge, calculating the proportion of that length on the edge of each of the two color connected domains, and selecting the larger proportion as the degree of engagement between the two domains.
Preferably, the color difference balance degree is obtained as follows:
acquiring a first color difference between a target color connected domain and its adjacent connected domain;
among the other color connected domains adjacent to the target domain, excluding that adjacent domain, selecting the one with the largest degree of engagement with the target domain as the engagement connected domain, and acquiring a second color difference between the target domain and the engagement connected domain;
and taking the ratio of the first color difference to the second color difference as the color difference balance degree between the target domain and its adjacent connected domain.
Preferably, the method for acquiring the blur degree includes:
acquiring a color difference degree based on the color difference balance degree, wherein the sum of the color difference balance degree and the color difference degree is a first preset value; and taking the sum of the absolute value of the color difference degree and the first color difference as a negative index of a second preset value, and taking the product of the obtained result and the degree of engagement as the fuzzy degree.
Preferably, the method for acquiring the clear image comprises the following steps:
acquiring the minimum circumscribed circle of the target object and calculating the difference between its radius and the radius of the clear range as the intersection difference; when the intersection difference is larger than a preset threshold, searching an initial aperture value with a simulated annealing method, superposing it on the camera's original aperture, re-acquiring the target object image and recalculating the intersection difference until it is smaller than or equal to the preset threshold, thereby obtaining the aperture adjustment value; the clear image is the target object image shot after the aperture is adjusted with this adjustment value.
The embodiment of the invention at least has the following beneficial effects:
the corresponding blur degree is obtained from the degree of engagement and the color difference balance degree between two adjacent color connected domains; the classified blur degrees yield the low-blur group, from which the clear range is fitted; the aperture is adjusted so that the target object lies within the clear range, giving a clear image; and the shaking degree is evaluated on the clear image to control the speed of the crown block. The method can adaptively adjust the aperture to obtain a clear image, judge the running state of the crown block, control its running speed, reduce background interference, obtain a more accurate judgment result, and ensure the transport safety of the crown block.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of an intelligent overhead traveling crane control method based on image analysis according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means the present invention adopts to achieve its intended objects and their effects, the intelligent crown block control method based on image analysis, its implementation, structure, features and effects are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the intelligent overhead traveling crane control method based on image analysis in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of an intelligent overhead traveling crane control method based on image analysis according to an embodiment of the present invention is shown, where the method includes the following steps:
Step S001: acquire the target object transported by the crown block and collect a top-down image of it, the image containing the hook pulley structure, the lifting rope and the target object; convert the image into the Lab color space and group the different colors to obtain a plurality of color connected domains.
The method comprises the following specific steps:
1. obtaining an image of a target
An RGB camera is fixed at the position on the crown block that controls the movement of the hook and looks down at the hook; it collects images of the hoisted target and is used to judge whether the hoisted target is currently in a safe state.
The acquired target object image comprises a hook pulley structure, a lifting rope and a target object.
While the crown block is hoisting the target object it does not travel and its position is fixed, so an existing auto-focus method can continuously and adaptively focus on the target object's position.
When the target object is too large, part of it may still fall outside the clear shooting range after focusing, so the image contains blurred regions, which affects the safety judgment during crane operation.
2. And converting the target object image into a lab color space, and grouping different colors to obtain a plurality of color connected domains.
The camera focuses on the image center; as the depth of field changes, pixels become more blurred the farther they are from the center point. For a region of uniform color, the region differs little before and after blurring and is useless for blur estimation, but the edges of the color regions can be used to evaluate the blur degree: the greater the blur, the less distinct the edges become, until they disappear.
The Lab color space matches human vision and is perceptually uniform, and accurate color balance can be performed by modifying the output levels of the a and b components, so the target object image is converted into the Lab color space.
The method comprises the steps of obtaining color components of each pixel point, clustering the color components to obtain a plurality of color categories, extracting corresponding areas of the color categories in a target object image, and obtaining corresponding color connected domains.
In the Lab color space, the [a, b] values of each pixel are its color components. The color components are clustered with a mean-shift algorithm, so that similar color values fall into one class, giving m color classes in total; the regions corresponding to the different color classes in the target image are obtained, and the color connected domain of each class is extracted with a connected-domain extraction algorithm.
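As a sketch of this grouping step, the following clusters (a, b) color components with a simplified mean-shift; the bandwidth, iteration count and mode-merging tolerance are assumed values, not taken from the patent:

```python
import numpy as np

def mean_shift_ab(points, bandwidth=15.0, iters=20):
    # Simplified mean shift on (a, b) colour components: each point's
    # mode drifts to the mean of its neighbours within the bandwidth.
    points = np.asarray(points, dtype=float)
    modes = points.copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            d = np.linalg.norm(points - p, axis=1)
            modes[i] = points[d < bandwidth].mean(axis=0)
    # Merge modes that converged together: each merged mode is a colour class.
    labels = np.zeros(len(points), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = k
                break
        else:
            labels[i] = len(centers)
            centers.append(m)
    return labels, np.array(centers)

# Two well-separated (a, b) clusters should give two colour classes.
ab = np.array([[10, 10], [11, 9], [9, 11], [80, 80], [79, 81], [81, 79]])
labels, centers = mean_shift_ab(ab)
```

In practice the labels would then be mapped back to pixel positions and split into color connected domains with a connected-domain extraction algorithm.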
Step S002: calculate the proportion of the shared edge of two adjacent color connected domains on each domain's edge to obtain the degree of engagement between the two domains; obtain the color difference balance degree between two adjacent color connected domains; and combine the degree of engagement and the color difference balance degree to obtain the blur degree between two adjacent color connected domains.
The method comprises the following specific steps:
1. Obtain the degree of engagement between two adjacent color connected domains.
Extract the edge of each color connected domain, slide a window along the edge to obtain the length of the adjacent (shared) edge, compute the proportion of that length on the edge of each of the two domains, and take the larger proportion as the degree of engagement between the two domains.
Obtain the edge of the i-th color connected domain through a connected-domain extraction algorithm and slide a 3 x 3 window along this edge. If edges of several regions appear in the window, an adjacent (shared) edge is present; suppose the i-th color connected domain is adjacent to the j-th. Count the length $l_{ij}$ of the adjacent edge and compute its proportion on each domain's edge, $p_{ij} = l_{ij} / L_i$ and $p_{ji} = l_{ij} / L_j$, where $L_i$ and $L_j$ are the edge lengths of the i-th and j-th color connected domains. The larger of the two proportions,

$$F_{ij} = \max(p_{ij}, p_{ji})$$

is taken as the degree of engagement between the i-th and j-th connected domains.
The greater the degree of engagement, the greater the correlation between the two color connected domains.
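A minimal sketch of the degree of engagement on a label image; the patent's 3 x 3 sliding window is approximated here by 4-neighbour adjacency checks, which is an assumption:

```python
import numpy as np

def engagement_degree(labels, i, j):
    # Shared-edge length as a proportion of each domain's own edge
    # length; the degree of engagement is the larger proportion.
    h, w = labels.shape
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def proportion(a, b):
        edge = shared = 0
        for y in range(h):
            for x in range(w):
                if labels[y, x] != a:
                    continue
                on_edge = touches_b = False
                for dy, dx in nbrs:
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        on_edge = True          # image border counts as edge
                    elif labels[yy, xx] != a:
                        on_edge = True
                        if labels[yy, xx] == b:
                            touches_b = True
                edge += on_edge
                shared += touches_b
        return shared / edge

    return max(proportion(i, j), proportion(j, i))

# A 4x4 image: left half is domain 0, right half is domain 1.
lab = np.array([[0, 0, 1, 1]] * 4)
F = engagement_degree(lab, 0, 1)
```

Here every pixel of each 2x4 half lies on an edge (8 edge pixels per domain), and 4 of them border the other domain, so both proportions are 0.5.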
2. And acquiring the color difference balance degree between two adjacent color connected domains.
2.1 obtaining a first color difference between the target color connected component and the adjacent connected component.
Mean-shift clustering does not make the color values of each region absolutely uniform, but the color value of the cluster center represents the current color region well, so the color component at the cluster-center position is taken as the main color value of the current color region.
Taking the adjacent i-th and j-th color connected domains as an example, let the j-th domain be the target color connected domain and the i-th domain its adjacent connected domain, and compute the difference of their main color values

$$\Delta C_1 = \lVert c_j - c_i \rVert$$

as the first color difference, where $c_i$ and $c_j$ are the main color values (the [a, b] cluster centers) of the two domains.
2.2 in other color connected domains adjacent to the target color connected domain except the adjacent connected domain, selecting the color connected domain with the maximum degree of engagement with the target color connected domain as the corresponding engagement connected domain, and obtaining a second color difference between the target color connected domain and the engagement connected domain.
Similarly, for the j-th color connected domain, i.e. the target color connected domain, compute the degree of engagement between the target domain and each adjacent color connected domain other than the adjacent connected domain (the i-th), and select the domain with the largest degree of engagement as the engagement connected domain; suppose it is the k-th color connected domain. The difference of the main color values

$$\Delta C_2 = \lVert c_j - c_k \rVert$$

is taken as the second color difference.
And 2.3, taking the ratio of the first color difference and the second color difference as the color difference equalization degree between the target color connected domain and the adjacent connected domain.
$$B_{ij} = \frac{\Delta C_1}{\Delta C_2}$$

where $B_{ij}$ represents the color difference balance degree between the target color connected domain and its adjacent connected domain.
Because the color difference between two color regions in the image may be large, and blur caused by the depth of field may form a new color region between them, the color difference between two adjacent regions alone cannot represent the blur degree of the current region well. The embodiment therefore uses the ratio of the first and second color differences as the color difference balance degree, to judge whether the target color connected domain is a new domain formed between two other color connected domains.
It should be noted that, if the target color connected component has only one adjacent color connected component, the color difference balance between the two color connected components is 1.
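The first color difference, second color difference and balance degree can be sketched as follows, treating main color values as (a, b) cluster centers; the function name and the use of Euclidean distance are assumptions:

```python
import numpy as np

def color_diff_balance(c_target, c_adjacent, c_engaged=None):
    # c_* are main colour values, i.e. [a, b] cluster centres.
    # First colour difference: target vs adjacent domain; second:
    # target vs its most engaged other neighbour. The balance degree
    # is their ratio, and 1 when the target has a single neighbour.
    d1 = float(np.linalg.norm(np.asarray(c_target) - np.asarray(c_adjacent)))
    if c_engaged is None:
        return d1, 1.0
    d2 = float(np.linalg.norm(np.asarray(c_target) - np.asarray(c_engaged)))
    return d1, d1 / d2

d1, B = color_diff_balance((10.0, 10.0), (13.0, 14.0), (10.0, 20.0))
_, B_single = color_diff_balance((0.0, 0.0), (3.0, 4.0))
```

With the sample values, the first difference is 5, the second 10, so the balance degree is 0.5; with only one neighbour it defaults to 1.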
3. Obtain the blur degree between two adjacent color connected domains by combining the degree of engagement and the color difference balance degree.
Acquiring a color difference degree based on the color difference equilibrium degree, wherein the sum of the color difference equilibrium degree and the color difference degree is a first preset value; and taking the sum of the absolute value of the color difference degree and the first color difference as a negative index of a second preset value, and taking the product of the obtained result and the degree of engagement as the fuzzy degree.
In the embodiment of the present invention the first preset value is 1, so the color difference degree is

$$D_{ij} = 1 - B_{ij}$$

and the blur degree is then calculated as

$$M_{ij} = F_{ij} \cdot e^{-\left(\lvert D_{ij} \rvert + \Delta C_1\right)}$$

where $M_{ij}$ represents the blur degree between the target color connected domain and its adjacent connected domain, $F_{ij}$ is the degree of engagement, and the natural number $e$ is the base raised to the negative exponent $\lvert D_{ij} \rvert + \Delta C_1$.
In the embodiment of the present invention, the value of the second preset value is e, and in other embodiments, the second preset value may also be other natural numbers greater than 1.
The greater the correlation between two color connected domains, the more gradual the color transition between them and the greater the blur; therefore, the greater the degree of engagement, the greater the blur degree.
If the color difference balance degree between the two color connected domains is higher, the target color connected domain is more likely a new domain formed between two others, and the two domains are more blurred; the color difference degree is therefore mapped with negative correlation, so that a higher balance degree yields a larger blur degree.
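Under the embodiment's choices (first preset value 1, second preset value e), the blur-degree computation can be sketched as:

```python
import math

def blur_degree(engagement, balance, d1):
    # M = F * exp(-(|1 - B| + dC1)): first preset value 1, second
    # preset value e. d1, the first colour difference, is assumed to
    # already be on a scale comparable to |1 - B|.
    color_diff = 1.0 - balance            # balance + colour-diff degree == 1
    return engagement * math.exp(-(abs(color_diff) + d1))

m_max = blur_degree(1.0, 1.0, 0.0)        # perfectly balanced, no colour gap
```

The monotonicity matches the text: a higher balance degree or a higher degree of engagement both raise the blur degree.
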
S003, classifying all fuzzy degrees to obtain a high fuzzy group and a low fuzzy group, and performing circle fitting on coordinates of a color connected domain corresponding to the low fuzzy group to obtain a clear range; and adjusting the aperture of the camera to ensure that the minimum circumscribed circle of the target object is consistent with the clear range to obtain a clear image.
The method comprises the following specific steps:
1. a clear range is obtained.
All blur degrees are divided into two classes with a k-means clustering algorithm (k = 2), separating high blur values from low blur values; the high values belong mostly to regions far from the image center, and the low values mostly to regions near the center.
Because depth-of-field blur spreads circularly, a circle is fitted to the coordinates of the color connected domains corresponding to the low blur values; since the low blur values lie mostly at the image center, the resulting circular range is the clear range.
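A sketch of the grouping and circle fitting: two-class k-means on the blur degrees, then a least-squares circle fit to the low-blur coordinates. The Kasa fitting method is an assumed choice; the patent does not name a specific fitting algorithm:

```python
import numpy as np

def two_means(values, iters=50):
    # k-means with k = 2 on the blur degrees: 0 = low-blur group,
    # 1 = high-blur group (centres initialised at min and max).
    v = np.asarray(values, dtype=float)
    c = np.array([v.min(), v.max()])
    lab = np.zeros(len(v), dtype=int)
    for _ in range(iters):
        lab = (np.abs(v - c[0]) > np.abs(v - c[1])).astype(int)
        c = np.array([v[lab == k].mean() for k in (0, 1)])
    return lab

def fit_circle(xs, ys):
    # Kasa least-squares circle fit: solve x^2 + y^2 = 2cx*x + 2cy*y + c0.
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    b = xs ** 2 + ys ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = float(np.sqrt(sol[2] + cx ** 2 + cy ** 2))
    return float(cx), float(cy), r

groups = two_means([0.10, 0.12, 0.90, 0.95])
t = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
cx, cy, r = fit_circle(5.0 * np.cos(t) + 2.0, 5.0 * np.sin(t) - 1.0)
```

On points sampled from a circle of radius 5 centred at (2, -1), the fit recovers the centre and radius exactly; the circle fitted to the low-blur coordinates is the clear range.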
2. And adjusting the aperture of the camera to enable the minimum circumscribed circle of the target object to be consistent with the clear range, and obtaining a clear image.
Acquiring a minimum circumscribed circle of a target object, calculating a difference value between the radius of the minimum circumscribed circle and the radius of a clear range to be used as an intersection difference value, searching an initial aperture value by adopting a simulated annealing method when the intersection difference value is larger than a preset threshold value, superposing an original aperture of a camera and the initial aperture value, acquiring an image of the target object again and calculating the intersection difference value until the intersection difference value is smaller than or equal to the preset threshold value, and acquiring an adjustment value of the aperture; and the target object image shot after the aperture is adjusted by using the adjusting value is a clear image.
The target object is obtained by semantic segmentation of the target object image; its contour is obtained by connected-domain analysis, and the minimum circumscribed circle of the current target object is computed from the contour. The difference between the radius of the minimum circumscribed circle and the radius of the clear range is taken as the intersection difference. When the intersection difference exceeds the preset threshold, the two circles do not coincide and the aperture must be adjusted: an initial aperture value is searched with the simulated annealing method and superposed on the camera's original aperture, the target object image is re-collected and the intersection difference recomputed; the superposed value is then randomly perturbed and the iteration repeated until the iteration count is reached, and the optimal solution is taken as the aperture adjustment value.
Semantic segmentation is performed with a trained semantic segmentation network for identifying the target object.
As an example, in the embodiment of the present invention, the preset threshold is 0.1 times of the minimum circumscribed radius, the number of iterations is 100, and the temperature threshold in the simulated annealing algorithm is 0.1.
The target object image shot after the aperture is adjusted with this adjustment value is the clear image.
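A sketch of the simulated-annealing aperture search. Here `clear_radius_of` stands in for re-shooting at a candidate aperture and re-estimating the clear-range radius; the step size, cooling rate, seed and the toy linear model are assumptions (only the iteration count of 100 and the temperature threshold of 0.1 come from the embodiment):

```python
import math
import random

def anneal_aperture(a0, target_radius, clear_radius_of,
                    t0=1.0, t_min=0.1, iters=100, seed=0):
    def energy(a):
        # Intersection difference: |r_circumscribed - r_clear(aperture)|.
        return abs(target_radius - clear_radius_of(a))

    rng = random.Random(seed)
    cur, cur_e = a0, energy(a0)
    best, best_e = cur, cur_e
    t = t0
    for _ in range(iters):
        cand = cur + rng.uniform(-1.0, 1.0)        # random perturbation
        e = energy(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if e < cur_e or rng.random() < math.exp((cur_e - e) / t):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
        t = max(t * 0.95, t_min)                   # cool towards the threshold
    return best, best_e

# Toy model: clear-range radius grows linearly with aperture (an
# assumption for the demo), so the optimum lies at aperture 5.
best, best_e = anneal_aperture(0.0, 30.0, lambda a: 10.0 + 4.0 * a)
```

The returned `best` is the aperture adjustment value whose intersection difference is smallest among all visited candidates.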
And step S004, acquiring the angular point and the position of the lifting hook of the target object in the clear image, acquiring a first moving distance of the angular point at adjacent moments and a second moving distance of the lifting hook, and judging the shaking degree according to the first moving distance and the second moving distance so as to control the speed of the overhead travelling crane.
The method comprises the following specific steps:
1. and acquiring the corner point and the hook position of the target object.
The target object sways when hoisting just begins. By segmenting the obtained clear image accurately with the semantic segmentation network, the current hoisting state of the target object can be determined more precisely: if it sways too strongly, the object risks falling off and hoisting should be stopped. In this way the swaying of the material hoisted by the crown block can be controlled more accurately.
The target object in the clear image is identified with the trained semantic segmentation network; since the identified object is now sharp, its minimum circumscribed circle is obtained, and the points where the object's contour intersects the minimum circumscribed circle are taken as corner points, which are tracked to obtain the object's moving distance.
Before a clear image is obtained the image may be locally blurred, so the target object contour and minimum circumscribed circle produced by semantic segmentation may not be accurate enough; the corner points are therefore taken from the contour and minimum circumscribed circle of the clear image.
A semantic segmentation network for detecting the hook is also trained. Because the hook pulley structure occludes the hook when the camera shoots from above, the hook itself cannot be photographed, so its position must be annotated manually, from experience, in the training-set images: pixels at the hook position are labeled 1 and all other pixels 0, and the network is trained with a cross-entropy loss function.
And recognizing the position of the hook by using the trained semantic segmentation network.
2. And acquiring a first moving distance of the corner points at adjacent time and a second moving distance of the lifting hook, and judging the shaking degree according to the first moving distance and the second moving distance so as to control the speed of the overhead travelling crane.
The position coordinates of the corner points of the target object and of the lifting hook are recognized, and both are tracked with an optical flow method. The average moving distance of the corner points of the target object between two adjacent frames is counted as the first moving distance d1, and the moving distance of the lifting hook between the same two frames as the second moving distance d2. The shaking degree between the target object and the lifting hook is characterized by the magnitude relation between the two moving distances:

S = |d1 - d2|
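The shaking-degree computation can be sketched as follows. Note the hedge: the patent gives the exact combination of the two distances only as an embedded formula image, so the absolute difference |d1 - d2| is an assumption here, consistent with the surrounding text that S grows with the disagreement between object and hook motion.

```python
import numpy as np

def shaking_degree(corners_t0, corners_t1, hook_t0, hook_t1):
    """Shaking degree S between target object and lifting hook.

    d1: average displacement of the tracked corner points between two
        adjacent frames (first moving distance).
    d2: displacement of the lifting hook between the same frames
        (second moving distance).
    S is assumed here to be |d1 - d2|.
    """
    d1 = float(np.mean(np.linalg.norm(
        np.asarray(corners_t1) - np.asarray(corners_t0), axis=1)))
    d2 = float(np.linalg.norm(np.asarray(hook_t1) - np.asarray(hook_t0)))
    return abs(d1 - d2)
```

If the load moves with the hook, d1 tracks d2 and S stays small; a freely swinging load makes the corner displacement diverge from the hook displacement and S grows.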
The larger the shaking degree S, the stronger the shaking; if the shaking is too strong, the speed of the overhead travelling crane needs to be reduced to keep the state of the lifted target object stable.
When the shaking degree S is greater than the shaking threshold r, the shaking between the target object and the lifting hook is too large, the current crown block is operating unsafely, and the crown block is controlled to decelerate.
When the current speed is too high, the speed is reduced by a fixed step a each time; after each reduction, the shaking degree is recalculated by the above method until S ≤ r. Here a is a hyper-parameter to be adjusted according to the specific implementation scenario; as an example, a = 5 in this embodiment.
When the shaking degree satisfies S ≤ r, the current crown block is working normally.
As an example, r takes a value of 3 in the embodiment of the present invention.
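The deceleration policy above (reduce the speed by the fixed step a = 5 until S ≤ r = 3, the embodiment's example values) can be sketched as a simple loop. `shake_fn` is a hypothetical stand-in for the image-based shaking measurement, which in practice would recapture frames and recompute S at the new speed.

```python
def control_speed(speed, shake_fn, a=5.0, r=3.0, v_min=0.0):
    """Reduce crown-block speed stepwise until shaking is acceptable.

    speed:    current crown-block speed.
    shake_fn: maps a speed to the measured shaking degree S
              (stand-in for the image-analysis measurement).
    a:        fixed deceleration step (embodiment example: 5).
    r:        shaking threshold (embodiment example: 3).
    """
    while shake_fn(speed) > r and speed - a >= v_min:
        speed -= a                 # decelerate by the fixed step
    return speed                   # speed at which S <= r (or floor reached)
```

If S is already at or below r, the speed is left unchanged and the crown block continues to work normally.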
In summary, in the embodiments of the present invention, a target object transported by a crown block is obtained and an image of it is collected from a top view, the image containing the hook pulley structure, the lifting rope and the target object; the image is converted into the lab color space and the colors are grouped to obtain a plurality of color connected domains; the degree of engagement between two adjacent color connected domains is obtained by calculating the proportion that their shared edge occupies on the edge of each domain; the color difference balance degree between two adjacent color connected domains is obtained; the fuzzy degree between two adjacent color connected domains is obtained by combining the degree of engagement and the color difference balance degree; all fuzzy degrees are classified into a high fuzzy group and a low fuzzy group, and a circle is fitted to the coordinates of the color connected domains of the low fuzzy group to obtain a clear range; the camera aperture is adjusted until the minimum circumscribed circle of the target object coincides with the clear range, giving a clear image; finally, the corner points of the target object and the position of the lifting hook are obtained in the clear image, the first moving distance of the corner points and the second moving distance of the lifting hook at adjacent moments are acquired, and the shaking degree is judged from the two distances so as to control the speed of the overhead travelling crane.
The embodiment of the invention can adaptively adjust the aperture to obtain a clear image, further judge the running state of the crown block, control the running speed of the crown block, reduce the interference of the background and obtain a more accurate judgment result.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application, and they should be construed as being included in the present application.

Claims (6)

1. The intelligent crown block control method based on image analysis is characterized by comprising the following steps of:
acquiring a target object transported by a crown block, and acquiring a target object image from a top view, wherein the target object image comprises a hook pulley structure, a lifting rope and the target object; converting the target object image into a lab color space, and grouping different colors to obtain a plurality of color connected domains;
obtaining a degree of engagement between two adjacent color connected domains by calculating the proportion that their adjacent edge occupies on the edge of each color connected domain; acquiring a color difference balance degree between the two adjacent color connected domains; obtaining a fuzzy degree between the two adjacent color connected domains by combining the degree of engagement and the color difference balance degree;
classifying all fuzzy degrees to obtain a high fuzzy group and a low fuzzy group, and performing circle fitting on coordinates of a color connected domain corresponding to the low fuzzy group to obtain a clear range; adjusting the aperture of the camera to enable the minimum circumscribed circle of the target object to be consistent with the clear range, and obtaining a clear image;
the method comprises the steps of obtaining the angular point and the position of a lifting hook of a target object in a clear image, obtaining a first moving distance of the angular point at adjacent moments and a second moving distance of the lifting hook, judging the shaking degree according to the first moving distance and the second moving distance, and further controlling the speed of the overhead travelling crane.
2. The intelligent crown block control method based on image analysis according to claim 1, wherein the color connected domain obtaining method comprises:
the method comprises the steps of obtaining color components of each pixel point, clustering the color components to obtain a plurality of color categories, extracting corresponding areas of the color categories in a target object image, and obtaining corresponding color connected domains.
3. The image analysis-based intelligent crown block control method according to claim 1, wherein the method for obtaining the degree of engagement is as follows:
extracting the edge of each color connected domain, establishing a sliding window to slide on the edge, obtaining the lengths of the adjacent edges, respectively calculating the ratio of the adjacent edge length on the edges of the two color connected domains, and selecting the larger ratio as the degree of engagement between the two corresponding color connected domains.
4. The intelligent crown block control method based on image analysis according to claim 1, wherein the obtaining of the color difference equalization degree comprises:
acquiring a first color difference between a target color connected domain and an adjacent connected domain;
selecting a color connected domain with the maximum degree of engagement with the target color connected domain as a corresponding engagement connected domain in other color connected domains adjacent to the target color connected domain except the adjacent connected domain, and acquiring a second color difference between the target color connected domain and the engagement connected domain;
and taking the ratio of the first color difference and the second color difference as the color difference balance degree between the target color connected domain and the adjacent connected domain.
5. The intelligent crown block control method based on image analysis according to claim 4, wherein the fuzzy degree obtaining method comprises:
acquiring a color difference degree based on the color difference balance degree, wherein the sum of the color difference balance degree and the color difference degree is a first preset value; and taking the sum of the absolute value of the color difference degree and the first color difference as a negative index of a second preset value, and taking the product of the obtained result and the degree of engagement as the fuzzy degree.
6. The intelligent crown block control method based on image analysis according to claim 1, wherein the clear image obtaining method comprises the following steps:
acquiring a minimum circumscribed circle of a target object, calculating a difference value between the radius of the minimum circumscribed circle and the radius of the clear range to be used as an intersection difference value, when the intersection difference value is larger than a preset threshold value, searching an initial aperture value by adopting a simulated annealing method, superposing an original aperture of a camera and the initial aperture value, acquiring an image of the target object again, calculating the intersection difference value until the intersection difference value is smaller than or equal to the preset threshold value, and acquiring an adjustment value of the aperture; and the clear image is the target object image shot after the aperture is adjusted by using the adjusting value.
CN202210483980.XA 2022-05-06 2022-05-06 Intelligent crown block control method based on image analysis Active CN114581857B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210483980.XA CN114581857B (en) 2022-05-06 2022-05-06 Intelligent crown block control method based on image analysis


Publications (2)

Publication Number Publication Date
CN114581857A true CN114581857A (en) 2022-06-03
CN114581857B CN114581857B (en) 2022-07-05

Family

ID=81779176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210483980.XA Active CN114581857B (en) 2022-05-06 2022-05-06 Intelligent crown block control method based on image analysis

Country Status (1)

Country Link
CN (1) CN114581857B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8144984B2 (en) * 2006-12-08 2012-03-27 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program for color fringing estimation and compensation
EP2575077A2 (en) * 2011-09-29 2013-04-03 Ricoh Company, Ltd. Road sign detecting method and road sign detecting apparatus
CN103034836A (en) * 2011-09-29 2013-04-10 株式会社理光 Road sign detection method and device
CN107197233A (en) * 2017-06-23 2017-09-22 安徽大学 Monitor video quality of data evaluating method and device based on edge calculations model
CN107832359A (en) * 2017-10-24 2018-03-23 杭州群核信息技术有限公司 A kind of picture retrieval method and system
CN110428463A (en) * 2019-06-04 2019-11-08 浙江大学 The method that image automatically extracts center during aspherical optical element defocus blur is fixed
CN111311573A (en) * 2020-02-12 2020-06-19 贵州理工学院 Branch determination method and device and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782693A (en) * 2022-06-17 2022-07-22 江苏乐尔环境科技股份有限公司 Health management system for industrial engineering equipment
CN115314634A (en) * 2022-06-28 2022-11-08 上海艾为电子技术股份有限公司 Anti-shake detection method and device, terminal equipment and readable storage medium
CN116902487A (en) * 2023-07-28 2023-10-20 徐州市奥睿自动化设备有限公司 Intelligent monitoring and regulating system for article conveying based on visualization
CN116902487B (en) * 2023-07-28 2023-12-29 徐州市奥睿自动化设备有限公司 Intelligent monitoring and regulating system for article conveying based on visualization



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant