CN114581447A - Conveying belt deviation identification method and device based on machine vision - Google Patents
- Publication number
- CN114581447A (application number CN202210489250.0A)
- Authority
- CN
- China
- Prior art keywords
- belt
- image
- deviation
- current frame
- edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G43/00—Control devices, e.g. for safety, warning or fault-correcting
- B65G43/02—Control devices, e.g. for safety, warning or fault-correcting detecting dangerous physical condition of load carriers, e.g. for interrupting the drive in the event of overheating
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/02—Control or detection
- B65G2203/0266—Control or detection relating to the load carrier(s)
- B65G2203/0283—Position of the load carrier
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
The invention discloses a method and a device for identifying conveyor belt deviation based on machine vision. The method comprises: acquiring a monitoring video frame of the conveyor belt on line, and obtaining a current-frame region-of-interest image to be detected according to a preset background area; performing block matching between the current-frame image to be detected and a belt background template picture to determine the position interval of the belt edge; and monitoring the deviation of the conveyor belt and performing belt deviation early warning according to the position interval of the belt edge. The method improves the robustness and identification accuracy of belt deviation early warning under interference scenes such as image blurring, material occlusion and longitudinal belt scratches; it can also monitor the belt deviation trend on line and quantitatively analyse belt deviation that has not yet reached the alarm level.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for identifying deviation of a conveying belt based on machine vision.
Background
In industrial fields such as coal mining, cement and metallurgy, belt transportation is the mainstream material transportation mode. Due to production requirements, the conveyor belt must run for long periods under heavy load. In this process, factors such as belt quality, equipment installation errors, blanking points and carrier roller conditions can cause the belt to deviate, which leads to problems such as belt tearing and material spillage, can seriously damage the transportation equipment, and greatly affects production safety and efficiency. Therefore, timely early warning of conveyor belt deviation is extremely necessary.
Existing early warning of conveyor belt deviation mainly relies on two approaches: deviation switches and intelligent monitoring.
The deviation-switch-based belt deviation early warning method identifies belt deviation from the deflection angle of a vertical roller pushed by the deviating belt, and grades the early warning according to the magnitude of that deflection angle. This early warning method, which relies on a mechanical structure, is highly stable, but it cannot monitor the deviation state and trend of the belt on line, inter-device communication is complex, large numbers of switches must be installed, and deployment and maintenance costs are high.
The belt deviation early warning method based on intelligent monitoring combines a video monitoring system and an image processing algorithm, and can monitor and analyze the deviation state and trend of the conveying belt on line. However, the existing belt deviation algorithm mainly depends on an edge extraction technology, and common interference factors such as image blurring, material shielding, belt longitudinal scratches and the like in the actual production process all affect the robustness and the identification precision of the algorithm, so that deviation false alarm or missing detection is caused.
Therefore, for existing belt transportation production sites, a belt deviation identification method with low cost, high precision and high robustness is urgently needed, so as to avoid production delays and reduce fault maintenance costs.
Disclosure of Invention
In view of the above disadvantages of the prior art, an object of the present invention is to provide a method and an apparatus for identifying deviation of a conveyor belt based on machine vision, which can monitor the deviation degree of the conveyor belt on line and perform a deviation warning in time.
In order to achieve the above and other related objects, the present invention provides a method for identifying deviation of a conveyor belt based on machine vision, comprising:
acquiring a monitoring video frame of a conveyor belt on line, and obtaining a current-frame region-of-interest image to be detected according to a preset background area;
performing block matching between the current-frame image to be detected and a belt background template picture to determine a position interval of the belt edge;
and monitoring the deviation of the conveyor belt and performing belt deviation early warning according to the position interval of the belt edge.
In an optional embodiment of the present invention, the obtaining a monitoring video frame of a conveyor belt on line and obtaining a current frame of an image of interest to be measured according to a preset background area includes:
acquiring monitoring video frame data of a conveyor belt on line, and extracting the current frame monitoring original image according to the preset background area to acquire a current frame extracted image;
and carrying out perspective transformation processing on the current frame extracted image so as to convert the current frame extracted image into an image with the same size as the belt background template picture, wherein the image is used as the current frame image to be detected.
In an optional embodiment of the present invention, the block matching of the current frame image of interest to be detected and a belt background template image, and the determining of the position section of the belt edge includes:
respectively dividing the current frame interesting image and the belt background template picture into a plurality of image blocks with the same quantity and size;
carrying out similarity matching on the current frame image to be detected and the belt background template image block by block to obtain a similarity matrix;
according to a preset similarity threshold value, binarizing the similarity matrix to obtain a belt positioning matrix;
traversing the belt positioning matrix column by column, and recording the ordinate of the first difference image block in each column of elements, the corresponding image block containing a section of the belt edge, wherein the ordinates of the first difference image blocks of all columns jointly form a belt edge positioning vector, through which the determination of the position interval of the belt edge is completed.
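The block-matching localization summarized in the steps above can be sketched as follows. This is an illustrative NumPy implementation, not the patented code: the grayscale input, the mean-absolute-difference similarity, the block side length `t_c`, the threshold `t_alpha` and the use of `-1` to mark columns without belt intrusion are all assumptions.

```python
import numpy as np

def locate_belt_edge(roi: np.ndarray, template: np.ndarray,
                     t_c: int, t_alpha: float) -> np.ndarray:
    """Block-match a rectified ROI against the background template and return,
    for each block column, the row index of the first differing block
    (-1 where the whole column matches the background)."""
    H, W = template.shape
    N, M = H // t_c, W // t_c
    alpha = np.zeros((N, M))
    for n in range(N):
        for m in range(M):
            a = roi[n*t_c:(n+1)*t_c, m*t_c:(m+1)*t_c].astype(float)
            b = template[n*t_c:(n+1)*t_c, m*t_c:(m+1)*t_c].astype(float)
            alpha[n, m] = np.abs(a - b).mean()   # similarity matrix entry
    beta = (alpha > t_alpha).astype(int)         # binarized positioning matrix
    # first difference block per column; -1 where the column is all background
    return np.where(beta.any(axis=0), beta.argmax(axis=0), -1)
```

A deviating belt that enters the background region shows up as a run of non-negative entries in the returned vector.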
In an optional embodiment of the invention, the monitoring the deviation of the transportation belt and the belt deviation warning according to the position interval of the belt edge comprises:
calculating the interval upper boundary line or interval lower boundary line of the belt edge in the current-frame region-of-interest image according to the belt edge positioning vector;
and monitoring the deviation of the conveyor belt through the interval upper boundary line or lower boundary line of the belt edge in the current-frame image to be detected, and performing belt deviation early warning.
In an optional embodiment of the present invention, the monitoring of the deviation amount of the conveyor belt and the belt deviation warning by means of the interval upper boundary line or lower boundary line of the belt edge in the current-frame region-of-interest image to be detected includes:
mapping the interval upper boundary line or lower boundary line of the belt edge in the current-frame region-of-interest image to be detected back to the current-frame monitored original image through inverse perspective transformation, so as to obtain the interval upper boundary line or lower boundary line of the belt edge in the current-frame monitored original image;
calculating the offset of the upper boundary line or the lower boundary line of the belt edge in the current frame monitoring original image and a pre-marked belt edge reference line segment;
and comparing the offset with the belt deviation alarm threshold value to judge whether the transportation belt is deviated currently.
In an optional embodiment of the present invention, the offset is compared with the belt deviation alarm threshold to determine whether the conveyor belt is currently deviated, according to the following formula:

result = normal, if max(Δd_l, Δd_r) ≤ t_d; result = alarm, if max(Δd_l, Δd_r) > t_d

wherein Δd_l and Δd_r are the offsets of the interval lower boundary lines of the left and right belt edges in the current-frame monitored original image, t_d is the belt deviation alarm threshold, normal indicates that the conveyor belt runs normally without deviation, and alarm indicates that the belt has deviated and a belt deviation alarm is required.
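The two-sided comparison above reduces to a small predicate; the sketch below is an assumed formalization (the function and parameter names are illustrative, not from the patent):

```python
def judge_deviation(delta_d_left: float, delta_d_right: float, t_d: float) -> str:
    """'normal' while both edge offsets stay within the alarm threshold t_d,
    'alarm' as soon as either side exceeds it."""
    return "normal" if max(delta_d_left, delta_d_right) <= t_d else "alarm"
```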
In an optional embodiment of the invention, the method for identifying the deviation of the conveyor belt based on machine vision further comprises the following steps:
obtaining a standard belt monitoring image in an off-line manner, and marking reference edge points on two sides of the conveying belt in the standard belt monitoring image;
determining the preset background area outside the conveyor belt according to the reference edge points on the two sides of the conveyor belt;
and constructing the belt background template picture according to the preset background area.
In an optional embodiment of the present invention, the determining the preset background area outside the conveyor belt according to the reference edge points at both sides of the conveyor belt includes:
setting a deviation detection range;
and determining the preset background area by taking the reference edge points on the two sides of the conveyor belt as a reference and extending towards the outer side of the belt along the abscissa direction, the outward extension being the product of the deviation detection range and the transverse belt width at the corresponding reference edge points.
In an optional embodiment of the present invention, the constructing the belt background template map according to the preset background area includes:
performing image extraction on the standard belt monitoring image according to the preset background area to obtain a standard extracted image;
and carrying out perspective transformation processing on the standard extracted image so as to convert the standard extracted image into the belt background template drawing.
In order to achieve the above and other related objects, the present invention further provides a device for identifying deviation of a conveyor belt based on machine vision, comprising:
the ROI image acquisition module is used for acquiring a monitoring video frame of the conveyor belt on line and acquiring an image of interest to be detected of a current frame according to a preset background area;
the edge position determining module is used for carrying out block matching on the current frame image to be detected and the belt background template image to determine a position interval of the belt edge;
and the deviation early warning module is used for monitoring the deviation of the conveying belt and carrying out belt deviation early warning according to the position interval of the belt edge.
The method and the device for identifying conveyor belt deviation based on machine vision according to the invention avoid the dependence of existing edge-extraction-based belt deviation early warning methods on complicated post-processing steps such as texture enhancement and line screening, while improving the robustness and identification precision of belt deviation early warning under interference scenes such as image blurring, material occlusion and longitudinal belt scratches. Meanwhile, compared with deviation-switch-based early warning methods relying on a mechanical structure, the belt deviation trend can be monitored on line, and belt deviation that has not yet reached the alarm level can be analysed quantitatively.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying deviation of a conveyor belt based on machine vision.
Fig. 2 is a sub-flowchart of step S10.
FIG. 3 is a schematic diagram of the belt labeling and background template building process of the present invention.
Fig. 4 is a sub-flowchart of step S20.
FIG. 5 is a schematic diagram illustrating the block matching and belt edge zone location process of the present invention.
FIG. 6 is a schematic illustration of a belt offset calculation process according to the present invention.
Fig. 7 is a functional block diagram of the apparatus for identifying deviation of a conveyor belt based on machine vision according to the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention.
Please refer to fig. 1-7. It should be noted that the drawings provided in the present embodiment are only for schematically illustrating the basic idea of the present invention, and the drawings only show the components related to the present invention rather than being drawn according to the number, shape and size of the components in actual implementation, and the form, quantity and proportion of each component in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
Referring to fig. 1, fig. 1 is a flow chart illustrating a method for identifying deviation of a conveyor belt based on machine vision according to a preferred embodiment of the present invention. The conveyor belt deviation identification method based on machine vision provided by the embodiment of the invention can be applied to the technical field of image processing, can monitor the deviation degree of the conveyor belt on line and can perform deviation early warning in time.
In this embodiment, as shown in fig. 1, for the purpose of implementing a method for identifying the deviation of a transportation belt based on machine vision, it is necessary to obtain a standard monitoring image offline in advance, label belt reference information, and obtain a belt background template map (step S10), and then position an edge section of the belt online by using a block matching method to perform analysis and warning of the deviation of the transportation belt (step S20). The method for identifying the deviation of the conveyor belt based on machine vision according to the embodiment will be described in detail with reference to the accompanying drawings.
First, step S10 is executed to obtain a standard monitoring image, label the belt reference information, and obtain a belt background template map. Fig. 2 shows a sub-flowchart of step S10, and fig. 3 shows a schematic diagram of the belt labeling and background template building process of the present embodiment.
Specifically, in this embodiment, before performing online identification of belt deviation, it is necessary to perform partial information labeling and acquisition of a belt background template map for a monitoring scene, as shown in fig. 2, step S10 includes the following steps:
and S11, acquiring a standard belt monitoring image, wherein the standard belt monitoring image is marked with reference edge points on two sides of the conveying belt.
As an example, as shown in fig. 3, an off-line belt monitoring image I without deviation can be obtained, and 4 reference edge points (of course, 6, 8, etc. are also possible) on the left and right sides of the belt are manually labelled to form the standard belt monitoring image. The belt left-side reference edge points are recorded as B_l and C_l, and the line segment B_lC_l is the belt left-side reference edge; similarly, the belt right-side edge points are recorded as B_r and C_r, and the line segment B_rC_r is the belt right-side reference edge. In addition, the belt reference edge points B_l and B_r, and the belt edge points C_l and C_r, respectively share the same ordinate, i.e. y_{B_l} = y_{B_r} = y_B and y_{C_l} = y_{C_r} = y_C.
As an example, the coordinates of a point are expressed in the form (x, y), where x is the pixel abscissa of the point in the image and y is the pixel ordinate of the point in the image.
It should be noted that, since the subsequent steps of identifying the left deviation and the right deviation of the conveyor belt are basically the same in this embodiment, the steps of processing and identifying the left side of the conveyor belt are taken as an example for description, and are not separately repeated.
S12, determining the preset background area outside the conveyor belt according to the reference edge points on the two sides of the conveyor belt, namely according to the marked belt reference information. Specifically, a deviation detection range is first set; then, taking the reference edge points on the two sides of the conveyor belt as a reference and extending towards the outer side of the belt along the abscissa direction, the preset background area is determined, the outward extension being the product of the deviation detection range and the transverse belt width at the corresponding reference edge points.
Continuing with the previous example, set the deviation monitoring range to s; the preset background area of the conveyor belt (left side) is then determined in the following way:
by left edge pointB l For reference, extending outside the belt in the direction of the abscissaΔx top Determining a vertex for each pixelA l WhereinIs a line segmentB l B r Length of (i.e. belt) iny B Width of pixel, dotA l Has the coordinates of;
By left edge pointC l For reference, extending along the abscissa toward the outside of the beltΔx bottom Determining a vertex for each pixelD l Wherein,Is a line segmentC l C r Length of (i.e. belt) iny C Width of pixel, dotD l Has the coordinates of。
The quadrilateral A_lB_lC_lD_l is the preset background area on the outer (left) side of the belt and is recorded as R.
In this embodiment, the deviation monitoring range s is set at 30%, i.e. the maximum single-sided detectable belt deviation is 30% of the belt width.
It should be noted that the determination of the preset background area on the outer (right) side of the belt from the labelling information differs slightly from that of the left side.
With the deviation monitoring range set to s, the preset background area of the conveyor belt (right side) is determined in the following manner:
Taking the right edge point B_r as a reference and extending Δx_top pixels towards the outer side of the belt along the abscissa direction, a vertex A_r is determined, wherein Δx_top = s · w_B, w_B = |x_{B_r} − x_{B_l}| is the length of the line segment B_lB_r, namely the pixel width of the belt at ordinate y_B, and the point A_r has the coordinates (x_{B_r} + Δx_top, y_B);
taking the right edge point C_r as a reference and extending Δx_bottom pixels towards the outer side of the belt along the abscissa direction, a vertex D_r is determined, wherein Δx_bottom = s · w_C, w_C = |x_{C_r} − x_{C_l}| is the length of the line segment C_lC_r, namely the pixel width of the belt at ordinate y_C, and the point D_r has the coordinates (x_{C_r} + Δx_bottom, y_C).
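The vertex construction for both sides can be sketched as follows. This is an illustrative reading of the geometry, assuming the horizontal extension at each ordinate is s times the belt pixel width at that ordinate; the function name and the example coordinates in the test are invented.

```python
def region_vertices(B_l, B_r, C_l, C_r, s, side="left"):
    """Outer vertices (A, D) of the preset background quadrilateral.
    Each reference point is an (x, y) pixel tuple; the belt is assumed to run
    top-to-bottom with B the upper and C the lower reference points."""
    dx_top = s * abs(B_r[0] - B_l[0])      # s * belt width at ordinate y_B
    dx_bottom = s * abs(C_r[0] - C_l[0])   # s * belt width at ordinate y_C
    if side == "left":
        return (B_l[0] - dx_top, B_l[1]), (C_l[0] - dx_bottom, C_l[1])
    return (B_r[0] + dx_top, B_r[1]), (C_r[0] + dx_bottom, C_r[1])
```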
S13, constructing the belt background template picture according to the preset background area. Specifically, image extraction is carried out on the standard belt monitoring image according to the preset background area so as to obtain a standard extraction image; and carrying out perspective transformation processing on the standard extracted image so as to convert the standard extracted image into the belt background template drawing.
Continuing with the example, according to the preset background region R, extracting an ROI image (region of interest image) corresponding to the preset background region R in the belt monitoring image I as a standard extraction image;
The ROI image is then mapped by perspective transformation into a rectangular belt background template picture T. Specifically, the size of the mapped belt background template picture is W × H, where W and H are the pixel width and height chosen for the template. In particular, the vertices A_l, B_l, C_l and D_l of the ROI image are mapped by the perspective transformation to the coordinates (W, 0), (W, H), (0, H) and (0, 0), respectively.
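The quadrilateral-to-rectangle mapping can be computed with a standard direct linear transform over the four vertex correspondences; this is what, for example, OpenCV's `getPerspectiveTransform` does. The sketch below is a self-contained NumPy version; the corner coordinates and the W × H values in the test are invented examples.

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 perspective transform mapping 4 src points to 4 dst
    points (direct linear transform, with h_33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(Hm, p):
    """Apply the homography to a single (x, y) point."""
    q = Hm @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])
```

In practice the whole ROI would be warped with the same matrix (e.g. `cv2.warpPerspective`) rather than point by point.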
Then, in step S20, the belt edge interval is located on line by the block matching method, and deviation analysis and early warning of the conveyor belt are performed. After the labelling information and the belt background template picture are obtained, on-line monitoring and identification of conveyor belt deviation can begin. The method acquires a real-time detection image of the conveyor belt from the monitoring video stream and extracts the belt ROI image of the current frame according to the preset background region R. On this basis, the image is transformed into a rectangular ROI map in the same manner as in S13 and block-matched against the belt background template picture T to determine the position interval of the belt edge. Finally, the edge offset of the current frame is calculated, and process monitoring and on-line early warning of belt deviation are carried out. Fig. 4 is a sub-flowchart of step S20. As shown in fig. 4, step S20 includes the following steps:
and step S21, acquiring a monitoring video frame of the conveyor belt on line, and acquiring the current frame to-be-detected interesting image according to a preset background area.
Step S21 further comprises the steps of obtaining monitoring video frame data of the conveyor belt on line, and extracting the current frame monitoring original image according to the preset background area to obtain a current frame extracted image; and carrying out perspective transformation processing on the current frame extracted image so as to convert the current frame extracted image into an image with the same size as the belt background template picture as the current frame image to be detected.
As an example, as shown in FIG. 5, for an acquired current-frame monitoring image I_t, image extraction is first performed on I_t according to the preset background region R of S12 to obtain the preset-region ROI image shown in FIG. 5 as the current-frame extracted image; then, the perspective transformation of S13 is applied to convert the current-frame extracted image into a rectangular ROI image of the same size as the belt background template picture, namely the corrected ROI image S_t in FIG. 5, which serves as the current-frame region-of-interest image to be detected.
Step S22, performing block matching on the current-frame region-of-interest image and the belt background template picture, and determining the position interval of the belt edge.
Step S22 further includes: dividing the current-frame region-of-interest image and the belt background template picture each into a plurality of image blocks of the same number and size; performing similarity matching between the current-frame image to be detected and the belt background template picture block by block to obtain a similarity matrix; binarizing the similarity matrix according to a preset similarity threshold to obtain a belt positioning matrix; and traversing the belt positioning matrix column by column, recording the ordinate of the first difference image block in each column of elements, the corresponding image block containing a section of the belt edge, wherein the ordinates of the first difference image blocks of all columns jointly form a belt edge positioning vector, through which the determination of the position interval of the belt edge is completed.
As an example, first, the ROI image to be measured S_t and the belt background template picture T are each divided into N × M square image blocks of the same size (rectangular blocks may of course also be used). The size of an image block determines the length of the belt edge section it locates; the side length t_c of a single image block is set to 5% of the belt width (configurable as required), i.e. the interval size of the belt edge located by the block matching method is 5% of the belt width. Therefore, N = H / t_c and M = W / t_c.
Then, as shown in FIG. 5, the ROI image to be measured S_t is matched for similarity against the background template picture T on an image-block-by-image-block basis. Specifically, for an image block S_t^(n,m) of the ROI image to be measured and the corresponding image block T^(n,m) of the background template picture, the similarity α_{n,m} between them is calculated as follows:

α_{n,m} = (1/K) · Σ_{k=1}^{K} ‖ S_t^(n,m)(k) − T^(n,m)(k) ‖_2

wherein K is the total number of pixel points in an image block, k is the index of the current pixel point, and ‖·‖_2 is the L2 distance metric. After all corresponding image blocks are calculated, a similarity matrix α of size N × M is obtained.
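One plausible reading of the per-block similarity treats each pixel as an RGB vector and averages the L2 distance over the K pixels of the block; the shapes and the function name below are assumptions for illustration.

```python
import numpy as np

def block_similarity(p: np.ndarray, q: np.ndarray) -> float:
    """Mean L2 distance between corresponding pixels of two image blocks.
    p, q: (t_c, t_c, 3) arrays of RGB pixel vectors."""
    diffs = np.linalg.norm(p.astype(float) - q.astype(float), axis=-1)
    return float(diffs.mean())
```

Identical blocks score 0; the score grows with the average per-pixel colour difference, so larger values mean "less similar to the background".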
Then, according to the set similarity threshold t_α, the similarity matrix α is binarized to obtain the belt positioning matrix β:

β_{n,m} = 0, if α_{n,m} ≤ t_α; β_{n,m} = 1, if α_{n,m} > t_α

wherein 0 indicates that the corresponding image block is a similar image block, namely a background image block, and 1 indicates that it is a difference image block, namely the belt has occluded the original background within that block. It should be noted that the similarity threshold t_α can be adjusted as required: when the difference between the conveyor belt and the background is smaller, t_α is correspondingly reduced; when the difference between the conveyor belt and the background is larger, t_α is correspondingly increased. Here t_α is set to 5.
Finally, the belt positioning matrix β is traversed column by column, and the ordinate v_m of the first difference image block in each column of elements is recorded; each such image block contains a section of the belt edge. The ordinates v_m of the first difference image blocks of all the columns jointly form the belt edge positioning vector v, through which the determination of the edge position interval of the whole conveyor belt can be completed.
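The column-wise traversal of the positioning matrix amounts to a first-nonzero search per column; a minimal sketch (using -1 for columns without any difference block, which is an assumption rather than something the patent specifies):

```python
import numpy as np

def edge_locating_vector(beta: np.ndarray) -> np.ndarray:
    """For each column of the binarized positioning matrix beta, return the
    row index of the first difference block (value 1); -1 marks columns in
    which every block matched the background."""
    has_diff = beta.any(axis=0)        # does the column contain any 1 at all?
    first = beta.argmax(axis=0)        # index of the first 1 (0 if none)
    return np.where(has_diff, first, -1)
```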
And S23, monitoring the deviation of the conveying belt and carrying out belt deviation early warning according to the position interval of the belt edge.
As shown in FIG. 6, the interval upper boundary line and interval lower boundary line of the belt edge in the current-frame region-of-interest image are determined by the belt edge positioning vector v. Above the interval upper boundary line lies the background area, below the interval lower boundary line lies the belt area, and the actual boundary between the conveyor belt and the background lies between the two. Therefore, the deviation of the conveyor belt is monitored through the interval upper boundary line or lower boundary line of the belt edge in the current-frame image to be detected, and belt deviation early warning is carried out, thereby completing belt deviation identification.
Specifically, when the deviation amount of the conveyor belt is monitored through the interval upper boundary line or lower boundary line of the belt edge in the current-frame region-of-interest image and a belt deviation warning is performed, the interval upper boundary line or lower boundary line of the belt edge in the current-frame region-of-interest image is first mapped back to the current-frame monitored original image through inverse perspective transformation, so as to obtain the interval upper boundary line or lower boundary line of the belt edge in the current-frame monitored original image; the offset between the interval upper boundary line or lower boundary line of the belt edge in the current-frame monitored original image and the pre-marked belt edge reference line segment is then calculated; finally, the offset is compared with the belt deviation alarm threshold to judge whether the conveyor belt is currently deviated, thereby completing belt deviation identification.
As an example, the interval upper boundary line point set L_up of the belt edge in the current frame image of interest to be detected S_t is calculated according to the belt edge positioning vector v as follows:
The interval lower boundary line point set L_bottom of the belt edge in the current frame image of interest to be detected S_t is calculated as follows:
Further, as shown in FIG. 6, after the calculation of the interval upper boundary line point set L_up and the interval lower boundary line point set L_bottom of the belt edge in the current frame image of interest S_t is completed, the point sets L_up and L_bottom are mapped back into the current frame monitoring original image I_t through perspective inverse transformation, and are recorded as the upper boundary line L'_up and the lower boundary line L'_bottom of the belt edge in the current frame monitoring original image I_t.
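The perspective inverse mapping step above can be sketched as follows. This is a minimal illustration only: it assumes the 3x3 homography matrix H used for the forward perspective transformation is available, and the function name map_points_inverse is illustrative rather than taken from the patent.

```python
import numpy as np

def map_points_inverse(points, H):
    """Map 2-D boundary points from the image of interest back to the
    monitoring original image through the inverse of homography H."""
    H_inv = np.linalg.inv(H)
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H_inv.T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian

# With a pure scaling homography, a point (x, y) maps back to (x/2, y/2)
H = np.diag([2.0, 2.0, 1.0])
boundary = np.array([[10.0, 20.0], [30.0, 40.0]])
print(map_points_inverse(boundary, H))  # [[ 5. 10.] [15. 20.]]
```

In practice H would come from the same four-point correspondence used when the image of interest was first rectified, so mapping back is a single matrix inversion plus one matrix product per frame.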
On this basis, the offset Δd between the lower boundary line point set L'_bottom of the belt edge in the current frame monitoring original image I_t and the pre-marked belt edge reference line segment BC is calculated point by point. The specific calculation method is as follows:
wherein x_w represents the abscissa of the w-th point in the lower boundary line point set L'_bottom of the current frame monitoring original image I_t, x_{w,BC} is the abscissa of the point P_w on the pre-marked belt edge reference line segment BC having the same ordinate, and Δd is the overall offset of the belt.
Based on the above calculation steps, the offsets Δd_l and Δd_r of the interval lower boundary lines of the left and right side edges of the belt in the current frame monitoring original image I_t can be obtained, and then compared with the set belt deviation alarm threshold value t_d to judge whether the belt currently deviates. The specific judgment method is as follows:
wherein normal indicates that the belt runs normally without deviation, and alarm indicates that the belt offset exceeds the set threshold value, that is, the belt has deviated and a belt deviation alarm is required.
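The offset calculation and threshold judgment described above can be sketched as follows. Note the hedge: the patent's exact aggregation formula is not reproduced on this page, so aggregating the point-by-point differences by their mean is an assumption of this sketch, and all function names are illustrative.

```python
import numpy as np

def belt_offset(xs_boundary, xs_reference):
    """Point-by-point horizontal differences between the detected boundary
    line and the reference line segment BC, aggregated into one offset.
    Aggregation by mean is an assumption of this sketch."""
    return float(np.mean(np.asarray(xs_boundary) - np.asarray(xs_reference)))

def deviation_state(delta_d_l, delta_d_r, t_d):
    """Return 'alarm' if either side's absolute offset exceeds the belt
    deviation alarm threshold t_d, otherwise 'normal'."""
    if max(abs(delta_d_l), abs(delta_d_r)) > t_d:
        return "alarm"
    return "normal"

d_l = belt_offset([102, 103, 101], [100, 100, 100])  # left edge drifted right
d_r = belt_offset([400, 401, 399], [400, 400, 400])
print(d_l, d_r, deviation_state(d_l, d_r, t_d=5.0))  # 2.0 0.0 normal
```

Because Δd is a signed, continuous quantity, the same value also supports the quantitative trend analysis mentioned later: deviations below t_d can be logged and plotted over time rather than only triggering a binary alarm.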
It should be noted that the offset Δd may also be calculated based on the upper boundary line L'_up of the belt edge in the current frame monitoring original image I_t and the pre-marked belt edge reference line segment BC.
The conveying belt deviation identification method based on machine vision described above effectively avoids the dependence of existing edge-extraction-based belt deviation early warning methods on complex post-processing steps such as texture enhancement and line screening, and improves the robustness and identification accuracy under interference scenarios such as image blurring, material shielding and longitudinal belt scratches. Meanwhile, compared with a belt deviation early warning method based on a mechanical deviation switch, it can monitor the belt deviation trend on line and quantitatively analyze belt deviation that has not yet reached the alarm level.
Fig. 7 shows a functional block diagram of a preferred embodiment of the machine vision-based conveyor belt deviation identification device 100 of the present invention. The device 100 for identifying the deviation of the conveying belt based on the machine vision comprises a standard image acquisition module 111, a background area acquisition module 112, a background template acquisition module 113, an ROI image acquisition module 121, an edge position determination module 122 and a deviation early warning module 123.
The combination of the standard image obtaining module 111, the background area obtaining module 112 and the background template obtaining module 113 is used for obtaining a standard monitoring image, labeling the belt reference information and obtaining the belt background template map. Specifically, the standard image obtaining module 111 is configured to obtain a standard belt monitoring image, where reference edge points on two sides of the conveying belt are marked in the standard belt monitoring image; the background area obtaining module 112 is configured to determine the preset background area outside the conveying belt according to the reference edge points on the two sides of the conveying belt; and the background template obtaining module 113 is configured to construct the belt background template map according to the preset background area.
The combination of the ROI image acquisition module 121, the edge position determination module 122 and the deviation early warning module 123 is used for positioning the edge interval of the belt on line by using a block matching method to perform deviation analysis and early warning on the conveying belt. Specifically, the ROI image acquisition module 121 is configured to acquire a monitoring video frame of the conveyor belt on line, and acquire an image of interest to be detected of a current frame according to a preset background region; the edge position determining module 122 is configured to perform block matching on the current frame to-be-detected interest image and a belt background template image, and determine a position interval of a belt edge; the deviation early warning module 123 is used for monitoring the deviation of the conveying belt and carrying out belt deviation early warning according to the position interval of the belt edge.
It should be noted that the transportation belt deviation identification device 100 based on machine vision of the present invention is a virtual device corresponding to the transportation belt deviation identification method based on machine vision, and the functional modules in the transportation belt deviation identification device 100 based on machine vision respectively correspond to the corresponding steps in the transportation belt deviation identification method based on machine vision. The device 100 for identifying the deviation of the conveying belt based on the machine vision can be implemented by being matched with a method for identifying the deviation of the conveying belt based on the machine vision. The related technical details mentioned in the method for identifying the deviation of the conveyor belt based on the machine vision of the present invention are still valid in the device 100 for identifying the deviation of the conveyor belt based on the machine vision, and are not repeated herein in order to reduce the repetition. Accordingly, the related art details mentioned in the machine vision-based device 100 for identifying deviation of a conveyor belt according to the present invention can also be applied to the above-mentioned machine vision-based method for identifying deviation of a conveyor belt.
It should be noted that, when the above functional modules are actually implemented, all or part of them may be integrated into one physical entity, or they may be physically separated. These modules may all be implemented in the form of software invoked by a processing element, or entirely in hardware, or partly as software invoked by a processing element and partly as hardware. In addition, all or part of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In the implementation process, part or all of the steps of the method or of the above functional modules may be completed by hardware integrated logic circuits in a processor element, or by instructions in the form of software.
In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of embodiments of the invention.
It will also be appreciated that one or more of the elements shown in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed because it is not operational in certain circumstances or may be provided as useful in accordance with a particular application.
Additionally, any reference arrows in the drawings/figures should be considered only as exemplary, and not limiting, unless otherwise expressly specified. Further, as used herein, the term "or" is generally intended to mean "and/or" unless otherwise indicated. Combinations of components or steps will also be considered as noted where terminology makes the ability to separate or combine unclear.
The above description of illustrated embodiments of the invention, including what is described in the abstract of the specification, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the present invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the present invention in light of the foregoing description of illustrated embodiments of the present invention and are to be included within the spirit and scope of the present invention.
The systems and methods have been described herein in general terms as the details aid in understanding the invention. Furthermore, various specific details have been given to provide a general understanding of the embodiments of the invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, and/or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the invention.
Thus, although the present invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of the invention will be employed without a corresponding use of other features without departing from the scope and spirit of the invention as set forth. Thus, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the present invention. It is intended that the invention not be limited to the particular terms used in following claims and/or to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include any and all embodiments and equivalents falling within the scope of the appended claims. Accordingly, the scope of the invention is to be determined solely by the appended claims.
Claims (10)
1. A conveying belt deviation identification method based on machine vision is characterized by comprising the following steps:
acquiring a monitoring video frame of a conveyor belt on line, and acquiring a current frame to-be-detected interesting image according to a preset background area;
carrying out block matching on the current frame image to be detected and a belt background template picture to determine a position interval of the belt edge;
and monitoring the deviation of the conveying belt and carrying out belt deviation early warning according to the position interval of the belt edge.
2. The machine vision-based transportation belt deviation identification method as claimed in claim 1, wherein the online acquisition of the monitoring video frame of the transportation belt and the acquisition of the current frame of interest image to be detected according to the preset background area comprises:
acquiring monitoring video frame data of the conveyor belt on line, and extracting the monitoring original image of the current frame according to the preset background area to acquire an extracted image of the current frame;
and carrying out perspective transformation processing on the current frame extracted image so as to convert the current frame extracted image into an image with the same size as the belt background template picture, wherein the image is used as the current frame image to be detected.
3. The machine vision-based conveying belt deviation identification method as claimed in claim 1, wherein the step of performing block matching on the current frame image of interest to be detected and a belt background template image, and the step of determining the position interval of the belt edge comprises:
respectively dividing the current frame image to be detected and the belt background template map into a plurality of image blocks with the same number and size;
carrying out similarity matching on the current frame image to be detected and the belt background template image block by block to obtain a similarity matrix;
according to a preset similarity threshold value, binarizing the similarity matrix to obtain a belt positioning matrix;
traversing the belt positioning matrix row by row, recording the vertical coordinate of the image block with the difference appearing for the first time in each row of elements, wherein the corresponding image block comprises a section of belt edge, the vertical coordinate of the image block with the difference appearing for the first time in each row of elements jointly forms a belt edge positioning vector, and the determination of the position interval of the belt edge is completed through the belt edge positioning vector.
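The block-matching steps of claim 3 can be sketched as follows. This is an illustration under stated assumptions: per-block mean absolute difference is used as the (dis)similarity measure, the positioning matrix is scanned column by column to record the first differing block row, and the block size, threshold and function name are all illustrative rather than specified by the claim.

```python
import numpy as np

def belt_edge_vector(roi, template, block=8, diff_thresh=10.0):
    """Divide both images into equal blocks, compute a per-block mean
    absolute difference, binarize it into the belt positioning matrix,
    then record per block column the first row that differs from the
    background template -- that block contains a piece of belt edge."""
    rows, cols = roi.shape[0] // block, roi.shape[1] // block
    diff = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = roi[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            b = template[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
            diff[i, j] = np.mean(np.abs(a - b))
    positioning = diff > diff_thresh          # binarized belt positioning matrix
    v = np.full(cols, -1)                     # -1: no edge found in this column
    for j in range(cols):
        hits = np.flatnonzero(positioning[:, j])
        if hits.size:
            v[j] = hits[0]                    # first differing block row
    return v

# Synthetic example: background is 0, belt occupies the lower half
template = np.zeros((32, 32))
roi = np.zeros((32, 32))
roi[16:, :] = 100                             # belt starts at pixel row 16
print(belt_edge_vector(roi, template))        # [2 2 2 2]
```

The returned vector v is the belt edge positioning vector: entry j says the belt edge lies somewhere inside block row v[j] of column j, which is exactly the position interval (upper/lower boundary lines spaced one block apart) used by the subsequent deviation analysis.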
4. The machine vision-based transportation belt deviation identification method as claimed in claim 3, wherein the monitoring of the deviation amount of the transportation belt and the belt deviation early warning according to the position interval of the belt edge comprises:
calculating an interval upper boundary line or an interval lower boundary line of the belt edge in the current frame interesting image according to the belt edge positioning vector;
and monitoring the deviation of the conveying belt through the upper boundary line or the lower boundary line of the belt edge in the current frame image to be detected and carrying out belt deviation early warning.
5. The machine vision-based transportation belt deviation identification method as claimed in claim 4, wherein the monitoring of the deviation of the transportation belt and the belt deviation warning by the upper boundary line or the lower boundary line of the belt edge in the current frame image of interest comprises:
mapping the upper boundary or the lower boundary of the belt edge in the current frame to-be-detected interesting image back to the current frame monitoring original image through perspective inverse transformation to obtain the upper boundary or the lower boundary of the belt edge in the current frame monitoring original image;
calculating the offset of the upper boundary line or the lower boundary line of the belt edge in the current frame monitoring original image and a pre-marked belt edge reference line segment;
and comparing the offset with the belt deviation alarm threshold value to judge whether the transportation belt is deviated currently.
6. The machine vision-based transportation belt deviation identification method of claim 5, wherein the offset is compared with the belt deviation alarm threshold to determine whether the transportation belt is currently deviated, and whether the transportation belt is currently deviated is determined according to the following formula:
wherein Δd_l and Δd_r are the offsets of the interval lower boundary lines of the left and right edges of the belt in the current frame monitoring original image, t_d is the belt deviation alarm threshold value, normal indicates that the conveying belt runs normally without deviation, and alarm indicates that the conveying belt has deviated and a belt deviation alarm is required.
7. The machine-vision-based transportation belt deviation identification method according to claim 1, further comprising:
acquiring a standard belt monitoring image, wherein reference edge points on two sides of the conveying belt are marked in the standard belt monitoring image;
determining the preset background area outside the conveyor belt according to the reference edge points on the two sides of the conveyor belt;
and constructing the belt background template picture according to the preset background area.
8. The machine-vision-based transportation belt deviation identification method as claimed in claim 7, wherein said determining the preset background area outside the transportation belt according to the reference edge points at both sides of the transportation belt comprises:
setting a deviation detection range;
and determining the preset background area by taking the reference edge points on the two sides of the conveying belt as a reference and extending toward the outer side of the conveying belt along the abscissa direction, wherein the outward extension amount is determined by the deviation detection range and the transverse width of the conveying belt at the corresponding reference edge points.
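A minimal sketch of the background-region construction of claim 8, under an assumed reading: the deviation detection range is taken as a fraction of the belt's transverse width at the reference edge points, and that fraction of the width is the outward extension. The function name and this interpretation are assumptions, not the claim's exact formula.

```python
def background_strips(x_left, x_right, detection_ratio):
    """Given the abscissas of the two reference edge points and a
    deviation detection range expressed as a fraction of belt width,
    return the x-extents of the background strips outside each edge."""
    belt_width = x_right - x_left
    ext = detection_ratio * belt_width
    left_strip = (x_left - ext, x_left)       # extends outward to the left
    right_strip = (x_right, x_right + ext)    # extends outward to the right
    return left_strip, right_strip

# Belt edges at x = 100 and x = 300, detection range 10% of belt width
print(background_strips(100, 300, 0.1))  # ((80.0, 100), (300, 320.0))
```

Sizing the strips relative to the local belt width keeps the detectable deviation range consistent along the belt even when perspective makes the belt appear narrower farther from the camera.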
9. The machine-vision-based conveyor belt deviation identification method of claim 7, wherein the building the belt background template map according to the preset background area comprises:
performing image extraction on the standard belt monitoring image according to the preset background area to obtain a standard extracted image;
and carrying out perspective transformation processing on the standard extracted image so as to convert the standard extracted image into the belt background template drawing.
10. A conveying belt deviation identification device based on machine vision, characterized by comprising:
the ROI image acquisition module is used for acquiring a monitoring video frame of the conveyor belt on line and acquiring an image of interest to be detected of a current frame according to a preset background area;
the edge position determining module is used for carrying out block matching on the current frame image to be detected and the belt background template image to determine a position interval of the belt edge;
and the deviation early warning module is used for monitoring the deviation of the conveying belt and carrying out belt deviation early warning according to the position interval of the belt edge.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210489250.0A CN114581447B (en) | 2022-05-07 | 2022-05-07 | Conveying belt deviation identification method and device based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114581447A true CN114581447A (en) | 2022-06-03 |
CN114581447B CN114581447B (en) | 2022-08-05 |
Family
ID=81767707
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210489250.0A Active CN114581447B (en) | 2022-05-07 | 2022-05-07 | Conveying belt deviation identification method and device based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114581447B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117495858A (en) * | 2023-12-29 | 2024-02-02 | 合肥金星智控科技股份有限公司 | Belt offset detection method, system, equipment and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6453069B1 (en) * | 1996-11-20 | 2002-09-17 | Canon Kabushiki Kaisha | Method of extracting image from input image using reference image |
US20150063709A1 (en) * | 2013-08-29 | 2015-03-05 | Disney Enterprises, Inc. | Methods and systems of detecting object boundaries |
CN104636706A (en) * | 2015-03-04 | 2015-05-20 | 深圳市金准生物医学工程有限公司 | Complicated background bar code image automatic partitioning method based on gradient direction consistency |
CN110697373A (en) * | 2019-07-31 | 2020-01-17 | 湖北凯瑞知行智能装备有限公司 | Conveying belt deviation fault detection method based on image recognition technology |
CN113112485A (en) * | 2021-04-20 | 2021-07-13 | 中冶赛迪重庆信息技术有限公司 | Belt conveyor deviation detection method, system, equipment and medium based on image processing |
Non-Patent Citations (2)
Title |
---|
YI LIU, ET AL.: "Research on Deviation Detection of Belt Conveyor Based on Inspection Robot and Deep Learning", Complexity *
YANG, Minghua, et al.: "Coal conveying belt deviation detection controller based on machine vision and DSP technology", Electronic World *
Also Published As
Publication number | Publication date |
---|---|
CN114581447B (en) | 2022-08-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||