CN111222510A - Trolley grate bar image shooting method and system of sintering machine - Google Patents

Trolley grate bar image shooting method and system of sintering machine

Info

Publication number
CN111222510A
CN111222510A (application CN202010177626.5A)
Authority
CN
China
Prior art keywords
image
contour
grate bar
sintering machine
image pickup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010177626.5A
Other languages
Chinese (zh)
Other versions
CN111222510B (en)
Inventor
李宗平
廖婷婷
李曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongye Changtian International Engineering Co Ltd
Original Assignee
Zhongye Changtian International Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongye Changtian International Engineering Co Ltd filed Critical Zhongye Changtian International Engineering Co Ltd
Priority to CN202010177626.5A priority Critical patent/CN111222510B/en
Publication of CN111222510A publication Critical patent/CN111222510A/en
Application granted granted Critical
Publication of CN111222510B publication Critical patent/CN111222510B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27FURNACES; KILNS; OVENS; RETORTS
    • F27BFURNACES, KILNS, OVENS, OR RETORTS IN GENERAL; OPEN SINTERING OR LIKE APPARATUS
    • F27B21/00Open or uncovered sintering apparatus; Other heat-treatment apparatus of like construction
    • F27B21/02Sintering grates or tables
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27FURNACES; KILNS; OVENS; RETORTS
    • F27DDETAILS OR ACCESSORIES OF FURNACES, KILNS, OVENS, OR RETORTS, IN SO FAR AS THEY ARE OF KINDS OCCURRING IN MORE THAN ONE KIND OF FURNACE
    • F27D21/00Arrangements of monitoring devices; Arrangements of safety devices
    • F27D21/02Observation or illuminating devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06T3/18
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27FURNACES; KILNS; OVENS; RETORTS
    • F27DDETAILS OR ACCESSORIES OF FURNACES, KILNS, OVENS, OR RETORTS, IN SO FAR AS THEY ARE OF KINDS OCCURRING IN MORE THAN ONE KIND OF FURNACE
    • F27D21/00Arrangements of monitoring devices; Arrangements of safety devices
    • F27D2021/0057Security or safety devices, e.g. for protection against heat, noise, pollution or too much duress; Ergonomic aspects
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27FURNACES; KILNS; OVENS; RETORTS
    • F27MINDEXING SCHEME RELATING TO ASPECTS OF THE CHARGES OR FURNACES, KILNS, OVENS OR RETORTS
    • F27M2003/00Type of treatment of the charge
    • F27M2003/04Sintering

Abstract

The application discloses a trolley grate bar image shooting method for a sintering machine, which comprises the following steps: performing ROI extraction on the collected grate bar image to obtain the image after ROI extraction; dividing the image after ROI extraction into as many parts as there are rows of grate bars; extracting the outer contour of each segmented row image; judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour, and when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualified contours in each sub-region. This design makes it possible to extract valid grate bar images, so that stitching of the global grate bar image can be completed conveniently.

Description

Trolley grate bar image shooting method and system of sintering machine
Technical Field
The application relates to the technical field of sintering machines, in particular to a trolley grate bar image shooting method of a sintering machine. In addition, the application also relates to a trolley grate bar image shooting system of the sintering machine.
Background
Sintering is the process of mixing various powdered iron-bearing raw materials with suitable amounts of fuel, flux and water, pelletizing the mixture, and sintering it so that physical and chemical changes bind the ore powder grains into blocks. The sintering operation is the central link of sintering production and comprises the main processes of material distribution, ignition and sintering; the key equipment of the sintering operation is the sintering machine. Referring to fig. 1, fig. 1 is a schematic structural diagram of a sintering machine in the prior art.
As shown in fig. 1, the sintering machine includes a trolley 101, a hearth layer material bin 102, a sintering mixture bin 103, an ignition furnace 104, a head star wheel 105, a tail star wheel 106, a sinter breaker 107, wind boxes 108, an exhaust fan 109, and the like. The belt sintering machine is a piece of sintering machinery driven by the head and tail star wheels, with trolleys carrying the mixture and with ignition and air-draft devices. The trolleys run continuously end to end on a closed track; in fig. 1, for example, they cover the tracks on both the upper and lower layers, and one sintering machine comprises hundreds of trolleys. After the iron-bearing mixture is fed onto a trolley by the feeding device, the ignition device ignites the surface material; a series of wind boxes is arranged below the trolley bottom, connected at one end to a large exhaust fan, and under the induced draft the material in the trolley burns gradually from the surface down to the trolley bottom.
Grate bars are laid on the trolley. As important components of the trolley, the grate bars of a sintering machine can cause material leakage, poor air permeability and similar problems once they are damaged, so their state directly affects the normal operation of sintering production and the sintering quality. The grate bars are fixed on the trolley beams and serve to bear the material and ensure the air permeability of the sintering reaction. Because the sintering trolley runs continuously around the clock, under the combined action of ore weight, air-draft negative pressure and repeated high temperature the grate bars are easily damaged. The adverse effects of damaged grate bars are as follows:
1) Missing grate bars. After a grate bar breaks and falls off, the gap width in the affected row increases; when the gap becomes too large, the sintering mixture can fall through it into the flue, forming a "mouse hole" in the material surface.
2) Inclined grate bars. Grate bar wear and loss affect the inclination of the remaining bars; when a grate bar tilts excessively, it can no longer be clamped on the trolley body, which leads to large-area falling.
3) Blocked grate bar gaps. Sintering mineral aggregate gets stuck in the gaps between grate bars; large-area blockage makes the air permeability of the sintering reaction poor and thereby degrades the sinter quality.
In addition, because two cameras are provided, the images need to be stitched, and obtaining valid images that contain most of the grate bars before stitching is an urgent problem to be solved.
Disclosure of Invention
The technical problem to be solved by the application is to provide a trolley grate bar image shooting method for a sintering machine whose design makes it possible to extract valid grate bar images, so that stitching of the global grate bar image can be completed very conveniently. In addition, the application also provides a trolley grate bar image capturing system for a sintering machine.
In order to solve the technical problem, the present application provides an image capturing method for the trolley grate bars of a sintering machine, the image capturing method comprising the following steps:
performing ROI extraction on the collected grate bar image to obtain the image after ROI extraction;
dividing the image after ROI extraction into as many parts as there are rows of grate bars;
extracting the outer contour of each segmented row image;
judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour; when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualified contours in each sub-region.
Optionally, the image capturing method further includes the following steps:
the average of the contour counts of the rows is obtained by the following formula:

$$\bar{X} = \frac{1}{3}\sum_{i=1}^{3} X_i$$

where $X_i$ denotes the number of qualified contours in row $i$.
optionally, the image capturing method further includes the following steps:
the mean square error (variance) of the row contour counts is obtained by the following formula:

$$S^2 = \frac{1}{3}\sum_{i=1}^{3}\left(X_i - \bar{X}\right)^2$$
optionally, the image capturing method further includes:
extracting four corner points of the grate bar by a deep learning algorithm with rough positioning on the panoramic image, and performing a perspective transformation based on the corner points so as to flatten the image.
Optionally, the image capturing method further includes:
training a deep learning network to obtain a deep network model:
the training samples are calibrated manually so that the corner point lies at the center of the prediction frame, and the coordinate values of the corner point are obtained according to the size of the prediction frame.
Optionally, the image capturing method further includes:
during testing, the four corner images roughly located in the panoramic image are input into the trained deep network model to obtain the coordinates (X, Y) of each corner point in its prediction frame, and the coordinate values of the corner points in the original image are obtained according to the following conversion formulas:
Top left corner C_lt: (X + mg, Y)
Lower left corner C_lb: (X + mg, H - sq + Y)
Upper right corner C_rt: (W - mg - sq + X, Y)
Lower right corner C_rb: (W - mg - sq + X, H - sq + Y)
Where H denotes the height of the original image, W denotes its width, mg denotes the distance of the target regions from the left and right edges, and sq denotes the side length of the square target regions.
Optionally, the image capturing method further includes:
based on the coordinate values of the four corner points in the original image, the image is flattened by adopting the following four-point perspective transformation, wherein the perspective transformation formula is as follows:
$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = A \begin{bmatrix} u \\ v \\ w \end{bmatrix}, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where $A$ is the perspective transformation matrix, $(u, v, w)$ is the known point to be moved, and $(x', y', z')$ is the converted target point.
Optionally, the image capturing method further includes:
with the coordinate values of the four corner points known, the perspective transformation matrix $A$ is obtained by solving the four point correspondences

$$\begin{bmatrix} x'_i \\ y'_i \\ z'_i \end{bmatrix} = A \begin{bmatrix} u_i \\ v_i \\ w_i \end{bmatrix}, \quad i = 1, 2, 3, 4$$

for its entries.
in addition, in order to solve the above technical problem, the present application further provides an image capturing system for a pallet grate bar of a sintering machine, the image capturing system comprising:
the ROI extraction unit is used for carrying out ROI extraction on the collected grate bar image to obtain an image after the ROI extraction;
the segmentation unit is used for dividing the image after ROI extraction into as many parts as there are rows of grate bars;
the outer contour extraction unit is used for extracting the outer contour of each segmented row image;
the judging unit is used for judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour; when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; the judging unit also counts the number of qualified contours in each sub-region.
Optionally, the image capturing system further includes:
the deep learning transformation unit is used for extracting the four corner points of the grate bar by a deep learning algorithm with rough positioning on the panoramic image, and for performing a perspective transformation based on the corner points so as to flatten the image;
the deep learning transformation unit includes:
the deep network model subunit is used for training a deep learning network to obtain a deep network model: the training samples are calibrated manually so that the corner point lies at the center of the prediction frame, and the coordinate values of the corner point are obtained according to the size of the prediction frame.
In an embodiment of the present application, an image capturing method for the trolley grate bars of a sintering machine is provided, the method comprising the following steps: performing ROI extraction on the collected grate bar image to obtain the image after ROI extraction; dividing the image after ROI extraction into as many parts as there are rows of grate bars; extracting the outer contour of each segmented row image; judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour, and when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualified contours in each sub-region.
This design makes it possible to extract valid grate bar images, so that stitching of the global grate bar image can be completed conveniently.
Drawings
FIG. 1 is a schematic structural diagram of a sintering machine in the prior art;
FIG. 2 is a functional block diagram of a method for capturing images of grate bars of a sintering machine according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a portion of the structure of the sintering machine of the present application;
FIG. 3-1 is a logic flow diagram of a method for capturing images of pallet grate bars of a sintering machine according to an embodiment of the present application;
FIG. 4 is a comparison of an invalid ROI image and a valid ROI image acquired by the grate bar image capturing apparatus;
FIG. 5 is a schematic diagram of a spliced global picture according to an embodiment of the present application;
FIG. 6 is a flow chart of corner location in an embodiment of the present application;
FIG. 7 is a schematic view of the image location of a corner point on the basis of FIG. 5;
FIG. 8 is a schematic diagram of a cropped image;
FIG. 9 is a schematic view of a manual calibration mode;
FIG. 10 is a diagram illustrating corner detection results;
fig. 11 is a schematic diagram of comparison images before and after correction of corner points of the trolley.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in the present specification and claims and in the above figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, with the order of the operations being indicated as 101, 102, etc. merely to distinguish between the various operations, and the order of the operations by themselves does not represent any order of performance. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 2, fig. 2 is a functional block diagram of an image capturing method for a pallet grate of a sintering machine according to an embodiment of the present invention.
As shown in fig. 2, the functional modules include an image acquisition device, data and model storage, image acquisition, parameter output, characteristic parameter calculation, an intelligent diagnosis model, state output, and the like. The image acquisition device preprocesses the acquired images and stores them in the data and model storage module. The data and model storage module outputs the grate bar images to the image acquisition module and the characteristic parameters to the parameter output module. The parameters of the characteristic parameter calculation model are also stored in the data and model storage module.
Specifically, reference may be made to fig. 3, and fig. 3 is a schematic view of a part of a structure of the sintering machine in the present application.
(1) Image acquisition device
The invention installs a set of image acquisition devices at the upper maintenance platform of the machine head; the structure is shown in fig. 3. The device comprises a camera 201, a light source 202 and a mounting bracket, and is used for acquiring images of the grate bars on a trolley 203. One or more suitable cameras are selected for acquisition according to the field-of-view size, the lens parameters, the camera parameters and the like. Fig. 3 shows an example in which two cameras acquire grate bar images synchronously.
With the device mounted at this position, the captured images divide into valid and invalid images, as follows. Referring to fig. 4, fig. 4 compares an invalid ROI image and a valid ROI image obtained by the grate bar image capturing apparatus of fig. 3.
Images useful for identifying trolley grate bar faults must be picked out: a valid image is one in which all three rows of grate bars appear in the camera's field of view, so the video images collected by the camera need to be analyzed on line.
First the region of interest (ROI) is extracted, i.e. the approximate region of the frame in which the three rows of grate bars at the trolley bottom appear completely. ROI extraction reduces the interference of objects outside the grate bar area with the algorithm and lowers the processing difficulty. The processing results for an invalid image and a valid image are shown in fig. 4, where the left image is the invalid ROI image and the right image is the valid ROI image.
The ROI of a valid image contains three rows of grate bars, whereas in an invalid image only part of the area contains grate bars and the remainder contains the trolley body and the like. The trolley body has fewer contour features and fewer contours than the grate bars, so grate bar images can be screened by contour count. The ROI is therefore divided into three sub-regions, the outer contours of each sub-region are extracted by an algorithm, and the number of outer contours in each row is counted; if the count exceeds a threshold, the image is judged valid, and if it is below the threshold, the image is judged invalid.
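As an illustration only, the screening described above might be prototyped as in the following Python/OpenCV sketch. The ROI bounds and the Otsu binarization are assumptions for the example; the patent does not fix either.

```python
import cv2

# Hypothetical ROI bounds; in practice they are chosen so that the three
# rows of grate bars at the trolley bottom fall entirely inside the ROI.
ROI_X, ROI_Y, ROI_W, ROI_H = 100, 200, 1200, 600

def extract_row_images(frame):
    """Crop the ROI and split it into three equal sub-regions (rows)."""
    roi = frame[ROI_Y:ROI_Y + ROI_H, ROI_X:ROI_X + ROI_W]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Otsu binarization is one plausible preprocessing step before contour
    # extraction; the patent does not specify the exact method.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    step = ROI_H // 3
    return [binary[i * step:(i + 1) * step] for i in range(3)]
```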
After an image is judged valid, the left and right camera images at that moment are transmitted to the image stitching module, and the left and right images are stitched using feature-based image stitching algorithms such as SIFT, SURF or FAST to obtain a global image; refer to fig. 5 for a global image stitched according to an embodiment of the present application.
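As a sketch under stated assumptions: the stitching step could be prototyped with OpenCV's high-level stitcher, whose internal feature-matching pipeline is comparable to the SIFT/SURF-based stitching named above; the SCANS mode is an assumed choice for a near-planar trolley bottom, not the method fixed by the patent.

```python
import cv2

def stitch_pair(left_img, right_img):
    """Stitch the synchronized left/right camera frames into one global image."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # near-planar scene
    status, panorama = stitcher.stitch([left_img, right_img])
    if status != cv2.Stitcher_OK:
        raise RuntimeError("stitching failed with status %d" % status)
    return panorama
```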
When the device acquires images, the acquisition position varies, so the trolley bottom cannot be guaranteed to be parallel to the lens plane of the camera; that is, the grate bars on one side of the trolley are closer to the camera than those on the other side, and the imaging therefore has a certain distortion. To correct this distortion, the system extracts the four corner points of the grate bar region with a deep learning algorithm and performs a perspective transformation based on these corner points, so that the image is flattened. Please refer to fig. 6 for the corner positioning process; fig. 6 is a flowchart of corner positioning in an embodiment of the present application.
As shown in fig. 6, the corner positioning process comprises: panoramic images for training, image cropping, corner coordinate calibration, deep network training, the resulting deep network model, corner coordinates in the cropped images, and corner coordinates in the panoramic image. On this basis, a real-time panoramic image is obtained and cropped, the corner coordinates in the cropped images are found by the deep network model, and the corner coordinates in the panoramic image are then derived.
Due to the complexity of the environment and the uncertainty of the viewing angle, traditional methods that directly search for contours and feature points can hardly achieve accurate positioning, so corner positioning adopts a deep learning algorithm.
First the image is preprocessed. Corner positioning mainly concerns the four corner areas at the trolley bottom, so in order to reduce redundant data and increase the training speed, the image is first cropped to obtain the corner images, such as the four regions shown in fig. 7: C_lt, C_lb, C_rt, C_rb. Fig. 7 is a schematic diagram of the corner image positions on the basis of fig. 5.
The extracted target images containing the corner points all have the same size and the same distances from the edges of the original image; as shown in fig. 7, mg denotes the distance from a target region to the left or right edge, and sq denotes the side length of a target region's square frame. The resulting cropped pictures are shown in fig. 8, a schematic diagram of the cropped images.
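A minimal sketch of this cropping step, using the mg and sq quantities defined above and the region layout of fig. 7 (the function name is illustrative):

```python
def crop_corner_regions(panorama, mg, sq):
    """Cut out the four square corner regions C_lt, C_lb, C_rt, C_rb.

    mg: distance of a region from the left/right image edge.
    sq: side length of each square region.
    """
    H, W = panorama.shape[:2]  # image height and width
    return {
        "C_lt": panorama[0:sq,     mg:mg + sq],
        "C_lb": panorama[H - sq:H, mg:mg + sq],
        "C_rt": panorama[0:sq,     W - mg - sq:W - mg],
        "C_rb": panorama[H - sq:H, W - mg - sq:W - mg],
    }
```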
The cropped images are then calibrated by marking each corner point with a square prediction frame (bbox) of fixed side length, the corner point being required to lie as close to the center of the bbox as possible, as shown in fig. 9, a schematic diagram of the manual calibration.
The coordinate value of the center point of the bbox in the cropped target image is used as the coordinate value of the corner point in the cropped image. For example: with the coordinates of the prediction frame in the target region known as (X, Y), the coordinates in the target region are converted to the real coordinates in the original image as follows:
Top left corner C_lt: (X + mg, Y)
Lower left corner C_lb: (X + mg, H - sq + Y)
Upper right corner C_rt: (W - mg - sq + X, Y)
Lower right corner C_rb: (W - mg - sq + X, H - sq + Y)
Wherein H represents the height of the original image, W represents its width, mg the distance of the target regions from the left and right edges, and sq the side length of the target regions.
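The four conversion formulas can be collected into one helper; this is a direct transcription of the formulas above, with an illustrative function name:

```python
def to_panorama_coords(corner, X, Y, W, H, mg, sq):
    """Map a corner's (X, Y) inside its cropped region back to the original image.

    corner is one of "C_lt", "C_lb", "C_rt", "C_rb"; W, H, mg and sq are
    as defined in the text.
    """
    offsets = {
        "C_lt": (mg,          0),       # (X + mg, Y)
        "C_lb": (mg,          H - sq),  # (X + mg, H - sq + Y)
        "C_rt": (W - mg - sq, 0),       # (W - mg - sq + X, Y)
        "C_rb": (W - mg - sq, H - sq),  # (W - mg - sq + X, H - sq + Y)
    }
    dx, dy = offsets[corner]
    return (X + dx, Y + dy)
```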
after a large number of calibrated images are obtained, a proper depth network model for target detection is selected, the calibrated images are used as the input of the model, and the network model for corner detection is obtained through training.
The acquired real-time panoramic image is preprocessed in the same way as the training images to obtain the four target regions; the four region images are input into the trained deep model, and the deep network detects the positioning frames of the corner points, as shown in fig. 10, a schematic diagram of the corner detection results. After the coordinates of a corner positioning frame are obtained, the center coordinates (X, Y) of the frame (i.e. the coordinates representing the corner point) follow from the frame size, and the coordinate values of the four corner points to be positioned are obtained with the coordinate transformation formulas from the target region to the original image.
After the four corner points of the grate bar region have been obtained, the image is flattened by a four-point perspective transformation. The perspective transformation formula is:

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = A \begin{bmatrix} u \\ v \\ w \end{bmatrix}, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where $A$ is the perspective transformation matrix, $(u, v, w)$ is the known point to be moved, and $(x', y', z')$ is the converted target point.
With the four corner points known, a system of linear equations is solved to calculate the perspective transformation matrix, and the image conversion is realized with this transformation matrix.
It should be noted that the "known point to be moved" above is explained as follows:
Initially the perspective transformation matrix A is unknown. The matrix A can be calculated (by solving a system of linear equations) from the four corner coordinates in the original image and the four corresponding coordinates in the space to be mapped to; all points of the original image are then transformed through the perspective matrix A to obtain the target image (i.e. the perspective-corrected image). The above is the general perspective transformation formulation; in an implementation, the transformation function of OpenCV can be adopted: the four detected points are passed to a function that calculates the transformation matrix, and the mapped image is then computed from that matrix.
As for the "target point" mentioned above, all points of the corrected image obtained through the transformation matrix are target points.
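Following the OpenCV route mentioned above, a sketch of the four-point correction; the output rectangle size is an assumed design parameter, not a value from the patent:

```python
import cv2
import numpy as np

def flatten_image(panorama, corners, out_w=1600, out_h=400):
    """Warp the panorama so the four detected corners map to a rectangle.

    corners: the four located corner points ordered [lt, rt, lb, rb].
    """
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [0, out_h], [out_w, out_h]])
    A = cv2.getPerspectiveTransform(src, dst)  # solves for the matrix A
    return cv2.warpPerspective(panorama, A, (out_w, out_h))
```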
The images before and after corner correction are compared in fig. 11, a schematic diagram of the trolley images before and after corner point correction.
The above introduces the technical solution in its application scene. The specific technical solution of the present application is further described as follows.
Referring to fig. 3-1, fig. 3-1 is a logic flow diagram of a method for capturing an image of a grate bar of a pallet of a sintering machine according to an embodiment of the present application.
In one embodiment, as shown in fig. 3-1, an image capturing method for the trolley grate bars of a sintering machine includes the following steps:
Step S01: performing ROI extraction on the collected grate bar image to obtain the image after ROI extraction;
Step S02: dividing the image after ROI extraction into as many parts as there are rows of grate bars;
Step S03: extracting the outer contour of each segmented row image;
Step S04: judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour; when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualified contours in each sub-region.
This design makes it possible to extract valid grate bar images, so that stitching of the global grate bar image can be completed conveniently.
In the above-described embodiment, a further improved design can be made. For example, the image capturing method further includes the steps of:
the average of the contour counts of the rows is obtained by the following formula:

$$\bar{X} = \frac{1}{3}\sum_{i=1}^{3} X_i$$

where $X_i$ denotes the number of qualified contours in row $i$.
the image pickup method further includes the steps of:
the mean square error (variance) of the row contour counts is obtained by the following formula:

$$S^2 = \frac{1}{3}\sum_{i=1}^{3}\left(X_i - \bar{X}\right)^2$$
it should be noted that the ROI area is subdivided into three upper, middle and lower portions, and when the ROI area is valid, the three upper, middle and lower portions are three rows of grates of the trolley, respectively, and when the ROI area is invalid, one of the three portions may be the trolley body. The trolley body and the grate bar have different texture structures, so that the outer contours of the three parts can be extracted to obtain the contours of three areas, namely upContours, midContours and downContours, and all the contours detected in the area are stored in each variable. The threshold contoursSize of the size of the outer contour of the grate bar is set.
if len(upContours[i]) >= contoursSize: upNum = upNum + 1
if len(midContours[i]) >= contoursSize: midNum = midNum + 1
if len(downContours[i]) >= contoursSize: downNum = downNum + 1
The initial values of upNum, midNum and downNum are 0; they count the number of contours meeting the condition in each of the three rows of grate bars. Through this judgment condition, small edge fragments are screened out using the prior knowledge of the grate bar size.
It should be noted that an invalid image is one in which the trolley bottom has not completely entered the camera's field of view, so the image includes the trolley body or other areas, which carry little texture; a valid image is one in which the trolley bottom has completely entered the field of view, so that all three rows of grate bars at the trolley bottom appear in the image and the amount of texture is large. This patent uses exactly this principle to obtain valid images.
In addition, in order to eliminate the interference of small noise contours with the grate bar count, all contours are screened: if the number of edge points forming a contour is less than the threshold contoursSize, the contour is eliminated. Finally, the total numbers of contours remaining in each region, upNum, midNum and downNum, are counted.
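Put together, the screening and counting step might read as follows; the binarized row images come from the ROI split sketched earlier, and contour length is used as the size measure, matching the edge-point criterion above:

```python
import cv2

def count_valid_contours(row_binary, contoursSize):
    """Count outer contours whose edge-point count reaches the grate bar prior."""
    contours, _ = cv2.findContours(row_binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # len(c) is the number of edge points forming contour c.
    return sum(1 for c in contours if len(c) >= contoursSize)

# upNum   = count_valid_contours(up_binary,   contoursSize)
# midNum  = count_valid_contours(mid_binary,  contoursSize)
# downNum = count_valid_contours(down_binary, contoursSize)
```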
For a valid image, the detected counts of the three rows of grate bars are close to one another. The mean and variance of the three counts are calculated:

$$\bar{X} = \frac{upNum + midNum + downNum}{3}$$

$$S^2 = \frac{1}{3}\left[(upNum - \bar{X})^2 + (midNum - \bar{X})^2 + (downNum - \bar{X})^2\right]$$

where $\bar{X}$ represents the average of the number of contours in the three rows and $S^2$ the variance of the number of contours in the three rows. Whether the current image is a valid image is judged from the number of contours in the three rows and the degree of dispersion of these numbers.
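The validity decision can then be written as a two-sided test on the counts; min_mean and max_var are assumed tuning thresholds, not values given in the patent:

```python
def is_valid_image(upNum, midNum, downNum, min_mean, max_var):
    """Accept the frame when the three row counts are high and close together."""
    counts = [upNum, midNum, downNum]
    mean = sum(counts) / 3.0
    s2 = sum((c - mean) ** 2 for c in counts) / 3.0  # population variance
    return mean >= min_mean and s2 <= max_var
```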
The two cameras have the same acquisition frequency. Once the image from the camera on one side is analyzed to meet the valid-image standard, the images from the cameras on both sides are uploaded simultaneously to the system for stitching, so that the complete grate bar image of the whole trolley bottom is obtained.
In the above embodiments, further designs may be made.
For example, the image capturing method further includes:
extracting four corner points of the grate bar by a deep learning algorithm with rough positioning on the panoramic image, and performing a perspective transformation based on the corner points so as to flatten the image. The image pickup method further includes:
training a deep learning network to obtain a deep network model:
the training samples are calibrated manually so that the corner point lies at the center of the prediction frame, and the coordinate values of the corner point are obtained according to the size of the prediction frame.
Further, the image pickup method further includes:
during testing, the four corner images roughly located in the panoramic image are input into the trained deep network model to obtain the coordinates (X, Y) of each corner point in its prediction frame, and the coordinate values of the corner points in the original image are obtained according to the following conversion formulas:
Top left corner C_lt: (X + mg, Y)
Lower left corner C_lb: (X + mg, H - sq + Y)
Upper right corner C_rt: (W - mg - sq + X, Y)
Lower right corner C_rb: (W - mg - sq + X, H - sq + Y)
Where H denotes the height of the original image, W denotes its width, mg denotes the distance of the target regions from the left and right edges, and sq denotes the side length of the square target regions.
Further, the image pickup method further includes:
based on the coordinate values of the four corner points in the original image, the image is flattened by adopting the following four-point perspective transformation, wherein the perspective transformation formula is as follows:
$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = A \begin{bmatrix} u \\ v \\ w \end{bmatrix}, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where $A$ is the perspective transformation matrix, $(u, v, w)$ is the known point to be moved, and $(x', y', z')$ is the converted target point.
The image pickup method further includes:
with the coordinate values of the four corner points known, the perspective transformation matrix $A$ is obtained by solving the four point correspondences

$$\begin{bmatrix} x'_i \\ y'_i \\ z'_i \end{bmatrix} = A \begin{bmatrix} u_i \\ v_i \\ w_i \end{bmatrix}, \quad i = 1, 2, 3, 4$$

for its entries.
Corresponding to the method embodiment, the application also provides a device embodiment.
In one embodiment, an image capturing system for the trolley grate bars of a sintering machine includes:
the ROI extraction unit is used for carrying out ROI extraction on the collected grate bar image to obtain an image after the ROI extraction;
the segmentation unit is used for dividing the image after ROI extraction into as many parts as there are rows of grate bars;
the outer contour extraction unit is used for extracting the outer contour of each segmented row image;
the judging unit is used for judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour; when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; the judging unit also counts the number of qualified contours in each sub-region.
Further, the image pickup system further includes:
the deep learning transformation unit is used for extracting the four corner points of the grate bar by a deep learning algorithm with rough positioning on the panoramic image, and for performing a perspective transformation based on the corner points so as to flatten the image;
the deep learning transformation unit includes:
the deep network model subunit is used for training a deep learning network to obtain a deep network model: the training samples are calibrated manually so that the corner point lies at the center of the prediction frame, and the coordinate values of the corner point are obtained according to the size of the prediction frame.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, component, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, components, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, without limitation, a particular feature, component, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, component, or characteristic of one or more other embodiments. Such modifications and variations are intended to be included within the scope of the present application.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as "data block," module, "" engine, "" terminal, "" component, "or" system. Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present application and are presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image pickup method of a pallet grate bar of a sintering machine, characterized by comprising the steps of:
carrying out ROI extraction on the collected grate bar image to obtain an image after ROI extraction;
dividing the image after ROI extraction into as many parts as there are rows of grate bars;
extracting the outer contour of each segmented row image;
judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour; when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualified contours in each sub-region.
2. The image pickup method of a pallet grate bar of a sintering machine according to claim 1, wherein the image pickup method further comprises the steps of:
the average of the contour counts of the rows is obtained by the following formula:

$$\bar{X} = \frac{1}{3}\sum_{i=1}^{3} X_i$$

where $X_i$ denotes the number of qualified contours in row $i$.
3. the image pickup method of a pallet grate bar of a sintering machine according to claim 2, wherein the image pickup method further comprises the steps of:
the mean square error (variance) of the row contour counts is obtained by the following formula:

$$S^2 = \frac{1}{3}\sum_{i=1}^{3}\left(X_i - \bar{X}\right)^2$$
4. the trolley grate bar image pickup method of a sintering machine according to any one of claims 1 to 3, characterized by further comprising:
extracting four corner points of the grate bar by a deep learning algorithm with rough positioning on the panoramic image, and performing a perspective transformation based on the corner points so as to flatten the image.
5. The image pickup method of a pallet grate bar of a sintering machine according to claim 4, wherein the image pickup method further comprises:
training a deep learning network to obtain a deep network model:
the training samples are calibrated manually so that the corner point lies at the center of the prediction frame, and the coordinate values of the corner point are obtained according to the size of the prediction frame.
6. The image pickup method of a pallet grate bar of a sintering machine according to claim 5, wherein the image pickup method further comprises:
during testing, the four corner images roughly located in the panoramic image are input into the trained deep network model to obtain the coordinates (X, Y) of each corner point in its prediction frame, and the coordinate values of the corner points in the original image are obtained according to the following conversion formulas:
Top left corner C_lt: (X + mg, Y)
Lower left corner C_lb: (X + mg, H - sq + Y)
Upper right corner C_rt: (W - mg - sq + X, Y)
Lower right corner C_rb: (W - mg - sq + X, H - sq + Y)
Where H denotes the height of the original image, W denotes its width, mg denotes the distance of the target regions from the left and right edges, and sq denotes the side length of the square target regions.
7. The image pickup method of a pallet grate bar of a sintering machine according to claim 6, further comprising:
based on the coordinate values of the four corner points in the original image, the image is flattened by adopting the following four-point perspective transformation, wherein the perspective transformation formula is as follows:
$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = A \begin{bmatrix} u \\ v \\ w \end{bmatrix}, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

where $A$ is the perspective transformation matrix, $(u, v, w)$ is the known point to be moved, and $(x', y', z')$ is the converted target point.
8. The image pickup method of a pallet grate bar of a sintering machine according to claim 7, further comprising:
with the coordinate values of the four corner points known, the perspective transformation matrix $A$ is obtained by solving the four point correspondences

$$\begin{bmatrix} x'_i \\ y'_i \\ z'_i \end{bmatrix} = A \begin{bmatrix} u_i \\ v_i \\ w_i \end{bmatrix}, \quad i = 1, 2, 3, 4$$

for its entries.
9. an image pickup system for a pallet grate bar of a sintering machine, comprising:
the ROI extraction unit is used for carrying out ROI extraction on the collected grate bar image to obtain an image after the ROI extraction;
the segmentation unit is used for dividing the image after ROI extraction into as many parts as there are rows of grate bars;
the outer contour extraction unit is used for extracting the outer contour of each segmented row image;
the judging unit is used for judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be a grate bar contour; when the number is less than the set threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; the judging unit also counts the number of qualified contours in each sub-region.
10. The trolley grate bar image pickup system of a sintering machine according to claim 9, further comprising:
the deep learning transformation unit is used for extracting the four corner points of the grate bar by a deep learning algorithm with rough positioning on the panoramic image, and for performing a perspective transformation based on the corner points so as to flatten the image;
the deep learning transformation unit includes:
the deep network model subunit is used for training a deep learning network to obtain a deep network model: the training samples are calibrated manually so that the corner point lies at the center of the prediction frame, and the coordinate values of the corner point are obtained according to the size of the prediction frame.
CN202010177626.5A 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine Active CN111222510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010177626.5A CN111222510B (en) 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010177626.5A CN111222510B (en) 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine

Publications (2)

Publication Number Publication Date
CN111222510A (en) 2020-06-02
CN111222510B CN111222510B (en) 2024-03-15

Family

ID=70807718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010177626.5A Active CN111222510B (en) 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine

Country Status (1)

Country Link
CN (1) CN111222510B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4631750A (en) * 1980-04-11 1986-12-23 Ampex Corporation Method and system for spacially transforming images
JP2006311020A (en) * 2005-04-27 2006-11-09 Fuji Xerox Co Ltd Image output method and image output apparatus
CN200979373Y (en) * 2006-12-08 2007-11-21 唐山钢铁股份有限公司 A trolley plate of a sinter cooling machine
CN101231229A (en) * 2007-01-26 2008-07-30 华中科技大学 Non-dyeing automatic counting method for liquid bacterium-containing quantity
CN103077523A (en) * 2013-01-23 2013-05-01 天津大学 Method for shooting and taking evidence through handheld camera
CN104318543A (en) * 2014-01-27 2015-01-28 郑州大学 Board metering method and device based on image processing method
JP2015161680A (en) * 2014-02-28 2015-09-07 株式会社キーエンス Inspection system, image processing apparatus, method, and program
CN107123188A (en) * 2016-12-20 2017-09-01 北京联合众为科技发展有限公司 Ticket of hindering based on template matching algorithm and edge feature is recognized and localization method
CN108016840A (en) * 2017-11-28 2018-05-11 天津工业大学 A kind of LED based multiple views conveyer belt longitudinal tear image-pickup method
CN110046529A (en) * 2018-12-11 2019-07-23 阿里巴巴集团控股有限公司 Two-dimensional code identification method, device and equipment
CN110020656A (en) * 2019-01-30 2019-07-16 阿里巴巴集团控股有限公司 Bearing calibration, device and the equipment of image
CN110378376A (en) * 2019-06-12 2019-10-25 西安交通大学 A kind of oil filler object recognition and detection method based on machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
尚丽 (Shang Li) et al.: "A New Palmprint ROI Image Location Method", Laser & Infrared (激光与红外), vol. 42, no. 07, 20 July 2012, pages 815-820 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476712A (en) * 2020-03-13 2020-07-31 中冶长天国际工程有限责任公司 Method and system for capturing and detecting trolley grate bar image of sintering machine
CN111476712B (en) * 2020-03-13 2024-03-15 中冶长天国际工程有限责任公司 Trolley grate image shooting and detecting method and system of sintering machine

Also Published As

Publication number Publication date
CN111222510B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN111476712B (en) Trolley grate image shooting and detecting method and system of sintering machine
CN109523583B (en) Infrared and visible light image registration method for power equipment based on feedback mechanism
CN113706566B (en) Edge detection-based perfuming and spraying performance detection method
CN110570422B (en) Capsule defect visual detection method based on matrix analysis
CN110889827A (en) Transmission line tower online identification and inclination detection method based on vision
CN112561983A (en) Device and method for measuring and calculating surface weak texture and irregular stacking volume
CN109214288B (en) Inter-frame scene matching method and device based on multi-rotor unmanned aerial vehicle aerial video
CN104112118B (en) Method for detecting lane lines for Lane Departure Warning System
CN111383174A (en) Pile bursting data acquisition method for photogrammetry
CN111222510B (en) Trolley grate image pickup method and system of sintering machine
CN115535525A (en) Conveyor belt longitudinal tearing detection system and method based on image matching
CN111223094B (en) Trolley grate spacing detection method and system for sintering machine
CN111223098B (en) Trolley grate inclination angle detection method and system of sintering machine
CN113252103A (en) Method for calculating volume and mass of material pile based on MATLAB image recognition technology
CN108734054B (en) Non-shielding citrus fruit image identification method
CN112614139A (en) Conveyor belt ore agglomerate screening method based on depth map
CN111462250A (en) Correction system and correction method
CN114581447B (en) Conveying belt deviation identification method and device based on machine vision
CN113591548B (en) Target ring identification method and system
CN112798108B (en) Ceramic tile self-adaptive color separation method and device
CN113406091A (en) Unmanned aerial vehicle system for detecting fan blade and control method
CN114494165A (en) Clustering-based light bar extraction method and device
CN116188348A (en) Crack detection method, device and equipment
CN113409297A (en) Aggregate volume calculation method, particle form grading data generation method, system and equipment
CN111415337B (en) Trolley grate inclination angle detection method and system of sintering machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant