CN111222510B - Trolley grate image pickup method and system of sintering machine - Google Patents


Info

Publication number
CN111222510B
CN111222510B (application CN202010177626.5A)
Authority
CN
China
Prior art keywords
image
grate
images
contour
corner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010177626.5A
Other languages
Chinese (zh)
Other versions
CN111222510A (en)
Inventor
李宗平
廖婷婷
李曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongye Changtian International Engineering Co Ltd
Original Assignee
Zhongye Changtian International Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongye Changtian International Engineering Co Ltd filed Critical Zhongye Changtian International Engineering Co Ltd
Priority to CN202010177626.5A priority Critical patent/CN111222510B/en
Publication of CN111222510A publication Critical patent/CN111222510A/en
Application granted granted Critical
Publication of CN111222510B publication Critical patent/CN111222510B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27 FURNACES; KILNS; OVENS; RETORTS
    • F27B FURNACES, KILNS, OVENS, OR RETORTS IN GENERAL; OPEN SINTERING OR LIKE APPARATUS
    • F27B21/00 Open or uncovered sintering apparatus; Other heat-treatment apparatus of like construction
    • F27B21/02 Sintering grates or tables
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27 FURNACES; KILNS; OVENS; RETORTS
    • F27D DETAILS OR ACCESSORIES OF FURNACES, KILNS, OVENS, OR RETORTS, IN SO FAR AS THEY ARE OF KINDS OCCURRING IN MORE THAN ONE KIND OF FURNACE
    • F27D21/00 Arrangements of monitoring devices; Arrangements of safety devices
    • F27D21/02 Observation or illuminating devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27 FURNACES; KILNS; OVENS; RETORTS
    • F27D DETAILS OR ACCESSORIES OF FURNACES, KILNS, OVENS, OR RETORTS, IN SO FAR AS THEY ARE OF KINDS OCCURRING IN MORE THAN ONE KIND OF FURNACE
    • F27D21/00 Arrangements of monitoring devices; Arrangements of safety devices
    • F27D2021/0057 Security or safety devices, e.g. for protection against heat, noise, pollution or too much duress; Ergonomic aspects
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F27 FURNACES; KILNS; OVENS; RETORTS
    • F27M INDEXING SCHEME RELATING TO ASPECTS OF THE CHARGES OR FURNACES, KILNS, OVENS OR RETORTS
    • F27M2003/00 Type of treatment of the charge
    • F27M2003/04 Sintering


Abstract

The application discloses a trolley grate bar image capturing method for a sintering machine, comprising the following steps: extracting a region of interest (ROI) from the acquired grate bar image to obtain an ROI-extracted image; dividing the ROI-extracted image into as many sub-areas as there are rows of grate bars; extracting the outer contours of each divided row image; judging the size of each extracted contour, where a contour is judged to be the outer contour of a grate bar when the number of pixel points forming it is greater than or equal to a set threshold, and judged to be an invalid contour formed by other texture changes in the image when the number is smaller than the threshold; and counting the number of qualifying contours in each sub-area. The method can extract valid grate bar images, so that stitching of the overall grate bar image can be completed conveniently.

Description

Trolley grate image pickup method and system of sintering machine
Technical Field
The application relates to the technical field of sintering machines, and in particular to a trolley grate bar image capturing method for a sintering machine. The application also relates to a trolley grate bar image capturing system for a sintering machine.
Background
Sintering is a process in which various powdery iron-containing raw materials are mixed with appropriate amounts of fuel, flux and water, pelletized, and then sintered on equipment, so that the materials undergo a series of physical and chemical changes and the mineral powder particles are bonded into blocks. The sintering operation is the central link of sintering production and comprises the main working procedures of material distribution, ignition, sintering and the like; the key piece of equipment in the sintering operation is the sintering machine. Referring to fig. 1, fig. 1 is a schematic structural diagram of a sintering machine in the prior art.
As shown in fig. 1, the sintering machine includes a pallet 101, a hearth-layer bin 102, a sintering-mixture bin 103, an ignition furnace 104, a head star wheel 105, a tail star wheel 106, a sinter breaker 107, wind boxes 108, an exhaust fan 109, and the like. The belt sintering machine is a piece of sintering machinery driven by the head and tail star wheels, equipped with trolleys filled with mixture and with ignition and air-exhaust devices. The trolleys run continuously end to end on a closed track; as in fig. 1, the upper and lower layers of track are fully occupied by trolleys, and one sintering machine comprises hundreds of them. After the iron-containing mixture is fed onto a trolley through the feeding device, the ignition device ignites the surface material. A series of wind boxes is arranged below the bottom of the trolley, connected at one end to a large exhaust fan; under the induced draft, the material in the trolley burns gradually from the surface down to the trolley bottom.
Grate bars are laid on the trolley. The grate bar is an important component of the sintering machine trolley; a failed grate bar causes material leakage, poor air permeability and similar problems, so the state of the grate bars directly influences normal sintering production and sintering quality. The grate bars are fixed on the trolley cross beams, bear the material, and guarantee the air permeability of the sintering reaction. Because the sintering trolleys run continuously 24 hours a day, the grate bars are easily damaged under the weight of the ore, the negative pressure of the air draft and repeated high temperatures. The adverse effects of grate bar damage are as follows:
the first grate bar is missing. After the grate bars are broken and fall off, the gap width of the single-row grate bars can be increased, and when the gap is too large, the sintered mixture can fall into the flue from the gap holes, so that the material surface forms a rat hole.
2) Grate bar inclination. Wear and missing bars affect the inclination of the remaining grate bars; when a grate bar is excessively inclined it can no longer be clamped on the trolley body, leading to large-area falling off.
3) Grate bar gap blockage. Sintered mineral aggregate becomes stuck in the gaps between grate bars; large-area blockage leads to poor air permeability of the sintering reaction and affects sinter quality.
In addition, because two cameras are used, the images need to be stitched; when stitching, valid images that contain most of the grate bars must be selected. This is the problem that urgently needs to be solved.
Disclosure of Invention
The technical problem to be solved by the application is to provide a trolley grate bar image capturing method for a sintering machine whose design can extract valid grate bar images, so that stitching of the overall grate bar image is completed very conveniently. In addition, the application also provides a trolley grate bar image capturing system for a sintering machine.
In order to solve the above technical problem, the application provides a trolley grate bar image capturing method for a sintering machine, comprising the following steps:
extracting the ROI of the acquired grate bar image to obtain an ROI-extracted image;
dividing the ROI-extracted image into as many sub-areas as there are rows of grate bars;
extracting the outer contour of each divided row image;
judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be the outer contour of a grate bar; when it is smaller than the threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualifying contours in each sub-area.
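As an illustration of the screening step above, the sketch below counts qualifying contours per sub-area in plain Python, representing each contour only by its pixel count; the function and variable names are illustrative and not part of the patent.

```python
def count_valid_contours(contour_sizes, threshold):
    """Count contours whose pixel count reaches the grate-bar threshold.

    contour_sizes: list of pixel counts, one per extracted contour.
    threshold: minimum number of pixel points for a grate bar outer contour.
    """
    valid = 0
    for size in contour_sizes:
        if size >= threshold:   # judged to be a grate bar outer contour
            valid += 1          # contours below the threshold are invalid
    return valid

# One count per sub-area (row of grate bars); sizes are made-up examples:
rows = [[120, 95, 130, 8, 5], [110, 100, 125], [6, 4, 9]]
per_row = [count_valid_contours(r, threshold=50) for r in rows]
```

A row whose count stays low (like the last one above) indicates the trolley body rather than grate bars in that sub-area.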
Optionally, the image capturing method further includes the steps of:
the average value of the profile values of each row is calculated by the following formula:
optionally, the image capturing method further includes the steps of:
the mean square error of each row of profile values is calculated by the following formula:
optionally, the image capturing method further includes:
extracting four corner points of the grate bars with a deep learning algorithm, performing rough positioning on the panoramic image, and performing a perspective transformation based on the corner points so as to flatten the image.
Optionally, the image capturing method further includes:
training a deep learning network to obtain a deep network model:
the training samples are calibrated manually, the calibration ensuring that the corner point is located at the center of the prediction frame, and the coordinate values of the corner point are obtained from the size and position of the prediction frame.
Optionally, the image capturing method further includes:
during testing, the four roughly positioned corner images of the panoramic image are input into the trained deep network model to obtain the coordinates (X, Y) of each corner within its prediction frame, and the coordinates of the corner in the original image are obtained by the following conversion formulas:
upper left corner c_lt: (X+mg, Y)
Lower left corner c_lb: (X+mg, H-sq+Y)
Upper right corner c_rt: (W-mg-sq+X, Y)
Lower right corner c_rb: (W-mg-sq+X, H-sq+Y)
where H represents the height of the original image, W its width, mg the distance of the cropped region from the left and right edges, and sq the side length of the (square) prediction frame.
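The four conversion formulas can be written directly as a small helper. The sketch below assumes the crop layout described above (top crops start at y = 0, bottom crops at y = H - sq); the function and dictionary names are illustrative.

```python
def corner_in_original(region, X, Y, H, W, mg, sq):
    """Map corner coordinates (X, Y) inside a cropped corner image back
    to coordinates in the original panoramic image.

    region: one of 'c_lt', 'c_lb', 'c_rt', 'c_rb'
    H, W:   height and width of the original image
    mg:     distance of the crop from the left/right edge
    sq:     side length of the square crop / prediction frame
    """
    offsets = {
        'c_lt': (mg, 0),                # upper left:  (X+mg, Y)
        'c_lb': (mg, H - sq),           # lower left:  (X+mg, H-sq+Y)
        'c_rt': (W - mg - sq, 0),       # upper right: (W-mg-sq+X, Y)
        'c_rb': (W - mg - sq, H - sq),  # lower right: (W-mg-sq+X, H-sq+Y)
    }
    dx, dy = offsets[region]
    return X + dx, Y + dy
```

Each branch reproduces one of the four formulas above, so the helper is just the table of crop offsets applied to (X, Y).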
Optionally, the image capturing method further includes:
based on the coordinate values of the four corner points in the original image, the image is flattened by the following four-point perspective transformation:

[x', y', w]^T = A [x, y, 1]^T

where A is the 3 x 3 perspective transformation matrix, (x, y) is the known point that needs to be moved, and (x'/w, y'/w) is the converted target point.
Optionally, the method further comprises:
knowing the coordinate values of the four corner points in both the source and the target image, the entries of the perspective transformation matrix A are obtained by solving the system of linear equations imposed by the four point correspondences (each pair of corresponding points contributes two equations in the eight unknown entries of A, with the ninth entry fixed to 1).
in addition, in order to solve the above technical problem, the application also provides a trolley grate bar image capturing system for a sintering machine, the system comprising:
an ROI extraction unit for extracting the ROI of the acquired grate bar image to obtain an ROI-extracted image;
a segmentation unit for dividing the ROI-extracted image into as many sub-areas as there are rows of grate bars;
an outer contour extraction unit for extracting the outer contour of each divided image;
a judging unit for judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be the outer contour of a grate bar; when it is smaller than the threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; the unit also counts the number of qualifying contours in each sub-area.
Optionally, the image capturing system further includes:
a deep learning transformation unit for extracting four corner points of the grate bars with a deep learning algorithm, performing rough positioning on the panoramic image, and performing a perspective transformation based on the corner points so as to flatten the image;
the deep learning transformation unit includes:
a deep network model subunit for training the deep learning network to obtain a deep network model: the training samples are calibrated manually, the calibration ensuring that the corner point is located at the center of the prediction frame, and the coordinate values of the corner point are obtained from the size and position of the prediction frame.
In one embodiment of the present application, the method for capturing images of the trolley grate bars of a sintering machine includes the following steps: extracting the ROI of the acquired grate bar image to obtain an ROI-extracted image; dividing the ROI-extracted image into as many sub-areas as there are rows of grate bars; extracting the outer contour of each divided row image; judging the size of each extracted contour, where a contour whose pixel count is greater than or equal to a set threshold is judged to be the outer contour of a grate bar and one below the threshold is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualifying contours in each sub-area.
The method can extract valid grate bar images, so that stitching of the overall grate bar image is completed conveniently.
Drawings
FIG. 1 is a schematic diagram of a sintering machine according to the prior art;
FIG. 2 is a functional block diagram of a method for capturing images of a grate bar of a sintering machine according to one embodiment of the present application;
FIG. 3 is a schematic view of a part of the structure of a sintering machine in the present application;
FIG. 3-1 is a logic flow diagram of a method for capturing images of a grate bar of a sintering machine in accordance with one embodiment of the present application;
FIG. 4 is a comparison of an invalid ROI map and a valid ROI map obtained by the grate bar image capturing device;
FIG. 5 is a schematic diagram of a global picture stitched according to an embodiment of the present disclosure;
FIG. 6 is a flow chart of corner positioning in one embodiment of the present application;
FIG. 7 is a schematic view of the image positions of the corner points on the basis of FIG. 5;
FIG. 8 is a schematic view of the image after cropping;
FIG. 9 is an image schematic of the manual calibration mode;
FIG. 10 is a schematic diagram of the corner detection result;
FIG. 11 is a schematic diagram of comparison images before and after the trolley corner correction.
Detailed Description
To enable those skilled in the art to better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.
Some of the flows described in the specification, claims and figures of the present invention include a plurality of operations occurring in a particular order, but it should be understood that the operations may be performed out of that order or in parallel; order labels such as 101 and 102 merely distinguish the operations and do not by themselves represent any execution order. In addition, the flows may include more or fewer operations, performed sequentially or in parallel. The descriptions "first" and "second" herein distinguish different messages, devices, modules, etc.; they do not represent a sequence, nor are "first" and "second" limited to different types.
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Referring to fig. 2 for the system functional structure of the present application, fig. 2 is a functional block diagram of a trolley grate image capturing method of a sintering machine according to an embodiment of the present application.
As shown in fig. 2, the functional modules include the image acquisition device, data and model storage, image acquisition, parameter output, characteristic parameter calculation, the intelligent diagnosis model, state output, and the like. The image acquisition device preprocesses the acquired image and stores it into the data and model storage module. The data and model storage module outputs the grate bar image to the image acquisition module and the characteristic parameters to the parameter acquisition module. Parameters of the characteristic parameter calculation model are also stored in the data and model storage module.
The image acquisition device is shown in fig. 3; fig. 3 is a schematic diagram of part of the structure of the sintering machine in the present application.
(1) Image acquisition device
The invention installs a set of image acquisition devices at the upper maintenance platform of the machine head; the structure is shown in fig. 3. The device comprises a camera 201, a light source 202 and a mounting bracket, and is used for acquiring images of the grate bars on the trolley 203. One or more suitable cameras are selected according to the field-of-view size, lens parameters, camera parameters and the like. Fig. 3 shows an example in which two cameras synchronously acquire grate bar images.
With the device mounted in this position, the acquired images divide into valid and invalid images, as follows. Referring to fig. 4, fig. 4 compares an invalid ROI image and a valid ROI image obtained by the grate bar image capturing device of fig. 3.
Trolley grate bar faults are identified from the useful images; a valid image is one in which all three rows of grate bars appear in the camera's field of view, so the video stream acquired by the camera needs to be analyzed online.
First, the region of interest (ROI) is extracted, i.e., roughly the region of the video in which the three rows of grate bars at the trolley bottom appear completely. ROI extraction reduces the interference of objects outside the grate bar region with the algorithm and reduces the processing difficulty. The results for an invalid and a valid image are shown in fig. 4: the left image in fig. 4 is an invalid ROI image, the right a valid ROI image.
The ROI of a valid image contains three rows of grate bars; in an invalid image, part of the ROI contains grate bars and another part contains the trolley body. The trolley body has fewer features and contours than the grate bars, so grate bar images can be screened by contour count. The ROI is divided into three sub-areas, the outer contours of each sub-area are extracted by an algorithm, and the number of outer contours in each row is counted; if the number exceeds a threshold the image is judged valid, otherwise invalid.
After a valid image is identified, the images from the left and right cameras at that moment are transmitted to the image stitching module, where an image stitching algorithm such as SIFT, SURF or FAST is used to stitch the left and right images into a global image, as shown in fig. 5; fig. 5 is a schematic diagram of a stitched global image according to an embodiment of the present application.
When the device acquires images, the position of the trolley changes between acquisitions, so the trolley bottom cannot be kept parallel to the camera's lens plane: the grate bars on one side of the trolley are closer to the camera than those on the other side, and the imaging has a certain distortion. To correct this distortion, the system extracts the four corner points of the grate bar region with a deep learning algorithm and applies a perspective transformation based on the corner points to flatten the image. Referring to fig. 6, fig. 6 is a flowchart of corner positioning in an embodiment of the present application.
As shown in fig. 6, the corner positioning process comprises, at training time: panoramic training images, image cropping, corner coordinate calibration, deep network training and the resulting deep network model, yielding corner coordinates in the cropped images and then in the panoramic image. Following the same flow at run time, a real-time panoramic image is acquired and cropped, the corner coordinates in the cropped images are obtained through the deep network model, and the corner coordinates in the panoramic image are then derived.
Because of the complexity of the environment and the uncertainty of the viewing angles, traditional methods that directly search contours and feature points have difficulty achieving accurate positioning, so a deep learning algorithm is adopted for corner positioning.
First the image is preprocessed. Corner positioning mainly concerns the four corner areas at the trolley bottom, so to reduce redundant data and speed up training, the image is cropped into corner images; the four areas, shown in fig. 7, are c_lt, c_lb, c_rt and c_rb. Fig. 7 is a schematic diagram of the image positions of the corners on the basis of fig. 5.
The extracted target images containing the corner points have consistent sizes and consistent distances from the edges of the original image; as shown in fig. 7, mg denotes the distance from one side of a target area to the left or right edge, and sq the side length of the target area frame. The resulting cropped images are shown in fig. 8; fig. 8 is a schematic diagram of the cropped images.
The cropped images are then calibrated by marking the corner with a square prediction frame (bbox) of fixed side length within the cropped image; the corner should lie as close to the center of the bbox as possible, as shown in fig. 9. Fig. 9 is a schematic diagram of the manual calibration.
The coordinate value of the bbox center in the cropped target image is used as the coordinate value of the corner in the cropped target image. For example:
the coordinates of the prediction frame in the target frame are known as (X, Y), and the coordinates in the target frame are converted into real coordinates in the original image, and the calculation mode is as follows:
upper left corner c_lt: (X+mg, Y)
Lower left corner c_lb: (X+mg, H-sq+Y)
Upper right corner c_rt: (W-mg-sq+X, Y)
Lower right corner c_rb: (W-mg-sq+X, H-sq+Y)
where H represents the height of the original image, W its width, mg the distance of the cropped region from the left and right edges, and sq the side length of the prediction frame.
after a large number of calibrated images are obtained, a proper depth network model for target detection is selected, the calibrated images are used as the input of the model, and the network model for corner detection is obtained through training.
The acquired real-time panoramic image undergoes the same preprocessing as the training images to obtain the four target areas; these four area images are input into the trained deep model, and the deep network detection yields a positioning frame for each corner, as shown in fig. 10, a schematic diagram of the corner detection result. Once the coordinates of a corner positioning frame are obtained, the center coordinates of the frame (i.e., the coordinates taken to represent the corner) follow from the frame's size and position (X, Y), and the coordinate values of the four corners to be positioned are obtained via the coordinate transformation formulas from the target image to the original image.
After the four corner points of the grate bar region are obtained, a four-point perspective transformation is used to flatten the image. The perspective transformation formula is:

[x', y', w]^T = A [x, y, 1]^T

where A is the 3 x 3 perspective transformation matrix, (x, y) is the known point that needs to be moved, and (x'/w, y'/w) is the converted target point.
Knowing the four pairs of corner points, the resulting system of linear equations is solved to compute the perspective transformation matrix, and the transformation matrix is then used to convert the image.
It should be noted that the phrase "the known points that need to be moved" is explained as follows:
all points need to be transformed. First, the perspective transformation matrix A is unknown; from the four corner coordinates in the original image and the four corresponding coordinates in the target space, the transformation matrix A can be computed (by solving a linear system). All points of the original image are then passed through the perspective transformation matrix A to obtain the target image (i.e., the image after perspective correction). The above presents the general perspective transformation formulation; the implementation uses OpenCV's transformation functions, which compute the transformation matrix from the four detected input points and then produce the mapped image from that matrix.
Furthermore, the "target points" mentioned above are all the points of the corrected image obtained through the transformation matrix.
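The general formulation just described can be sketched with NumPy by solving the eight linear equations directly; in practice the OpenCV functions mentioned above (e.g. cv2.getPerspectiveTransform with cv2.warpPerspective) perform the same computation. All names below are illustrative.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Compute the 3x3 perspective matrix mapping 4 src points to 4 dst points.

    Each correspondence (x, y) -> (u, v) contributes two linear equations
    in the eight unknown matrix entries; the ninth entry is fixed to 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(M, x, y):
    """Apply the perspective matrix to one point (homogeneous divide)."""
    u, v, w = M @ np.array([x, y, 1.0])
    return u / w, v / w
```

Applying warp_point to every pixel coordinate of the source image (or, equivalently, warping the image with the matrix) yields the perspective-corrected image.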
Images before and after corner correction are shown in fig. 11; fig. 11 is a schematic comparison of the trolley image before and after corner correction.
The above describes the technical solution of the present application in its application scenario. The specific technical solution is further described below.
Referring to fig. 3-1, fig. 3-1 is a logic flow diagram of a method for capturing images of a grate bar of a sintering machine according to an embodiment of the present application.
In one embodiment, as shown in fig. 3-1, a method for capturing images of the trolley grate bars of a sintering machine comprises the following steps:
step S01: extracting the ROI of the acquired grate bar image to obtain an ROI-extracted image;
step S02: dividing the ROI-extracted image into as many sub-areas as there are rows of grate bars;
step S03: extracting the outer contour of each divided row image;
step S04: judging the size of each extracted contour: when the number of pixel points forming a contour is greater than or equal to a set threshold, the contour is judged to be the outer contour of a grate bar; when it is smaller than the threshold, the contour is judged to be an invalid contour formed by other texture changes in the image; and counting the number of qualifying contours in each sub-area.
The method can extract valid grate bar images, so that stitching of the overall grate bar image is completed conveniently.
Further improvements can be made on the above embodiment. For example, the image capturing method further includes the steps of:
the average value of the contour counts of the rows is calculated by the following formula:

mean = (upNum + midNum + downNum) / 3
the image pickup method further includes the steps of:
the mean square error of the row contour counts is calculated by the following formula:

S^2 = ((upNum - mean)^2 + (midNum - mean)^2 + (downNum - mean)^2) / 3
it should be noted that, the ROI area is subdivided into an upper part, a middle part and a lower part, when the ROI area is effective, the upper part, the middle part and the lower part are respectively three rows of bars of the trolley, and when the ROI area is ineffective, one part may be the trolley body. The trolley body is different from the grid bar in texture structure, so that the three parts can be subjected to outline extraction to obtain outlines of three areas, namely upContours, midcon tours and downContours, and all outlines detected in the areas are stored in each variable. And setting a threshold value contourSize of the outer contour size of the grate.
If upContours[i] >= contourSize: upNum = upNum + 1
If midContours[i] >= contourSize: midNum = midNum + 1
If downContours[i] >= contourSize: downNum = downNum + 1
upNum, midNum and downNum are initialized to 0 and count the number of qualifying contours in the three rows of grate bars. Through this judging condition, small edge fragments are screened out using the prior knowledge of the grate bar size.
It should be noted that an invalid image is one in which the trolley bottom has not fully entered the camera field of view, so the frame contains the trolley body or other areas that carry little texture; a valid image is one in which the trolley bottom has fully entered the field of view, so all three rows of grate bars at the trolley bottom appear in the frame and the texture is abundant. This patent uses this principle to obtain valid images.
In addition, to eliminate the interference of small noise contours on the grate-bar count, all contours are screened: if the number of edge points forming a contour is smaller than the threshold contourSize, that contour is discarded. Finally, the total numbers of contours remaining in each region are counted as upNum, midNum and downNum.
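The screening and counting rule can be written compactly. The variable names follow the text, while the threshold value and the sample point counts are hypothetical:

```python
CONTOUR_SIZE = 50  # the contourSize threshold; the concrete value is assumed

def count_valid(contour_point_counts, threshold=CONTOUR_SIZE):
    """Discard noise contours with fewer edge points than the threshold
    and return how many contours survive."""
    return sum(1 for n in contour_point_counts if n >= threshold)

# Hypothetical per-contour edge-point counts for the three rows:
upContours = [120, 95, 12, 140]    # 12 is a small noise contour
midContours = [110, 130, 101]
downContours = [90, 4, 88, 77]     # 4 is noise

upNum = count_valid(upContours)
midNum = count_valid(midContours)
downNum = count_valid(downContours)
```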
For a valid image, the contour counts detected for the three rows of grate bars are close to one another, so the variance of the three counts is calculated:
avg = (upNum + midNum + downNum) / 3
S^2 = ((upNum - avg)^2 + (midNum - avg)^2 + (downNum - avg)^2) / 3
where avg is the mean of the three row contour counts and S^2 is their variance. Whether the current image is valid is then judged from the three row counts and their degree of dispersion.
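A minimal sketch of this validity decision; the mean and variance thresholds below are assumed values, since the patent does not fix them:

```python
def row_count_stats(up_num, mid_num, down_num):
    """Mean and variance (S^2) of the three per-row contour counts."""
    counts = (up_num, mid_num, down_num)
    avg = sum(counts) / 3
    var = sum((c - avg) ** 2 for c in counts) / 3
    return avg, var

# Assumed decision thresholds, not taken from the patent:
MIN_AVG = 8.0
MAX_VAR = 4.0

def is_valid_frame(up_num, mid_num, down_num):
    """A frame is valid when all three rows show many contours (high mean)
    that agree with one another (low variance)."""
    avg, var = row_count_stats(up_num, mid_num, down_num)
    return avg >= MIN_AVG and var <= MAX_VAR
```

A frame with one row occluded by the trolley body fails on the variance term even when the mean is acceptable.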
Once the frames from both cameras are found to meet the valid-image criterion, the images from the two side cameras are uploaded to the system simultaneously for image stitching, yielding a complete grate image of the entire trolley bottom.
In the above embodiments, further designs are also possible.
For example, the image pickup method further includes:
four corner points of the grate region are extracted by a deep learning algorithm to coarsely locate the panoramic image, and a perspective transformation based on these corner points flattens the image. The image capturing method further comprises:
training a deep learning network to obtain a deep network model:
and calibrating the training sample manually, wherein the calibration sample ensures that the corner point is positioned at the center of the prediction frame, and obtaining the coordinate value of the corner point according to the size of the prediction frame.
Further, the image capturing method further includes:
during testing, the four coarsely-located corner images of the panoramic image are input into the trained deep network model to obtain the coordinates (X, Y) of each corner within its prediction box; the coordinates of the corner in the original image are then obtained by the following conversion formulas:
upper left corner c_lt: (X+mg, Y)
Lower left corner c_lb: (X+mg, H-sq+Y)
Upper right corner c_rt: (W-mg-sq+X, Y)
Lower right corner c_rb: (W-mg-sq+X, H-sq+Y)
where H denotes the height of the original image, W its width, mg the margin from the left and right edges, and sq the side length of the (square) prediction box.
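The four conversion formulas can be collected into one helper; the parameter names mirror the text:

```python
def corner_in_original(corner, X, Y, W, H, mg, sq):
    """Map a corner's box-local coordinates (X, Y) back to original-image
    coordinates, following the four conversion formulas above. W/H are the
    image width/height, mg the side margin, sq the prediction-box side."""
    return {
        "lt": (X + mg, Y),                    # upper left  c_lt
        "lb": (X + mg, H - sq + Y),           # lower left  c_lb
        "rt": (W - mg - sq + X, Y),           # upper right c_rt
        "rb": (W - mg - sq + X, H - sq + Y),  # lower right c_rb
    }[corner]
```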
Further, the image capturing method further includes:
based on the coordinate values of the four corner points in the original image, the image is flattened by the following four-point perspective transformation:
[x', y', w']ᵀ = M · [u, v, w]ᵀ
where M is the 3×3 perspective transformation matrix, [u, v, w]ᵀ is the known point to be moved, and [x', y', w']ᵀ is the transformed target point; the final pixel coordinates are obtained as x = x'/w', y = y'/w'.
The image capturing method further comprises:
with the coordinate values of the four corner points known, the perspective transformation matrix M is solved: fixing a33 = 1, the four corner correspondences yield eight linear equations from which the remaining eight entries of M are obtained.
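A NumPy sketch of solving for the perspective matrix from the four corner correspondences as just described; fixing a33 = 1 is the usual normalization, and this mirrors what OpenCV's cv2.getPerspectiveTransform computes before cv2.warpPerspective applies it:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 perspective matrix M (a33 fixed to 1) mapping the
    four known src corner points onto the four dst points. Each
    correspondence gives two linear equations in the eight unknowns:
        x = (a11*u + a12*v + a13) / (a31*u + a32*v + 1)
        y = (a21*u + a22*v + a23) / (a31*u + a32*v + 1)
    """
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    m = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(m, 1.0).reshape(3, 3)

def apply_perspective(M, pt):
    """Map one point through M and divide by the homogeneous coordinate."""
    x, y, w = M @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```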
corresponding to the method embodiment described above, the present application also provides an apparatus embodiment.
In one embodiment, a trolley grate image capture system of a sintering machine, the image capture system comprising:
the ROI extraction unit is used for extracting the ROI of the acquired grate image to obtain an image after the ROI is extracted;
the segmentation unit is used for segmenting the image after the ROI extraction into a plurality of parts with the same number as the grate bars;
an outer contour extraction unit for extracting the outer contour of each divided image;
a judging unit for judging the size of each extracted contour: when the number of pixel points forming the contour is greater than or equal to a set threshold, the contour is judged to be the outer contour of a grate bar, and when the number is smaller than the set threshold, it is judged to be an invalid contour produced by other texture changes in the image; the unit also counts the number of qualifying contours in each sub-area.
Further, the image pickup system further includes:
the deep learning transformation unit is used for extracting four corner points of the grate bar by adopting a deep learning algorithm, performing rough positioning on the panoramic image, and performing perspective transformation based on the corner points so as to flatten the image;
the deep learning transformation unit includes:
the deep network model subunit is used for training the deep learning network to obtain a deep network model: the training samples are labeled manually, the labeling ensuring that the corner point lies at the center of the prediction box, and the coordinate value of the corner point is obtained from the size of the prediction box.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Reference throughout this specification to "multiple embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, component, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, components, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, component, or characteristic shown or described in connection with one embodiment may be combined, in whole or in part, with features, components, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
Furthermore, those skilled in the art will appreciate that the various aspects of the invention are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" terminal, "" component, "or" system. Furthermore, aspects of the present application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
It should be noted that the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a … " does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A trolley grate image capturing method of a sintering machine, characterized by comprising the following steps:
extracting the ROI of the acquired grate image to obtain an image after the ROI is extracted, wherein the grate image is synchronously acquired by adopting two cameras, and the acquisition frequencies of the two cameras are consistent;
dividing the image after the ROI extraction into a plurality of parts equal in number to the rows of grate bars;
carrying out outline extraction on each divided row of images;
judging the size of each extracted contour: when the number of pixel points forming the contour is greater than or equal to a set threshold, judging the contour to be the outer contour of a grate bar, and when the number is smaller than the set threshold, judging it to be an invalid contour produced by other texture changes in the image; counting the number of qualifying contours in each row area; if the contour count exceeds a count threshold, judging the image to be a valid image, and if it is below that threshold, judging it to be an invalid image; and performing image stitching on the valid images acquired by the two cameras to obtain a complete grate image of the trolley bottom, wherein a valid image is an image in which all three rows of grate bars are within the camera field of view.
2. The method for capturing images of a grate bar of a sintering machine according to claim 1, further comprising the steps of:
the average value of the per-row contour counts is calculated by the following formula:
avg = (upNum + midNum + downNum) / 3
3. the method for capturing images of a grate bar of a sintering machine according to claim 2, further comprising the steps of:
the mean square error of the per-row contour counts is calculated by the following formula:
S^2 = ((upNum - avg)^2 + (midNum - avg)^2 + (downNum - avg)^2) / 3
4. A trolley grate image capturing method of a sintering machine according to any one of claims 1 to 3, wherein the image capturing method further comprises:
four corner points of the grate bar are extracted by adopting a deep learning algorithm, panoramic image rough positioning is carried out, perspective transformation is carried out based on the corner points, and therefore the image is flattened.
5. The method for capturing images of a grate bar of a sintering machine according to claim 4, further comprising:
training a deep learning network to obtain a deep network model:
and calibrating the training sample manually, wherein the calibration sample ensures that the corner point is positioned at the center of the prediction frame, and obtaining the coordinate value of the corner point according to the size of the prediction frame.
6. The method for capturing images of a grate bar of a sintering machine according to claim 5, further comprising:
during testing, the four coarsely-located corner images of the panoramic image are input into the trained deep network model to obtain the coordinates (X, Y) of each corner within its prediction box; the coordinates of the corner in the original image are then obtained by the following conversion formulas:
upper left corner c_lt: (X+mg, Y)
Lower left corner c_lb: (X+mg, H-sq+Y)
Upper right corner c_rt: (W-mg-sq+X, Y)
Lower right corner c_rb: (W-mg-sq+X, H-sq+Y)
where H denotes the height of the original image, W its width, mg the margin from the left and right edges, and sq the side length of the (square) prediction box.
7. The method for capturing images of a grate bar of a sintering machine according to claim 6, further comprising:
based on the coordinate values of the four corner points in the original image, the image is flattened by the following four-point perspective transformation:
[x', y', w']ᵀ = M · [u, v, w]ᵀ
where M is the 3×3 perspective transformation matrix, [u, v, w]ᵀ is the known point to be moved, and [x', y', w']ᵀ is the transformed target point.
8. The method for capturing images of a grate bar of a sintering machine according to claim 7, further comprising:
with the coordinate values of the four corner points known, the perspective transformation matrix M is solved: fixing a33 = 1, the four corner correspondences yield eight linear equations from which the remaining eight entries of M are obtained.
9. A trolley grate image capturing system of a sintering machine, characterized in that the image capturing system comprises:
the image acquisition device is used for synchronously acquiring grate images by adopting two cameras, and the acquisition frequencies of the two cameras are consistent;
the ROI extraction unit is used for extracting the ROI of the acquired grate image to obtain an image after the ROI is extracted;
the segmentation unit is used for dividing the image after the ROI extraction into a plurality of parts equal in number to the rows of grate bars;
an outer contour extraction unit for extracting the outer contour of each divided image;
a judging unit for judging the size of each extracted contour: when the number of pixel points forming the contour is greater than or equal to a set threshold, the contour is judged to be the outer contour of a grate bar, and when the number is smaller than the set threshold, it is judged to be an invalid contour produced by other texture changes in the image; counting the number of qualifying contours in each row area; and if the contour count exceeds a count threshold, judging the image to be a valid image, and if it is below that threshold, judging it to be an invalid image;
and the image stitching module is used for stitching the valid images acquired by the two cameras to obtain a global image, wherein a valid image is an image in which all three rows of grate bars are within the camera field of view.
10. The sintering machine's trolley grate image capture system of claim 9, further comprising:
the deep learning transformation unit is used for extracting four corner points of the grate bar by adopting a deep learning algorithm, performing rough positioning on the panoramic image, and performing perspective transformation based on the corner points so as to flatten the image;
the deep learning transformation unit includes:
the deep network model subunit is used for training the deep learning network to obtain a deep network model: the training samples are labeled manually, the labeling ensuring that the corner point lies at the center of the prediction box, and the coordinate value of the corner point is obtained from the size of the prediction box.
CN202010177626.5A 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine Active CN111222510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010177626.5A CN111222510B (en) 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine


Publications (2)

Publication Number Publication Date
CN111222510A CN111222510A (en) 2020-06-02
CN111222510B true CN111222510B (en) 2024-03-15

Family

ID=70807718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010177626.5A Active CN111222510B (en) 2020-03-13 2020-03-13 Trolley grate image pickup method and system of sintering machine

Country Status (1)

Country Link
CN (1) CN111222510B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476712B (en) * 2020-03-13 2024-03-15 中冶长天国际工程有限责任公司 Trolley grate image shooting and detecting method and system of sintering machine

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4631750A (en) * 1980-04-11 1986-12-23 Ampex Corporation Method and system for spacially transforming images
JP2006311020A (en) * 2005-04-27 2006-11-09 Fuji Xerox Co Ltd Image output method and image output apparatus
CN200979373Y (en) * 2006-12-08 2007-11-21 唐山钢铁股份有限公司 A trolley plate of a sinter cooling machine
CN101231229A (en) * 2007-01-26 2008-07-30 华中科技大学 Non-dyeing automatic counting method for liquid bacterium-containing quantity
CN103077523A (en) * 2013-01-23 2013-05-01 天津大学 Method for shooting and taking evidence through handheld camera
CN104318543A (en) * 2014-01-27 2015-01-28 郑州大学 Board metering method and device based on image processing method
JP2015161680A (en) * 2014-02-28 2015-09-07 株式会社キーエンス Inspection system, image processing apparatus, method, and program
CN107123188A (en) * 2016-12-20 2017-09-01 北京联合众为科技发展有限公司 Ticket of hindering based on template matching algorithm and edge feature is recognized and localization method
CN108016840A (en) * 2017-11-28 2018-05-11 天津工业大学 A kind of LED based multiple views conveyer belt longitudinal tear image-pickup method
CN110020656A (en) * 2019-01-30 2019-07-16 阿里巴巴集团控股有限公司 Bearing calibration, device and the equipment of image
CN110046529A (en) * 2018-12-11 2019-07-23 阿里巴巴集团控股有限公司 Two-dimensional code identification method, device and equipment
CN110378376A (en) * 2019-06-12 2019-10-25 西安交通大学 A kind of oil filler object recognition and detection method based on machine vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A new palmprint ROI image localization method; Shang Li et al.; Laser & Infrared; 2012-07-20; Vol. 42, No. 07; pp. 815-820 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant