CN116499801A - Automatic sampling device of molten steel based on machine vision system

Automatic sampling device of molten steel based on machine vision system

Info

Publication number
CN116499801A
Authority
CN
China
Prior art keywords
sample
molten steel
phase
area
image
Prior art date
Legal status
Pending
Application number
CN202310371191.1A
Other languages
Chinese (zh)
Inventor
武迎春
邢津榕
黄莉
朱彦军
黄国梁
Current Assignee
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology filed Critical Taiyuan University of Science and Technology
Priority to CN202310371191.1A
Publication of CN116499801A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N1/00Sampling; Preparing specimens for investigation
    • G01N1/02Devices for withdrawing samples
    • G01N1/10Devices for withdrawing samples in the liquid or fluent state
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal


Abstract

The invention discloses an automatic molten steel sampling device based on a machine vision system, belonging to the technical field of industrial automatic detection. The defect detection module obtains the 3D profile distribution of the molten steel sample by fringe projection profilometry, calculates the sample volume by piecewise integration of the 3D data, and evaluates whether the current sampling is valid by comparing it with the volume of a standard sample.

Description

Automatic sampling device of molten steel based on machine vision system
Technical Field
The invention belongs to the technical field of industrial automatic detection, and particularly relates to an automatic molten steel sampling device based on a machine vision system.
Background
Steel is the final product of a steel plant, so monitoring its quality is particularly important. Because the steel production cycle is long, the process flow is complex, and the factors influencing quality are numerous and variable, the production flow must be controlled accurately. Molten steel is one of the main products of the steelmaking process, and its quality control is of great significance for the quality of the final steel.
At present, molten steel quality is usually detected by quantitatively analyzing the high-temperature melt with high-precision instruments and then inferring the quality of the steel that will finally be produced. This approach not only increases hardware cost, but also introduces a certain degree of judgment error, because prior knowledge is used to predict the final production quality. Direct visual analysis of a sample formed from a small amount of high-temperature molten steel can reduce cost and avoid the limitations of indirect detection. However, because the production environment is dark, the ambient temperature is high, and the working intensity is great, manual sampling of high-temperature molten steel carries serious safety hazards. With the increasing maturity of computer vision theory, the modernization and intelligence of manufacturing have made breakthrough progress in industry. Combining a mechanical arm with an automation device into a mechanical-arm positioning and grabbing module effectively avoids such accidents.
The key technology of the mechanical-arm positioning and grabbing module is to use the mechanical arm to accurately locate and grasp the molten steel sample. Positioning based on image recognition has the advantage of being non-contact, so it is widely applied in fields such as industrial automation and biomedicine. The technology is mainly studied along two directions: traditional algorithms and machine-learning algorithms. Among traditional algorithms, Kneip et al. segmented a target object with a stereoscopic vision system combined with a mathematical model to obtain an overall edge model; among machine-learning algorithms, Wan et al. combined several machine-learning classifiers and used a one-way analysis-of-variance algorithm to identify and locate pineapples.
Defect detection of the molten steel sample can be performed by 3D-reconstructing the sample and calculating the volume difference between the current sample and a standard sample to judge whether the sampling is qualified. As a common non-contact 3D reconstruction method, fringe projection profilometry has the advantages of high precision, high efficiency, and a wide and stable measurement range, and is widely used in medical diagnosis, industrial product monitoring, reverse engineering and other fields. For example, Liu et al. reconstructed a highly reflective welded plate in real time by projecting blue fringes and using Fourier transform profilometry; because the phase calculation involves frequency-domain filtering, the error in the edge region of the reconstructed plate is relatively large. Yang et al. reconstructed objects with complex surfaces in 3D by projecting phase-shifted fringes and using a phase-shift technique; because several deformed fringe patterns are required for phase calculation, the real-time performance of the measurement system is reduced. To improve real-time performance, Karpinsky et al. increased the speed of fringe projection and acquisition by removing the projector color filter and adding hardware that supplies an external trigger signal to precisely synchronize projection and acquisition, but the extra external-trigger hardware raises the cost of the measurement system. Whether the 3D reconstruction is based on Fourier analysis or on phase-shift techniques, the steepness and reflectivity of the object both affect the accuracy of phase unwrapping. To suppress phase-unwrapping errors caused by surface jumps, Li et al. reconstructed isolated abrupt objects by projecting fringes of several different frequencies and using the low-frequency phase to guide the high-frequency phase unwrapping; the drawback is that when the measured object is very abrupt and the low-frequency unwrapping result contains errors, those errors propagate during the guided unwrapping and cause high-frequency unwrapping errors. Tan et al. performed 3D reconstruction of metal objects by projecting two sets of fringes and using a dual-frequency heterodyne method; because the two projected frequencies must be close and the synthesized frequency must be larger than the measurement field, the fringe frequencies have to be chosen low, which limits the accuracy of 3D reconstruction.
Disclosure of Invention
Aiming at the problems in the prior art, and in order to save hardware cost while ensuring the 3D reconstruction accuracy of the molten steel sample, the invention designs an automatic molten steel sampling device based on a machine vision system. The device realizes automatic sampling of the molten steel sample and uses the molten steel sample defect detection module to inspect the extracted sample for defects, so that the submitted sample meets the inspection requirements.
In order to solve the above technical problems, the invention adopts the following technical scheme: the automatic molten steel sampling device based on a machine vision system comprises a first mechanical arm, an automatic tray, a cooling water tank, a second mechanical arm and a fringe-projection-based molten steel sample defect detection module. The first mechanical arm is located between the melting furnace and the automatic tray, and the cooling water tank is located at one side of the automatic tray; after the automatic tray overturns, the molten steel sample falls into the cooling water tank. The second mechanical arm is located between the cooling water tank and the fringe-projection-based molten steel sample defect detection module;
the automatic tray is provided with a turnover device, and after turnover the molten steel sample is poured into the cooling water tank; a mechanical gripper is mounted at the end of the second mechanical arm and, under the control of the second mechanical arm, grasps the molten steel sample in the cooling water tank and delivers it to the fringe-projection-based molten steel sample defect detection module.
Further, the specific working process of the device is as follows:
step 1), an industrial ladle is fixed at the end of the mechanical arm that carries no grabbing tool; the end of the mechanical arm is moved to the furnace mouth area using the positioning technology, and the joint actions of the mechanical arm are set so as to take out a small amount of molten steel; the center of the sample mold groove area is then located, and the mechanical arm is moved and its joint actions set so as to pour the extracted molten steel into the mold groove;
step 2) after a set time, the automatic tray overturns and pours the molded sample into the cooling pool;
step 3) the sample in the cooling pool is located with the mechanical-arm positioning and grabbing module; after positioning with the machine vision recognition technology, the mechanical arm fitted with the gripper is driven to the located sample point, and an action signal is set to take the sample out for the subsequent detection-workbench operation;
and 4) after the molten steel sample to be inspected is placed on the detection workbench, the fringe-projection-based molten steel sample defect detection module obtains its 3D surface profile distribution by fringe projection profilometry and calculates its volume to judge whether the sample is qualified.
Further, the mechanical-arm positioning and grabbing module in step 3) comprises a workbench, a mechanical arm and a CCD camera. The CCD is fixed at the end of the mechanical arm and photographs the molten steel sample on the workbench; the grabbing position and grabbing direction are obtained by computer processing and converted into the corresponding mechanical-arm pose signals for grabbing. The specific steps are as follows:
3.1. Feature identification of molten steel sample
The feature recognition of the molten steel sample comprises two steps: acquiring the pose of the target sample, and feature extraction. The pose of the target sample is acquired with the camera fixed at the end of the mechanical arm; the feature extraction processes the acquired image with image recognition technology. The specific recognition technology is as follows:
A. Positioning coordinates
The molten steel sample is divided into two partial areas, an upper half region ('cylinder') and a lower half region ('handle'). The upper half region of the sample is identified first, and the identified upper half region then assists in obtaining the lower half region. The upper half region is identified as follows:
firstly, the acquired target sample image is processed with the Canny operator to obtain the primary edge contour I_e(x, y) of the sample; to avoid unclosed cracks in the primary edge contour, which would cause subsequent region-selection errors, the edge contour is dilated with the structuring element M_se according to formula (1),
where M_se is the structuring element used for the dilation; to eliminate background-noise contours in the primary edge contour map, 4-connected regions are used to detect all closed contours in the dilated sample edge contour map I_e'(x, y), denoted D_e^t, and the detected contours are filled according to formula (2),
the filled sample contour map I_pad(x, y) is then examined with 8-connected region detection, and the filled contour with the largest area is taken as the whole sample region, denoted I_outline(x, y); a standard circle is fitted to the whole sample region with the constant-kernel circle detection algorithm, the center of the fitted circle being denoted (x_cir, y_cir) and its radius R_cir; considering that the fitted circular region may not completely cover the upper half region represented by I_outline(x, y), the radius of the fitted circle is expanded by 10 pixels, the expanded result is taken as the contour of the upper half region of the molten steel sample, and the upper-half contour is filled with formula (3);
finally, the obtained upper half region of the sample is removed from the whole sample region with the following formula, and the remaining region is taken as the 'handle' region of the sample;
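As a concrete illustration of the region-segmentation steps above, the following Python/OpenCV sketch shows one possible realization; the Canny thresholds, the structuring-element size, and the use of a Hough circle detector in place of the constant-kernel circle detection algorithm named in the text are illustrative assumptions, not the patent's exact implementation.

import cv2
import numpy as np

def segment_sample_regions(gray):
    """Split a molten steel sample image into upper ('cylinder') and
    lower ('handle') region masks, following the steps described above.
    `gray` is a single-channel uint8 image; thresholds are illustrative."""
    # Primary edge contour I_e and dilation with a small structuring element M_se
    edges = cv2.Canny(gray, 50, 150)
    m_se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    edges_d = cv2.dilate(edges, m_se)

    # Fill every closed contour, then keep the largest filled blob as the
    # whole sample region I_outline
    contours, _ = cv2.findContours(edges_d, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(gray)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(filled, connectivity=8)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background label 0
    outline = np.uint8(labels == largest) * 255

    # Fit a circle to the whole region (Hough used here as a stand-in for the
    # constant-kernel detector), expand its radius by 10 px and fill it as the
    # upper-half region
    circles = cv2.HoughCircles(outline, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0],
                               param1=100, param2=10, minRadius=0, maxRadius=0)
    x_cir, y_cir, r_cir = circles[0, 0]
    upper = np.zeros_like(gray)
    cv2.circle(upper, (int(x_cir), int(y_cir)), int(r_cir) + 10, 255,
               thickness=cv2.FILLED)

    # Handle region = whole region minus upper-half region
    handle = cv2.subtract(outline, upper)
    return upper, handle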
considering that the structure of the molten steel sample is symmetrical, using the center of the region as the locating point better meets the system requirements; the centroid (center of mass) is well suited to finding the center point of an irregular object with uniform mass distribution, so the centroid of the 'handle' region is solved and used as the locating point of the target sample. The centroid formula is adapted by replacing the mass of an object point with the gray value of the image point, and the locating-and-grabbing point (x_m, y_m) is then solved with the following formula,
where D denotes the 'handle' region I_handle(x, y), x_i, y_i are the coordinate values of an image point in the region, and m_i is the gray value of the image point;
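A minimal sketch of the gray-value-weighted centroid used as the locating-and-grabbing point, under the same illustrative assumptions as the segmentation sketch above:

import numpy as np

def grab_point_from_handle(gray, handle_mask):
    """Gray-value-weighted centroid of the 'handle' region, used as the
    locating-and-grabbing point (x_m, y_m); image intensity stands in
    for mass, as described in the text."""
    ys, xs = np.nonzero(handle_mask)
    weights = gray[ys, xs].astype(np.float64)
    x_m = np.sum(weights * xs) / np.sum(weights)
    y_m = np.sum(weights * ys) / np.sum(weights)
    return x_m, y_m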
B. Deflection direction
For a two-dimensional image, the relation between two pixel points is defined by the length of a directed vector segment, combined with the mathematical concept of an angle, and used as the pose of the sample at the current moment. For the molten steel sample divided into two regions, a characterization point is selected in each region: the upper half region takes the fitted circle center (x_cir, y_cir) as its characterization point, and the lower half region takes the locating-and-grabbing point (x_m, y_m) as its characterization point; the vector between the two characterization points is obtained with the following formula;
to establish the conversion between vector and angle, another vector parallel to the image plane axis is introduced, and the conversion to an angular representation is performed with the following formula:
where round{·} is the rounding function;
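A hedged sketch of the deflection-angle computation between the two characterization points follows; the exact rounding convention of the formula above is assumed to be a plain round to integer degrees.

import numpy as np

def deflection_angle(x_cir, y_cir, x_m, y_m):
    """Angle between the vector joining the two characterization points
    (fitted circle centre -> grabbing point) and a vector parallel to the
    image x-axis, rounded to an integer degree."""
    v = np.array([x_m - x_cir, y_m - y_cir], dtype=np.float64)
    u = np.array([1.0, 0.0])          # vector parallel to the image plane axis
    cos_t = np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return int(round(theta))          # round{.} of the angle formula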
3.2. Mechanical arm positioning based on image feature points
After the positioning point and the deflection direction have been obtained from the image captured by the camera fixed at the end of the mechanical arm, they are converted, using the camera calibration and hand-eye calibration techniques, into a locating-and-grabbing point in the mechanical-arm base coordinate system, and the end effector is commanded to act so as to locate and grab the object;
A. Location grabbing point conversion
where z_c is the Z value of the locating point p_pix(x_m, y_m) in the camera coordinate system (X_C, Y_C, Z_C), M_ins is the intrinsic parameter matrix obtained by camera calibration, the hand-eye calibration matrix relates the camera frame to the arm end frame, and the remaining matrix represents the mapping between the end coordinates at the i-th moment and the base coordinates;
B. Deflection direction conversion
Because the rotation-angle increment of the mechanical-arm axis joint is linearly related to its rotation-value increment, the initial rotation position of the axis joint is made the same as the reference position of θ_O and the rotation directions are kept consistent; the initial rotation angle value θ_start is then read, and the axis deflection angle θ_H is obtained from the following formula
θ_H = θ_O - θ_start    (9).
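The coordinate chain implied by the point conversion of this subsection and by formula (9) can be sketched as follows; the matrix names (intrinsics, hand-eye matrix, end-to-base transform at moment i) are assumptions standing in for the symbols of the original conversion formula.

import numpy as np

def pixel_to_base(p_pix, z_c, M_ins, T_cam2end, T_end2base_i):
    """Map an image locating point to the robot-base frame, following the
    chain pixel -> camera -> arm end -> base. All matrices are assumed to
    come from camera calibration and hand-eye calibration."""
    u, v = p_pix
    # Back-project the pixel to camera coordinates at depth z_c
    p_cam = z_c * np.linalg.inv(M_ins) @ np.array([u, v, 1.0])
    p_cam_h = np.append(p_cam, 1.0)            # homogeneous camera point
    # Camera frame -> end-effector frame (hand-eye matrix), then end -> base at moment i
    p_base = T_end2base_i @ T_cam2end @ p_cam_h
    return p_base[:3]

def axis_deflection(theta_O, theta_start):
    """Axis deflection angle of formula (9): theta_H = theta_O - theta_start."""
    return theta_O - theta_start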
Further, the fringe-projection-based molten steel sample defect detection module in step 4) comprises a detection workbench, a digital projector and a CCD. The molten steel sample to be inspected is placed on the detection workbench, the digital projector projects multiple frames of fringes onto the detection workbench, and the CCD continuously acquires the multi-frame deformed fringe patterns; the acquired fringe set is then sent to a computer for selection and time-sequence restoration, and phase calculation, phase unwrapping and phase-height mapping are applied to the effective fringe images to reconstruct the 3D surface profile distribution of the sample; finally, the sample volume is calculated from the 3D surface profile distribution to judge whether the sample is qualified. The specific steps are as follows:
Step 4.1) after the molten steel sample to be measured is placed on the detection workbench, the digital projector cyclically plays a flash composed of multiple frames of fringe patterns with a certain phase shift at a frame rate of 20 fps; the CCD is controlled to continuously acquire the multi-frame deformed fringe patterns at a frame rate of 30 fps, the acquisition duration is set to 2 s, and the acquired fringes are stored in memory;
step 4.2) sending the collected fringe set into a computer for label identification and sequencing to realize fringe selection and time sequence restoration, and then carrying out phase calculation, phase expansion and phase height mapping on the effective fringe image to reconstruct the 3D surface profile distribution of the sample;
and 4.3) selecting the effective bottom area of the shot image of the measured molten steel sample, and calculating the sample volume by combining the obtained 3D surface profile distribution to judge whether the sample is qualified.
Further, the label fringes captured by the CCD in step 4.1) are used for the sorting operation, which proceeds as follows:
for one captured label fringe I_cap, standard circle fitting is performed with the constant-kernel circle detection algorithm, where (x, y) denotes the pixel coordinates of the captured image; the center of the fitted circle is denoted (x_circle, y_circle) and its radius R. The fitted circle is drawn on an image of the same size as the label fringe, the center coordinates and radius in formula (3) are replaced by the current identification result, and the circle is filled to serve as the label template M_circle(x, y);
to obtain the label embedded in the current label fringe, the label area in the label fringe is retained with formula (10), and the retained label information is denoted I_label(x, y);
the label information obtained from formula (10) is then extracted with the following formula, and the extracted label is denoted I_label(x, y),
Because the label-fringe ordering is realized by template matching, the two label images to be matched must have the same size and the same gray-level representation. The extracted label is therefore first scaled to the designed label size; since the designed label information is simple, the scaling is realized with nearest-neighbour interpolation, and the mapping between the label pixels before and after scaling is given by the following formula:
where (X, Y) denotes a pixel of the scaled image, I_grade(X, Y) is the scaled label map, w, h are the dimensions of the label map before scaling, and W, H are the dimensions of the scaled label map;
the scaled label is then binarized to obtain the final label map to be processed, I_num(X, Y),
where thresh is the set threshold, calculated with the minimized intra-class variance (Otsu) algorithm,
template matching between the label to be processed and the designed labels is carried out with the following formula; the template with the smallest difference value among all matching results is the best-matching template, and the number corresponding to that template is recorded as the phase-shift order of the fringe;
finally, it is judged whether the identified label has already been identified; if not, the label fringe is named accordingly, otherwise it is not named. This operation is repeated to identify the remaining label fringes until all images have been identified. If every designed label has been identified exactly once, the next operation is carried out; otherwise the image data acquisition is not complete.
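As an illustration of the label identification and ordering steps above, here is a hedged Python/OpenCV sketch; the Hough circle detector, the 64x64 design size and the squared-difference template matching are assumptions in place of the patent's exact choices.

import cv2
import numpy as np

def identify_label_order(captured, templates, label_size=(64, 64)):
    """Recover the phase-shift order of one captured label fringe: isolate the
    circular label area, rescale it to the design size with nearest-neighbour
    interpolation, binarize with Otsu's threshold (minimized intra-class
    variance) and match against the designed templates.
    `templates` is a list of binary design-label images of size `label_size`."""
    # Locate the circular label (Hough circle used in place of the
    # constant-kernel detector named in the text)
    blur = cv2.medianBlur(captured, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=captured.shape[0],
                               param1=100, param2=20, minRadius=5, maxRadius=0)
    x_c, y_c, r = np.round(circles[0, 0]).astype(int)

    # Keep only the label area (template mask M_circle), then crop it
    mask = np.zeros_like(captured)
    cv2.circle(mask, (x_c, y_c), r, 255, thickness=cv2.FILLED)
    label = cv2.bitwise_and(captured, mask)
    label = label[max(y_c - r, 0):y_c + r, max(x_c - r, 0):x_c + r]

    # Nearest-neighbour scaling to the designed label size, then Otsu binarization
    label = cv2.resize(label, label_size, interpolation=cv2.INTER_NEAREST)
    _, label_bin = cv2.threshold(label, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Template matching: the smallest squared difference is the best match,
    # and its index is taken as the phase-shift order of this fringe
    scores = [cv2.matchTemplate(label_bin, t, cv2.TM_SQDIFF)[0, 0] for t in templates]
    return int(np.argmin(scores))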
Further, the specific process of reconstructing the 3D surface profile of the sample by phase calculation, phase unwrapping and phase height mapping in step 4.2) is as follows:
Fringes of three frequencies are projected; the two lower-frequency fringe sets are used to calculate a phase by the dual-frequency heterodyne method, and the low-frequency phase thus obtained then guides the phase calculation of the high-frequency fringes, which effectively improves the accuracy of the phase calculation. Five phase-shift steps are used, the frequencies are f_1, f_1', f_2, and the relationship between the three frequencies is expressed as:
the computer-generated transmittance expression of the 15 fringes is:
where (x_P, y_P) denotes the pixel coordinates of the projected image, and a, b are constants;
after the 15 fringes are made into a flash and captured with the CCD, the expression of an effective fringe extracted after label identification is:
where A(x, y) denotes the average intensity, B(x, y) the intensity modulation, and Φ(x, y) the fringe phase after modulation by the object; the truncated (wrapped) phase is then solved as follows:
where φ_i(x, y) denotes the truncated phase of the fringes of each frequency; the truncated phases of the fringes with frequencies f_1, f_1', f_2 are denoted φ_1(x, y), φ_1'(x, y), φ_2(x, y) respectively;
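The truncated-phase computation for an equal-step phase-shift sequence can be sketched as follows; the 2π/N shift convention and the sign convention follow the standard N-step least-squares form and are assumptions rather than the patent's exact formula.

import numpy as np

def wrapped_phase(frames):
    """Truncated (wrapped) phase from an N-step phase-shift sequence,
    assuming equal shifts of 2*pi/N (N = 5 in the scheme described above);
    `frames` is a list of grayscale images of equal size."""
    n = len(frames)
    shifts = 2.0 * np.pi * np.arange(n) / n
    stack = np.stack([f.astype(np.float64) for f in frames])
    num = np.tensordot(np.sin(shifts), stack, axes=1)   # sum_n I_n * sin(delta_n)
    den = np.tensordot(np.cos(shifts), stack, axes=1)   # sum_n I_n * cos(delta_n)
    return -np.arctan2(num, den)                        # wrapped phase in [-pi, pi)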
the continuous phase Φ_s(x, y) of the synthetic (beat) frequency of the two can be obtained by dual-frequency heterodyning:
where Φ_1(x, y), Φ_1'(x, y) are the object-modulated phases of the fringes with frequencies f_1, f_1'; likewise, the synthetic frequency f_s can be obtained with the following formula
for a point on the measurement surface, the fringe phase after modulation by the object can be expressed as
Φ_j(x, y) = 2πK_j(x, y),  j = 1, 1'    (21)
where K_1(x, y), K_1'(x, y) denote the fringe orders at fringe frequencies f_1, f_1' respectively, given as follows
where k_j(x, y) denotes the integer phase order; when the relative positions of the camera, the projector and the measured object are fixed, the same point on the object occupies the same position in grating images of different periods, so that
substituting equation (21) into equation (23) gives
combining equation (19) and equation (24), the object-modulated fringe phase Φ_1(x, y) can be expressed as
or
Φ_1(x, y) = 2π·k_1(x, y) + φ_1(x, y)    (27)
where k_1(x, y) is the integer phase order to which the truncated phase of the fringe with frequency f_1 belongs at that point;
after Φ_1(x, y) is obtained, the phase order k_2(x, y) of the truncated phase of the fringe with frequency f_2 is obtained from the relation between the absolute phases of the high and low frequencies and their frequencies; the calculation formula is:
substituting k_2(x, y) into the relation between the truncated phase and the continuous phase gives the continuous phase of the high-frequency fringes. The continuous phase obtained by unwrapping only reflects the surface profile of the three-dimensional object; to obtain the actual height of the object, the specific mapping between the two, namely the phase-height mapping matrix, must also be known. To establish this matrix, a reference plane is moved back and forth m (m ≥ 3) times by a specified distance, with fringe projection and acquisition after each movement; the established relationship is generally written as follows:
where Z(x, y) denotes the actual object height to be solved, Φ(x, y) denotes the unwrapped phase obtained from the object-modulated deformed fringes, and a(x, y), b(x, y), c(x, y) are the system mapping parameters to be solved, which can be calculated with the least-squares method.
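The dual-frequency heterodyne step, the low-frequency-guided unwrapping of the high-frequency phase, and a least-squares phase-height calibration can be sketched as follows; the rational phase-height model in the last function is a commonly used form assumed here, since the patent's exact mapping formula is not reproduced in the text.

import numpy as np

def heterodyne_unwrap(phi1, phi1p):
    """Dual-frequency heterodyne step: the wrapped phases of two close
    frequencies f_1 and f_1' give the phase of the synthetic (beat) frequency,
    assumed to span less than one fringe over the field so that it is already
    continuous."""
    return np.mod(phi1 - phi1p, 2.0 * np.pi)

def guide_unwrap(phi_high, phi_low_cont, f_high, f_low):
    """Unwrap a wrapped high-frequency phase using an already-continuous
    lower-frequency phase (low guides high): the integer order k is recovered
    from the scaled low-frequency phase."""
    k = np.round((phi_low_cont * f_high / f_low - phi_high) / (2.0 * np.pi))
    return 2.0 * np.pi * k + phi_high

def phase_height_parameters(phases, heights):
    """Per-pixel least-squares fit of an assumed rational phase-height model
    Z = (a + b*Phi) / (1 + c*Phi), rewritten as a + b*Phi - c*Phi*Z = Z,
    from m >= 3 reference-plane positions (`phases`: list of unwrapped phase
    maps, `heights`: list of known plane heights)."""
    m = len(phases)
    h, w = phases[0].shape
    a = np.empty((h, w)); b = np.empty((h, w)); c = np.empty((h, w))
    z = np.asarray(heights, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            phi = np.array([p[i, j] for p in phases])
            A = np.column_stack([np.ones(m), phi, -phi * z])
            a[i, j], b[i, j], c[i, j] = np.linalg.lstsq(A, z, rcond=None)[0]
    return a, b, c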
Further, in step 4.3) the effective bottom area of the captured sample image is selected and the sample volume is calculated from the 3D surface profile distribution; the specific steps are as follows:
from the 3D profile distribution of the molten steel sample, the height distribution Z(x_w, y_w) of the molten steel sample in the world coordinate system can be obtained, and the final volume of the molten steel sample can be obtained with the following integral formula
V = ∫∫ Z(x_w, y_w) dx_w dy_w    (30)
in the actual volume calculation, in order to save computing resources and to remove background noise and the interference of unavoidable shadows, the effective bottom-contour region of the molten steel sample is extracted before the pixel coordinates (x, y) are converted into world coordinates (x_w, y_w); the contour of the molten steel sample is extracted in two parts;
A. Sample upper half region contour extraction
When extracting the upper half region contour of the sample by means of the dislocated fringe orders, a residual map is obtained by taking the difference between the sample image captured by the measuring system and the undeformed reference fringe image obtained when no sample is placed on the workbench, so that the dislocation information is effectively obtained; the residual map can be obtained as follows
where I_1n(x, y) denotes the n-th undeformed reference fringe pattern of frequency f_1 obtained when no sample is placed on the workbench. After the residual mean map I_ref(x, y) is obtained, the primary upper-half edge contour I_eg(x, y) is obtained by Canny processing; to avoid region-selection errors caused by unclosed edge contours, dilation is performed with formula (1), with M_se replaced by a flat disc-shaped structuring element of radius 1, and the result is denoted I_ei(x, y);
because of the uneven illumination of the working environment, the dilated primary edge contour image I_ei(x, y) still contains small noise contours; to eliminate these invalid contours, 4-connected regions are used to detect all closed contours in I_ei(x, y), denoted D_I^q, q = 1, 2, 3, ..., and D_e^t in formula (2) is replaced by D_I^q to fill the primary edge contour image I_ei(x, y), the filling result being denoted I_fill(x, y);
then 8-connected region detection is performed on the filled image I_fill(x, y), and the filled contour block with the largest area is taken as the upper half region I'_fill(x, y); since the processed contours have all been dilated, I'_fill(x, y) is eroded with the following formula to give the final upper half region of the sample:
where M_se1 is a flat disc-shaped structuring element of radius 1;
B. Sample lower half region contour extraction
The modulation degree of the fringes can be obtained with the following formula:
the obtained modulation M(x, y) is binarized with formula (13) to obtain the primary whole-sample region M_bina(x, y), which includes the shadow region;
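A hedged sketch of the modulation-degree computation and its binarization; the standard N-step modulation expression is used here as a stand-in for the formula referenced above, and the threshold follows the minimized intra-class variance (Otsu) rule of formula (13).

import cv2
import numpy as np

def modulation_mask(frames):
    """Fringe modulation degree of an N-step phase-shift sequence and its
    binarization into a primary whole-sample mask M_bina."""
    n = len(frames)
    shifts = 2.0 * np.pi * np.arange(n) / n
    stack = np.stack([f.astype(np.float64) for f in frames])
    s = np.tensordot(np.sin(shifts), stack, axes=1)
    c = np.tensordot(np.cos(shifts), stack, axes=1)
    mod = 2.0 / n * np.sqrt(s ** 2 + c ** 2)          # modulation degree M(x, y)

    mod8 = cv2.normalize(mod, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, m_bina = cv2.threshold(mod8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mod, m_bina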
Because the surface characteristics of the upper and lower halves of the sample differ, in the obtained primary whole-sample region the gray information appears as a contour in the upper half and as a filled area in the lower half; the two areas can therefore be separated by eroding the primary whole-region distribution map. The erosion of the obtained modulation-extremum segmentation map is realized through formula (1), applying the dilation operation to the image in such a way that an erosion effect is achieved, with M_se taken as a flat disc-shaped structuring element of radius 2; the result is denoted M_bi(x, y);
at the same time, to eliminate the regional 'erosion' effect caused by that formula, M_bi(x, y) is 'dilated' with formula (32), where the M_se1 used is a flat disc-shaped structuring element of radius 2; the result is denoted M_close(x, y);
Canny edge detection is applied to the processed primary lower-half contour M_close(x, y), and a closing operation is applied to the detected primary edge contour to seal the contour cracks; the closed contour map is denoted M_eg(x, y). To eliminate background-noise contours, 4-connected detection is used on all contours in M_eg(x, y), denoted D_M^p, p = 1, 2, 3, ..., and D_e^t in formula (2) is replaced by D_M^p to fill M_eg(x, y), the filling result being denoted M_fill(x, y);
the 8-neighbourhood connected regions of the filled image M_fill(x, y) are then searched, and the largest connected region is taken as the lower half region of the sample, denoted M'_fill(x, y); considering that edge detection enlarges the lower half region, it is eroded with formula (32), and the eroded region is taken as the final lower half region I_hand(x, y) of the sample;
finally, the two obtained sample regions are combined with the following formula to serve as the final sample bottom area;
After the sample bottom area I_mask(x, y) is obtained, the camera system parameters are obtained with a camera calibration algorithm, and the two-dimensional image coordinates are converted into space coordinates through the following coordinate mapping relation;
where [x y 1]^T is the homogeneous representation of the pixel coordinates of the captured image, [x_w y_w 1]^T is the homogeneous representation of the mapped world coordinates, and M_ext, M_ins denote the extrinsic and intrinsic camera parameters respectively; after the world coordinates of every pixel are obtained, the area corresponding to a single pixel is calculated, the unit area of a single pixel being taken as the shaded area S;
when solving the unit area S, the coordinate values of the four vertices of the quadrilateral corresponding to the point must first be calculated; the coordinate values can generally be solved with the bilinear interpolation method. Taking a unit vertex as the central position among its four neighbouring pixels, its coordinates can be expressed as
the coordinate values of the other three vertices can be obtained in the same way; the quadrilateral is approximated as a rectangle, and the value calculated with the following formula is taken as the actual physical area of the pixel block; the areas of all unit blocks are calculated, the corresponding block volumes are obtained, and the value calculated with formula (30) is taken as the final sample volume;
The invention has the following advantages and positive effects.
1. The mechanical arm positioning and grabbing module can meet the requirement of accurately positioning and grabbing a molten steel sample, combines the image recognition and calibration principles, extracts the characteristics of the molten steel sample in different areas, takes the extracted result as the front information of the mechanical arm positioning and grabbing module, realizes the conversion from the image characteristics to the mechanical arm positioning and grabbing postures by utilizing the mechanical arm calibration technology and the terminal rotating shaft mapping function, and finally embeds the part into the whole system to realize the automatic sampling of molten steel.
2. Rapid acquisition of the positioning coordinates of the molten steel sample image and of the sample pose: an edge map containing the sample edge contour is obtained with an image-processing pipeline of edge detection, dilation and filling, and the whole molten steel sample region is determined by selecting the largest filled connected region. According to the geometric characteristics of the molten steel sample, the whole region is divided into an upper half region ('cylinder') and a lower half region ('handle'); since the geometric features of the upper half region are close to a circle, a standard circle is fitted to the whole-region map with the constant-kernel circle detection algorithm, the obtained circular contour is expanded by several pixels, and the expanded circular contour is filled as the upper half region of the molten steel sample; the lower half region is obtained by subtracting the upper half region from the whole contour region of the sample. The centroid of the lower half region is calculated as the positioning coordinate of the molten steel sample image; the relation between the two characterization points is described with a vector, a unit vector parallel to the image plane axis is introduced, and the angle between the two vectors is calculated as the image pose of the molten steel sample.
3. Software-controlled timing of the projected fringes: labels are embedded into the phase-shift fringes; each label has a circular outer contour, its inner pattern combines the Arabic numerals 0-9, and it is represented with binary gray-level information. The label fringes are ordered and made into a flash, projected continuously by the projector and acquired continuously by the CCD. The acquired label fringe is first contour-recognized, circle fitting with the constant-kernel circle detection algorithm determines its label area, the pixels in that area are extracted, the extracted label image is scaled to the designed label size with the nearest-neighbour interpolation algorithm, and a binarization operation unifies the gray-level representation; finally the label image is compared pixel-by-pixel with the designed labels by template matching, the smallest difference in the matching results defines the best-matching template, and the label order corresponding to that template is recorded as the phase-shift order of the label fringe. It is then judged whether this phase-shift order has already appeared; if so the fringe is discarded, otherwise it is kept, and the operation is repeated until all label fringes have been identified. The existing method removes the projector color filter and adds external hardware that supplies a trigger signal to realize fringe timing control, which increases the hardware cost and the complexity of the equipment.
4. The dual low frequency guiding high frequency phase computing method provided by the invention realizes high-precision recovery of the object with abrupt surface change and uneven reflectivity. Three fringe frequencies are designed, two fringe frequencies are close and are low frequency, and the other fringe frequency is high frequency. The three frequency fringes are projected onto a measured object, after the three frequency fringes are captured by a CCD, the phase calculation is carried out on two low frequency fringes with relatively close frequencies to obtain a truncated phase, a double-frequency heterodyne method is used for obtaining an object contour (continuous phase) with relatively low precision, then the phase calculation is carried out on the fringes with relatively high frequencies to obtain a high-frequency truncated phase, the continuous phase of the high-frequency fringes is guided by the continuous phase with relatively low precision to obtain the continuous phase of the high-frequency fringes, and the phase is used as the final contour of a sample. The existing method singly adopts a mode of guiding high frequency by low frequency to obtain the phase of an object, but requires accurate phase calculation results of low frequency stripes, and has limitations; and the object phase is obtained by independently adopting a double-frequency heterodyne method, which is limited by fringe frequency selection, so that the measurement precision is limited.
5. Sampling validity evaluated by calculating the volume of the molten steel sample: the bottom area of the 3D profile of the inspected molten steel sample is selected and divided into an upper half region and a lower half region according to the surface properties of the sample. Because the sample image captured by the measuring system contains fringe information, the upper-half contour of the molten steel sample is obtained by first taking the difference between the deformed fringes and the undeformed reference fringes obtained when no sample is on the workbench to form a residual map, applying an image-processing pipeline of edge detection, dilation and filling to the residual map to obtain an edge map containing the upper-half contour, and selecting the largest filled connected region as the upper-half contour of the molten steel sample. The lower-half contour is obtained by first computing the modulation degree from the deformed fringes and the undeformed reference fringes obtained when no sample is on the workbench, binarizing the modulation degree to obtain the whole sample region, applying a closing operation to the obtained whole region to separate the upper and lower halves, then applying edge detection, a closing operation and a filling image-processing pipeline to the obtained primary lower-half region to obtain an edge map containing the lower-half contour, and determining the lower-half contour of the molten steel sample by selecting the largest filled connected region. Finally, the obtained upper and lower half regions of the molten steel sample are stitched together, and the cracks are closed with a closing-and-filling method to give the final bottom-area contour of the molten steel sample.
Using the idea of differentiating first and integrating afterwards, the volume of the molten steel sample is divided into many small blocks, whose volumes are calculated and summed to give the volume of the molten steel sample. The effective bottom area of the molten steel sample is the bottom-area contour obtained above; it is divided by pixel area, each pixel area is converted into an actual area through the coordinate mapping relation obtained by the camera calibration algorithm, and the phase information of the molten steel sample is converted into actual height with the phase-height mapping method. The volume of a single pixel is obtained by multiplying the height corresponding to the pixel block by its actual area, and the volumes of all pixel blocks in the effective sample area are added to give the volume of the molten steel sample to be measured. The volume of the standard sample is measured with the same measuring method, and the relative volume error between the standard molten steel sample and the measured molten steel sample is calculated to judge the validity of the sampling.
Drawings
FIG. 1 is an automated sampling device for molten steel based on a machine vision system.
Fig. 2 is a schematic diagram of a robotic arm positioning gripper module.
FIG. 3 is a schematic diagram of a defect detection module based on fringe projection.
FIG. 4 is a schematic diagram of a molten steel sample to be inspected.
FIG. 5 is a schematic diagram of an image and mechanical arm shaft deflection mapping structure.
Fig. 6 shows the designed label patterns.
Fig. 7 is a schematic diagram of the phase calculation principle.
FIG. 8 is a schematic diagram showing calculation of the volume of a molten steel sample, wherein (a) is the differentiation of the molten steel sample, (b) is the bottom area segmentation of the molten steel sample, and (c) is a sample diagram shot by a measuring system.
FIG. 9 is a schematic diagram of the sample floor area coordinate relationship.
Fig. 10 is a partial checkerboard view of a robotic arm acquired at different poses.
Fig. 11 is a feature extraction diagram of a sample to be measured.
Fig. 12 is a drawing showing the feature extraction verification of the sample.
FIG. 13 is a striped diagram of a label design and a partially embedded label.
Fig. 14 is a diagram of the sample piece and the fringes collected by the 3D inspection station.
Fig. 15 shows the height distribution effect obtained by 5 phase calculation methods.
Fig. 16 shows the effect of height distribution at different viewing angles.
FIG. 17 is a diagram of the collected 15mm standard block gauge and stripe.
Fig. 18 shows the 15mm gauge height distribution effect obtained by the 5 algorithms.
Fig. 19 shows the 15mm gauge height error results from 5 algorithms.
FIG. 20 shows the process of extracting the bottom area of the upper half of the sample.
Fig. 21 shows the bottom area extraction process of the lower half area of the sample.
FIG. 22 is a sample piece bottom area stitching process.
FIG. 23 is a diagram of samples of molten steel measured for different defect levels.
FIG. 24 is a graph showing the effect of the sample height portion on the phase calculation results of molten steel samples with different defect levels.
Detailed Description
The following detailed description is provided so that the invention, together with its advantages and features, can be more easily understood.
As shown in FIG. 1, an automatic molten steel sampling device based on a machine vision system comprises a first mechanical arm, an automatic tray, a cooling water tank, a second mechanical arm and a molten steel sample defect detection module based on fringe projection, wherein the first mechanical arm is positioned between a melting furnace and the automatic tray, the cooling water tank is positioned on one side of the automatic tray, after the automatic tray is overturned, a molten steel sample can fall into the cooling water tank, and the second mechanical arm is positioned between the cooling water tank and the molten steel sample defect detection module based on fringe projection.
The automatic tray is provided with a turnover device, and after turnover the molten steel sample is poured into the cooling water tank; a mechanical gripper is mounted at the end of the second mechanical arm and, under the control of the second mechanical arm, grasps the molten steel sample in the cooling water tank and delivers it to the fringe-projection-based molten steel sample defect detection module.
As shown in FIG. 3, the fringe-projection-based molten steel sample defect detection module comprises a detection workbench, a digital projector and a CCD. The molten steel sample to be inspected is placed on the detection workbench, the digital projector projects multiple frames of fringes onto the detection workbench, and the CCD continuously acquires the multi-frame deformed fringe patterns; the CCD acquisition frame rate is 30 fps, the acquisition duration is set to 2 s, and the acquired fringes are stored in memory.
An automatic molten steel sampling device based on a machine vision system comprises the following specific working procedures:
step 1), an industrial ladle is fixed at the end of the mechanical arm that carries no grabbing tool; the end of the mechanical arm is moved to the furnace mouth area using the positioning technology, and the joint actions of the mechanical arm are set so as to take out a small amount of molten steel; the center of the sample mold groove area is then located, and the mechanical arm is moved and its joint actions set so as to pour the extracted molten steel into the mold groove;
step 2) after a set time, the automatic tray overturns and pours the molded sample into the cooling pool;
step 3) the sample in the cooling pool is located with the mechanical-arm positioning and grabbing module; after positioning with the machine vision recognition technology, the mechanical arm fitted with the gripper is driven to the located sample point, and an action signal is set to take the sample out and place it on the detection workbench;
And 4) after the detected molten steel sample is placed on a detection workbench, a 3D surface profile distribution of the detected molten steel sample is obtained through fringe projection profilometry by utilizing a molten steel sample defect detection module based on fringe projection, and the volume is calculated to judge whether the sample is qualified or not. The method comprises the following specific steps:
step 4.1) after the molten steel sample to be measured is placed on the detection workbench, the digital projector cyclically plays a flash composed of multiple frames of fringe patterns with a certain phase shift at a frame rate of 20 fps; the CCD is controlled to continuously acquire the multi-frame deformed fringe patterns at a frame rate of 30 fps, the acquisition duration is set to 2 s, and the acquired fringes are stored in memory;
step 4.2) sending the collected fringe set into a computer for label identification and sequencing to realize fringe selection and time sequence restoration, and then carrying out phase calculation, phase expansion and phase height mapping on the effective fringe image to reconstruct the 3D surface profile distribution of the sample;
and 4.3) selecting the effective bottom area of the shot image of the measured molten steel sample, and calculating the sample volume by combining the obtained 3D surface profile distribution to judge whether the sample is qualified.
In the whole working process of the system, the mechanical arm must perform positioning and grabbing operations in steps 1) and 3), and these tasks must be completed together with the image-recognition-based positioning technology. For step 1), because the area of the melting-furnace mouth is large, the recognition range is wide and the positioning accuracy requirement is low; for pouring the molten steel in the ladle into the mold and for the operations of step 3), the working object becomes a sample with a much smaller effective area, which requires the system to position accurately. Therefore, the invention combines the image recognition technology with the mechanical arm to complete the positioning and grabbing task.
The manipulator positioning and grabbing implementation module based on the image recognition technology is shown in fig. 2. When the system works, firstly, a tested molten steel sample piece is placed in a water tank, and a camera is controlled to collect a posture chart; then sending the collected image into a computer for image recognition and other algorithms to extract the image characteristics of the sample; and finally, calculating a mechanical arm positioning point and a deflection direction of grabbing work according to the mapping function so as to drive the mechanical arm to grab.
The molten steel sample to be measured according to the present invention is shown in FIG. 4. The feature recognition of the molten steel sample comprises two steps: acquiring the pose of the target sample, and feature extraction. The pose of the target sample is acquired with the camera fixed at the end of the mechanical arm; the feature extraction processes the acquired image with image recognition technology. The specific recognition technology is as follows:
A. Positioning coordinates
According to FIG. 4, the target sample can be divided into two partial areas, an upper half region ('cylinder') and a lower half region ('handle'). In order to avoid errors in the subsequent sample inspection caused by contact with the sample surface when the mechanical arm grabs the sample, the invention selects a point of the lower half region, which has a smaller contact area, as the mechanical-arm locating-and-grabbing coordinate. Because directly identifying the lower half region from the whole sample region is computationally more complex, and the geometric features of the upper half region are more distinct than those of the lower half region, the upper half region of the sample is identified first, and the identified upper half region assists in obtaining the lower half region. The upper half region is identified as follows:
Firstly, the acquired target sample image is processed with the Canny operator to obtain the primary edge contour I_e(x, y) of the sample; to avoid unclosed cracks in the primary edge contour, which would cause subsequent region-selection errors, the edge contour is dilated with the structuring element M_se according to formula (1),
where M_se is the structuring element used for the dilation; to eliminate background-noise contours in the primary edge contour map, 4-connected regions are used to detect all closed contours in the dilated sample edge contour map I_e'(x, y), denoted D_e^t, and the detected contours are filled,
the filled sample contour map I_pad(x, y) is then examined with 8-connected region detection, and the filled contour with the largest area is taken as the whole sample region, denoted I_outline(x, y). A standard circle is fitted to the whole sample region with the constant-kernel circle detection algorithm proposed by Atherton et al.; the center of the fitted circle is denoted (x_cir, y_cir) and its radius R_cir. Considering that the fitted circular region may not completely cover the upper half region represented by I_outline(x, y), the radius of the fitted circle is expanded by 10 pixels. The expanded result is taken as the contour of the upper half region of the molten steel sample, and the upper half region of the sample is filled with formula (3).
Finally, the obtained upper half region of the sample is removed from the whole sample region with the following formula, and the remaining region is taken as the 'handle' region of the sample.
It is noted from fig. 4 that the structure of the molten steel sample is symmetrical, so using the center of the region as the locating point better meets the system requirements; the centroid (center of mass) is well suited to finding the center point of an irregular object with uniform mass distribution, so the centroid of the 'handle' region is solved and used as the locating point of the target sample. The centroid formula is adapted by replacing the mass of an object point with the gray value of the image point, and the locating-and-grabbing point (x_m, y_m) is then solved with the following formula;
where D denotes the 'handle' region I_handle(x, y), x_i, y_i are the coordinate values of an image point in the region, and m_i is the gray value of the image point;
B. direction of deflection
For a two-dimensional image, the relation between two pixel points is defined by the length of a directed vector segment, combined with the mathematical concept of an angle, and used as the posture of the sample at the current moment. For the molten steel sample divided into two areas, a characteristic point is selected in each area as its characterization point: the upper half area uses the fitted circle centre (x_cir, y_cir), and the lower half area uses the positioning grabbing point (x_m, y_m). The vector between the two characterization points is obtained with the following formula;
To convert the vector into an angle, another vector parallel to the image plane axis is introduced, and the conversion into an angular representation is performed with the following formula:
wherein round{·} is the rounding function;
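As a small illustration of the deflection-angle idea, the sketch below computes the angle of the vector from the cylinder-centre point to the handle grasp point against the image x-axis. The choice of reference axis and sign convention is an assumption, not taken from the original formulas.

```python
import numpy as np

def deflection_angle(p_cir, p_m):
    """Angle of the vector from (x_cir, y_cir) to (x_m, y_m), in degrees,
    rounded as suggested by the round{.} operator in the text."""
    dx = p_m[0] - p_cir[0]
    dy = p_m[1] - p_cir[1]
    return round(np.degrees(np.arctan2(dy, dx)))
```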
mechanical arm positioning based on image feature points
After the positioning point and deflection direction are obtained in the image shot by the camera fixed at the end of the mechanical arm, they are converted into a positioning grabbing point in the coordinate system of the mechanical arm base using the camera calibration and hand-eye calibration techniques, and the end effector is then commanded to position and grab the object.
A. Location grabbing point conversion
wherein z_c is the Z value of the positioning point p_pix(x_m, y_m) in the camera coordinate system (X_C, Y_C, Z_C), M'_ins is the intrinsic matrix obtained by camera calibration, the hand-eye calibration matrix maps the camera frame to the end frame, and the remaining matrix represents the mapping between the end coordinates at the i-th moment and the base coordinates;
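A minimal sketch of this pixel-to-base conversion follows. It assumes the depth z_c, the intrinsic matrix, the hand-eye matrix and the end pose at the shooting instant are already available as 3×3 and 4×4 NumPy arrays; names are illustrative.

```python
import numpy as np

def pixel_to_base(p_pix, z_c, K, T_cam2end, T_end2base):
    """Map the located pixel grasp point into the robot-base frame.
    p_pix: (x_m, y_m); z_c: depth of the point in the camera frame;
    K: 3x3 intrinsic matrix; T_cam2end: 4x4 hand-eye matrix;
    T_end2base: 4x4 end-effector pose at the shooting instant."""
    uv1 = np.array([p_pix[0], p_pix[1], 1.0])
    p_cam = z_c * np.linalg.inv(K) @ uv1          # back-project to the camera frame
    p_cam_h = np.append(p_cam, 1.0)               # homogeneous coordinates
    p_base = T_end2base @ T_cam2end @ p_cam_h     # camera -> end -> base
    return p_base[:3]
```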
B. deflection direction conversion
Because the rotation-angle increment and the rotation-value increment of the mechanical arm axis joint are linearly related, the initial rotation position of the axis joint is made the same as the θ_O reference position with the same rotation direction, as shown in fig. 5. The initial rotation angle value θ_start is read, and the axis deflection angle θ_H is obtained from the following formula;
θ_H = θ_O − θ_start (9)
The mechanical arm positioning and grabbing module has been described in detail above; the fringe-projection-based molten steel sample defect detection module of the system is described below and shown in fig. 3. When the system works, the measured molten steel sample is first placed on the detection workbench, the projector sequentially projects several frames of fringe patterns with a fixed phase shift onto the surface of the sample, and the CCD is controlled to continuously collect the deformed fringe patterns; the collected fringe set is then sent to a computer for selection and timing restoration, after which phase calculation, phase unwrapping and phase-height mapping are applied to the valid fringe images to reconstruct the 3D surface profile distribution of the sample; finally, the volume of the sample is calculated from the 3D surface profile distribution to judge whether the sample is qualified.
Because the system uses the phase-shift technique to acquire the 3D profile of the molten steel sample, the projection and acquisition of the multi-frame fringes must be synchronized during measurement. Existing equipment mostly relies on an additional hardware control unit to synchronize projection and acquisition, which increases the complexity and hardware cost of the measuring equipment. To address this, the invention controls the timing in software: the phase-shifted sinusoidal fringes are assembled into an animation that the projector plays continuously in sequence onto the measured object, the CCD continuously captures the multi-frame deformed fringes, and the valid fringes are selected and ordered by an image recognition method to complete the phase calculation.
The animation played by the projection system consists of a number of ordered phase-shift fringes, played in a loop at a frame rate of 20 fps. The CCD acquires at a frame rate of 30 fps, the acquisition duration is set to 2 s, and the acquired fringes are stored in memory. For the series of captured fringes, a recognition algorithm with low computational complexity is run in software to realize the timing control. Since the inherent features of black-and-white sinusoidal fringe images are not distinctive, directly performing image recognition on the fringe images to distinguish fringes with different phase shifts is hardly viable. A simple and efficient way is to embed the fringe images of different phase shifts with artificially designed labels, which should have low cross-correlation, a simple design, easy recognition, high recognition accuracy and no influence on the accuracy of the measurement system; the fringe images with different phase shifts are then distinguished by recognizing the different labels, achieving the goal of sorting.
The label pattern designed according to these requirements is shown in fig. 6. The Arabic numerals 0-9 are combined to define the order of the phase-shift fringes, the contour of a standard circle is used so that the label area can be identified quickly, and binarization is used to represent the image information so as to reduce the computational complexity of label recognition.
The specific steps of the sorting operation by using the label stripes captured by the CCD in the step 4.1) are as follows:
For a captured label fringe I_cap(x, y), a standard circle is fitted with the constant-kernel circle detection algorithm proposed by Atherton et al., where (x, y) are the pixel coordinates of the captured image; the centre of the fitted circle is denoted (x_circle, y_circle) and its radius R. The fitted circle is drawn on an image of the same size as the label fringe, the centre coordinates and radius in equation (3) are replaced by the current recognition result, and the circle is filled to form the label template M_circle(x, y);
To obtain the label embedded in the current label fringe, the label area in the label fringe is retained with the following formula, and the retained label information is denoted I_label(x, y);
The label information obtained from equation (10) is then extracted with the following formula, and the extracted label is denoted I'_label(x, y),
Because the label-fringe sorting is realized by template matching, the two label images to be matched must have the same size and the same gray-level representation. The extracted label is therefore first scaled to the designed label-image size; considering that the designed label information is simple, the scaling is realized with nearest-neighbour interpolation, and the mapping between the pixels of the label before and after scaling is expressed by the following formula:
wherein (X, Y) are the pixels of the scaled image, I_grade(X, Y) is the scaled label image, w, h are the dimensions of the label image before scaling, and W, H are the dimensions of the label image after scaling;
The scaled label is then binarized to obtain the final label image to be processed, I_num(X, Y)
wherein thresh is the threshold value, calculated with the minimum intra-class variance algorithm proposed by Otsu et al.
Finally, the label image to be processed is matched against the designed reference labels in FIG. 7 using the following formula; the template with the smallest difference value among all matching results is the best matching template, and the label number corresponding to that template is recorded as the phase-shift order of the fringe;
It is finally judged whether the recognized label has already been recognized; if not, the label fringe is named accordingly, otherwise it is not named. This operation is repeated on the remaining label fringes until all images are recognized. If every designed label has been recognized exactly once, the next operation is performed; otherwise the image data acquisition is not complete.
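The sketch below strings the label-recognition steps together with OpenCV: circle detection, label extraction, nearest-neighbour rescaling, Otsu binarization and template matching. Function names, kernel sizes and the Hough parameters are assumptions; the Hough circle detector again stands in for the constant-kernel algorithm.

```python
import cv2
import numpy as np

def identify_label(captured, templates, label_size=(64, 64)):
    """Identify the phase-shift order of one captured label fringe.
    'templates' maps a phase-shift index to a designed binary template."""
    blur = cv2.medianBlur(captured, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=500,
                               param1=100, param2=30)
    if circles is None:
        return None
    x_c, y_c, r = circles[0, 0].astype(int)

    # Keep only the label area inside the fitted circle
    mask = np.zeros_like(captured)
    cv2.circle(mask, (x_c, y_c), r, 255, cv2.FILLED)
    label = cv2.bitwise_and(captured, mask)
    label = label[max(y_c - r, 0):y_c + r, max(x_c - r, 0):x_c + r]

    # Nearest-neighbour scaling to the designed size, then Otsu binarization
    label = cv2.resize(label, label_size, interpolation=cv2.INTER_NEAREST)
    _, label_bin = cv2.threshold(label, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # The template with the smallest absolute difference gives the fringe order
    diffs = {idx: float(np.sum(np.abs(label_bin.astype(int) - tmpl.astype(int))))
             for idx, tmpl in templates.items()}
    return min(diffs, key=diffs.get)
```

In use, a frame would be discarded when `identify_label` returns an order that has already been assigned, which implements the "repeated recognition" check described above.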
Step 4.2) the specific processes of phase calculation, phase unwrapping and phase height mapping to reconstruct the 3D surface profile of the sample are as follows:
The molten steel sample has a large height jump at its edges and at the junction of the upper and lower regions, as shown in fig. 4. The truncated phase measured in these abrupt regions is prone to errors during unwrapping. The conventional solution is a temporal phase unwrapping algorithm; typical variants are low-frequency-guided high-frequency unwrapping and double-frequency heterodyning. The low-frequency-guided high-frequency method requires the phase computed from the low-frequency fringes (frequency f_1) to be accurate, because an error in the low-frequency phase propagates into the phase measured at the high frequency f_2; double-frequency heterodyning requires the two frequencies (f_1 and f_1') to be close to each other and the fringe period of their synthetic frequency f_combined to be larger than the measurement range, which limits the measurement accuracy of the method. To solve these problems, the invention designs a dual-low-frequency-guided high-frequency phase calculation method; the calculation process is shown in fig. 7.
Three fringe frequencies are projected. The phases of the two lower-frequency fringe sets are computed with the double-frequency heterodyne method, and the low-frequency phase obtained in this way then guides the phase calculation of the high-frequency fringes, which effectively improves the accuracy of the phase calculation. Five phase-shift steps are used, and the frequencies are f_1, f_1' and f_2 respectively, with the relationship between the three frequencies expressed as:
The computer-generated transmittance expression for 15 fringes is:
wherein (x_P, y_P) are the pixel coordinates of the projected image and a, b are constants;
After the 15 fringe patterns are assembled into an animation and captured with the CCD, the valid fringes extracted after label recognition have the expression:
wherein A(x, y) is the average intensity, B(x, y) the intensity modulation and φ(x, y) the fringe phase after modulation by the object; the truncated phase is then solved as follows:
wherein φ_i(x, y) is the truncated phase of the fringes of each frequency; the truncated phases of the fringes with frequencies f_1, f_1' and f_2 are denoted φ_1(x, y), φ_1'(x, y) and φ_2(x, y) respectively;
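A compact sketch of the five-step phase-shift demodulation is given below. It assumes the standard model I_n = A + B·cos(φ + 2πn/5) for the five frames of one frequency; the function name and the frame container are illustrative.

```python
import numpy as np

def truncated_phase(frames):
    """Five-step phase-shift demodulation: 'frames' holds the five deformed
    fringe images of one frequency, shifted by 2*pi/5 per step.
    Returns the wrapped (truncated) phase."""
    frames = [f.astype(np.float64) for f in frames]
    num = sum(f * np.sin(2 * np.pi * k / 5) for k, f in enumerate(frames))
    den = sum(f * np.cos(2 * np.pi * k / 5) for k, f in enumerate(frames))
    return np.arctan2(-num, den)
```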
The continuous phase difference Φ_combined(x, y) of the two low-frequency fringe sets can be derived by double-frequency heterodyning:
wherein Φ_1(x, y) and Φ_1'(x, y) are the phases of the fringes with frequencies f_1 and f_1' after modulation by the object; likewise, the frequency f_combined can be obtained from the following formula
For a point on the measurement surface, the fringe phase after being modulated by the object can be expressed as
Φ_j(x, y) = 2πK_j(x, y),  j = 1, 1′ (21)
wherein K_1(x, y) and K_1'(x, y) are the fringe orders at fringe frequencies f_1 and f_1' respectively, where
wherein k_j(x, y) is the phase order. When the relative positions of the camera, the projector and the measured object are fixed, the same object point lies at the same position on the grating images of different periods, so that
Substituting equation (21) into equation (23) gives
Combining equation (19) and equation (24), the fringe phase Φ_1(x, y) modulated by the object is obtained as
or
Φ_1(x, y) = 2π·k_1(x, y) + φ_1(x, y) (27)
wherein k_1(x, y) is the phase order to which the truncated phase at a given point of the fringes with frequency f_1 belongs;
After Φ_1(x, y) is obtained, the phase order k_2(x, y) of the truncated phase of the fringes with frequency f_2 follows from the relation between the absolute phases of the high and low frequencies and their frequencies:
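The heterodyne step (19) and the order-guiding idea of equation (28) can be sketched together as follows. This assumes the standard forms Φ_combined = (φ_1 − φ_1') mod 2π and k = round((Φ_low·f_high/f_low − φ_high)/2π); array names are illustrative, and the chain is applied twice (synthetic frequency → f_1, then f_1 → f_2).

```python
import numpy as np

def heterodyne(phi_1, phi_1p):
    """Double-frequency heterodyne: wrapped difference of the two
    low-frequency truncated phases, i.e. the phase of f_combined = f_1 - f_1'."""
    return np.mod(phi_1 - phi_1p, 2 * np.pi)

def guide_unwrap(Phi_low, phi_high, f_low, f_high):
    """Guide the truncated phase of the higher frequency with an already
    continuous lower-frequency phase and return the continuous high phase."""
    k_high = np.round((Phi_low * f_high / f_low - phi_high) / (2 * np.pi))
    return 2 * np.pi * k_high + phi_high
```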
Substituting k_2(x, y) into the relation between the truncated phase and the continuous phase yields the continuous phase of the high-frequency fringes. The unwrapped continuous phase only reflects the surface profile of the three-dimensional object; to obtain the actual height of the object, a specific mapping between the two is required, i.e. a phase-height mapping matrix. To establish this matrix, the reference plane is moved back and forth m (m ≥ 3) times by a specified distance, with fringe projection and acquisition performed after each movement; the established relation is generally written as:
wherein Z(x, y) is the actual object height to be solved, Φ(x, y) is the unwrapped phase obtained from the deformed fringes modulated by the object, and a(x, y), b(x, y) and c(x, y) are the system mapping parameters to be solved, which can be computed with the least-squares method.
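The per-pixel least-squares calibration of a(x, y), b(x, y), c(x, y) might look like the sketch below. Because equation (29) itself is not reproduced in the text, a quadratic model Z = a + b·Φ + c·Φ² is assumed purely for illustration; the patent's actual mapping form may differ, and the pixel-by-pixel loop is kept simple rather than vectorized.

```python
import numpy as np

def fit_phase_height(phases, heights):
    """Per-pixel least-squares fit of three mapping parameters from m >= 3
    reference-plane positions.
    phases:  array of shape (m, H, W) - unwrapped phase at each position
    heights: array of shape (m,)      - known plane displacements"""
    m, H, W = phases.shape
    P = phases.reshape(m, -1)
    params = np.empty((3, H * W))
    for i in range(H * W):
        A = np.column_stack([np.ones(m), P[:, i], P[:, i] ** 2])  # assumed model
        params[:, i], *_ = np.linalg.lstsq(A, heights, rcond=None)
    return params.reshape(3, H, W)   # a(x,y), b(x,y), c(x,y)
```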
Step 4.3) calculating the volume of the sample according to the 3D surface profile distribution, wherein the specific steps are as follows:
To evaluate the defects of the molten steel sample, the algorithm first obtains the 3D profile of the sample and then converts it into a volume to judge the defect level. When calculating the volume of an irregular object from its 3D contour, the idea of "differentiate first, then integrate" is adopted: the bottom area of the molten steel sample (the X-Y plane) is differentiated in pixel units, dividing the molten steel sample into many small "cubes", as shown in fig. 8 (a). Converting the pixel coordinates (x, y) of the CCD image into world coordinates (x_w, y_w) through camera calibration yields the actual physical size of each pixel unit; the height distribution Z(x_w, y_w) of the molten steel sample in the world coordinate system is obtained from its 3D profile distribution, and the final volume of the molten steel sample is obtained from the following integral formula
V = ∫∫ Z(x_w, y_w) dx_w dy_w (30)
In the actual volume calculation, to save computing resources and remove background noise and the interference of unavoidable shadows, the effective bottom-contour area of the molten steel sample is extracted before the pixel coordinates (x, y) are converted into world coordinates (x_w, y_w).
The extraction of the bottom profile of the molten steel sample can be realized through an edge detection algorithm, and classical edge detection algorithms in image processing comprise a Canny operator, a Sobel operator and the like. In order to obtain the bottom profile of the molten steel sample, the sample is divided into an upper half region and a lower half region according to the surface properties of the sample, and the bottom profile of the molten steel sample is divided as shown in fig. 8 (b).
With the fringe-projection-based molten steel sample defect detection module designed above, a molten steel sample image covered by the projected fringes is shown in fig. 8 (c); the upper half region of the molten steel sample is the white dashed-line box (region A), and the lower half region is the white solid-line box (region B). When extracting the contour of the upper half region, the reflectivity of region A is close to that of the reference-surface region (region C), so the gray-level distribution of the fringes in the two regions is similar; together with the influence of the shadow region (region D, framed by the red dashed box), the contour of the upper half region cannot be extracted directly with an edge detection algorithm. Considering the prominent feature of fig. 8 (c) that fringes deform when modulated by an object, and that this deformation produces a fringe-order dislocation at the boundary between region C and region A, the invention uses this fringe dislocation to extract the upper half region contour of the sample. When extracting the contour of the lower half region, region B has a lower reflectivity and an obvious gray-level jump at its boundary with region C, but the periodic gray-level distribution of the fringes prevents the contour from being extracted directly with an edge detection algorithm; considering that the fringe modulation degree directly reflects the gray-level characteristics of the object, the fringe modulation degree is computed first and the contour of the lower half region is then extracted from the modulation map.
A. Sample upper half region contour extraction
When extracting the upper half region contour of the sample from the fringe-order dislocation, a residual map is obtained by taking the difference between the sample image captured by the measuring system and the undeformed reference fringe images captured when no sample is on the workbench; this exposes the dislocation information effectively, and the residual map is obtained as
wherein I_1n(x, y) is the n-th undeformed reference fringe pattern of frequency f_1 captured when no sample is on the workbench. After the residual mean map I_ref(x, y) is obtained, the primary upper-half-region edge contour I_eg(x, y) is obtained with the Canny operator. To avoid region selection errors caused by unclosed edge contours, the structural element M_se in equation (1) is replaced by a flat disc-shaped structural element of radius 1 and the dilation is applied; the result is denoted I_ei(x, y);
Because of the uneven illumination caused by the working environment, the dilated primary edge contour image I_ei(x, y) contains small noise contours. To eliminate these invalid contours, 4-pixel connectivity is used to detect all closed contours in I_ei(x, y); substituting these contours into equation (2) in place of the contour set used there, the primary edge contour image I_ei(x, y) is filled, and the filling result is denoted I_fill(x, y);
The filled image I_fill(x, y) is then examined with 8-pixel connectivity, and the filled contour block with the largest area is taken as the upper half region of the sample, I'_fill(x, y). Since the processed contours have all been dilated, I'_fill(x, y) is eroded with the following formula to obtain the final upper half region of the sample:
wherein M_se1 is a flat disc-shaped structural element of radius 1.
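A minimal sketch of this residual-map route, under the assumption that the deformed and reference fringe images are available as lists of grayscale arrays, is given below; kernel sizes and Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def upper_half_region(sample_fringes, reference_fringes):
    """Average absolute difference between deformed and reference fringes,
    then Canny -> dilate -> fill the largest contour -> erode back."""
    residual = np.mean(
        [cv2.absdiff(s, r) for s, r in zip(sample_fringes, reference_fringes)],
        axis=0).astype(np.uint8)

    edges = cv2.Canny(residual, 30, 90)
    disc1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))  # radius-1 disc
    edges = cv2.dilate(edges, disc1)

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(edges)
    cv2.drawContours(filled, [max(contours, key=cv2.contourArea)], -1, 255, cv2.FILLED)
    return cv2.erode(filled, disc1)   # undo the dilation applied earlier
```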
B. Sample lower half region contour extraction
The modulation degree of the stripes can be obtained by the following formula:
the obtained modulation M (x, y) is binarized by using equation (13), and the entire area of the primary sample including the shadow area is obtained.
Because the surface characteristics of the upper and lower halves of the sample differ, the gray information of the whole primary sample region appears as a "contour" in the upper half and as a filled "region" in the lower half. The two areas can therefore be separated by eroding the primary whole-region map. However, the reflectivity of region B is lower than that of region C, so the primary map describes the sample with very low values, and the erosion effect is in practice achieved by applying a dilation operation to the image. The modulation extreme-value segmentation map is then eroded with equation (1), using a flat disc-shaped structural element M_se of radius 2, and the result is denoted M_bi(x, y);
At the same time, to eliminate the region "erosion" effect caused by this formula, M_bi(x, y) is "dilated" with equation (32), using a flat disc-shaped structural element M_se1 of radius 2; the result is denoted M_close(x, y)
Edge detection with the Canny operator is performed on the processed primary lower-half-region contour M_close(x, y), and the detected primary edge contours are closed to seal any contour cracks; the closed contour map is denoted M_eg(x, y). Meanwhile, to eliminate background-noise contours, 4-pixel connectivity is used to detect all contours in M_eg(x, y), indexed p = 1, 2, 3, ...; substituting these contours into equation (2), M_eg(x, y) is contour-filled and the filling result is denoted M_fill(x, y);
The filled image M_fill(x, y) is then searched for 8-neighbourhood connected regions, and the largest connected region is taken as the lower half region of the sample, denoted M'_fill(x, y). Considering that edge detection expands the lower half region, it is eroded with equation (32), and the eroded region is taken as the final lower half region of the sample, I_hand(x, y):
And finally, merging the two areas of the obtained sample by using the following formula to serve as a final sample bottom area.
Obtaining the bottom area I of the sample mask After (x, y), obtaining camera system parameters by using a camera calibration algorithm, and converting two-dimensional coordinates of the image into space coordinates by the following coordinate mapping relation;
wherein [x y 1]^T is the homogeneous representation of the pixel coordinates of the captured image, [x_w y_w 1]^T is the homogeneous representation of the mapped world coordinates, and M_ext, M_ins are the extrinsic and intrinsic camera parameters respectively. After the world coordinates of each pixel are obtained, the area corresponding to a single pixel is calculated; the unit area of a single pixel is taken as the shaded area S in fig. 9;
When solving the unit area S, the coordinates of the four vertices of the quadrilateral corresponding to the point must be computed first. The vertex coordinates are generally solved by bilinear interpolation; taking a vertex as the central position among its four neighbouring pixels, its coordinates can be expressed as
The coordinates of the other three vertices are obtained in the same way. The quadrilateral is approximated as a rectangle, and the value computed with the following formula is taken as the actual physical area of the pixel block; the areas of all unit blocks are calculated, the corresponding block volumes are obtained, and the value computed with equation (30) is taken as the final sample volume:
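The discrete form of equation (30) can be sketched as below. The `pixel_to_world` callable is an assumed stand-in for the calibration mapping of equation (33)/(34); using half-pixel offsets for the cell corners and approximating each cell as a rectangle are simplifications of the bilinear-interpolation step described above.

```python
import numpy as np

def sample_volume(Z, mask, pixel_to_world):
    """Sum height * physical pixel area over the bottom-region mask.
    Z: height map in world units; mask: binary bottom-area mask;
    pixel_to_world: maps pixel (x, y) to world (x_w, y_w)."""
    volume = 0.0
    for y, x in zip(*np.nonzero(mask)):
        corners = [pixel_to_world(x + dx, y + dy)
                   for dx, dy in ((-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5))]
        # Approximate the pixel cell as a rectangle
        width = np.hypot(corners[1][0] - corners[0][0], corners[1][1] - corners[0][1])
        height = np.hypot(corners[3][0] - corners[0][0], corners[3][1] - corners[0][1])
        volume += Z[y, x] * width * height
    return volume
```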
To verify the effectiveness of the mechanical arm positioning and grabbing module, an experimental system was built according to the system design requirements. The camera is fixed on the grabbing tool at the end of the mechanical arm; throughout the process, the axial plane of the mechanical arm end must remain parallel to the workbench, and the coordinate axes of the mechanical arm base and of the camera must remain parallel while the whole system works.
Calculation and error analysis of hand-eye transformation matrix
A checkerboard of size 230mm x 130mm is placed on the platform and positioned to appear completely within the camera field of view. The CCD resolution is 768×1024, the first image is taken as a reference image, and the corresponding arm end pose on the teach pendant at this time is recorded. And then, the mechanical arm is moved by using the demonstrator, the checkerboard is kept to be completely appeared in the field of view of the camera in the moving process, the checkerboard image is shot, the position and the posture of the tail end of the mechanical arm corresponding to the current moment are recorded, and the operation is repeated for 15 times. A partial checkerboard image taken in this process is shown in fig. 10.
The captured checkerboard images are sent to a computer for camera calibration to obtain the intrinsic and extrinsic parameters, which are converted into a homogeneous matrix representation; the corresponding mechanical arm end poses are then converted into homogeneous matrices through the Euler angle transformation, and the hand-eye conversion matrix is calculated. The result calculated for this system is
To quantitatively evaluate the accuracy of the hand-eye conversion matrix, the mechanical arm is moved to a working posture and an image is captured with the camera. Pixel points of the captured image are selected at random; using the known world distances between checkerboard points in the reference image as the standard values, the mapped world distances of the selected point groups are calculated and the relative distance errors computed. The results are shown in Table 1.
TABLE 1 hand-eye matrix mapping distance deviation results
As shown in the table, the calibration errors of the distances calculated separately in the X and Y directions are within ±2 mm, and the relative error of the actual distances of the selected points after calibration stays within 1 %, which meets the system design requirements.
Molten steel sample feature identification
According to the feature recognition algorithm proposed herein, feature extraction is performed on the sample placed in fig. 11 (a), and the specific process is shown in fig. 11 (b) - (i).
To verify the effectiveness of the feature extraction, the positions of the centre points of the two areas were measured with a vernier caliper as standard points and marked manually; the molten steel sample was placed on the reference surface at a given deflection, its position was fixed, and an image was captured with the camera as the standard image, as shown in fig. 12 (a). The marks were then removed, the sample was photographed again, and feature extraction was performed on the images. The extraction results are shown in figs. 12 (b)-(c).
The sample pose was changed and the above operation repeated; taking the centre position of the marked area in the captured standard image as the standard coordinate value, the deviation between it and the identified feature point is shown in Table 2.
TABLE 2 feature identification positioning coordinate bias results
As shown in the table, the positioning-coordinate error obtained by feature recognition stays within ±10 pixels, while the effective grabbing range of the handle area of the molten steel sample in the u direction is at least 10 pixels; the error between the deflection value obtained from the feature points and the standard deflection angle is within ±2 rad, so the positioning and grabbing requirements are effectively met, and describing the pose with the angle between the feature points of the two regions is proven effective.
Finally, the cooling process of the molten steel sample was simulated: the molten steel sample to be measured was placed in a semi-closed container and water was injected until the sample was submerged. The sample was placed at random positions; the effectiveness of the positioning point was assessed by comparing the positioning coordinates obtained by the feature recognition algorithm with the centre positions of the marked areas, and the effectiveness of the pose deflection was assessed by comparing the deflection values converted from the feature points recognized in the two areas with those derived from the marked centre positions. The results are shown in Table 3.
TABLE 3 deviation calculation results
The table shows that the calculated coordinate and angle deviations meet the positioning and grabbing requirements, proving that the proposed feature extraction algorithm remains effective for a molten steel sample submerged in water.
To verify the effectiveness of the fringe-projection-based molten steel sample defect detection module, an experimental system was built according to the system design requirements. The projector and the CCD are placed at an angle to each other, with the plane formed by their optical axes coplanar with the symmetry axis of the sample; the background brightness is reduced while the positions of the projector and the camera are adjusted so that a shadow appears in the upper half area of the molten steel sample; finally, appropriate exposure parameters are set to meet the requirements of the measurement system.
The labels designed according to the label-design and phase-calculation requirements are shown in fig. 13 (a), and partial results after embedding them into the fringes are shown in figs. 13 (b)-(d).
The molten steel sample under test was placed on the detection platform. The resolutions of the DLP projector and of the CCD are both 768×1024. The designed fringe images were combined into a video frame sequence and projected onto the measured molten steel sample by the projector, and a series of images was captured by the CCD and stored in the computer. The captured molten steel sample is shown in fig. 14 (a) and the fringe images in figs. 14 (b)-(d); the reference fringe images were captured in advance, after the system was built, and stored in memory.
Verification of the accuracy of identification of a label in timing control
To verify the accuracy of the label recognition, the acquired images were processed by the recognition algorithm. The recognition process and the processed images are shown in Table 4.
TABLE 4 identification procedure for labels
The recognition results in Table 4 are accurate. To rule out chance, the experiment was repeated 100 times and the recognition accuracy reached 100 %, showing that the label recognition method proposed by the invention is effective.
Validity verification of phase computation
In order to verify the effectiveness of the phase calculation method employed in the present invention, a surface profile measurement was performed on the molten steel sample of fig. 14 (a).
The following phase calculation methods were compared: (1) fringe images with frequency f_1 = 1/30, five-step phase shift to compute the truncated phase and the diamond phase unwrapping algorithm to compute the continuous phase; (2) fringe images with frequency f_2 = 1/15, five-step phase shift and the diamond phase unwrapping algorithm; (3) the continuous phase obtained at f_1 = 1/30 guiding the truncated phase computed from the f_2 = 1/15 fringe images; (4) fringe images with frequency f_1' = 1/31, five-step phase shift to compute the truncated phase, combined with the truncated phase of the f_1 = 1/30 fringe images to compute the continuous phase by the double-frequency heterodyne method; and (5) the phase calculation method proposed by the invention. The results are shown in fig. 15. To further highlight the differences between the methods, the results are also shown from a different viewing angle in fig. 16.
As can be seen from figs. 15 and 16, the surface profile measured with the low-frequency fringes of frequency f_1 suffers from phase unwrapping errors caused by high reflectivity and by the shadow area, and with the higher-frequency f_2 fringes the error area grows further; using the low frequency f_1 to guide the high frequency f_2 therefore propagates the errors of f_1 further. The double-frequency heterodyne method and the heterodyne-guided high-frequency method are not affected by this problem and can recover the profile. However, fig. 16 shows that for the upper half area of the measured object the result of the proposed phase calculation method (e2) is smoother and better than that of the double-frequency heterodyne method (d2), because the heterodyne method requires low and closely spaced frequencies, which limits its accuracy.
To further evaluate the accuracy of the proposed phase calculation method, a standard flat gauge block with a height of 15 mm was used as the measured object, as shown in fig. 17 (a); the captured fringe patterns are shown in figs. 17 (b)-(d). The measurement was repeated with the above methods, the effective area of each measurement result was selected and its mean value computed, and the deviation from the mean was used to compare the fluctuation of the five methods. The measurement results and error distributions are shown in figs. 18 and 19, and the corresponding root-mean-square errors are compared in Table 5.
Table 5 error value comparison
Phase method | RMSE
Five-step phase shift (fringe frequency f_1) | 1.8600
Five-step phase shift (fringe frequency f_2) | 1.8453
Low-frequency-guided high-frequency (fringe frequency f_1 guiding f_2) | 1.8780
Double-frequency heterodyne (fringe frequencies f_1, f_1', synthetic frequency) | 1.4521
Proposed algorithm (fringe frequencies f_1, f_1', synthetic frequency guiding f_2) | 1.4472
Table 5 further shows that when the phase errors of the low-frequency fringes and of the high-frequency fringes alone are large, the low-frequency-guided high-frequency phase calculation method propagates the error of the fringe frequency f_1 further, whereas the proposed algorithm recovers more detail than the double-frequency heterodyne algorithm and achieves a better measurement result. In the proposed algorithm the experimental low frequency f_1 and high frequency f_2 are in a double-frequency relation; the measurement accuracy improves further as the frequency-multiplication factor increases.
Volume integration results
Before selecting the bottom area of the obtained sample height map, the captured fringe images are cropped, the modulation-degree map is computed from the cropped result, and the bottom area is then selected with the edge detection scheme designed by the invention. The selection processes for the upper and lower regions are shown in figs. 20 and 21 respectively.
The stitched result of the two regions is then closed and filled, and the corrected result is taken as the final bottom area of the sample. The specific process is shown in fig. 22.
After the bottom area of the molten steel sample is obtained, the sample volume is calculated. The standard molten steel sample was measured several times with the proposed method and the mean value was taken as the standard volume; a measured sample is judged unqualified if its volume deviates from the standard volume by more than 5 % relative volume error. Molten steel samples with different degrees of defect were selected and passed through the algorithm flow of the invention to judge their qualification; the measured samples are shown in fig. 23, the measured surface profile distributions in fig. 24, and the detection results in Table 6.
TABLE 6 different sample detection conditions
Molten steel sample | Volume /mm^3 | Standard volume /mm^3 | Relative volume error | Qualification
1 | 8929.91 | 8230.00 | 8.50% | Unqualified
2 | 8474.28 | 8230.00 | 2.96% | Qualified
3 | 6366.20 | 8230.00 | 22.65% | Unqualified
4 | 8498.11 | 8230.00 | 3.26% | Qualified
5 | 7952.25 | 8230.00 | 3.37% | Qualified
6 | 8528.35 | 8230.00 | 3.63% | Qualified
7 | 7927.71 | 8230.00 | 3.67% | Qualified
8 | 6379.81 | 8230.00 | 22.48% | Unqualified
9 | 5965.74 | 8230.00 | 27.51% | Unqualified
10 | 7954.98 | 8230.00 | 3.34% | Qualified
The table shows that qualified samples (without defects or overflow) are identified with 100 % accuracy, and samples with defective or overflowing parts are likewise detected with 100 % accuracy, meeting the detection requirements.
The invention provides an automatic molten steel sampling device based on a machine vision system, used for process control in steel smelting and for improving the final quality of the steel. For molten steel samples in the cooling pool, the system implements mechanical arm positioning and grabbing combined with image recognition: the contour characteristics of the sample are analysed, positioning and grabbing features are extracted per region, and the mechanical arm calibration algorithm maps the obtained image features from camera coordinates to mechanical arm positioning and grabbing coordinates. The hand-eye conversion matrix experiment verifies that the calibration result meets the grabbing requirements, and the image feature recognition experiment shows that the feature extraction algorithm is effective and accurate. The sample placed on the detection table is then evaluated for qualification; during this evaluation the timing-control problem of fringe projection in an automatic system is solved and the design cost reduced, and a dual-low-frequency-guided high-frequency phase calculation method is adopted to improve the detection accuracy. Finally, the bottom area of the sample is selected using the characteristics of the different regions of the molten steel sample and the volume is calculated to judge its qualification. The label recognition experiment verifies the validity and accuracy of the recognition; the phase calculation comparison experiment shows that the method recovers the sample profile with good accuracy; and the defect detection experiment proves that the designed system achieves the intended detection effect and meets the detection requirements.
While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (7)

1. Automatic sampling device of molten steel based on machine vision system, its characterized in that: the device comprises a first mechanical arm, an automatic tray, a cooling water tank, a second mechanical arm and a molten steel sample defect detection module based on fringe projection, wherein the first mechanical arm is positioned between a melting furnace and the automatic tray, the cooling water tank is positioned at one side of the automatic tray, after the automatic tray is overturned, a molten steel sample can fall into the cooling water tank, and the second mechanical arm is positioned between the cooling water tank and the molten steel sample defect detection module based on fringe projection;
the automatic mechanical arm is provided with a turnover device, the automatic tray is provided with a turnover device, molten steel samples are poured into a cooling water tank after being turned over, the tail end of the second mechanical arm is provided with a mechanical arm, and the mechanical arm is controlled by the second mechanical arm to grasp the molten steel samples in the cooling water tank and send the molten steel samples to the molten steel sample defect detection module based on stripe projection.
2. The automatic molten steel sampling device based on the machine vision system as set forth in claim 1, wherein the specific working process of the device is as follows:
step 1), fixing an industrial ladle at the tail end of a mechanical arm without a grabbing tool, moving the tail end of the mechanical arm to a stove mouth area by utilizing a positioning technology, and setting joint action of the mechanical arm to take out a small amount of molten steel; then positioning the center of a mold groove area of the sample, moving and setting the joint action of the mechanical arm to pour the taken molten steel into the mold groove;
step 2) setting an automatic tray for a certain time and then overturning to pour the molded sample into a cooling pool;
step 3) positioning the sample in the cooling pool with the mechanical arm positioning and grabbing module, using machine vision recognition to move the mechanical arm fitted with the mechanical hand to the located sample point after positioning, and setting the action signal to take the sample out and transfer it to the detection workbench;
and 4) after the detected molten steel sample is placed on a detection workbench, a 3D surface profile distribution of the detected molten steel sample is obtained through fringe projection profilometry by utilizing a molten steel sample defect detection module based on fringe projection, and the volume is calculated to judge whether the sample is qualified or not.
3. The automatic molten steel sampling device based on a machine vision system as claimed in claim 2, wherein: the mechanical arm positioning grabbing module in the step 3) comprises a workbench, a mechanical arm and a CCD (charge coupled device), wherein the CCD is fixed at the tail end of the mechanical arm, a molten steel sample image in the workbench is shot by utilizing the CCD, the grabbing position and the grabbing direction are acquired through computer processing, and the grabbing position and the grabbing direction are converted into corresponding gesture signals of the mechanical arm for grabbing, and the specific steps are as follows:
3.1. feature identification of molten steel sample
The feature recognition of the molten steel sample comprises two steps: acquiring the posture of the target sample and extracting features; the posture of the target sample is acquired with a camera fixed at the end of the mechanical arm; feature extraction processes the acquired image with image recognition techniques, the specific recognition procedure being as follows:
A. positioning coordinates
Dividing a molten steel sample into two partial areas, namely an upper half area 'cylinder' and a lower half area 'handle', firstly identifying the upper half area of the sample, and assisting the upper half area obtained by identification in acquiring the lower half area, wherein the upper half area identification steps are as follows:
firstly, the acquired target sample image is processed with the Canny operator to obtain the primary edge profile I_e(x, y) of the sample; to avoid unclosed cracks in the primary edge profile causing errors in the subsequent region selection, the edge profile is dilated according to the following formula,
wherein M_se is the structural element used for dilation; to eliminate the background-noise contours in the primary edge profile map, 4-pixel connectivity is used to detect all closed contours in the sample edge profile map I'_e(x, y), denoted D_e^t, and the detected contours are filled,
the filled sample profile I_pad(x, y) is examined with 8-pixel connectivity, and the filled contour with the largest area is taken as the whole sample region, denoted I_outline(x, y); a standard circle is fitted to the whole sample region with the constant-kernel circle detection algorithm, and the centre of the fitted circle is denoted (x_cir, y_cir) with radius R_cir; considering that the fitted circular region may not fully cover the upper half area represented by I_outline(x, y), the radius of the fitted circle is expanded by 10 pixels, denoted R'_cir; the expanded result is taken as the contour of the upper half area of the molten steel sample, and the upper half area of the sample is filled with the following formula;
finally, the obtained upper half area is subtracted from the whole sample region with the following formula, and the remaining area is taken as the "handle" area of the sample;
considering that the structure of the molten steel sample is symmetrical, the centre of the searched area serves well as the positioning point; the centre of mass (centroid) locates the centre point of an irregular object with uniform mass distribution particularly well, so the centroid of the handle area is computed as the positioning point of the target sample; the centroid formula is adapted by replacing the mass of an object point with the gray value of the image point, and the positioning grabbing point is solved with the following formula;
wherein D represents the "handle" region I_handle(x, y), x_i, y_i are the coordinate values of an image point of the region, and m_i is the gray value of the image point;
B. direction of deflection
for a two-dimensional image, the relation between two pixel points is defined by the length of a directed vector segment, combined with the mathematical concept of an angle, as the posture of the sample at the current moment; for the molten steel sample divided into two areas, a characteristic point of each area is selected as its characterization point: the upper half area uses the fitted circle centre (x_cir, y_cir) and the lower half area uses the positioning grabbing point (x_m, y_m); the vector between the two characterization points is obtained with the following formula;
to convert the vector into an angle, another vector parallel to the image plane axis is introduced, and the conversion into an angular representation is performed with the following formula:
wherein round{·} is the rounding function;
3.2. mechanical arm positioning based on image feature points
after the positioning point and the deflection direction are obtained in the image captured by the camera fixed at the end of the mechanical arm, they are converted into a positioning grabbing point in the coordinate system of the mechanical arm base with the camera calibration and hand-eye calibration techniques, and the end effector is commanded to act so as to position and grab the object;
A. location grabbing point conversion
wherein z_c is the Z value of the positioning point p_pix(x_m, y_m) in the camera coordinate system (X_C, Y_C, Z_C), M'_ins is the intrinsic matrix obtained by camera calibration, the hand-eye calibration matrix maps the camera frame to the end frame, and the remaining matrix represents the mapping between the end coordinates at the i-th moment and the base coordinates;
B. deflection direction conversion
because the rotation-angle increment and the rotation-value increment of the mechanical arm axis joint are linearly related, the initial rotation position of the axis joint is made the same as the θ_O reference position with the same rotation direction; the initial rotation angle value θ_start is then read, and the axis deflection angle θ_H is obtained from the following formula;
θ_H = θ_O − θ_start (9).
4. The automatic molten steel sampling device based on a machine vision system as claimed in claim 2, wherein: the fringe-projection-based molten steel sample defect detection module in step 4) comprises a detection workbench, a digital projector and a CCD; the measured molten steel sample is placed on the detection workbench, the digital projector projects multi-frame fringes onto the detection workbench, the CCD continuously collects the multi-frame deformed fringe patterns, the collected fringe set is sent to a computer for selection and timing restoration, and phase calculation, phase unwrapping, phase-height mapping and related algorithms are applied to the valid fringe images to reconstruct the 3D surface profile distribution of the sample; finally, the volume of the sample is calculated from the 3D surface profile distribution to judge whether the sample is qualified; the specific steps are as follows:
Step 4.1) after the measured molten steel sample is placed on the detection workbench, the digital projector cyclically plays an animation consisting of several frames of fringe patterns with a fixed phase shift at a frame rate of 20 fps, and the CCD is controlled to continuously collect the multi-frame deformed fringe patterns at a frame rate of 30 fps with an acquisition duration of 2 s; the collected fringes are stored in memory;
step 4.2) the collected fringe set is sent to a computer for label recognition and sorting to realize fringe selection and timing restoration, and phase calculation, phase unwrapping and phase-height mapping are then applied to the valid fringe images to reconstruct the 3D surface profile distribution of the sample;
and 4.3) selecting the effective bottom area of the shot image of the measured molten steel sample, and calculating the sample volume by combining the obtained 3D surface profile distribution to judge whether the sample is qualified.
5. The streak projection-based molten steel sample defect detection module as claimed in claim 4, wherein: step 4.1) performing sequencing operation by utilizing label stripes captured by a CCD, wherein the specific steps are as follows:
for one mark stripe I after capturing cap Performing standard circle fitting by using a constant kernel circle detection algorithm, wherein (x, y) represents pixel coordinates of a shot image, and circle center coordinates corresponding to the fitted circle are marked as (x) circle ,y circle ) The radius is marked as R, a circle obtained by fitting is drawn on an image with the same size as the size of the label stripes, the center coordinates and the radius of the formula (3) are replaced by the current identification result, and the circle is filled to be used as a label template M circle (x,y);
to obtain the label embedded in the current label fringe, the label area in the label fringe is retained with the following formula, and the retained label information is denoted I_label(x, y);
the label information obtained from equation (10) is then extracted with the following formula, and the extracted label is denoted I'_label(x, y),
because the label-fringe sorting is realized by template matching, the two label images to be matched must have the same size and the same gray-level representation; the extracted label is therefore first scaled to the designed label-image size, and considering that the designed label information is simple, the scaling is realized with nearest-neighbour interpolation, the mapping between the pixels of the label before and after scaling being expressed by the following formula:
wherein (X, Y) are the pixels of the scaled image, I_grade(X, Y) is the scaled label image, w, h are the dimensions of the label image before scaling, and W, H are the dimensions of the label image after scaling;
the scaled label is then binarized to obtain the final label image to be processed, I_num(X, Y)
Wherein thresh is a set threshold, the threshold is calculated by adopting a minimized intra-class variance algorithm,
the label image to be processed is matched against the designed label templates with the following formula; the template with the smallest difference value among all matching results is the best matching template, and the label number corresponding to that template is recorded as the phase-shift order of the fringe;
it is finally judged whether the recognized label has already been recognized; if not, the label fringe is named accordingly, otherwise it is not named; this operation is repeated on the remaining label fringes until all images are recognized; if every designed label has been recognized exactly once, the next operation is performed, otherwise the image data acquisition is not complete.
6. The streak projection-based molten steel sample defect detection module as claimed in claim 4, wherein: the specific process of reconstructing the 3D surface profile of the sample by phase computation, phase unwrapping and phase height mapping in step 4.2) is as follows:
three fringe frequencies are projected; the phases of the two lower-frequency fringe sets are computed with the double-frequency heterodyne method, and the low-frequency phase so obtained guides the phase calculation of the high-frequency fringes, which effectively improves the accuracy of the phase calculation; five phase-shift steps are used, and the frequencies are f_1, f_1' and f_2 respectively, with the relationship between the three frequencies expressed as:
the computer-generated transmittance expression for 15 fringes is:
wherein (x_P, y_P) are the pixel coordinates of the projected image, and a, b are constants;
after the 15 fringe patterns are assembled into an animation and captured with the CCD, the valid fringes extracted after label recognition have the expression:
wherein A(x, y) is the average intensity, B(x, y) the intensity modulation and φ(x, y) the fringe phase after modulation by the object; the truncated phase is then solved as follows:
wherein φ_i(x, y) is the truncated phase of the fringes of each frequency; the truncated phases of the fringes with frequencies f_1, f_1' and f_2 are denoted φ_1(x, y), φ_1'(x, y) and φ_2(x, y) respectively;
the continuous phase difference Φ_combined(x, y) of the two low-frequency fringe sets can be derived by double-frequency heterodyning:
wherein Φ_1(x, y) and Φ_1'(x, y) are the phases of the fringes with frequencies f_1 and f_1' after modulation by the object; likewise, the frequency f_combined can be obtained from the following formula
For a point on the measurement surface, the fringe phase after being modulated by the object can be expressed as
Φ_j(x, y) = 2πK_j(x, y),  j = 1, 1′ (21)
wherein K_1(x, y) and K_1'(x, y) are the fringe orders at fringe frequencies f_1 and f_1' respectively, where
wherein k_j(x, y) is the phase order; when the relative positions of the camera, the projector and the measured object are fixed, the same object point lies at the same position on the grating images of different periods, so that
substituting equation (21) into equation (23) gives
combining equation (19) and equation (24), the fringe phase Φ_1(x, y) modulated by the object is obtained as
or
Φ_1(x, y) = 2π·k_1(x, y) + φ_1(x, y)    (27)
wherein k_1(x, y) is the phase order to which the truncated phase of the frequency-f_1 fringes belongs at that point;
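The heterodyne chain of equations (19) to (27), together with the high-frequency order k_2 discussed next, can be sketched as follows; the combined frequency is assumed to be f_c = f_1 − f_1′ and to span the field with a single equivalent fringe, which is the usual dual-frequency heterodyne setting rather than a value stated in the claim:

```python
import numpy as np

def heterodyne_unwrap(phi1, phi1p, phi2, f1, f1p, f2):
    """Dual-frequency heterodyne unwrapping: the two low-frequency wrapped
    phases give a combined phase that guides the unwrapping of phi1, which in
    turn guides the high-frequency phase phi2."""
    two_pi = 2 * np.pi
    # Combined (beat) phase, re-wrapped to [0, 2*pi)
    phi_c = np.mod(phi1 - phi1p, two_pi)
    f_c = f1 - f1p                      # combined (equivalent) frequency, assumed = 1 fringe
    Phi_c = phi_c                       # already continuous if f_c covers the field once
    # Phase order and continuous phase of the f1 fringes
    k1 = np.round((Phi_c * f1 / f_c - phi1) / two_pi)
    Phi1 = two_pi * k1 + phi1
    # Order of the high-frequency fringes from the ratio of absolute phases
    k2 = np.round((Phi1 * f2 / f1 - phi2) / two_pi)
    return two_pi * k2 + phi2           # continuous phase of the f2 fringes
```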
after Φ_1(x, y) is obtained, the phase order k_2(x, y) of the truncated phase of the frequency-f_2 fringes is obtained from the relation between the absolute phases of the high and low frequencies and their frequencies; the calculation formula is:
substituting k_2(x, y) into the relation between the truncated phase and the continuous phase gives the continuous phase of the high-frequency fringes; the unwrapped continuous phase only reflects the surface profile of the three-dimensional object, so the specific mapping relation between the two, namely the phase-height mapping matrix, must be known; to obtain this matrix, the reference plane is moved back and forth m (m ≥ 3) times by a specified distance, with fringe projection and acquisition performed after each movement, and the established relationship is generally written as:
wherein Z(x, y) represents the actual object height to be solved, Φ(x, y) represents the unwrapped phase obtained from the object-modulated deformed fringes, and a(x, y), b(x, y) and c(x, y) are the system mapping parameters to be solved, which can be computed by the least-squares method.
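Since the mapping relation itself is not reproduced in the text, the sketch below assumes a common quadratic per-pixel model Z = a + b·Φ + c·Φ² and fits a, b, c by least squares from the m ≥ 3 reference-plane positions:

```python
import numpy as np

def fit_phase_height(phases, heights):
    """Per-pixel least-squares fit of the mapping parameters a, b, c from m >= 3
    known reference-plane positions, assuming Z = a + b*Phi + c*Phi**2.
    phases:  (m, H, W) unwrapped phase maps of the reference plane
    heights: (m,) known plane heights Z"""
    A = np.stack([np.ones_like(phases), phases, phases**2], axis=-1)  # (m, H, W, 3)
    AtA = np.einsum('mhwi,mhwj->hwij', A, A)          # per-pixel 3x3 normal matrices
    Aty = np.einsum('mhwi,m->hwi', A, heights.astype(float))
    return np.linalg.solve(AtA, Aty[..., None])[..., 0]   # (H, W, 3): a, b, c

def phase_to_height(phi, abc):
    """Apply the fitted mapping to an object phase map."""
    return abc[..., 0] + abc[..., 1] * phi + abc[..., 2] * phi**2
```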
7. The streak projection-based molten steel sample defect detection module as claimed in claim 4, wherein: step 4.3) selecting an effective bottom area of a sample image shot by the module, and calculating the volume of the sample according to the 3D surface profile distribution, wherein the specific steps are as follows:
from the 3D profile distribution of the molten steel sample, the height distribution Z(x_w, y_w) of the sample in the world coordinate system can be obtained, and the final volume of the molten steel sample is given by the following integral:
V = ∬ Z(x_w, y_w) dx_w dy_w    (30)
in the actual volume calculation, to save computing resources and to remove background noise and the interference of unavoidable shadows, the effective area of the bottom contour of the molten steel sample is extracted first, before the pixel coordinates (x, y) are converted into world coordinates (x_w, y_w); the contour of the molten steel sample is extracted in two parts;
A. sample upper half region contour extraction
when the upper-half-region outline of the sample is extracted using the dislocated fringe orders, a residual map is obtained by taking the difference between the sample image captured by the measuring system and the undeformed reference fringe image obtained with no sample on the workbench, so that the dislocation information is effectively obtained; the residual map is computed as follows
wherein the reference term denotes the undeformed reference fringe pattern of frequency f_1 obtained in the n-th acquisition with no sample on the workbench; after the residual mean map I_ref(x, y) is obtained, the preliminary upper-half-region edge contour I_eg(x, y) is extracted with the Canny operator; to avoid region-selection errors caused by an unclosed edge contour, a dilation is applied with the structuring element M_se of formula (1) replaced by a flat disc-shaped structuring element of radius 1, and the result is recorded as I_ei(x, y);
due to the uneven illumination of the working environment, the dilated preliminary edge contour image I_ei(x, y) contains small noise contours; to eliminate these invalid contours, 4-connected region detection is used to find all closed contours in I_ei(x, y), denoted D_I^q, q = 1, 2, 3, ...; with D_e^t in formula (2) replaced by D_I^q, the preliminary edge contour image I_ei(x, y) is filled, and the filling result is recorded as I_fill(x, y);
8-connected region detection is then applied to the filled image I_fill(x, y), and the filled contour block with the largest area is taken as the upper half region of the sample; since the processed contours have all been dilated, this region is eroded by the following formula to give the final upper half region of the sample:
wherein M_se1 is a flat disc-shaped structuring element of radius 1;
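A sketch of the upper-half-region chain with OpenCV; the Canny thresholds and helper name are illustrative choices, while the radius-1 disc, 4-connected filling, largest 8-connected block and final erosion follow the steps above (hole filling is done here with a border flood fill, a common equivalent of contour filling):

```python
import cv2
import numpy as np

def upper_half_region(sample_imgs, ref_imgs, canny_lo=50, canny_hi=150):
    """Upper-half region from residuals between measured fringe images and the
    undeformed reference fringe images of the empty workbench."""
    # Residual mean map I_ref(x, y)
    residual = np.mean([cv2.absdiff(s, r) for s, r in zip(sample_imgs, ref_imgs)],
                       axis=0).astype(np.uint8)
    i_eg = cv2.Canny(residual, canny_lo, canny_hi)                 # preliminary edges
    disc1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))   # radius-1 disc
    i_ei = cv2.dilate(i_eg, disc1)                                 # close small gaps
    # Fill closed contours: flood-fill the background, invert, OR with edges
    filled = i_ei.copy()
    mask = np.zeros((i_ei.shape[0] + 2, i_ei.shape[1] + 2), np.uint8)
    cv2.floodFill(filled, mask, (0, 0), 255)
    i_fill = i_ei | cv2.bitwise_not(filled)
    # Keep the largest 8-connected filled block as the upper-half region
    _, labels, stats, _ = cv2.connectedComponentsWithStats(i_fill, connectivity=8)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    region = np.where(labels == largest, 255, 0).astype(np.uint8)
    return cv2.erode(region, disc1)      # undo the earlier dilation
```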
B. sample lower half region contour extraction
The modulation degree of the stripes can be obtained by the following formula:
the obtained modulation M(x, y) is binarized using formula (13) to obtain a preliminary whole-sample region M_bina(x, y) that includes the shadow region;
because the surface characteristics of the upper and lower halves of the sample differ, the gray information of the preliminary whole-sample region appears as a contour in the upper half and as a solid area in the lower half, so the two areas can be separated by eroding the whole-region distribution map of the preliminary sample; the obtained modulation-extremum segmentation map is eroded through formula (1) (the operation used for dilation realises the erosion effect here), with M_se being a flat disc-shaped structuring element of radius 2, and the result is recorded as M_bi(x, y);
at the same time, to eliminate the regional over-erosion caused by this formula, M_bi(x, y) is dilated using formula (32), where M_se1 is a flat disc-shaped structuring element of radius 2, and the result is recorded as M_close(x, y);
edge detection with the Canny operator is applied to the processed preliminary lower-half-region outline M_close(x, y), and the detected preliminary edge contour is closed to seal the contour cracks, the closed contour map being recorded as M_eg(x, y); meanwhile, to eliminate background noise contours, 4-connected detection is used to find all contours in M_eg(x, y), indexed by p = 1, 2, 3, ...; replacing the corresponding term in formula (2) with these contours, contour filling is applied to M_eg(x, y) and the result is recorded as M_fill(x, y);
an 8-connected region search is then applied to the filled image M_fill(x, y), and the largest connected region is taken as the lower half region of the sample; considering that edge detection enlarges this region, it is eroded using formula (32), and the eroded region is taken as the final lower half region of the sample, I_hand(x, y);
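A corresponding sketch for the lower-half region; the modulation is computed with the standard N-step formula and the binarization of formula (13) is replaced here by Otsu's threshold, both assumptions, while the radius-2 disc, closing of contour cracks, 4/8-connected processing and final erosion follow the steps above:

```python
import cv2
import numpy as np

def lower_half_region(frames):
    """Lower-half region of the sample from the fringe modulation map.
    frames: the 5 phase-shifted images of one fringe frequency."""
    n = len(frames)
    k = np.arange(n).reshape(n, 1, 1)
    stack = np.stack(frames).astype(np.float64)
    # Fringe modulation B(x, y) via the standard N-step formula
    s = np.sum(stack * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(stack * np.cos(2 * np.pi * k / n), axis=0)
    mod = (2.0 / n) * np.sqrt(s**2 + c**2)
    mod8 = cv2.normalize(mod, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Binarize the modulation map (Otsu used here in place of formula (13))
    _, m_bina = cv2.threshold(mod8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    disc2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))   # radius-2 disc
    m_bi = cv2.erode(m_bina, disc2)      # separate upper (contour) and lower (area) parts
    m_close = cv2.dilate(m_bi, disc2)    # compensate the over-erosion
    # Close contour cracks on the Canny edges, fill, keep the largest 8-connected block
    m_eg = cv2.morphologyEx(cv2.Canny(m_close, 50, 150), cv2.MORPH_CLOSE, disc2)
    fill = m_eg.copy()
    mask = np.zeros((m_eg.shape[0] + 2, m_eg.shape[1] + 2), np.uint8)
    cv2.floodFill(fill, mask, (0, 0), 255)
    m_fill = m_eg | cv2.bitwise_not(fill)
    _, labels, stats, _ = cv2.connectedComponentsWithStats(m_fill, connectivity=8)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    region = np.where(labels == largest, 255, 0).astype(np.uint8)
    return cv2.erode(region, disc2)      # shrink back the edge-detection growth
```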
Finally, combining the two areas of the obtained sample by using the following formula to serve as a final sample bottom area;
after the sample bottom region I_mask(x, y) is obtained, the camera system parameters are obtained with a camera calibration algorithm, and the two-dimensional image coordinates are converted into spatial coordinates through the following coordinate mapping relation;
wherein [x y 1]^T is the homogeneous representation of the pixel coordinates of the captured image, [x_w y_w 1]^T is the homogeneous representation of the mapped world coordinates, and M_ext, M_ins respectively denote the extrinsic and intrinsic camera parameters; after the world coordinates of each pixel are obtained, the area corresponding to a single pixel is calculated, the unit area of a single pixel being taken as the shaded area S;
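The coordinate conversion can be sketched as follows, assuming the usual planar model in which the workbench plane is z_w = 0, so that the calibrated intrinsics and extrinsics reduce to a plane homography (the exact claimed relation is not reproduced in the text):

```python
import numpy as np

def pixel_to_world(px, M_ins, R, t):
    """Map pixel coordinates to world coordinates on the workbench plane z_w = 0.
    px: (N, 2) pixel coordinates; M_ins: 3x3 intrinsics; R, t: extrinsics."""
    # s * [x, y, 1]^T = M_ins @ [r1 r2 t] @ [x_w, y_w, 1]^T  (plane homography)
    H = M_ins @ np.column_stack((R[:, 0], R[:, 1], t))
    ph = np.column_stack((px, np.ones(len(px))))          # homogeneous pixels
    wh = (np.linalg.inv(H) @ ph.T).T
    return wh[:, :2] / wh[:, 2:3]                         # (x_w, y_w)
```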
when solving the unit area S, the coordinate values of the four vertices of the quadrilateral corresponding to the point must first be calculated, which can generally be done by bilinear interpolation; taking a unit vertex located at the centre of its four neighbouring pixels, its coordinates can be expressed as
the coordinate values of the other three vertices are obtained in the same way; the quadrilateral is approximated by a rectangle, and the value computed by the following formula is taken as the actual physical area of the pixel block; the areas of all unit blocks are calculated, the corresponding block volumes are obtained, and the value computed by formula (30) is taken as the final sample volume;
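Putting the last steps together, a sketch of the per-pixel area and the discrete form of integral (30); the vertex between four neighbouring pixels is taken as their equal-weight (bilinear) average, each pixel quadrilateral is approximated by a rectangle, and the block volumes are summed over the extracted bottom region:

```python
import numpy as np

def sample_volume(Xw, Yw, Z, mask):
    """Approximate V = ∬ Z dx_w dy_w over the masked sample bottom region.
    Xw, Yw, Z: (H, W) per-pixel world coordinates and heights; mask: bool (H, W)."""
    # Vertices at the centres of every 2x2 pixel neighbourhood
    vx = 0.25 * (Xw[:-1, :-1] + Xw[:-1, 1:] + Xw[1:, :-1] + Xw[1:, 1:])
    vy = 0.25 * (Yw[:-1, :-1] + Yw[:-1, 1:] + Yw[1:, :-1] + Yw[1:, 1:])
    # Rectangle approximation: side lengths from adjacent vertices, area S = w * h
    w = np.abs(vx[:, 1:] - vx[:, :-1])
    h = np.abs(vy[1:, :] - vy[:-1, :])
    S = w[1:, :] * h[:, 1:]                       # unit areas of interior pixels
    inner = mask[1:-1, 1:-1]
    return float(np.sum(S * Z[1:-1, 1:-1] * inner))
```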
CN202310371191.1A 2023-04-07 2023-04-07 Automatic sampling device of molten steel based on machine vision system Pending CN116499801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310371191.1A CN116499801A (en) 2023-04-07 2023-04-07 Automatic sampling device of molten steel based on machine vision system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310371191.1A CN116499801A (en) 2023-04-07 2023-04-07 Automatic sampling device of molten steel based on machine vision system

Publications (1)

Publication Number Publication Date
CN116499801A true CN116499801A (en) 2023-07-28

Family

ID=87319437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310371191.1A Pending CN116499801A (en) 2023-04-07 2023-04-07 Automatic sampling device of molten steel based on machine vision system

Country Status (1)

Country Link
CN (1) CN116499801A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116907368A (en) * 2023-08-26 2023-10-20 广州市西克传感器有限公司 Method for automatically measuring diameter of monocrystalline silicon rod based on height map of multiple 3D cameras
CN117589779A (en) * 2023-11-28 2024-02-23 苏州瑞德智慧精密科技股份有限公司 Visual inspection system and hardware fitting forming equipment
CN117923191A (en) * 2024-03-21 2024-04-26 沈阳奇辉机器人应用技术有限公司 Intelligent control method and system for unloading of car dumper based on unhooking robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination