CN113592962B - Batch silicon wafer identification recognition method based on machine vision - Google Patents


Info

Publication number
CN113592962B
CN113592962B CN202110968922.1A CN202110968922A
Authority
CN
China
Prior art keywords
image
silicon wafer
wafer
box
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110968922.1A
Other languages
Chinese (zh)
Other versions
CN113592962A (en)
Inventor
田增国
张宏帅
曹芳
姜宝柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luoyang Dejing Intelligent Technology Co ltd
Original Assignee
Luoyang Dejing Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Dejing Intelligent Technology Co ltd filed Critical Luoyang Dejing Intelligent Technology Co ltd
Priority to CN202110968922.1A priority Critical patent/CN113592962B/en
Publication of CN113592962A publication Critical patent/CN113592962A/en
Application granted granted Critical
Publication of CN113592962B publication Critical patent/CN113592962B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P70/00Climate change mitigation technologies in the production process for final industrial or consumer products
    • Y02P70/50Manufacturing or production processes characterised by the final manufactured product


Abstract

The batch silicon wafer identifier recognition method based on machine vision comprises the following steps: step 1: obtain an image of the whole box of silicon wafers with an industrial camera; step 2: perform character perspective deformation correction on the whole-box silicon wafer image; step 3: apply a positioning and segmentation algorithm to the characters obtained in step 2; step 4: recognize the segmented characters. The invention reduces the difficulty of the character classification algorithm, so that fast and accurate character recognition can be achieved with a lighter classification algorithm, and provides a non-contact recognition method that identifies a whole box of silicon wafers without touching or removing the wafers, avoiding the risks of wafer contamination and scratches.

Description

Batch silicon wafer identification recognition method based on machine vision
Technical Field
The invention belongs to the field of computer-based identification, and particularly relates to a method for recognizing the identifiers of a whole box of silicon wafers in batches.
Background
With the rapid development of the large-scale integrated circuit industry, semiconductor materials have penetrated every field of the national economy. More than 95% of devices in the current semiconductor market use silicon, so silicon production directly affects economic development. The semiconductor industry generally laser-engraves a wafer code on the edge of the main reference surface of each silicon wafer; the code serves as the wafer's unique identifier, making products traceable and providing basic data for improved quality control.
At present, the mainstream ways of reading silicon wafer engraving are manual visual inspection and single-wafer ID readers. In the manual mode an operator observes the wafer with a strong lamp and a magnifying glass, but the strong light easily causes visual fatigue, leading to low efficiency, a high error rate and high labor intensity; the recognition speed is slow and cannot meet factory production requirements. Against the background of rapidly developing optoelectronic technology, a technology that substitutes a machine for the human eye, namely machine vision, has emerged. Machine vision offers high automation, high precision, convenience and safety, and is widely applied in manufacturing, agriculture and the automotive industry. It has great advantages in detection precision: with a high-performance camera and lens for image acquisition and a series of image processing algorithms, detection precision can reach the sub-micron level.
Single-wafer identifier readers fall into two main categories. In the first, a mechanical arm takes the wafers out of the wafer box one by one for recognition, for example the IL2000 product of Hywing Technology; such equipment carries the risk of contaminating and scratching the wafers, and recognizing a whole box takes more than two minutes. The second, more advanced category uses a special mechanical structure to push the wafers so that identifier images can be captured one by one without removing the wafers from the box, which avoids contamination and scratching, for example the R2D identifier reader of Amtech Group and the identifier reader of GL-Automation; this equipment still takes 35 seconds to read a box of wafer identifiers, which cannot meet the mass-production requirements of wafer manufacturers.
A prior patent application discloses a whole-box silicon wafer laser-marking identification device and recognition method, with the following technical scheme: the industrial camera is moved to shoot from different directions, the multiple pictures are stitched, and the stitched picture is recognized, thereby identifying the whole box of wafers. This method requires a complete industrial camera positioning system, is costly, and the long shooting time makes it inefficient; the recognition accuracy depends on the angles and number of the photographed pictures and is not high.
Disclosure of Invention
The technical problem the invention aims to solve is: for a whole box of silicon wafers, how to accurately identify every wafer identifier at one time. To this end, a batch silicon wafer identifier recognition method based on machine vision is provided.
In order to solve the technical problems, the invention adopts the following technical scheme:
the batch silicon wafer identification recognition method based on machine vision comprises the following steps:
step 1: obtaining an image of the whole box of silicon wafers through an industrial camera: placing a batch of silicon wafers into a wafer box, placing the whole wafer box on a wafer box base, and photographing the whole wafer box by an industrial camera to obtain a whole box silicon wafer image;
step 2: character perspective deformation correction is performed on the whole-box silicon wafer image: first, with the center of the upper surface of the base as the origin of world coordinates, the camera intrinsic and extrinsic parameters are calibrated, i.e. the relative pose of the camera and the wafer box is obtained; then a 3D model of the silicon wafers and the wafer box is constructed in the world coordinate system, and the pose of each wafer is analyzed to obtain the three-dimensional vertex coordinates of the rectangle bounding the character region; finally, the four vertices are projected into the pixel coordinate system and a rectangular bounding box is generated according to the actual aspect ratio, giving the perspective transformation matrix and correcting the image distortion;
step 3: performing a positioning and segmentation algorithm on the character obtained in the step 2;
step 4: and recognizing the segmented characters.
In step 1, the industrial camera uses a tilted top-down viewing angle and keeps an included angle with the wafer axis, so that the character regions of all silicon wafers in the wafer box under inspection are completely captured in the whole-box silicon wafer image.
The step 2 comprises the following steps:
2.1, building the silicon wafer and wafer box model: the coordinate system of the wafer box and silicon wafers is defined as the world coordinate system, and the center of the wafer box base is defined as the origin $O_w$ of the world coordinate system; the Z axis is perpendicular to the upper surface of the base and points upward, the X and Y axes lie in the upper surface of the wafer box base, the Y axis points toward the wafer box opening, and the X axis is perpendicular to the opening direction. The world coordinate of the center of the i-th silicon wafer is denoted $o_i = [0, 0, h_i]^T$, $i \in \{1, \dots, N\}$; from the 3D model of the wafer box and base, the height $h_i$ of the i-th silicon wafer can be calculated by the following formula:
$h_i = h_0 + (i-1)\cdot\Delta h$ (1)
where $h_0$ is the distance from the bottommost wafer to the upper surface of the base, and the center-to-center spacing of adjacent wafers is the same, denoted $\Delta h$;
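As a quick sketch of the slot-height relation (1): the numeric values below ($h_0$, $\Delta h$, $N$) are illustrative assumptions, not values from the patent.

```python
import numpy as np

# h0, dh and N are illustrative assumptions, not values from the patent.
h0, dh, N = 10.0, 4.76, 25   # bottom-wafer height (mm), slot pitch (mm), slot count

# Eq. (1): h_i = h0 + (i - 1) * dh for i = 1..N
heights = h0 + (np.arange(1, N + 1) - 1) * dh
print(heights[0], heights[-1])   # heights of the bottom and top wafers
```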
for a silicon wafer with radius R and ridge length L, the two end points of the ridge can be expressed as:
2.2, calibrating the camera intrinsic and extrinsic parameters: according to the camera extrinsic parameters $(R_{CW}, t_{CW})$, i.e. the rotation matrix $R_{CW}$ and translation vector $t_{CW}$ of the camera coordinate system relative to the world coordinate system, a point $P_W = [X, Y, Z]^T$ in the world coordinate system is transformed by the extrinsic parameters into the camera coordinate system as $P_C = [X_C, Y_C, Z_C]^T$:
$P_C = R_{CW} P_W + t_{CW}$ (5)
Further, with the camera intrinsic matrix $K$, the formula projecting a point in the world coordinate system into the image coordinate system can be written as:
$Z_C\, P_{uv} = K (R_{CW} P_W + t_{CW})$ (6)
where the intrinsic matrix $K$ of the camera and the rotation matrix $R_{CW}$ and translation vector $t_{CW}$ of the camera coordinate system relative to the world coordinate system are calculated in turn by a calibration algorithm;
2.3, registering the silicon wafer three-dimensional model: the ridge end points are projected into the image coordinate system according to the camera intrinsic and extrinsic parameters, giving the end-point coordinates $p_{i1} = [u_{pi1}, v_{pi1}, 1]^T$, $p_{i2} = [u_{pi2}, v_{pi2}, 1]^T$ in the image coordinate system; the slope $k_i$ of the straight line $l_i$ through the two end points $p_{i1}$, $p_{i2}$ is:
$k_i = (v_{pi2} - v_{pi1}) / (u_{pi2} - u_{pi1})$
A straight-line extraction algorithm is applied to the actually acquired image and the line equation of each ridge is calculated, so that each ridge line $l'_i$ in the image can be associated with the wafers $o_i$ from top to bottom, yielding the rotation angle $\theta_i$ of each wafer;
2.4, character image distortion correction: after the wafer rotation angle $\theta_i$ is determined, the world coordinates of the character region on each silicon wafer are further calculated.
In constructing the silicon wafer and wafer box model, when the rotation angle $\theta_i$ is taken into account, the two end points of the ridge can be further calculated by the following formula:
$P'_{ij} = R_z(\theta_i)\, P_{ij} = [x_{ij}\cos\theta_i - y_{ij}\sin\theta_i,\; x_{ij}\sin\theta_i + y_{ij}\cos\theta_i,\; h_i]^T,\quad j \in \{1, 2\}$ (3)
With the center of the upper surface of the wafer box base as the origin of the world coordinate system, a three-dimensional model of the silicon wafers and the wafer box is generated from the wafer heights and rotation angles, to facilitate the subsequent analysis of the three-dimensional wafer poses.
The wafer rotation angle $\theta_i$ is obtained by a step search with a step of 0.1 rad, finding the angle $\theta_i^*$ at which the difference between the slope $k'_i$ of the ridge line $l'_i$ detected in the image and the slope of the corresponding projected ridge line $l_i$ is minimal; the objective function is defined as:
$\theta_i^* = \arg\min_{\theta_i} \left| k'_i - k_i(\theta_i) \right|$
from which the rotation angle of each silicon wafer in the wafer box is calculated.
In step 2.4, the 2D coordinates of the 4 corner points are obtained from the process requirements and, combined with the wafer height $h_i$, give the 3D corner coordinates $Q_{i1}$, $Q_{i2}$, $Q_{i3}$, $Q_{i4}$; using the calculated wafer rotation angle $\theta_i$, the four corner points are rotated to obtain the rotated corner coordinates $Q'_{i1}$, $Q'_{i2}$, $Q'_{i3}$, $Q'_{i4}$; these are then projected with the extrinsic and intrinsic parameters to obtain the four corner coordinates on the image, from which the actual character-region image $I_i$ can be cut out of the image.
For the character-region image $I_i$, the character region on each wafer is restored to a $W \times H$ target image $I'_i$: using the corner points $q_{i1}$, $q_{i2}$, $q_{i3}$, $q_{i4}$ of the original character region and the corner points $q'_{i1}$, $q'_{i2}$, $q'_{i3}$, $q'_{i4}$ of the target region, a perspective transformation matrix $T_i$ is calculated, and the original character-region image $I_i$ is transformed to obtain the corrected target image $I'_i$.
In step 3, denoising, filtering and adaptive binarization are first performed on the acquired image, and a morphological closing is then used to connect the hole regions in the binary image, completing the preprocessing; the number of white pixels in each row is counted by horizontal projection to determine the row range of the characters; the characters are then segmented by vertical projection, giving an image of each character.
In step 4, the character images are classified with a convolutional neural network; an architecture with two convolutional layers is selected for recognition-model training, and a softmax activation is applied at the end to perform the nonlinear fitting and obtain the multi-class output.
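The patent does not give the network weights or exact layer sizes; the sketch below only illustrates the stated classifier shape (two convolutional layers followed by a softmax output) as a plain-NumPy forward pass. The random stand-in weights, layer sizes and the assumed 36-class alphabet are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D cross-correlation of an image stack x (C,H,W)
    with a filter bank w (F,C,kh,kw)."""
    F, C, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[f])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random weights stand in for trained parameters; shapes are illustrative.
w1 = rng.normal(0, 0.1, (4, 1, 3, 3))    # first convolutional layer
w2 = rng.normal(0, 0.1, (8, 4, 3, 3))    # second convolutional layer
n_classes = 36                            # assumed alphabet, e.g. 0-9 and A-Z
img = rng.random((1, 20, 14))             # one segmented character image

h1 = relu(conv2d(img, w1))                # (4, 18, 12)
h2 = relu(conv2d(h1, w2))                 # (8, 16, 10)
wf = rng.normal(0, 0.1, (n_classes, h2.size))
probs = softmax(wf @ h2.ravel())          # multi-class softmax output
print(probs.shape, probs.sum())
```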
The invention adopting the above technical scheme has the following beneficial effects:
1. After the image of the whole box of silicon wafers is obtained by an industrial camera, a character correction method based on three-dimensional model registration is provided, which combines the 3D model of the boxed wafers to correct the perspective deformation of the characters; this reduces the difficulty of the character classification algorithm, so that fast and accurate character recognition can be achieved with a lighter classification algorithm.
2. The invention provides a dedicated light field and imaging system suitable for batch recognition of silicon wafer identifiers, achieving image acquisition with a large field of view, large depth of field and high definition.
3. The invention provides a non-contact recognition method that identifies a whole box of silicon wafers without touching or removing the wafers, avoiding the risks of wafer contamination and scratches.
4. The invention provides a character distortion correction algorithm based on three-dimensional model registration, which greatly reduces the difficulty of character classification, so that a lightweight classification algorithm achieves accurate and efficient recognition.
5. The invention provides an interface program that connects seamlessly with the manufacturing execution system, enabling interconnection of production data.
Drawings
FIG. 1 is a schematic diagram of an industrial camera system of the present invention;
FIG. 2 is a schematic view of the overall structure of the cassette and the base;
FIG. 3 is a diagram of a silicon wafer and its center position;
FIG. 4 is a non-rotating top view of a silicon wafer;
fig. 5 is a top view of a wafer with rotation.
Detailed Description
The present invention is not limited to the following examples; specific embodiments can be determined according to the technical scheme of the invention and the practical situation.
A batch silicon wafer identification recognition method based on machine vision comprises the following steps:
1. An image of the whole box of silicon wafers is obtained with an industrial camera. The general structure of the industrial camera system is shown in Fig. 1; it mainly comprises an upper computer (IPC), an industrial camera, line light sources, a wafer box and a wafer box base. The batch of silicon wafers 1 is placed in the wafer box 2, and the whole box 2 is placed on the box base 3, which fixes the box 2; the base 3 is provided with a locating slot so that the box position is fixed at every image acquisition, guaranteeing acquisition consistency. A line light source for illumination is arranged on each side of the wafer box 2; for each wafer the light arrives at a small angle, so the texture stands out, the laser-engraved characters are extremely clear, and the chamfered edges are also lit. The industrial camera uses a tilted top-down viewing angle and keeps a certain included angle with the wafer axis, so that the character regions of all wafers in the box under inspection can be completely captured in the image. The industrial camera is the basic image-acquisition module, and its imaging quality directly determines the character recognition rate; a suitable camera resolution and depth-of-field range must be chosen to guarantee clear imaging of all wafer characters in the box. The camera resolution is 5472 × 3648, the lens focal length is 25 mm, and the character width is 1.624 mm, so that each character spans about 50 pixels horizontally. This system design captures clear images while effectively controlling hardware cost.
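A one-line sanity check of the stated optics follows; the character width and pixel count are taken from the text above, and the computed value is the implied object-side pixel footprint:

```python
char_width_mm = 1.624   # character width stated in the text
pixels_per_char = 50    # horizontal pixels per character stated in the text
mm_per_pixel = char_width_mm / pixels_per_char
print(mm_per_pixel)     # implied lateral size of one pixel on the wafer
```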
2. And carrying out character perspective deformation correction on the whole box silicon wafer image.
Since the camera shoots at a tilt, the character region undergoes perspective deformation, which affects the accuracy of character segmentation and recognition, so distortion correction is applied to the character region. First, with the center of the upper surface of the base as the origin of world coordinates, the camera intrinsic and extrinsic parameters are calibrated, i.e. the relative pose of the camera and the wafer box is obtained. Then a 3D model of the silicon wafers and the wafer box is built in the world coordinate system, and the pose of each wafer is analyzed to obtain the three-dimensional vertex coordinates of the rectangle bounding the character region. Finally, the four vertices are projected into the pixel coordinate system and a rectangular bounding box is generated according to the actual aspect ratio, giving the perspective transformation matrix and correcting the image distortion.
And 2.1, constructing a silicon chip and wafer box model.
In order to perform three-dimensional model registration, a three-dimensional model of the silicon wafers and the wafer box is first constructed, and the coordinate system of the wafer box and wafers is defined as the world coordinate system, as shown in Figs. 2 and 3. The center of the wafer box base is defined as the origin $O_w$ of the world coordinate system; the Z axis is perpendicular to the upper surface of the base and points upward, the X and Y axes lie in the upper surface of the wafer box base, the Y axis points toward the wafer box opening, and the X axis is perpendicular to the opening direction. When the silicon wafers are placed in the wafer box their center positions coincide, so coaxiality is guaranteed.
The world coordinate of the center of the i-th silicon wafer is denoted $o_i = [0, 0, h_i]^T$, $i \in \{1, \dots, N\}$; from the 3D model of the wafer box and base, the height $h_i$ of the i-th wafer can be calculated by the following formula:
$h_i = h_0 + (i-1)\cdot\Delta h$ (1)
where $h_0$ is the distance from the bottommost wafer to the upper surface of the base, and $\Delta h$ is the equal center-to-center spacing of adjacent wafers. For the i-th wafer in the wafer box, if the wafer has no rotation angle, its ridge line $P_{i1}P_{i2}$ is parallel to the X axis, as shown in Fig. 4. In the actual production process, however, device vibration means the ridge line cannot be guaranteed parallel to the X axis, i.e. each wafer has a certain rotation angle $\theta_i$ in the X-Y plane, as shown in Fig. 5. When the whole box of wafers is photographed at a tilt, the wafer rotation angles cause perspective deformation of different degrees in the image, affecting character recognition accuracy. The invention calculates the rotation angle of each wafer, recovers the perspective deformation and thereby corrects the character region.
According to the wafer processing requirements, for a silicon wafer of radius $R$ whose ridge has length $L$, the two end points of the ridge can be expressed as:
$P_{i1} = [-L/2,\; -\sqrt{R^2 - (L/2)^2},\; h_i]^T,\quad P_{i2} = [L/2,\; -\sqrt{R^2 - (L/2)^2},\; h_i]^T$ (2)
Taking the rotation angle $\theta_i$ into account, the two end points of the ridge can be further calculated by the following formula:
$P'_{ij} = R_z(\theta_i)\, P_{ij} = [x_{ij}\cos\theta_i - y_{ij}\sin\theta_i,\; x_{ij}\sin\theta_i + y_{ij}\cos\theta_i,\; h_i]^T,\quad j \in \{1, 2\}$ (3)
and the center of the upper surface of the wafer box base is taken as a world coordinate system, and a three-dimensional model of the silicon wafer and the wafer box is generated according to the height and the rotation angle of the wafer so as to facilitate the subsequent analysis of the three-dimensional posture of the silicon wafer.
2.2, calibrating internal and external parameters of the camera.
To obtain the transformation between the image coordinate system and the world coordinate system, the camera intrinsic and extrinsic parameters must be calibrated; given the precision of the industrial camera and lens, the radial and tangential distortion coefficients can be neglected. For a pinhole camera model, the projection of a point $P_c = [X_c, Y_c, Z_c]^T$ in the camera coordinate system into the image coordinate system can be expressed as:
$Z_c\, P_{uv} = K P_c$ (4)
giving the corresponding pixel coordinate $P_{uv} = [u, v, 1]^T$ in the image coordinate system. Further, according to the camera extrinsic parameters $(R_{CW}, t_{CW})$, i.e. the rotation matrix $R_{CW}$ and translation vector $t_{CW}$ of the camera coordinate system relative to the world coordinate system, a point $P_W = [X, Y, Z]^T$ in the world coordinate system is transformed by the extrinsic parameters into the camera coordinate system as $P_C = [X_C, Y_C, Z_C]^T$:
$P_C = R_{CW} P_W + t_{CW}$ (5)
Further, with the camera intrinsic matrix $K$, the formula projecting a point in the world coordinate system into the image coordinate system can be written as:
$Z_C\, P_{uv} = K (R_{CW} P_W + t_{CW})$ (6)
where the intrinsic matrix $K$ of the camera and the rotation matrix $R_{CW}$ and translation vector $t_{CW}$ of the camera coordinate system relative to the world coordinate system are calculated by a calibration algorithm. This patent uses Zhang Zhengyou's checkerboard calibration method to calibrate the camera intrinsics: several groups of checkerboard images at different positions, angles and poses are captured, the checkerboard corners are extracted from the images, and the intrinsic matrix $K$ is computed by nonlinear optimization.
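The extrinsic transform (5) and the projection through $K$ can be sketched as a small helper; the intrinsic values, tilt angle and translation below are illustrative stand-ins for calibrated parameters:

```python
import numpy as np

# Illustrative intrinsics and extrinsics (all numeric values are assumptions).
K = np.array([[2500.0, 0.0, 2736.0],
              [0.0, 2500.0, 1824.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix K
a = np.deg2rad(30.0)                     # camera tilted 30 deg about X
R_cw = np.array([[1.0, 0.0, 0.0],
                 [0.0, np.cos(a), -np.sin(a)],
                 [0.0, np.sin(a), np.cos(a)]])
t_cw = np.array([0.0, 0.0, 600.0])       # translation, mm

def project(P_w):
    """Transform a world point to the camera frame, eq. (5), then project
    with K (Z_c * [u, v, 1]^T = K * P_c); returns pixel coordinates (u, v)."""
    P_c = R_cw @ P_w + t_cw
    uvw = K @ P_c
    return uvw[:2] / uvw[2]

uv = project(np.array([0.0, 0.0, 0.0]))  # world origin (base centre)
print(uv)                                # lands on the principal point here
```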
The camera extrinsic parameters comprise the rotation matrix $R$ and translation vector $T$, i.e. the relative pose of the camera and the wafer box. A checkerboard calibration plate is placed on the wafer box base with its central corner point, taken as the origin of world coordinates, aligned with the wafer axis, so that the checkerboard rows are parallel to the reference-flat direction. From the relation between the physical coordinate values and the pixel coordinate values, a homography matrix $H$ is computed to solve the camera extrinsics; for the calibration plane $Z = 0$ the homography takes the form:
$H = K\, [r_1\;\; r_2\;\; t]$
where $r_1$, $r_2$ are the first two columns of the rotation matrix.
after the homography matrix is obtained, the homography matrix is decomposed, and a corresponding rotation matrix R and a translation vector T can be solved.
And 2.3, registering the three-dimensional model of the silicon wafer.
After the pose of the three-dimensional model in the world coordinate system is obtained, the actual three-dimensional model is registered, and the character region is corrected according to the corresponding pose to reduce the recognition error rate.
The two ridge end-point coordinates computed in the 3D model are projected into the image coordinate system according to the camera intrinsic and extrinsic parameters, giving the end-point coordinates $p_{i1} = [u_{pi1}, v_{pi1}, 1]^T$, $p_{i2} = [u_{pi2}, v_{pi2}, 1]^T$ in the image coordinate system.
The slope $k_i$ of the straight line $l_i$ through the two end points $p_{i1}$, $p_{i2}$ in the image coordinate system is:
$k_i = (v_{pi2} - v_{pi1}) / (u_{pi2} - u_{pi1})$
the straight line extraction algorithm is adopted for the actually collected image, and the straight line equation of each ridge line is calculated, so that the ridge line l 'in the image can be calculated' i And the silicon wafer o from top to bottom i And carrying out association.
To obtain the wafer rotation angle $\theta_i$, the invention uses a step search to find the angle $\theta_i^*$ at which the difference between the slope $k'_i$ of the ridge line $l'_i$ detected in the image and the slope of the corresponding projected ridge line $l_i$ is minimal; the objective function is defined as:
$\theta_i^* = \arg\min_{\theta_i} \left| k'_i - k_i(\theta_i) \right|$
in order to accelerate the search speed, the search range of the angle is limited in consideration that the actual rotation angle range of the silicon wafer does not exceed + -5 DEG, and the search is performed in steps of 0.1 rad. Through the process, the rotation angle of each silicon chip in the wafer box can be calculated.
And 2.4, correcting character image distortion.
After the rotation angles of the silicon wafers are determined, the world coordinates of the character region on each wafer are further calculated. According to the process requirements, the 2D coordinates of the 4 corner points are obtained and, combined with the wafer height $h_i$, give the 3D corner coordinates $Q_{i1}$, $Q_{i2}$, $Q_{i3}$, $Q_{i4}$.
Using the calculated wafer rotation angle $\theta_i$, the four corner points are rotated to obtain the rotated corner coordinates $Q'_{i1}$, $Q'_{i2}$, $Q'_{i3}$, $Q'_{i4}$.
Further utilizing the external reference and the internal reference to project to obtain four corner coordinates on the image:
According to the four projected corner coordinates, the actual character-region image I_i can be cropped from the image. The character region on each wafer is restored to a W × H target image I′_i, whose 4 corner coordinates are
q′_i1 = (0, 0), q′_i2 = (W, 0), q′_i3 = (W, H), q′_i4 = (0, H).
using the corner q of the original character region i1 、q i2 、q i3 、q i3 And the corner q 'of the target area' i1 、q′ i2 、q′ i3 、q′ i4 Calculating a perspective transformation matrix T i . The perspective transformation is to project the picture to a new view plane, and the expression of the perspective transformation is:
Here u and v are the original picture coordinates, and the transformed picture coordinates are obtained as x = x′/w′ and y = y′/w′, so the perspective transformation can be further expressed as
x = (a_11·u + a_21·v + a_31) / (a_13·u + a_23·v + a_33),
y = (a_12·u + a_22·v + a_32) / (a_13·u + a_23·v + a_33).
Knowing the four pairs of corresponding points, the transformation can be solved from the resulting system of linear equations.
solving the transformation matrix T, thereby obtaining the original image I i Is transformed to obtain a corrected target image I' i
p′ = p · T_i,  p ∈ I_i, p′ ∈ I′_i  (21)
Comparing the character image before and after distortion correction, the minimum bounding box of the character string before correction is a tilted quadrilateral, while after correction the character bounding box becomes a regular rectangle.
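The four-point solve behind T_i can be sketched directly in NumPy: with a_33 fixed to 1, the four correspondences give eight linear equations in the eight remaining unknowns. The corner values below are illustrative, not from the patent.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Return the 3x3 matrix T mapping each src (u, v) to dst (x, y)."""
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        # rows for x = (a11 u + a21 v + a31)/(a13 u + a23 v + 1), same for y
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    # unknowns ordered a11, a21, a31, a12, a22, a32, a13, a23; a33 = 1
    return np.append(h, 1.0).reshape(3, 3)

def apply_T(T, u, v):
    """Map one source point through T, normalizing by the third coordinate."""
    x, y, w = T @ np.array([u, v, 1.0])
    return x / w, y / w

W, H = 200, 40
src = [(118, 52), (310, 71), (305, 108), (113, 89)]   # tilted corners q_i1..q_i4
dst = [(0, 0), (W, 0), (W, H), (0, H)]                # upright corners q'_i1..q'_i4
T = perspective_matrix(src, dst)
```

In practice the same result is typically obtained with a library routine such as OpenCV's `getPerspectiveTransform`, followed by `warpPerspective` to resample the corrected W × H character image.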
3. Positioning and segmenting the obtained characters.
Character distortion correction yields a rough character region for each silicon wafer; this region is then cut precisely: the minimum bounding box of the character area is obtained by horizontal and vertical projection, and single characters are segmented. First, the obtained image is denoised, filtered and adaptively binarized, and a closing operation is used to connect hole regions in the binary image, completing the preprocessing. Next, the number of white points in each pixel row is counted by horizontal projection to determine the row range of the characters. The characters are then segmented by vertical projection: the number of white points in each pixel column is counted, and columns with zero white points are character gaps. Projection-based cutting thus maps the image into features, determines the segmentation positions, and cuts the original image to obtain an image of each character.
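The vertical-projection segmentation described above can be sketched on a binarized strip (white = foreground): characters are the runs of columns whose white-point count is non-zero, and zero-count columns are the gaps. The data here is synthetic.

```python
import numpy as np

def segment_columns(binary):
    """Return (start, end) column ranges whose vertical projection is non-zero."""
    col_white = (binary > 0).sum(axis=0)     # white-point count per column
    segments, start = [], None
    for j, n in enumerate(col_white):
        if n > 0 and start is None:
            start = j                                    # a character begins
        elif n == 0 and start is not None:
            segments.append((start, j)); start = None    # gap -> character ends
    if start is not None:
        segments.append((start, len(col_white)))         # character at right edge
    return segments
```

The row range of the character line is found the same way, with the projection taken along axis 1 instead of axis 0.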
4. Recognizing the segmented characters.
This patent classifies character images using a convolutional neural network (CNN). Neural networks have good learning, processing and classification capabilities for complex, uncertain and nonlinear problems. A CNN is a feedforward neural network with a deep structure that performs convolution computations; it typically comprises an input layer, hidden layers and an output layer, where the hidden layers in turn include convolutional layers, pooling layers, fully connected layers and so on. Parameter sharing in the convolution kernels and the sparsity of inter-layer connections allow a CNN to learn grid-like features with a small amount of computation; it learns well from pixels, performs stably, and imposes no additional feature-engineering requirements on the data.
The convolutional layer principle is as follows:
Z^{l+1}(i, j) = [Z^l ⊗ w^{l+1}](i, j) + b = Σ_{k=1}^{K} Σ_{x=1}^{f} Σ_{y=1}^{f} [ Z_k^l(s_0·i + x, s_0·j + y) · w_k^{l+1}(x, y) ] + b,
(i, j) ∈ {0, 1, …, L_{l+1}},  L_{l+1} = (L_l + 2p − f) / s_0 + 1,
where b is the bias, Z^l and Z^{l+1} are the convolution input and output of layer l+1, L_{l+1} is the dimension of Z^{l+1}, Z(i, j) corresponds to a pixel of the feature map, K is the number of channels, f is the convolution kernel size, s_0 is the convolution stride, and p is the amount of padding.
The pooling layer principle is as follows:
A_k^{l+1}(i, j) = [ Σ_{x=1}^{f} Σ_{y=1}^{f} A_k^l(s_0·i + x, s_0·j + y)^p ]^{1/p},
where A_k(i, j) corresponds to a pixel of the feature map, s_0 is the pooling stride, and p is a pre-specified parameter.
The fully connected layer is typically built in the last part of the CNN hidden layers and passes signals only to other fully connected layers. In the fully connected layer the feature map loses its three-dimensional structure: it is flattened into a vector and passed to the next layer through the excitation function, i.e.
Z^{l+1} = f(w^{l+1} · Z^l + b^{l+1}),
where f is the excitation function.
A two-convolutional-layer network architecture is selected for recognition model training; the hidden-layer configuration is shown in Fig. 5. The first convolutional layer extracts 32 low-level feature maps from the original picture with a 5×5 kernel, and these are downsampled by a 2×2 pooling layer; the second convolutional layer extracts 64 high-level feature maps from the pooled output with a 5×5 kernel. After another 2×2 pooling layer and two fully connected layers, the image features are output; finally the activation function SoftMax() is applied for the nonlinear fit, producing the multi-class output.
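The feature-map sizes implied by this architecture can be traced with the convolutional size formula L_{l+1} = (L_l + 2p − f)/s_0 + 1 given earlier. The padding and stride values below are assumptions, since the patent does not state them; they are chosen so that a 28 × 28 input survives the two 5×5 convolutions unchanged in size ("same" padding) and is halved by each 2×2 pooling.

```python
def out_size(L, f, s, p=0):
    """Output side length of a conv/pool layer: (L + 2p - f) // s + 1."""
    return (L + 2 * p - f) // s + 1

L = 28                            # normalized character image, 28 px x 28 px
L = out_size(L, f=5, s=1, p=2)    # conv1: 5x5 kernel, 32 maps  -> 28
L = out_size(L, f=2, s=2)         # pool1: 2x2                  -> 14
L = out_size(L, f=5, s=1, p=2)    # conv2: 5x5 kernel, 64 maps  -> 14
L = out_size(L, f=2, s=2)         # pool2: 2x2                  -> 7
flattened = 64 * L * L            # vector entering the fully connected layers
```

Under these assumptions the fully connected layers receive a 64 · 7 · 7 = 3136-dimensional vector before the SoftMax output.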
An image of the silicon wafers to be inspected is acquired, and the acquired wafer image is corrected and segmented into single-character pictures, which are normalized in length and width to 28 px × 28 px for the batch recognition test.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several changes and modifications can be made by those skilled in the art without departing from the general inventive concept, and these should also be regarded as falling within the scope of the invention.

Claims (7)

1. The machine vision-based batch silicon wafer identification recognition method is characterized by comprising the following steps of:
step 1: obtaining an image of the whole box of silicon wafers through an industrial camera: placing a batch of silicon wafers into a wafer box, placing the whole wafer box on a wafer box base, and photographing the whole wafer box by an industrial camera to obtain a whole box silicon wafer image;
step 2: character perspective deformation correction is carried out on the whole box silicon wafer image: firstly, taking the central position of the upper surface of a base as the origin of world coordinates, and calibrating internal and external parameters of a camera to obtain the internal parameters and the external parameters of the camera, namely the relative pose of the camera and a film box; then, constructing a 3D model of the silicon chip and the wafer box in a world coordinate system, and analyzing the pose of each silicon chip to obtain the coordinate of the three-dimensional vertex of the surrounding rectangle of the character area; secondly, projecting four vertexes to a pixel coordinate system, generating a rectangular bounding box according to an actual length-width ratio, thereby obtaining a perspective transformation matrix and realizing correction of image distortion;
the step 2 comprises the following steps:
2.1, building the silicon wafer and wafer box model: the coordinate system of the wafer box and silicon wafers is defined as the world coordinate system, with the center of the wafer box base as the world origin o_w; the Z axis is perpendicular to the upper surface of the base and points upward, the X and Y axes lie in the upper surface of the wafer box base, the Y axis points in the opening direction of the wafer box, and the X axis is perpendicular to the opening direction; the world coordinates of the center of each silicon wafer are recorded as O_i = [0, 0, h_i]^T, i ∈ {1, …, N}; according to the 3D model of the wafer box and base, h_i can be calculated by the following formula:
h_i = h_0 + (i − 1) · Δh  (1)
where h_0 is the distance from the bottommost wafer to the upper surface of the base, and Δh is the equal center-to-center spacing of adjacent wafers;
for a silicon wafer with radius R and ridge length L, the two end points of the ridge can be expressed as:
2.2, calibrating the internal and external parameters of the camera: according to the camera extrinsic parameters (R_CW, t_CW), i.e. the rotation matrix R_CW and translation vector t_CW of the camera coordinate system relative to the world coordinate system, a point P_W = [X, Y, Z]^T in the world coordinate system is transformed through the extrinsics into the camera coordinate system as P_C = [X_C, Y_C, Z_C]^T:
P_C = R_CW · P_W + t_CW  (5)
Further, the formula projecting points in the world coordinate system into the image coordinate system through the camera intrinsic matrix K can be written accordingly, where the intrinsic matrix K and the rotation matrix R_CW and translation vector t_CW of the camera coordinate system relative to the world coordinate system are calculated in turn by the calibration algorithm;
2.3, registering the three-dimensional wafer model: the ridge endpoint coordinates computed in the 3D model are projected into the image coordinate system according to the camera intrinsics and extrinsics to obtain the endpoint coordinates p_i1, p_i2 in the image coordinate system, and the slope k_i of the straight line l_i through the two endpoints p_i1, p_i2 is calculated as follows:
a straight-line extraction algorithm is applied to the actually collected image and the line equation of each ridge is computed, so that each ridge line l′_i in the image is associated, in top-to-bottom order, with its silicon wafer O_i;
2.4, character image distortion correction: after the rotation angle θ_i of each silicon wafer is determined, the world coordinates of the character region on each wafer are further calculated;
step 3: performing a positioning and segmentation algorithm on the character obtained in the step 2; in step 3, firstly, denoising, filtering and self-adaptive binarization are carried out on the acquired image, and then a closed operation is used for communicating the hole area in the binary image, so as to complete pretreatment; further counting the number of white points of each row of pixels by a horizontal projection method, and determining the row range of the character; dividing the characters by using a vertical projection method to obtain an image of each character;
step 4: and recognizing the segmented characters.
2. The machine vision based batch silicon wafer identification method as set forth in claim 1, wherein: in step 1, the industrial camera views the wafer box under test from an oblique top-down angle, keeping an included angle with the wafer axis, so that the character regions of all silicon wafers in the box under test are completely captured in the whole-box silicon wafer image.
3. The machine vision based batch silicon wafer identification method as set forth in claim 1, wherein: in building the silicon wafer and wafer box model, when the rotation angle θ_i is taken into account, the two end points of the ridge can be further calculated by the following formula:
the center of the upper surface of the wafer box base is taken as the origin of the world coordinate system, and a three-dimensional model of the silicon wafers and wafer box is generated according to the wafer heights and rotation angles to facilitate the subsequent analysis of the three-dimensional wafer poses.
4. The machine vision based batch silicon wafer identification method as set forth in claim 1, wherein: the rotation angle θ_i of each silicon wafer is obtained by a stepped search with a step of 0.1 rad, finding the angle that minimizes the difference between the slope k′_i of the ridge line l′_i detected in the image and the slope of the corresponding projected ridge line l_i, with the objective function defined as
θ_i = arg min_θ |k′_i − k_i(θ)|;
and further obtaining the rotation angle of each silicon chip in the wafer box.
5. The machine vision based batch silicon wafer identification method as set forth in claim 1, wherein: in step 2.4, the 2D coordinates of the 4 corner points can be obtained according to the process requirements and combined with the wafer height h_i to give the 3D corner coordinates Q_i1, Q_i2, Q_i3, Q_i4; using the calculated wafer rotation angle θ_i, the four corner points are rotated to obtain the rotated corner coordinates Q′_i1, Q′_i2, Q′_i3, Q′_i4; these are projected with the extrinsic and intrinsic parameters to obtain the four corner coordinates on the image; according to the four projected corner coordinates, the actual character-region image I_i can be cropped from the image.
6. The machine vision based batch silicon wafer identification method as set forth in claim 5, wherein: for the character-region image I_i, the character region on each silicon wafer is restored to a W × H target image I′_i; using the corner points q_i1, q_i2, q_i3, q_i4 of the original character region and the corner points q′_i1, q′_i2, q′_i3, q′_i4 of the target region, the perspective transformation matrix T_i is calculated, so that each point of the original character-region image I_i can be transformed to obtain the corrected target image I′_i.
7. The machine vision based batch silicon wafer identification method as set forth in claim 1, wherein: in step 4, a convolutional neural network is used to classify the character images; a two-convolutional-layer network architecture is selected for recognition model training, and finally the activation function SoftMax() is applied for the nonlinear fit to obtain the multi-class output.
CN202110968922.1A 2021-08-23 2021-08-23 Batch silicon wafer identification recognition method based on machine vision Active CN113592962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110968922.1A CN113592962B (en) 2021-08-23 2021-08-23 Batch silicon wafer identification recognition method based on machine vision


Publications (2)

Publication Number Publication Date
CN113592962A CN113592962A (en) 2021-11-02
CN113592962B true CN113592962B (en) 2024-04-09

Family

ID=78239007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110968922.1A Active CN113592962B (en) 2021-08-23 2021-08-23 Batch silicon wafer identification recognition method based on machine vision

Country Status (1)

Country Link
CN (1) CN113592962B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418994B (en) * 2022-01-19 2022-11-15 生态环境部长江流域生态环境监督管理局生态环境监测与科学研究中心 Brittle stalk colony algae cell statistical method based on microscope image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345765A (en) * 2013-07-19 2013-10-09 南京理工大学 Detecting device and detecting method for moving objects under mobile platform based on DSP+FPGA
CN108537217A (en) * 2018-04-04 2018-09-14 湖南科技大学 Identification based on character code mark and localization method
CN109409372A (en) * 2018-08-22 2019-03-01 珠海格力电器股份有限公司 A kind of character segmentation method, device, storage medium and vision detection system
CN111914847A (en) * 2020-07-23 2020-11-10 厦门商集网络科技有限责任公司 OCR recognition method and system based on template matching
CN112257715A (en) * 2020-11-18 2021-01-22 西南交通大学 Method and system for identifying adhesive characters




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant