CN112233175A - Chip positioning method based on YOLOv3-tiny algorithm and integrated positioning platform
- Publication number: CN112233175A (application CN202011014606.2A)
- Authority: CN (China)
- Legal status: Granted (the listed status is an assumption by the indexer, not a legal conclusion)
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural network learning methods
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108, G06T2207/30148 — Industrial image inspection; semiconductor, IC, wafer
Abstract
The invention discloses a chip positioning method and an integrated positioning platform based on the YOLOv3-tiny algorithm, comprising the following steps: 1) collect chip tray images and preprocess them, expanding the number and variety of pictures available to the deep learning network by 1.1) rotating the pictures, 1.2) adjusting the exposure, and 1.3) adding noise; 2) input the labeled data set into the YOLOv3-tiny model to train the network; 3) when a chip tray travels along the guide rail to the position below the industrial camera, the camera sends the acquired tray image to the processor for image processing; 4) locate the chips in the tray image with the trained YOLOv3-tiny network on the integrated positioning platform to obtain real-time chip coordinate information; 5) the robot grabs the chips according to the coordinates provided by the vision system. The method is robust, can output pixel-level coordinates of multiple targets simultaneously, processes a full-angle image in milliseconds, and meets the timing requirements of actual production.
Description
Technical Field
The invention belongs to the technical field of chip positioning and deep learning, and particularly relates to a chip positioning method based on a YOLOv3-tiny algorithm, an integrated positioning platform and a positioning method.
Background
With the development of integrated circuit technology, ever more chips are produced from a single wafer, and the bare die accounts for a shrinking share of the total cost from integrated circuit design to the application end. At the same time, growing chip functionality and increasing package pin counts make integrated circuit packaging more and more challenging, and packaging occupies an ever larger share of the cost. Reducing the failure rate of the packaging process is therefore increasingly important. Picking up the bare die, the first step in chip packaging, is particularly critical: to pick up a chip accurately, it must first be located accurately, which remains a difficult problem to be solved urgently.
At present, many foreign institutions have developed image processing software with good performance that is widely used in industrial production and achieves good robustness, but its licensing cost is extremely high and hard for individual developers to bear; domestic companies pay still more for commercial licenses. Although domestically published positioning algorithms have improved markedly over earlier work, a large gap remains compared with commercial software, and they cannot simultaneously reach the high precision and high real-time performance that actual chip production requires. At present there is little literature or news of deep-learning-based target detection and positioning being applied to actual chip production, so this research is highly pioneering.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a chip positioning method based on the YOLOv3-tiny algorithm, together with an integrated positioning platform and a positioning method, which apply machine vision and deep learning to actual industrial production to locate and grab chips, realizing quick and accurate chip positioning during the chip packaging process.
In order to achieve the purpose, the invention adopts the technical scheme that:
the chip positioning method based on the YOLOv3-tiny algorithm comprises the following steps:
step 1), collecting chip tray images and carrying out image preprocessing to expand the number and the types of picture sets and marking the preprocessed images;
step 2), inputting the data set labeled in the step 1 into a YOLOv3-tiny model to train a YOLOv3-tiny network;
and 3) acquiring a chip disk image, positioning the chip from the chip disk image by using the trained YOLOv3-tiny network, and acquiring real-time coordinate information of the chip.
In the step 1), the image preprocessing includes:
1.1) rotating the image at different angles (including large and small angles);
1.2) adjusting the image exposure to add underexposed and overexposed images to the data set;
1.3) increasing common noise of images, including: gaussian noise, poisson noise, multiplicative noise, and salt and pepper noise.
The expanded image set can effectively cope with various conditions of non-artificial rotation of a chip disc, illumination change, noise brought in the image acquisition and transmission process and the like in the actual production, thereby providing a unified standard for testing the performance of the algorithm.
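As an illustration, the three augmentation families above can be sketched with NumPy alone (function names and parameter values are illustrative, not from the patent; arbitrary-angle rotation would use an image library such as OpenCV, so this sketch keeps to right-angle rotations):

```python
import numpy as np

def rotate90(img, k):
    """Rotate by a multiple of 90 degrees (arbitrary angles would need
    e.g. cv2.warpAffine; right angles keep the sketch dependency-free)."""
    return np.rot90(img, k)

def adjust_exposure(img, gain):
    """Simulate under-exposure (gain < 1) or over-exposure (gain > 1)
    by scaling pixel intensities and clipping to the 8-bit range."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def salt_and_pepper(img, amount=0.02, seed=0):
    """Corrupt a fraction `amount` of pixels with salt (255) or pepper (0)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    hit = rng.random(img.shape[:2]) < amount     # which pixels are corrupted
    salt = rng.random(img.shape[:2]) < 0.5       # salt vs pepper split
    out[hit & salt] = 255
    out[hit & ~salt] = 0
    return out

def gaussian_noise(img, sigma=10.0, seed=0):
    """Add zero-mean Gaussian noise with standard deviation `sigma`."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Expand one tray image into several augmented variants:
img = np.full((64, 64), 128, dtype=np.uint8)
variants = [rotate90(img, 1), adjust_exposure(img, 0.5),
            adjust_exposure(img, 1.5), salt_and_pepper(img), gaussian_noise(img)]
```

Poisson and multiplicative noise would be added the same way via `rng.poisson` and a per-pixel gain field.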
In the step 1), labeling the preprocessed image by using LabelImg software.
In the step 2), compared with the YOLOv3 model, the YOLOv3-tiny model omits the residual layers of Darknet-53 and uses fewer convolutional layers; it produces outputs at only two positions, finally yielding detection results at two different scales.
In the step 2), the loss function of the YOLOv3-tiny model includes three major parts: coordinate error, confidence error, and classification error, wherein:
the coordinate errors include a center coordinate error and a width-to-height coordinate error.
the confidence error is:
When the picture is input into the neural network, it is divided into \(K \times K\) grids, and each grid generates \(M\) candidate boxes. The parameter \(1_{ij}^{obj}\) indicates whether the \(j\)-th prior box of the \(i\)-th grid is responsible for the target object: \(1_{ij}^{obj} = 1\) if so, otherwise 0; the parameter \(1_{ij}^{noobj}\) is its complement. \((x_i, y_i, w_i, h_i)\) denotes the position and size of the ground-truth box, and \((\hat{x}_i, \hat{y}_i, \hat{w}_i, \hat{h}_i)\) the position and size of the predicted box. The parameter \(\lambda_{coord}\) is a weighting coefficient set to balance the unequal contribution of differently sized rectangular boxes to the error function; the parameter \(\lambda_{noobj}\) reduces the contribution weight of the no-object terms. The parameter \(\hat{C}_i\) represents the predicted probability score that the box contains a target object, and \(C_i\) the true value; \(p_i(c)\) represents the true value of the category to which the labeled box belongs, and \(\hat{p}_i(c)\) the predicted probability that the box belongs to category \(c\).
The final loss function expression is:

\[
\begin{aligned}
Loss ={}& \lambda_{coord} \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
&+ \lambda_{coord} \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{obj} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
&- \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{obj} \left[ C_i \log \hat{C}_i + (1 - C_i) \log(1 - \hat{C}_i) \right] \\
&- \lambda_{noobj} \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{noobj} \left[ C_i \log \hat{C}_i + (1 - C_i) \log(1 - \hat{C}_i) \right] \\
&- \sum_{i=0}^{K^2} 1_{i}^{obj} \sum_{c \in classes} \left[ p_i(c) \log \hat{p}_i(c) + (1 - p_i(c)) \log(1 - \hat{p}_i(c)) \right]
\end{aligned}
\]
the invention also provides an integrated positioning platform using the chip positioning method based on the YOLOv3-tiny algorithm, which comprises the following steps:
the industrial camera is arranged above the chip tray running guide rail and used for collecting chip tray images;
the processor with the monitor receives a chip disk image acquired by the industrial camera, carries the trained YOLOv3-tiny network, positions the chip in the chip disk image and acquires the real-time coordinate information of the chip;
and the robot with the mechanical gripper grabs the chip from the chip tray according to the chip coordinate provided by the processor.
The guide rail is provided with a clamping device for fixing the chip tray below the industrial camera.
At its end the guide rail divides into a normal channel and a problem channel: if no abnormality occurs during chip grabbing, the chip tray passes through the normal channel to the next process; otherwise it is guided into the problem channel.
Utilize this integration locating platform:
when the chip tray runs to the position below the industrial camera through the guide rail, the industrial camera sends the acquired chip tray image to the processor for image processing;
positioning the chip in the acquired chip disk image by using a trained YOLOv3-tiny network in a processor to acquire real-time coordinate information of the chip;
the robot grabs the chip according to the chip coordinates provided by the processor.
Compared with the prior art, the invention has the beneficial effects that:
1) The algorithm adopted by the invention is extremely insensitive to linear changes in illumination: changes in brightness have no obvious influence on its matching results.
2) The invention also shows good robustness in response to nonlinear illumination.
3) When the image has noise, virtual focus or interference and the like, the algorithm adopted by the invention shows good anti-noise and anti-interference performance.
4) The positioning precision of the invention is at pixel level, fully meeting the precision requirements of production practice. This avoids having to improve positioning precision by raising the resolution of the image acquisition equipment, so the overall cost is well controlled.
5) The YOLOv3-tiny network used in the invention maintains good performance during picture processing: processing time is between 3.5 ms and 3.8 ms on a flagship graphics card (RTX 2080 Ti 11GB) and can be kept around 6 ms on an economical graphics card (GTX 1060 3GB), fully meeting the strict processing-time requirements of actual production.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 shows a YOLOv3-tiny network structure.
FIG. 3 is a graph of loss function versus accuracy.
Fig. 4 is a schematic view of a chip positioning and grasping apparatus.
FIG. 5 is a diagram of a software design framework and concepts.
FIG. 6 is a chip locating application interface based on deep learning.
Fig. 7 is a diagram illustrating the matching result of the algorithm.
FIG. 8 shows robustness test results of the YOLOv3-tiny algorithm under linear variation of illumination, where: (a) original illumination, no rotation; (b) enhanced illumination, no rotation; (c) enhanced illumination, with rotation; (d) reduced illumination, with rotation.
FIG. 9 shows robustness test results of the YOLOv3-tiny algorithm under nonlinear illumination, where: (a) a shadow over the center of the chip; (b) increased shadow around the chip compared with (a).
FIG. 10 shows robustness test results of the YOLOv3-tiny algorithm under noise, virtual focus and interference conditions, where: (a) is a color-noise condition; (b) is a virtual-focus (defocus) condition; (c) is an interference condition.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the chip positioning method based on YOLOv3-tiny algorithm provided by the invention comprises the following steps:
step 1), collecting chip disk images and carrying out image preprocessing to expand the number and the types of image sets, and labeling the preprocessed images by using LabelImg and other similar software; specifically, the method comprises the following steps:
the image preprocessing comprises the following steps:
1.1) rotating the image at different angles (including large and small angles);
1.2) adjusting the image exposure to add underexposed and overexposed images to the data set;
1.3) increasing common noise of images, including: gaussian noise, poisson noise, multiplicative noise, and salt and pepper noise.
The expanded image set can effectively cope with various conditions of non-artificial rotation of a chip disc, illumination change, noise brought in the image acquisition and transmission process and the like in the actual production, thereby providing a unified standard for testing the performance of the algorithm.
Step 2), inputting the data set labeled in step 1 into the YOLOv3-tiny model to train the YOLOv3-tiny network. Compared with the YOLOv3 model, the YOLOv3-tiny model omits the residual layers of Darknet-53 and uses fewer convolutional layers; it produces outputs at only two positions, finally yielding detection results at two different scales.
The loss function of the YOLOv3-tiny model includes three major parts: coordinate error, confidence error, and classification error, wherein:
the coordinate errors include center coordinate errors and width-to-height coordinate errors.
the confidence error is:
When the picture is input into the neural network, it is divided into \(K \times K\) grids, and each grid generates \(M\) candidate boxes. The parameter \(1_{ij}^{obj}\) indicates whether the \(j\)-th prior box of the \(i\)-th grid is responsible for the target object: \(1_{ij}^{obj} = 1\) if so, otherwise 0; the parameter \(1_{ij}^{noobj}\) is its complement. \((x_i, y_i, w_i, h_i)\) denotes the position and size of the ground-truth box, and \((\hat{x}_i, \hat{y}_i, \hat{w}_i, \hat{h}_i)\) the position and size of the predicted box. The parameter \(\lambda_{coord}\) is a weighting coefficient set to balance the unequal contribution of differently sized rectangular boxes to the error function; the parameter \(\lambda_{noobj}\) reduces the contribution weight of the no-object terms. The parameter \(\hat{C}_i\) represents the predicted probability score that the box contains a target object, and \(C_i\) the true value; \(p_i(c)\) represents the true value of the category to which the labeled box belongs, and \(\hat{p}_i(c)\) the predicted probability that the box belongs to category \(c\).
The final loss function expression is:

\[
\begin{aligned}
Loss ={}& \lambda_{coord} \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
&+ \lambda_{coord} \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{obj} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
&- \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{obj} \left[ C_i \log \hat{C}_i + (1 - C_i) \log(1 - \hat{C}_i) \right] \\
&- \lambda_{noobj} \sum_{i=0}^{K^2} \sum_{j=0}^{M} 1_{ij}^{noobj} \left[ C_i \log \hat{C}_i + (1 - C_i) \log(1 - \hat{C}_i) \right] \\
&- \sum_{i=0}^{K^2} 1_{i}^{obj} \sum_{c \in classes} \left[ p_i(c) \log \hat{p}_i(c) + (1 - p_i(c)) \log(1 - \hat{p}_i(c)) \right]
\end{aligned}
\]
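To make the three error terms concrete, here is a minimal NumPy sketch of this loss (tensor layout, the λ values, and the use of binary cross-entropy follow the common YOLOv3 formulation; this is a sketch, not the patent's training code):

```python
import numpy as np

def bce(truth, pred, eps=1e-7):
    """Binary cross-entropy, used for the confidence and classification terms."""
    pred = np.clip(pred, eps, 1 - eps)
    return -(truth * np.log(pred) + (1 - truth) * np.log(1 - pred))

def yolo_loss(obj_mask, box_t, box_p, conf_t, conf_p, cls_t, cls_p,
              lambda_coord=5.0, lambda_noobj=0.5):
    """obj_mask: (K*K, M) indicator 1_ij^obj; box_*: (K*K, M, 4) as (x, y, w, h);
    conf_*: (K*K, M); cls_*: (K*K, M, C). Returns the scalar total loss."""
    noobj_mask = 1.0 - obj_mask
    # center-coordinate error plus width/height error (square roots on w, h)
    xy = np.sum(obj_mask[..., None] * (box_t[..., :2] - box_p[..., :2]) ** 2)
    wh = np.sum(obj_mask[..., None] *
                (np.sqrt(box_t[..., 2:]) - np.sqrt(box_p[..., 2:])) ** 2)
    coord = lambda_coord * (xy + wh)
    # confidence error, with no-object cells down-weighted by lambda_noobj
    conf = np.sum(obj_mask * bce(conf_t, conf_p)) + \
           lambda_noobj * np.sum(noobj_mask * bce(conf_t, conf_p))
    # classification error, counted only where an object is present
    cls = np.sum(obj_mask[..., None] * bce(cls_t, cls_p))
    return coord + conf + cls
```

A perfect prediction drives all three terms to (numerically) zero, and mismatched confidence or coordinates raise the loss, matching the fluctuating descent shown in fig. 3.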
and 3) acquiring a chip disk image, positioning the chip from the chip disk image by using the trained YOLOv3-tiny network, and acquiring real-time coordinate information of the chip.
Referring to fig. 2, when the input image of the YOLOv3-tiny model has size 832 × 832 × 3, a 52 × 52 × 256 tensor T1 is obtained after 5 convolutional layers and 4 pooling layers; after 3 further convolutional layers and 2 pooling layers a 26 × 26 × 256 tensor T2 is obtained, which is convolved and upsampled into a 52 × 52 × 128 tensor T3; splicing T1 and T3 gives a 52 × 52 × 384 tensor T4. Two convolution operations on T2 and T4 respectively yield a 26 × 26 × 18 tensor T5 and a 52 × 52 × 18 tensor T6. T5 and T6 are the tensors used for YOLO detection, from which the result is finally output.
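The shapes quoted above can be checked with simple stride arithmetic (a sketch that treats every convolution as 'same'-padded and assumes, as in the standard YOLOv3-tiny configuration, that the final max-pool before T2 has stride 1):

```python
def feature_size(input_size, pool_strides):
    """Spatial size after a chain of max-pools; 'same'-padded convs keep H and W."""
    size = input_size
    for stride in pool_strides:
        size //= stride
    return size

t1 = feature_size(832, [2, 2, 2, 2])           # 4 stride-2 pools -> 52 (T1: 52x52x256)
t2 = feature_size(832, [2, 2, 2, 2, 2, 1])     # plus a stride-2 and a stride-1 pool -> 26 (T2)
t3 = t2 * 2                                    # upsampling doubles 26 -> 52 (T3: 52x52x128)
t4_channels = 256 + 128                        # concatenating T1 and T3 -> 384 channels (T4)
out_channels = 3 * (5 + 1)                     # 3 anchors x (x, y, w, h, conf + 1 chip class) = 18
```

The 18 output channels of T5 and T6 thus correspond to 3 anchors per scale, each carrying a box, an objectness score, and one class score.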
Referring to fig. 3, the overall loss function fluctuates but trends downward: it falls below 2.0 after 3200 iterations, below 1.5 after 6400 iterations, and reaches about 1.25 after 8000 iterations. After a further 8000 iterations of training, the final accuracy of the network is about 99.7%.
Based on the above positioning method, the present invention further provides an integrated positioning platform, referring to fig. 4 and fig. 1, which includes:
the industrial camera 1 is arranged above the chip disc 2 operation guide rail 3 and is used for collecting chip disc images, wherein the guide rail 3 is provided with a clamping device 4 used for fixing the chip disc 2 at a collecting lens position below the industrial camera 1, and the tail end of the guide rail 3 is divided into a normal channel 5 and a problem channel 6;
the processor 7 with the monitor receives the chip disk image acquired by the industrial camera 1, carries a trained YOLOv3-tiny network, positions the chip 8 in the chip disk image and acquires the real-time coordinate information of the chip 8;
a robot 9 with a mechanical gripper picks up chips 8 from the chip tray 2 according to the chip coordinates provided by the processor 7.
The hardware structure shown in fig. 4 is a set of chip image acquisition, guided positioning and grabbing equipment designed by the present invention for the industrial practice of positioning and grabbing chips. The industrial camera 1 is fixed on the supporting arm through a bracket, and its height, focus and the like can be adjusted by knobs. The chip tray 2 is transported by the guide rail 3 and stops when it reaches the robot 9; the tray is then fixed by the clamping device 4, after which the robot 9 grabs chips 8 according to the coordinates provided by the vision system. After all chips 8 in a tray have been picked up, the guide rail 3 continues running, delivering the next chip tray 2 to the robot 9 and under the industrial camera 1. If no abnormality occurs during operation, the chip tray 2 proceeds to the next process through the normal channel 5; otherwise the computer-controlled lead-out mechanism 10 switches the guide rail 3 connection and guides the chip tray 2 into the problem channel 6.
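For the robot to grab a chip, the pixel coordinates from the vision system must be mapped into the robot's working plane. The patent does not detail this step; a minimal sketch, under the simplifying assumption of a camera looking straight down from a fixed height (a calibrated homography would replace this in practice, and all names and values here are hypothetical), might look like:

```python
def pixel_to_robot(px, py, mm_per_px, origin_mm, flip_y=True):
    """Map an image coordinate (pixels) to robot-plane millimetres, assuming a
    top-down camera so the mapping is a pure scale plus offset. origin_mm is
    the robot-frame position of the image origin; flip_y accounts for image
    rows growing downward while the robot's y axis grows upward."""
    x_mm = origin_mm[0] + px * mm_per_px
    y_mm = origin_mm[1] + (-py if flip_y else py) * mm_per_px
    return (x_mm, y_mm)

# e.g. a chip centre detected at pixel (208, 624), with an assumed 0.05 mm/pixel:
target = pixel_to_robot(208, 624, 0.05, (100.0, 250.0))
```

The scale and origin would come from a one-time calibration against the fixed tray position enforced by the clamping device.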
The algorithm was designed, written and debugged in C++ on Visual Studio, with the software interface built using MFC (Microsoft Foundation Classes). The software interface designed by the invention mainly comprises a data acquisition module, a template selection module, a module for selecting the image to be searched, image processing (the YOLOv3-based chip positioning module), a result data display module and a result visualization module.
The structure of the integrated rapid chip positioning platform is shown in fig. 5, and the specific operation steps are as follows:
1) FIG. 6 is an application initialization interface of the present invention. The main interface of the application program mainly comprises the following modules: a "select pictures" button, a "load templates" button, an "image pyramid" button, a "trigger (on/off)" button, an "exit" button, a "YOLO match (pictures)" button, a "YOLO match (videos)" button, and a template image display area, a result data display area, and a picture to be searched display area.
2) Clicking the "YOLO match (picture)" button detects the picture. The detection result is shown in fig. 7: the right-hand area frames all detected chips, the coordinate values of each chip are output in the cmd command line on the left, and the processing time for the picture is 3.605 ms.
3) Clicking the "YOLO match (video)" button will perform the detection based on the deep learning algorithm on the video stream, and the presentation result is similar to that in fig. 7.
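The per-chip coordinate values shown in the result area come from decoding the network's output tensors. A minimal sketch of the standard YOLOv3 decoding step follows (grid size, anchor dimensions and the confidence threshold are illustrative, not values from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(raw, anchors, img_size, conf_thresh=0.5):
    """Decode one YOLO head of shape (K, K, A, 6) into a list of
    (x, y, w, h, score) boxes in pixels. raw[..., :4] are the raw offsets
    (tx, ty, tw, th), raw[..., 4] the objectness logit, and raw[..., 5]
    the logit of the single 'chip' class."""
    K = raw.shape[0]
    cell = img_size / K                        # pixel size of one grid cell
    boxes = []
    for gy in range(K):
        for gx in range(K):
            for a, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, tobj, tcls = raw[gy, gx, a]
                score = sigmoid(tobj) * sigmoid(tcls)
                if score < conf_thresh:
                    continue
                x = (gx + sigmoid(tx)) * cell  # box centre in pixels
                y = (gy + sigmoid(ty)) * cell
                w = aw * np.exp(tw)            # anchor-relative width/height
                h = ah * np.exp(th)
                boxes.append((x, y, w, h, float(score)))
    return boxes
```

In a full pipeline the boxes from both heads (T5 and T6) would be merged and filtered with non-maximum suppression before being displayed.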
The positioning robustness, the positioning accuracy and the positioning time of the YOLOv3-tiny algorithm are tested under the hardware configuration shown in the table 1, and the results are as follows.
TABLE 1 Experimental platform hardware configuration
1) Positioning robustness test
1.1) Linear variation of illumination
The experiment results show that the algorithm provided by the invention is extremely insensitive to the linear change of illumination, namely, the illumination with the change of brightness can not generate obvious influence on the matching result of the algorithm. See (a), (b), (c), and (d) in fig. 8, which are the matching results of the algorithm of the present invention.
1.2) nonlinear illumination
When the chip is precisely positioned, although special equipment is adopted for illumination, it is difficult to ensure ideal uniform illumination conditions. Therefore, the algorithm provided by the invention should not be sensitive to nonlinear illumination. Referring to (a) and (b) in fig. 9, the matching result of the algorithm of the present invention shows that the algorithm has excellent robustness under the nonlinear lighting condition.
1.3) noise, virtual Focus and interference
In actual production the processed image may contain noise, virtual focus or interference for many reasons: unstable lines during image acquisition and transmission; illumination instability caused by voltage fluctuations; and changes in the system's working environment, such as external illumination changes and vibration. The robustness of an algorithm is largely reflected in how it processes images under such factors. See (a), (b) and (c) in fig. 10 for the matching results under noise, virtual focus and interference: the YOLOv3-tiny algorithm maintains good noise and interference immunity under these conditions.
2) Positioning accuracy test
The biggest advantage of the YOLOv3-tiny algorithm of the present invention is that the coordinates of multiple targets can be output simultaneously, and the result of the processing data in fig. 8(a) is shown in table 2.
TABLE 2 YOLOv3-tiny Algorithm for the results of processing data of FIG. 8(a)
As can be seen from table 2 and fig. 8(a), the positioning accuracy of the YOLOv3-tiny algorithm is at pixel level. Although no rotation angle is output, a rotated target can still be matched accurately and its circumscribed rectangle drawn, so the algorithm's accuracy suffices for initial coarse positioning.
3) Positioning time testing
In practical production the processing-time requirement is extremely strict: for a 640 × 480 picture, full-angle (±180°) processing must reach millisecond level, i.e. a single picture must be processed in under 10 ms. The YOLOv3-tiny algorithm maintains good performance when processing pictures: processing time on a flagship graphics card (RTX 2080 Ti 11GB) is between 3.5 ms and 3.8 ms, and on an economical graphics card (GTX 1060 3GB) it can be kept around 6 ms, fully meeting the requirement. Considering that the GTX 1060 3GB and the RTX 2080 Ti 11GB differ in price by an order of magnitude, the former is clearly the better choice for actual industrial deployment.
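When reproducing such timing figures, warm-up runs matter because the first call to a GPU-backed network pays one-off initialization costs. A small, framework-agnostic timing harness can be sketched as follows (`net_forward` is a hypothetical placeholder for whatever inference call is being measured):

```python
import time

def time_inference_ms(fn, n_warmup=3, n_runs=20):
    """Average wall-clock time of fn() in milliseconds, excluding warm-up runs."""
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    return (time.perf_counter() - start) * 1000.0 / n_runs

# Usage against the 10 ms single-picture budget (net_forward is a placeholder):
#   ms = time_inference_ms(lambda: net_forward(image))
#   assert ms < 10.0
```

Averaging over many runs also smooths out scheduler jitter, which is significant at the millisecond scale quoted above.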
The above embodiments are merely exemplary embodiments of the present invention, which is not intended to limit the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (9)
1. A chip positioning method based on a YOLOv3-tiny algorithm and an integrated positioning platform, characterized by comprising the following steps:
step 1), collecting chip tray images and carrying out image preprocessing to expand the number and the types of picture sets and marking the preprocessed images;
step 2), inputting the data set labeled in the step 1 into a YOLOv3-tiny model to train a YOLOv3-tiny network;
and 3) acquiring a chip disk image, positioning the chip from the chip disk image by using the trained YOLOv3-tiny network, and acquiring real-time coordinate information of the chip.
2. The chip positioning method based on the YOLOv3-tiny algorithm as claimed in claim 1, wherein the image pre-processing in step 1) comprises:
1.1) rotating the image at different angles;
1.2) adjusting the image exposure to add underexposed and overexposed images to the data set;
1.3) increasing common noise of images, including: gaussian noise, poisson noise, multiplicative noise, and salt and pepper noise.
3. The chip positioning method based on the YOLOv3-tiny algorithm as claimed in claim 1 or 2, wherein in step 1), LabelImg software is used to label the pre-processed image.
4. The chip positioning method based on the YOLOv3-tiny algorithm as claimed in claim 1, characterized in that in step 2), compared with the YOLOv3 model, the YOLOv3-tiny model omits the residual layers of Darknet-53, uses fewer convolutional layers, and produces outputs at only two positions, finally yielding detection results at two different scales.
5. The chip positioning method based on the YOLOv3-tiny algorithm of claim 1, wherein in the step 2), the loss function of the YOLOv3-tiny model comprises three major parts: coordinate error, confidence error, and classification error, wherein:
the coordinate errors comprise a center coordinate error and a width and height coordinate error;
the confidence error is:
when the picture is input into the neural network, it is divided into \(K \times K\) grids, and each grid generates \(M\) candidate boxes; the parameter \(1_{ij}^{obj}\) indicates whether the \(j\)-th prior box of the \(i\)-th grid is responsible for the target object, equalling 1 if so and 0 otherwise, and the parameter \(1_{ij}^{noobj}\) is its complement; \((x_i, y_i, w_i, h_i)\) represents the position and size of the ground-truth box, and \((\hat{x}_i, \hat{y}_i, \hat{w}_i, \hat{h}_i)\) the position and size of the predicted box; the parameter \(\lambda_{coord}\) is a weighting coefficient set to balance the unequal contribution of differently sized rectangular boxes to the error function; the parameter \(\lambda_{noobj}\) reduces the contribution weight of the no-object terms; the parameter \(\hat{C}_i\) represents the predicted probability score that the box contains a target object, and \(C_i\) the true value; \(p_i(c)\) represents the true value of the category to which the labeled box belongs, and \(\hat{p}_i(c)\) the predicted probability that the box belongs to category \(c\);
the final loss function expression is:

$$
\begin{aligned}
Loss ={}& \lambda_{coord} \sum_{i=0}^{K \times K} \sum_{j=0}^{M} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
&+ \lambda_{coord} \sum_{i=0}^{K \times K} \sum_{j=0}^{M} \mathbb{1}_{ij}^{obj} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
&+ \sum_{i=0}^{K \times K} \sum_{j=0}^{M} \mathbb{1}_{ij}^{obj} \left( C_i - \hat{C}_i \right)^2 + \lambda_{noobj} \sum_{i=0}^{K \times K} \sum_{j=0}^{M} \mathbb{1}_{ij}^{noobj} \left( C_i - \hat{C}_i \right)^2 \\
&+ \sum_{i=0}^{K \times K} \mathbb{1}_{i}^{obj} \sum_{c \in classes} \left( p_i(c) - \hat{p}_i(c) \right)^2
\end{aligned}
$$
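As a non-claimed illustration, the three error terms of claim 5 can be sketched in NumPy as follows; the tensor layout (K×K grid, M boxes per cell, channels ordered x, y, w, h, confidence, classes) and the default λ values are assumptions chosen for the demonstration:

```python
import numpy as np

def yolo_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Simplified YOLO-style loss over a K x K grid with M boxes per cell.

    pred, truth: arrays of shape (K, K, M, 5 + C) holding
    (x, y, w, h, confidence, class probabilities), with w, h > 0.
    obj_mask: boolean (K, K, M), True where box j of cell i is
    responsible for an object (the 1_ij^obj indicator).
    """
    noobj_mask = ~obj_mask
    # Center-coordinate error (x, y), counted only for responsible boxes
    xy_err = np.sum(obj_mask[..., None] * (pred[..., 0:2] - truth[..., 0:2]) ** 2)
    # Width/height error on square roots, damping the dominance of large boxes
    wh_err = np.sum(obj_mask[..., None] *
                    (np.sqrt(pred[..., 2:4]) - np.sqrt(truth[..., 2:4])) ** 2)
    # Confidence error, split into object and down-weighted no-object parts
    conf_sq = (pred[..., 4] - truth[..., 4]) ** 2
    conf_err = np.sum(obj_mask * conf_sq) + lambda_noobj * np.sum(noobj_mask * conf_sq)
    # Classification error over boxes that contain an object
    cls_err = np.sum(obj_mask[..., None] * (pred[..., 5:] - truth[..., 5:]) ** 2)
    return lambda_coord * (xy_err + wh_err) + conf_err + cls_err
```

The loss is zero when predictions match the ground truth exactly and grows with any coordinate, confidence, or class deviation, mirroring the three-part decomposition above.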
6. an integrated positioning platform using the chip positioning method based on YOLOv3-tiny algorithm in claim 1, comprising:
the industrial camera is arranged above the chip tray running guide rail and used for collecting chip tray images;
the processor with a monitor receives the chip tray image acquired by the industrial camera, runs the trained YOLOv3-tiny network, locates the chips in the chip tray image, and obtains real-time chip coordinate information;
and the robot with the mechanical gripper grabs the chip from the chip tray according to the chip coordinate provided by the processor.
7. The integrated positioning platform of claim 6, wherein the guide rail is provided with a detent device for fixing the chip tray under the industrial camera.
8. The integrated positioning platform of claim 6, wherein the guide rail divides at its end into a normal channel and a problem channel; if no abnormality occurs in the chip grabbing process, the chip tray passes through the normal channel to the next process; otherwise, the chip tray is guided into the problem channel.
9. The integrated positioning platform of claim 6, wherein when the chip tray travels under the industrial camera along the guide rail, the industrial camera sends the acquired chip tray image to the processor for image processing; the trained YOLOv3-tiny network in the processor locates the chips in the acquired chip tray image and obtains real-time chip coordinate information; and the robot grabs the chips according to the chip coordinates provided by the processor.
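A minimal sketch of the hand-off described in claim 9 is converting the chip center found by the network from image pixels into robot workspace coordinates. The pure scale-and-offset mapping below is an assumption for illustration (it presumes a calibrated camera looking straight down at the tray with no rotation); a production cell would use a full camera-to-robot calibration:

```python
def pixel_to_robot(u, v, scale_mm_per_px, origin_mm):
    """Map a detected chip center (u, v) in image pixels to robot
    workspace coordinates in millimetres.

    scale_mm_per_px: millimetres covered by one pixel (from calibration).
    origin_mm: (x, y) robot coordinates of the image's (0, 0) pixel.
    """
    x = origin_mm[0] + u * scale_mm_per_px
    y = origin_mm[1] + v * scale_mm_per_px
    return (x, y)
```

For example, with a 0.2 mm/px scale and the image origin at (10.0, 5.0) mm, a chip detected at pixel (100, 50) maps to the grab point (30.0, 15.0) mm.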
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011014606.2A CN112233175B (en) | 2020-09-24 | 2020-09-24 | Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112233175A true CN112233175A (en) | 2021-01-15 |
CN112233175B CN112233175B (en) | 2023-10-24 |
Family
ID=74107070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011014606.2A Active CN112233175B (en) | 2020-09-24 | 2020-09-24 | Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112233175B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113505808A (en) * | 2021-06-05 | 2021-10-15 | 北京超维世纪科技有限公司 | Detection and identification algorithm for power distribution facility switch based on deep learning |
CN114638829A (en) * | 2022-05-18 | 2022-06-17 | 安徽数智建造研究院有限公司 | Anti-interference training method of tunnel lining detection model and tunnel lining detection method |
CN115201667A (en) * | 2022-09-15 | 2022-10-18 | 武汉普赛斯电子技术有限公司 | Method and device for calibrating and positioning semiconductor laser chip and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109325418A (en) * | 2018-08-23 | 2019-02-12 | 华南理工大学 | Based on pedestrian recognition method under the road traffic environment for improving YOLOv3 |
CN110807429A (en) * | 2019-10-23 | 2020-02-18 | 西安科技大学 | Construction safety detection method and system based on tiny-YOLOv3 |
CN110929577A (en) * | 2019-10-23 | 2020-03-27 | 桂林电子科技大学 | Improved target identification method based on YOLOv3 lightweight framework |
CN111401148A (en) * | 2020-02-27 | 2020-07-10 | 江苏大学 | Road multi-target detection method based on improved multilevel YO L Ov3 |
WO2020181685A1 (en) * | 2019-03-12 | 2020-09-17 | 南京邮电大学 | Vehicle-mounted video target detection method based on deep learning |
Non-Patent Citations (1)
Title |
---|
Liu, Jun; Zhang, Wenfeng: "Highway Fire Detection Based on the YOLOv3 Algorithm", Journal of Shanghai Ship and Shipping Research Institute, no. 04 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112233175A (en) | Chip positioning method based on YOLOv3-tiny algorithm and integrated positioning platform | |
Yang et al. | Real-time tiny part defect detection system in manufacturing using deep learning | |
US20200292462A1 (en) | Surface defect detection system and method thereof | |
Richter et al. | On the development of intelligent optical inspections | |
CN110930390B (en) | Chip pin missing detection method based on semi-supervised deep learning | |
CN107992881A (en) | A kind of Robotic Dynamic grasping means and system | |
WO2022227424A1 (en) | Method, apparatus and device for detecting multi-scale appearance defects of ic package carrier plate, and medium | |
Guo et al. | Research of the machine vision based PCB defect inspection system | |
Bai et al. | Corner point-based coarse–fine method for surface-mount component positioning | |
Wang et al. | Attention-based deep learning for chip-surface-defect detection | |
KR20210020065A (en) | Systems and methods for finding and classifying patterns in images with vision systems | |
KR20200099977A (en) | Image generating apparatus, inspection apparatus, and image generating method | |
Liao et al. | Guidelines of automated optical inspection (AOI) system development | |
Zhang et al. | Multi-scale defect detection of printed circuit board based on feature pyramid network | |
Yixuan et al. | Aeroengine blade surface defect detection system based on improved faster RCNN | |
CN114136975A (en) | Intelligent detection system and method for surface defects of microwave bare chip | |
Sun et al. | Cascaded detection method for surface defects of lead frame based on high-resolution detection images | |
CN109816634A (en) | Detection method, model training method, device and equipment | |
CN115205926A (en) | Lightweight robust face alignment method and system based on multitask learning | |
Liu et al. | A novel subpixel industrial chip detection method based on the dual-edge model for surface mount equipment | |
CN112101060B (en) | Two-dimensional code positioning method based on translation invariance and small-area template matching | |
Klco et al. | Automated detection of soldering splashes using YOLOv5 algorithm | |
Abbas | Recovering homography from camera captured documents using convolutional neural networks | |
Blanz et al. | Image analysis methods for solderball inspection in integrated circuit manufacturing | |
Xiang | Industrial automatic assembly technology based on machine vision recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||