CN112233175B - Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm - Google Patents


Info

Publication number
CN112233175B
CN112233175B (application CN202011014606.2A)
Authority
CN
China
Prior art keywords
chip
image
yolov3
tiny
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011014606.2A
Other languages
Chinese (zh)
Other versions
CN112233175A (en)
Inventor
张新曼
程昭晖
张家钰
寇杰
王静静
彭羽瑞
毛乙舒
陆罩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202011014606.2A priority Critical patent/CN112233175B/en
Publication of CN112233175A publication Critical patent/CN112233175A/en
Application granted granted Critical
Publication of CN112233175B publication Critical patent/CN112233175B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30148Semiconductor; IC; Wafer

Abstract

The invention discloses a chip positioning method and an integrated positioning platform based on the YOLOv3-tiny algorithm, comprising the following steps: 1) chip tray images are collected and preprocessed, and the number and variety of pictures in the deep-learning data set are expanded by: 1.1) rotating the pictures; 1.2) adjusting the exposure; 1.3) adding noise; 2) the labeled data set is input into the YOLOv3-tiny model to train the network; 3) when a chip tray moves along the guide rail to a position below the industrial camera, the camera sends the acquired chip tray image to the processor for image processing; 4) the trained YOLOv3-tiny network in the integrated positioning platform locates the chips in the chip tray image and obtains real-time chip coordinate information; 5) the robot grabs the chips according to the coordinates provided by the vision system. The method has good robustness, can output pixel-level coordinates of multiple targets simultaneously, and its full-angle processing time is at the millisecond level, meeting the time requirements of actual production.

Description

Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm
Technical Field
The invention belongs to the technical field of chip positioning and deep learning, and particularly relates to a chip positioning method based on the YOLOv3-tiny algorithm, an integrated positioning platform, and its positioning method.
Background
With the development of integrated circuit technology, a single wafer yields more and more chips, and the cost of a bare die accounts for an ever smaller share of the whole path from integrated circuit design to the application end. At the same time, the growth in chip functions and package pins makes integrated circuit packaging more challenging, and packaging accounts for an ever larger share of the cost. Reducing the failure rate during packaging is therefore becoming increasingly important, and picking up the bare die, the first step of chip packaging, is especially critical. To pick up a chip accurately, precise chip positioning is the key problem to be solved.
At present, several foreign institutions have developed image-processing software with good performance that is widely used in industrial production practice and achieves good robustness, but its license fees are extremely high and hard for individual developers to bear, and domestic companies must pay even more for commercial use. Positioning algorithms published domestically have improved markedly over earlier work, but their performance still falls well short of commercial software and cannot simultaneously achieve the high precision and high real-time performance actually required by chip production. Literature or news reports on applying deep-learning-based target detection and positioning to actual chip production are very rare in the industry, so this research is quite pioneering.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a chip positioning method based on the YOLOv3-tiny algorithm, an integrated positioning platform, and its positioning method. Oriented toward industrial production practice, it uses machine vision, deep learning and related theories to position and grab chips, realizing fast and accurate chip positioning in the chip packaging process.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the chip positioning method based on the YOLOv3-tiny algorithm comprises the following steps:
step 1), acquiring chip disc images, performing image preprocessing to expand the number and the types of picture sets, and labeling the preprocessed images;
step 2), inputting the data set marked in the step 1 into a Yolov3-tiny model to train the Yolov3-tiny network;
and 3) acquiring a chip disc image, and positioning a chip from the chip disc image by utilizing a trained YOLOv3-tiny network to acquire real-time coordinate information of the chip.
In the step 1), the image preprocessing includes:
1.1) rotating the image by different angles, both large and small;
1.2) adjusting the exposure of the image, adding underexposed and overexposed images to the data set;
1.3) adding common image noise, including Gaussian noise, Poisson noise, multiplicative noise, and salt-and-pepper noise.
The expanded image set can effectively cope with conditions that arise in production practice during image acquisition and transmission, such as non-artificial rotation of the chip tray, illumination changes, and noise, thereby providing a unified standard for testing algorithm performance.
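The three expansion steps above can be sketched as follows. This is an illustrative NumPy sketch (the patent's actual implementation is written in C++ with MFC); 90° rotation stands in for arbitrary-angle rotation, and all function names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate90(img, k):
    """Rotate by k * 90 degrees; arbitrary angles would use an affine warp."""
    return np.rot90(img, k)

def adjust_exposure(img, gain):
    """gain < 1 simulates underexposure, gain > 1 overexposure."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def add_gaussian_noise(img, sigma=10.0):
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper(img, amount=0.01):
    """Set a small random fraction of pixels to pure black or pure white."""
    out = img.copy()
    mask = rng.random(img.shape[:2])
    out[mask < amount / 2] = 0          # pepper
    out[mask > 1 - amount / 2] = 255    # salt
    return out

# a stand-in chip-tray image; in practice this comes from the industrial camera
img = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
augmented = [rotate90(img, 1),
             adjust_exposure(img, 0.5),   # underexposed
             adjust_exposure(img, 1.6),   # overexposed
             add_gaussian_noise(img),
             add_salt_pepper(img)]
```

Poisson and multiplicative noise would be added analogously (`rng.poisson`, pixel-wise multiplication by a random field).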
In the step 1), the preprocessed image is marked by using LabelImg software.
In the step 2), compared with the YOLOv3 model, the YOLOv3-tiny model does not use the residual layers of Darknet-53, reduces the number of convolution layers, and produces output at only two positions, finally obtaining detection results at two different scales.
In the step 2), the loss function of the YOLOv3-tiny model includes three major parts: coordinate error, confidence error, and classification error, wherein:
the coordinate error includes the center coordinate error and the width-height coordinate error.
The center coordinate error is:

$$E_{xy}=\lambda_{coord}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\left[(x_{i}-\hat{x}_{i})^{2}+(y_{i}-\hat{y}_{i})^{2}\right]$$

The width-height coordinate error is:

$$E_{wh}=\lambda_{coord}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\left[(\sqrt{w_{i}}-\sqrt{\hat{w}_{i}})^{2}+(\sqrt{h_{i}}-\sqrt{\hat{h}_{i}})^{2}\right]$$

The confidence error is:

$$E_{conf}=\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}(C_{i}-\hat{C}_{i})^{2}+\lambda_{noobj}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{noobj}(C_{i}-\hat{C}_{i})^{2}$$

The classification error is:

$$E_{cls}=\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\sum_{c\in classes}\left(p_{i}(c)-\hat{p}_{i}(c)\right)^{2}$$

When a picture is input into the neural network, it is divided into K×K grids, and each grid generates M candidate boxes. The parameter $I_{ij}^{obj}$ indicates whether the j-th prior box of the i-th grid is responsible for the object (1 if responsible, otherwise 0); the parameter $I_{ij}^{noobj}$ equals $1-I_{ij}^{obj}$; $(x_{i},y_{i},w_{i},h_{i})$ denotes the position and size of the ground-truth box; $(\hat{x}_{i},\hat{y}_{i},\hat{w}_{i},\hat{h}_{i})$ denotes the position and size of the predicted box; the parameter $\lambda_{coord}$ is a coordination coefficient that balances the unequal contributions of rectangular boxes of different sizes to the error function; the parameter $\lambda_{noobj}$ is a weight set to reduce the contribution of the no-object terms; the parameter $C_{i}$ is the probability score that the predicted box contains the target object; $\hat{C}_{i}$ is the corresponding true value; $\hat{p}_{i}(c)$ is the true value of the category to which the labeled box belongs; and $p_{i}(c)$ is the probability that the predicted box belongs to category c.
The final loss function expression is:

$$Loss=E_{xy}+E_{wh}+E_{conf}+E_{cls}$$
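A minimal NumPy sketch of this loss may help fix the notation. The structure follows the four error terms above; the default weights λ_coord = 5 and λ_noobj = 0.5 are the common YOLO choices and are assumptions here, not values stated in the patent.

```python
import numpy as np

def yolo_tiny_loss(pred, true, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Sum-of-squares YOLO loss over all K*K*M prior boxes.

    pred, true: (N, 5 + C) rows of [x, y, w, h, confidence, p(c)...];
    obj_mask:   (N,) with 1 where the j-th prior box of the i-th grid is
                responsible for an object (the I_ij^obj indicator), else 0.
    Widths and heights are assumed nonnegative (their square roots are taken).
    """
    obj = obj_mask
    noobj = 1.0 - obj_mask                       # I_ij^noobj = 1 - I_ij^obj
    e_xy = lambda_coord * np.sum(obj * ((pred[:, 0] - true[:, 0]) ** 2 +
                                        (pred[:, 1] - true[:, 1]) ** 2))
    e_wh = lambda_coord * np.sum(obj * ((np.sqrt(pred[:, 2]) - np.sqrt(true[:, 2])) ** 2 +
                                        (np.sqrt(pred[:, 3]) - np.sqrt(true[:, 3])) ** 2))
    e_conf = (np.sum(obj * (pred[:, 4] - true[:, 4]) ** 2) +
              lambda_noobj * np.sum(noobj * (pred[:, 4] - true[:, 4]) ** 2))
    e_cls = np.sum(obj * np.sum((pred[:, 5:] - true[:, 5:]) ** 2, axis=1))
    return e_xy + e_wh + e_conf + e_cls
```

A perfect prediction yields zero loss, and a 0.1 offset in a responsible box's x coordinate contributes λ_coord · 0.1² = 0.05.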
the invention also provides an integrated positioning platform utilizing the chip positioning method based on the YOLOv3-tiny algorithm, which comprises the following steps:
the industrial camera is arranged above the chip disc running guide rail and used for collecting chip disc images;
a processor with a monitor receives a chip disc image acquired by an industrial camera, carries the trained YOLOv3-tiny network, positions a chip in the chip disc image and acquires chip real-time coordinate information;
and the robot with the mechanical gripper grabs the chip from the chip tray according to the chip coordinates provided by the processor.
The guide rail is provided with a clamping device below the industrial camera for fixing the chip tray.
The guide rail is divided into a normal channel and a problem channel at its end: if no abnormality occurs while the chips are being grabbed, the chip tray proceeds to the next process through the normal channel; otherwise it is guided into the problem channel.
Using this integrated positioning platform:
when the chip tray moves to the lower part of the industrial camera through the guide rail, the industrial camera sends the acquired chip tray image to the processor for image processing;
positioning a chip in the acquired chip disc image by using a trained YOLOv3-tiny network in a processor to acquire real-time coordinate information of the chip;
the robot grabs the chip according to the chip coordinates provided by the processor.
Compared with the prior art, the invention has the beneficial effects that:
1) The algorithm adopted by the invention is highly insensitive to linear changes in illumination, i.e., brightness changes have no obvious influence on its matching results.
2) The invention also shows good robustness under nonlinear illumination.
3) When the image is noisy, defocused, or disturbed, the algorithm adopted by the invention shows good noise and interference resistance.
4) The positioning precision of the invention is at the pixel level, fully meeting the precision actually required by production. This avoids improving positioning accuracy by raising the resolution of the image-acquisition equipment and keeps the overall cost well under control.
5) The YOLOv3-tiny network used by the invention maintains good performance during image processing: the processing time is between 3.5 ms and 3.8 ms on a flagship RTX 2080 Ti 11GB graphics card, and can be kept around 6 ms on an economical GTX 1060 3GB, fully meeting the strict processing-time requirements of production practice.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a YOLOv3-tiny network structure.
FIG. 3 is a graph of loss function versus accuracy.
Fig. 4 is a schematic diagram of a chip positioning and gripping apparatus.
Fig. 5 is a schematic of a software design framework and idea.
FIG. 6 is a chip positioning application program interface based on deep learning.
Fig. 7 is a schematic diagram of the algorithm matching result.
FIG. 8 shows the robustness test results of the YOLOv3-tiny algorithm under linear illumination changes, where: (a) original illumination, no rotation; (b) enhanced illumination, no rotation; (c) enhanced illumination, with rotation; (d) reduced illumination, with rotation.
FIG. 9 shows the robustness test results of the YOLOv3-tiny algorithm under nonlinear illumination conditions, where: (a) the chip center is shadowed; (b) additional surrounding shadow is added compared with (a).
FIG. 10 shows the robustness test results of the YOLOv3-tiny algorithm under noise, defocus, and interference conditions, where: (a) color noise; (b) defocus; (c) interference.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the chip positioning method based on the YOLOv3-tiny algorithm provided by the invention comprises the following steps:
step 1), acquiring chip disc images, performing image preprocessing to expand the number and the types of picture sets, and marking the preprocessed images by using LabelImg and other similar software; specifically:
the image preprocessing comprises the following steps:
1.1) rotating the image by different angles, both large and small;
1.2) adjusting the exposure of the image, adding underexposed and overexposed images to the data set;
1.3) adding common image noise, including Gaussian noise, Poisson noise, multiplicative noise, and salt-and-pepper noise.
The expanded image set can effectively cope with conditions that arise in production practice during image acquisition and transmission, such as non-artificial rotation of the chip tray, illumination changes, and noise, thereby providing a unified standard for testing algorithm performance.
Step 2), inputting the data set labeled in step 1 into the YOLOv3-tiny model to train the YOLOv3-tiny network. Compared with the YOLOv3 model, the YOLOv3-tiny model does not use the residual layers of Darknet-53, reduces the number of convolution layers, and produces output at only two positions, finally obtaining detection results at two different scales.
The loss function of the YOLOv3-tiny model includes three major parts: coordinate error, confidence error, and classification error, wherein:
the coordinate error includes the center coordinate error and the width-height coordinate error.
The center coordinate error is:

$$E_{xy}=\lambda_{coord}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\left[(x_{i}-\hat{x}_{i})^{2}+(y_{i}-\hat{y}_{i})^{2}\right]$$

The width-height coordinate error is:

$$E_{wh}=\lambda_{coord}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\left[(\sqrt{w_{i}}-\sqrt{\hat{w}_{i}})^{2}+(\sqrt{h_{i}}-\sqrt{\hat{h}_{i}})^{2}\right]$$

The confidence error is:

$$E_{conf}=\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}(C_{i}-\hat{C}_{i})^{2}+\lambda_{noobj}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{noobj}(C_{i}-\hat{C}_{i})^{2}$$

The classification error is:

$$E_{cls}=\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\sum_{c\in classes}\left(p_{i}(c)-\hat{p}_{i}(c)\right)^{2}$$

When a picture is input into the neural network, it is divided into K×K grids, and each grid generates M candidate boxes. The parameter $I_{ij}^{obj}$ indicates whether the j-th prior box of the i-th grid is responsible for the object (1 if responsible, otherwise 0); the parameter $I_{ij}^{noobj}$ equals $1-I_{ij}^{obj}$; $(x_{i},y_{i},w_{i},h_{i})$ denotes the position and size of the ground-truth box; $(\hat{x}_{i},\hat{y}_{i},\hat{w}_{i},\hat{h}_{i})$ denotes the position and size of the predicted box; the parameter $\lambda_{coord}$ is a coordination coefficient that balances the unequal contributions of rectangular boxes of different sizes to the error function; the parameter $\lambda_{noobj}$ is a weight set to reduce the contribution of the no-object terms; the parameter $C_{i}$ is the probability score that the predicted box contains the target object; $\hat{C}_{i}$ is the corresponding true value; $\hat{p}_{i}(c)$ is the true value of the category to which the labeled box belongs; and $p_{i}(c)$ is the probability that the predicted box belongs to category c.
The final loss function expression is:

$$Loss=E_{xy}+E_{wh}+E_{conf}+E_{cls}$$
and 3) acquiring a chip disc image, and positioning a chip from the chip disc image by utilizing a trained YOLOv3-tiny network to acquire real-time coordinate information of the chip.
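Step 3's conversion from network output to real-time pixel coordinates can be illustrated with a grid-decoding sketch. The decoding convention below is the standard YOLO one, and the function and variable names are hypothetical; this is not the patent's C++ implementation.

```python
import numpy as np

def decode_centers(grid_pred, grid_size, img_size, conf_thresh=0.5):
    """Turn a (K, K, 5) grid of [tx, ty, tw, th, conf] predictions
    (tx, ty, conf already sigmoid-activated) into pixel-level chip centers."""
    stride = img_size / grid_size          # e.g. 832 / 52 = 16 pixels per cell
    centers = []
    for gy in range(grid_size):
        for gx in range(grid_size):
            tx, ty, _, _, conf = grid_pred[gy, gx]
            if conf >= conf_thresh:        # keep confident detections only
                centers.append(((gx + tx) * stride, (gy + ty) * stride))
    return centers

# toy example: one confident detection in cell (row 2, column 3)
pred = np.zeros((52, 52, 5))
pred[2, 3] = [0.5, 0.5, 0.0, 0.0, 0.9]
chip_centers = decode_centers(pred, 52, 832)
```

A real pipeline would decode per-anchor box offsets and apply non-maximum suppression before handing coordinates to the robot; only the center computation is shown here.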
Referring to fig. 2, when the input image size of the YOLOv3-tiny model is 832×832×3, a 52×52×256 tensor T1 is obtained through 5 convolution layers and 4 pooling layers; a 26×26×256 tensor T2 is obtained through 3 further convolution layers and 2 pooling layers; a 52×52×128 tensor T3 is obtained from T2 by convolution and upsampling; a 52×52×384 tensor T4 is obtained by concatenating T1 with T3; and two convolution operations are performed on T2 and T4, respectively, to obtain a 26×26×18 tensor T5 and a 52×52×18 tensor T6. T5 and T6 are the tensors used for YOLO detection, giving the final output result.
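The tensor sizes quoted above can be checked with simple shape arithmetic, assuming the standard YOLOv3-tiny conventions (3×3 stride-1 convolutions with 'same' padding, 2×2 stride-2 max-pooling, and a final stride-1 pooling layer that keeps the size); these conventions are an assumption, not stated in the patent.

```python
def conv_same(size, kernel=3, stride=1):
    """3x3 convolution with 'same' padding keeps the spatial size at stride 1."""
    pad = kernel // 2
    return (size + 2 * pad - kernel) // stride + 1

def pool2(size):
    """2x2 max-pooling with stride 2 halves the spatial size."""
    return size // 2

size = 832
for _ in range(4):                 # convolutions keep size; 4 poolings halve it
    size = pool2(conv_same(size))
t1 = size                          # T1 spatial size: 832 / 2**4 = 52

# one more stride-2 pooling (a final stride-1 pooling would keep the size)
t2 = pool2(conv_same(t1))          # T2 spatial size: 26
t3 = 2 * t2                        # upsampling doubles it back to 52 (T3)

# output depth: 3 anchors x (4 box coordinates + 1 confidence + 1 chip class)
depth = 3 * (4 + 1 + 1)
```

The depth of 18 in T5 and T6 is consistent with a single "chip" class, which matches the task described.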
Referring to fig. 3, the loss function shows an overall downward trend with fluctuations: its value is below 2.0 after 3200 iterations, below 1.5 after 6400 iterations, and about 1.25 after 8000 iterations. In addition, the final accuracy of the network after 8000 training iterations is about 99.7%.
Based on the above positioning method, the present invention further provides an integrated positioning platform, referring to fig. 4 and fig. 1, which includes:
the industrial camera 1 is arranged above the chip tray 2 running guide rail 3 to collect chip tray images, wherein the guide rail 3 is provided with a clamping device 4 for fixing the chip tray 2 at the position of a collecting lens below the industrial camera 1, and the guide rail 3 is divided into a normal channel 5 and a problem channel 6 at the tail end;
a processor 7 with a monitor receives the chip disc image acquired by the industrial camera 1, carries a trained YOLOv3-tiny network, positions a chip 8 in the chip disc image and acquires real-time coordinate information of the chip 8;
a robot 9 with a mechanical gripper grips the chip 8 from the chip tray 2 according to the chip coordinates provided by the processor 7.
The hardware structure shown in fig. 4 is a set of chip image acquisition, guiding, positioning and grabbing equipment designed for industrial production practice. The industrial camera 1 is fixed on the supporting arm through a bracket, and its height, lens spacing and the like can be adjusted by knobs. The chip tray 2 is transported by the guide rail 3 and stops when it runs under the robot 9; the clamping device 4 then fixes the chip tray 2, after which the robot 9 grabs the chips 8 according to the coordinates provided by the vision system. After all chips 8 on the tray have been grabbed, the guide rail 3 continues to run, sending the next chip tray 2 to the robot 9 and under the industrial camera 1. If no abnormality occurs during operation, the chip tray 2 goes to the next process through the normal channel 5; otherwise the export mechanism 10, connected to the computer, controls the junction of the guide rail 3 and guides the chip tray 2 into the problem channel 6.
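The handling sequence above can be sketched as a simple control loop. All interfaces here (Tray, detect, grab) are hypothetical stand-ins for the platform's camera, vision system, and robot; they are not the patent's API.

```python
from dataclasses import dataclass, field

@dataclass
class Tray:
    chips: list = field(default_factory=list)   # chip center coordinates (px)
    abnormal: bool = False

def process_tray(tray, detect, grab):
    """detect: tray -> list of chip coordinates (the vision system);
    grab: coordinate -> bool success (the robot). Returns the exit channel."""
    for xy in detect(tray):          # tray is clamped; each chip is located
        if not grab(xy):             # robot grabs chips one by one
            tray.abnormal = True     # any failure flags the tray
    return "problem" if tray.abnormal else "normal"

# toy run: two chips, a gripper that always succeeds
route = process_tray(Tray(chips=[(56.0, 40.0), (120.0, 40.0)]),
                     detect=lambda t: t.chips,
                     grab=lambda xy: True)
```

Routing a tray to the problem channel on any failed grab mirrors the export-mechanism behavior described for the guide rail.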
The algorithm implementation mainly involves designing the algorithm, writing and debugging it in C++ in Visual Studio, and building the software interaction interface with MFC. The interface designed by the invention mainly comprises a data acquisition module, a template selection module, a module for selecting the image to be searched, an image processing module (chip positioning based on YOLOv3), a result data display module, and a result visualization module.
The architecture composition of the integrated rapid chip positioning platform is shown in fig. 5, and the specific operation steps are as follows:
1) FIG. 6 is the application initialization interface of the invention. The main application interface mainly comprises the following modules: a "select picture" button, a "load template" button, an "image pyramid" button, a "trigger (on/off)" button, an "exit" button, a "YOLO match (picture)" button and a "YOLO match (video)" button, as well as a template image display area, a result data display area, and a display area for the picture to be searched.
2) Clicking the "YOLO match (picture)" button detects the picture. The detection result is shown in fig. 7: the right-hand area boxes all detected chips, the coordinate values of each chip are output in the cmd command line on the left, and the time spent processing the picture is 3.60500 ms.
3) Clicking the "YOLO match (video)" button then detects the video stream based on the deep learning algorithm, presenting results similar to those of fig. 7.
The invention tests the positioning robustness, positioning precision and positioning time of the YOLOv3-tiny algorithm under the hardware configuration shown in the table 1, and the results are as follows.
Table 1 experiment platform hardware configuration
1) Positioning robustness test
1.1 Linear change of illumination
Experiments were carried out on different chips under many different illumination conditions and rotation angles. The results show that the proposed algorithm is highly insensitive to linear changes in illumination, i.e., brightness changes have no obvious influence on its matching results. Fig. 8 (a), (b), (c) and (d) show the matching results of the algorithm.
1.2) Nonlinear illumination
When chips are precisely positioned, ideal uniform illumination is difficult to guarantee even with special lighting equipment, so the proposed algorithm should also be insensitive to nonlinear illumination. Fig. 9 (a) and (b) show the matching results of the algorithm, demonstrating excellent robustness under nonlinear illumination conditions.
1.3) Noise, defocus and interference
In actual production, lines may be unstable during image acquisition and transmission for various reasons, voltage fluctuations may make illumination unstable, and changes in the operating environment, such as ambient light changes and vibrations, may leave the processed image noisy or defocused. The robustness of an algorithm is largely reflected in image processing under such factors. Fig. 10 (a), (b) and (c) show the matching results under noise, defocus, and interference; it can be seen that the YOLOv3-tiny algorithm also has good noise and interference resistance in these cases.
2) Positioning accuracy test
The greatest advantage of the YOLOv3-tiny algorithm is that it can output the coordinates of multiple targets simultaneously; the processed data results for fig. 8 (a) are shown in Table 2.
TABLE 2 Yolov3-tiny Algorithm for data processing results of FIG. 8 (a)
As can be seen from Table 2 and fig. 8 (a), the positioning accuracy of the YOLOv3-tiny algorithm is at the pixel level. Although it does not output a rotation angle, it can still accurately match a rotated target and draw its bounding rectangle, so the algorithm's accuracy is guaranteed for the initial coarse positioning.
3) Positioning time test
In practice the processing-time requirements are very demanding: for a 640×480 picture, full-angle (±180°) processing must reach the millisecond level, i.e., a single picture must be processed in under 10 ms. The YOLOv3-tiny algorithm maintains good performance when processing pictures: the processing time is between 3.5 ms and 3.8 ms on a flagship RTX 2080 Ti 11GB graphics card and can be kept around 6 ms on an economical GTX 1060 3GB, fully meeting the required time budget. Considering the roughly order-of-magnitude difference between the GTX 1060 3GB and the RTX 2080 Ti 11GB, the former is clearly the better choice for actual industrial deployment.
The above embodiments are merely preferred examples of the present invention and are not intended to limit the present invention, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (7)

1. A chip positioning method based on a YOLOv3-tiny algorithm is characterized by comprising the following steps:
step 1), acquiring chip disc images, performing image preprocessing to expand the number and the types of picture sets, and labeling the preprocessed images;
step 2), inputting the data set labeled in step 1 into the YOLOv3-tiny model to train the YOLOv3-tiny network; compared with the YOLOv3 model, the YOLOv3-tiny model does not use the residual layers of Darknet-53, reduces the number of convolution layers, and produces output at only two positions, finally obtaining detection results at two different scales;
the loss function of the YOLOv3-tiny model includes three major parts: coordinate error, confidence error, and classification error, wherein:
the coordinate error includes the center coordinate error and the width-height coordinate error;
the center coordinate error is:

$$E_{xy}=\lambda_{coord}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\left[(x_{i}-\hat{x}_{i})^{2}+(y_{i}-\hat{y}_{i})^{2}\right]$$

the width-height coordinate error is:

$$E_{wh}=\lambda_{coord}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\left[(\sqrt{w_{i}}-\sqrt{\hat{w}_{i}})^{2}+(\sqrt{h_{i}}-\sqrt{\hat{h}_{i}})^{2}\right]$$

the confidence error is:

$$E_{conf}=\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}(C_{i}-\hat{C}_{i})^{2}+\lambda_{noobj}\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{noobj}(C_{i}-\hat{C}_{i})^{2}$$

the classification error is:

$$E_{cls}=\sum_{i=0}^{K^{2}-1}\sum_{j=0}^{M-1}I_{ij}^{obj}\sum_{c\in classes}\left(p_{i}(c)-\hat{p}_{i}(c)\right)^{2}$$

when a picture is input into the neural network, it is divided into K×K grids, and each grid generates M candidate boxes; the parameter $I_{ij}^{obj}$ indicates whether the j-th prior box of the i-th grid is responsible for the object (1 if responsible, otherwise 0); the parameter $I_{ij}^{noobj}$ equals $1-I_{ij}^{obj}$; $(x_{i},y_{i},w_{i},h_{i})$ denotes the position and size of the ground-truth box; $(\hat{x}_{i},\hat{y}_{i},\hat{w}_{i},\hat{h}_{i})$ denotes the position and size of the predicted box; the parameter $\lambda_{coord}$ is a coordination coefficient that balances the unequal contributions of rectangular boxes of different sizes to the error function; the parameter $\lambda_{noobj}$ is a weight set to reduce the contribution of the no-object terms; the parameter $C_{i}$ is the probability score that the predicted box contains the target object; $\hat{C}_{i}$ is the corresponding true value; $\hat{p}_{i}(c)$ is the true value of the category to which the labeled box belongs; and $p_{i}(c)$ is the probability that the predicted box belongs to category c;
the final loss function expression is:

$$Loss=E_{xy}+E_{wh}+E_{conf}+E_{cls}$$
and 3) acquiring a chip disc image, and positioning a chip from the chip disc image by utilizing a trained YOLOv3-tiny network to acquire real-time coordinate information of the chip.
2. The chip positioning method based on the YOLOv3-tiny algorithm according to claim 1, wherein in step 1), the image preprocessing includes:
1.1) rotating the image by different angles;
1.2) adjusting the exposure of the image, adding underexposed and overexposed images to the data set;
1.3) adding common image noise, including Gaussian noise, Poisson noise, multiplicative noise, and salt-and-pepper noise.
3. The chip positioning method based on the YOLOv3-tiny algorithm according to claim 1 or 2, wherein in step 1), the preprocessed image is labeled with LabelImg software.
4. An integrated positioning platform using the YOLOv3-tiny algorithm-based chip positioning method of claim 1, comprising:
the industrial camera is arranged above the chip disc running guide rail and used for collecting chip disc images;
a processor with a monitor receives a chip disc image acquired by an industrial camera, carries the trained YOLOv3-tiny network, positions a chip in the chip disc image and acquires chip real-time coordinate information;
and the robot with the mechanical gripper grabs the chip from the chip tray according to the chip coordinates provided by the processor.
5. The integrated positioning platform of claim 4, wherein the guide rail is provided with a clamping device for fixing the chip tray below the industrial camera.
6. The integrated positioning platform of claim 4, wherein the guide rail is divided into a normal channel and a problem channel at the end, if no abnormality occurs in the process of grabbing the chip, the chip tray will go to the next process through the normal channel, otherwise, the chip tray is guided into the problem channel.
7. The integrated positioning platform of claim 4, wherein the industrial camera sends the acquired chip tray image to the processor for image processing when the chip tray moves below the industrial camera through the guide rail; positioning a chip in the acquired chip disc image by using a trained YOLOv3-tiny network in a processor to acquire real-time coordinate information of the chip; the robot grabs the chip according to the chip coordinates provided by the processor.
CN202011014606.2A 2020-09-24 2020-09-24 Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm Active CN112233175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011014606.2A CN112233175B (en) 2020-09-24 2020-09-24 Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011014606.2A CN112233175B (en) 2020-09-24 2020-09-24 Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm

Publications (2)

Publication Number Publication Date
CN112233175A CN112233175A (en) 2021-01-15
CN112233175B true CN112233175B (en) 2023-10-24

Family

ID=74107070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011014606.2A Active CN112233175B (en) 2020-09-24 2020-09-24 Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm

Country Status (1)

Country Link
CN (1) CN112233175B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505808A (en) * 2021-06-05 2021-10-15 北京超维世纪科技有限公司 Detection and identification algorithm for power distribution facility switch based on deep learning
CN114638829A (en) * 2022-05-18 2022-06-17 安徽数智建造研究院有限公司 Anti-interference training method of tunnel lining detection model and tunnel lining detection method
CN115201667B (en) * 2022-09-15 2022-12-23 武汉普赛斯电子技术有限公司 Method and device for calibrating and positioning semiconductor laser chip and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109325418A (en) * 2018-08-23 2019-02-12 South China University of Technology Pedestrian recognition method in road traffic environments based on improved YOLOv3
CN110807429A (en) * 2019-10-23 2020-02-18 Xi'an University of Science and Technology Construction safety detection method and system based on tiny-YOLOv3
CN110929577A (en) * 2019-10-23 2020-03-27 Guilin University of Electronic Technology Improved target identification method based on YOLOv3 lightweight framework
CN111401148A (en) * 2020-02-27 2020-07-10 Jiangsu University Road multi-target detection method based on improved multilevel YOLOv3
WO2020181685A1 (en) * 2019-03-12 2020-09-17 Nanjing University of Posts and Telecommunications Vehicle-mounted video target detection method based on deep learning

Non-Patent Citations (1)

Title
Highway fire detection based on the YOLOv3 algorithm; Liu Jun; Zhang Wenfeng; Journal of Shanghai Ship and Shipping Research Institute (Issue 04); full text *


Similar Documents

Publication Publication Date Title
CN112233175B (en) Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm
US20190103505A1 (en) Apparatus and method for attaching led chips
US11676257B2 (en) Method and device for detecting defect of meal box, server, and storage medium
CN109816725A Monocular camera object pose estimation method and device based on deep learning
CN110400315A Defect detection method, apparatus and system
CN111652852A (en) Method, device and equipment for detecting surface defects of product
CN111652225B (en) Non-invasive camera shooting and reading method and system based on deep learning
CN107590837A Vision-positioning intelligent precision assembly robot and camera vision calibration method
CN111680594A (en) Augmented reality interaction method based on gesture recognition
WO2022227424A1 Method, apparatus and device for detecting multi-scale appearance defects of IC package carrier plate, and medium
Guo et al. Research of the machine vision based PCB defect inspection system
Bai et al. Corner point-based coarse–fine method for surface-mount component positioning
CN112950667A (en) Video annotation method, device, equipment and computer readable storage medium
CN109816634B (en) Detection method, model training method, device and equipment
CN113011401A (en) Face image posture estimation and correction method, system, medium and electronic equipment
CN112947458B (en) Robot accurate grabbing method based on multi-mode information and computer readable medium
Huang et al. Deep learning object detection applied to defect recognition of memory modules
CN113705564A (en) Pointer type instrument identification reading method
CN115527089A (en) Yolo-based target detection model training method and application and device thereof
CN115035032A (en) Neural network training method, related method, device, terminal and storage medium
CN114138458A (en) Intelligent vision processing system
CN114140612A (en) Method, device, equipment and storage medium for detecting hidden danger of power equipment
CN116705642B (en) Method and system for detecting silver plating defect of semiconductor lead frame and electronic equipment
Le et al. Computer vision–based system for automation and industrial applications
Mukhametshin et al. Sensor tag detection, tracking and recognition for AR application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant