GB2440951A - Edge detection for checking component position on a circuit board - Google Patents


Info

Publication number
GB2440951A
Authority
GB
United Kingdom
Prior art keywords
item
substrate
image
training data
histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0616167A
Other versions
GB0616167D0 (en)
Inventor
Richard Evans
James Mahon
Gareth Bradshaw
Iain Lennox
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MV Res Ltd
Original Assignee
MV Res Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MV Res Ltd filed Critical MV Res Ltd
Priority to GB0616167A
Publication of GB0616167D0
Publication of GB2440951A
Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0006 Industrial image inspection using a design-rule based approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K13/00 Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
    • H05K13/08 Monitoring manufacture of assemblages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Operations Research (AREA)
  • Manufacturing & Machinery (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

An automated optical inspection (AOI) system is used to determine the position of a surface mount technology (SMT) component on a circuit board. Colour images, e.g. RGB, are captured of the component on the circuit board and an image processor analyses regions of interest (ROI), namely an ROI which is expected to include only pixels of the board, and an ROI which is expected to include only pixels of the component. The system performs training on pixels representing the component and the board, and uses this training to detect an edge of the component. This may be done by providing a 3D histogram for the component colour and a separate 3D histogram for the board colour. The edge detection can be used to check the correct positioning of components on the board even when there is low contrast between the component and the board.

Description

<p>"Inspection of Components in SMT"</p>
<p>Introduction</p>
<p>The invention relates to inspection of discrete items such as electronic components on a substrate such as a circuit board.</p>
<p>Quality control on SMT (surface mount technology) lines can be performed by an automated optical inspection machine (AOl). An important function of SMT post reflow quality control is the ability to measure the positional placement error of components on the circuit board. AOl inspection of printed circuit boards works by using a lighting arrangement to illuminate a board and then analysing the resulting images.</p>
<p>Existing AOl machines locate the component by finding edges in i single image plane. While this is simple and fast, it suffers from the disadvantage that clutter on the board such as screen print can provide a stronger edge than that of the component and thus cause mislocations. It is possible to reduce the impact of clutter by taking multiple images (e.g. RGB planes from a colour image) and combining them to produce a single image using a formula that is either fixed or programmed by a user, such as a Hue transform. For some situations, for example where the component and background are both dark, the performance of such a transform can degrade suddenly due to subtle changes in appearance of board and component due to manufacturjng process variability.</p>
<p>The invention is therefore directed towards providing an improved method and system for AOl inspection of discrete items on substrates such as SMT components on a board.</p>
<p>Summary of the Invention</p>
<p>According to the invention, there is provided a method of machine vision inspection of a discrete item on a substrate with low item/substrate contrast to detect edges of the item, the method being carried out by a machine vision inspection system comprising a camera for capturing images of a scene and an image processor, the method comprising the steps of: capturing an image of a scene including the item and surrounding substrate; generating substrate training data for pixels expected to correspond to the substrate; generating item training data for pixels expected to correspond to the item; for at least some pixels of the captured image, analysing the pixel according to the training data to determine if it represents the substrate or the item, and determining an edge of the item according to the analysis.</p>
<p>In one embodiment, a set of plurality of colour images of the scene is captured and processed.</p>
<p>In one embodiment, the training data compriss a histogram.</p>
<p>In one embodiment, a set of plurality of colour images of the scene is captured and processed; and the histogram is multi-dimensional, with a dimension for each image plane of the colour images.</p>
<p>In another embodiment, the training data is generated for each captured image or set of images of a scene.</p>
<p>In one embodiment, the expected locations of the item and substrate are indicated by pre-set regions of interest.</p>
<p>In a further embodiment, there is user interactivity for selecting a region of interest.</p>
<p>In one embodiment, each histogram is generated by incrementing counters according to pixel values of the images.</p>
<p>In one embodiment, each pixel grey level is quantised before incrementing a relevant counter.</p>
<p>In one embodiment, during the analysis step the edge of the item is determined by transforming image pixels to provide a transformed image having improved contrast between the item and the substrate.</p>
<p>In another embodiment, the transformation applies a transformed value to each of the item and the substrate pixels in the transformed image, said transformed value being based on the likelihood that the pixel represents the item or the substrate In one embodiment, the training data comprises a histogram for the substrate and a histogram for the item, and the transforming step comprises determining a pixel value for the transformed image according to comparison between the pixel value and a counter value from each histogram.</p>
<p>In one embodiment, the method is repeated a plurality of times for a single image or set of images with progression of expected locations of the item and substrate.</p>
<p>In a further embodiment, the said repetition takes place if it is expected that the item is offset to a large extent.</p>
<p>In one embodiment, the method comprises the further steps of storing historic training data and subsequently using said historic training data for scenes of a similar type.</p>
<p>In one embodiment, the item is an SMT diode and the substrate is a circuit board.</p>
<p>In one embodiment, the item is a flip-clip and the substrate is a circuit board.</p>
<p>In one embodiment, the item is a ball grid array and the substrate is a circuit board.</p>
<p>In another aspect, the invention provides a machine vision inspection system comprising a camera for capturing images of a scene and an image processor, wherein, the camera captures an image of a scene including an item and a surrounding substrate, and wherein the image processor: generates substrate training data for pixels expected to correspond to the substrate; generates item training data for pixels expected to correspond to the item; for at least some pixels of the captured image, analyses the pixel according to the training data to determine if it represents the substrate or the item, and determines an edge of the item according to the analysis.</p>
<p>In another aspect, the invention provides a computer readable medium comprising software code for implementing image processor steps of any method defined above when executing on a digital processor.</p>
<p>Detailed DescrjDtjon of the Invention The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only with reference to the accompanying drawings in which:-Fig. 1 is a flow diagram illustrating an AOl inspection method of the invention; Fig. 2 shows an original image of the method on the left, and a transformed image of the method on the right; and Figs. 3 to 11 are a sequence of screen shots including images to illustrate implementations of the method in different use case scenarios.</p>
<p>An AOl system operates in a method 1 of Fig. 1 to accurately determine position of an SMT component on a board.</p>
<p>Colour images 2, in this embodiment RGB, are captured of the scene and an image processor of the system retrieves from storage pre-set regions of interest (ROl) 3, namely: an ROl which is expected to include only pixels of the board, and an ROI which is expected to include only pixels of the component.</p>
<p>Pixels within the two ROIs are extracted (4) to provide extracted pixels 5.</p>
<p>The system performs training 6 to provide a 3D histogram 7 for the component colour and a separate 3D histogram 8 for the board colour. In each 3D histogram 7 and 8 there is a dimension for each colour of the captured colour images 2.</p>
<p>In a step 9 the pixels 5 are transformed into a transformed image which has excellent component/substrate contrast. Fig. 2 shows a sample source image on the left and a corresponding transformed image on the right.</p>
<p>S Tuini</p>
<p>The training step 6 uses N dimensional histograms (or Look-up Tables (LUT)) with each colour plane corresponding to a different dimension. A given colour combination e.g. combination of particular RGB values maps to a particular bin in the histogram.</p>
<p>Fig. I shows the use of three colour planes but even better results can be obtained using four colour planes by mixing not just images obtained using lighting at different wavelengths but also different lighting angles.</p>
<p>In more detail, the training step 6 operates as follows: The input is a list of extracted pixels 5 with a pixel intensity for each image plane for each pixel location within the image. The histogram is built up by the system as follows: If training the board histogram 8, it updates count values in the board histogram 8.</p>
<p>If training the body histogram 7, it updates count values in the body histogram 7.</p>
<p>For each pixel location in the region to be trained, the pixel intensities in each image plane (red, green and blue) are combined to decide which counter is to be incremented.</p>
<p>For a 16x16x16 histogram (with 4096 counters) the first step performed by the image processor is to quantise each grey level into a number between 0 and 15.</p>
<p>For example, say there are the intensities red = 128, green = 23, blue 34 -.7- (where grey levels are in the range 0 -255) it divides by 16 to get the coordinates of the counter [8,1,2]. To convert these 3D coordinates to the number of the counter it gets 8*256 + * 16+ 2*1 = 2066. So it then adds I to the value in counter number 2066.</p>
<p>Transformation 9 Once the histograms 7 and 8 have been generated, the image processor then in step 9 performs a transformation by analysing each pixel of the colour images 2 according to the histograms. The entire region surrounding the component is transformed. For each pixel location in the image the two corresponding counts from the board and body histograms are retrieved and combined to form a pixel intensity. For example, if the body count is very much higher than the board count then the pixel will have an intensity of 255. Each pixel within the search region of the component is used to produce a transformed image that indicates which pixels are part of the component body and which are part of the background (board, silkscreen etc.). The pair of histograms 7 and 8 are queried to produce a count that indicates the likelihood that the given pixel is part of the class represented by that histogram (i.e. component or not).</p>
<p>These two counts are then combined to produce a confidence that that point is part of the body. For example if the pixel scores a very high count for being part of the body and a very low count for being part of the board then the transformed image pixel will be very bright (grey scale of 255) indicating a high confidence in the pixel being part of the body.</p>
<p>For each pixel position to be transformed the image processor calculates the number of the counter as for the training 6 from the set of grey level intensities. Then it reads the value from the counter in the board histogram (a) as well as the body histogram (b). The corresponding pixel value in the transformed image is then set to 256*b/(b+a) or 255 whichever is less. If both a and b are zero it sets the pixel value to zero.</p>
<p>The effect of the transformation can be seen in Fig. 2 where the image on the left is an original 3-plane coJour image of a component on a circuit board, and the image on the right is the corresponding transformed image. It will be apparent that the original image has low contrast against the background, whereas the transformed image has significantly improved contrast.</p>
<p>Examole I The component is a small diode having a low background contrast. The following steps are implemented as seen from the user's perspective.</p>
<p>* Set the diode size.</p>
<p>* Multiply the body sizes x2 to get appropriate search area sizes, but ensure the search area does not enclose any neighbouring parts of similar colour to the device being programmed.</p>
<p>* Go to a location pane, shown in Fig. 3.</p>
<p>* Set Location mode to "Low contrast box filter" and press inspect, as shown in Fig. 3 * The system generates a transformed image, shown in Fig. 4. This can be made visible by setting a "Display Transform" option to "On" * The contrast may be improved for small parts (<3000 jim) by setting "Colour contrast" to "Small flat parts", as shown in Fig. 5. If the top surface of the part is not planar then the "Standard" option may give better results. The "Small flat parts" option consumes Sl2KBytes of memory for each device for which it is used, while the "Standard" mode consumes 32Kbytes. The "Small flat parts" setting switches on the use of a 4 dimensional histogram using 4 colour planes.</p>
<p>* More accurate skew measurements can be obtained for small parts like this by using a "Rotate 90" option. The user sets this parameter to "Both", and then presses "Inspect", as shown in Fig. 6.</p>
<p>In implementation of the method I there is an automatic switch to "Historic" mode which gives improved location accuracy for heavily of.fet components and uses a blue location box, as shown in Fig. 7. This mode utilises training from previous inspections with the same device type on the current board. The training is initialised for each device type by its first inspection after a board program is loaded. If in}Iistoric" mode the colour of the part or board changes significantly for some reason, then the method is automatically carried out a second time in order to learn the colour change.</p>
<p>This could, for example occur due to heavily textured parts or significant tilt variations.</p>
<p>If components are heavily offset, this can result in the component body falling within the region interest which is expected to contain board, and vice versa. This may cause learning of inappropriate colours and reduces the contrast of the transformed image, giving degraded location accuracy or even complete mislocation.</p>
<p>The "historic" mode gets around this problem as follows: * Before the first component is inspected the histograms contain no information (all counts are zero). During inspection the histograms are trained from the current image.</p>
<p>* In historic mode the histogram counts are saved for future use instead of being discarded.</p>
<p>* For all subsequent inspections when the component is inspected, the transform is carried out using the previously learned histogram counts.</p>
<p>* After the component is correctly located, the regions of interest are aligned with the actual component location and the histogram counts are supplemented with new learning from the current component. To prevent the counts increasing without limit, before the training is supplemented the existing count values are reduced by 10%. Thus the histograms contain a rolling average of the most recent component inspections.</p>
<p>Example 2</p>
<p>In this example the component is a small flip clip. Flip chips can have extremely low body/board contrast and can also have a heavy textured appearance depending on surface finish. Small flip chips (e.g. l000x 1000 microns) are among the most difficult devices to locate.</p>
<p>The following are the steps, as seen from a user's perspective: * Set the body sizes.</p>
<p>* Multiply the body sizes x2 to get appropriate search area sizes, but ensure the search area does not enclose any neighbouring parts of similar colour to the device being programmed.</p>
<p>* Go to the location pane * Set Location mode to "Low contrast box filter" and press inspect, as shown in Fig. 8.</p>
<p>* The part is programmed as before but slightly better skew results can be obtained on square parts with an edge % of 15%, as shown in Fig. 9.</p>
<p>Example 3</p>
<p>In this example the component is a ball grid array (BGA). The steps are as follows: * Set the body sizes.</p>
<p>* On a large device the search area is less critical than for a small part. However it is larger to ensure the search area does not enclose neighbouring parts of similar colour to the device being programmed.</p>
<p>* Go to the location pane * Set Location mode to "Low contrast box filter", and press inspect as shown in Fig. 10.</p>
<p>* On a large part "Color Contrast" being set to "Standard" should give results that are most robust.</p>
<p>* For tall parts with tight placement tolerances it may be necessary to enable "Parallax Correction". The new transform tends to highlight the top surface (and not the sides) of the part more so than with previously available colour transforms. On tall components near the edge of the field of view this can result in offset errors due to parallax. To enable, set "Parallax Correction" to -11 - "On". This will make a new Body height parameter appear. The height of the part off the board in microns is entered and "Inspect" is pressed. The location results are adjusted depending on the position in view of the component.</p>
<p>In summary, the method identifies the statistical pattern of intensities from multiple images of the scene from learning regions. These correspond to the body of the component and also to the board on which the component is placed. It then analyses these patterns to infer for each pixel in the scene the probability that the pixel is part of the component. This probability is converted to a pixel intensity within a transformed image which then shows a bright component against a dark background.</p>
<p>The techniques used to cope with even more heavily offset components are very advantageous. They maintain training data from inspections of the corresponding component on previously inspected boards. The counts in these histograms form a rolling average of counts obtained. This is achieved by reducing all histogram counts (e.g by 10%) prior to the training of each component. This data is self refreshing so that it can cope with changes in board or component colours, but will allow a reasonably accurate location of even a very heavily offset component. A second pass using the previously described standard technique can then be performed to obtain a more accurate location result.</p>
<p>It will be appreciated that the invention provides for improved component edge location where conventional techniques do not apply. Examples of such situations are where the component is low contrast against the background either due to similarity in colour or low lighting levels.</p>
<p>The invention is not limited to the embodiments described but may be varied in construction and detail. For example, the items may be components of different types than described.</p>

Claims (25)

  1. <p>Claims I. A method of machine vision inspection of a discrete item on a
    substrate with low item/substrate contrast to detect edges of the item, the method being carried out by a machine vision inspection system comprising a camera for capturing images of a scene and an image processor, the method comprising the steps of: capturing an image of a scene including the item and surrounding substrate; generating substrate training data for pixels expected to correspond to the substrate; generating item training data for pixels expected to correspond to the item; for at least some pixels of the captured image, analysing the pixel according to the training data to determine if it represents the substrate or the item, and determining an edge of the item according to the analysis.</p>
    <p>2. A method as claimed in claim 1, wherein a set of plurality of colour images of the scene is captured and processed.</p>
    <p>3. A method as claimed in claims I or 2, wherein the training data comprises a histogram.</p>
    <p>4. A method as claimed in claim 3, wherein a set of plurality of colour images of the scene is captured and processed; and wherein the histogram is multi-dimensional, with a dimension for each image plane of the colour images.</p>
    <p>5. A method as claimed in any preceding claim, wherein the training data is generated for each captured image or set of images of a scene.</p>
    <p>6. A method as claimed in any preceding claim, wherein the expected locations of the item and substrate are indicated by pre-set regions of interest.</p>
    <p>7. A method as claimed in claim 6, wherein there is user interactivity for selecting a region of interest.</p>
    <p>8. A method as claimed in any of claims 3 to 7, wherein each histogram is generated by incrementing counters according to pixel values of the images.</p>
    <p>9. A method as claimed in claim 8 wherein each pixel grey level is quantised before incrementing a relevant counter.</p>
    <p>10. A method as claimed in any preceding claim, wherein during the analysis step the edge of the item is determined by transforming image pixels to provide a transformed image having improved contrast between the item and the substrate.</p>
    <p>11. A method as claimed in claim 10, wherein the transformation applies a transformed value to each of the item and the substrate pixels in the transformed image, said transformed value being based on the likelihood that the pixel represents the item or the substrate.</p>
    <p>12. A method as claimed in claims 10 or Ii, wherein the training data comprises a histogram for the substrate and a histogram for the item, and the transforming step comprises determining a pixel value for the transformed image according to comparison between the pixel value and a counter value from each histogram. -14-</p>
    <p>13. A method as claimed in any preceding claim, wherein the method is repeated a plurality of times for a single image or set of images with progression of expected locations of the item and substrate.</p>
    <p>14. A method as claimed in claim 13. wherein the said repetition takes place if it is expected that the item is offset to a large extent.</p>
    <p>15. A method as claimed in any preceding claim, comprising the further steps of storing historic training data and subsequently using said historic training data for scenes of a similar type.</p>
    <p>16. A method as claimed in any preceding claim, wherein the item is an SMT diode and the substrate is a circuit board.</p>
    <p>17. A method as claimed in any of claims I to 15, wherein the item is a flip-clip and the substrate is a circuit board.</p>
    <p>18. A method as claimed in any of claims I to 15, wherein the item is a ball grid array and the substrate is a circuit board.</p>
    <p>19. A machine vision inspection system comprising a camera for capturing images of a scene and an image processor, wherein, the camera captures an image of a scene including an item and a surrounding substrate, and wherein the image processor: generates substrate training data for pixels expected to correspond to the substrate; generates item training data for pixels expected to correspond to the item; for at least some pixels of the captured image, analyses the pixel according to the training data to determine if it represents the substrate or the item, and determines an edge of the item according to the analysis.</p>
    <p>20. A system as claimed in claim 19, wherein the camera captures a set of plurality of colour images of the scene and the image processor processes said set of images.</p>
    <p>21. A system as claimed in claims 19 or 20, wherein the training data comprises a histogram.</p>
    <p>22. A system as claimed in claim 21, wherein the histogram is multi-dimensional, with a dimension for each image plane of the colour images.</p>
    <p>23. A system as claimed in any of claims 19 to 22, wherein the image processor generates training data for each captured image or set of images of a scene.</p>
    <p>24. A system as claimed in any of claims 19 to 23, wherein the expected locations of the item and substrate are indicated by pre-set regions of interest, and the system comprises a database storing said regions of interest.</p>
    <p>25. A computer readable medium comprising software code for implementing image processor steps of a method of any of claims I to 19 when executing on a digital processor.</p>
GB0616167A 2006-08-15 2006-08-15 Edge detection for checking component position on a circuit board Withdrawn GB2440951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0616167A GB2440951A (en) 2006-08-15 2006-08-15 Edge detection for checking component position on a circuit board


Publications (2)

Publication Number Publication Date
GB0616167D0 GB0616167D0 (en) 2006-09-20
GB2440951A true GB2440951A (en) 2008-02-20

Family

ID=37056353

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0616167A Withdrawn GB2440951A (en) 2006-08-15 2006-08-15 Edge detection for checking component position on a circuit board

Country Status (1)

Country Link
GB (1) GB2440951A (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739695A (en) * 2020-05-27 2021-12-03 云米互联科技(广东)有限公司 Image-based radio frequency connector detection method, detection device and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5311600A (en) * 1992-09-29 1994-05-10 The Board Of Trustees Of The Leland Stanford Junior University Method of edge detection in optical images using neural network classifier
US20040066964A1 (en) * 2002-10-02 2004-04-08 Claus Neubauer Fast two dimensional object localization based on oriented edges
EP1694109A2 (en) * 2005-02-21 2006-08-23 Omron Corporation Printed circuit board inspecting method and apparatus inspection logic setting method and apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101566460B (en) * 2008-04-24 2012-07-18 鸿富锦精密工业(深圳)有限公司 Substrate detection system for double-sided circuit board, detection method and detection carrier therefor
CN102435137A (en) * 2010-09-29 2012-05-02 竞陆电子(昆山)有限公司 Special connecting piece bearing jig for module board automated visual inspection (AVI) equipment
CN105891622A (en) * 2014-12-23 2016-08-24 中国人民解放军65049部队 Picture pixel dynamic linking system for assisting circuit board maintenance
CN105891622B (en) * 2014-12-23 2019-10-15 中科众志信通(大连)科技有限公司 Circuit board repair assists picture pixels dynamic link method

Also Published As

Publication number Publication date
GB0616167D0 (en) 2006-09-20

Similar Documents

Publication Publication Date Title
CN108009675B (en) Goods packing method, device and system
CN101283604B (en) Image processing device with automatic white balance
CN110189322B (en) Flatness detection method, device, equipment, storage medium and system
CN110493595B (en) Camera detection method and device, storage medium and electronic device
US8189942B2 (en) Method for discriminating focus quality of image pickup device
KR20100020903A (en) Image identifying method and imaging apparatus
US20100195902A1 (en) System and method for calibration of image colors
CN111583258B (en) Defect detection method, device, system and storage medium
GB2440951A (en) Edge detection for checking component position on a circuit board
US6675120B2 (en) Color optical inspection system
US7003160B2 (en) Image processing apparatus, image processing method, and computer readable recording medium recording image processing program for processing image obtained by picking up or reading original
CN111917986A (en) Image processing method, medium thereof, and electronic device
CN116012242A (en) Camera distortion correction effect evaluation method, device, medium and equipment
CN112734721B (en) Optical axis deflection angle detection method, device, equipment and medium
US10958899B2 (en) Evaluation of dynamic ranges of imaging devices
JP2001024321A (en) Method for generating inspection data
CN114881899A (en) Rapid color-preserving fusion method and device for visible light and infrared image pair
CN106447655A (en) Method for detecting the abnormal colors and the slight recession on the surface of a smooth object
KR101383827B1 (en) System and method for automatic extraction of soldering regions in pcb
CN114913316B (en) Image classification method and device for meter recognition of industrial equipment, electronic equipment and storage medium
US11232289B2 (en) Face identification method and terminal device using the same
CN117522792A (en) Color difference detection method and device, electronic equipment and storage medium
CN116993654A (en) Camera module defect detection method, device, equipment, storage medium and product
JP4365619B2 (en) Edge detection device, component recognition device, edge detection method, and component recognition method
CN116843659A (en) Circuit board fault automatic detection method based on infrared image

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)