WO2017088537A1 - A component classification method and apparatus - Google Patents

A component classification method and apparatus

Info

Publication number
WO2017088537A1
Authority
WO
WIPO (PCT)
Prior art keywords
component
component image
image
neural network
convolutional neural
Prior art date
Application number
PCT/CN2016/096747
Other languages
English (en)
French (fr)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司
Publication of WO2017088537A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • The present invention relates to the field of computers, and in particular, to a component classification method and apparatus.
  • A PCB (printed circuit board) is a board that provides connections for various electronic components. As electronic devices become more complex, the number of electronic components on a PCB keeps increasing. When the electronic components on a PCB are inspected, they first need to be classified so that they can be marked automatically; this reduces the workload of manual board making and also provides component information for subsequent component inspection.
  • In the prior art, features of the component image are mainly learned with conventional machine learning methods, and the component image is then classified by those features. However, features learned with conventional machine learning methods are easily affected by the external environment, so in some scenarios, such as uneven illumination, the classification of component images may be poor.
  • Embodiments of the present invention provide a component classification method and apparatus, in order to accurately classify component images.
  • A first aspect of the embodiments of the present invention provides a component classification method, including: inputting a component image to be classified into a trained convolutional neural network and calculating high-level features of the component image; using the high-level features to calculate the probability that the component image belongs to each category; and taking the category corresponding to the largest probability as the category of the component image.
  • A second aspect of the embodiments of the present invention provides a component classification apparatus, including:
  • a first calculating module, configured to input the component image to be classified into the trained convolutional neural network and calculate high-level features of the component image;
  • a second calculating module, configured to use the high-level features to calculate the probability that the component image belongs to each category; and
  • a classification module, configured to take the category corresponding to the largest probability as the category of the component image.
  • In the embodiments of the present invention, the component image to be classified is input into the trained convolutional neural network and the high-level features of the component image are calculated; the high-level features are then used to calculate the probability that the component image belongs to each category, and the category corresponding to the largest probability is taken as the category of the component image. Because the convolutional neural network can learn the high-level features of the component image, classifying component images with the convolutional neural network means that the capture of the component image is not restricted by the scene, the classification effect is good, and the accuracy is high.
  • In addition, the computational complexity can be reduced, and the classification efficiency is high.
  • FIG. 1 is a schematic flowchart of a component classification method according to a first embodiment of the present invention
  • Figure 1-b is a network structure diagram of a convolutional neural network
  • Figure 1-c is a network structure diagram of the trained convolutional neural network
  • FIG. 2 is a schematic flow chart of a component classification method according to a second embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a component classification device according to a third embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a component classification device according to a fourth embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a component classification device according to a fifth embodiment of the present invention.
  • Embodiments of the present invention provide a component classification method and apparatus, in order to accurately classify component images.
  • A component classification method includes: inputting a component image to be classified into a trained convolutional neural network and calculating high-level features of the component image; using the high-level features to calculate the probability that the component image belongs to each category; and taking the category corresponding to the largest probability as the category of the component image.
  • FIG. 1 is a schematic flow chart of a component classification method according to a first embodiment of the present invention.
  • A component classification method provided by the first embodiment of the present invention may include the steps described above: inputting the component image to be classified into the trained convolutional neural network and calculating its high-level features, using those features to calculate the probability that the image belongs to each category, and taking the category with the largest probability as the category of the image.
  • Figure 1-b is the network structure diagram of the convolutional neural network.
  • A convolutional neural network is a deep learning network. With its special structure of local weight sharing, the convolutional neural network has unique advantages in image processing: its layout is closer to an actual biological neural network, weight sharing reduces the complexity of the network, and in particular an image can be input into the network directly as a multi-dimensional vector, which avoids the complexity of data reconstruction during feature extraction and classification.
  • To classify component images with a convolutional neural network, it is first necessary to train the component classifier with a large number of samples, and then use the trained convolutional neural network to classify component images.
  • The convolutional neural network can learn the features of the component image at every level, including low-level features and high-level features. The high-level features of the image are not affected by the shooting scene, so even component images captured in complex scenes can be recognised by their high-level features, and the component image can therefore be classified accurately. The calculation of the category of the component image by the convolutional neural network can thus be divided into two processes: calculating the high-level features of the component image, and using those high-level features to calculate the category of the component image.
  • The component image refers to an image captured at a certain component position on the PCB. Normally, the component image contains that component, but if the component is missing, the component image may contain no component, and if the component is inserted incorrectly, the component image may contain a different component.
  • The convolutional neural network referred to in the embodiments of the present invention is a convolutional neural network with N layers, where N is a positive integer greater than 1.
  • The first N-1 layers of the convolutional neural network are used to calculate the features of the component image at each level, and the Nth layer is used to calculate the category of the component image from the features calculated by the first N-1 layers.
  • The value of N is 7.
  • The features of the component image calculated by the first N-1 layers of the convolutional neural network include low-level features and high-level features.
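The split between the first N-1 feature layers and the final classification layer can be illustrated with a minimal PyTorch sketch. The layer count, kernel sizes, and the 101-category output below are illustrative assumptions for this sketch, not the network disclosed in the patent.

```python
# Minimal sketch (not the patent's exact network) of a convolutional classifier:
# the early layers compute low- and high-level image features, and the final
# layer maps the high-level features to one score per category.
import torch
import torch.nn as nn

class ComponentNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Layers 1..N-1: feature extraction (low-level -> high-level features).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Layer N: maps the high-level feature vector to class scores.
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)            # high-level features of the image
        feats = torch.flatten(feats, 1)
        return self.classifier(feats)       # un-normalised score per category

# Example: 100 component types plus one "other" category.
net = ComponentNet(num_classes=101)
scores = net(torch.randn(1, 3, 64, 64))     # output shape: (1, 101)
```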
  • The category of the component image refers to the classification of the component image according to the different types of components on the PCB. For example, if there are 100 components on the PCB, there are at least 100 categories of component images, and the numbers 1-100 can be used to identify the categories; other symbols can also be used to identify the different component image categories.
  • The category of the component image to be classified may be determined from the probability values of the image for the respective categories: the Nth layer of the convolutional neural network uses the features calculated by the first N-1 layers to calculate the probability that the component image belongs to each category, and the probabilities are then used to determine the category of the component image.
  • The category corresponding to the largest of these probabilities is taken as the category of the component image.
  • The probability value for each category indicates how likely it is that the component image belongs to that category. Obviously, the greater the probability, the more likely it is that the image belongs to that category, so taking the category with the largest probability as the category of the component image gives the most accurate classification result.
  • When the component in a component image is missing, the category of that component image is "other". For example, if there are 100 components on the PCB and the numbers 1-100 are used to classify the component images corresponding to those 100 components, the category of a component image with a missing component may be 101.
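A minimal sketch of the classification step itself, assuming the 101-category numbering from the example above (100 component types plus an extra "other/missing" category): the final-layer scores are turned into per-category probabilities with softmax and the category with the largest probability is taken. The score values are illustrative.

```python
import torch

# Suppose `scores` is the final-layer output for one component image over
# 101 categories (100 component types plus "101 = missing/other").
scores = torch.randn(1, 101)
probs = torch.softmax(scores, dim=1)        # probability of each category
idx = int(torch.argmax(probs, dim=1))       # index of the largest probability
category = idx + 1                          # map 0-based index to labels 1..101
print(f"predicted category: {category}, probability: {probs[0, idx]:.3f}")
```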
  • In the embodiment of the present invention, the component image to be classified is input into the trained convolutional neural network and the high-level features of the component image are calculated; the high-level features are used to calculate the probability that the component image belongs to each category, and the category corresponding to the largest probability is taken as the category of the component image. Because the convolutional neural network can learn the high-level features of the component image, classifying component images with the convolutional neural network means that the capture of the component image is not restricted by the scene, the classification effect is good, and the accuracy is high.
  • In addition, the computational complexity can be reduced, and the classification efficiency is high.
  • Before the component image is input into the trained convolutional neural network, the method further includes: obtaining the position of the component in the component image by template matching and aligning the component image; and normalising the component image, and then triggering the step of inputting the component image into the trained convolutional neural network.
  • Because the component image is a component image cut out with reference to the template image, in order for the neural network to calculate on the component image more accurately, the component in the captured component image needs to be located at the centre of the image, and at the same time the size of the image is normalised to ensure the accuracy of subsequent processing. This process is called pre-processing.
  • The component image may also not be pre-processed.
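A hedged sketch of the pre-processing just described, using OpenCV template matching to locate the component, cropping so the component sits at the centre, and normalising the crop to a fixed size. The crop margin and output size are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(component_img: np.ndarray, template: np.ndarray,
               out_size: int = 64, margin: int = 8) -> np.ndarray:
    # Find where the template best matches inside the captured component image.
    result = cv2.matchTemplate(component_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)      # top-left corner of best match
    h, w = template.shape[:2]
    # Crop around the matched region so the component is centred, with a margin.
    y0, y1 = max(0, y - margin), min(component_img.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(component_img.shape[1], x + w + margin)
    crop = component_img[y0:y1, x0:x1]
    # Normalise the crop to a fixed input size for the network.
    return cv2.resize(crop, (out_size, out_size))
```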
  • Before the component image is input into the trained convolutional neural network, the method further includes: creating a sample set of component images; pre-training the convolutional neural network with an image recognition database to obtain initial parameters of the convolutional neural network; and, based on those initial parameters, further training the convolutional neural network with the sample set to fine-tune the initial parameters, and then triggering the step of inputting the component image into the trained convolutional neural network.
  • The sample set of component images refers to the component images collected from PCBs for training the convolutional neural network.
  • The image recognition database (ImageNet) is an existing database of images collected from the natural world and covering a wide range of categories.
  • In order to obtain a better classification effect, as many samples as possible should be collected to train the convolutional neural network. It is therefore necessary to collect samples captured in a variety of scenes, such as component images cut from PCB images captured in poor lighting, sample images captured from different positions or angles, or sample images captured in other complicated scenes.
  • Figure 1-c is a network structure diagram of the trained convolutional neural network, obtained after the convolutional neural network is further trained with the sample set of component images.
  • Figure 1-b differs from Figure 1-c in that, after transfer learning, the number of nodes in the last layer of the convolutional neural network changes from the original 1000 nodes to N nodes, where N is the number of categories of component images.
  • When pre-training on ImageNet, the last layer of the convolutional neural network has 1000 nodes; when the pre-trained network is further trained on the component sample set, the number of nodes in the last layer is changed to the number of component categories, so if there are N categories the layer is changed to N nodes.
  • Alternatively, the convolutional neural network may not be pre-trained with ImageNet; instead, a larger number of component image samples may be collected to train the convolutional neural network directly.
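The transfer-learning step (replace the 1000-node ImageNet output layer with one node per component category) might look like the following sketch. AlexNet is used only because it is a well-known ImageNet model with a 1000-node final layer; the patent does not name a specific architecture. Older torchvision versions use `pretrained=True` instead of the `weights` argument, and the 101-class count is an assumption.

```python
import torch.nn as nn
from torchvision import models

num_classes = 101                                  # e.g. 100 component types + "other"
model = models.alexnet(weights="IMAGENET1K_V1")    # ImageNet-pre-trained weights
# The original last layer maps 4096 features to 1000 ImageNet classes;
# swap it for a layer with one output node per component category.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
```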
  • Optionally, the sample set of component images includes: a training sample set of component images and a test sample set of component images.
  • The training sample set of component images is used to train the convolutional neural network in the training phase, and the test sample set of component images is used to test the classification effect of the convolutional neural network after it has been trained with the training sample set.
  • The training sample set and the test sample set are acquired in the same way; a part of the acquired sample set of component images can be taken as the test sample set.
  • In the training phase of the convolutional neural network, if the classification result obtained by testing on the test sample set is not good, the convolutional neural network may be trained further.
  • Training the convolutional neural network with the training sample set of component images and testing it with the test sample set of component images makes the training effect of the convolutional neural network better.
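A minimal sketch of splitting the collected, labelled component images into a training sample set and a test sample set acquired in the same way. The 80/20 ratio and the (image path, category label) record format are illustrative assumptions.

```python
import random

def split_samples(samples, test_ratio=0.2, seed=0):
    """samples: list of (image_path, category_label) tuples."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    # Remaining samples train the network; the held-out part tests it.
    return shuffled[n_test:], shuffled[:n_test]

train_set, test_set = split_samples([("img_001.png", 1), ("img_002.png", 2)])
```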
  • Optionally, creating the sample set of component images includes: collecting printed circuit board images; cutting component images out of the printed circuit board images with reference to a printed circuit board template image, and labelling the component images to record their categories; and collecting the sample set of component images from the labelled component images.
  • Because every component needs to be classified, when collecting component image samples it is necessary to cut the image of each component out of the PCB image for classification training. Also, in order to distinguish the component samples, each component image needs to be labelled before training so that different components can be distinguished.
  • A camera may be installed on the production line and images of different models of PCB boards may be collected in batches; board tracking technology may be used to avoid photographing the same PCB board repeatedly.
  • Each model of PCB board has a plurality of image samples, and each image sample corresponds to a particular PCB board, so the component images collected from the PCB boards come from different boards, which ensures that the samples are diverse.
  • Optionally, cutting the component images out of the printed circuit board image includes: automatically cutting out the component images using the position information of the components on the printed circuit board image.
  • The component images can be cut out automatically according to the position information.
  • The position information of a component image may be obtained from the position information of the component recorded in the board file, or by manually marking the position information.
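A hedged sketch of automatically cutting component images out of a PCB board image from per-component position information (for example, coordinates recorded in the board file or marked by hand). The (x, y, w, h) record format and the component names are illustrative assumptions.

```python
import cv2

def crop_components(board_image_path, positions):
    """positions: dict mapping component name -> (x, y, w, h) in pixels."""
    board = cv2.imread(board_image_path)
    crops = {}
    for name, (x, y, w, h) in positions.items():
        crops[name] = board[y:y + h, x:x + w]    # cut out one component image
    return crops

# Example usage with made-up coordinates:
# crops = crop_components("board_01.png", {"R12": (120, 340, 40, 20)})
```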
  • Optionally, labelling the component images includes: labelling each component image with a number that identifies its category. The components may also be labelled by any other means that distinguishes the components.
  • Optionally, before collecting the sample set of component images, the method further includes: obtaining the position of the component in each component image by template matching and aligning the component image; and normalising the component image, and then triggering the step of collecting the sample set of component images.
  • That is, when the sample images for training the convolutional neural network are acquired, the images are aligned so that the component is located at the centre of the image, and the images are normalised; this process is the pre-processing of the images, and pre-processing the component images makes the training effect better.
  • When a convolutional neural network is used to classify component images, if the component image samples were pre-processed in the training phase, the component images must also be pre-processed when they are classified with the trained convolutional neural network; if the component images were not pre-processed in the training phase, they are not pre-processed when they are classified with the trained convolutional neural network.
  • The following description is given with reference to some specific application scenarios.
  • FIG. 2 is a schematic flowchart of a component classification method according to a second embodiment of the present invention.
  • a component classification method according to a second embodiment of the present invention may include:
  • The PCB board image refers to an image that is captured directly and contains the component images. The component image refers to an image at a certain component position cut from the PCB board image. Normally the component image contains that component; however, if the component is missing the component image may contain no component, and if the component is inserted incorrectly the component image may contain a different component.
  • The category of a component image refers to the classification of the component image according to the types of components on the PCB. For example, if there are 100 components on the PCB, there are at least 100 categories of component images, and the numbers 1-100 can be used to identify the categories; other symbols can also be used to identify the different component image categories.
  • A camera may be installed on the production line and images of different models of PCB boards may be collected in batches; board tracking technology may be used to avoid photographing the same PCB board repeatedly.
  • Each model of PCB board has a plurality of image samples, and each image sample corresponds to a particular PCB board, so the component images collected from the PCB boards come from different boards, which makes the image samples diverse.
  • A component image cut from the PCB board image may be used as a sample image for training the convolutional neural network, and may also be used as a test image to be classified by the trained convolutional neural network.
  • Optionally, cutting the component images out of the printed circuit board image includes: automatically cutting out the component images using the position information of the components on the printed circuit board image.
  • The component images can be cut out automatically according to the position information. The position information of a component image may be obtained from the position information of the component recorded in the board file, or by manually marking the position information.
  • Optionally, labelling the component images includes: labelling each component image with a number that identifies its category. The components may also be labelled by any other means that distinguishes the components.
  • The sample set of component images refers to the component images collected from the PCB for training the convolutional neural network.
  • Optionally, before collecting the sample set of component images, the method further includes: obtaining the position of the component in each component image by template matching and aligning the component image; and normalising the component image, and then triggering the step of collecting the sample set of component images.
  • That is, when the sample images for training the convolutional neural network are acquired, the images are aligned so that the component is located at the centre of the image, and the images are normalised; this process is the pre-processing of the images, and pre-processing the component images makes the training effect better.
  • To obtain a better classification effect, as many samples as possible should be collected to train the convolutional neural network. Since the sample set of component images consists of component images collected from PCBs, ensuring the diversity of the collected PCB images also ensures the diversity of the component images.
  • Optionally, the sample set of component images includes: a training sample set of component images and a test sample set of component images.
  • The training sample set of component images is used to train the convolutional neural network in the training phase, and the test sample set of component images is used to test the classification effect of the convolutional neural network after it has been trained with the training sample set.
  • The training sample set and the test sample set are acquired in the same way as the sample images of the component images; a part of the acquired sample set of component images can be taken as the test sample set.
  • In the training phase of the convolutional neural network, if the classification result obtained by testing on the test sample set is not good, the convolutional neural network may be trained further.
  • Training the convolutional neural network with the training sample set of component images and testing it with the test sample set of component images makes the training effect of the convolutional neural network better.
  • S204: Pre-training the convolutional neural network with an image recognition database to obtain the initial parameters of the convolutional neural network.
  • The image recognition database contains various types of natural images collected from nature. The image recognition database ImageNet is an existing basic image database with many categories collected from nature; although ImageNet is not an electronic component data set, it contains more than 15 million labelled natural images in more than 22,000 categories, so pre-training the convolutional neural network with it allows the network to learn general image features at every level and obtain better initial values.
  • Before the classifier is obtained, the convolutional neural network first needs to be trained with a sufficiently large sample set, and the number of collected component image samples is generally limited. Therefore, to train the convolutional neural network better, the existing ImageNet database is first used to pre-train the convolutional neural network and obtain its initial parameter values; based on those initial parameters, the convolutional neural network is then further trained with the sample set of collected component images, and the final classifier is thus obtained.
  • Alternatively, the convolutional neural network may not be pre-trained with an image recognition database; instead, as many component image samples as possible may be collected to train the convolutional neural network.
  • Based on the initial parameters, the convolutional neural network is further trained with the sample set to fine-tune the initial parameters of the convolutional neural network.
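A hedged sketch of this fine-tuning stage: starting from the ImageNet-pre-trained initial parameters, the network is trained further on the component sample set with a small learning rate so the initial parameters are only fine-tuned. The optimiser, learning rate, epoch count, and data loader are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fine_tune(model, train_loader, epochs=5, lr=1e-4, device="cpu"):
    """Continue training an ImageNet-pre-trained model on component images."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:      # batches of component images
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()                     # small updates fine-tune the weights
    return model
```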
  • A convolutional neural network is a deep learning network. With its special structure of local weight sharing, the convolutional neural network has unique advantages in image processing: its layout is closer to an actual biological neural network, weight sharing reduces the complexity of the network, and in particular an image can be input into the network directly as a multi-dimensional vector, which avoids the complexity of data reconstruction during feature extraction and classification. To classify component images with a convolutional neural network, it is first necessary to train the component classifier with a large number of samples, and then use the trained convolutional neural network to classify component images.
  • The convolutional neural network can learn the features of the component image at every level, including low-level features and high-level features. The high-level features of the image are not affected by the shooting scene, so even component images captured in complex scenes can be recognised by their high-level features, and the component image can therefore be classified accurately.
  • When pre-training on ImageNet, the last layer of the convolutional neural network has 1000 nodes; when the pre-trained network is further trained on the component sample set, the number of nodes in the last layer is changed to the number of component categories, so if there are N categories the layer is changed to N nodes.
  • Alternatively, the convolutional neural network may be trained directly with a larger number of component image samples.
  • The convolutional neural network referred to in the embodiments of the present invention is a convolutional neural network with N layers, where N is a positive integer greater than 1.
  • The first N-1 layers of the convolutional neural network are used to calculate the features of the component image at each level, and the Nth layer is used to calculate the category of the component image from the features calculated by the first N-1 layers.
  • The value of N is 7.
  • The features of the component image calculated by the first N-1 layers of the convolutional neural network include low-level features and high-level features.
  • The category of the component image to be classified may be determined from the probability values of the image for the respective categories: the Nth layer of the convolutional neural network uses the features calculated by the first N-1 layers to calculate the probability that the component image belongs to each category, and the probabilities are then used to determine the category of the component image.
  • Optionally, before the component image is input into the trained convolutional neural network, the method further includes: obtaining the position of the component in the component image by template matching and aligning the component image; and normalising the component image, and then triggering the step of inputting the component image into the trained convolutional neural network.
  • Because the component image is a component image cut out with reference to the template image, in order for the neural network to calculate on the component image more accurately, the component in the captured component image needs to be located at the centre of the image, and at the same time the size of the image is normalised to ensure the accuracy of subsequent processing. This process is called pre-processing.
  • The component image may also not be pre-processed.
  • When a convolutional neural network is used to classify component images, if the component image samples were pre-processed in the training phase, the component images must also be pre-processed when they are classified with the trained convolutional neural network; if the component images were not pre-processed in the training phase, they are not pre-processed when they are classified with the trained convolutional neural network.
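One way to keep pre-processing consistent between training and classification, as required above, is to define a single transform and apply it in both phases (or apply none in both). The resize target, normalisation statistics, and the dataset name in the comments are illustrative assumptions, not values from the patent.

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),               # normalise image size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Used when building the training sample set, e.g. (hypothetical dataset class):
#   train_dataset = ComponentDataset(train_set, transform=preprocess)
# ...and applied again to every image classified by the trained network:
#   scores = model(preprocess(pil_image).unsqueeze(0))
```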
  • The category corresponding to the largest probability among the probabilities for the respective categories is taken as the category of the component image.
  • The probability value for each category indicates how likely it is that the component image belongs to that category. Obviously, the greater the probability, the more likely it is that the image belongs to that category, so taking the category with the largest probability as the category of the component image gives the most accurate classification result.
  • When the component in a component image is missing, the category of that component image is "other". For example, if there are 100 components on the PCB and the numbers 1-100 are used to classify the component images corresponding to those 100 components, the category of a component image with a missing component may be 101.
  • In the embodiment of the present invention, the component image to be classified is input into the trained convolutional neural network and the high-level features of the component image are calculated; the high-level features are used to calculate the probability that the component image belongs to each category, and the category corresponding to the largest probability is taken as the category of the component image. Because the convolutional neural network can learn the high-level features of the component image, classifying component images with the convolutional neural network means that the capture of the component image is not restricted by the scene, the classification effect is good, and the accuracy is high.
  • In addition, the computational complexity can be reduced, and the classification efficiency is high.
  • The embodiments of the present invention further provide a component classification device, the device comprising:
  • a first calculating module, configured to input the component image to be classified into the trained convolutional neural network and calculate high-level features of the component image;
  • a second calculating module, configured to use the high-level features to calculate the probability that the component image belongs to each category; and
  • a classification module, configured to take the category corresponding to the largest probability as the category of the component image.
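A minimal sketch of this apparatus with its three modules expressed as methods on one class. The module boundaries follow the text; everything else (PyTorch, the tensor shapes, the softmax and argmax calls, the 1-based category mapping) is an illustrative assumption.

```python
import torch

class ComponentClassifier:
    def __init__(self, trained_model):
        self.model = trained_model.eval()

    def compute_features(self, image_tensor):
        # First calculating module: run the trained convolutional neural network
        # and obtain the final-layer scores derived from the image's high-level features.
        with torch.no_grad():
            return self.model(image_tensor.unsqueeze(0))

    def compute_probabilities(self, scores):
        # Second calculating module: probability that the image belongs to each category.
        return torch.softmax(scores, dim=1)

    def classify(self, image_tensor):
        # Classification module: category with the largest probability
        # (+1 maps the 0-based index to the 1-based numbering used in the text).
        probs = self.compute_probabilities(self.compute_features(image_tensor))
        return int(torch.argmax(probs, dim=1)) + 1
```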
  • FIG. 3 is a schematic structural diagram of a component classification device according to a third embodiment of the present invention. As shown in FIG. 3, a component classification device 300 according to the third embodiment of the present invention may include: a first calculation module 310, a second calculation module 320, and a classification module 330.
  • The first calculation module 310 is configured to input the component image to be classified into the trained convolutional neural network and calculate the high-level features of the component image.
  • A convolutional neural network is a deep learning network. With its special structure of local weight sharing, the convolutional neural network has unique advantages in image processing: its layout is closer to an actual biological neural network, weight sharing reduces the complexity of the network, and in particular an image can be input into the network directly as a multi-dimensional vector, which avoids the complexity of data reconstruction during feature extraction and classification.
  • To classify component images with a convolutional neural network, it is first necessary to train the component classifier with a large number of samples, and then use the trained convolutional neural network to classify component images.
  • The convolutional neural network can learn the features of the component image at every level, including low-level features and high-level features. The high-level features of the image are not affected by the shooting scene, so even component images captured in complex scenes can be recognised by their high-level features, and the component image can therefore be classified accurately. The calculation of the category of the component image by the convolutional neural network can thus be divided into two processes: calculating the high-level features of the component image, and using those high-level features to calculate the category of the component image.
  • The component image refers to an image captured at a certain component position on the PCB. Normally, the component image contains that component, but if the component is missing, the component image may contain no component, and if the component is inserted incorrectly, the component image may contain a different component.
  • The convolutional neural network referred to in the embodiments of the present invention is a convolutional neural network with N layers, where N is a positive integer greater than 1.
  • The first N-1 layers of the convolutional neural network are used to calculate the features of the component image at each level, and the Nth layer is used to calculate the category of the component image from the features calculated by the first N-1 layers.
  • The value of N is 7.
  • The features of the component image calculated by the first N-1 layers of the convolutional neural network include low-level features and high-level features.
  • The second calculation module 320 is configured to use the high-level features to calculate the probability that the component image belongs to each category.
  • The category of the component image refers to the classification of the component image according to the different types of components on the PCB. For example, if there are 100 components on the PCB, there are at least 100 categories of component images, and the numbers 1-100 can be used to identify the categories; other symbols can also be used to identify the different component image categories.
  • The category of the component image to be classified may be determined from the probability values of the image for the respective categories: the Nth layer of the convolutional neural network uses the features calculated by the first N-1 layers to calculate the probability that the component image belongs to each category, and the probabilities are then used to determine the category of the component image.
  • The classification module 330 is configured to take the category corresponding to the largest probability among the probabilities for the respective categories as the category of the component image.
  • The probability value for each category indicates how likely it is that the component image belongs to that category. Obviously, the greater the probability, the more likely it is that the image belongs to that category, so taking the category with the largest probability as the category of the component image gives the most accurate classification result.
  • When the component in a component image is missing, the category of that component image is "other". For example, if there are 100 components on the PCB and the numbers 1-100 are used to classify the component images corresponding to those 100 components, the category of a component image with a missing component may be 101.
  • In the embodiment of the present invention, the component classification device 300 inputs the component image to be classified into the trained convolutional neural network and calculates the high-level features of the component image; the device 300 then uses the high-level features to calculate the probability that the component image belongs to each category, and takes the category corresponding to the largest probability as the category of the component image. Because the convolutional neural network can learn the high-level features of the component image, classifying component images with the convolutional neural network means that the capture of the component image is not restricted by the scene, the classification effect is good, and the accuracy is high.
  • In addition, the computational complexity can be reduced, and the classification efficiency is high.
  • FIG. 4 is a schematic structural diagram of a component classification device according to a fourth embodiment of the present invention.
  • As shown in FIG. 4, a component classification device 400 according to the fourth embodiment of the present invention may include: a first calculation module 410, a second calculation module 420, and a classification module 430.
  • The first calculation module 410 is configured to input the component image to be classified into the trained convolutional neural network and calculate the high-level features of the component image.
  • A convolutional neural network is a deep learning network. With its special structure of local weight sharing, the convolutional neural network has unique advantages in image processing: its layout is closer to an actual biological neural network, weight sharing reduces the complexity of the network, and in particular an image can be input into the network directly as a multi-dimensional vector, which avoids the complexity of data reconstruction during feature extraction and classification.
  • To classify component images with a convolutional neural network, it is first necessary to train the component classifier with a large number of samples, and then use the trained convolutional neural network to classify component images.
  • The convolutional neural network can learn the features of the component image at every level, including low-level features and high-level features. The high-level features of the image are not affected by the shooting scene, so even component images captured in complex scenes can be recognised by their high-level features, and the component image can therefore be classified accurately. The calculation of the category of the component image by the convolutional neural network can thus be divided into two processes: calculating the high-level features of the component image, and using those high-level features to calculate the category of the component image.
  • The component image refers to an image captured at a certain component position on the PCB. Normally, the component image contains that component, but if the component is missing, the component image may contain no component, and if the component is inserted incorrectly, the component image may contain a different component.
  • The convolutional neural network referred to in the embodiments of the present invention is a convolutional neural network with N layers, where N is a positive integer greater than 1.
  • The first N-1 layers of the convolutional neural network are used to calculate the features of the component image at each level, and the Nth layer is used to calculate the category of the component image from the features calculated by the first N-1 layers.
  • The value of N is 7.
  • The features of the component image calculated by the first N-1 layers of the convolutional neural network include low-level features and high-level features.
  • The second calculation module 420 is configured to use the high-level features to calculate the probability that the component image belongs to each category.
  • The category of the component image refers to the classification of the component image according to the different types of components on the PCB. For example, if there are 100 components on the PCB, there are at least 100 categories of component images, and the numbers 1-100 can be used to identify the categories; other symbols can also be used to identify the different component image categories.
  • The category of the component image to be classified may be determined from the probability values of the image for the respective categories: the Nth layer of the convolutional neural network uses the features calculated by the first N-1 layers to calculate the probability that the component image belongs to each category, and the probabilities are then used to determine the category of the component image.
  • The classification module 430 is configured to take the category corresponding to the largest probability among the probabilities for the respective categories as the category of the component image.
  • The probability value for each category indicates how likely it is that the component image belongs to that category. Obviously, the greater the probability, the more likely it is that the image belongs to that category, so taking the category with the largest probability as the category of the component image gives the most accurate classification result.
  • When the component in a component image is missing, the category of that component image is "other". For example, if there are 100 components on the PCB and the numbers 1-100 are used to classify the component images corresponding to those 100 components, the category of a component image with a missing component may be 101.
  • Optionally, the component classification device 400 further includes:
  • a pre-processing module 440, configured to obtain the position of the component in the component image by template matching, align the component image, and normalise the component image, and then trigger the first calculation module 410 to perform the step of inputting the component image to be classified into the trained convolutional neural network.
  • Because the component image is a component image cut out with reference to the template image, in order for the neural network to calculate on the component image more accurately, the component in the captured component image needs to be located at the centre of the image, and at the same time the size of the image is normalised to ensure the accuracy of subsequent processing. This process is called pre-processing.
  • The component image may also not be pre-processed.
  • Optionally, the device further includes:
  • a sample creation module 450, configured to create a sample set of component images;
  • a first training module 460, configured to pre-train the convolutional neural network with an image recognition database to obtain the initial parameters of the convolutional neural network, where the image recognition database contains various types of natural images collected from nature; and
  • a second training module 470, configured to further train the convolutional neural network with the sample set, based on the initial parameters, to fine-tune the initial parameters of the convolutional neural network, and then trigger the first calculation module 410 to perform the step of inputting the component image to be classified into the trained convolutional neural network.
  • The sample set of component images refers to the component images collected from PCBs for training the convolutional neural network.
  • The image recognition database ImageNet is an existing basic image database collected from nature and containing many categories. Although ImageNet is not an electronic component data set, it contains more than 15 million labelled natural images in more than 22,000 categories, so it can be used to pre-train the convolutional neural network to learn general image features at every level and obtain better initial parameter values for the convolutional neural network.
  • In order to obtain a better classification effect, as many samples as possible should be collected to train the convolutional neural network. It is therefore necessary to collect samples captured in a variety of scenes, such as component images cut from PCB images captured in poor lighting, sample images captured from different positions or angles, or sample images captured in other complicated scenes.
  • Before the classifier is obtained, the convolutional neural network first needs to be trained with a sufficiently large sample set, and the number of collected component image samples is generally limited. Therefore, to train the convolutional neural network better, the existing ImageNet database is used to pre-train the convolutional neural network and obtain its initial parameter values; based on those initial parameters, the convolutional neural network is then further trained with the sample set of collected component images, and the final classifier is obtained.
  • When pre-training on ImageNet, the last layer of the convolutional neural network has 1000 nodes; when the pre-trained network is further trained on the component sample set, the number of nodes in the last layer is changed to the number of component categories, so if there are N categories the layer is changed to N nodes.
  • Alternatively, the convolutional neural network may not be pre-trained with ImageNet; instead, a larger number of component image samples may be collected to train the convolutional neural network directly.
  • Optionally, the sample set of component images includes: a training sample set of component images and a test sample set of component images.
  • The training sample set of component images is used to train the convolutional neural network in the training phase, and the test sample set of component images is used to test the classification effect of the convolutional neural network after it has been trained with the training sample set.
  • The training sample set and the test sample set of component images are acquired in the same way as the sample images of the component images; a part of the acquired sample set of component images can be taken as the test sample set.
  • In the training phase of the convolutional neural network, if the classification result obtained by testing on the test sample set is not good, the convolutional neural network may be trained further.
  • Training the convolutional neural network with the training sample set of component images and testing it with the test sample set of component images makes the training effect of the convolutional neural network better.
  • Optionally, the sample creation module 450 includes:
  • a first collecting unit 451, configured to collect printed circuit board images;
  • a cropping unit 452, configured to cut component images out of the printed circuit board image with reference to a printed circuit board template image, and to label the component images to record the category of each component image; and
  • a second collecting unit 453, configured to collect the sample set of component images from the labelled component images.
  • A camera may be installed on the production line and images of different models of PCB boards may be collected in batches; board tracking technology may be used to avoid photographing the same PCB board repeatedly.
  • Each model of PCB board has a plurality of image samples, and each image sample corresponds to a particular PCB board, so the component images collected from the PCB boards come from different boards, which ensures that the samples are diverse.
  • Optionally, the sample creation module 450 cuts the component images out of the printed circuit board image by automatically cutting out the component images using the position information of the components on the printed circuit board image.
  • The component images can be cut out automatically according to the position information. The position information of a component image may be obtained from the position information of the component recorded in the board file, or by manually marking the position information.
  • Optionally, the sample creation module 450 labels each component image with a number that identifies its category. The components may also be labelled by any other means that distinguishes the components.
  • Optionally, the pre-processing module 440 is further configured to obtain the position of the component in the component image by template matching, align the component image, and normalise the component image, and then trigger the sample creation module 450 to perform the step of collecting the sample set of component images.
  • That is, when the sample images for training the convolutional neural network are acquired, the images are aligned so that the component is located at the centre of the image, and the images are normalised; this process is the pre-processing of the images, and pre-processing the component images makes the training effect better.
  • When a convolutional neural network is used to classify component images, if the component image samples were pre-processed in the training phase, the component images must also be pre-processed when they are classified with the trained convolutional neural network; if the component images were not pre-processed in the training phase, they are not pre-processed when they are classified with the trained convolutional neural network.
  • In the embodiment of the present invention, the component classification device 400 inputs the component image to be classified into the trained convolutional neural network and calculates the high-level features of the component image; the device 400 then uses the high-level features to calculate the probability that the component image belongs to each category, and takes the category corresponding to the largest probability as the category of the component image. Because the convolutional neural network can learn the high-level features of the component image, classifying component images with the convolutional neural network means that the capture of the component image is not restricted by the scene, the classification effect is good, and the accuracy is high.
  • In addition, the computational complexity can be reduced, and the classification efficiency is high.
  • FIG. 5 is a schematic structural diagram of a component classification device according to a fifth embodiment of the present invention.
  • As shown in FIG. 5, a component classification device 500 may include at least one bus 501, at least one processor 502 connected to the bus, and at least one memory 503 connected to the bus.
  • The processor 502 calls the code stored in the memory 503 via the bus 501 in order to: input the component image to be classified into the trained convolutional neural network and calculate the high-level features of the component image; use the high-level features to calculate the probability that the component image belongs to each category; and take the category corresponding to the largest probability as the category of the component image.
  • A convolutional neural network is a deep learning network. With its special structure of local weight sharing, the convolutional neural network has unique advantages in image processing: its layout is closer to an actual biological neural network, weight sharing reduces the complexity of the network, and in particular an image can be input into the network directly as a multi-dimensional vector, which avoids the complexity of data reconstruction during feature extraction and classification. To classify component images with a convolutional neural network, it is first necessary to train the component classifier with a large number of samples, and then use the trained convolutional neural network to classify component images.
  • The convolutional neural network can learn the features of the component image at every level, including low-level features and high-level features. The high-level features of the image are not affected by the shooting scene, so even component images captured in complex scenes can be recognised by their high-level features, and the component image can therefore be classified accurately. The calculation of the category of the component image by the convolutional neural network can thus be divided into two processes: calculating the high-level features of the component image, and using those high-level features to calculate the category of the component image.
  • The component image refers to an image captured at a certain component position on the PCB. Normally the component image contains that component; however, if the component is missing the component image may contain no component, and if the component is inserted incorrectly the component image may contain a different component.
  • The category of a component image refers to the classification of the component image according to the types of components on the PCB. For example, if there are 100 components on the PCB, there are at least 100 categories of component images, and the numbers 1-100 can be used to identify the categories; other symbols can also be used to identify the different component image categories.
  • The probability value of the component image for each category indicates how likely it is that the image belongs to that category. Obviously, the greater the probability, the more likely it is that the image belongs to that category, so taking the category with the largest probability as the category of the component image gives the most accurate classification result.
  • Optionally, before the processor 502 inputs the component image into the trained convolutional neural network, the processor 502 is further configured to: obtain the position of the component in the component image by template matching and align the component image; and normalise the component image, and then trigger the step of inputting the component image into the trained convolutional neural network.
  • Optionally, before the processor 502 inputs the component image into the trained convolutional neural network, the processor 502 is further configured to: create a sample set of component images; pre-train the convolutional neural network with an image recognition database to obtain the initial parameters of the convolutional neural network; and, based on those initial parameters, further train the convolutional neural network with the sample set to fine-tune the initial parameters of the convolutional neural network, and then trigger the step of inputting the component image into the trained convolutional neural network.
  • the sample set of the component image refers to the component image collected from the PCB for training the convolutional neural network
  • the image recognition database (ImageNet) is an existing image basic database collected from nature and containing various categories.
  • ImageNet is not an electronic component dataset, it contains more than 15 million labeled natural images in more than 22,000 categories. It can be used for pre-training convolutional neural networks to learn general-level image features at various levels and obtain better convolution. Neural network initial parameter values.
  • the sample set of the component images includes: a training sample set of the component images and a test sample set of the component images.
  • the training sample set of the component images is used for training the convolutional neural network in the training phase, and the test sample set of the component images is used in the training phase to test the classification performance of the convolutional neural network after it has been trained on the training sample set.
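  • A minimal sketch of splitting the labeled component images into these two sample sets might look as follows; the 80/20 ratio and the (image, label) record format are assumptions for illustration.

```python
import random

def split_samples(samples, test_fraction=0.2, seed=0):
    """Split a list of (image_path, label) records into (training set, test set)."""
    samples = samples[:]                      # copy so the input is not reordered
    random.Random(seed).shuffle(samples)
    n_test = int(len(samples) * test_fraction)
    return samples[n_test:], samples[:n_test]
```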
  • the creating, by the processor 502, of the sample set of the component images includes: collecting a printed circuit board image; cropping component images from the printed circuit board image, with a printed circuit board template image as a reference, and marking the component images to record their categories; and collecting the sample set of the component images from the marked component images.
  • the cropping, by the processor 502, of the component image from the printed circuit board includes: automatically cropping the component image using the position information of the components on the printed board image.
  • once the position information of a component is known, the component image can be cropped automatically according to that information.
  • the position information of the component image may be obtained from the component position information recorded in the board file, or from manually labeled position information.
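  • A sketch of this automatic cropping is shown below; the (name, x, y, w, h) record format is an assumption standing in for positions read from the board file or from manual annotations, and the board image is assumed to be a NumPy array.

```python
def crop_components(board_img, positions):
    """Crop one sub-image per component from a full board image.

    positions: iterable of (name, x, y, w, h) records giving each component's
    bounding box on the board image (an assumed format for illustration).
    """
    crops = {}
    for name, x, y, w, h in positions:
        crops[name] = board_img[y:y + h, x:x + w]   # NumPy-style image slicing
    return crops
```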
  • the labeling, by the processor 502, of the component image includes: labeling according to the component category information.
  • before the processor 502 collects the sample set of the component images, the processor 502 is further configured to:
  • obtain the position of the component in the component image by template matching and align the component image; and normalize the component image, to trigger the step of collecting the sample set of the component images.
  • the component classification device 500 inputs the component image to be classified into the trained convolutional neural network and calculates the advanced features of the component image; the component classification device 500 then uses the advanced features to calculate the probability that the component image belongs to each category, and takes the category corresponding to the largest of these probabilities as the category of the component image. Since the convolutional neural network can learn the advanced features of the component image, when the embodiment of the present invention uses the convolutional neural network to classify component images, the collection of the component images is not constrained by the scene, the classification effect is good, and the accuracy is high.
  • furthermore, because the convolutional neural network shares weights locally, the computational complexity can be reduced in the process of classifying component images with the convolutional neural network, and the classification efficiency is high.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium can store a program, and the program, when executed, performs some or all of the steps of any component classification method described in the foregoing method embodiments.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division into units is only a logical functional division.
  • in actual implementation there may be other ways of dividing them; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • the software product includes a number of instructions for causing a computer device (which may be an embedded device, a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose a component classification method and apparatus. The method includes: inputting a component image to be classified into a trained convolutional neural network, and calculating advanced features of the component image; using the advanced features to calculate the probability that the component image belongs to each category; and taking the category corresponding to the largest of these probabilities as the category of the component image. Since the convolutional neural network can learn the advanced features of the component image, when the embodiments of the present invention use the convolutional neural network to classify component images, the collection of the component images is not constrained by the scene, the classification effect is good, and the accuracy is high; furthermore, because the convolutional neural network shares weights locally, the computational complexity can be reduced in the process of classifying component images with the convolutional neural network, and the classification efficiency is high.

Description

一种元件分类方法及装置 技术领域
本发明涉及计算机领域,具体涉及一种元件分类方法及装置。
背景技术
印刷线路板(Printed circuit board,简称PCB板)是指为各种电子元器件提供连接的电路板,随着电子设备越来越复杂,PCB板上的电子元件数量也越来越多,为了对PCB板上的电子元件进行检测,需要对电子元件进行分类,从而对电子元件进行自动标注,以减轻人工制版的工作量,也为后续的元件检测提供元件信息。
目前,对从PCB板上截取到的、位于元件位置的、包含单个元件的元件图像进行分类时,主要基于传统的机器学习方法学习图像的特征,再利用该特征对元件图像进行分类,但是由于基于传统的机器学习方法学习到的元件图像的特征容易受外界环境的影响,所以在某些场景下,如光照不均匀的情况下,会导致对元件图像的分类效果差。
发明内容
本发明实施例提供了一种元件分类方法及装置,以期可以对元件图像进行准确地分类。
本发明实施例第一方面提供一种元件分类方法,包括:
将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;
利用所述高级特征计算所述元件图像属于各个类别的概率;
取所述概率中最大的概率对应的类别为所述元件图像的类别。
本发明实施例第二方面提供一种元件分类装置,包括:
第一计算模块,用于将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;
第二计算模块,利用所述高级特征计算所述元件图像属于各个类别的概率;
分类模块,用于取所述概率中最大的概率对应的类别为所述元件图像的类别。
可以看出,在本发明实施例提供的技术方案中,将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;利用所述高级特征计算所述元件图像属于各个类别的概率;取所述概率中最大的概率对应的类别为所述元件图像的类别。由于卷积神经网络能学习到元件图像的高级特征,所述本发明实施例利用卷积神经网络对元件图像进行分类时,将使元件图像的采集不受场景约束,分类效果好,准确性高。
更进一步地,由于卷积神经网络局部权值共享,所以在利用卷积神经网络对元件图像分类的过程中,可以降低计算复杂度,分类效率高。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1-a是本发明第一实施例提供的一种元件分类方法的流程示意图;
图1-b是卷积神经网络的网络结构图;
图1-c是经过训练后的卷积神经网络的网络结构图;
图2是本发明第二实施例提供的一种元件分类方法的流程示意图;
图3是本发明第三实施例提供的一种元件分类装置的结构示意图;
图4是本发明第四实施例提供的一种元件分类装置的结构示意图;
图5是本发明第五实施例提供的一种元件分类装置的结构示意图。
具体实施方式
本发明实施例提供了一种元件分类方法及装置,以期可以对元件图像进行准确地分类。
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发 明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”和“第三”等是用于区别不同对象,而非用于描述特定顺序。此外,术语“包括”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
本发明实施例的一种元件分类方法,一种元件分类方法包括:将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;利用所述高级特征计算所述元件图像属于各个类别的概率;取所述概率中最大的概率对应的类别为所述元件图像的类别。
首先参见图1,图1是本发明第一实施例提供的一种元件分类方法的流程示意图。其中,如图1所示,本发明第一实施例提供的一种元件分类方法可以包括:
S101、将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征。
其中,参见图1-b,图1-b是卷积神经网络的网络结构图。卷积神经网络是一种深度学习网络,卷积神经网络以其局部权值共享的特殊结构在图像处理方面有着独特的优越性,其布局更接近于实际的生物神经网络,权值共享降低了网络的复杂性,特别是多维输入向量的图像可以直接输入网络这一特点避免了特征提取和分类过程中数据重建的复杂度。利用卷积神经网络对元件图像进行分类时,首先需要利用大量的样本训练元件分类器,然后再利用训练好的卷积神经网络对元件图像进行分类。由于卷积神经网络可学习到元件图像各个层次的特征,包括低层特征和高级特征,而图像的高级特征不受拍摄场景影响,也即即使在复杂的场景下拍摄的元件图像也能利用高级特征对元件图像进行识别,从而利用该特征可以准确地对元件图像进行分类识别。所以,卷积神经网络计算元件图像的类别时可以分为两个过程,即计算元件图像的高级特征,以及利用该高级特征计算元件图像的类别。
其中,元件图像是指从PCB板上截取的在某个元件位置的图像,一般来说, 该图像为包含该元件的元件图像,但如果在元件漏件的情况下,该元件图像也有可能不包含元件,或在元件插件有误的情况下,该元件图像也有可能为包含其它元件的元件图像。
可选地,本发明实施例中所指的卷积神经网络为包含N层的卷积神经网络,其中,N为大于1的正整数。其中,该卷积神经网络的前N-1层用于计算元件图像各个层次的特征,该卷积神经网络的第N层用于根据前N-1层计算出来的元件图像的特征计算元件图像的类别。
优选地,N的值为7。
优选地,该卷积神经网络的前N-1层计算出来的元件图像的各个层次的特征包括低级特征以及高级特征。
S102、利用所述高级特征计算所述元件图像属于各个类别的概率。其中,元件图像的类别是指根据PCB板上元件种类的不同对元件图像进行的分类标注,如PCB板上共有100种元件,则元件图像的类别共至少有100种,可用1-100之间的数字对其进行分类标识,也可用其它符号对不同元件图像类别进行分类标识。
可选地,在本发明的实施例中,可以用各个元件图像的概率值判断待分类的元件图像的类别。
优选地,对于有N层网络的卷积神经网络,若卷积神经网络的前N-1层用于计算元件图像的各个层次的特征,则该卷积神经网络的第N层用于根据前N-1层计算得到的特征(包括低级特征和高级特征)计算该元件图像属于各个类别的概率,再利用该概率判断元件图像的类别。
S103、取所述各个类别的概率中最大的概率对应的类别为所述元件图像的类别。
其中,元件图像属于各个类别的概率值表示了该元件图像属于各个类别的可能性,很显然,概率越大,表示属于该类别的可能性越大,从而取各个类别的概率中最大的概率对应的类别为元件图像的类别,将使得分类结果最为准确。
可选地,在本发明的一些可能的实施方式中,当元件图像中的元件不存在时,元件图像的类别将为其它,如PCB板上若有100个元件,若以1-100对这100个元件对应的元件图像进行分类,对于漏件的元件图像来说,类别可以为101。
可以看出,本实施例的方案中,将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;利用所述高级特征计算所述元件图像属于各个类别的概率;取所述概率中最大的概率对应的类别为所述元件图像的类别。由于卷积神经网络能学习到元件图像的高级特征,所述本发明实施例利用卷积神经网络对元件图像进行分类时,将使元件图像的采集不受场景约束,分类效果好,准确性高。
更进一步地,由于卷积神经网络局部权值共享,所以在利用卷积神经网络对元件图像分类的过程中,可以降低计算复杂度,分类效率高。
可选地,在本发明的一些可能的实施方式中,所述将元件图像输入经过训练后的卷积神经网络之前,所述方法还包括:
利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发执行所述将元件图像输入经过训练后的卷积神经网络的步骤。
可以理解,由于元件图像是从模板图像上截取下来的一块元件图像,为了使神经网络对元件图像的计算更为准确,需要使截取到的元件图像中的元件位于图像的中心位置,并同时对图像的大小进行归一化,这样以保证后续处理的准确性。该过程称为预处理过程。
可选地,在本发明的一些可能的实施方式中,也可以不对元件图像进行预处理。
可选地,在本发明的一些可能的实施方式中,所述将元件图像输入经过训练后的卷积神经网络之前,所述方法还包括:
创建所述元件图像的样本集;
利用图像识别数据库预训练所述卷积神经网络,得到所述卷积神经网络初始参数,所述图像识别数据库包含从自然界采集到的各种类别的自然图像;
基于所述卷积神经网络初始参数,利用所述样本集进一步训练所述卷积神经网络以对所述卷积神经网络初始参数进行微调,并触发执行所述将元件图像输入经过训练后的卷积神经网络的步骤。
其中,元件图像的样本集是指从PCB板上采集到的用于训练卷积神经网络的元件图像,图像识别数据库(ImageNet)是现有的从自然界采集到的包含各 种类别的图像基础数据库,虽然ImageNet并非电子元件数据集,但其包含超过22000个类别的1500万张带标注的自然图像,用于预训练卷积神经网络可学习出各层次的通用图像特征,得到较好的卷积神经网络初始参数值。
可选地,在本发明的一些可能的实施方式中,为了得到较好的分类效果,所以在对卷积神经网络进行训练的时候尽可能多的采集更多的样本对卷积神经网络进行训练,所以需要采集各个场景下拍摄的样本,如在光线不好的情况下拍摄的PCB板样本上截取到的元件图像,以及从不同的位置或者角度拍摄到的样本图像,或者其它复杂场景下拍摄到的样本图像。
可以理解,在得到分类器之前,首先需要利用足够的样本集对卷积神经网络进行训练,而采集到的元件图像的样本集的数量一般有限,所以为了更好地训练卷积神经网络,利用现有的ImageNet首先对卷积神经网络进行预训练,得到卷积神经网络的初始参数值,再基于该初始参数,利用采集到的元件图像的样本集再对卷积神经网络进行进一步的训练,从而得到最终的分类器。参见图1-c,图1-c是经过训练后的卷积神经网络的网络结构图,其中,如图1-c所示,利用元件图像的样本集对卷积神经网络进行进一步的训练后微调卷积神经网络的参数。图1-b与图1-c的不同之处在于经过移动学习后,卷积神经网络的最后一层的节点数从原来的1000个节点变为现在的N个节点。其中,N为元件图像的类别数。
可选地,在本发明的一些可能的实施方式中,在利用ImageNet首先对卷积神经网络进行预训练时,卷积神经网络最后一层的节点数为1000个,当再基于预训练后的神经网络利用元件图像的样本集进行训练时,将卷积神经网络最后一层的节点数改为元件的类别数,如元件共有N类,则将该层改为N个节点。
可选地,在本发明的一些可能的实施方式中,也可以不利用ImageNet对卷积神经网络进行预训练,可采集较多的元件图像的样本训练卷积神经网络。
可选地,在本发明的一些可能的实施方式中,所述元件图像的样本集包括:
所述元件图像的训练样本集和所述元件图像的测试样本集。
其中,元件图像的训练样本集是用于在训练阶段训练卷积神经网络的,元件图像的测试样本集是用于在训练阶段测试经过训练本来集训练后的卷积神经网络的分类效果的样本集。
可选地,在本发明的一些可能的实施方式中,卷积神经网络的训练样本集 和卷积神经网络的测试样本集在的采集方法一样,可以从所采集到的元件图像的样本集取一部分做元件图像的测试样本集。
可选地,在本发明的一些可能的实施方式中,在卷积神经网络的训练阶段,若利用卷积神经网络的测试样本集测试得到的分类效果不佳时,可对卷积神经网络进一步训练。
可以理解,在对卷积神经网络进行训练时,分别利用元件图像的训练样本集对卷积神经网络进行训练,以及利用元件图像的测试样本集对元件图像进行测试将使得对卷积神经网络的训练效果更佳。
可选地,在本发明的一些可能的实施方式中,所述创建所述元件图像的样本集包括:
采集印刷电路板图像;
以印刷电路板模板图像为参考,在所述印刷电路板图像上截取元件图像并对所述元件图像进行标记以记录所述元件图像的类别;
从所述经过标记后的元件图像中采集所述元件图像的样本集。
可以理解,由于是需要对每个元件进行分类,所以在采集元件图像的样本时,需要截取PCB电路板图像上面每个元件的图像的样本集合进行分类训练。并且,为了对每个元件样本进行区分,所以在训练之前需要对各个元件图像进行标注以区分不同的元件。
可选地,在本发明的一些可能的实施方式中,可以在生产线上架设摄像头,批量采集不同型号的PCB板卡图像,并以板卡跟踪技术避免重复拍摄某一PCB板卡。这样每个型号的PCB板卡均包含多个图像样本,每个图像样本对应某一型号的某张PCB板卡,从而这样在获取到的PCB板卡上的元件图像也来自不同板卡上,保证样本具备多样性。
可选地,在本发明的一些可能的实施方式中,所述在所述印刷板电路上截取元件图像,包括:
利用印刷板图像上面的元件的位置信息自动截取元件图像。
可以理解,当知道元件的位置信息后,则可以根据该位置信息自动截取元件图像。
可选地,在本发明的一些可能的实施方式中,所述获取元件图像的位置信息可以通过板式文件中所记录的元件的位置信息,或者通过人工标注的位置信 息来获取。
可选地,在本发明的一些可能的实施方式中,所述对元件图像进行标注包括:
根据元件类别信息进行标注。
可以理解,需要对元件的类别进行区分,从而在训练的时候记录元件的类别才能准确地对元件进行漏件检测。
可选地,在本发明的另一些可能的实施方式中,也可以通过其它能对元件进行区分的方式对元件进行标注。
可选地,在本发明的一些可能的实施方式中,所述采集所述元件图像的样本集之前,所述方法还包括:
利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发执行所述采集所述元件图像的样本集的步骤。
可以理解,与利用卷积神经网络进行测试的过程类似,在采集训练卷积神经网络的样本图像时对图像进行对齐以使元件位于图像的中心位置,并对图像进行归一化,该过程称为对图像的预处理过程,对元件图像进行预处理会使训练效果更好。
可选地,在本发明的一些可能的实施方式中,在利用卷积神经网络对元件图像进行分类时,如果在训练阶段对元件图像样本进行预处理,那么在利用训练后的卷积神经网络对元件图像测试时,也需要对元件图像进行预处理;如果在训练阶段不对元件图像进行预处理,那么在利用训练后的卷积神经网络对元件图像进行测试时,也不对元件图像进行预处理。为了便于更好理解和实施本发明实施例的上述方案,下面结合一些具体的应用场景进行举例说明。
请参见图2,图2是本发明第二实施例提供的一种元件分类方法的流程示意图,其中,如图2所示,本发明第二实施例提供的一种元件分类方法可以包括:
S201、采集印刷电路板图像。
其中,PCB板卡图像是指可直接拍摄到的包含元件图像的图像,元件图像是指从PCB板上截取的在某个元件位置的图像,一般来说,该图像为包含该元件的元件图像,但如果在元件漏件的情况下,该元件图像也有可能不包含元件, 或在元件插件有误的情况下,该元件图像也有可能包含其它的元件的元件图像;
元件图像的类别是指根据PCB板上元件种类的不同对元件图像进行的分类标注,如PCB板上共有100种元件,则元件图像的类别共至少有100种,可用1-100之间的数字对其进行分类标识,也可用其它符号对不同元件图像类别进行分类标识。
可选地,在本发明的一些可能的实施方式中,可以在生产线上架设摄像头,批量采集不同型号的PCB板卡图像,并以板卡跟踪技术避免重复拍摄某一PCB板卡。这样每个型号的PCB板卡均包含多个图像样本,每个图像样本对应某一型号的某张PCB板卡,从而这样在获取到的PCB板卡上的元件图像也来自不同板卡上,同时,需要拍摄各个场景下拍摄的样本,如在光线不好的情况下拍摄的PCB板图像,以及从不同的位置或角度拍摄到的PCB板图像,从而保证从PCB板图像上截取到的元件图像样本具备多样性。
可以理解,为了对元件进行分类,首先需要采集PCB板卡图像,再从PCB板卡图像上截取元件图像。
S202、以印刷电路板模板图像为参考,在印刷电路板图像上截取元件图像并对元件图像进行标记以记录元件图像的类别。
可选地,在本发明的一些可能的实施方式中,在PCB板卡图像上截取元件图像做为元件图像的样本图像,用于对卷积神经网络进行训练,也可以做为元件图像的测试图像,用于基于训练好的卷积神经网络对测试图像进行分类。
可以理解,当截取元件图像的样本时,由于是需要对每个元件进行分类,所以在采集元件图像的样本时,需要截取PCB电路板图像上面每个元件的图像的样本集合进行分类训练。并且,为了对每个元件样本进行区分,所以在训练之前需要对各个元件图像进行标注以区分不同的元件。
可选地,在本发明的一些可能的实施方式中,所述在所述印刷板电路上截取元件图像,包括:
利用印刷板图像上面的元件的位置信息自动截取元件图像。
可以理解,当知道元件的位置信息后,则可以根据该位置信息自动截取元件图像。
可选地,在本发明的一些可能的实施方式中,所述获取元件图像的位置信息可以通过板式文件中所记录的元件的位置信息,或者通过人工标注的位置信 息来获取。
可选地,在本发明的一些可能的实施方式中,所述对元件图像进行标注包括:
根据元件类别信息进行标注。
可以理解,需要对元件的类别进行区分,从而在训练的时候记录元件的类别才能准确地对元件进行漏件检测。
可选地,在本发明的另一些可能的实施方式中,也可以通过其它能对元件进行区分的方式对元件进行标注。
S203、从经过标记后的元件图像中采集元件图像的样本集。
其中,元件图像的样本集是指从PCB板上采集到的用于训练卷积神经网络的元件图像。
可选地,在本发明的一些可能的实施方式中,所述采集所述元件图像的样本集之前,所述方法还包括:
利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发执行所述采集所述元件图像的样本集的步骤。
可以理解,与利用卷积神经网络进行测试的过程类似,在采集训练卷积神经网络的样本图像时对图像进行对齐以使元件位于图像的中心位置,并对图像进行归一化,该过程称为对图像的预处理过程,对元件图像进行预处理会使训练效果更好。
可选地,在本发明的一些可能的实施方式中,为了得到较好的分类效果,所以在对卷积神经网络进行训练的时候尽可能多的采集更多的样本对卷积神经网络进行训练,由于元件图像的样本集是指从PCB板上采集到的用于训练卷积神经网络的元件图像,所以可以理解,保证PCB板采集时的多样性,将保证元件图像的多样性。
可选地,在本发明的一些可能的实施方式中,所述元件图像的样本集包括:
所述元件图像的训练样本集和所述元件图像的测试样本集。
其中,元件图像的训练样本集是用于在训练阶段训练卷积神经网络的,元件图像的测试样本集是用于在训练阶段测试经过训练本来集训练后的卷积神经 网络的分类效果的样本集。
可选地,在本发明的一些可能的实施方式中,卷积神经网络的训练样本集和卷积神经网络的测试样本集在的采集方法一样,可以从所采集到的元件图像的样本集取一部分做元件图像的测试样本集。
可选地,在本发明的一些可能的实施方式中,在卷积神经网络的训练阶段,若利用卷积神经网络的测试样本集测试得到的分类效果不佳时,可对卷积神经网络进一步训练。
可以理解,在对卷积神经网络进行训练时,分别利用元件图像的训练样本集对卷积神经网络进行训练,以及利用元件图像的测试样本集对元件图像进行测试将使得对卷积神经网络的训练效果更佳。S204、利用图像识别数据库预训练卷积神经网络,得到卷积神经网络初始参数。
其中,所述图像识别数据库包含从自然界采集到的各种类别的自然图像,图像识别数据库(ImageNet)是现有的从自然界采集到的包含各种类别的图像基础数据库,虽然ImageNet并非电子元件数据集,但其包含超过22000个类别的1500万张带标注的自然图像,用于预训练卷积神经网络可学习出各层次的通用图像特征,得到较好的卷积神经网络初始参数值。
可以理解,在得到分类器之前,首先需要利用足够的样本集对卷积神经网络进行训练,而采集到的元件图像的样本集的数量一般有限,所以为了更好地训练卷积神经网络,利用现有的ImageNet首先对卷积神经网络进行预训练,得到卷积神经网络的初始参数值,再基于该初始参数,利用采集到的元件图像的样本集再对卷积神经网络进行进一步的训练,从而得到最终的分类器。
可选地,在本发明的一些可能的实施方式中,也可以不利用图像识别数据库预训练卷积神经网络,那么此时可以尽可能多地采集元件图像样本用于训练卷积神经网络。
S205、基于卷积神经网络初始参数,利用样本集进一步训练卷积神经网络以对卷积神经网络初始参数进行微调。
其中,卷积神经网络是一种深度学习网络,卷积神经网络以其局部权值共享的特殊结构在图像处理方面有着独特的优越性,其布局更接近于实际的生物神经网络,权值共享降低了网络的复杂性,特别是多维输入向量的图像可以直接输入网络这一特点避免了特征提取和分类过程中数据重建的复杂度。利用卷 积神经网络对元件图像进行分类时,首先需要利用大量的样本训练元件分类器,然后再利用训练好的卷积神经网络对元件图像进行分类。由于卷积神经网络可学习到元件图像各个层次的特征,包括低层特征和高级特征,而图像的高级特征不受拍摄场景影响,也即即使在复杂的场景下拍摄的元件图像也能利用高级特征对元件图像进行识别,从而利用该特征可以准确地对元件图像进行分类识别。
可选地,在本发明的一些可能的实施方式中,在利用ImageNet首先对卷积神经网络进行预训练时,卷积神经网络最后一层的节点数为1000个,当再基于预训练后的神经网络利用元件图像的样本集进行训练时,将卷积神经网络最后一层的节点数改为元件的类别数,如元件共有N类,则将该层改为N个节点。
可选地,在本发明的一些可能的实施方式中,也可以不基于ImageNet训练后的卷积神经网络的初始参数,而是直接利用较多的元件图像样本训练卷积神经网络。
S206、将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征。
可选地,本发明实施例中所指的卷积神经网络为包含N层的卷积神经网络,其中,N为大于1的正整数。其中,该卷积神经网络的前N-1层用于计算元件图像各个层次的特征,该卷积神经网络的第N层用于根据前N-1层计算出来的元件图像的特征计算元件图像的类别。
优选地,N的值为7。
优选地,该卷积神经网络的前N-1层计算出来的元件图像的各个层次的特征包括低级特征以及高级特征。
S207、利用高级特征计算元件图像属于各个类别的概率。
可选地,在本发明的实施例中,可以用各个元件图像的概率值判断待分类的元件图像的类别。
优选地,对于有N层网络的卷积神经网络,若卷积神经网络的前N-1层用于计算元件图像的各个层次的特征,则该卷积神经网络的第N层用于根据前N-1层计算得到的特征(包括低级特征和高级特征)计算该元件图像属于各个类别的概率,再利用该概率判断元件图像的类别。
可选地,在本发明的一些可能的实施方式中,所述将元件图像输入经过训 练后的卷积神经网络之前,所述方法还包括:
利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发执行所述将元件图像输入经过训练后的卷积神经网络的步骤。
可以理解,由于元件图像是从模板图像上截取下来的一块元件图像,为了使神经网络对元件图像的计算更为准确,需要使截取到的元件图像中的元件位于图像的中心位置,并同时对图像的大小进行归一化,这样以保证后续处理的准确性。该过程称为预处理过程。
可选地,在本发明的一些可能的实施方式中,也可以不对元件图像进行预处理。
可选地,在本发明的一些可能的实施方式中,在利用卷积神经网络对元件图像进行分类时,如果在训练阶段对元件图像样本进行预处理,那么在利用训练后的卷积神经网络对元件图像分类时,也需要对元件图像进行预处理;如果在训练阶段不对元件图像进行预处理,那么在利用训练后的卷积神经网络对元件图像进行分类时,也不对元件图像进行预处理。
S208、取各个类别的概率中最大的概率对应的类别为元件图像的类别。
其中,元件图像属于各个类别的概率值表示了该元件图像属于各个类别的可能性,很显然,概率越大,表示属于该类别的可能性越大,从而取各个类别的概率中最大的概率对应的类别为元件图像的类别,将使得分类结果最为准确。
可选地,在本发明的一些可能的实施方式中,当元件图像中的元件不存在时,元件图像的类别将为其它,如PCB板上若有100个元件,若以1-100对这100个元件对应的元件图像进行分类,对于漏件的元件图像来说,类别可以为101。
可以看出,本实施例的方案中,将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;利用所述高级特征计算所述元件图像属于各个类别的概率;取所述概率中最大的概率对应的类别为所述元件图像的类别。由于卷积神经网络能学习到元件图像的高级特征,所述本发明实施例利用卷积神经网络对元件图像进行分类时,将使元件图像的采集不受场景约束,分类效果好,准确性高。
更进一步地,由于卷积神经网络局部权值共享,所以在利用卷积神经网络对元件图像分类的过程中,可以降低计算复杂度,分类效率高。
本发明实施例还提供一种元件分类装置,该装置包括:
第一计算模块,用于将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;
第二计算模块,利用所述高级特征计算所述元件图像属于各个类别的概率;
分类模块,用于取所述概率中最大的概率对应的类别为所述元件图像的类别。
具体的,请参见图3,图3是本发明第三实施例提供的一种元件分类装置的结构示意图,其中,如图3所示,本发明第三实施例提供的一种元件分类装置300可以包括:
第一计算模块310、第二计算模块320和分类模块330。
其中,第一计算模块310,用于将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征。
其中,卷积神经网络是一种深度学习网络,卷积神经网络以其局部权值共享的特殊结构在图像处理方面有着独特的优越性,其布局更接近于实际的生物神经网络,权值共享降低了网络的复杂性,特别是多维输入向量的图像可以直接输入网络这一特点避免了特征提取和分类过程中数据重建的复杂度。利用卷积神经网络对元件图像进行分类时,首先需要利用大量的样本训练元件分类器,然后再利用训练好的卷积神经网络对元件图像进行分类。由于卷积神经网络可学习到元件图像各个层次的特征,包括低层特征和高级特征,而图像的高级特征不受拍摄场景影响,也即即使在复杂的场景下拍摄的元件图像也能利用高级特征对元件图像进行识别,从而利用该特征可以准确地对元件图像进行分类识别。所以,卷积神经网络计算元件图像的类别时可以分为两个过程,即计算元件图像的高级特征,以及利用该高级特征计算元件图像的类别。
其中,元件图像是指从PCB板上截取的在某个元件位置的图像,一般来说,该图像为包含该元件的元件图像,但如果在元件漏件的情况下,该元件图像也有可能不包含元件,或在元件插件有误的情况下,该元件图像也有可能包含其它元件的元件图像。
可选地,本发明实施例中所指的卷积神经网络为包含N层的卷积神经网络, 其中,N为大于1的正整数。其中,该卷积神经网络的前N-1层用于计算元件图像各个层次的特征,该卷积神经网络的第N层用于根据前N-1层计算出来的元件图像的特征计算元件图像的类别。
优选地,N的值为7。
优选地,该卷积神经网络的前N-1层计算出来的元件图像的各个层次的特征包括低级特征以及高级特征。
第二计算模块320,用于利用所述高级特征计算所述元件图像属于各个类别的概率。
其中,元件图像的类别是指根据PCB板上元件种类的不同对元件图像进行的分类标注,如PCB板上共有100种元件,则元件图像的类别共至少有100种,可用1-100之间的数字对其进行分类标识,也可用其它符号对不同元件图像类别进行分类标识。
可选地,在本发明的实施例中,可以用各个元件图像的概率值判断待分类的元件图像的类别。
优选地,对于有N层网络的卷积神经网络,若卷积神经网络的前N-1层用于计算元件图像的各个层次的特征,则该卷积神经网络的第N层用于根据前N-1层计算得到的特征(包括低级特征和高级特征)计算该元件图像属于各个类别的概率,再利用该概率判断元件图像的类别。
分类模块330,用于取所述各个类别的概率中最大的概率对应的类别为所述元件图像的类别。
其中,元件图像属于各个类别的概率值表示了该元件图像属于各个类别的可能性,很显然,概率越大,表示属于该类别的可能性越大,从而取各个类别的概率中最大的概率对应的类别为元件图像的类别,将使得分类结果最为准确。
可选地,在本发明的一些可能的实施方式中,当元件图像中的元件不存在时,元件图像的类别将为其它,如PCB板上若有100个元件,若以1-100对这100个元件对应的元件图像进行分类,对于漏件的元件图像来说,类别可以为101。
可以理解的是,本实施例的元件分类装置300的各功能模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
可以看出,本实施例的方案中,元件分类装置300将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;元件分类装置300再利用所述高级特征计算所述元件图像属于各个类别的概率;并取所述概率中最大的概率对应的类别为所述元件图像的类别。由于卷积神经网络能学习到元件图像的高级特征,所述本发明实施例利用卷积神经网络对元件图像进行分类时,将使元件图像的采集不受场景约束,分类效果好,准确性高。
更进一步地,由于卷积神经网络局部权值共享,所以在利用卷积神经网络对元件图像分类的过程中,可以降低计算复杂度,分类效率高。
请参见图4,图4是本发明第四实施例提供的一种元件分类装置的结构示意图,其中,如图4所示,本发明第四实施例提供的一种元件分类装置400可以包括:
第一计算模块410、第二计算模块420和分类模块430。
其中,第一计算模块410,用于将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征。
其中,卷积神经网络是一种深度学习网络,卷积神经网络以其局部权值共享的特殊结构在图像处理方面有着独特的优越性,其布局更接近于实际的生物神经网络,权值共享降低了网络的复杂性,特别是多维输入向量的图像可以直接输入网络这一特点避免了特征提取和分类过程中数据重建的复杂度。利用卷积神经网络对元件图像进行分类时,首先需要利用大量的样本训练元件分类器,然后再利用训练好的卷积神经网络对元件图像进行分类。由于卷积神经网络可学习到元件图像各个层次的特征,包括低层特征和高级特征,而图像的高级特征不受拍摄场景影响,也即即使在复杂的场景下拍摄的元件图像也能利用高级特征对元件图像进行识别,从而利用该特征可以准确地对元件图像进行分类识别。所以,卷积神经网络计算元件图像的类别时可以分为两个过程,即计算元件图像的高级特征,以及利用该高级特征计算元件图像的类别。
其中,元件图像是指从PCB板上截取的在某个元件位置的图像,一般来说,该图像为包含该元件的元件图像,但如果在元件漏件的情况下,该元件图像也有可能不包含元件,或在元件插件有误的情况下,该元件图像也有可能包含其它元件的元件图像。
可选地,本发明实施例中所指的卷积神经网络为包含N层的卷积神经网络, 其中,N为大于1的正整数。其中,该卷积神经网络的前N-1层用于计算元件图像各个层次的特征,该卷积神经网络的第N层用于根据前N-1层计算出来的元件图像的特征计算元件图像的类别。
优选地,N的值为7。
优选地,该卷积神经网络的前N-1层计算出来的元件图像的各个层次的特征包括低级特征以及高级特征。
第二计算模块420,用于利用所述高级特征计算所述元件图像属于各个类别的概率。
其中,元件图像的类别是指根据PCB板上元件种类的不同对元件图像进行的分类标注,如PCB板上共有100种元件,则元件图像的类别共至少有100种,可用1-100之间的数字对其进行分类标识,也可用其它符号对不同元件图像类别进行分类标识。
可选地,在本发明的实施例中,可以用各个元件图像的概率值判断待分类的元件图像的类别。
优选地,对于有N层网络的卷积神经网络,若卷积神经网络的前N-1层用于计算元件图像的各个层次的特征,则该卷积神经网络的第N层用于根据前N-1层计算得到的特征(包括低级特征和高级特征)计算该元件图像属于各个类别的概率,再利用该概率判断元件图像的类别。
分类模块430,用于取所述各个类别的概率中最大的概率对应的类别为所述元件图像的类别。
其中,元件图像属于各个类别的概率值表示了该元件图像属于各个类别的可能性,很显然,概率越大,表示属于该类别的可能性越大,从而取各个类别的概率中最大的概率对应的类别为元件图像的类别,将使得分类结果最为准确。
可选地,在本发明的一些可能的实施方式中,当元件图像中的元件不存在时,元件图像的类别将为其它,如PCB板上若有100个元件,若以1-100对这100个元件对应的元件图像进行分类,对于漏件的元件图像来说,类别可以为101。
可选地,在本发明的一些可能的实施方式中,所述元件分类装置400还包括:
预处理模块440,用于利用模板匹配得到所述元件图像中元件的位置并对所 述元件图像进行对齐;
对所述元件图像进行归一化,以触发所述第一计算模块410执行所述将待分类的元件图像输入经过训练后的卷积神经网络的步骤。
可以理解,由于元件图像是从模板图像上截取下来的一块元件图像,为了使神经网络对元件图像的计算更为准确,需要使截取到的元件图像中的元件位于图像的中心位置,并同时对图像的大小进行归一化,这样以保证后续处理的准确性。该过程称为预处理过程。
可选地,在本发明的一些可能的实施方式中,也可以不对元件图像进行预处理。
可选地,在本发明的一些可能的实施方式中,所述装置还包括:
样本创建模块450,用于创建所述元件图像的样本集;
第一训练模块460,用于利用图像识别数据库预训练所述卷积神经网络,得到所述卷积神经网络初始参数,所述图像识别数据库包含从自然界采集到的各种类别的自然图像;
第二训练模块470,用于基于所述卷积神经网络初始参数,利用所述样本集进一步训练所述卷积神经网络以对所述卷积神经网络初始参数进行微调,并触发所述第一计算模块410执行所述将待分类的元件图像输入经过训练后的卷积神经网络的步骤。
其中,元件图像的样本集是指从PCB板上采集到的用于训练卷积神经网络的元件图像,图像识别数据库(ImageNet)是现有的从自然界采集到的包含各种类别的图像基础数据库,虽然ImageNet并非电子元件数据集,但其包含超过22000个类别的1500万张带标注的自然图像,用于预训练卷积神经网络可学习出各层次的通用图像特征,得到较好的卷积神经网络初始参数值。
可选地,在本发明的一些可能的实施方式中,为了得到较好的分类效果,所以在对卷积神经网络进行训练的时候尽可能多的采集更多的样本对卷积神经网络进行训练,所以需要采集各个场景下拍摄的样本,如在光线不好的情况下拍摄的PCB板样本上截取到的元件图像,以及从不同的位置或者角度拍摄到的样本图像,或者其它复杂场景下拍摄到的样本图像。
可以理解,在得到分类器之前,首先需要利用足够的样本集对卷积神经网络进行训练,而采集到的元件图像的样本集的数量一般有限,所以为了更好地 训练卷积神经网络,利用现有的ImageNet首先对卷积神经网络进行预训练,得到卷积神经网络的初始参数值,再基于该初始参数,利用采集到的元件图像的样本集再对卷积神经网络进行进一步的训练,从而得到最终的分类器。
可选地,在本发明的一些可能的实施方式中,在利用ImageNet首先对卷积神经网络进行预训练时,卷积神经网络最后一层的节点数为1000个,当再基于预训练后的神经网络利用元件图像的样本集进行训练时,将卷积神经网络最后一层的节点数改为元件的类别数,如元件共有N类,则将该层改为N个节点。
可选地,在本发明的一些可能的实施方式中,也可以不利用ImageNet对卷积神经网络进行预训练,可采集较多的元件图像的样本训练卷积神经网络。
可选地,在本发明的一些可能的实施方式中,所述元件图像的样本集包括:
所述元件图像的训练样本集和所述元件图像的测试样本集。
其中,元件图像的训练样本集是用于在训练阶段训练卷积神经网络的,元件图像的测试样本集是用于在训练阶段测试经过训练本来集训练后的卷积神经网络的分类效果的样本集。
可选地,在本发明的一些可能的实施方式中,卷积神经网络的训练样本集和卷积神经网络的测试样本集在的采集方法一样,可以从所采集到的元件图像的样本集取一部分做元件图像的测试样本集。
可选地,在本发明的一些可能的实施方式中,在卷积神经网络的训练阶段,若利用卷积神经网络的测试样本集测试得到的分类效果不佳时,可对卷积神经网络进一步训练。
可以理解,在对卷积神经网络进行训练时,分别利用元件图像的训练样本集对卷积神经网络进行训练,以及利用元件图像的测试样本集对元件图像进行测试将使得对卷积神经网络的训练效果更佳。
可选地,在本发明的一些可能的实施方式中,所述样本创建模块450包括:
第一采集单元451,用于采集印刷电路板图像;
截取单元452,用于以印刷电路板模板图像为参考,在所述印刷电路板图像上截取元件图像并对所述元件图像进行标记以记录所述元件图像的类别;
第二采集单元453,用于从所述经过标记后的元件图像中采集所述元件图像的样本集。
可以理解,由于是需要对每个元件进行分类,所以在采集元件图像的样本 时,需要截取PCB电路板图像上面每个元件的图像的样本集合进行分类训练。并且,为了对每个元件样本进行区分,所以在训练之前需要对各个元件图像进行标注以区分不同的元件。
可选地,在本发明的一些可能的实施方式中,可以在生产线上架设摄像头,批量采集不同型号的PCB板卡图像,并以板卡跟踪技术避免重复拍摄某一PCB板卡。这样每个型号的PCB板卡均包含多个图像样本,每个图像样本对应某一型号的某张PCB板卡,从而这样在获取到的PCB板卡上的元件图像也来自不同板卡上,保证样本具备多样性。
可选地,在本发明的一些可能的实施方式中,所述样本创建模块440在所述印刷板电路上截取元件图像,包括:
利用印刷板图像上面的元件的位置信息自动截取元件图像。
可以理解,当知道元件的位置信息后,则可以根据该位置信息自动截取元件图像。
可选地,在本发明的一些可能的实施方式中,所述获取元件图像的位置信息可以通过板式文件中所记录的元件的位置信息,或者通过人工标注的位置信息来获取。
可选地,在本发明的一些可能的实施方式中,所述样本创建模块450对元件图像进行标注包括:
根据元件类别信息进行标注。
可以理解,需要对元件的类别进行区分,从而在训练的时候记录元件的类别才能准确地对元件进行漏件检测。
可选地,在本发明的另一些可能的实施方式中,也可以通过其它能对元件进行区分的方式对元件进行标注。
可选地,在本发明的另一些可能的实施方式中,所述预处理模块470,
还用于利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发所述样本创建模块450执行所述采集所述元件图像的样本集的步骤。
可以理解,与利用卷积神经网络进行测试的过程类似,在采集训练卷积神经网络的样本图像时对图像进行对齐以使元件位于图像的中心位置,并对图像 进行归一化,该过程称为对图像的预处理过程,对元件图像进行预处理会使训练效果更好。
可选地,在本发明的一些可能的实施方式中,在利用卷积神经网络对元件图像进行分类时,如果在训练阶段对元件图像样本进行预处理,那么在利用训练后的卷积神经网络对元件图像测试时,也需要对元件图像进行预处理;如果在训练阶段不对元件图像进行预处理,那么在利用训练后的卷积神经网络对元件图像进行测试时,也不对元件图像进行预处理。
可以理解的是,本实施例的元件分类装置400的各功能模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
可以看出,本实施例的方案中,元件分类装置400将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;元件分类装置400再利用所述高级特征计算所述元件图像属于各个类别的概率;并取所述概率中最大的概率对应的类别为所述元件图像的类别。由于卷积神经网络能学习到元件图像的高级特征,所述本发明实施例利用卷积神经网络对元件图像进行分类时,将使元件图像的采集不受场景约束,分类效果好,准确性高。
更进一步地,由于卷积神经网络局部权值共享,所以在利用卷积神经网络对元件图像分类的过程中,可以降低计算复杂度,分类效率高。
参见图5,图5是本发明第五实施例提供的一种元件分类装置的结构示意图。如图5所示,本发明第五实施例提供的一种元件分类装置500可以包括:至少一个总线501、与总线相连的至少一个处理器502以及与总线相连的至少一个存储器503。
其中,处理器502通过总线501,调用存储器503中存储的代码以用于将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;利用所述高级特征计算所述元件图像属于各个类别的概率;取所述概率中最大的概率对应的类别为所述元件图像的类别。
其中,卷积神经网络是一种深度学习网络,卷积神经网络以其局部权值共享的特殊结构在图像处理方面有着独特的优越性,其布局更接近于实际的生物神经网络,权值共享降低了网络的复杂性,特别是多维输入向量的图像可以直接输入网络这一特点避免了特征提取和分类过程中数据重建的复杂度。利用卷 积神经网络对元件图像进行分类时,首先需要利用大量的样本训练元件分类器,然后再利用训练好的卷积神经网络对元件图像进行分类。由于卷积神经网络可学习到元件图像各个层次的特征,包括低层特征和高级特征,而图像的高级特征不受拍摄场景影响,也即即使在复杂的场景下拍摄的元件图像也能利用高级特征对元件图像进行识别,从而利用该特征可以准确地对元件图像进行分类识别。所以,卷积神经网络计算元件图像的类别时可以分为两个过程,即计算元件图像的高级特征,以及利用该高级特征计算元件图像的类别。
其中,元件图像是指从PCB板上截取的在某个元件位置的图像,一般来说,该图像为包含该元件的元件图像,但如果在元件漏件的情况下,该元件图像也有可能不包含元件,或在元件插件有误的情况下,该元件图像也有可能包含其它的元件的元件图像;
元件图像的类别是指根据PCB板上元件种类的不同对元件图像进行的分类标注,如PCB板上共有100种元件,则元件图像的类别共至少有100种,可用1-100之间的数字对其进行分类标识,也可用其它符号对不同元件图像类别进行分类标识。
其中,测试图像属于各个类别的概率值表示了该测试图像属于各个类别的可能类,很显然,概率越大,表示属于该类别的可能性越大,从而取各个类别的概率中最大的概率对应的类别为测试图像的类别,将使得分类结果最为准确。
可选地,在本发明的一些可能的实施方式中,所述处理器502将测试图像输入经过训练后的卷积神经网络之前,所述处理器502还用于:
利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发执行所述将元件图像输入经过训练后的卷积神经网络的步骤。
可选地,在本发明的一些可能的实施方式中,所述处理器502将元件图像输入经过训练后的卷积神经网络之前,所述处理器502还用于:
创建所述元件图像的样本集;
利用图像识别数据库预训练所述卷积神经网络,得到所述卷积神经网络初始参数,所述图像识别数据库包含从自然界采集到的各种类别的自然图像;
基于所述卷积神经网络初始参数,利用所述样本集进一步训练所述卷积神 经网络以对所述卷积神经网络初始参数进行微调,并触发执行所述将元件图像输入经过训练后的卷积神经网络的步骤。
其中,元件图像的样本集是指从PCB板上采集到的用于训练卷积神经网络的元件图像,图像识别数据库(ImageNet)是现有的从自然界采集到的包含各种类别的图像基础数据库,虽然ImageNet并非电子元件数据集,但其包含超过22000个类别的1500万张带标注的自然图像,用于预训练卷积神经网络可学习出各层次的通用图像特征,得到较好的卷积神经网络初始参数值。
可选地,在本发明的一些可能的实施方式中,所述元件图像的样本集包括:
所述元件图像的训练样本集和所述元件图像的测试样本集。
其中,元件图像的训练样本集是用于在训练阶段训练卷积神经网络的,元件图像的测试样本集是用于在训练阶段测试经过训练本来集训练后的卷积神经网络的分类效果的样本集。
可选地,在本发明的一些可能的实施方式中,所述处理器502创建所述元件图像的样本集包括:
采集印刷电路板图像;
以印刷电路板模板图像为参考,在所述印刷电路板图像上截取元件图像并对所述元件图像进行标记以记录所述元件图像的类别;
从所述经过标记后的元件图像中采集所述元件图像的样本集。
可选地,在本发明的一些可能的实施方式中,所述处理器502在所述印刷板电路上截取元件图像,包括:
利用印刷板图像上面的元件的位置信息自动截取元件图像。
可以理解,当知道元件的位置信息后,则可以根据该位置信息自动截取元件图像。
可选地,在本发明的一些可能的实施方式中,所述获取元件图像的位置信息可以通过板式文件中所记录的元件的位置信息,或者通过人工标注的位置信息来获取。
可选地,在本发明的一些可能的实施方式中,所述处理器502对元件图像进行标注包括:
根据元件类别信息进行标注。
可选地,在本发明的一些可能的实施方式中,所述处理器502采集所述元 件图像的样本集之前,所述处理器502还用于:
利用模板匹配得到所述元件图像中元件的位置并对所述元件图像进行对齐;
对所述元件图像进行归一化,以触发执行所述采集所述元件图像的样本集的步骤。
可以理解的是,本实施例的元件分类装置500的各功能模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
可以看出,本实施例的方案中,元件分类装置500将待分类的元件图像输入经过训练后的卷积神经网络,并计算所述元件图像的高级特征;元件分类装置500再利用所述高级特征计算所述元件图像属于各个类别的概率;并取所述概率中最大的概率对应的类别为所述元件图像的类别。由于卷积神经网络能学习到元件图像的高级特征,所述本发明实施例利用卷积神经网络对元件图像进行分类时,将使元件图像的采集不受场景约束,分类效果好,准确性高。
更进一步地,由于卷积神经网络局部权值共享,所以在利用卷积神经网络对元件图像分类的过程中,可以降低计算复杂度,分类效率高。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任何元件分类方法的部分或全部步骤。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略, 或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明的各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为可嵌入设备个人计算机、服务器或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims (10)

  1. A component classification method, characterized in that the method comprises:
    inputting a component image to be classified into a trained convolutional neural network, and calculating advanced features of the component image;
    using the advanced features to calculate the probability that the component image belongs to each category;
    taking the category corresponding to the largest of the probabilities as the category of the component image.
  2. The method according to claim 1, characterized in that before the inputting of the component image to be classified into the trained convolutional neural network, the method further comprises:
    obtaining the position of the component in the component image by template matching and aligning the component image;
    normalizing the component image, to trigger the step of inputting the component image into the trained convolutional neural network.
  3. The method according to claim 1 or 2, characterized in that before the inputting of the component image into the trained convolutional neural network, the method further comprises:
    creating a sample set of the component images;
    pre-training the convolutional neural network using an image recognition database to obtain initial parameters of the convolutional neural network, the image recognition database containing natural images of various categories collected from nature;
    based on the initial parameters of the convolutional neural network, further training the convolutional neural network using the sample set to fine-tune the initial parameters of the convolutional neural network, and triggering the step of inputting the component image into the trained convolutional neural network.
  4. The method according to claim 3, characterized in that the creating of the sample set of the component images comprises:
    collecting a printed circuit board image;
    cropping component images from the printed circuit board image, with a printed circuit board template image as a reference, and marking the component images to record the categories of the component images;
    collecting the sample set of the component images from the marked component images.
  5. The method according to claim 4, characterized in that the sample set of the component images comprises:
    a training sample set of the component images and a test sample set of the component images.
  6. A component classification apparatus, characterized in that the apparatus comprises:
    a first calculation module, configured to input a component image to be classified into a trained convolutional neural network and calculate advanced features of the component image;
    a second calculation module, configured to use the advanced features to calculate the probability that the component image belongs to each category;
    a classification module, configured to take the category corresponding to the largest of the probabilities as the category of the component image.
  7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    a preprocessing module, configured to obtain the position of the component in the component image by template matching and align the component image;
    and to normalize the component image, so as to trigger the first calculation module to perform the step of inputting the component image to be classified into the trained convolutional neural network.
  8. The apparatus according to claim 6 or 7, characterized in that the apparatus further comprises:
    a sample creation module, configured to create a sample set of the component images;
    a first training module, configured to pre-train the convolutional neural network using an image recognition database to obtain initial parameters of the convolutional neural network, the image recognition database containing natural images of various categories collected from nature;
    a second training module, configured to further train the convolutional neural network using the sample set, based on the initial parameters of the convolutional neural network, so as to fine-tune the initial parameters of the convolutional neural network and trigger the first calculation module to perform the step of inputting the component image to be classified into the trained convolutional neural network.
  9. The apparatus according to claim 8, characterized in that the sample creation module comprises:
    a first collection unit, configured to collect a printed circuit board image;
    a cropping unit, configured to crop component images from the printed circuit board image, with a printed circuit board template image as a reference, and mark the component images to record the categories of the component images;
    a second collection unit, configured to collect the sample set of the component images from the marked component images.
  10. The apparatus according to claim 9, characterized in that the sample set of the component images comprises:
    a training sample set of the component images and a test sample set of the component images.
PCT/CN2016/096747 2015-11-23 2016-08-25 一种元件分类方法及装置 WO2017088537A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510819514.4 2015-11-23
CN201510819514.4A CN105426917A (zh) 2015-11-23 2015-11-23 一种元件分类方法及装置

Publications (1)

Publication Number Publication Date
WO2017088537A1 true WO2017088537A1 (zh) 2017-06-01

Family

ID=55505115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096747 WO2017088537A1 (zh) 2015-11-23 2016-08-25 一种元件分类方法及装置

Country Status (2)

Country Link
CN (1) CN105426917A (zh)
WO (1) WO2017088537A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657374A (zh) * 2018-12-25 2019-04-19 曙光信息产业(北京)有限公司 印刷电路板的建模系统以及建模方法

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426917A (zh) * 2015-11-23 2016-03-23 广州视源电子科技股份有限公司 一种元件分类方法及装置
CN107871100B (zh) * 2016-09-23 2021-07-06 北京眼神科技有限公司 人脸模型的训练方法和装置、人脸认证方法和装置
CN106529564B (zh) * 2016-09-26 2019-05-31 浙江工业大学 一种基于卷积神经网络的食物图像自动分类方法
CN107256384A (zh) * 2017-05-22 2017-10-17 汕头大学 一种基于图像与信号处理的卡片识别与计数方法
CN107886131A (zh) * 2017-11-24 2018-04-06 佛山科学技术学院 一种基于卷积神经网络检测电路板元器件极性方法和装置
CN109359517A (zh) * 2018-08-31 2019-02-19 深圳市商汤科技有限公司 图像识别方法和装置、电子设备、存储介质、程序产品
CN109446885B (zh) * 2018-09-07 2022-03-15 广州算易软件科技有限公司 一种基于文本的元器件识别方法、系统、装置和存储介质
CN108984992B (zh) * 2018-09-25 2022-03-04 郑州云海信息技术有限公司 一种电路板设计方法和装置
CN111191655B (zh) * 2018-11-14 2024-04-16 佳能株式会社 对象识别方法和装置
CN109800470A (zh) * 2018-12-25 2019-05-24 山东爱普电气设备有限公司 一种固定式低压成套开关设备标准母线计算方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463241A (zh) * 2014-10-31 2015-03-25 北京理工大学 一种智能交通监控系统中的车辆类型识别方法
CN104850890A (zh) * 2015-04-14 2015-08-19 西安电子科技大学 基于实例学习和Sadowsky分布的卷积神经网络参数调整方法
CN104992142A (zh) * 2015-06-03 2015-10-21 江苏大学 一种基于深度学习和属性学习相结合的行人识别方法
CN105426917A (zh) * 2015-11-23 2016-03-23 广州视源电子科技股份有限公司 一种元件分类方法及装置
CN105469400A (zh) * 2015-11-23 2016-04-06 广州视源电子科技股份有限公司 电子元件极性方向的快速识别、标注的方法和系统
CN105513046A (zh) * 2015-11-23 2016-04-20 广州视源电子科技股份有限公司 电子元件极性的识别方法和系统、标注方法和系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8582807B2 (en) * 2010-03-15 2013-11-12 Nec Laboratories America, Inc. Systems and methods for determining personal characteristics
CN104809426B (zh) * 2014-01-27 2019-04-05 日本电气株式会社 卷积神经网络的训练方法、目标识别方法及装置
CN103886318B (zh) * 2014-03-31 2017-03-01 武汉天仁影像科技有限公司 尘肺病大体成像中病灶区域的提取与分析方法
CN103927534B (zh) * 2014-04-26 2017-12-26 无锡信捷电气股份有限公司 一种基于卷积神经网络的喷码字符在线视觉检测方法
CN104036474B (zh) * 2014-06-12 2017-12-19 厦门美图之家科技有限公司 一种图像亮度和对比度的自动调节方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463241A (zh) * 2014-10-31 2015-03-25 北京理工大学 一种智能交通监控系统中的车辆类型识别方法
CN104850890A (zh) * 2015-04-14 2015-08-19 西安电子科技大学 基于实例学习和Sadowsky分布的卷积神经网络参数调整方法
CN104992142A (zh) * 2015-06-03 2015-10-21 江苏大学 一种基于深度学习和属性学习相结合的行人识别方法
CN105426917A (zh) * 2015-11-23 2016-03-23 广州视源电子科技股份有限公司 一种元件分类方法及装置
CN105469400A (zh) * 2015-11-23 2016-04-06 广州视源电子科技股份有限公司 电子元件极性方向的快速识别、标注的方法和系统
CN105513046A (zh) * 2015-11-23 2016-04-20 广州视源电子科技股份有限公司 电子元件极性的识别方法和系统、标注方法和系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657374A (zh) * 2018-12-25 2019-04-19 曙光信息产业(北京)有限公司 印刷电路板的建模系统以及建模方法

Also Published As

Publication number Publication date
CN105426917A (zh) 2016-03-23

Similar Documents

Publication Publication Date Title
WO2017088537A1 (zh) 一种元件分类方法及装置
Nayef et al. Icdar2017 robust reading challenge on multi-lingual scene text detection and script identification-rrc-mlt
Nech et al. Level playing field for million scale face recognition
Goodfellow et al. Multi-digit number recognition from street view imagery using deep convolutional neural networks
Cozzolino et al. Image forgery detection through residual-based local descriptors and block-matching
WO2017032311A1 (zh) 一种检测方法及装置
WO2017088553A1 (zh) 电子元件极性方向的快速识别、标注的方法和系统
Manivannan et al. HEp-2 cell classification using multi-resolution local patterns and ensemble SVMs
WO2022247005A1 (zh) 图像中目标物识别方法、装置、电子设备及存储介质
US10423817B2 (en) Latent fingerprint ridge flow map improvement
CN111046886A (zh) 号码牌自动识别方法、装置、设备及计算机可读存储介质
Chandran et al. Missing child identification system using deep learning and multiclass SVM
CN113239807B (zh) 训练票据识别模型和票据识别的方法和装置
CN111046879A (zh) 证件图像分类方法、装置、计算机设备及可读存储介质
CN109740417A (zh) 发票类型识别方法、装置、存储介质和计算机设备
CN110532886A (zh) 一种基于孪生神经网络的目标检测算法
Yang et al. ICDAR2017 robust reading challenge on text extraction from biomedical literature figures (DeTEXT)
CN113221918A (zh) 目标检测方法、目标检测模型的训练方法及装置
CN113723157A (zh) 一种农作物病害识别方法、装置、电子设备及存储介质
CN114639152A (zh) 基于人脸识别的多模态语音交互方法、装置、设备及介质
CN106709490B (zh) 一种字符识别方法和装置
Xu et al. Robust seed localization and growing with deep convolutional features for scene text detection
CN111652242B (zh) 图像处理方法、装置、电子设备及存储介质
CN110689066B (zh) 一种人脸识别数据均衡与增强相结合的训练方法
CN111680553A (zh) 一种基于深度可分离卷积的病理图像识别方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867764

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867764

Country of ref document: EP

Kind code of ref document: A1