WO2023140829A1 - Applied machine learning system for quality control and pattern recommendation in smart manufacturing - Google Patents

Applied machine learning system for quality control and pattern recommendation in smart manufacturing

Info

Publication number
WO2023140829A1
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
product
multiple components
machine learning
data
Prior art date
Application number
PCT/US2022/012797
Other languages
French (fr)
Inventor
Lili Zheng
Original Assignee
Hitachi America, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi America, Ltd. filed Critical Hitachi America, Ltd.
Priority to PCT/US2022/012797 priority Critical patent/WO2023140829A1/en
Publication of WO2023140829A1 publication Critical patent/WO2023140829A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B19/41875 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by quality surveillance of production
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/09 - Supervised learning
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/30 - Nc systems
    • G05B2219/32 - Operator till task planning
    • G05B2219/32193 - Ann, neural base quality management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Definitions

  • the present disclosure is generally directed to manufacturing systems, and more specifically, to an applied machine learning system for quality control and pattern recommendation in smart manufacturing.
  • Machine learning (ML), a branch of artificial intelligence that employs a variety of statistical, probabilistic, and optimization techniques, allows computers to learn from experience and detect hard-to-discern patterns from large, noisy, or complex data sets.
  • the aim of ML is to develop general-purpose algorithms which can automatically detect patterns in complex data through a training process, and then use these discovered patterns to make predictions for future unknown data. Therefore, ML is a powerful tool that allows researchers to make generalizations from limited data rather than exhaustively examining all the possibilities. ML has demonstrated its powerful abilities in various fields such as face recognition, character recognition, spam detection, speech recognition, and medical prediction, among others.
  • Example implementations described herein introduce a machine learning system used in a production line to predict the final line quality, with inputs from the inspection results of sub-stations.
  • the use of the inspection result does not necessarily mean that the inspection judgement will be followed.
  • the parameters measured through inspection are considered, such as x, y locations, height, color, and so on.
  • the inspection parameters include, but are not limited to, location, height, width, color, and so on.
  • Example implementations utilize convolutional neural networks and deep learning algorithms, which are commonly applied to analyzing visual images.
  • Deep learning (DL), a subfield of artificial intelligence (AI) studies, has revolutionized many engineering fields.
  • DL algorithms, representing a stack of multi-layer perceptrons, mimic the neuron operations of the human brain.
  • DL algorithms have offered great success in competing in human games such as Go and poker.
  • DL has been exploited in several applications including, but certainly not limited to, visual and natural language recognition, diagnosis systems for medical radiology, protein study, and drug discovery.
  • Recently, DL has been utilized in new fields, including materials design, material properties prediction, and so on.
  • DL is rarely used because of the lack of existing designs.
  • AI-based learning algorithms have already been used in image analysis to detect defects in production. However, there is little experience in applying AI machine learning to real-time production line quality control.
  • example implementations involve a supervised learning system taking sub-station inspection results as input features to predict the quality in the final assembly station inspection.
  • aspects of the present disclosure can involve a method, which can involve receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data including a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
  • aspects of the present disclosure can involve a computer program, which can involve instructions involving receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data including a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
  • the computer program and/or the instructions can be stored on a non-transitory computer readable medium and executed by one or more processors.
  • aspects of the present disclosure can involve a system, which can involve means for receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data including a plurality of parameters associated with inspection results of the product; means for training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and means for applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
  • aspects of the present disclosure can involve an apparatus, which can involve a processor, configured to receive inspection data for a product with multiple components generated by one or more inspection stations, the inspection data involving a plurality of parameters associated with inspection results of the product; train a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and apply the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
  • FIGS. 1(A) to 1(C) are schematic illustrations for the proposed solution, in accordance with an example implementation.
  • FIG. 2 is an illustration of production line and station distribution, in accordance with an example implementation.
  • FIG. 3(A) illustrates the inspection spots in an example product, in accordance with an example implementation.
  • FIG. 3(B) illustrates an example inspection data table, in accordance with an example implementation.
  • FIG. 4 illustrates the machine learning process, in accordance with an example implementation.
  • FIG. 5 is a schematic illustration showing the convolutional neural network (CNN) model, in accordance with an example implementation.
  • FIG. 6(A) is an example showing difference between minimum features of failed and passed samples, in accordance with an example implementation.
  • FIG. 6(B) is an example showing difference between maximum features of failed and passed samples, in accordance with an example implementation.
  • FIG. 7 illustrates an example flow on which example implementations can be implemented.
  • FIG. 8 illustrates a system involving a plurality of inspection systems networked to a management apparatus, in accordance with an example implementation
  • FIG. 9 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • Example implementations described herein are directed to improvements to the production line, in particular, for complex production lines in complex systems that may have several sub lines.
  • the example implementations work to improve the quality of the final sampling production products as well as to reduce the inspection costs.
  • FIGS. 1(A) to 1(C) are schematic illustrations for the proposed solution, in accordance with an example implementation.
  • a machine learning system architecture design method is proposed as a solution to resolve the current quality control issue in the manufacturing industry.
  • real-time monitoring has been applied broadly in the production line.
  • the machine learning/artificial intelligence related development is still restricted to certain areas, such as image analysis.
  • inspection data is taken from the sub production line to train the supervised deep learning model to predict the quality in the final production line inspection.
  • the desired pattern can be provided for each sub operational process.
  • FIG. 1(A) illustrates a production line procedure, in accordance with an example implementation.
  • It shows the production line system structure, involving several sub lines and also the final inspection.
  • Automated optical inspection (AOI) is used to determine whether a product is okay or if it has defects.
  • FIG. 1(B) illustrates an example grid for product inspection points, in accordance with an example implementation.
  • the whole product is represented in a matrix of values, with each value representing a sub component of the product that is further associated with its own values (e.g., location, inspection data, etc.).
  • Example implementations described herein involve a machine learning model that takes advantage of the convolutional neural network (CNN) and its ability to train based on information that is represented as pixels or a matrix of values.
  • a product may have parameters associated with inspection data for numerous components in a final product as produced from various sublines.
  • the product itself can be composed of many sub components, and the whole product can be treated as a matrix of values with each value or "pixel" corresponding to a sub component for CNN training.
  • sublines for the assembly can be modeled, and an assessment can be made as to whether the final product will be ok or will have defects, and the type of defects that would occur.
  • the model generated from CNN training can correlate the sub line inspection results to the final assembly line results. Accordingly, inspection data, as opposed to image data, is used to determine the potential defects of the components.
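As a concrete illustration of treating the product as a matrix of values, the sketch below builds a hypothetical 3x3 grid (the grid size, the parameter, and all numbers are invented for illustration) and reshapes it into the (batch, height, width, channels) layout that CNN frameworks typically expect:

```python
import numpy as np

# Hypothetical 3x3 product grid: each "pixel" holds one measured
# parameter (e.g., a height value) for one sub component.
grid = np.array([
    [0.12, 0.11, 0.13],
    [0.10, 0.12, 0.12],
    [0.11, 0.13, 0.10],
])

# CNN frameworks expect (batch, height, width, channels); stacking
# several parameters (height, area, offset, ...) would add channels.
sample = grid[np.newaxis, :, :, np.newaxis]
print(sample.shape)  # (1, 3, 3, 1)
```

In this layout, inspection parameters from the sublines play the role that pixel intensities play in image-based CNN training.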
  • FIG. 1(C) illustrates a machine learning architecture, in accordance with an example implementation. In the machine learning architecture, during the training process, the inspection data 100 is used to form the training data 101, so that the training data generation is done by using real historical inspection data 100.
  • the training data 101 is formed from changing the inspection data 100 through preprocessing in accordance with the desired implementation, as well as to select the data that is more relevant for use in the inspection results.
  • Enhanced training data 102 can be formed in accordance with the desired implementation to enhance the training data 101 , such as by simulation or by other techniques as known in the art.
  • the enhanced training data 102 is used to conduct transfer learning 103 through the use of a convolutional neural network (CNN) 110.
  • CNN 110 is used and trained based on the inspection data to identify faults and components after it is trained to conduct the transfer learning 103. Although CNN 110 is used in this example due to its robustness in training based on "pixels" or matrices of values, other machine learning techniques may be utilized and the present disclosure is not limited thereto.
  • the CNN 110 can be trained using the enhanced training data 102, and the resultant model can be tested with historical inspection data against a ground truth until the CNN 110 training is done.
  • the resultant model is trimmed to form a segment model 104 that is trained to determine whether a component is ok or it has a defect, along with the defect type 105.
  • real inspection data from substations can be fed into the model to determine defects in accordance with the desired implementation.
  • while the example provided is directed to a single model for the final product, it is possible to also have separate models to model specific sub lines and the output of such sub lines in accordance with the desired implementation.
  • FIG. 2 is an illustration of production line and station distribution, in accordance with an example implementation.
  • the sample production line is used as an example.
  • In FIG. 2, there is a sub-assembly line and a main-assembly line, joined at Station 4.
  • Three stations in the entire line are marked as critical.
  • only the critical stations are considered to affect quality.
  • Station 7 is the final assembly station, and the quality inspection result at this station determines a good/defect product. Deciding which stations are critical should be based on the theoretical and technical background.
  • the technical details need to be examined to identify the critical stations. For example, if the solder joint line is examined, it is not difficult to identify that the critical stations are "solder paste" and "reflow".
  • defects in the final assembly are from the critical stations.
  • the inspection results should include two parts: the inspection parameters and the quality decision.
  • data is only taken if the output passes the quality inspection standard, because the output cannot proceed to Station 7 if it fails to pass the standard.
  • inspection usually records as many parameters as possible, for which some research may be needed to determine which parameters are useful. For example, in Station 3 and Station 6, n1 and n2 key parameters were discovered accordingly.
  • the information to be used in the model can include, but is not limited to, quality inspection results (e.g., pass or defect), type of defect if any, and so on in accordance with the desired implementation.
  • FIG. 3(A) illustrates the inspection spots in an example product, in accordance with an example implementation.
  • FIG. 3(A) shows an example of the inspection pattern in operation. The entire square represents a product that needs to be inspected, and the numbers on the square are the inspection spots.
  • the example illustrated is an ideal case; inspection spots are not normally distributed homogeneously in a real production line in operation. However, the ideal case makes it easier to understand that each parameter measured on each inspection spot can be treated as a feature of the learning system being built.
  • FIG. 3(B) illustrates an example inspection data table, in accordance with an example implementation.
  • Each of the components in the inspection pattern can correspond to a component which can also be associated with a vector involving inspection result, features, line ID and so on in accordance with the desired implementation.
  • the inspection data table can represent the data associated with each inspection pattern in operation.
  • the inspection system involves a solder paste inspection process.
  • the electronic controller unit (ECU) board is passed into the solder paste inspection machine.
  • 3D solder paste inspection equipment featuring high quality measurement accuracy and inspection reliability is applied.
  • the measured features may include volume, height, area, offsetX, offsetY, barcode, sizeX, sizeY, and so on in accordance with the desired implementation.
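For illustration, a single inspection record carrying these features might be organized as follows. This is a hypothetical sketch: the field names mirror the list above, and all values are invented.

```python
# Hypothetical solder paste inspection record; the field names follow
# the parameters listed above, and all values are invented examples.
record = {
    "barcode": "ECU-000123",  # identifies the board across stations
    "volume": 0.92,           # normalized paste volume
    "height": 0.115,          # measured paste height
    "area": 0.88,             # normalized pad coverage area
    "offsetX": 0.01,          # deviation from pad center, X direction
    "offsetY": -0.02,         # deviation from pad center, Y direction
    "sizeX": 0.30,
    "sizeY": 0.30,
    "result": "GOOD",         # quality decision for this inspection spot
}

# Numeric feature vector for the learning system (the barcode is kept
# for traceability and the quality decision becomes the label).
features = [v for k, v in record.items() if k not in ("barcode", "result")]
```

The barcode ties sub-station records to the final inspection record, which is what makes the supervised labels possible.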
  • if the inspection result is "GOOD", the board is sent forward to the following process until the final inspection station. From all the features captured by the solder paste inspection, the key features that are critical to the final quality of the board are selected to train the machine learning model.
  • FIG. 4 illustrates the machine learning process, in accordance with an example implementation. From such implementations, the learning process can thereby use the inspection data and connect each individual station. All useful data required to construct the deep learning model for quality prediction in the final assembly station can be collected. The selected features are the measured n1 and n2 parameters from Station 3 and Station 6. Labels are the quality inspection results in Station 7. Here, as an example, assume that there are n3 failure types in Station 7. In real production, datasets are usually very large. For example, in a typical line, there can be more than 10000 pieces of inspection records in one station per day. Considering that there are multiple features that need to be selected per record, the amount of data is quite huge.
  • FIG. 4 shows the process of the machine learning from data acquisition, flattening, input to machine learning, and to the learning process to get prediction result.
  • the raw data requires preprocessing to obtain cleaned, scaled, and normalized data.
  • Most of the features in the dataset vary in range and are not efficient for machine learning model implementations, because most machine learning algorithms use Euclidean distance as the metric to measure the distance between two data points.
  • the data needs to be scaled. There are many ways to do this. For example, StandardScaler under sklearn can be used to calculate the standard deviation on the feature, from which the scaling function can be applied.
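A minimal sketch of that scaling step, written in plain NumPy to show what sklearn's StandardScaler computes (subtract each feature's mean, divide by its standard deviation). The feature matrix here is invented; in practice it would come from the inspection records.

```python
import numpy as np

# Toy feature matrix: rows are inspection records, columns are features
# (e.g., height and volume); the numbers are made up for illustration.
X = np.array([
    [0.10, 1.2],
    [0.12, 0.9],
    [0.11, 1.1],
    [0.13, 0.8],
])

# Standard scaling, as sklearn's StandardScaler does: each feature
# column ends up with zero mean and unit standard deviation.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # approximately [0, 0]
print(X_scaled.std(axis=0))   # approximately [1, 1]
```

After scaling, no single feature dominates the Euclidean distances that the learning algorithm relies on.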
  • FIG. 5 is a schematic illustration showing the convolutional neural network (CNN) model, in accordance with an example implementation.
  • Convolution neural networks are a popular deep learning method for image recognition and other image related tasks as shown in FIG. 5.
  • the model type used is Sequential. Sequential is the easiest way to build a model in Keras, which allows the construction of a model layer by layer at 401. In this case, two middle layers are added with the ReLU (Rectified Linear Activation) activation function. This activation function has been proven to work well in neural networks. For the output layer, the softmax activation function can be used. Softmax makes the output sum up to 1 so the output can be interpreted as probabilities. Once the layers are assembled, the machine learning model 402 is thereby formed. The model will then make its prediction based on which option has the highest probability as results 403.
  • Adam can be used as the optimizer.
  • Adam is generally a good optimizer to use for many cases.
  • the Adam optimizer adjusts the learning rate throughout training. The learning rate determines how fast the optimal weights for the model are calculated. A smaller learning rate may lead to more accurate weights (up to a certain point), but the time it takes to compute the weights will be longer. 'categorical_crossentropy' can be used for the loss function, which is a common choice for classification. A lower score indicates that the model is performing better.
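The output-layer behavior described above can be illustrated numerically: a softmax over hypothetical logits sums to 1, the predicted class is the highest-probability one, and categorical crossentropy scores the prediction against a one-hot label. All numbers here are invented for illustration.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical output-layer logits for four quality classes.
logits = np.array([2.0, 0.5, 0.1, -1.0])
probs = softmax(logits)
print(probs.sum())            # 1.0 -- outputs behave as probabilities
pred = int(np.argmax(probs))  # prediction = highest-probability class

# Categorical crossentropy against a one-hot label; lower is better.
label = np.array([1.0, 0.0, 0.0, 0.0])
loss = -np.sum(label * np.log(probs))
```

The loss shrinks toward zero as the probability assigned to the true class approaches 1, which is why a lower score indicates a better-performing model.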
  • the model was trained twice. The first time, 5300 datasets were used in total, with 80% used for training and 20% used for validation, which produced an accuracy of 94%. The second time, 14474 datasets were selected, with 80% used for training and 20% used for validation, which resulted in an accuracy around 98%.
  • FIG. 6(A) is an example showing difference between minimum features of failed and passed samples.
  • FIG. 6(B) is an example showing difference between maximum features of failed and passed samples.
  • the model can also provide desired inspection parameters in Stations 3 and 6. For example, FIGS. 6(A) and 6(B) show the difference between failed and passed samples. From the learning process, the parameter ranges that yield a "pass" prediction can be further determined, so that a new inspection standard can be set up according to the machine learning system, which will significantly reduce defect chances in the final Station 7.
  • FIG. 7 illustrates an example flow on which example implementations can be implemented to effect the flow diagrams of FIG. 1(C) and FIG. 4.
  • historical inspection data is loaded.
  • the flow from 702 to 707 is conducted.
  • the traceability between intermediate station(s) and the final inspection station is explored (e.g., by barcode, by quick response code, etc.).
  • features are extracted from intermediate station(s).
  • the intermediate inspection results are projected to the final inspection results.
  • the input inspection data is flattened to 1D.
  • the data is normalized.
  • the final inspection result column is reshaped by one-hot encoding.
  • To facilitate the transfer learning 103 through CNN 110 and the flow of FIG. 4, the flow of 708-712 is conducted.
  • the linear stack of layers with the sequential model are built as described in FIG. 4.
  • the hidden layers are added as described in FIG. 4, and the output layer is then set at 710.
  • the model is compiled and undergoes training to produce the training model that is saved at 712 as the segment model 104.
  • the real-time intermediate inspection data is passed along for preprocessing.
  • the model is loaded and executed to conduct the prediction on the preprocessed real-time intermediate inspection data.
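The preprocessing steps 704 through 706 of the flow above can be sketched as follows. This is a minimal NumPy illustration; the array shapes, record count, and failure-type count are invented, not taken from a real line.

```python
import numpy as np

# Hypothetical intermediate-station inspection data: 5 records, each a
# 3x3 grid of measured values (random placeholders, seeded for repeatability).
X = np.random.default_rng(0).random((5, 3, 3))

# 704: flatten each record's grid to a 1D feature vector.
X_flat = X.reshape(len(X), -1)                       # shape (5, 9)

# 705: normalize each feature column to [0, 1] (min-max, as one option).
span = X_flat.max(axis=0) - X_flat.min(axis=0)
X_norm = (X_flat - X_flat.min(axis=0)) / np.where(span == 0, 1, span)

# 706: one-hot encode the final inspection result column
# (hypothetically, 0 = pass and 1..2 = failure types, so 3 classes).
y = np.array([0, 1, 0, 2, 1])
y_onehot = np.eye(3)[y]                              # shape (5, 3)
```

Each row of `y_onehot` has exactly one 1, matching the shape that a softmax output layer with 'categorical_crossentropy' expects.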
  • FIG. 8 illustrates a system involving a plurality of inspection systems networked to a management apparatus, in accordance with an example implementation.
  • One or more inspection systems 801 are communicatively coupled to a network 800 (e.g., local area network (LAN), wide area network (WAN)) through the corresponding on-board computer or Internet of Things (IoT) device of the inspection systems 801, which is connected to a management apparatus 802.
  • the management apparatus 802 manages a database 803, which contains historical data collected from the inspection systems 801 and also facilitates remote control to each of the inspection systems 801.
  • the data from the inspection systems can be stored to a central repository or central database such as proprietary databases that intake data, or systems such as enterprise resource planning systems, and the management apparatus 802 can access or retrieve the data from the central repository or central database.
  • Inspection system 801 can involve any physical system in accordance with the desired implementation, such as but not limited to solder paste inspection machines, lithography inspection systems, and so on in accordance with the desired implementation.
  • FIG. 9 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as a management apparatus 802 as illustrated in FIG. 8, or as an on-board computer of an inspection system 801.
  • Computer device 905 in computing environment 900 can include one or more processing units, cores, or processors 910, memory 915 (e.g., RAM, ROM, and/or the like), internal storage 920 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 925, any of which can be coupled on a communication mechanism or bus 930 for communicating information or embedded in the computer device 905.
  • I/O interface 925 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation.
  • Computer device 905 can be communicatively coupled to input/user interface 935 and output device/interface 940. Either one or both of input/user interface 935 and output device/interface 940 can be a wired or wireless interface and can be detachable.
  • Input/user interface 935 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like).
  • Output device/interface 940 may include a display, television, monitor, printer, speaker, braille, or the like.
  • input/user interface 935 and output device/interface 940 can be embedded with or physically coupled to the computer device 905. In other example implementations, other computer devices may function as or provide the functions of input/user interface 935 and output device/interface 940 for a computer device 905.
  • Examples of computer device 905 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
  • Computer device 905 can be communicatively coupled (e.g., via I/O interface 925) to external storage 945 and network 950 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration.
  • Computer device 905 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • I/O interface 925 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 900.
  • Network 950 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computer device 905 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computer device 905 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments.
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • the executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • Processor(s) 910 can execute under any operating system (OS) (not shown), in a native or virtual environment.
  • One or more applications can be deployed that include logic unit 960, application programming interface (API) unit 965, input unit 970, output unit 975, and inter-unit communication mechanism 995 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • the described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided.
  • Processor(s) 910 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
  • when information or an execution instruction is received by API unit 965, it may be communicated to one or more other units (e.g., logic unit 960, input unit 970, output unit 975).
  • logic unit 960 may be configured to control the information flow among the units and direct the services provided by API unit 965, input unit 970, output unit 975, in some example implementations described above.
  • the flow of one or more processes or implementations may be controlled by logic unit 960 alone or in conjunction with API unit 965.
  • the input unit 970 may be configured to obtain input for the calculations described in the example implementations
  • the output unit 975 may be configured to provide output based on the calculations described in example i mplementations.
  • Processor(s) 910 can be configured to execute a method or instructions, which can involve receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data involving a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product as illustrated in FIG. 1(C) and FIGS. 3(A) to 4.
  • Processor(s) 910 can be configured to execute a method or instructions for training the machine learning models as that in the first aspect, which includes training a convolutional neural network (CNN) to inspect a product based on an inspection pattern involving a plurality of inspection spots, each of the plurality of inspection spots associated with a component from the multiple components and the parameters associated with inspection results of the component from the multiple components from the inspection data as illustrated in FIGS. 1(A) to 1(C) and FIGS. 3(A) to 4. In this manner, the example implementations can thereby use the CNN to treat the product as a matrix of values from inspection spots.
  • CNN convolutional neural network
  • Processor(s) 910 can be configured to execute a method or instructions as that in any of the above aspects, wherein the inspection data for each of the plurality of multiple components is generated from a corresponding inspection station, the plurality of parameters generated based on the corresponding inspection station as illustrated in FIGS. 1(C), 2, and 8.
  • Processor(s) 910 can be configured to execute a method or instructions as that in any of the above aspects, and further involve determining traceability from the one or more inspection stations associated with the multiple components and a final inspection station from the one or more inspection stations for the product; extracting features from the sub-stations for use in training the machine learning model; and projecting intermediate inspection results to a final inspection result of the final inspection station; wherein the training the machine learning model comprises incorporating the features and the projected intermediate inspection results in a convolutional neural network (CNN) as illustrated in FIG. 7, 702-704.
  • Processor(s) 910 can be configured to execute a method or instructions as that in any of the above aspects, wherein the training the machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product involves constructing, with a convolutional neural network (CNN), a sequential model comprising a plurality of layers, the plurality of layers being a linear set of layers to generate the sequential model; setting hidden layers in the sequential model; and forming a segment model from the sequential model trained to determine a product to be either acceptable or to identify a defect in a component along with a type of defect based on the inspection data and the historical inspection data as illustrated in FIG. 7, 708-713.
  • Example implementations may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs.
  • Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium.
  • A computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information.
  • A computer-readable signal medium may include mediums such as carrier waves.
  • The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
  • the operations described above can be performed by hardware, software, or some combination of software and hardware.
  • Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application.
  • some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software.
  • the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways.
  • the methods may be executed by a processor, such as a general-purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manufacturing & Machinery (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

Example implementations are directed to systems and methods that can involve receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data comprising a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.

Description

APPLIED MACHINE LEARNING SYSTEM FOR QUALITY CONTROL AND PATTERN RECOMMENDATION IN SMART MANUFACTURING
BACKGROUND
Field
[0001] The present disclosure is generally directed to manufacturing systems, and more specifically, to an applied machine learning system for quality control and pattern recommendation in smart manufacturing.
Related Art
[0002] Machine learning (ML), a branch of artificial intelligence that employs a variety of statistical, probabilistic, and optimization techniques, allows computers to learn from experience and detect hard-to-discern patterns from large, noisy, or complex data sets. The aim of ML is to develop general-purpose algorithms which can automatically detect patterns in complex data through a training process, and then use these discovered patterns to make predictions for future unknown data. Therefore, ML is a powerful tool that allows researchers to make generalizations from limited data rather than exhaustively examining all the possibilities. ML has demonstrated its powerful abilities in various fields such as face recognition, character recognition, spam detection, speech recognition, and medical prediction, among others.
[0003] In manufacturing, the factors that control the final quality are extremely complex since the overall line may include several relational/nonrelational sub lines. In addition to quality control for each individual sub line, the effect from one sub-line to another should also be considered when evaluating the effects on final quality. However, this process can be too complicated, and it can be difficult to generalize an algorithm to achieve the goal. For example, in the solder joining process, there are multiple sub-stations with one final assembly station. Issues in one station will not only affect the quality of other stations, but also may cause machine alarms or machine downtimes in other stations. In essence, the product passed to other stations should have already passed the inspection requirement. If inspection can resolve all prior issues, then there will be no alarms, no machine downtime, and no defects.
SUMMARY
[0004] Example implementations described herein introduce a machine learning system used in a production line to predict the final line quality with inputs from the inspection results from sub-stations. In example implementations described herein, the use of the inspection result does not necessarily mean that the inspection judgement will be followed. Instead, the parameters measured through inspection are considered, such as x, y locations, height, color, and so on. Usually in manufacturing, the inspection parameters include, but are not limited to, location, height, width, color, and so on.
[0005] Example implementations utilize the convolutional neural network and deep learning algorithms, which are commonly applied to analyzing visual images. Deep learning (DL), a subfield of artificial intelligence (AI) studies, has revolutionized many engineering fields. DL algorithms, representing a stack of multi-layer perceptrons, mimic the neuron operations of the human brain. DL algorithms have offered great success in competing in human games such as Go and poker. DL has been exploited in several applications including, but certainly not limited to, visual and natural language recognition, diagnostic systems for medical radiology, protein study, and drug discovery. Recently, DL has been utilized in new fields, including materials design, material properties prediction, and so on. However, with respect to the complexity of real manufacturing processes, DL is rarely used because of the lack of existing designs.
[0006] AI-based learning algorithms have already been used in image analysis to detect defects in production. However, there is little experience in applying AI machine learning to real-time production line quality control. In the present disclosure, example implementations involve a supervised learning system taking sub-station inspection results as input features to predict the quality in the final assembly station inspection.
[0007] In an experiment with the system in accordance with example implementations described herein, real production data was used including two critical sub stations and one final assembly station. Data is only obtained from one of the sub-stations and the final station. Thus, the inspection information from one sub-station was used as features, and the final station inspection information was used as labels. Six months of data were used to train the system, and eventually it was shown that the system as described herein can predict the quality in the final station with accuracy that can exceed 94%. In addition to prediction, the system can explore ideal patterns from the sub-station so as to improve quality in the final station.
[0008] Aspects of the present disclosure can involve a method, which can involve receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data including a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
[0009] Aspects of the present disclosure can involve a computer program, which can involve instructions involving receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data including a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product. The computer program and/or the instructions can be stored on a non-transitory computer readable medium and executed by one or more processors.
[0010] Aspects of the present disclosure can involve a system, which can involve means for receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data including a plurality of parameters associated with inspection results of the product; means for training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and means for applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
[0011] Aspects of the present disclosure can involve an apparatus, which can involve a processor, configured to receive inspection data for a product with multiple components generated by one or more inspection stations, the inspection data involving a plurality of parameters associated with inspection results of the product; train a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and apply the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
BRIEF DESCRIPTION OF DRAWINGS
[0012] FIGS. 1(A) to 1(C) are schematic illustrations for the proposed solution, in accordance with an example implementation.
[0013] FIG. 2 is an illustration of a production line and station distribution, in accordance with an example implementation.
[0014] FIG. 3(A) illustrates the inspection spots in an example product, in accordance with an example implementation.
[0015] FIG. 3(B) illustrates an example inspection data table, in accordance with an example implementation.
[0016] FIG. 4 illustrates the machine learning process, in accordance with an example implementation.
[0017] FIG. 5 is a schematic illustration showing the convolutional neural network (CNN) model, in accordance with an example implementation.
[0018] FIG. 6(A) is an example showing the difference between minimum features of failed and passed samples, in accordance with an example implementation.
[0019] FIG. 6(B) is an example showing the difference between maximum features of failed and passed samples, in accordance with an example implementation.
[0020] FIG. 7 illustrates an example flow on which example implementations can be implemented.
[0021] FIG. 8 illustrates a system involving a plurality of inspection systems networked to a management apparatus, in accordance with an example implementation.
[0022] FIG. 9 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
DETAILED DESCRIPTION
[0023] The following detailed description provides details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term "automatic" may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. Selection can be conducted by a user through a user interface or other input means, or can be implemented through a desired algorithm. Example implementations as described herein can be utilized either singularly or in combination and the functionality of the example implementations can be implemented through any means according to the desired implementations.
[0024] Example implementations described herein are directed to improvements to the production line, in particular, for complex production lines in complex systems that may have several sub lines. The example implementations work to improve the quality of the final sampling production products as well as to reduce the inspection costs.
[0025] Example implementations described herein are described with respect to a production line and station distribution example. FIGS. 1(A) to 1(C) are schematic illustrations for the proposed solution, in accordance with an example implementation. A machine learning system architecture design method is proposed as a solution to resolve the current quality control issue in the manufacturing industry. As part of Industry 4.0 or smart manufacturing, real-time monitoring has been applied broadly in the production line. However, machine learning/artificial intelligence related development is still restricted to certain areas, such as image analysis. In this proposed solution, inspection data is taken from the sub production line to train the supervised deep learning model to predict the quality in the final production line inspection. In addition, by tracking backward, the desired pattern can be provided for each sub operational process.
[0026] FIG. 1(A) illustrates a production line procedure, in accordance with an example implementation. In the example of FIG. 1(A), the production line system structure involves several sub lines and also the final inspection. In example implementations described herein, there can be several different sublines in a production line, and a final assembly line involving automated optical inspection (AOI). Once the AOI is completed, then final assembly is conducted. In example implementations, AOI is used to determine whether a product is okay or if it has defects.
[0027] In the production line, there can be several sublines or substations, which can also involve critical sublines depending on the desired implementation. The inspection results of the critical sublines are the ones that affect the final assembly quality in an impactful manner (i.e., above a threshold). In the illustration of FIG. 2, products can be composed of several components which are analyzed in the sub production lines for quality, which is provided in inspection data along with the location of each of such components. In example implementations, the idea is to correlate the quality inspection results from critical sublines to the final assembly.
[0028] FIG. 1(B) illustrates an example grid for product inspection points, in accordance with an example implementation. In the example of FIG. I (B), the whole product is represented in a matrix of values, with each value representing a sub component of the product that is further associated with its own values (e.g., location, inspection data, etc.). Example implementations described herein involve a machine learning model that takes advantage of the convolutional neural network (CNN) and its ability to train based on information that is represented as pixels or a matrix of values. In example implementations described herein, a product may have parameters associated with inspection data for numerous components in a final product as produced from various sublines. Accordingly, the product itself can be composed of many su b components, and the hole product can be treated as a matrix of values with each value or “pixel” corresponding to a sub component for CNN training. Through use of such example implementations, sublines for the assembly can be modeled, and an assessment can be made as to whether the final product will be ok or will have detects, and the type of defects that would occur. The model generated from CNN training can correlate the sub line inspection results to the final assembly line results. Accordingly, inspection data as opposed to image data is used to determine the poten tial defects of the components.
[0029] FIG. 1(C) illustrates a machine learning architecture, in accordance with an example implementation. In the machine learning architecture, during the training process, the inspection data 100 is used to form the training data 101, so that the training data generation is done by using real historical inspection data 100. The training data 101 is formed from changing the inspection data 100 through preprocessing in accordance with the desired implementation, as well as by selecting the data that is more relevant for use in the inspection results. Enhanced training data 102 can be formed in accordance with the desired implementation to enhance the training data 101, such as by simulation or by other techniques as known in the art. The enhanced training data 102 is used to conduct transfer learning 103 through the use of convolutional neural networks (CNN) 110. CNN 110 is trained based on the inspection data to identify faults in components, and is then used to conduct the transfer learning 103. Although CNN 110 is used in this example due to its robustness in training based on "pixels" or matrices of values, other machine learning techniques may be utilized and the present disclosure is not limited thereto. The CNN 110 can be trained using the enhanced training data 102 and the resultant model can be tested with historical inspection data against a ground truth until the CNN 110 training is done.
[0030] Once the transfer learning 103 is conducted, the resultant model is trimmed to form a segment model 104 that is trained to determine whether a component is okay or has a defect, along with the defect type 105. Once trained, real inspection data from substations can be fed into the model to determine defects in accordance with the desired implementation. Although the example provided is directed to a single model for the final product, it is possible to also have separate models to model specific sub lines and the output of such sub lines in accordance with the desired implementation.
[0031] FIG. 2 is an illustration of a production line and station distribution, in accordance with an example implementation.
[0032] In this disclosure, the sample production line is used as an example. As shown in FIG. 2, there is a sub-assembly line and a main-assembly line, joined at Station 4. Three stations in the entire line are marked as critical. Here, a critical station is only considered with respect to its effect on quality. Station 7 is the final assembly station, and the quality inspection result at this station determines a good/defect product. Deciding which stations are critical should be based on the theoretical and technical background. Usually when the line is observed, the technical details need to be examined to identify the critical stations. For example, if the solder joint line is examined, it is not difficult to identify that the critical stations are "solder paste" and "reflow". Typically, defects in the final assembly are from the critical stations.
[0033] In this example, suppose Station 6 and Station 3 are identified as critical stations. The second procedure is to collect inspection data from these two stations and Station 7. Usually for intermediate stations such as Station 3 and Station 6, the inspection results should include two parts: the inspection parameters and the quality decision. Here, data is only taken if the output passes the quality inspection standard, because the output cannot proceed to Station 7 if it fails to pass the standard. With the successful ones, the inspection usually records as many parameters as possible, for which some research may be needed to determine which such parameters are useful. For example, in Station 3 and Station 6, n1 and n2 key parameters were discovered accordingly. In Station 7, the information to be used in the model can include, but is not limited to, quality inspection results (e.g., pass or defect), type of defect if any, and so on in accordance with the desired implementation.
[0034] FIG. 3(A) illustrates the inspection spots in an example product, in accordance with an example implementation. In building the machine learning model, after understanding how to connect the individual stations in the line in the learning process, the next consideration is how to utilize the inspection data. Even though image analysis is not being performed here, the inspection data is similar to image data. FIG. 3(A) shows an example of the inspection pattern in operation. The entire square represents a product that needs to be inspected, and the numbers on the square are the inspection spots. The example illustrated is an ideal case; inspection spots are not normally distributed homogeneously in a real production line in operation. However, the ideal case makes it easier to understand that each parameter measured on each inspection spot can be treated as a feature of the learning system being built.
[0035] FIG. 3(B) illustrates an example inspection data table, in accordance with an example implementation. Each of the components in the inspection pattern can correspond to a component which can also be associated with a vector involving inspection result, features, line ID, and so on in accordance with the desired implementation. The inspection data table can represent the data associated with each inspection pattern in operation.
[0036] In an example implementation, suppose the inspection system involves a solder paste inspection process. After solder paste is applied in the previous process, the electronic controller unit (ECU) board is passed into the solder paste inspection machine. For each inspection, 3D solder paste inspection equipment featuring high quality measurement accuracy and inspection reliability is applied. For each board, the measured features may include volume, height, area, offsetX, offsetY, barcode, sizeX, sizeY, and so on in accordance with the desired implementation. As long as the inspection result is "GOOD", the board is sent forward to the following process until the final inspection station. From all the features that are captured by the solder paste inspection, all the key features that are critical to the final quality of the board are selected to train the machine learning model.
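A minimal sketch of this feature-selection step follows; the field names mirror the measured features listed above, but the numeric values and the particular subset chosen as "key features" are invented assumptions, not taken from the disclosure:

```python
# Sketch: select key training features from one solder paste inspection record.
# Feature names follow the text; values and the KEY_FEATURES subset are hypothetical.
record = {
    "barcode": "ECU-0001",
    "volume": 102.5, "height": 0.115, "area": 0.92,
    "offsetX": 0.01, "offsetY": -0.02, "sizeX": 1.1, "sizeY": 1.2,
    "result": "GOOD",
}

KEY_FEATURES = ["volume", "height", "area", "offsetX", "offsetY"]  # assumed subset

def select_features(rec, keys=KEY_FEATURES):
    """Keep only boards that passed inspection; return their key feature values."""
    if rec["result"] != "GOOD":
        return None  # failed boards never reach the final station
    return [rec[k] for k in keys]

print(select_features(record))  # [102.5, 0.115, 0.92, 0.01, -0.02]
```

In practice the useful subset of parameters would be chosen per line, as the text notes, based on which features actually drive final quality.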
[0037] FIG. 4 illustrates the machine learning process, in accordance with an example implementation. From such implementations, the learning process can thereby use the inspection data and connect each individual station. All useful data required to construct the deep learning model for quality prediction in the final assembly station can be collected. The selected features are the measured n1 and n2 parameters from Stations 3 and 6. Labels are the quality inspection results in Station 7. Here, as an example, assume that there are n3 failure types in Station 7. In real production, datasets are usually very large. For example, in a typical line, there can be more than 10000 pieces of inspection records in one station per day. Considering that there are multiple features that need to be selected per record, the amount of data is quite huge. FIG. 4 shows the process of the machine learning from data acquisition, flattening, input to machine learning, and the learning process to get the prediction result.
[0038] For the first step at 400, to get a high-quality data set, the raw data requires preprocessing to obtain cleaned, scaled, and normalized data. Most of the features in the dataset vary in range and are not efficient for machine learning model implementations, because most machine learning algorithms use Euclidean distance as the metric to measure the distance between two data points. To make sure the features are in the same range, the data needs to be scaled. There are many ways to do this. For example, StandardScaler under sklearn can be used to calculate the standard deviation on the feature, from which the scaling function can be applied.
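The scaling step can be sketched in plain Python; this mirrors the per-feature transform that scikit-learn's StandardScaler performs (zero mean, unit variance), but is an illustration only, with invented measurement values:

```python
import math

def standardize(column):
    """Scale one feature column to zero mean and unit variance,
    as sklearn's StandardScaler does per feature (population std)."""
    n = len(column)
    mean = sum(column) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / n)
    return [(x - mean) / std for x in column]

heights = [0.10, 0.12, 0.14]   # hypothetical raw height measurements
scaled = standardize(heights)
print(round(sum(scaled), 6))   # 0.0 -> the scaled feature is centered
```

After this transform, features measured in very different ranges (e.g., volume vs. offset) contribute comparably to distance-based computations.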
[0039] As a simple case, first label all of the dataset with a binary array [1,0] denoted as the good portion and bad portion. FIG. 5 is a schematic illustration showing the convolutional neural network (CNN) model, in accordance with an example implementation. Convolutional neural networks are a popular deep learning method for image recognition and other image related tasks, as shown in FIG. 5. The model type used is Sequential. Sequential is the easiest way to build a model in Keras, which allows the construction of a model layer by layer at 401. In this case, two middle layers are added with the ReLU (Rectified Linear Activation) activation function. This activation function has been proven to work well in neural networks. For the output layer, the softmax activation function can be used. Softmax makes the output sum up to 1 so the output can be interpreted as probabilities. Once the layers are assembled, the machine learning model 402 is thereby formed. The model will then make its prediction based on which option has the highest probability as results 403.
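The two activation functions named above can be sketched directly; this is a minimal pure-Python illustration with invented input values (in the actual implementation, Keras applies these inside the Sequential model's layers):

```python
import math

def relu(v):
    """Rectified Linear Activation: pass positives, zero out negatives."""
    return [max(0.0, x) for x in v]

def softmax(v):
    """Map raw output scores to probabilities that sum to 1."""
    m = max(v)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

hidden = relu([0.7, -0.3, 1.2])      # hypothetical hidden-layer pre-activations
probs = softmax([2.0, -1.0, 0.5])    # hypothetical output-layer scores
print(hidden)                        # [0.7, 0.0, 1.2]
print(round(sum(probs), 6))          # 1.0 -> outputs behave as probabilities
```

The predicted class is simply the index of the largest probability, matching the "highest probability" rule at 403.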
[0040] For compiling, 'adam' can be used as the optimizer. Adam is generally a good optimizer to use for many cases. The adam optimizer adjusts the learning rate throughout training. The learning rate determines how fast the optimal weights for the model are calculated. A smaller learning rate may lead to more accurate weights (up to a certain point), but the time it takes to compute the weights will be longer. 'categorical_crossentropy' can be used for the loss function, which is a common choice for classification. A lower score indicates that the model is performing better.
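Categorical cross-entropy can likewise be sketched in a few lines; this pure-Python version illustrates the loss that Keras computes, with invented labels and predicted probabilities:

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy between a one-hot label and predicted probabilities.
    Lower is better: a confident correct prediction scores near 0."""
    eps = 1e-12  # guard against log(0)
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

label = [1.0, 0.0]  # one-hot label: hypothetical "pass" class
confident = categorical_crossentropy(label, [0.98, 0.02])
uncertain = categorical_crossentropy(label, [0.60, 0.40])
print(confident < uncertain)  # True -> the better prediction gets the lower loss
```

This makes concrete why "a lower score indicates that the model is performing better": the loss shrinks toward zero as probability mass concentrates on the correct class.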
[0041] In an example compilation, the model was trained twice. The first time, 5300 datasets were used in total, with 80% used for training and 20% used for validation, which produced an accuracy of 94%. The second time, 14474 datasets were selected, with 80% used for training and 20% used for validation, which resulted in an accuracy around 98%.
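The 80/20 split used in both training runs can be sketched as follows; this is a generic illustration (the actual pipeline presumably used library utilities, and real datasets would be shuffled before splitting):

```python
def train_validation_split(dataset, train_fraction=0.8):
    """Split a dataset into training and validation portions."""
    cut = int(len(dataset) * train_fraction)
    return dataset[:cut], dataset[cut:]

data = list(range(5300))   # placeholder standing in for 5300 inspection records
train, val = train_validation_split(data)
print(len(train), len(val))  # 4240 1060
```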
[0042] To illustrate an example desired pattern for inspection parameters in Stations 3 and 6, FIG. 6(A) is an example showing the difference between minimum features of failed and passed samples, and FIG. 6(B) is an example showing the difference between maximum features of failed and passed samples. In addition to quality prediction in the final Station 7, the model can also provide desired inspection parameters in Stations 3 and 6. For example, FIGS. 6(A) and 6(B) show the difference between failed and passed samples. From the learning process, the parameter ranges for which the prediction result is "pass" can be further determined, so that a new inspection standard can be set up according to the machine learning system, which will significantly reduce defect chances in the final Station 7.
[0043] FIG. 7 illustrates an example flow on which example implementations can be implemented to effect the flow diagrams of FIG. 1(C) and FIG. 4. At 701, historical inspection data is loaded. To preprocess the historical inspection data to form enhanced training data 102, the flow from 702 to 707 is conducted. At 702, the traceability between intermediate station(s) and the final inspection station is explored (e.g., by barcode, by quick release code, etc.). At 703, features are extracted from intermediate station(s). At 704, the intermediate inspection results are projected to the final inspection results. At 705, the input inspection data is flattened to 1D. At 706, the data is normalized. At 707, the final inspection result column is reshaped by one-hot encoding. [0044] To facilitate the transfer learning 103 through CNN 110 and the flow of FIG. 4, the flow of 708-712 is conducted. At 708, the linear stack of layers with the sequential model is built as described in FIG. 4. At 709, the hidden layer is added as described in FIG. 4, and then set at 710. At 711, the model is compiled and undergoes training to produce the training model that is saved at 712 as the segment model 104.
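The preprocessing steps 705-707 above can be sketched in plain Python. This is an illustrative sketch only: the sample matrix and the class names are hypothetical, and the actual implementation would operate on the real inspection data.

```python
def flatten(matrix):
    # Step 705: flatten the 2-D inspection data into a 1-D feature vector
    return [value for row in matrix for value in row]

def min_max_normalize(values):
    # Step 706: scale every feature into the [0, 1] range
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot(label, classes=("pass", "fail")):
    # Step 707: reshape the final inspection result column by one-hot encoding
    return [1 if label == c else 0 for c in classes]

# Hypothetical 2x2 block of inspection measurements for one product
features = min_max_normalize(flatten([[2.0, 4.0], [6.0, 8.0]]))
target = one_hot("fail")
```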
[0045] To facilitate the prediction flow of FIG. 1(C), at 713, the real-time intermediate inspection data is passed along for preprocessing. At 714, the model is loaded and executed to conduct the prediction on the preprocessed real-time intermediate inspection data.
[0046] FIG. 8 illustrates a system involving a plurality of inspection systems networked to a management apparatus, in accordance with an example implementation. One or more inspection systems 801 are communicatively coupled to a network 800 (e.g., local area network (LAN), wide area network (WAN)) through the corresponding on-board computer or Internet of Things (IoT) device of the inspection systems 801, which is connected to a management apparatus 802. The management apparatus 802 manages a database 803, which contains historical data collected from the inspection systems 801 and also facilitates remote control to each of the inspection systems 801. In alternate example implementations, the data from the inspection systems can be stored to a central repository or central database such as proprietary databases that intake data, or systems such as enterprise resource planning systems, and the management apparatus 802 can access or retrieve the data from the central repository or central database. Inspection system 801 can involve any physical system in accordance with the desired implementation, such as but not limited to solder paste inspection machines, lithography inspection systems, and so on.
[0047] FIG. 9 illustrates an example computing environment with an example computer device suitable for use in some example implementations, such as a management apparatus 802 as illustrated in FIG. 8, or as an on-board computer of an inspection system 801. Computer device 905 in computing environment 900 can include one or more processing units, cores, or processors 910, memory 915 (e.g., RAM, ROM, and/or the like), internal storage 920 (e.g., magnetic, optical, solid state storage, and/or organic), and/or I/O interface 925, any of which can be coupled on a communication mechanism or bus 930 for communicating information or embedded in the computer device 905. I/O interface 925 is also configured to receive images from cameras or provide images to projectors or displays, depending on the desired implementation. [0048] Computer device 905 can be communicatively coupled to input/user interface 935 and output device/interface 940. Either one or both of input/user interface 935 and output device/interface 940 can be a wired or wireless interface and can be detachable. Input/user interface 935 may include any device, component, sensor, or interface, physical or virtual, that can be used to provide input (e.g., buttons, touch-screen interface, keyboard, a pointing/cursor control, microphone, camera, braille, motion sensor, optical reader, and/or the like). Output device/interface 940 may include a display, television, monitor, printer, speaker, braille, or the like. In some example implementations, input/user interface 935 and output device/interface 940 can be embedded with or physically coupled to the computer device 905. In other example implementations, other computer devices may function as or provide the functions of input/user interface 935 and output device/interface 940 for a computer device 905.
[0049] Examples of computer device 905 may include, but are not limited to, highly mobile devices (e.g., smartphones, devices in vehicles and other machines, devices carried by humans and animals, and the like), mobile devices (e.g., tablets, notebooks, laptops, personal computers, portable televisions, radios, and the like), and devices not designed for mobility (e.g., desktop computers, other computers, information kiosks, televisions with one or more processors embedded therein and/or coupled thereto, radios, and the like).
[0050] Computer device 905 can be communicatively coupled (e.g., via I/O interface 925) to external storage 945 and network 950 for communicating with any number of networked components, devices, and systems, including one or more computer devices of the same or different configuration. Computer device 905 or any connected computer device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
[0051] I/O interface 925 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 900. Network 950 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like). [0052] Computer device 905 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
[0053] Computer device 905 can be used to implement techniques, methods, applications, processes, or computer-executable instructions in some example computing environments. Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can originate from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
[0054] Processor(s) 910 can execute under any operating system (OS) (not shown), in a native or virtual environment. One or more applications can be deployed that include logic unit 960, application programming interface (API) unit 965, input unit 970, output unit 975, and inter-unit communication mechanism 995 for the different units to communicate with each other, with the OS, and with other applications (not shown). The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided. Processor(s) 910 can be in the form of hardware processors such as central processing units (CPUs) or in a combination of hardware and software units.
[0055] In some example implementations, when information or an execution instruction is received by API unit 965, it may be communicated to one or more other units (e.g., logic unit 960, input unit 970, output unit 975). In some instances, logic unit 960 may be configured to control the information flow among the units and direct the services provided by API unit 965, input unit 970, and output unit 975, in some example implementations described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 960 alone or in conjunction with API unit 965. The input unit 970 may be configured to obtain input for the calculations described in the example implementations, and the output unit 975 may be configured to provide output based on the calculations described in example implementations. [0056] In a first aspect, processor(s) 910 can be configured to execute a method or instructions, which can involve receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data involving a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product as illustrated in FIG. 1(C) and FIGS. 3(A) to 4.
[0057] In a second aspect, processor(s) 910 can be configured to execute a method or instructions for training the machine learning models as that in the first aspect, which includes training a convolutional neural network (CNN) to inspect a product based on an inspection pattern involving a plurality of inspection spots, each of the plurality of inspection spots associated with a component from the multiple components and the parameters associated with inspection results of the component from the multiple components from the inspection data as illustrated in FIGS. 1(A) to 1(C) and FIGS. 3(A) to 4. In this manner, the example implementations can thereby use CNN to treat the product as a matrix of values from inspection spots.
[0058] In a third aspect, processor(s) 910 can be configured to execute a method or instructions as that in any of the above aspects, wherein the inspection data for each of the plurality of multiple components is generated from a corresponding inspection station, the plurality of parameters generated based on the corresponding inspection station as illustrated in FIGS. 1(C), 2, and 8.
[0059] In a fourth aspect, processor(s) 910 can be configured to execute a method or instructions as that in any of the above aspects, and further involve determining traceability from the one or more inspection stations associated with the multiple components and a final inspection station from the one or more inspection stations for the product; extracting features from the sub stations for use in training the machine learning model; and projecting intermediate inspection results to a final inspection result of the final inspection station; wherein the training the machine learning model comprises incorporating the features and the projected intermediate inspection results in a convolutional neural network (CNN) as illustrated in FIG. 7, 702-704. [0060] In a fifth aspect, processor(s) 910 can be configured to execute a method or instructions as that in any of the above aspects, wherein the training the machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product involves constructing, with a convolutional neural network (CNN), a sequential model comprising a plurality of layers, the plurality of layers being a linear set of layers to generate the sequential model; setting hidden layers in the sequential model; and forming a segment model from the sequential model trained to determine a product to be either acceptable or to identify a defect in a component along with a type of defect based on the inspection data and the historical inspection data as illustrated in FIG. 7, 708-713.
[0061] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.
[0062] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," "displaying," or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other information storage, transmission or display devices.
[0063] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs. Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.
[0064] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the techniques of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.
[0065] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application. Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general-purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.
[0066] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the techniques of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.

Claims

What is claimed is:
1. A method, comprising: receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data comprising a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
2. The method of claim 1, wherein training the machine learning models comprises training a convolutional neural network (CNN) to inspect a product based on an inspection pattern comprising a plurality of inspection spots, each of the plurality of inspection spots associated with a component from the multiple components and the parameters associated with inspection results of the component from the multiple components from the inspection data.
3. The method of claim 1, wherein the inspection data for each of the plurality of multiple components is generated from a corresponding inspection station, the plurality of parameters generated based on the corresponding inspection station.
4. The method of claim 1, further comprising: determining traceability from the one or more inspection stations associated with the multiple components and a final inspection station from the one or more inspection stations for the product; extracting features from the sub stations for use in training the machine learning model; and projecting intermediate inspection results to a final inspection result of the final inspection station; wherein the training the machine learning model comprises incorporating the features and the projected intermediate inspection results in a convolutional neural network (CNN).
5. The method of claim 1, wherein the training the machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product comprises: constructing, with a convolutional neural network (CNN), a sequential model comprising a plurality of layers, the plurality of layers being a linear set of layers to generate the sequential model; setting hidden layers in the sequential model; and forming a segment model from the sequential model trained to determine a product to be either acceptable or to identify a defect in a component along with a type of defect based on the inspection data and the historical inspection data.
6. A computer program, storing instructions to execute a process, the instructions comprising: receiving inspection data for a product with multiple components generated by one or more inspection stations, the inspection data comprising a plurality of parameters associated with inspection results of the product; training a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and applying the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
7. The computer program of claim 6, wherein training the machine learning models comprises training a convolutional neural network (CNN) to inspect a product based on an inspection pattern comprising a plurality of inspection spots, each of the plurality of inspection spots associated with a component from the multiple components and the parameters associated with inspection results of the component from the multiple components from the inspection data.
8. The computer program of claim 6, wherein the inspection data for each of the plurality of multiple components is generated from a corresponding inspection station, the plurality of parameters generated based on the corresponding inspection station.
9. The computer program of claim 6, further comprising: determining traceability from the one or more inspection stations associated with the multiple components and a final inspection station from the one or more inspection stations for the product; extracting features from the sub stations for use in training the machine learning model; and projecting intermediate inspection results to a final inspection result of the final inspection station; wherein the training the machine learning model comprises incorporating the features and the projected intermediate inspection results in a convolutional neural network (CNN).
10. The computer program of claim 6, wherein the training the machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product comprises: constructing, with a convolutional neural network (CNN), a sequential model comprising a plurality of layers, the plurality of layers being a linear set of layers to generate the sequential model; setting hidden layers in the sequential model; and forming a segment model from the sequential model trained to determine a product to be either acceptable or to identify a defect in a component along with a type of defect based on the inspection data and the historical inspection data.
11. An apparatus, comprising: a processor, configured to: receive inspection data for a product with multiple components generated by one or more inspection stations, the inspection data comprising a plurality of parameters associated with inspection results of the product; train a machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product; and apply the trained machine learning model to the one or more inspection stations to identify defects in the one or more of the multiple components of the product.
12. The apparatus of claim 11, wherein the processor is configured to train the machine learning models by training a convolutional neural network (CNN) to inspect a product based on an inspection pattern comprising a plurality of inspection spots, each of the plurality of inspection spots associated with a component from the multiple components and the parameters associated with inspection results of the component from the multiple components from the inspection data.
13. The apparatus of claim 11, wherein the inspection data for each of the plurality of multiple components is generated from a corresponding inspection station, the plurality of parameters generated based on the corresponding inspection station.
14. The apparatus of claim 11, the processor further configured to: determine traceability from the one or more inspection stations associated with the multiple components and a final inspection station from the one or more inspection stations for the product; extract features from the sub stations for use in training the machine learning model; and project intermediate inspection results to a final inspection result of the final inspection station; wherein the processor is configured to train the machine learning model by incorporating the features and the projected intermediate inspection results in a convolutional neural network (CNN).
15. The apparatus of claim 11, wherein the processor is configured to train the machine learning model with the inspection data and historical inspection data to identify defects occurring in one or more of the multiple components of the product by: constructing, with a convolutional neural network (CNN), a sequential model comprising a plurality of layers, the plurality of layers being a linear set of layers to generate the sequential model; setting hidden layers in the sequential model; and forming a segment model from the sequential model trained to determine a product to be either acceptable or to identify a defect in a component along with a type of defect based on the inspection data and the historical inspection data.
PCT/US2022/012797 2022-01-18 2022-01-18 Applied machine learning system for quality control and pattern recommendation in smart manufacturing WO2023140829A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2022/012797 WO2023140829A1 (en) 2022-01-18 2022-01-18 Applied machine learning system for quality control and pattern recommendation in smart manufacturing


Publications (1)

Publication Number Publication Date
WO2023140829A1 true WO2023140829A1 (en) 2023-07-27

Family

ID=87348754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/012797 WO2023140829A1 (en) 2022-01-18 2022-01-18 Applied machine learning system for quality control and pattern recommendation in smart manufacturing

Country Status (1)

Country Link
WO (1) WO2023140829A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180357756A1 (en) * 2017-06-13 2018-12-13 The Procter & Gamble Company Systems and Methods for Inspecting Absorbent Articles on A Converting Line
US20190096135A1 (en) * 2017-09-26 2019-03-28 Aquifi, Inc. Systems and methods for visual inspection based on augmented reality
US20200160497A1 (en) * 2018-11-16 2020-05-21 Align Technology, Inc. Machine based three-dimensional (3d) object defect detection
US20200327651A1 (en) * 2019-04-12 2020-10-15 The Boeing Company Automated inspection using artificial intelligence
US20200334802A1 (en) * 2017-04-13 2020-10-22 Instrumental, Inc. Method for predicting defects in assembly units



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22922421

Country of ref document: EP

Kind code of ref document: A1