WO2018204410A1 - Metrology system for machine learning-based manufacturing error predictions - Google Patents

Metrology system for machine learning-based manufacturing error predictions

Info

Publication number
WO2018204410A1
Authority
WO
WIPO (PCT)
Prior art keywords
tolerance
data
parts
inspection
measurement
Prior art date
Application number
PCT/US2018/030523
Other languages
French (fr)
Inventor
Jacob Daniel HOCKETT
Original Assignee
Minds Mechanical, Llc
Priority date
Filing date
Publication date
Application filed by Minds Mechanical, Llc filed Critical Minds Mechanical, Llc
Publication of WO2018204410A1 publication Critical patent/WO2018204410A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B19/41875Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by quality surveillance of production
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B5/00Measuring arrangements characterised by the use of mechanical techniques
    • G01B5/004Measuring arrangements characterised by the use of mechanical techniques for measuring coordinates of points
    • G01B5/008Measuring arrangements characterised by the use of mechanical techniques for measuring coordinates of points using coordinate measuring machines
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/80Management or planning

Definitions

  • the present disclosure relates to metrology systems, and, more particularly, to machine learning systems and methods for analyzing metrology inspection information to predict manufacturing errors including out-of-tolerance parts.
  • Dimensional metrology is used to measure the conformity of a physical part to its intended design.
  • One aspect relates to a system comprising a first data link to a manufacturing system configured to create a run of parts based on a common engineering schematic; a second data link to a metrology device configured to measure at least some parts in the run of parts to generate measurement data representing a physical shape of each part of the at least some parts; and a machine learning system including one or more processors in communication with a computer-readable memory storing executable instructions, wherein the one or more processors are programmed by the executable instructions to at least access a neural network trained, based on measurement data of past parts in the run of parts, to make a prediction about a future part in the run of parts; forward pass the measurement data through the neural network to generate the prediction about the future part in the run; and determine whether to output instructions for adjusting operations of the manufacturing system based on the prediction
  • the neural network includes parameters (e.g., weights of particular node to node connections) trained based on metrology inspection data of the past parts in the run of parts, and wherein the prediction represents an aspect of a metrology inspection of the future part (a "metrology prediction," e.g., a particular measurement, or whether a measurement will be in or out of tolerance).
  • a "metrology prediction" e.g., a particular measurement, or whether a measurement will be in or out of tolerance.
  • the neural network can be trained using approximately equal numbers of positive training cases (where the expected output is an in-tolerance inspection) and negative training cases (where the expected output is an out-of-tolerance inspection) to make predictions regarding whether future parts will be in or out of tolerance, or to make predictions regarding the specific measurements of the future parts.
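As a non-authoritative illustration of the balanced training described above, the following Python sketch down-samples the larger class so that in-tolerance and out-of-tolerance cases appear in approximately equal numbers; the InspectionCase record and its field names are assumptions, not part of the disclosure.

```python
import random
from dataclasses import dataclass

@dataclass
class InspectionCase:
    """Hypothetical training record: measurements of prior parts in the
    run and the expected outcome for the following part."""
    measurements: list
    out_of_tolerance: bool

def balance_training_cases(cases, seed=0):
    """Down-sample the larger class so positive (in-tolerance) and
    negative (out-of-tolerance) cases appear in roughly equal numbers."""
    positives = [c for c in cases if not c.out_of_tolerance]
    negatives = [c for c in cases if c.out_of_tolerance]
    n = min(len(positives), len(negatives))
    rng = random.Random(seed)
    balanced = rng.sample(positives, n) + rng.sample(negatives, n)
    rng.shuffle(balanced)
    return balanced
```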
  • the neural network is trained to output a predicted measurement for a feature of the future part, and wherein the one or more processors are further programmed by the executable instructions to compare the predicted measurement to a tolerance specified for the predicted measurement.
  • the one or more processors can be further programmed by the executable instructions to determine to adjust the operations of the manufacturing system based on determining that the predicted measurement is outside of the tolerance or a predetermined percentage of the tolerance.
  • the one or more processors can be further programmed by the executable instructions to determine an offset for a tool of the manufacturing system based on the predicted measurement, wherein the tool is configured to create the feature.
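One plausible way to turn a predicted measurement into a tool offset, shown as a hedged Python sketch; the sign convention and the decision to offset only when the prediction leaves the tolerance band are assumptions made for illustration.

```python
def tool_offset(predicted, nominal, lower_tol, upper_tol):
    """Return a bias for the tool that creates the feature: zero if the
    predicted measurement stays within tolerance, otherwise the negative
    of the predicted deviation so the next part is steered back toward
    nominal."""
    if lower_tol <= predicted <= upper_tol:
        return 0.0
    return -(predicted - nominal)

# Example: a hole with nominal 10.00 mm and tolerance 10.00 +/- 0.05 mm.
tool_offset(predicted=10.07, nominal=10.00,
            lower_tol=9.95, upper_tol=10.05)   # -> approximately -0.07 (mm)
```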
  • the neural network is trained to output a likelihood that a feature of the future part will be out of tolerance, and wherein the one or more processors are further programmed by the executable instructions to determine to adjust the operations of the manufacturing system based on the likelihood exceeding a predetermined threshold (e.g., 50% likely, 75% likely, 100% likely, or any other suitable percentage).
  • the one or more processors are further programmed by the executable instructions to determine to halt the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance (e.g., 30% of tolerance, 50% of tolerance, 100% of tolerance, or any other suitable percentage of tolerance).
  • the one or more processors are further programmed by the executable instructions to determine a modification to the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance, wherein the modification includes a tool offset.
  • the one or more processors are further programmed by the executable instructions to compute at least one statistical process control metric based on the measurement data; and provide the at least one statistical process control metric as an input into the neural network to generate the prediction.
  • the neural network can be structured to have one or more initial layers of locally connected nodes that perform computations to generate desired statistical process control metrics from input measurement data. The local connections between these nodes can be optimized to minimize or eliminate duplicate computations.
  • the output of the initial layer(s) is the desired statistical process control metrics (described in more detail below), which can be provided (optionally together with inspection data values) to a fully connected portion of the neural network.
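A minimal sketch of statistical process control metrics that could be computed from recent measurements of one feature and fed to the fully connected portion of the network; the specific metrics (mean, range, standard deviation, Cp, Cpk) are common SPC choices, not a set fixed by the disclosure.

```python
import statistics

def spc_metrics(measurements, lower_tol, upper_tol):
    """Compute a few common SPC metrics for one feature from at least two
    recent measurements; the returned vector can be concatenated with the
    raw measurements as network input."""
    mean = statistics.mean(measurements)
    spread = max(measurements) - min(measurements)
    sigma = statistics.stdev(measurements)
    cp = (upper_tol - lower_tol) / (6 * sigma) if sigma else float("inf")
    cpk = min(upper_tol - mean, mean - lower_tol) / (3 * sigma) if sigma else float("inf")
    return [mean, spread, sigma, cp, cpk]

spc_metrics([10.01, 9.99, 10.02, 10.00], lower_tol=9.95, upper_tol=10.05)
```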
  • Another aspect relates to a computer-implemented method comprising receiving, from a metrology device, measurement data representing a physical shape of each of a number of parts in a run of parts, wherein parts in the run of parts are manufactured based on a common engineering schematic; accessing a machine learning model trained, based on measurements of past parts in the run of parts, to make a prediction about a future part in the run of parts; performing a forward pass of the measurement data through the machine learning model to generate the prediction about the future part in the run; and determining whether to output an alert for adjusting operations of the manufacturing system based on the prediction.
  • the machine learning model is trained to output a predicted measurement for a feature of the future part
  • the computer-implemented method further comprises comparing the predicted measurement to a tolerance specified for the predicted measurement.
  • the computer-implemented method can further comprise determining to adjust the operations of the manufacturing system based on determining that the predicted measurement is outside of a predetermined percentage of the tolerance.
  • the computer- implemented method can further comprise determining an offset for a tool of the manufacturing system based on the predicted measurement, wherein the tool is configured to create the feature.
  • the machine learning model is trained to output a likelihood that a feature of the future part will be out of tolerance
  • the computer-implemented method further comprising determining to adjust the operations of the manufacturing system based on the likelihood exceeding a predetermined threshold.
  • the computer-implemented method can further comprise determining to halt the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance.
  • the computer-implemented method can further comprise determining a modification to the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance, wherein the modification includes a tool offset.
  • the computer-implemented method can further comprise computing at least one statistical process control metric based on the measurement data; and providing the at least one statistical process control metric as an input into the machine learning model to generate the prediction.
  • the computer-implemented method can further comprise outputting the alert to a control system configured to control operations of the manufacturing system.
  • the computer-implemented method can further comprise, by the control system, halting or correcting operation of the manufacturing system in response to receiving the alert.
  • the machine learning model comprises a neural network including at least an input layer and an output layer
  • the computer-implemented method can further comprise providing the measurement data to nodes of the input layer; and determining whether the future part will be out of tolerance based on values of nodes of the output layer.
  • the computer-implemented method can further comprise identifying a node of the output layer having a value indicative of an out-of-tolerance measurement predicted for the future part; accessing a mapping between the identified node and a geometric feature of the common engineering schematic; accessing a mapping between the geometric feature and a tool of the manufacturing system; and including an identification of the tool in the alert.
  • Further embodiments can comprise identifying a predicted deviation from tolerance based on the value of the identified node; calculating a position bias for controlling the tool to mitigate the predicted out-of-tolerance measurement; generating the alert to include the position bias; and outputting the alert to a control system configured to control operations of the manufacturing system.
  • the computer-implemented method can further comprise, by the control system, controlling the manufacturing system to apply the position bias during control of the tool during manufacture of the geometric feature of the future part.
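The node-to-feature and feature-to-tool mappings and the bias calculation described above might look like the following Python sketch; the specific dictionary contents and the simple negate-the-deviation bias rule are illustrative assumptions.

```python
# Hypothetical mappings derived from the engineering schematic and the
# manufacturing system's tooling setup.
NODE_TO_FEATURE = {3: "bore_diameter_A"}
FEATURE_TO_TOOL = {"bore_diameter_A": "boring_bar_T12"}

def build_alert(node_index, predicted_deviation):
    """Trace an out-of-tolerance output node back to a geometric feature
    and the tool that creates it, and include a position bias intended to
    counteract the predicted deviation."""
    feature = NODE_TO_FEATURE[node_index]
    return {
        "feature": feature,
        "tool": FEATURE_TO_TOOL[feature],
        "position_bias": -predicted_deviation,
    }

build_alert(node_index=3, predicted_deviation=0.04)
# -> {'feature': 'bore_diameter_A', 'tool': 'boring_bar_T12', 'position_bias': -0.04}
```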
  • the machine learning model is trained to make the prediction based on a first set of inspections in the measurement data
  • the computer- implemented method can further comprise accessing an additional machine learning model trained to make an additional prediction about the future part based on a second set of inspections in the measurement data, wherein the first set of inspections and the second set of inspections represent different sets of parts in the run of parts; and determining whether the future part will be out of tolerance based on the prediction of the machine learning model and on the additional prediction of the additional machine learning model.
  • Another aspect relates to a non-transitory computer readable medium storing computer-executable instructions that, when executed by a computing system comprising one or more computing devices, causes the computing system to perform operations comprising identifying an inspection of an out-of-tolerance part in a run of parts manufactured based on a common engineering schematic; identifying a set of inspections of in-tolerance parts manufactured prior in the run to the out-of-tolerance part; generating input data based on the set of inspections of the in-tolerance parts; generating expected output data based on the inspection of the out-of-tolerance part; training a machine learning model for predicting out-of-tolerance parts to predict the expected output data from the input data; and providing the trained machine learning model to a control system configured to control operations of a manufacturing system in manufacturing additional parts based on the common engineering schematic.
  • the machine learning model comprises a neural network comprising at least a statistical process control metric generation portion and a connected portion including an input layer, a hidden layer, and an output layer
  • the operations further comprise providing the input data to nodes of the statistical process control metric generation portion, wherein the input data comprises measurement data representing measured values of physical features of the in-tolerance parts; generating at least one statistical process control metric at the nodes of the statistical process control metric generation portion; providing the at least one statistical process control metric of each node of the statistical process control metric generation portion to a corresponding node of the input layer; providing the output data to nodes of the output layer; and tuning parameters of nodes of the hidden layer based on back-propagation.
  • the operations further comprise providing the measurement data to additional nodes of the input layer.
  • the operations further comprise updating the training during manufacture of the run of parts based on inspections of additional parts in the run of parts.
  • the operations further comprise identifying a second set of inspections of in-tolerance parts manufactured prior in the run to the out-of- tolerance part, wherein the set of inspections and the second set of inspections represent at least one different in-tolerance part; generating second input data based on the second set of inspections of the in-tolerance parts; training a second machine learning model for predicting out-of-tolerance parts to predict the expected output data from the second input data; and providing the trained machine learning model and the second trained machine learning model as an ensemble to the control system.
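A possible way to combine the ensemble of models described above, assuming each trained model is a callable returning an out-of-tolerance likelihood in [0, 1]; averaging the likelihoods is one reasonable combination rule, not the only one the disclosure would permit.

```python
def ensemble_out_of_tolerance(models, inspection_sets, threshold=0.5):
    """Average the out-of-tolerance likelihoods produced by each model on
    its own set of inspections and compare the average to a threshold."""
    likelihoods = [model(inspections)
                   for model, inspections in zip(models, inspection_sets)]
    return sum(likelihoods) / len(likelihoods) >= threshold
```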
  • the operations further comprise, by the control system, using the trained machine learning model to generate a metrology prediction regarding a future part in the run (e.g., a part that the manufacturing system has not yet begun to create). In some implementations, the operations further comprise, by the control system, determining whether and/or how to correct or halt operations of the manufacturing system based on the metrology prediction.
  • the machine learning model comprises an artificial neural network comprising at least an input layer, a hidden layer, and an output layer
  • the operations further comprise providing the input data to nodes of the input layer; providing the output data to nodes of the output layer; and tuning parameters of nodes of the hidden layer based on back-propagation.
  • the operations further comprise accessing data representing one or more manufacturing process parameters including a manufacturing system used to create the in-tolerance parts and the out-of-tolerance part, a metrology device used to generate the inspection and set of inspections, and an inspection operator who operated the metrology device to generate the inspection and set of inspections; and providing an identifier of the one or more manufacturing process parameters to at least one additional node of the input layer. Using such an identifier, in training and inference, can enable generation of predictions that are specific to a certain machine or human inspector involved in creating the run of parts.
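Feeding such identifiers to additional input nodes could be as simple as a one-hot encoding of the machine and inspector in use, as in the sketch below; the identifier lists are hypothetical.

```python
MACHINES = ["mill_01", "mill_02", "lathe_01"]
INSPECTORS = ["operator_A", "operator_B"]

def encode_process_parameters(machine_id, inspector_id):
    """One-hot encode manufacturing process parameters so they can be
    supplied to additional nodes of the network's input layer."""
    machine_vec = [1.0 if m == machine_id else 0.0 for m in MACHINES]
    inspector_vec = [1.0 if i == inspector_id else 0.0 for i in INSPECTORS]
    return machine_vec + inspector_vec

encode_process_parameters("mill_02", "operator_A")  # -> [0.0, 1.0, 0.0, 1.0, 0.0]
```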
  • Figure 1A illustrates a schematic block diagram of an example metrology lifecycle management system and network environment as described herein.
  • Figure 1B illustrates a schematic block diagram of an example of the metrology lifecycle management system of Figure 1A.
  • Figure 2 depicts a flowchart of an example process for generating standardized feature-based inspection reports as described herein.
  • Figure 3 illustrates example measurements with associated uncertainty intervals relative to a tolerance range.
  • Figure 4 depicts a flowchart of an example process for determining and utilizing manufacturing or inspecting uncertainty.
  • Figure 5 depicts a flowchart of an example process for calculating an uncertainty score for an inspection operator.
  • Figure 6 depicts a flowchart of an example process for calculating an uncertainty value for a manufacturing system and/or a metrology device.
  • Figure 7 depicts an example feedback loop for refining inspector and machine uncertainty values.
  • Figure 8A depicts an example view of a model of a part and GD&T data.
  • Figure 8B depicts a timeline of different parts in a run manufactured based on the model of Figure 8A.
  • Figure 8C depicts an example set of training data including inspection reports of different parts in the run of Figure 8B.
  • Figure 9A depicts an example topology of a neural network for predicting out-of-tolerance parts.
  • Figure 9B depicts an example set of statistical process control metrics for input into the machine learning layers of the network of Figure 9A.
  • Figure 9C depicts example sets of training data and an ensemble of networks for predicting out-of-tolerance parts.
  • Figure 10 depicts an example data structure for analysis of machine learning model output, for example the output of the neural network of Figures 9A and 9C.
  • Figure 11 depicts a schematic block diagram of an example of the prediction engine of Figure 1B.
  • Figure 12A depicts a flow diagram of an illustrative process for training a machine learning model, for example in the prediction engine 165 of Figure 11 and/or as discussed with respect to Figures 8C-9B.
  • Figure 12B depicts a flow diagram of an illustrative process for providing out-of-tolerance parts predictions in the prediction engine of Figure 1 via a model trained as described with respect to Figure 12A.
  • Figure 13 depicts a flow diagram of an illustrative process for inspection-based manufacturing process controls.
  • aspects of the disclosure relate to systems and techniques for leveraging insights gleaned from metrology inspection data to improve manufacturing and inspection processes.
  • the disclosed technology can analyze inspection data to learn the capabilities of the manufacturing systems, metrology inspection devices, and human inspection operators involved in the part creation cycle. The knowledge of these capabilities can be leveraged to create leaner, more efficient manufacturing processes that are less likely to scrap good parts (e.g., due to excessive uncertainty about the inspection values preventing approval of the part inspection) or to approve bad parts (e.g., due to an inaccurate measurement process that causes the bad part to appear in-tolerance).
  • the disclosed technology can acquire metrology inspection data of a number of physical, manufactured parts and standardize the inspection data in a manner that enables the system to isolate a portion of overall measurement uncertainty that is attributable to particular stages of the manufacturing process.
  • Measurement uncertainty is a metric for the amount which the measured shape of an inspected part may reasonably be expected to differ from its actual shape (for further details on measurement uncertainty, see Figure 3 and associated description).
  • the present technology can identify portions of the inspection data representing specific geometric features of the inspected parts and generate feature-based inspection reports for analysis, rather than relying on the native inspection report format received from a particular metrology device.
  • This technique allows the disclosed inspection data analyses to learn device and inspector capabilities with respect to particular geometric features (e.g., a cylinder, a hole, a flat surface), regardless of whether the same metrology system was used to inspect two different parts in the inspection data or whether those parts have the same shape.
  • the learned inspector and device capabilities can in turn be used to structure manufacturing processes that have small uncertainty compared to the allowable deviations in part shape.
  • the system analyzes feature-based inspection reports to identify how precisely a machine can manufacture or measure a certain geometric feature, or how accurately different inspection operators are able to measure certain geometric features.
  • the feature-based analysis enabled by the disclosed inspection data standardization can reveal measurement uncertainties associated with specific metrology inspectors, and such uncertainties can be used to select appropriate inspectors for specific measurement projects or for further training. This can yield more efficient manufacturing processes in some implementations, as the uncertainty attributable to a human inspector of a manual metrology device is typically orders of magnitude larger than the uncertainty due to the device itself.
  • the described systems can further manage dissemination of the inspections, analyses, and/or other manufacturing and inspection data within a company or supply chain.
  • trends in inspection data can reveal problem areas or inefficiencies in manufacturing and inspection processes that may otherwise go undiscovered, and a centralized electronic inspection database can provide for heightened data usability, integrity, and traceability.
  • a company selling an end product may request that a portion of a project be manufactured by an outside company (referred to herein as a "supplier").
  • An OEM (original equipment manufacturer) company can outsource different portions of the project to a number of different suppliers, and in some cases the same portion can be outsourced to a number of different suppliers.
  • suppliers may outsource smaller components of their portion of the project to other companies specializing in those components (“sub-suppliers").
  • Through sub-suppliers, the supply chain for the project can extend through many levels of different companies.
  • Existing 3D metrology software is desktop software installed locally on user devices, providing single part analysis to pass or fail one manufactured part at a time based on whether the metrology measurements conform to specified tolerances.
  • Such software can receive, from metrology devices, inspection data representing measurements of a manufactured part and can compare the inspection data to predetermined metrology specifications to determine whether the part has been manufactured within required tolerances.
  • the inspection data is typically not utilized beyond determining conformance or nonconformance of the manufactured part, in part due to the differences in the outputs of different metrology devices and software options.
  • a metrology device like a laser scanner can acquire millions of data points representing the surfaces of an inspected part, while a portable coordinate measurement machine arm (“PCMM arm”) can measure a much smaller number of designated points on the surface.
  • each metrology software package has its own way of storing this data in inspection reports, such that there are differences even between two reports from different metrology software package options that represent the same inspections of the same physical part.
  • Different metrology software options are optimized for use with different metrology hardware, and both within a single company and throughout a supply chain there are commonly inspections performed using a wide variety of metrology devices and software options.
  • the above-described problems are addressed, in some embodiments, by the electronic metrology lifecycle management systems ("MLM system") and techniques described herein.
  • the MLM system includes a standardization engine for importing measurement data from a number of different metrology hardware and/or software sources and standardizing the measurement data into a feature-based format for data storage and/or display.
  • the MLM system can apply machine learning to glean actionable insights from measurement and manufacturing process data, for example by predicting that a part will be scrapped before its manufacturing is begun.
  • measurements and measurement data can describe metrology inspection data of points located on a measurand - a physical, manufactured part being inspected.
  • the standardized format can cluster inspection data by geometric features present in the measured part. Geometric features include cylinders, holes, flat or contoured surfaces, spheres, threads, slots, and toroids, to name just a few examples.
  • inspection data of varying densities and formats acquired from varying metrology hardware and software sources can be aggregated together by feature for analysis.
  • the disclosed feature-based standardization removes the restrictive point-matching limitation from statistical process control (SPC) analysis, allowing aggregation and analysis of measurements of the same feature from different parts (for example, two or more parts manufactured based on different models), measurements of the same feature taken by different metrology devices, measurements of the same feature at different locations on the feature, and the like.
  • analysis based on the disclosed feature-based reports can be used flexibly across a wide range of manufacturing processes and can provide new insights into process capabilities compared to existing SPC systems.
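A minimal sketch of the feature-based aggregation this enables, assuming a simple per-inspection record layout that is not defined by the disclosure.

```python
from collections import defaultdict

def aggregate_by_feature(inspection_records):
    """Group measurement records by geometric feature type rather than by
    matched point, so measurements from different parts, devices, and
    locations on a feature can be analyzed together."""
    by_feature = defaultdict(list)
    for record in inspection_records:
        # Example record: {"feature": "cylinder", "measured": 9.98,
        #                  "nominal": 10.00, "device": "laser_scanner_3"}
        by_feature[record["feature"]].append(record)
    return dict(by_feature)
```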
  • the feature-based analysis enabled by the disclosed inspection data standardization can reveal measurement uncertainties associated with specific metrology inspectors, manufacturing systems, and/or metrology devices.
  • Measurement uncertainty can represent the precision or accuracy of a device or inspector in creating or measuring a part.
  • a manufacturing uncertainty interval can represent a range of part feature shapes that are likely produced by a given manufacturing system when trying to produce a feature according to a given nominal value.
  • a measurement uncertainty interval can represent, for a given metrology device, a range around a measured value in which the possible actual measurement of the inspected feature can reasonably be expected to fall.
  • Calibration data for manufacturing systems and metrology devices typically includes an approximation of the uncertainty that may be attributed to the device; however, this is a static value that does not account for wear and changing conditions over time.
  • the disclosed techniques for feature-based inspection data analysis can be used to adjust these values dynamically as further inspection data is collected over time, such that the machine-specific uncertainty scores of the MLM system stay up-to-date without requiring any additional testing and calibration processes.
  • An inspector uncertainty interval can represent a range of possible actual measurements of the feature when inspected by a particular human inspection operator.
  • Human inspectors may undergo gage repeatability and reproducibility ("gage r&r") testing to approximate the amount of uncertainty they introduce into the measurement process.
  • a number of different inspectors take turns using the same metrology device to repeatedly measure the exact same part. Because the inspector is the only major source of variability between measurements, this type of testing can reveal measurement differences between the participating inspectors.
  • the resulting uncertainty measure has limited applicability as it (1) relates only to the particular test part subjected to the gage r&r, and (2) is a high-level metric of generally how much uncertainty may be attributable to the inspector relative to other inspectors.
  • the disclosed techniques for feature-based inspection data analysis beneficially avoid these limitations, as they operate on inspection data that is already gathered by the inspector during the course of his or her typical employment and thus require no additional testing procedures.
  • the inspection data can involve a diverse set of parts having different shapes and sizes.
  • the resulting uncertainty measures represent inspector capabilities for measuring particular geometric features. These can be used to identify the capability of the inspector with respect to new part shapes (sharing those geometric features) that were not included in the inspection data set from which the uncertainty measures were derived.
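One simple estimator consistent with this description, sketched in Python under the assumption that each record carries a feature label, a measured value, and a nominal value: treat the spread of an inspector's deviations from nominal, grouped by feature, as that inspector's per-feature uncertainty.

```python
import statistics
from collections import defaultdict

def inspector_feature_uncertainty(records):
    """Estimate a per-feature uncertainty for one inspector as the standard
    deviation of that inspector's deviations from nominal, grouped by
    geometric feature (requires at least two measurements per feature)."""
    deviations = defaultdict(list)
    for rec in records:
        deviations[rec["feature"]].append(rec["measured"] - rec["nominal"])
    return {feature: statistics.stdev(values)
            for feature, values in deviations.items() if len(values) > 1}
```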
  • the MLM system can guide manufacturers to select appropriate equipment/device/inspector combinations for specific parts to improve efficiency and reduce the number of scrapped parts.
  • Figure 1A illustrates a schematic block diagram of the MLM system 110 in a network environment 100.
  • Figure 1B illustrates a schematic block diagram of an example of the MLM system 110.
  • the MLM system 110 can be implemented on a single computer or with one or more physical servers or computing machines, including the example illustrated group of servers.
  • the MLM system 110 includes one or more processor(s) and a memory 125 storing modules of computer-readable instructions that configure the processor(s) to perform the functions described herein.
  • the network environment 100 includes MLM system 110, network 108, metrology devices 102, manufacturing systems 104, control system 105, and user devices 106.
  • the MLM system 110 can include data links to any or all of the metrology devices 102, manufacturing systems 104, control system 105, and user devices 106, for example using the network 108 and suitable communications protocols/buses.
  • the MLM system 110 can acquire part measurement data in any of a number of different formats, for example from various metrology devices 102A-C including a coordinate measurement machine 102A, PCMM arm 102B, and laser tracker 102C, to name a few.
  • metrology devices include profilometers, optical comparators, laser scanners, interferometers, LiDAR devices, computed tomography metrology devices, and other devices capable of obtaining measurements representing surfaces of manufactured parts.
  • the metrology devices 102A-102C can cooperate with one or more different metrology software platforms to generate measurement data representing coordinate points along surfaces of physical, manufactured parts.
  • the measurement data can be provided automatically from the metrology devices 102 and/or metrology software to the MLM system 110 via network 108 in some implementations.
  • the MLM system 110 can additionally or alternatively include functionality for users to upload measurement data into the MLM system 110, for example via a browser-based user interface on a user device 106.
  • the MLM system 110 can also receive and/or send information from/to one or more manufacturing systems 104.
  • Manufacturing systems can include mills, lathes, Swiss turn, mold-based manufacturing systems, cutting systems implementing water jets, plasma, or electronic cutting means, 3D printers, routers, and the like.
  • the MLM system 110 can receive, for example, machining plans, machine operation parameters, data from sensors positioned to observe a manufacturing system, and the like.
  • the MLM system 110 can send, for example, machining plans and/or machine operation parameters that have been input or updated by users of the MLM system 110 and/or based on automated analyses of the MLM system 110.
  • Control system 105 can represent a locally-installed module, for example a local component of the MLM system 110, that can generate and send instructions for controlling operation of a manufacturing system 104. As such, control system 105 can be connected to both the network 108 and the manufacturing system 104.
  • users can access the MLM system 110 with user devices 106.
  • the user devices 106 that access the MLM system 110 can include computing devices, such as desktop computers, laptop computers, tablets, personal digital assistants (PDAs), mobile phones (including smartphones), electronic book readers, media players, game platforms, and electronic communication devices incorporated into vehicles or machinery, among others.
  • the user devices 106 can access the MLM system 110 over a network 108, for example through a browser-based portal.
  • the network 108 may be any wired network, wireless network, or combination thereof.
  • the network 108 may be a personal area network, local area network, wide area network, over-the-air broadcast network, cable network, satellite network, cellular telephone network, or combination thereof.
  • the communication network 108 may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet.
  • the communication network 108 may be a private or semi-private network, such as a corporate or university intranet.
  • the communication network 108 may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art of electronic communications and thus, need not be described in more detail herein.
  • the MLM system 110 includes one or more servers 120, a number of data repositories 170, 180, 185, and a working memory 125 storing standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175.
  • the MLM system 110 can be implemented with one or more physical servers or computing machines, including the servers 120 shown (among possibly others).
  • Server(s) 120 can include one or more processor(s) 122 and a memory 124 storing computer-readable instructions that configure the processor(s) 122 to perform the functions described herein.
  • the standardization engine 140, multi-tenant manager 130, analytics engine 150, and model viewing engine 160 can be modules of computer-readable instructions stored in the memory 124 of the server(s).
  • the standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175 can be stored and/or executed elsewhere, for example on a local computer 106 networked with the MLM system 110, in a virtual machine, or either of the above in combination with the servers 120.
  • These servers 120 can access back-end computing devices, which may implement some of the described functionality of the MLM system 110. Other computing arrangements and configurations are also possible.
  • each of the components depicted in the MLM system 110 can include hardware and/or hardware and software for performing various features.
  • the MLM system 110 is a network site (such as a web site) or a collection of network sites, which serve network pages (such as web pages) to users.
  • the MLM system 110 hosts content for one or more mobile applications or other applications executed by connected metrology devices 102A-C, manufacturing systems 104, and/or user devices 106.
  • this specification often refers to the MLM system 110 in the web site context as being accessed through a browser-based portal.
  • the MLM system 110 can be adapted for presentation in desktop applications, mobile applications, or other suitable applications.
  • the processing of the various components of the MLM system 110 can be distributed across multiple machines, networks, or other computing resources.
  • the various components of the MLM system 110 can also be implemented in one or more virtual machines or hosted computing environment (a.k.a., "cloud") resources, rather than in dedicated servers.
  • the data repositories shown can represent physical and/or logical data storage, including, for example, storage area networks or other distributed storage systems.
  • the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware.
  • Executable code modules that implement various functionalities of the MLM system 110 can be stored in the memories of the servers 120 and/or on other types of non-transitory computer-readable storage media. While some examples of possible connections are shown, any subset of the components shown can communicate with any other subset of components in various implementations.
  • Standardization engine 140 can receive measurement data from one or more metrology devices 102 and convert the measurement data to a standardized format. As described herein, the standardization engine 140 can identify geometric features of a part based on its associated GD&T, computer model, or a user-provided template for parts without GD&T or computer models. Inspection data associated with each geometric feature can be stored in association with the geometric feature. Thus, the standardized, feature-based reports of the MLM system 110 are independent of any format specific to a certain metrology hardware or software, and inspection data from a variety of sources can be collected and/or compared by feature.
  • the standardization engine 140 beneficially enables aggregation of data from different sources for analysis.
  • the standardized data can be stored in association with one or more other values, for example in association with an engineering schematic from which a part was created, a manufacturing system used to create the part, an assembly or project including the part, a type of metrology hardware and/or software used to generate the measurement data of the part, an inspection operator user who performed machining and/or metrology inspection of the part, a supplier responsible for manufacturing the part, an OEM requesting manufacture of the part from the supplier, and the like.
  • Such additional parameters can be used to identify specific trends in inspection data and/or to provide alerts to designated users.
  • Metrology data repository 170 is a data storage device that stores such feature-based aggregated standardized measurement data.
  • the original inspection file can also be stored, for example for provision to a requesting company or OEM.
  • Aggregated data sets in data repository 170 can include, for example, measurements of a number of different parts, measurements of a number of the same part (that is, parts in a run that are manufactured based on the same model or blueprint), or different measurements of the same part at different times and/or by different metrology equipment.
  • standardized inspection data can be aggregated by feature and associated with a specific manufacturing system used to manufacture the inspected part, metrology device used to perform the inspection, and/or inspection operator performing the inspection.
  • the data repository 170 can be used to log the quality of a particular part and/or assembly at different points in time.
  • the measurement data can include initial inspections of parts performed in the manufacturing pipeline, inspections of these parts and/or assemblies including these parts performed throughout a manufacturing supply chain, and inspections of these parts and/or assemblies after the inception of product use (e.g., periodic quality inspections, maintenance inspections, etc.).
  • the MLM system 110 can provide maintenance predictions and preventative alerts.
  • the data repository 170 includes additional storage storing engineering schematics (including blueprints, CAD models, assemblies, GD&T information, and inspection plans). In this manner, both manufacturing and inspection data are centrally located and can be analyzed together as appropriate.
  • Geometric dimensioning and tolerancing (GD&T) is a system for defining and communicating engineering tolerances, and uses a symbolic language on engineering drawings and computer-generated three-dimensional solid models that explicitly describes nominal geometry and its allowable variation. Tolerance limits, as used herein, refer to these allowable variations of a part measurement from nominal.
  • the MLM data repository 170 can be structured to comply with any regulatory requirements within specific industries or supply chains and to provide increased data integrity and traceability.
  • the analytics engine 150 can provide meaningful statistical analyses of aggregated subsets of the standardized measurement data. Appropriate analysis of aggregated metrology inspection data can provide valuable insights about manufacturing and inspection processes that a single measurement report or part pass/fail indication cannot.
  • the analytics engine can analyze aggregated standardized measurement data of parts manufactured by one or more machines, parts measured by one or more metrology inspectors, parts manufactured by one or more entities in a supply chain relationship, of the same part measured at different points in a supply chain, and the like. Trends identified through analysis of aggregate subsets of measurement data can provide insights into manufacturing and inspection process efficiencies and capabilities, as well as predict potential quality problems.
  • the feature-based analysis enabled by the disclosed inspection data standardization can reveal measurement uncertainties associated with specific metrology inspectors, and such uncertainties can be used to select appropriate inspectors for specific projects or for further training.
  • the feature-based analysis can reveal actual machine tolerance capabilities, beneficially providing users with real performance abilities rather than relying on potentially inaccurate default machine specifications.
  • the MLM system can implement rule-based reporting for providing results of data analysis to appropriate user(s), and can provide recommendations to improve manufacturing and inspection processes.
  • Prediction engine 165 trains and implements machine learning models to improve manufacturing process ROI.
  • prediction engine 165 can train a machine learning model to predict out of tolerance parts based on the inspection data (and any associated process metrics such as machines and/or users) of previously-created, in-tolerance parts.
  • the prediction engine 165 can then implement a trained model in a manufacturing pipeline using inspection data of parts in a run in order to predict out of tolerance parts before their manufacturing is begun.
  • the prediction engine 165 can create aggregate subsets of inspection data in real-time (e.g., as it is generated during production of a run of parts) and can input this data into a trained model.
  • Machine learning data repository 185 is a data storage device storing data relating to the training and implementation of machine learning models for prediction of manufacturing conditions.
  • machine learning data repository 185 can store training data sets, trained model parameters, and new input data sets for real-time predictive manufacturing analysis.
  • the machine learning data repository 185 can store multiple trained machine learning models associated with a specific part, with each trained model associated with a different inspector, manufacturing system, and/or metrology device.
  • the prediction engine 165 can identify a suitable model based on current process setup (e.g., specifically which inspector, manufacturing system, and/or metrology device is being used for creation of a part) and can use the identified model to make predictions about out-of- tolerance conditions.
  • the prediction engine 165 can monitor the process parameters to determine whether a different model is more suitable as the process changes (e.g., when inspector shifts change over, when manufacturing tooling is replaced, etc.).
  • the unique identifier associated with the inspector, manufacturing system, and/or metrology device may be incorporated in the machine learning model as process metrics.
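Model selection keyed by the current process setup might be implemented roughly as follows; the tuple key and the part-level fallback are assumptions made only for this sketch.

```python
def select_model(trained_models, part_id, inspector, machine, device):
    """Return the trained model matching the current combination of part,
    inspector, manufacturing system, and metrology device, falling back
    to a part-level default when no exact match exists."""
    key = (part_id, inspector, machine, device)
    if key in trained_models:
        return trained_models[key]
    return trained_models.get((part_id, None, None, None))
```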
  • the MLM system 110 can compare measurement data of a part to nominal measurements as indicated by an associated GD&T file to identify deviations.
  • the metrology system providing the measurement data can additionally or alternatively provide deviation values.
  • the MLM system 110 can compare identified deviations to tolerance values specified at the measurement locations and output conformance or non-conformance reports in any of a number of formats. Such reports can be automatically provided to users associated within the MLM system 110 with a part or project.
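A hedged sketch of such a conformance check, assuming symmetric tolerances and dictionary inputs keyed by measurement identifier; real GD&T tolerances are often asymmetric and richer than this.

```python
def conformance_report(measurements, nominals, tolerances):
    """Compare measured values to nominal values and flag deviations that
    exceed the specified (symmetric) tolerance for each measurement."""
    report = {}
    for key, measured in measurements.items():
        deviation = measured - nominals[key]
        report[key] = {"deviation": deviation,
                       "conforming": abs(deviation) <= tolerances[key]}
    return report

conformance_report({"hole_1": 10.06}, {"hole_1": 10.00}, {"hole_1": 0.05})
# -> {'hole_1': {'deviation': ~0.06, 'conforming': False}}
```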
  • Some implementations of the MLM system can include a multi-tenant management component 130.
  • the multi-tenant manager 130 can manage information flow between users at different levels of a supply chain, for example allowing users to receive quality analyses and manufacturing scheduling updates from users at other levels of the supply chain. This can be accomplished in some examples through automated reporting.
  • the multi-tenant structure can provide important alerts to designated users in real time, for example by providing alerts when inspections reveal non-conforming parts and/or when equipment is nearing or has reached the end of its lifecycle of creating conforming parts.
  • the system can provide alerts to suppliers as models, blueprints, and/or inspection plans are revisioned by an OEM.
  • the MLM system can provide recommendations of manufacturing systems, metrology devices, and/or inspectors for specific parts based on determined measurement uncertainties as described in more detail below.
  • the multi-tenant structure of the disclosed MLM system can enable aggregation of measurements from throughout the supply chain into a single repository.
  • the MLM system 110 is a multi-tenant system operated as a network site (such as a web site) or a collection of network sites, which serve network pages (such as web pages) to user devices 106 via the user interface 126.
  • the MLM system 110 can additionally or alternatively host content for one or more mobile applications running on user devices 106.
  • the MLM system 110 can additionally or alternatively host other applications executed by connected metrology devices 102A-102C, machining systems 104, and/or user devices 106. Such applications can be networked with one another to perform the data transfer functions described herein.
  • the multi-tenant manager 130 can include a number of supply chain management and user interface components including an alert management module, an inspection display module, and various modules for managing permissions and/or for inputting the described data types into the MLM system 110, to name a few.
  • An alert management module can provide alerts to designated users, for example, when part inspections are completed, when non-conforming parts are produced, and/or when a model is revisioned.
  • the multi-tenant manager 130 can include an inspection package reporting engine that can automatically disseminate inspection results and any associated data (photos of inspected parts, original and/or standardized inspection reports, non-conformance reports, and the like) to designated users within a company or supply chain.
  • the user data repository 180 stores data representing user profiles for companies and/or individuals as well as manufacturing relationships between companies and/or individuals. Each can have a unique identifier in the MLM system 110.
  • the user data repository 180 can store specified rules for providing access and alerts to users regarding parts, assemblies, or projects. As such, another aspect of the MLM system 110 relates to its multi- tenant functionality as a supply chain relationship management resource.
  • the MLM system 110 can provide functionality for users to browse or search for information relating to parts, assemblies, inspections, and the like for which the user has an associated permission to access such data in the user data repository 180.
  • Certain users in the user data repository can be identified as inspection operators (e.g., human operators of metrology inspection devices). Such inspection operators can have an associated measurement uncertainty, for example as determined via processes 400, 500, and/or 600 described below.
  • User data repository 180 can also store asset portfolios.
  • An asset portfolio can include the machines of an individual or company using the MLM system 110 that operate within a manufacturing-inspection environment, for example manufacturing systems and/or metrology devices.
  • the asset portfolio of a particular company may include manufacturing machines and tooling used with specific machines, each associated with a unique identifier in the MLM system 110. These machines can each have an associated measurement uncertainty in the asset portfolio.
  • the metrology data repository 170 and/or user data repository 180 can, in some embodiments, be stored remotely on one or more servers in network communication with the metrology devices 102, manufacturing systems 104, and/or user devices 106. Though shown as two separate data repositories, the metrology data repository 170 and user data repository 180 can be combined into a single data repository or split into a number of different data repositories in various implementations.
  • the MLM system 110 can provide users of the user devices 106 with access to an electronic repository of metrology inspection data, 3D CAD models and/or blueprints of parts and assemblies, machining instructions, inspection plans, and/or analysis results provided by the MLM system, to name a few examples.
  • a user interface of the MLM system 110 can provide content via a web browsing application such that the functionality of the MLM system 110 can be accessed by a number of different user devices 106.
  • the browser-based user interface can provide functionality for a user to, from any computing device, view and interact with a 2D or 3D representation of the part, to view part measurement data, and/or to view a comparison of deviations between the part measurement data and nominal with predefined tolerance(s).
  • the described MLM system 110 can provide increased access to metrology and manufacturing data compared to existing systems, which typically require specialized software installed locally on a computer in a manufacturing environment.
  • the MLM system extends accessibility of manufacturing and inspection information from the shop floor to anywhere a user may wish to view their process data.
  • the user-friendly, feature-based report format and computer model access available through the model viewing engine 160 enable any person in a company to view and understand models and inspection data, regardless of any specialized training in complicated CAD or metrology software.
  • Another benefit of the MLM system is a reduced need for companies to purchase expensive seats of CAD and metrology software just to be able to view and access their own data.
  • the model viewing engine 160 of the MLM system can be implemented via a rendering engine of the server(s) 120 for delivering interactive, three-dimensional representations of models and/or measured parts to connected user devices 106.
  • a user can view an interactive 3D model of a part with visualization of inspection data overlying the nominal model, even if the graphics processing capabilities of their device are not sufficient to perform three-dimensional graphics rendering.
  • the inspection data overlying the model can include GD&T callouts specifying tolerances, deviations identified from inspections, and/or a heat map showing different colors for inspection measurements based on how well those measurements match the specified tolerances. This can provide an accessible and user-friendly means for presenting models, inspection parameters, and inspection results to users.
  • the rendering engine can provide functionality that enables users to rotate, zoom, or otherwise manipulate the interactive 3D models.
  • the rendering engine can in some embodiments provide users with functionality to specify a manufacturing timeframe, and then can generate an interactive and dynamic heat map that shows the change in the heat map over time for a particular part run.
  • the features of the disclosed MLM system 110 can also be used by a number of users within a single company to manage their internal manufacturing and quality data.
  • Implementations of the MLM system can also be adapted to function without any network capabilities, for example for use in locally managing manufacturing and quality data of restricted projects.
  • Restricted projects can include projects requiring a certain level of governmental clearance or any other project that a company may desire to keep private. As such, in some examples no information may flow out of the MLM system 110 to other sources and all information can be used locally.
  • the server(s) 120 may be omitted and the processor(s) 122, data repositories 170, 180, standardization engine 140, multi-tenant manager 130, analytics engine 150, and model viewing engine 160 can be stored and/or executed by a local computing device hosting the non-networked implementation of the MLM system 110.
  • Machine controller 175 can control the operations of one or more manufacturing systems and metrology devices, for example as implemented in a robotic manufacturing cell.
  • Machine controller 175 can send instructions to the manufacturing systems 104 to begin or halt production, to change out tooling, or to adjust tooling position during manufacturing. These instructions can be based on the output of the analytics engine 150 and/or prediction engine 165 as described herein.
  • the machine controller 175 can receive an alert from the prediction engine 165 that a next part in a run is predicted to be out-of-tolerance.
  • the machine controller 175 can halt the operation of the manufacturing system 104 identified for creating the predicted out-of-tolerance part, and can alert associated users regarding the prediction and the halt.
  • the machine controller 175 can also receive information from the analytics engine 150 and/or prediction engine 165 regarding corrective action that will mitigate the chances of the identified part being out-of-tolerance, for example (1) changing/replacing tooling of a manufacturing system 104 or (2) a compensation or bias to apply to machine/tool position instructions in order to compensate for identified wear.
  • the machine controller 175 can send instructions to the manufacturing system 104 to cause the manufacturing system 104 to automatically take the identified corrective action.
  • this can be done without requiring human intervention in real time and in-line with the manufacturing process so that scrap can be avoided while at the same time efficiently continuing manufacturing operations.
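  • As a purely illustrative sketch of the controller behavior just described (not the patented implementation), the halt-or-compensate decision could be organized as follows; the names Prediction, MachineController, and the instruction strings are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    part_id: str
    out_of_tolerance: bool        # output of the prediction engine for the next part
    suggested_offset_mm: float    # compensating bias for tool position, 0.0 if none
    replace_tooling: bool         # whether tooling replacement is recommended

class MachineController:
    """Hypothetical controller reacting to out-of-tolerance predictions."""

    def __init__(self, send_instruction, notify_user):
        self.send_instruction = send_instruction  # callable that talks to the manufacturing system
        self.notify_user = notify_user            # callable that alerts designated users

    def handle_prediction(self, p: Prediction) -> None:
        if not p.out_of_tolerance:
            return  # nothing to do; let the run continue
        if p.replace_tooling:
            self.send_instruction("HALT")
            self.notify_user(f"Part {p.part_id}: halt issued, tooling replacement recommended.")
        elif p.suggested_offset_mm:
            # Apply a compensating bias in-line, without stopping the run.
            self.send_instruction(f"OFFSET {p.suggested_offset_mm:+.3f} mm")
            self.notify_user(f"Part {p.part_id}: applied {p.suggested_offset_mm:+.3f} mm compensation.")
        else:
            self.send_instruction("HALT")
            self.notify_user(f"Part {p.part_id}: halt issued, no corrective action identified.")

# Example with stubbed transport functions:
controller = MachineController(send_instruction=print, notify_user=print)
controller.handle_prediction(Prediction("P-042", True, 0.012, False))
```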
  • the MLM system 110 can open inspection data files across a range of available formats and standardize the inspection data.
  • Figure 2 illustrates an example process 200 for standardizing inspection data and can be implemented by the standardization engine 140 in some embodiments.
  • the process 200 can standardize different formats and/or densities of metrology inspection data based on geometric features for aggregation and analysis and/or display in a unified format.
  • the standardization engine 140 can obtain inspection data representing measurement of a physical part.
  • the MLM system 110 can provide a user interface for users to upload inspection reports.
  • the MLM system 110 can be in direct communication with a metrology device and/or computing device hosting metrology software such that inspection data is automatically imported into the MLM system as it is acquired.
  • though the process 200 will be described in the context of part inspection, it can also be applied to assembly inspections.
  • a part refers to a single, unitary part while an assembly refers to the coupling of two or more parts.
  • the standardization engine 140 can identify geometric features of the part represented by the inspection.
  • inspection data obtained by the standardization engine 140 can be associated with a specific part in the data repository 170.
  • the part will also be associated with a GD&T file specifying nominal measurements and tolerances at a number of locations on the part.
  • the standardization engine 140 can, in some embodiments, first check whether a GD&T file is associated with the inspected part. If so, at block 210B the standardization engine 140 can parse through the GD&T file to identify reference to any geometric features of the part. Some parts may not have an associated GD&T file.
  • the standardization engine 140 can parse through the data representing the model to identify reference to geometric features, for example in the element IDs of a three-dimensional CAD file.
  • Blocks 210A-210C represent some of the options available to the standardization engine 140 for identifying part features.
  • no GD&T or model may be available in the MLM system for the inspected part, and the standardization engine 140 can parse through headers of the inspection data file to identify reference to any geometric features.
  • the described parsing can be implemented in some embodiments via fuzzy logic.
  • Some parts may be created based off of two-dimensional blueprints, and for such parts a user may upload a mapping of the geometric features of the part usable by the standardization engine 140 to identify the geometric features.
  • the standardization engine 140 can output a listing of the geometric features of the part for storage in association with the part in the data repository 170 such that the listing is generated once for an initial inspection of the part and then accessed for subsequent inspections of the part.
  • the standardization engine 140 can generate a feature-based report by identifying, for each geometric feature, portions of the inspection data representing measurement of the feature. For example, the standardization engine 140 can parse through headers of the inspection file, map the headers to the identified geometric features, and store all data points under the inspection file headers with the associated identified feature.
  • the data points from the inspection file can include a measurement value and x,y,z coordinates of the data point.
  • a first feature-based report can include six data points representing a cylinder measured by a PCMM arm, while a second feature-based report can include thousands of points representing the cylinder measured by a laser tracker; both reports are formatted to specify that the data points represent a cylinder.
  • the feature-based report formatting provides the ability to aggregate inspection data from different formats based on features and/or to present a single way of viewing inspection data from any source.
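  • The grouping step just described can be illustrated with a minimal, hypothetical sketch; the row layout, header names, and the build_feature_based_report function are assumptions for illustration only:

```python
from collections import defaultdict

def build_feature_based_report(rows, header_to_feature):
    """Group raw inspection rows by geometric feature.

    rows: iterable of (header, x, y, z, measured_value) tuples from any metrology source.
    header_to_feature: mapping of inspection-file headers to feature names, e.g. produced
    by parsing a GD&T file or the element IDs of a CAD model.
    """
    report = defaultdict(list)
    for header, x, y, z, value in rows:
        feature = header_to_feature.get(header, "unmapped")
        report[feature].append({"xyz": (x, y, z), "value": value})
    return dict(report)

# Six points from a PCMM arm and thousands from a laser tracker both collapse
# into the same structure, e.g. {"cylinder_1": [ {...}, ... ]}.
pcmm_rows = [("CYL1_PT%d" % i, 0.0, 0.0, float(i), 25.401 + 0.001 * i) for i in range(6)]
mapping = {"CYL1_PT%d" % i: "cylinder_1" for i in range(6)}
print(build_feature_based_report(pcmm_rows, mapping)["cylinder_1"][:2])
```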
  • the standardization engine 140 can identify any hardware, software, and/or inspection operator associated with creation and/or inspection of a part or assembly.
  • Hardware can include a manufacturing system used to make the part and/or a metrology device used to inspect the part.
  • Software can include a program used to run the manufacturing system, a program used to run the metrology device, or a metrology software option used together with the metrology device.
  • a user uploading an inspection file can specify this information, and/or the data repositories 170, 180 of the MLM system 110 may include this information.
  • the standardization engine 140 can implement fuzzy logic to search inside of headers in the inspection file for metrology hardware and/or software information.
  • the identified information regarding hardware, software, and/or inspection operator is stored in association with the feature-based report, for example in data repository 170.
  • Standardizing inspection data as illustrated in Figure 2 can provide several advantages.
  • prior manufacturing statistical process control (SPC) systems are limited to analyzing direct point-to-point matches on inspected parts and thus require development of and adherence to specific inspection plans that outline the points needed for SPC analysis.
  • existing SPC systems are limited to analyzing data from the same type of measurement hardware and software due to relying on the point-to-point comparison.
  • the disclosed MLM system can aggregate part measurements by feature from any measurement device and without inspection plans.
  • the MLM system 110 is capable of analysis and comparison of inspection data from different sources. For instance, before a part is initially shipped from its original manufacturer it typically must be measured for quality assessment to ensure that it meets the required level of accuracy. This same part is often measured by the recipient to double check the accuracy before using the part. Using the standardization engine 140 and analytics engine 150, these two measurements (and any further measurements of the part as it continues to travel down the supply chain) can be automatically standardized and aggregated, despite being taken in different locations, at different times, and possibly using different measurement hardware and/or software.
  • part recipients can use aggregate quality data (or rankings generated therefrom) to make judgments of their suppliers regarding how close to tolerance their parts are generally, and how well-controlled their manufacturing process is.
  • otherwise the part recipient would have to manually compare the individual data sets provided as a printed report with each part to make this kind of judgment, so the MLM system both saves the part recipient time and provides them with a depth of analysis not possible from visually comparing printed reports.
  • aggregated inspection data can be analyzed to rank specific individuals or companies in a supply chain or industry based on metrics such as measurement data accuracy, delivery time, and/or part conformance to tolerances.
  • rankings may be used to provide specific supplier recommendations to OEM companies (or to other tiers of a supply chain) based on performance track record. Knowledge gained through such rankings can enable companies to increase efficiency in their supply chain relationships.
  • Another advantage of the standardization process of Figure 2 relates to presentation of inspection reports to users.
  • Existing metrology software is expensive and involves a steep learning curve, which often limits the number of employees at a given company who are able to view and understand inspection reports.
  • the MLM system can open files across a range of available formats and standardize the inspection data as described with respect to Figure 2.
  • These standardized inspection reports can be displayed to users, for example in a browser-based interface accessible via any connected and authorized device.
  • the inspection reports can also be mapped to the geometric features of models used to generate the inspected parts and overlaid onto the models, for example as a heat map showing where in-tolerance and out-of-tolerance measurements occurred.
  • a user can open an inspection data file using the MLM interface to view the quality analysis in a single report format or overlaid onto the associated model, eliminating the need for users to familiarize themselves with the reporting formats of a number of different metrology software packages.
  • Such display techniques can increase the availability and understandability of inspection data.
  • measurement uncertainty is the quantitative evaluation of the reasonable values that are associated with a measurement result. It is generally accepted that no measurement is exact.
  • the measurement value depends on factors including the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the object were to be measured several times, by the same operator and in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming that the measuring system has sufficient resolution to distinguish between the values. Thus, a measured value may not correspond to the actual value of the measured part, and measurement uncertainty is a probabilistic expression of this margin of doubt in a particular measurement value.
  • Measurement uncertainty can include two values, an interval and a confidence level.
  • the interval characterizes the range or probabilistic distribution representing the possible actual value of the measurand based on the measurement value, where measurand as used herein refers to a physical, manufactured part or assembly of parts that is being measured.
  • the interval is sometimes expressed as the measured value plus or minus a value, though the positive and negative distances from the measured value defining the interval do not have to be the same.
  • the confidence level characterizes the level of certainty that the actual value of the measurand is within the interval. As such, measurement uncertainty is an indicator of the quality and reliability of measurement results. All measurements are subject to uncertainty and a measurement result is complete only when accompanied by the associated uncertainty.
  • nonconformity of measurement A with the designated tolerances can be proven. Because the uncertainty intervals for measurements B, C, and D overlap with values both within the tolerance limits and outside of these limits, conformity or nonconformity of measurements B, C, and D cannot be proven, even though measurement B is outside of the upper tolerance limit, measurement C is between nominal and the upper tolerance limit, and measurement D is equal to nominal. Based on the measurement value and uncertainty interval, conformity of measurement E with the designated tolerances can be proven.
  • measurements D and E have the same measurement value but the uncertainty interval of measurement D is larger than the uncertainty interval of measurement E. As illustrated by measurements D and E, it is more likely that the measurand will be determined to be out of tolerance as the uncertainty interval increases, even if the (unknown) actual measurand value is within tolerance.
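  • A minimal sketch of the conformity logic discussed above, assuming a symmetric uncertainty interval for simplicity (asymmetric intervals are also possible per the discussion above); all values and the conformity function are illustrative:

```python
def conformity(measured, uncertainty, lower_tol, upper_tol):
    """Classify a measurement given its uncertainty interval and tolerance limits.

    measured: measured value; uncertainty: half-width of the uncertainty interval
    (assumed symmetric here); lower_tol / upper_tol: absolute tolerance limits.
    """
    lo, hi = measured - uncertainty, measured + uncertainty
    if lower_tol <= lo and hi <= upper_tol:
        return "conformity proven"
    if hi < lower_tol or lo > upper_tol:
        return "nonconformity proven"
    return "indeterminate"

# Two measurements with the same value but different uncertainty intervals,
# echoing measurements D and E above (nominal 10.000, tolerance +/- 0.010):
print(conformity(10.000, 0.011, 9.990, 10.010))  # larger interval -> indeterminate
print(conformity(10.000, 0.002, 9.990, 10.010))  # smaller interval -> conformity proven
```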
  • the calibration data for manufacturing systems and metrology devices may or may not be accurate, and may change over time.
  • measurements D and E are each taken by metrology device F. If the smaller uncertainty interval E represents the uncertainty interval of the device provided by the calibration data but the larger uncertainty interval D represents the actual uncertainty of the device in operation, measurement E may be erroneously proven in-tolerance based on interval E while the actual measurement lies along interval D above the upper tolerance limit or below the lower tolerance limit. This can result in shipment and/or assembly of a "conforming" part that is, in reality, nonconforming. Conversely, if the larger uncertainty interval D represents the uncertainty interval of the device provided by the calibration data but the smaller uncertainty interval E represents the actual uncertainty of the device in operation, the device may be excluded from measuring parts that it is actually capable of measuring within tolerance.
  • parts are inspected both before shipping to a customer and upon receipt by the customer. If a supplier provides a part to a customer with a conformance report, but the receiving inspection produces a non-conformance report, either (1) the part is shipped back to the supplier or (2) repeated measurements must be taken to identify whether the part is actually conforming and whether the error occurred at the level of the supplier or the customer. This generates inefficiency and potential strain on relationships within the supply chain.
  • the MLM system 110 can analyze feature-based inspection reports to determine an overall interval of measurement uncertainty in a particular set of measurements. This interval reflects a collective uncertainty that is attributable to a manufacturing system used to manufacture the inspected part, a metrology device used to inspect the part, and the inspection operator who carried out the inspection. The MLM system 110 can then isolate a particular portion of that interval attributable to the human inspection operator, for example by removing the uncertainties attributable to the manufacturing system used to manufacture the inspected part and the metrology device used to inspect the part. These device-related uncertainty scores may initially be based on calibration data provided by manufacturers or testers of the devices; however, the MLM system 110 can update these values to reflect device wear after learning the uncertainties attributable to particular human inspectors involved in the process.
  • Figure 4 illustrates an example process 400 for determining and utilizing uncertainty calculations to improve manufacturing process efficiency, and can be implemented by the analytics engine 150 in some embodiments.
  • the analytics engine 150 can obtain or generate a number of feature-based inspection reports each associated with an inspection operator, measurement device, and/or manufacturing system.
  • feature-based inspection reports are associated with each of a manufacturing system used to manufacture the inspected part, a metrology device used to inspect the part, and the inspection operator who carried out the inspection.
  • a feature-based inspection report may be associated with only one or two of these pieces of information.
  • the process 400 may use inspections of a number of geometric features conducted within a particular tolerance range (e.g., 1/1000th of an inch) to determine uncertainty within a particular tolerance range rather than for a particular geometric feature.
  • the analytics engine 150 can aggregate information from feature-based reports based on geometric feature and/or associated parameters, as discussed in more detail with respect to the examples of Table 1.
  • the data can be aggregated appropriately for identifying a target measurement uncertainty associated with one or more of the manufacturing system, metrology device, inspector, and feature size range. It will be appreciated that inspection data sets can be generated using different units of measurement, for example inches and millimeters, and aggregated data sets are standardized so that all measurements are converted to the same unit of measurement.
  • the analytics engine 150 can identify or calculate measurement uncertainty associated with the manufacturing system, metrology hardware, and/or inspection operator, across all feature sizes or at a determined size range.
  • An example of a process for calculating the uncertainty associated with an inspector is discussed in more detail with respect to Figure 5, and an example process for calculating the uncertainty associated with a manufacturing system and/or a metrology device is discussed in more detail with respect to Figure 6.
  • machine uncertainty can be obtained from default calibration data provided from the machine manufacturer.
  • the uncertainty can be represented, in some embodiments, as an interval representing a range of measurements above and/or below the actual measurement value, where the (unknowable) real measurement of the part is likely to fall within the range.
  • Example aggregated data sets and example meanings or significances of the resulting calculated measurement uncertainties are illustrated in Table 1, below. These examples are meant to provide an overview of how different sets of data can be aggregated so that analysis will provide specific insights into manufacturing and inspection process accuracies and capabilities, and it will be appreciated that other data sets can be aggregated and analyzed as desired.
  • the process 400 can generate uncertainty scores for specific equipment/metrology device/inspector combinations in order to provide recommendations to manufacturers for meeting specific tolerances on specific geometric features, beneficially enabling reduction of parts scrapped due to usage of manufacturing systems, metrology devices, or inspectors that cannot with certainty manufacture or measure that feature at the specified tolerance.
  • the process 400 can generate two or more data sets in which one of the three associated process variables (manufacturing system, metrology device, and inspector) is varied in order to isolate the particular quantity of measurement uncertainty that is attributable to the varied process variable.
  • the three process variables can be kept the same but multiple data sets can be generated based on the nominal size of the measured feature in order to identify inspector (or, in other examples, manufacturing systems and/or metrology device) uncertainty at different size ranges.
  • the MLM system can use the results of blocks 405-415 of the process 400 for other purposes, for example to recommend training for specific inspectors, to exclude specific inspectors from performing certain measurements, to exclude specific manufacturing systems from manufacturing certain parts, to exclude specific metrology hardware from measuring certain parts, to alert designated users that manufacturing system or metrology device uncertainty deviates from the calibration value provided by the machine manufacturer, or to provide an output comparison report of all inspectors for review by inspection management personnel.
  • the analytics engine 150 can generate an uncertainty report representing feature-based uncertainty scores attributable to particular manufacturing systems, metrology devices, and/or inspectors or of combinations of manufacturing systems, metrology devices, and/or inspectors. These reports can include uncertainty scores attributable to the equipment and/or personnel of a single company or from a number of different suppliers in a supply chain relationship. These feature-based uncertainty scores can be sorted in some embodiments so that the report can be easily assessed by a user to identify the most precise manufacturing systems, metrology devices, and/or inspectors.
  • block 420 can be performed by iterating between blocks 422 (example implementation discussed with respect to Figure 5) and 424 (example implementation discussed with respect to Figure 6) to solve for the identified variables.
  • the analytics engine 150 can identify inspector-specific uncertainty values using fixed machine uncertainty values.
  • the analytics engine 150 can identify the geometric features and associated tolerances of a part.
  • the analytics engine 150 can compare the tolerances of the geometric features to feature-based uncertainty scores in one or more uncertainty reports to provide one or more recommendations regarding manufacturing setups capable of manufacturing and inspecting the part.
  • a manufacturing setup can include specific manufacturing system, metrology device, and inspector combinations to implement the manufacturing and inspection lifecycle of the part. If any uncertainty score in a report exceeds a threshold percentage of a tolerance, the process 400 can move to block 435 to provide an indication regarding one or more inspection operators and/or combinations to exclude from manufacture and/or inspection of the part.
  • Block 440 can involve filtering, from the recommendations set, any manufacturing setups where the manufacturing system or metrology device is not suitable for manufacturing or inspecting the material of the part (e.g., certain systems may not be rigid enough to manufacture Invar parts, or optical metrology systems may not be suitable for inspecting highly reflective materials such as Kapton).
  • the indications and recommendations can be managed by the multi-tenant manager 130 in some embodiments.
  • the percentage of tolerance can be 10%, 30%, 50%, 70%, or 100%, to name a few examples, and can be varied based on balancing the specific needs of a particular manufacturing-inspection cycle.
  • the combined uncertainty score of the manufacturing system, metrology device, and human inspector involved in making a particular geometric feature is 10% or less of the smallest tolerance specified for that feature in engineering schematics (e.g., where a certain part has three cylinders each associated with a different tolerance, the uncertainty score should be 10% or less of the smallest tolerance).
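  • As an illustrative sketch of the tolerance-threshold filtering described above, a recommendation set could be filtered as follows; the setup names, data layout, and the recommend_setups function are hypothetical:

```python
def recommend_setups(setups, feature_tolerances, max_fraction=0.10):
    """Filter (manufacturing system, metrology device, inspector) combinations.

    setups: list of dicts with "name" and "combined_uncertainty" (same units as tolerances).
    feature_tolerances: tolerances specified for the feature; the smallest one governs.
    max_fraction: combined uncertainty must not exceed this fraction of that tolerance.
    """
    limit = max_fraction * min(feature_tolerances)
    recommended, excluded = [], []
    for s in setups:
        (recommended if s["combined_uncertainty"] <= limit else excluded).append(s["name"])
    return recommended, excluded

setups = [
    {"name": "CNC-1 / LaserTracker-A / Inspector-X", "combined_uncertainty": 0.0008},
    {"name": "CNC-2 / PCMM-B / Inspector-Y", "combined_uncertainty": 0.0030},
]
# Smallest tolerance is 0.010, so the 10% limit is 0.001: the first setup is recommended.
print(recommend_setups(setups, feature_tolerances=[0.010, 0.015, 0.020]))
```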
  • the analytics engine can implement block 425 to analyze its geometric features and associated tolerances.
  • the user of the OEM who uploads the part may or may not also be the user designated to manage the OEM's supply chain, e.g. by selecting suppliers (or in-house equipment and personnel) to manufacture particular parts.
  • the MLM system can send a notification to the user(s) designated by the OEM as managing supply chain and/or part creation, wherein the notification includes an indication that new engineering schematics were uploaded and a user-selectable option to identify any in-house manufacturing setups or suppliers capable of manufacturing and inspecting the part.
  • the analytics engine can determine capabilities of the in-house and supplier manufacturing setups relating to manufacturing and inspecting the geometric features of the part.
  • the MLM system may generate the described recommendations automatically upon detection of a new uploaded engineering schematic, or may automatically generate recommendations per user-specified rules, e.g. wanting to receive recommendations for all parts including cylinders.
  • the analytics engine can access the user data repository 180 to identify known suppliers of the OEM (or a subset who have known capability for producing this type of part) and/or any in-house assets of the OEM.
  • the analytics engine can implement blocks 405-420 to generate an uncertainty report representing the capability of inspectors, metrology devices, and/or manufacturing systems of the suppliers with respect to the identified features of the part.
  • the analytics engine can implement blocks 425, 430, and 440 to recommend one or more suppliers (and particular equipment and/or personnel at these suppliers) that can both manufacture and inspect the geometric features of the part within the designated tolerances.
  • a manufacturing setup includes at least one manufacturing system, at least one metrology device, and at least one human inspector that will take raw materials through the stages of manufacturing and inspection.
  • a manufacturing setup includes a single manufacturing system, a single metrology device, and a single human inspector.
  • two or more manufacturing systems, metrology devices, or inspectors may be needed for different geometric features of a part.
  • some parts may require several different types of metrology devices to measure various geometric features - e.g., surface contours vs. thickness - and thus a manufacturing setup recommendation can include a complete set of the devices and associated inspectors that would be needed to manufacture and inspect the part. If the MLM system determines (e.g., based on predetermined inspection rules relating to particular metrology devices) that one metrology device cannot measure everything on a part, the MLM system can look for the devices that would best suit individual features.
  • the analytics engine can implement block 425 to analyze its geometric features and associated tolerances, can access the user data repository 180 to identify manufacturing systems, metrology devices, and inspectors of the manufacturing company, can implement blocks 405-420 to generate an uncertainty report for the identified inspectors, metrology devices, and manufacturing systems with respect to the identified features, and can implement blocks 425, 430, and 440 to recommend one or more combinations of manufacturing systems, metrology devices, and inspectors at the manufacturing company that can both manufacture and inspect the geometric features of the part within the designated tolerances.
  • the process 500 can be implemented by the analytics engine 150 in some embodiments.
  • the feature-specific measurements in the data described with respect to Figure 5 can be aggregated, in some embodiments, from inspection data standardized via process 200.
  • the process 500 may use inspections of a number of geometric features conducted within a particular tolerance range (e.g., 1/1000th of an inch) to determine uncertainty within a particular tolerance range rather than for a particular geometric feature.
  • Process 500 can be used to identify human inspector capabilities per geometric feature, and some embodiments can get even more granular, for example identifying capabilities per feature in combination with specific metrology device(s) and/or manufacturing system(s).
  • each inspection in the MLM system 110 can be tied to a human inspector (user), metrology device used to inspect the part, and manufacturing system used to make the part. For a given inspector, all inspections they have taken across all parts and assemblies are tied to that inspector in the MLM system databases.
  • the analytics engine 150 can identify an aggregated data set associated with an inspector for a geometric feature.
  • the aggregated data set can include a number of measurements taken by the inspector of the geometric feature.
  • the geometric feature measurements can involve the same part, different parts in a run, and/or parts manufactured based on different models. Each measurement can be associated with a measured value, a nominal value for that part surface or portion, and one or more tolerance limits.
  • Block 505 can be performed, in some embodiments, after the analytics engine 150 identifies that a threshold number of inspections of the feature have been obtained by the inspector. In one example the threshold can require the aggregated data set to include at least 50-100 inspections of the feature.
  • the analytics engine 150 can perform accuracy calculations to determine a first portion of the uncertainty score of the inspector.
  • the analytics engine 150 can calculate the deviation of the measurement from the nominal value and can then calculate the percentage of that deviation from the associated tolerance. In instances where there is both an upper tolerance limit and a lower tolerance limit, the deviation can be calculated as a percentage of the upper tolerance limit if the deviation is above nominal or can be calculated as a percentage of the lower tolerance limit if the deviation is below nominal.
  • the analytics engine 150 can calculate the mean of the percentage deviations of tolerance calculated at block 510A.
  • Blocks 510A and 510B can be programmatically combined into a single function in some embodiments.
  • the calculation of a "mean" represents one way to determine an aggregate value for a particular variable.
  • root mean squared or standard deviation can be used instead of mean.
  • the analytics engine 150 can calculate the mean tolerance based on the absolute values of the tolerances factored into the calculation of the mean percentage deviation of tolerance.
  • the analytics engine 150 can store the mean percentage deviation of tolerance and the mean tolerance in association with the inspector in user data repository 180.
  • inspector A measures cylinder 1 at 7/1000 of an inch deviation from nominal, and the associated tolerance for cylinder 1 is +/- 10/1000 inches (plus or minus ten thousandths of an inch).
  • Inspector A also measures cylinder 2 at -6/1000 of an inch deviation from nominal, and the associated tolerance for cylinder 2 is +/- 20/1000 inches.
  • the mean tolerance can be calculated as (10/1000 + 20/1000)/2 = 15/1000 of an inch.
  • the mean percentage deviation of tolerance of inspector A for cylinders is therefore 50% (the mean of 70% of tolerance for cylinder 1 and 30% of tolerance for cylinder 2) and the mean tolerance is 15/1000 of an inch.
  • Inspector A's accuracy score can be stated as 50% of a 15/1000 inch tolerance, meaning that 50% of the time inspector A can be expected to measure within tolerance of a 15/1000 inch tolerance.
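  • A minimal sketch of the accuracy calculation in blocks 510A-510C, reproducing the inspector A example above; the accuracy_score function and its input layout are assumptions for illustration:

```python
def accuracy_score(measurements):
    """Compute mean percentage deviation of tolerance and mean tolerance.

    measurements: list of dicts with "deviation" (measured minus nominal) and
    signed tolerance limits "upper_tol" / "lower_tol" relative to nominal.
    """
    pct_devs, tols = [], []
    for m in measurements:
        # Deviations above nominal are compared to the upper limit, below nominal to the lower.
        tol = m["upper_tol"] if m["deviation"] >= 0 else m["lower_tol"]
        pct_devs.append(abs(m["deviation"] / tol))
        tols.append(abs(tol))
    return sum(pct_devs) / len(pct_devs), sum(tols) / len(tols)

# Inspector A: 7/1000 inch deviation on a +/- 10/1000 tolerance,
# and -6/1000 inch deviation on a +/- 20/1000 tolerance.
cyls = [
    {"deviation": 0.007, "upper_tol": 0.010, "lower_tol": -0.010},
    {"deviation": -0.006, "upper_tol": 0.020, "lower_tol": -0.020},
]
mean_pct, mean_tol = accuracy_score(cyls)
print(f"{mean_pct:.0%} of a {mean_tol:.3f} inch tolerance")  # 50% of a 0.015 inch tolerance
```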
  • the analytics engine 150 can calculate an uncertainty score for inspector A associated with each geometric feature in the aggregated data.
  • the uncertainty score can be represented as an interval of measurements in which inspector A can be expected to measure.
  • the analytics engine 150 can calculate a mean upper deviation value and a mean lower deviation value for measurements taken of the feature.
  • the analytics engine 150 can calculate both (1) mean deviation above nominal based on the values and number of deviations above nominal and (2) mean deviation below nominal based on the values and number of deviations below nominal.
  • a data set associated with an inspector and a feature of size may include only measurements above nominal or measurements below nominal.
  • features of position or positional features are specified with only a single unsigned tolerance value.
  • the feature position can be a vector axis of the geometric feature, and the associated tolerance provides a cylindrical tolerance zone around the feature axis.
  • the feature position can be a point within the geometric feature, and the associated tolerance provides a spherical tolerance zone around the feature point.
  • the analytics engine 150 can first calculate the mean "upper" deviation based on the mean of the measurement deviations.
  • Analytics engine 150 can then calculate the mean "lower" deviation based on the mean of the measurement deviations for measurements falling between nominal and the mean "upper" deviation.
  • the mean "upper" and "lower" deviations can be negative values.
  • the analytics engine 150 can set an initial uncertainty range between the calculated mean upper and lower deviation values.
  • the analytics engine 150 can calculate mean machine uncertainty scores.
  • the mean machine uncertainty scores include both (1) a mean manufacturing uncertainty score generated based on the uncertainty score of the manufacturing system associated with each feature measurement, and (2) a mean metrology uncertainty score generated based on the uncertainty score of the metrology device associated with each feature measurement. It will be understood that the associated manufacturing system was used to manufacture the feature and the associated metrology device was used to measure the feature. In some circumstances different manufacturing systems can be used to manufacture different geometric features of the same part, for example by swapping out different cutters in a CNC. These uncertainty scores can be represented as an interval of probable part measurements associated with the machine's manufacture/inspection.
  • the analytics engine 150 can adjust the initial range by subtracting the mean manufacturing system interval and the mean metrology device interval from the initial range.
  • Block 515D operates to remove "known" machine uncertainty from the estimated measurement uncertainty attributable to the inspector.
  • This adjusted mean range - the span of the interval between the mean upper and mean lower deviations, minus estimated machine uncertainties - represents where the inspector can be expected to measure that feature in the future.
  • as an example, if the initial mean range for inspector A is 0.011 mm, the mean range for the manufacturing systems is 0.004 mm, and the mean range for the metrology devices is 0.002 mm, then the portion of the interval that is likely attributable to inspector A is 0.011 - 0.004 - 0.002 = 0.005 mm.
  • block 515D can optionally include adjusting the mean range (minus machine uncertainty) to account for environmental variables including CTE (coefficient of thermal expansion), machining/inspecting setup, and machining/inspecting temperatures.
  • One embodiment of the process 500 can attribute 50% of the remaining uncertainty to environmental variables. Considering the above example, the portion of the interval attributed to inspector A would be reduced to 0.0025 mm.
  • data associated with the inspections in the MLM system 110 can indicate that machining and/or inspection of parts in the aggregated data sets were performed in temperature-controlled environments, for example at recommended shop temperatures of 68 degrees Fahrenheit, and the percentage attributed to environmental factors may be reduced. In other circumstances temperature variations during and/or between manufacture and inspection can be known and the percentage attributed to environmental factors may be increased accordingly.
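  • A simplified sketch of blocks 515A-515E, reproducing the 0.011 mm example above; the function name, the symmetric treatment of upper and lower deviations, and the fixed 50% environmental share are assumptions for illustration:

```python
def inspector_uncertainty_interval(deviations, mean_mfg_interval, mean_metrology_interval,
                                   environmental_fraction=0.5):
    """Estimate the uncertainty interval attributable to a human inspector for one feature.

    deviations: measurement deviations from nominal for this inspector and feature.
    mean_mfg_interval / mean_metrology_interval: mean machine uncertainty intervals
    (from calibration data or process 600). environmental_fraction is the share of
    the remainder attributed to environmental variables (illustrative 50% here).
    """
    upper = [d for d in deviations if d >= 0]
    lower = [d for d in deviations if d < 0]
    mean_upper = sum(upper) / len(upper) if upper else 0.0
    mean_lower = sum(lower) / len(lower) if lower else 0.0
    initial_range = mean_upper - mean_lower
    # Remove "known" machine uncertainty from the inspector's range.
    adjusted = max(initial_range - mean_mfg_interval - mean_metrology_interval, 0.0)
    return adjusted * (1.0 - environmental_fraction)

# 0.011 mm initial range, 0.004 + 0.002 mm machine uncertainty, 50% to environment -> 0.0025 mm.
print(round(inspector_uncertainty_interval([0.006, -0.005], 0.004, 0.002), 4))
```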
  • the analytics engine 150 can store the adjusted range in association with the inspector and feature in the user data repository 180 as the inspector's uncertainty score for the specific geometric feature.
  • Blocks 515A-515E can be repeated for each geometric feature.
  • the calculations of block 515 do not account for whether the features are from the same part or different parts, as each feature measurement is treated as a separate piece of data.
  • the aggregated data set can be partitioned based on feature and further based on size of feature, associated manufacturing systems and/or associated measurement devices, and blocks 515A-515E can be repeated for each partitioned data set.
  • the analytics engine 150 can calculate a mean of the uncertainty scores generated by block 515 for a number of different geometric features.
  • This mean can represent a global uncertainty score for the inspector and can be used in the MLM system 110 as a metric of the inspector's capabilities, for example to recommend the inspector for certain projects or rank the inspector relative to other inspectors.
  • this global uncertainty score would be calculated across all geometric features, manufacturing systems, and metrology devices.
  • Optional block 520 can be performed to utilize the determined uncertainty scores of the inspector within a manufacturing-inspection environment.
  • the uncertainty score can include at least three values: mean percentage deviation of tolerance, mean tolerance, and at least one uncertainty interval.
  • instead of mean, other metrics may be used, for example root mean squared or standard deviation.
  • the feature-specific uncertainty measures for the human inspectors may be pre-computed in advance of determining to generate a capability recommendation for a particular part at block 520. Beneficially, this can enable the system to deliver the recommendation to the designated user while avoiding excessive latency that may result from performing the disclosed uncertainty-measure-isolation calculations in real time.
  • the analytics engine 150 can identify the tolerance range associated with a feature to be measured. This can be a range between upper and lower tolerance values for features of size or a range between nominal and tolerance for features of position.
  • the analytics engine 150 can multiply the tolerance range by the inspector percentage deviation of tolerance calculated at block 510B.
  • the analytics engine 150 can take the multiplied tolerance range and adjust the range by the inspector uncertainty range for the feature as calculated at block 515D. This can represent an expected range of measurements that the inspector could obtain when measuring the feature.
  • the analytics engine 150 can determine whether the expected range is within the specified tolerance. If so, the process 500 moves to block 520E and the inspector can be recommended for measuring the feature, as the inspector is capable of measuring the part in tolerance assuming that the part is, in fact, within tolerance. If not, the process 500 moves to block 520F and the inspector can be recommended to not measure the feature, as measurements obtained by the inspector are likely to be out of tolerance even if the actual part is within tolerance. Such recommendations can be output by the multi-tenant manager 130 to one or more designated users.
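  • The block 520 capability check described above might be sketched as follows, under the assumption that the expected measurement range is the tolerance range scaled by the inspector's mean percentage deviation of tolerance plus the inspector's uncertainty interval; all names and numbers are hypothetical:

```python
def can_inspector_measure(tolerance_range, mean_pct_deviation, inspector_interval,
                          specified_tolerance):
    """Decide whether to recommend an inspector for measuring a feature.

    tolerance_range: span between upper and lower tolerance (or nominal to tolerance
    for positional features). mean_pct_deviation: inspector score from block 510B.
    inspector_interval: inspector uncertainty range from block 515D.
    specified_tolerance: the tolerance the expected range must stay within.
    """
    expected_range = tolerance_range * mean_pct_deviation + inspector_interval
    return expected_range <= specified_tolerance

# Hypothetical feature with a +/- 0.010 inch tolerance (0.020 inch total range):
print(can_inspector_measure(0.020, 0.50, 0.003, 0.010))  # False -> recommend not to measure
print(can_inspector_measure(0.020, 0.30, 0.002, 0.010))  # True  -> recommend for measurement
```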
  • Some embodiments of the process 500 can use inspector uncertainty scores to recommend training of one or more inspectors using specific hardware and/or measuring specific geometric features in order to help inspectors improve their measurement accuracy. For example, if inspectors A, B, and C measure the same part, inspector A is 30% deviation of tolerance, B is 50% deviation of tolerance, and C is 75% deviation of tolerance (on the same manufactured part or generally) then their relative uncertainty scores reveal who needs more training.
  • inspector scores can first be calculated using equipment/device uncertainty data from manufacturer- provided calibration data to get preliminary inspector scores. Once the inspector scores stabilize in the system, the analytics engine can use the stabilized scores to identify whether equipment/device uncertainty corresponds to that provided by the manufacturer, and can use actual equipment/device uncertainty to refine inspector uncertainties. This feedback loop is discussed more with respect to Figure 7.
  • blocks 505-515 of the process 500 can be updated based on newly acquired inspection data.
  • the process 500 can be repeated in real time as new inspection data is acquired.
  • the process 500 can be repeated at predetermined intervals, for example once per day or once per week.
  • the process 500 can be repeated once a threshold amount of new inspection data is acquired, for example after every 50 or 100 new inspections for a feature.
  • the process 500 can be executed in response to a user request that requires generating an uncertainty score.
  • Figure 6 illustrates an example process 600 for generating uncertainty scores associated with a target machine in a manufacturing-inspection environment, where the target machine can be a manufacturing system or a metrology device.
  • the process 600 can be implemented by the analytics engine 150 in some embodiments.
  • the feature-specific measurements in the data described with respect to Figure 6 can be aggregated, in some embodiments, from inspection data standardized via process 200.
  • the analytics engine 150 can identify inspection data associated with a target machine. For a manufacturing system, this can include inspection data sets of parts manufactured using the equipment. For a metrology device, this can include inspection data sets of parts inspected by the metrology device.
  • Analytics engine 150 can perform block 610 for each inspection in the inspection data set to determine whether or not to include the inspection in the aggregated data used in block 615.
  • the analytics engine 150 can determine whether there is an uncertainty score associated with the inspector of the inspection data set.
  • the uncertainty score for the inspector can be calculated as described with respect to the process 500 or by any other suitable calculations.
  • decision block 610A can include simply identifying whether there is an uncertainty score associated with the inspector for that feature in the user data repository 180. Inspectors may not have associated uncertainty scores in circumstances in which insufficient inspection data has been collected by the inspector for that feature.
  • block 610A can include determining whether the inspector's uncertainty score has stabilized and including the inspection data in the aggregated data set only for stabilized scores. For example, analytics engine can determine that the uncertainty score varies less than a threshold percentage over a period of time or over a number of updated calculations. As another example, analytics engine can determine that the uncertainty score stops varying beyond a predetermined decimal place, for example five decimal places for measurements in inches or three decimal places for measurements in millimeters.
  • process 600 can use and/or be limited to data from inspections performed by coordinate measurement machines (CMMs) operated without human inspectors, as such devices typically have known negligible uncertainty intervals of only a few microns.
  • some implementations of block 610A can identify whether the inspector of an inspection is a CMM, and if so may include the inspection in the aggregated data.
  • some embodiments may partition aggregated data into multiple sets to provide a number of uncertainty scores for the target machine.
  • the aggregated data can be partitioned into two or more sets based on temperature at which the parts were manufactured and/or inspected in order to provide temperature-specific uncertainty scores for the target machine.
  • the aggregated data can be partitioned into two or more sets based on part material, machine tooling (diamond cutters, carbide cutters, high speed steel cutters, and the like), machine programming, proximity to massive objects (such as mountains, as the gravitational field can pull a non-digital gauge in a metrology device towards it thus affecting the resultant measurements), and the like.
  • the analytics engine 150 can move to block 615.
  • the measurements in the aggregated data can be associated with a specific geometric feature, and block 615 can be repeated using the measurements for each feature in the aggregated data.
  • the analytics engine 150 can calculate a deviation of each measurement from the associated nominal value and calculate a mean value of these deviations.
  • Analytics engine 150 can further calculate a mean inspector uncertainty based on the uncertainty scores of the inspectors who contributed to the measurements of the geometric feature under consideration and the number of inspectors who contributed. Analytics engine can then subtract the mean inspector uncertainty from the mean deviation to calculate the uncertainty score of the target machine for that geometric feature.
  • if process 600 is determining the measurement uncertainty interval of a manufacturing system, block 615 can also include calculating a mean of values representing default and/or calculated uncertainty for metrology devices associated with the aggregated data and subtracting this value together with the mean inspector uncertainty score. If process 600 is determining the measurement uncertainty interval of a metrology device, block 615 can also include calculating a mean of values representing default and/or calculated uncertainty for manufacturing systems associated with the aggregated data and subtracting this value together with the mean inspector uncertainty score.
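  • A minimal sketch of the block 615 calculation for one geometric feature; the machine_uncertainty function and its input layout are illustrative assumptions rather than the patented algorithm:

```python
def machine_uncertainty(measurements, other_machine_uncertainty=0.0):
    """Estimate the uncertainty interval attributable to a target machine for one feature.

    measurements: dicts with "measured", "nominal", and "inspector_uncertainty"
    (the stabilized score of the inspector who took the measurement).
    other_machine_uncertainty: mean uncertainty of the other machine in the chain
    (metrology device when scoring a manufacturing system, and vice versa).
    """
    mean_deviation = sum(abs(m["measured"] - m["nominal"]) for m in measurements) / len(measurements)
    mean_inspector = sum(m["inspector_uncertainty"] for m in measurements) / len(measurements)
    return max(mean_deviation - mean_inspector - other_machine_uncertainty, 0.0)

data = [
    {"measured": 25.406, "nominal": 25.400, "inspector_uncertainty": 0.0025},
    {"measured": 25.395, "nominal": 25.400, "inspector_uncertainty": 0.0030},
]
print(round(machine_uncertainty(data, other_machine_uncertainty=0.001), 4))
```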
  • the analytics engine 150 can move to block 620 to calculate the mean of the uncertainty scores.
  • this mean of uncertainty scores is stored in association with the machine (manufacturing system or metrology device) in the asset portfolio of the user data repository 180.
  • the analytics engine 150 can cooperate with the multi-tenant manager to provide an alert to a designated user based on the mean uncertainty.
  • the alert can include an alert that the machine requires recalibration and/or that operation of the machine should be halted.
  • the analytics engine 150 can output a command through the network 108 to the machine or to a computing device operating the machine, where the command halts operation of the machine. This can include ceasing manufacture using a manufacturing system or disabling a metrology device for use in further inspections.
  • Such an alert or command can be based on the analytics engine 150 (1) comparing the mean of uncertainty scores to the default uncertainty score in the calibration data of the machine and (2) identifying that the mean of uncertainty scores is 30% or more greater than the default uncertainty score. Additionally or alternatively, such an alert or command can be based on the analytics engine 150 (1) performing outlier detection on a data set including a number of uncertainty scores calculated for the machine and (2) determining that the mean of uncertainty scores is an outlier of the data set.
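  • A simple sketch of these alerting criteria, combining the 30%-over-calibration check with a basic standard-deviation outlier test (the disclosure does not prescribe a particular outlier method); names and thresholds are illustrative:

```python
def needs_recalibration(mean_uncertainty, default_uncertainty, history,
                        relative_threshold=0.30, outlier_sigma=3.0):
    """Flag a machine when its learned uncertainty drifts from its calibration data.

    mean_uncertainty: current mean of calculated uncertainty scores for the machine.
    default_uncertainty: uncertainty from the manufacturer's calibration data.
    history: previously calculated uncertainty scores, used for simple outlier detection.
    """
    drift = mean_uncertainty >= (1.0 + relative_threshold) * default_uncertainty
    outlier = False
    if len(history) >= 2:
        mean = sum(history) / len(history)
        std = (sum((h - mean) ** 2 for h in history) / len(history)) ** 0.5
        outlier = std > 0 and abs(mean_uncertainty - mean) > outlier_sigma * std
    return drift or outlier

print(needs_recalibration(0.0040, 0.0030, [0.0029, 0.0031, 0.0030]))  # True: 33% above default
```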
  • Another use of a machine uncertainty score obtained via the process 600 includes making determinations about manufacturing system capability to produce in-tolerance parts. For example, before manufacturing a part the analytics engine 150 can determine, for any feature of the part, whether the uncertainty range of any equipment in the asset portfolio associated with a user exceeds the specified feature tolerance range. If the tolerance range is exceeded by machine uncertainty, the MLM system 110 can alert a designated user to not use that equipment for manufacturing the part.
  • Another use of a machine uncertainty score obtained via the process 600 includes making determinations about metrology device capability to generate in-tolerance measurements of parts. For example, before inspecting a part the analytics engine 150 can determine, for any feature of the part, whether the uncertainty range of any metrology device in the asset portfolio associated with a user exceeds the specified feature tolerance range. If the tolerance range is exceeded by metrology device uncertainty and/or the uncertainty of a combination of metrology device and a specific inspector, the MLM system 110 can alert a designated user to not use that device and/or inspector for measuring the part.
  • the MLM system 110 can use the uncertainty scores output from the processes 500, 600 to provide recommendations for specific manufacturing system, metrology device, and inspector combinations for handling the manufacturing-inspection lifecycle of a part.
  • the MLM system can recommend combinations having combined uncertainty range that is at most 30% of tolerance for a part or of any feature of the part.
  • Figure 7 depicts an example feedback loop for refining inspector and machine uncertainty scores.
  • the feedback loop can be implemented by the analytics engine 150 in some embodiments, and may be pre-computed in advance of preparing particular manufacturing setup recommendations.
  • Data described with respect to Figure 7 can be aggregated, in some embodiments, from inspection data standardized via process 200.
  • the analytics engine 150 can calculate inspector uncertainty scores representing an interval of likely actual measurement values for parts measured by each of a number of inspectors.
  • the uncertainty score for each inspector can be calculated as described with respect to blocks 505-515 of process 500 in some embodiments.
  • the uncertainty score associated with an inspector can be adjusted to remove machine uncertainty scores.
  • the machine uncertainty scores represent the measurement uncertainty likely attributable to the manufacturing systems used to manufacture the measured parts and the metrology devices used to perform the inspection, which can initially be obtained from the machines' calibration data or can be updated based on obtained metrology data via process 600.
  • the inspector uncertainty scores are generated based on current machine uncertainty scores.
  • the analytics engine can take the uncertainty scores calculated at block 705 and feed this data 710 into block 715.
  • the analytics engine 150 can calculate machine uncertainty scores representing an interval of likely actual measurement values for parts manufactured by each of a number of machines and/or measured by each of a number of metrology devices.
  • the uncertainty score for each machine can be calculated as described with respect to blocks 605-625 of process 600 in some embodiments.
  • machine uncertainty scores can be calculated based on mean deviation from nominal minus inspector uncertainty or a fraction of inspector uncertainty. Thus, the machine uncertainty scores are generated based on current inspector uncertainty scores.
  • the analytics engine can take the machine uncertainty scores calculated at block 715 and feed this data 720 back into block 705. As machine uncertainty scores are refined, the uncertainty scores of inspectors whose inspection data sets involve these machines are refined by re-calculating the inspector uncertainty based on the updated machine uncertainty scores.
  • the analytics engine can take the updated inspector uncertainty scores calculated at block 705 and feed this data 710 back into block 715. As inspector uncertainty scores are refined, the uncertainty scores of machines having inspection data sets involving these inspectors are refined by re-calculating the machine uncertainty based on the updated inspector uncertainty scores.
  • the feedback loop can be initiated by the MLM system periodically, for example once per day, or can be executed as new inspection data is obtained.
  • the feedback loop can continue in some embodiments until convergence. Convergence can be defined as a cessation of change, within a threshold level, in uncertainty scores between successive iteration of blocks 705 and 715.
  • the order of magnitude of the threshold level can be set based on metrology device resolution in some implementations, for example at the fifth decimal place of measurement values in inches or the third decimal place of measurement values in millimeters.
  • the feedback loop can run for a number of iterations of blocks 705 and 715, for example 10 iterations, 50 iterations, 100 iterations, or more.
  • the number of iterations can also be dynamically determined based on the amount of new inspection data relative to a previous run of the feedback loop, for example by performing one iteration for every 10, 50, or 100 new inspections.
  • the feedback loop can be feature-specific, and multiple versions of the feedback loop can be performed sequentially or in parallel to refine feature-specific inspector and machine uncertainty scores.
  • Some embodiments of the feedback loop may use global uncertainty scores for inspectors and machines, for example a mean uncertainty score across all features, and can adjust known feature-specific scores based on the finalized global scores output from the feedback loop.
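  • The feedback loop of Figure 7 might be sketched abstractly as follows, with recompute_inspector and recompute_machine standing in for blocks 705 and 715; the convergence tolerance and the toy recomputation functions are illustrative assumptions:

```python
def refine_scores(initial_inspector, initial_machine, recompute_inspector,
                  recompute_machine, tolerance=1e-5, max_iterations=100):
    """Iterate inspector and machine uncertainty scores until they converge.

    recompute_inspector(machine_scores) and recompute_machine(inspector_scores)
    stand in for blocks 705 and 715; tolerance approximates the convergence
    criterion of scores ceasing to change at a chosen decimal place.
    """
    inspector, machine = dict(initial_inspector), dict(initial_machine)
    for _ in range(max_iterations):
        new_inspector = recompute_inspector(machine)
        new_machine = recompute_machine(new_inspector)
        delta = max(
            max(abs(new_inspector[k] - inspector[k]) for k in inspector),
            max(abs(new_machine[k] - machine[k]) for k in machine),
        )
        inspector, machine = new_inspector, new_machine
        if delta < tolerance:
            break
    return inspector, machine

# Toy recomputations that pull each score toward a fixed point:
ri = lambda m: {"inspector_A": 0.0030 + 0.5 * m["CNC-1"]}
rm = lambda i: {"CNC-1": 0.0020 + 0.5 * i["inspector_A"]}
print(refine_scores({"inspector_A": 0.0050}, {"CNC-1": 0.0040}, ri, rm))
```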
  • Some embodiments of the MLM system 110 can use machine learning to be able to automatically predict certain conditions based on relationships in manufacturing data.
  • Computing devices can use machine learning models representing data relationships and patterns, such as functions, algorithms, systems, and the like, to process input (sometimes referred to as an input vector), and produce output (sometimes referred to as an output vector) that corresponds to the input in some way.
  • a model is used to generate a likelihood or set of likelihoods that the input corresponds to a particular value.
  • artificial neural networks including deep neural networks, may be used to solve pattern-recognition problems that are difficult to solve using rule-based models.
  • a neural network typically includes an input layer, one or more intermediate layers, and an output layer, with each layer including a number of nodes.
  • the nodes in each layer connect to some or all nodes in the subsequent layer and the weights of these connections are typically learned from data during the training process.
  • Each individual node may have a summation function which combines the values of all its inputs together.
  • a node may be thought of as a computational unit that computes an output value as a function of a plurality of different input values.
  • Nodes may be considered to be "connected" when the input values to the function associated with a current node include the output of functions associated with nodes in a previous layer, multiplied by weights associated with the individual "connections" between the current node and the nodes in the previous layer.
  • nodes of adjacent layers may be logically connected to each other, and each logical connection between the various nodes of adjacent layers may be associated with a respective weight.
  • the weighting allows certain inputs to have a larger magnitude than others (e.g., an input value weighted by a 3x multiplier may be larger than if the input value was weighted by a 2x multiplier). This allows the model to evolve by adjusting the weight values for inputs to the node thereby affecting the output for one or more hidden nodes.
  • an optimal set of weight values is identified for each node that provides a model having, for example, a desired level of accuracy in generating expected outputs for a given set of inputs.
  • a neural network may multiply each input vector by a matrix representing the weights associated with connections between the input layer and the next layer, and then repeat the process for each subsequent layer of the neural network.
  • a neural network is a type of feed-forward machine learning model.
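  • A minimal sketch of such a feed-forward pass, assuming NumPy is available; the layer sizes, ReLU activation, and random weights are illustrative and not specified by the disclosure:

```python
import numpy as np

def forward_pass(x, weights, biases):
    """Propagate an input vector through the layers of a feed-forward network.

    weights/biases: per-layer weight matrices and bias vectors. Each layer multiplies
    the previous layer's output by its weight matrix; hidden layers use a ReLU
    activation and the final layer is left linear.
    """
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = np.maximum(z, 0.0) if i < len(weights) - 1 else z
    return a

rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((1, 8))]  # 4 inputs -> 8 hidden -> 1 output
biases = [np.zeros(8), np.zeros(1)]
print(forward_pass(rng.standard_normal(4), weights, biases))
```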
  • the parameters of a neural network can be set in a process referred to as training.
  • a neural network can be trained using training data that includes input data and the correct or preferred output of the model for the corresponding input data.
  • Sets of individual input vectors ("mini-batches") may be processed at the same time by using an input matrix instead of a single input vector.
  • the neural network can repeatedly process the input data, and the parameters of the network (e.g., the weight matrices) can be modified in what amounts to a trial-and-error process until the neural network produces (or "converges" on) the correct or preferred output.
  • the modification of weight values may be performed through a process referred to as "back propagation.”
  • Back propagation includes comparing the obtained model output with the expected model output and then traversing the model to determine the difference between the expected node output that produces the expected model output and the actual node output.
  • An amount of change for one or more of the weight values may be identified using this difference to reduce the difference between the expected model output and the obtained model output.
  • back-propagation compares the output produced by a node (e.g., by applying a forward pass to input data) with an expected output from the node (e.g., the expected output defined during training). This difference can generally be referred to as a metric of "error.”
  • the difference of these two values may be used to identify weights that can be further updated for a node to more closely align the model result with the expected result.
  • a predicted output can be obtained by doing a forward pass using new input data input into a trained model.
  • the forward pass involves multiplying the large weight matrices representing the connection weights between nodes of adjacent layers by vectors corresponding to one or more feature vectors (from the input layer) or hidden representations (from the subsequent hidden node layers).
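  • By way of a non-limiting illustration of the forward pass described above, the following sketch (in Python with NumPy, neither of which is required by the disclosure) propagates a single input vector through hypothetical weight matrices; the layer sizes, random weights, and ReLU activation are illustrative assumptions only.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward_pass(x, weights, biases):
    """Propagate an input vector through each layer by multiplying it by that
    layer's weight matrix, adding a bias, and applying a nonlinear activation
    (ReLU for hidden layers, identity for the output layer)."""
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = relu(z) if i < len(weights) - 1 else z  # linear output layer
    return a

# Hypothetical 6-input, 8-hidden, 3-output network with random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 6)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
prediction = forward_pass(rng.normal(size=6), weights, biases)
print(prediction)
```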
  • a neural network according to the present disclosure is trained to recognize patterns in the inspection data of parts in a run in order to predict whether a future, yet-to-be-manufactured part in the run will be out-of-tolerance.
  • such techniques allow manufacturers to adjust their manufacturing systems to avoid the financial, material, time, and energy costs of waste resulting from parts that require reworking or scrapping.
  • Previous statistical process control systems identify problems through trends in past inspections and therefore lack the tools to enable manufacturers to predict and prevent scrap in-line with real-time manufacturing processes.
  • real-time analysis of a manufacturing process refers to analysis of inspections of parts as the inspections are generated.
  • the inspected parts are typically inspected shortly after their manufacture, for example before creation of a next part based on the same model as the inspected part. Some processes may begin manufacture of the next part during inspection of the previous part. Some processes may inspect every part created, while other processes may inspect a regular or periodic sampling of parts created (e.g., every five parts, every ten parts, etc.). For purposes of the discussion below, the generation of the training and input data can omit consideration of non-inspected parts.
  • Turning to FIG. 8A, depicted is an example wireframe view 801 of a computer model of a part together with GD&T data for the part, which is provided to illustrate and not limit the examples presented herein regarding training a neural network to predict out-of-tolerance parts. It will be appreciated that the features, properties, and tolerances depicted are for purposes of example, and the disclosed machine learning can be applied to manufacturing processes for any type of part.
  • the part shown in Figure 8A comprises a sphere, a rectangular base having two holes, and a frustoconical support member connecting the sphere to the base.
  • the GD&T or other inspection data for determining whether this part conforms to tolerances is depicted for a number of geometric features of the part, including the sphere, three planes, one cylinder's diameter, and the cylinder's center position. Each of these features can be measured, for example, using a datum having three properties - location of a point in x, y, z space (where each of the x, y, and z coordinates is a property). Other properties can include radius, diameter, position, profile, and maximum material condition (MMC), to name a few examples.
  • the inspection data can specify a tolerance for each feature, or can specify separate tolerances for the properties of each feature.
  • Figure 8B depicts an example timeline 805 of different parts A1-A8 in a run that were manufactured based on the model shown in Figure 8A at different times T1-T8.
  • a "run” is a set of sequentially-manufactured parts that are based on the same model and made by the same manufacturing system.
  • “sequential” and “sequentially” refer to parts that were manufactured in a particular temporal order.
  • a "lot” can be considered as a set of parts manufactured based on the same model, but not necessarily by the same manufacturing system.
  • part A1 was produced first, followed sequentially by A2-A8. Sequential production refers to the fact that time T8 follows (is temporally after) time T7, time T7 follows time T6, time T6 follows time T5, and so on.
  • parts A1-A8 can be used to train a machine learning model as described below.
  • parts A1-A8 can be selected for use in training data because they were manufactured successively, that is, with manufacture of part A2 consecutively following manufacture of part A1, manufacture of part A3 consecutively following manufacture of part A2, and so on.
  • other parts in the run may have been manufactured between various pairs of parts in the A1-A8 sequence. For instance, part A7 may have been manufactured consecutively before part A8, part A6 may have been manufactured five parts before part A7, part A5 may have been manufactured ten parts before part A6, and so on. Other spacings between parts in the dataset can be used in other examples.
  • a first subset of parts A1-A7 was manufactured successively while, for a second subset, other parts in the run were manufactured between the parts in each sequential pair.
  • parts A5-A7 may be successively manufactured before part A8, while part A4 was manufactured five parts before part A5, part A3 was manufactured five parts before part A4, part A2 was manufactured ten parts before part A3, and part A1 was manufactured fifteen parts before part A2.
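  • As a non-limiting illustration of the mixed part spacing described above, the following sketch selects inspection indices from a run so that the most recent parts are consecutive while earlier parts are sampled at increasing gaps; the gap values mirror the example spacing and the function name is a hypothetical illustration.

```python
def select_training_indices(last_index, gaps=(1, 1, 1, 5, 5, 10, 15)):
    """Walk backwards from the most recently inspected part, stepping by the
    given gaps, so that recent parts are consecutive and older parts are
    sampled more sparsely. Returns indices ordered from oldest to newest."""
    indices = [last_index]
    for gap in gaps:
        indices.append(indices[-1] - gap)
    return list(reversed(indices))

# For a run whose most recent inspected part has index 100:
print(select_training_indices(100))  # [62, 77, 87, 92, 97, 98, 99, 100]
```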
  • a training or input data set can reflect both recent manufacturing conditions as well as more distant manufacturing conditions.
  • Figure 8C depicts a visual representation of an example set of training inspection data 810 that can be used to train a machine learning model to predict out-of-tolerance parts before their manufacture.
  • Training inspection data 810 can include inspection reports of different parts A1-A8 in the run 805.
  • An input data set 815 includes a sequence of the inspection reports of in-tolerance parts A1-A7, and output data 820 includes the inspection report of out-of-tolerance part A8.
  • An inspection report can have a number of different features corresponding to geometric features of the part, illustrated as surface 1 and surface 2.
  • Each feature can have at least one property, illustrated as dimensions in the x, y, and z directions.
  • the properties can be stored in association with a measurement value (the actual measured value of an inspected part) and with GD&T information including lower tolerance, nominal value, and upper tolerance.
  • each property can be stored in association with a tuple of the measurement value and GD&T values.
  • Some embodiments can omit the nominal value, or can include the nominal value and have allowable deviations in the place of upper and lower tolerance values in the tuple.
  • the features and properties and numbers of features and properties can vary based on the geometry and GD&T of a particular part.
  • some or all features in an inspection report may be used for model training and prediction.
  • key features relating to out of tolerance conditions can be identified manually by a user or by automated trend analysis (for example by the analytics engine 150), and the training data can include such features extracted from larger inspection reports.
  • Comparison of the measurement values to the upper and lower tolerance values is used to determine whether a part is in or out of tolerance, as described herein. If a single property is out of tolerance, the entire part can be considered out of tolerance. As illustrated, in-tolerance measurements can be displayed in green (or using a first visual representation) and out-of-tolerance measurements can be displayed in red (or using a second visual representation that is different from the first visual representation). For simplicity of illustration, units have been removed from the measurements and tolerances, and it will be appreciated that these can represent any suitable standard or metric units.
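  • The following non-limiting sketch illustrates one way the property tuple and the in/out-of-tolerance comparison described above could be represented, including the optional binary (0/1) encoding mentioned below; the class and field names are illustrative assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PropertyInspection:
    """One measured property with its GD&T values (the tuple described above)."""
    measurement: float
    lower_tolerance: float
    nominal: float
    upper_tolerance: float

    def in_tolerance(self) -> bool:
        return self.lower_tolerance <= self.measurement <= self.upper_tolerance

def part_out_of_tolerance(properties) -> bool:
    """A part is out of tolerance if any single property is out of tolerance."""
    return any(not p.in_tolerance() for p in properties)

def encode(p: PropertyInspection) -> int:
    """Binary encoding for model training: 0 = in tolerance, 1 = out of tolerance."""
    return 0 if p.in_tolerance() else 1

# Example: surface 1, x dimension.
prop = PropertyInspection(measurement=10.12, lower_tolerance=9.90,
                          nominal=10.00, upper_tolerance=10.10)
print(prop.in_tolerance(), encode(prop))  # False, 1
```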
  • the training inspection data 810 can be provided to a machine learning model, for example a neural network, in order to train the parameters of the model to predict out-of-tolerance part A8 from in-tolerance parts A1-A7. Although seven in-tolerance parts are shown in the aggregate data 810, the aggregate data 810 can include greater or fewer numbers of in-tolerance parts in other examples. Parts A1-A7 were manufactured sequentially prior to manufacture of part A8. In one example, parts A1-A8 were manufactured successively, however as described above other parts in the run may have been manufactured between various pairs of parts in the A1-A8 sequence.
  • training can use actual measurement values and/or a binary representation for in or out of tolerance measurements (e.g., 0 for in tolerance and 1 for out of tolerance).
  • the GD&T data may additionally be used for model training.
  • a number of training inspection data sets 810 can be used to refine the parameters of the machine learning model based on manufacturing conditions leading up to production of a number of out-of-tolerance parts.
  • a model trained using a training inspection data set 810 can be used in-line with a manufacturing pipeline, such that inspection data from recently manufactured parts in a run is provided to the trained model to predict whether a future part in the run will be out of tolerance.
  • Some embodiments of such models may be used only with a particular manufacturing system that was used to manufacture the parts in the training data set.
  • Other embodiments of such models can be more generally applicable to generate predictions for a number of, or any, manufacturing systems used to create parts based off of a specific model.
  • the training inspection data sets 810 can include runs of parts based on the same model but with some runs created by different manufacturing systems.
  • Figure 9A depicts an example topology of a neural network 900 for predicting out-of-tolerance parts (or for predicting measurements that can be compared to tolerances), for example using the prediction engine 165 of Figure 1B.
  • the neural network has a preliminary statistical process control metric portion 905 and a neural network portion 910.
  • the statistical process control metric portion 905 includes a number of nodes that receive inspection data 810 and calculate statistical process control metrics, as described in more detail with respect to Figure 9B.
  • Other implementations can omit the statistical process control metric portion 905 and feed inspection data directly into the neural network portion 910.
  • the neural network portion 910 includes a number of connected layers 912, 914, 916.
  • the output layer 916 can be provided with the inspection data 820 of an out-of-tolerance part or with binary representations of in/out of tolerance conditions of properties or features of the out-of-tolerance part, such that the neural network 900 learns to predict the parameters of the out-of-tolerance part (or generally the out-of-tolerance condition) from the input inspection data 815.
  • the neural network 900 can be a feedforward artificial neural network, for example a multilayer perceptron, having a preliminary statistical process control metric portion designed to generate statistical process control metrics from input inspection data.
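  • As a rough, non-limiting sketch of the two-stage topology described above, the example below computes simple statistical process control features from a window of prior inspections and feeds them to a generic multilayer perceptron; it uses scikit-learn's MLPRegressor purely for illustration (the disclosure does not require any particular library), and the synthetic data, property names, and layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def spc_features(measurements):
    """Simple statistical process control metrics for one property across the
    window of prior inspections (analogous to the metric portion 905)."""
    m = np.asarray(measurements, dtype=float)
    mean, std = m.mean(), m.std(ddof=1)
    return [mean, std, mean - 3 * std, mean + 3 * std, m.max() - m.min()]

def build_input_vector(window):
    """window: mapping of property name -> 7 prior measurements.
    Returns one flat feature vector for the input layer (912)."""
    return np.concatenate([spc_features(v) for _, v in sorted(window.items())])

# Synthetic stand-in data: 50 samples, 2 properties, windows of 7 inspections,
# target = the next part's 2 measurements (assumed values for illustration).
rng = np.random.default_rng(0)
windows = [{"surface1_x": rng.normal(10.0, 0.02, 7),
            "surface1_y": rng.normal(5.0, 0.01, 7)} for _ in range(50)]
X = np.stack([build_input_vector(w) for w in windows])
y = rng.normal([10.0, 5.0], [0.02, 0.01], size=(50, 2))

model = MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                     max_iter=2000, random_state=0)
model.fit(X, y)
predicted_measurements = model.predict(X[:1])  # compared to tolerances downstream
print(predicted_measurements)
```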
  • an artificial neural network 900 represents a network of interconnected layers of nodes, where weights of connections between nodes can be learned through the training process.
  • a neural network may include an input layer, an output layer, and any number of intermediate, internal, or "hidden" layers between the input and output layers.
  • the first layer (referred to as the "input layer” herein) has input nodes which send data via connections to the second layer of nodes.
  • Each hidden layer can transform the received data and output the transformed data for use by a subsequent layer. The computations of these hidden layers can be considered as an encoding of patterns that enable the network to identify significant features of the inputs (e.g., how the inputs relate to the output).
  • the final layer outputs values representing the prediction of the neural network.
  • the output values may represent predicted measurements, or a likelihood that a given measurement or feature will be in tolerance.
  • a "layer" of a neural network or other machine learning model can be considered as computer-executable code that receives a set of inputs, implements a set of computations on the inputs, and provides the computationally-transformed inputs as an output.
  • the set of computations of a given node can be considered as including any weighted input connections with nodes of the previous layer and any activation function (e.g., rectified linear activation, sigmoid, hyperbolic tangent).
  • the weights in a given set of computations are learned during training using a training data set, for example training inspection data set 810.
  • the individual layers may include any number of separate nodes.
  • Each node can be considered as a computer-implemented simulation of a biological neuron, and each connection represents a link between the output of one node and the input of another.
  • Nodes of adjacent layers may be logically connected to each other by connections, represented in Figures 9A and 9B by the lines between nodes of adjacent layers. These connections may store parameters called weights that can manipulate the data in the calculations.
  • Each individual node may have a summation function which combines the values of all its weighted inputs together, and an activation function that operates on the summed weighted input to transform the summed weighted input into the output of that node.
  • a node may be thought of as a computational unit that computes an output value as a function of a number of different input values.
  • the depicted numbers of nodes, represented by circles in Figures 9A and 9B, are for purposes of illustration, and network 900 can have greater or fewer nodes depending upon the structure of the training data.
  • Nodes may be considered to be connected when the input values to the function associated with a current node include the output of functions associated with nodes in a previous layer, multiplied by weights associated with the individual connections between the current node and the nodes in the previous layer.
  • When each node of one layer is connected to every node of the subsequent layer, the layers are termed to be "fully connected." For example, turning to the depiction of Figure 9A, layer 912 is fully connected to layer 914, and layer 914 is fully connected to layer 916. In various implementations the layers 912, 914, 916 can be fully connected or partially connected.
  • Network 900 can include one or more convolutional layers, for example to have inputs for various properties of a geometric feature passed through a convolutional layer with the receptive field corresponding to the number of parameters of a feature.
  • the neural network may perform a "forward pass" of the data through the layers to generate output values.
  • Each data element may be a value, such as a floating point number or integer.
  • the forward pass includes multiplying the input values by learned weights associated with connections between the nodes of the input layer and nodes of the next layer, and applying an activation function to the results. The process is then repeated for each subsequent neural network layer.
  • the outputs 820 of the neural network can be compared to an expected output, and error rates identified based on the comparison can be fed back into the neural network via back propagation to adjust the weights of the neural network such that the output more accurately matches the expected output.
  • the artificial neural network 900 is an adaptive system that is configured to change its structure (e.g., the connection configuration and/or weights) based on information that flows through the network during training, and the weights of the hidden layers can be considered as an encoding of meaningful patterns in the data.
  • input data 815 from the inspection reports of a run of in-tolerance parts can be provided to the statistical process control metric portion 905.
  • This portion 905 can generate one or more statistical process control metrics based on the input inspection data 815.
  • Such metrics include sigma, 3 sigma, and 6 sigma values to name a few examples, as will be discussed in more detail with respect to Figure 9B.
  • Each output can be provided to a node of the next layer.
  • the measurement values for properties of specific geometric features of a measured part can be grouped together and provided to feature-specific nodes of the statistical process control metric portion 905.
  • A desired subset of the generated statistical process control metrics, and optionally the inspection data values, can be provided to nodes of an input layer 912 of the neural network portion 910.
  • the desired metrics and the inspection data values of a property can be processed into a single value and input into a single node of the input layer 912.
  • the desired metrics and the inspection data values can each be provided to a separate node of the input layer 912.
  • the desired metrics and pass/fail encodings of the inspection data values can be provided together to a single node or individually to separate nodes.
  • Some embodiments can optionally provide one or more process parameters to the input layer 912. These process parameters can include the unique identifier associated in the MLM system 110 with one or more of a human inspection operator, a metrology device 102, or a manufacturing system 104 involved in creating/measuring the part. Alternatively, training data can be segmented into subsets based on one or more of these process parameters and used to train a number of different versions of the network 900, with the resulting trained networks used specifically for manufacturing processes later involving the same combination of inspector, metrology device, and manufacturing system as the training data.
  • Each of the input nodes of the input layer 912 may be mapped to a corresponding one of the nodes of the statistical process control metric portion 905.
  • the input nodes are fully interconnected to the hidden nodes, and the hidden nodes are fully connected to the output nodes.
  • an interconnection may represent a piece of information learned about the two interconnected nodes.
  • a connection between a hidden node and an output node may represent a piece of information learned that is specific to the output node.
  • the interconnection may be assigned a numeric weight that can be tuned (e.g., based on a training dataset), rendering the artificial neural network 900 adaptive to inputs and capable of learning.
  • the nodes of the hidden layer 914 can retain information (e.g., specific variable values and/or transformative functions) for a set of input values and output values used to train the artificial neural network 900, referred to herein as parameters of the hidden layer. This retained information may be applied to a new set of input inspection data in order to predict whether a next manufactured part will be in or out of tolerance.
  • the hidden layer 914 allows knowledge about the input nodes of the input layer 912 to be shared amongst the output nodes of the output layer 916. To do so, an activation function f is applied to the input nodes through the hidden layer. In an example, the activation function f may be nonlinear.
  • a particular non-linear activation function is selected based on cross-validation. For example, given known example pairs (x, y), where x ∈ X and y ∈ Y, a function f: X → Y is selected when such a function results in the best matches (e.g., the best representations of actual correlation data).
  • Although one hidden layer 914 is shown, the network 900 can have two or more hidden layers in other implementations.
  • Each of the output nodes in the output layer 916 can be mapped to a particular portion of the inspection data 810 of an out-of-tolerance part.
  • actual measurement values or in-or-out-of-tolerance indicators can be fed into the output nodes. If actual measurement values are fed to the output nodes, in use the values of the output nodes of the neural network 900 can represent likely measurement values for a particular property of the next manufactured part. These values can be compared to known tolerances to determine whether the next part will be in or out of tolerance. For example, each output predicted measurement can be compared to a low tolerance and/or high tolerance value. Beneficially, this can create a measurement prediction neural network that remains applicable even if the tolerance for a given measurement changes.
  • the values of the output nodes of the neural network 900 can indicate whether (1) or not (0), or a likelihood that (a value between 0 and 1), a particular property will be out of tolerance for the next manufactured part.
  • this type of neural network can have a representation of the tolerance values encoded in its learned parameters, and beneficially it may bypass the extra step of having to compare output predicted measurements to given tolerances.
  • a pass/fail output node can be provided with a binary representation of the fail condition of the out-of-tolerance part in order to identify correlations between input inspection data and process parameters and the overall pass or fail of the part inspection.
  • In response to determining that the output node of any property indicates an out-of-tolerance condition (for example by showing a binary value of "1"), the prediction engine 165 can generate an out-of-tolerance alert. In another embodiment, the prediction engine 165 can generate an out-of-tolerance alert in response to determining that the output node of any property indicates a greater-than-threshold likelihood of an out-of-tolerance condition.
  • a threshold can be 30%, 50%, 80%, or another percentage of likelihood, and reflects a tradeoff between the desire to avoid scrap and the desire to keep a manufacturing process moving if there is a possibility that the next part will be in tolerance.
  • This threshold can be determined automatically for example based on the cost of each part (where higher cost parts would be associated with lower acceptable thresholds for likely out-of-tolerance conditions), the nature of the product including the part (e.g., a vehicle, consumer product, medical device), or known process parameters (e.g., desired throughput, maximum allowable scrap, etc.). In other examples the threshold can be user-specified based on user preferences. As described herein, the out-of-tolerance alert can be used to halt or correct the manufacturing process before a scrap part is made.
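  • A brief, non-limiting sketch of the threshold-based alerting decision described above, assuming the output nodes yield per-property likelihoods between 0 and 1; the threshold value, property names, and function name are illustrative assumptions.

```python
def check_for_alert(property_likelihoods, threshold=0.5):
    """property_likelihoods: mapping of property name -> predicted likelihood
    (0..1) that the property will be out of tolerance on the next part.
    Returns the list of properties that meet or exceed the alert threshold."""
    return [name for name, p in property_likelihoods.items() if p >= threshold]

likelihoods = {"surface1_x": 0.12, "surface1_y": 0.81, "hole1_diameter": 0.05}
flagged = check_for_alert(likelihoods, threshold=0.5)
if flagged:
    print(f"Out-of-tolerance alert: predicted failures on {flagged}")
```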
  • the specific number of layers and the specific number of nodes per layer shown in Figure 9A are illustrative only and are not intended to be limiting. In some neural networks, different numbers of internal layers and/or different numbers of nodes in the input, hidden, and/or output layers may be used. For example, in some neural networks the layers may have hundreds, thousands, or even millions of nodes based on the number of data points in the input and/or output training data.
  • Although Figure 9A depicts a fully connected neural network portion 910 having fully connected layers 912, 914, and 916, in variations of the disclosed neural networks there may be two or more partially connected layers and/or different numbers of fully connected layers.
  • each hidden layer 914 may have the same number or different numbers of nodes.
  • the input layer 912 and/or the output layer 916 can each include more nodes than the hidden layer(s).
  • the input layer 912 and the output layer 916 can include the same number or different number of nodes.
  • the input vectors, or the output vectors, used to train the neural network may each include n separate data elements or "dimensions," corresponding to the n nodes of the input layer 912 (where n is some positive integer).
  • the artificial neural network 900 may also use a cost function to find an optimal solution (e.g., an optimal activation function).
  • the optimal solution represents the situation where no solution has a cost less than the cost of the optimal solution.
  • the cost function includes a mean-squared error function that minimizes the average squared error between an output f(x) and a target value y over the example pairs (x, y).
  • a backpropagation algorithm that uses gradient descent to minimize the cost function may be used to train the artificial neural network 900.
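  • To make the mean-squared-error cost and gradient-descent idea concrete, the following non-limiting sketch performs a few gradient steps on a single linear layer; the actual network 900 would have multiple layers trained with full backpropagation, and the dimensions and learning rate shown are assumptions.

```python
import numpy as np

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def gradient_step(W, x, target, lr=0.01):
    """One gradient-descent update for a linear layer pred = W x (bias omitted)."""
    pred = W @ x
    error = pred - target           # gradient of the squared error w.r.t. pred, up to a constant scale
    grad_W = np.outer(error, x)     # chain rule back to the weight matrix
    return W - lr * grad_W, mse(pred, target)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 6))
x, target = rng.normal(size=6), rng.normal(size=3)
for _ in range(5):
    W, cost = gradient_step(W, x, target)
    print(round(cost, 4))           # cost decreases across iterations
```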
  • the granularity of the network 900 can vary depending upon its desired prediction specificity. For example, at a most granular level measurement values for specific properties can be provided to the network 900 as depicted in the illustrated example. Another embodiment of the network 900 can have a similar topology but can instead receive pass/fail representations of the property measurements. A less granular version of the network 900 can receive pass/fail representations of part features instead of feature properties. The reduction in granularity can achieve faster processing times for both training and implementation of the network 900 at the expense of a more granular understanding of process trends. The output of the network can be correspondingly granular or less granular.
  • the network 900 can be trained to make predictions for a specific part (e.g., a number of manufactured parts based on the same engineering specification), and optionally for that specific part using a specific manufacturing setup.
  • the resulting trained version of network 900 may be applied to inspection data from parts in a run in order to predict whether a next manufactured part will be in or out of tolerance.
  • the part order and selection of part spacing in the analyzed run can be structured to match the part order spacing of the training data.
  • such a trained network 900 can be used to predict out-of-tolerance conditions with greater accuracy than simple trend analysis alone.
  • the trained version of network 900 can be used in a system that has in-line metrology, for example where a part (or set of parts) in a run are measured prior to manufacture of another part in the run.
  • in-line systems may optionally include automated manufacturing cells where robotic arms, conveyor belts, or other transportation means take parts from the manufacturing system where they are created to the metrology device that measures them.
  • the disclosed machine learning techniques can trigger automated process interruptions or adjustments based on predicted measurement values.
  • Figure 9B depicts an example node 905 for generating a set of statistical process control metrics for input into the machine learning layers of the network of Figure 9A.
  • the node 905 is specific to an individual feature (for example surface 1).
  • the node 905 receives x, y, and z measurement values from seven inspection reports (dataset 815), where X1-X7 are assembled into an input matrix and provided to an "x" input node, Y1-Y7 are assembled into an input matrix and provided to a "y" input node, and Z1-Z7 are assembled into an input matrix and provided to a "z" input node.
  • the node 905 uses these aggregated sets of inspection data to calculate a number of different statistical process control metrics, shown as sigma values (-3 sigma, sigma, 3 sigma, and 6 sigma), standard deviation, mean, range, process capability metrics (Cp, Cpk, Cr), and process performance metrics (Pp, Ppk, Pr). In other embodiments a smaller subset of metrics, different statistical process control metrics (for example, Cpm), or only one of these metrics may be calculated.
  • the computational structure of the node 905 can be optimized so that discrete functions (e.g., summations, calculation of standard deviation) are performed only once and then reused across a number of later nodes that require the output of the function.
  • the statistical process control metrics generated by the node 905 are reflective of various process capabilities and performances.
  • Statistical process control is a method of quality control in which statistical methods are employed in order to (1) evaluate the process, and (2) control the process to make as much conforming product as possible with a minimum of waste (parts that require reworking or scrapping).
  • nominal and tolerance values from the input inspections can be provided to the node 905 for calculation of the statistical process control metrics.
  • Cp represents process capability to meet two-sided tolerance limits
  • Cpk represents the process capability index, which is an adjustment of Cp for the effect of non-centered distribution
  • Cr represents the capability ratio used to summarize the estimated spread of the system compared to the spread of the tolerance limits. Larger Cp index values indicate smaller likelihoods that the next manufactured part will be out of tolerance.
  • Cpk reflects a measurement of how close the manufacturing process is to its targets (e.g., nominal values) and how consistent the process is around its average performance. With Cr, lower values indicate smaller output spreads, and multiplying the Cr value by 100 shows the percent of the tolerances that are being used by the variation in the process.
  • Pp represents process performance in meeting two-sided tolerance limits
  • Ppk represents the process performance index, which is an adjustment of Pp for the effect of non-centered distribution
  • Pr represents the performance ratio used to summarize the actual spread of the system compared to the spread of the tolerance limits.
  • Pp is a measure of spread only, and a process with a narrow spread (a high Pp) may not meet tolerance requirements if it is not centered within the tolerance range.
  • Pp should be used in conjunction with Ppk to account for both spread and centering.
  • Pp and Ppk will be equal when the process is centered on its target value. If they are not equal, the smaller the difference between these indices, the more centered the process is.
  • Lower Pr values indicate smaller output spreads, and multiplying the Pr value by 100 shows the percent of the tolerances that are being used by the variation in the process.
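  • As a non-limiting illustration of the capability metrics described above, the sketch below computes Cp, Cpk, and Cr from a window of measurements and two-sided tolerance limits using the conventional formulas; for brevity it uses the overall sample standard deviation, whereas Cp/Cpk conventionally use an estimate of within-subgroup variation (the Pp/Ppk family uses the same form of formula with the overall standard deviation).

```python
import statistics

def capability_metrics(measurements, lower_tol, upper_tol):
    """Common capability indices from a window of measurements and two-sided
    tolerance limits. Cp = (USL - LSL) / (6 * sigma); Cpk uses the distance
    from the mean to the nearer limit; Cr is the reciprocal of Cp."""
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    cp = (upper_tol - lower_tol) / (6 * sigma)
    cpk = min(upper_tol - mu, mu - lower_tol) / (3 * sigma)
    cr = 1 / cp  # spread of the process relative to the spread of the tolerances
    return {
        "mean": mu, "sigma": sigma,
        "minus_3_sigma": mu - 3 * sigma, "plus_3_sigma": mu + 3 * sigma,
        "Cp": cp, "Cpk": cpk, "Cr": cr,
    }

print(capability_metrics([10.01, 10.02, 9.99, 10.00, 10.03, 9.98, 10.01],
                         lower_tol=9.90, upper_tol=10.10))
```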
  • Six Sigma is a set of techniques and tools within statistical process control for identifying and removing the causes of defects (e.g., out-of-tolerance parts that require reworking or must be scrapped) and minimizing variability in manufacturing.
  • the various sigma values illustrated in node 905 represent limits relative to a mean. For example, sigma represents the limit of data within one standard deviation of the mean, 3 sigma is the limit of data within three standard deviations above the mean, -3 sigma is the limit of data within three standard deviations below the mean, and 6 sigma is the limit of data within six standard deviations of the mean.
  • the disclosed neural network 900 can be trained such that its parameters represent relationships between inspection values, process capabilities and performance, and the ultimate creation of a next in or out of tolerance part.
  • the disclosed neural network 900 can be trained to make predictions about a next part, or another future part in the run (e.g., the 5th part out, 10th part out, or the like, depending upon the rate at which parts are manufactured and measured).
  • Figure 9C depicts example timelines 840A-840D of part runs that can be used to generate different sets of training data for training different neural networks 900A-900D in an ensemble 950 for predicting out-of-tolerance parts.
  • the neural networks 900A-900D can have any of the topologies discussed with respect to Figures 9 A and 9B.
  • the timelines 840A-840D use a sliding window 825A-825D through past inspections to generate different timings of part tolerance predictions.
  • the timelines 840A-840D depict a run of parts ending in part AN manufactured at time TN. Moving sequentially backwards along the timelines, part AN-1 was manufactured before part AN at time TN-1, part AN-2 was manufactured before part AN-1 at time TN-2, part AN-3 was manufactured before part AN-2 at time TN-3, part AN-4 was manufactured before part AN-3 at time TN-4, part AN-5 was manufactured before part AN-4 at time TN-5, and part AN-6 was manufactured before part AN-5 at time TN-6.
  • the inspection of part AN is the identified output.
  • the measured values of the features/properties of part AN are set to the output nodes of the neural networks 900A-900D.
  • the values of the output nodes of the neural networks 900A-900D represent predictions (e.g., predicted measurements or predicted out of tolerance conditions)
  • the sliding windows 825A-825D represent the varying sets of input data that are provided to the corresponding one of networks 900A-900D.
  • the spacing of the last inspection in the input inspection data relative to the inspection of part AN varies between the networks 900A-900D as shown by the sliding windows 825A-825D.
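  • The following non-limiting sketch shows one way the sliding windows 825A-825D could be assembled from a list of inspections ordered oldest to newest; the window length of seven and the horizon values are illustrative assumptions.

```python
def ensemble_windows(inspections, window_len=7, horizons=(1, 2, 3, 4)):
    """Return one input window per prediction horizon. For horizon h, the last
    inspection in the window is h parts before the part being predicted
    (the final inspection in the list is treated as the prediction target)."""
    target_index = len(inspections) - 1
    windows = {}
    for h in horizons:
        end = target_index - h + 1          # slice end is exclusive
        windows[h] = inspections[end - window_len:end]
    return windows

# Inspections for parts A1..A12 (most recent last); the target is the final part.
run = [f"inspection_A{i}" for i in range(1, 13)]
for horizon, window in ensemble_windows(run).items():
    print(horizon, window)
```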
  • In timeline 840A, the last inspection in the sliding window 825A is of part AN-1, manufactured one part in the run before part AN. Accordingly, the parameters of network 900A are tuned to predict whether, based on input data from window 825A, the next manufactured part will be in or out of tolerance.
  • a window 825A of preceding inspections can be used with the inspection of each out-of-tolerance part such that a training data set includes a number of sets of "passed" inspections leading up to respective "failed" inspections.
  • In timeline 840B, the last inspection in the sliding window 825B is of part AN-2, manufactured two parts in the run before part AN. Accordingly, the parameters of network 900B are tuned to predict whether, based on input data from window 825B, the part manufactured two parts down the run will be in or out of tolerance.
  • In timeline 840C, the last inspection in the sliding window 825C is of part AN-3, manufactured three parts in the run before part AN. Accordingly, the parameters of network 900C are tuned to predict whether, based on input data from window 825C, the part manufactured three parts down the run will be in or out of tolerance.
  • In timeline 840D, the last inspection in the sliding window 825D is of part AN-4, manufactured four parts in the run before part AN. Accordingly, the parameters of network 900D are tuned to predict whether, based on input data from window 825D, the part manufactured four parts down the run will be in or out of tolerance. Accordingly, the parameters of networks 900A-900D may differ in order to provide predictions of increasingly distant, future parts in the run.
  • the outputs of the networks 900A-900D can be used in combination to predict whether part AN will be out of tolerance. For example, if the output nodes of the networks 900A-900D provide predicted measurement values, these values can be averaged to determine the final predicted measurement values for part AN. As another example, corresponding output nodes of the networks 900A-900D can be mapped to the same feature or property of the part, and agreement between corresponding output nodes of various combinations of the networks 900A-900D regarding whether that feature or property will be out of tolerance can be used to increase a confidence value associated with the prediction.
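  • A rough, non-limiting sketch of combining ensemble outputs as described above: predicted measurements are averaged per property, and the fraction of networks that flag a property as out of tolerance serves as a simple confidence value; the combination rules and tolerance values shown are assumptions.

```python
import numpy as np

def combine_ensemble(predictions, lower_tol, upper_tol):
    """predictions: list of per-network predicted measurement vectors (one value
    per property, same ordering across networks). Returns the averaged prediction
    and a per-property confidence equal to the fraction of networks that agree
    the property will be out of tolerance."""
    preds = np.asarray(predictions, dtype=float)
    mean_pred = preds.mean(axis=0)
    out = (preds < lower_tol) | (preds > upper_tol)
    confidence = out.mean(axis=0)   # fraction of networks flagging each property
    return mean_pred, confidence

preds = [[10.11, 4.99], [10.12, 5.00], [10.09, 5.01], [10.13, 5.00]]
mean_pred, conf = combine_ensemble(preds,
                                   lower_tol=np.array([9.90, 4.95]),
                                   upper_tol=np.array([10.10, 5.05]))
print(mean_pred, conf)  # first property flagged by 3 of 4 networks -> 0.75
```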
  • the networks 900A-900D in the ensemble 950 can generate their results in parallel in some embodiments, that is, the entire ensemble 950 can be used at once to generate the prediction for part AN when part AN is the next part scheduled for manufacture.
  • the networks 900A-900D in the ensemble 950 can generate their results in real-time as the inspection sets in the corresponding sliding windows 825A-825D are completed.
  • the prediction engine 165 can provide alerts that the fourth part out is predicted to be out-of-tolerance (e.g., using network 900D), and then can follow-up that alert with further alerts indicating agreement or disagreement with the prediction from subsequent uses of networks 900C, 900B, and 900A.
  • the prediction engine 165 can increase the confidence value associated with the predicted out-of-tolerance condition.
  • the agreement can be determined at a high level (e.g., generally that the part is predicted to be out of tolerance) or more granularly (e.g., that the same identified feature or property is predicted to be out of tolerance). More granular agreement can result in a higher confidence value.
  • an alert that includes instructions to adjust the manufacturing system can be generated after a threshold confidence value is determined based on agreement between two or more of the networks in the ensemble 950.
  • the inspection composition of the sliding windows 825A-825D can involve the same number and sequence (e.g., position and spacing of the inspected parts in the run relative to the last inspected part) of inspections in some embodiments, and in other embodiments can involve different numbers and sequences.
  • Although the illustrated example shows an ensemble configured to predict whether part AN will be out of tolerance as the next part, second part out, third part out, and fourth part out, it will be appreciated that the ensemble 950 can be modified to generate such predictions at other timings than in the illustrated example (e.g., next part, five parts out, ten parts out, twenty parts out, etc.).
  • Although the ensemble 950 is depicted as using four networks 900A-900D, in other implementations the ensemble 950 can include any number of two or more networks.
  • Figure 10 depicts an example data structure 1000 for analysis of machine learning model output, for example the output of the neural network of Figures 9A and 9C.
  • the output of the trained network can provide data representing whether a future part will be in or out of tolerance.
  • An analysis module 1140 (discussed in more detail with respect to Figure 11) can receive this data from the output of the neural network and analyze the data to determine whether the part will be in or out of tolerance, and optionally to determine what actions should be taken when an out-of-tolerance part is predicted.
  • the output of the neural network can be written to a database and queried by the analysis module for analysis.
  • this writing and querying of data can add additional processing steps thereby increasing the amount of time it takes between inspection of a part and prediction of whether the next part will be in or out of tolerance.
  • some embodiments can generate a data structure 1000 like the example shown in Figure 10 in a working memory and pass this data structure 1000 directly to the analysis module. This can increase efficiency and reduce processing time relative to writing the model output to a database.
  • the type of data structure can vary depending upon the programming language used for the neural network.
  • the data structure 1000 can be a tree data structure such as an octree if the neural network is written in C++.
  • the data structure 1000 can be a JavaScript Object Notation (JSON) if the neural network is written in JavaScript.
  • the example data structure 1000 includes nine columns, and can include as many rows as there are features or properties mapped to the output nodes of the neural network 900.
  • the first column (labeled P1-PN) represents the properties mapped to the output nodes of the neural network. These values can be populated (e.g., based on the model, on GD&T data, or another format of inspection data) when the structure of the neural network is set, and may not change from prediction to prediction. If the model is revisioned or the inspection plan is updated then the first column can be updated accordingly.
  • the second column represents the measurement predicted by the neural network for each property P1-PN. These values can be dynamically populated after each prediction using the values of the corresponding output nodes of the neural network.
  • the third column (labeled N) represents the nominal value for each property P1-PN
  • the fourth column (labeled Tu) represents the upper tolerance limit for each property P1-PN
  • the fifth column (labeled TL) represents the lower tolerance limit for each property P1-PN.
  • the values in the third through fifth columns can be populated from GD&T data (or blueprint data, or another format of tolerance data) associated with the model, and may not change from prediction to prediction. If the model is revisioned or the inspection plan is updated then these columns can be updated accordingly.
  • the sixth column represents the deviation of the predicted measurement from the nominal measurement and can be dynamically and selectively populated by the analysis module after each prediction.
  • the analysis module can compare the predicted measurement to the upper and lower tolerance limits. If the predicted measurement is above the upper tolerance limit or below the lower tolerance limit, this represents an out-of-tolerance property and the analysis module can compute the deviation value.
  • the analysis module can set a bit (not shown) indicating that the part is predicted to be out of tolerance. If the predicted measurement is less than or equal to the upper tolerance limit and greater than or equal to the lower tolerance limit, this represents an in-tolerance property and the analysis module may enter a null value or no value and continue analyzing remaining data in the structure 1000.
  • the seventh column represents an identifier of the tool used to manufacture the property. Different properties of the part can be manufactured with different tooling, so the data structure 1000 can include a number of different tool IDs.
  • the analysis module can access the ID of the associated tooling and output this identifier in an alert.
  • the eighth column represents a maximum offset of the tool in column seven. This represents a maximum allowable value that the tool position can be offset from the positions specified in the default machining instructions in order to attempt compensation for a predicted manufacturing error.
  • the tool IDs and maximum offset values can be populated based on a look-up table specifying which tools create the various properties and what the maximum offset is for each tool, and may not change from prediction to prediction. If the look-up table is updated then the seventh and eighth columns can be updated accordingly.
  • the ninth column (labeled TRO) represents an offset amount recommended by the analysis module for the tool associated with a particular property. If the analysis module identifies an out-of-tolerance deviation in the sixth column for a particular property, then the analysis module can proceed to compare the deviation to the maximum tool offset value in column eight. If the deviation is less than or equal to the maximum tool offset value, then the analysis module can generate the recommended tool offset value equal to the magnitude of the deviation. If the deviation is greater than the maximum tool offset value, then the analysis module can (1) determine the difference between the deviation and the maximum tool offset value, (2) subtract the determined difference from the predicted measurement value, and (3) determine whether the adjusted predicted measurement value is above the upper tolerance limit or below the lower tolerance limit.
  • If the adjusted predicted measurement value is still above the upper tolerance limit or below the lower tolerance limit, this represents a manufacturing defect condition that cannot be corrected using the maximum allowable tool offset, and the analysis module can output a recommendation to change the tooling identified in the seventh column. If not, then the adjusted predicted measurement value is in tolerance and thus the predicted defect can be corrected using the maximum allowable tool offset. In such a situation, the analysis module can set the recommended tool offset to the maximum allowable value and output this information in an alert to a user and/or a control system 105 of the manufacturing system 104.
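  • The following non-limiting sketch shows one possible reading of the per-row analysis described above, simplified so that the maximum allowable offset is applied toward nominal and the adjusted prediction is re-checked against the tolerance limits; the function name, field names, and example values are illustrative assumptions.

```python
def analyze_row(predicted, nominal, lower_tol, upper_tol, max_offset):
    """Simplified per-row analysis: compute the deviation only for out-of-tolerance
    predictions and recommend either a tool offset (applied toward nominal) or a
    tooling change when even the maximum offset cannot compensate."""
    if lower_tol <= predicted <= upper_tol:
        return {"deviation": None, "action": "none"}        # in-tolerance property
    deviation = predicted - nominal
    if abs(deviation) <= max_offset:
        return {"deviation": deviation, "action": "offset",
                "recommended_offset": abs(deviation)}        # magnitude, toward nominal
    # Apply the maximum allowable offset toward nominal and re-check tolerance.
    adjusted = predicted - max_offset if deviation > 0 else predicted + max_offset
    if lower_tol <= adjusted <= upper_tol:
        return {"deviation": deviation, "action": "offset",
                "recommended_offset": max_offset}
    return {"deviation": deviation, "action": "change_tooling"}

print(analyze_row(predicted=10.14, nominal=10.00,
                  lower_tol=9.90, upper_tol=10.10, max_offset=0.05))
```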
  • the data structure 1000 can include greater or fewer columns depending upon the type of property, the type of output of the neural network, and the manufacturing process setup. For example, if a CMM or other low-uncertainty metrology device (e.g., operated programmatically without human involvement in each inspection) is used to generate the inspections of the parts, then inspection-related uncertainties may be negligible (e.g., around 2 microns). In such circumstances, adding another column to remove inspection-related uncertainty from the identified deviation to isolate the amount attributable to the manufacturing system can involve additional processing time without providing substantial benefits due to subtracting a negligible uncertainty value.
  • the data structure 1000 can include additional columns storing the uncertainties associated with the devices/inspectors.
  • the analysis module can subtract these uncertainties from the deviation to isolate the portion of the deviation attributable to the manufacturing system.
  • the number of columns of the data structure 1000 can be determined programmatically based on identifying the process and analyzing the associated uncertainty values.
  • Figure 11 depicts a schematic block diagram of an example of the prediction engine 165 of Figure 1B.
  • the prediction engine 165 includes a number of processing modules including model parameter module 1105, training module 1110, notification handler 1115, in-line prediction module 1135, analysis and alert module 1140, and machine instruction module 1145.
  • Each module can represent a set of computer-readable instructions, stored in a memory, together with one or more processors configured by the instructions to perform the features described below.
  • the modules can be part of a distributed architecture with the model parameter module 1105, training module 1110, and notification handler 1115 included in the MLM system 110 and the in-line prediction module 1135, analysis and alert module 1140, and machine instruction module 1145 included in the control system 105, with communications sent between the MLM system 110 and the control system 105 over network 108.
  • the modules can be implemented entirely in either the MLM system 110 or the control system 105, or can be distributed in a different configuration than illustrated.
  • the model parameter module 1105 is configured to receive data representing a CAD model, blueprint, GD&T, other inspection or part specification documentation, and any user-specified settings and analyze such data to identify the structure of the input and output nodes of the neural network 900.
  • the model parameter module 1105 can automatically identify the features and/or properties of a physical part based on the received CAD model, blueprint, or inspection documentation and can map the input and output nodes to the identified features and/or properties. In some examples, this can include all identified features and/or properties. In other examples, the model parameter module 1105 can select a subset of the features and/or properties.
  • the model parameter module 1105 can analyze an assembly model including the part model and identify any features of the part that mate with features of other parts in the assembly.
  • the input and output nodes can be mapped to these features (and/or the properties of these features).
  • a user of the MLM system 110 can specify one or more features desired for analysis by the prediction engine 165, and the input and output nodes can be mapped to these features (and/or the properties of these features).
  • the MLM system 110 can provide a user interface that enables the user to select these features from a list generated based on analyzing the CAD model, blueprint, or inspection documentation of the part.
  • the model parameter module 1105 can automatically and dynamically generate a structure of the neural network 900 that is specific to a certain part, thus initializing the training process without requiring a human operator to determine the model structure. This can be advantageous in circumstances where the user of the MLM system 110 is not familiar with programming of machine learning models, but still desires to implement in-line machine learning predictions in their manufacturing process. When the user begins a manufacturing project involving a new part or modifies an existing manufacturing project to include machine learning predictions, the model parameter module 1105 can create the structure of the neural network 900 that matches the specifications of the part based on data already provided to the MLM system 110 as described above.
  • this automated setup allows the user to implement machine learning when the user would otherwise be unable to do so. It will be appreciated that in other scenarios a human operator can be involved in the creation of the structure of the neural network 900.
  • the model parameter module 1105 can send the structure of the neural network 900 to the training module 1110.
  • the training module 1110 is configured to use inspection report data 1120 received from the control system 105 and/or accessed from data repository 170 to train the parameters of the neural network 900 as described above and/or as discussed below with respect to Figure 12A.
  • the training module 1110 can train the neural network 900 initially (e.g., before its first use in in-line manufacturing predictions) and in some embodiments can periodically or intermittently re-train the neural network, for example based on updated inspection report data 1120 received from the control system 105.
  • the training module 1110 can also monitor accuracy of the trained neural network 900 and perform re- training on an as-needed basis to maintain a threshold (e.g., user-specified or default) level of accuracy in the generated predictions.
  • the training module 1110 can send trained models 1125 to the control system 105 via network 108.
  • the model parameter module 1105 and training module 1110 can be implemented in the control system 105.
  • the in-line prediction module 1135 is configured to identify new input data for provision to the trained model 1125 based on real-time manufacturing inspection data, and to provide the new input data to the trained model 1125 to generate part inspection predictions. For example, after a part is created by a manufacturing system 104 it can be inspected by a metrology device 102. The control system 105 can control the manufacturing system 104 to either wait to create the next part, or to create a part based on a different computer model, while the created part is inspected and the inspection input into a trained model 1125.
  • in-line prediction module 1135 can be used within the context of an automated robotic manufacturing cell, for example in a control system 105 (or multiple control systems) that control operations of the cell.
  • a robotic manufacturing cell can implement a number of robotic arms to move materials to one or more manufacturing systems in the cell, for example a CNC.
  • the CNC can use tooling to create parts, and can have a number of different tooling options and a robotic system for changing out tooling.
  • the manufacturing systems in the cell can create parts from the materials based on machine-readable manufacturing instructions that operate to control the manufacturing systems.
  • the robotic arm(s) of the cell can retrieve created parts from the manufacturing systems and move the created parts to an automated metrology device, for example a CMM.
  • the automated metrology device can inspect the part based on predetermined inspection programming and output the part inspection report.
  • the robotic arm(s) of the cell can then move an inspected part from the metrology device to a predetermined location, for example a conveyor belt or storage area, depending upon the results of the inspection.
  • Such cells can be configured to create a number of different runs of parts each based on one of a corresponding number of different models, and can be configured to cycle through creating parts based on the different models.
  • the in-line prediction module 1135 can implement a number of different trained neural networks 900 (or other suitable machine learning models) in such a cell, with at least one machine learning model corresponding to each of the different part models.
  • the in-line prediction module 1135 is configured to provide the output of a neural network 900 to the analysis and alert module 1140, for example as a data structure 1000 stored in working memory as described above.
  • the analysis and alert module 1140 can be configured to implement the logic described above with respect to the data structure 1000 in order to (1) determine whether a future manufactured part will be in or out of tolerance, (2) generate a confidence value in such a prediction, (3) identify specific tooling associated with a predicted out-of-tolerance property or feature, and (4) identify any recommended corrective adjustments for the manufacturing system 104.
  • the analysis and alert module 1140 can generate an alert including such determinations for provision to the machine instruction module 1145 and/or notification handler 1115.
  • the machine instruction module 1145 is configured to control the operation of a manufacturing system 104 regarding halting manufacture of a particular part, changing out identified tooling, or continuing manufacture with identified tooling using a determined tool offset relative to the default machining instructions.
  • the notification handler 1115 is a module configured to identify any users of the MLM system 110 associated with a part, identify a subset of those users designated to receive alerts regarding the part, create a graphical user interface presenting the alert generated by the analysis and alert module 1140, and transmit data representing the graphical user interface to a user device 106 and/or electronic messaging account associated with each user of the designated subset.
  • the notification handler 1115 can access the identifiers and permissions of such users in the user data repository 180 in some embodiments.
  • the machine learning data repository 185 can be distributed between the MLM system 110 and the control system 105 as needed for storage of training data sets, trained machine learning models, new input data sets, model outputs, and any alerts generated based on the model outputs.
  • Figure 11 is discussed in the example context of the neural network 900 described above, it will be appreciated that the prediction engine 165 can implement other suitable machine learning models for predicting manufacturing performance and/or manufactured part conditions in other examples.
  • Figure 12A depicts a flow diagram of an illustrative process 1200A for training a machine learning model, for example as discussed with respect to Figures 8-9B.
  • Figure 12B depicts a flow diagram of an illustrative process for providing out-of-tolerance part predictions in the prediction engine 165 of Figure 1B via a model trained as described with respect to FIG. 12A.
  • the processes 1200A, 1200B can be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or by combinations of hardware and software.
  • the processes 1200A, 1200B can be implemented by the prediction engine 165 of the MLM system 110 in some embodiments.
  • the process 1200B can be implemented on a prediction engine running locally in the control computer of a manufacturing system or cell.
  • the prediction engine can generate the trained models locally and/or can be provided with trained models via network 108.
  • the code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.
  • the computer-readable storage medium may be non-transitory.
  • the prediction engine 165 can identify an out-of-tolerance part in a run.
  • a part can be considered out-of-tolerance if the measured value of any of its properties is outside of the range defined based on nominal and tolerance values.
  • An out-of-tolerance part may be scrapped or reworked, and either scenario represents inefficiency and waste in a manufacturing process.
  • the neural network can be trained using positive training cases (where the expected output is generated based on an in-tolerance inspection), in addition to negative training cases (where the expected output is generated based on an out-of- tolerance inspection). Some implementations can use approximately equal numbers of positive and negative cases.
  • the process 1200A moves to block 1210, and the prediction engine 165 can identify inspection data representing the inspection of the out-of-tolerance part.
  • the prediction engine 165 can also identify inspection data for a set of in-tolerance parts manufactured prior to the identified out-of-tolerance part. As discussed above, this can include a set of parts manufactured successively before the out-of-tolerance part, or manufactured at predetermined intervals leading up to the production of the out-of-tolerance part.
  • the prediction engine 165 can access a predetermined sequence specifying the spacing between parts in the run, and may also access data specifying a desired spacing between the last (most recent) part in the set and the future part for which the prediction will be generated.
  • the prediction engine 165 can use the predetermined sequence to identify the other parts in the input data relative to the last part.
  • the sequence of in-tolerance parts can include both a recent, successively-prior subset as well as a less recent subset, for example manufactured at increasingly distant past times or at increasingly earlier positions in the run sequence in order to capture both long-term and short-term data in the training set.
  • the prediction engine 165 can then access inspections of the identified parts in the set and can create the input data with the inspections in the same sequential order as the parts in the set.
  • One implementation can include the inspections of the seven parts preceding the manufacture of the out-of-tolerance part.
  • Inspection data can follow the requirements of GD&T specifications, blueprints, or other tolerance requirements, for example as set forth in an inspection plan. As such, the inspections of the parts in the runs can be expected to correspond to one another, that is, to have the same fields of data (features, properties, etc.), and corresponding portions of each inspection data set can be included in the training data.
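  • By way of non-limiting illustration, the following Python sketch shows one way the input-sequence selection described above could be implemented, assuming inspections are keyed by part sequence number and carry an "in_tolerance" flag; the data layout, field names, and the seven-part window are assumptions for the sketch rather than requirements of the disclosure.

```python
from typing import Dict, List

def select_input_inspections(
    inspections: Dict[int, dict],   # hypothetical: part sequence number -> inspection record
    oot_part_index: int,            # sequence number of the identified out-of-tolerance part
    window: int = 7,                # number of prior in-tolerance parts to include
) -> List[dict]:
    """Collect the inspections of the `window` in-tolerance parts preceding the
    out-of-tolerance part, returned in manufacturing order (oldest first)."""
    selected: List[dict] = []
    index = oot_part_index - 1
    while index >= 0 and len(selected) < window:
        record = inspections.get(index)
        # Skipping non-in-tolerance records is one design choice; a strictly
        # successive window is another, as described above.
        if record is not None and record.get("in_tolerance", False):
            selected.append(record)
        index -= 1
    if len(selected) < window:
        raise ValueError("not enough prior in-tolerance inspections for this training example")
    selected.reverse()
    return selected
```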
  • the process 1200A moves to blocks 1215, 1220, and 1225, which can be performed sequentially in any order or in parallel.
  • the prediction engine 165 can generate input data based on the inspections of the in-tolerance parts. For example, the prediction engine 165 can extract a subset of the inspection data corresponding to identified features and/or properties of interest, can correlate values in the data across the in-tolerance data sets, and can create input feature matrices or tuples using the correlated values. Prediction engine 165 can in some embodiments generate the tuples using in-or-out-of-tolerance representations of the extracted measurements rather than the actual measurement values.
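  • A minimal sketch of the binary in-or-out-of-tolerance encoding mentioned in this step (and reusable for the expected output generated in the next step), assuming each property record carries hypothetical "measured", "nominal", "tol_plus", and "tol_minus" fields; encoding 1 for an out-of-tolerance value is a choice made for the sketch.

```python
from typing import List, Sequence

def out_of_tolerance_flag(measured: float, nominal: float,
                          tol_plus: float, tol_minus: float) -> int:
    """Return 1 if the measured value falls outside [nominal - tol_minus, nominal + tol_plus]."""
    return int(not (nominal - tol_minus <= measured <= nominal + tol_plus))

def inspection_to_flags(properties: Sequence[dict]) -> List[int]:
    """Map one inspection (a sequence of property records) to a tuple of binary flags."""
    return [
        out_of_tolerance_flag(p["measured"], p["nominal"], p["tol_plus"], p["tol_minus"])
        for p in properties
    ]

# Input tuples: one per prior in-tolerance inspection; expected output: the flags
# for the out-of-tolerance inspection the model is trained to predict.
# input_data = [inspection_to_flags(insp["properties"]) for insp in selected]
# expected_output = inspection_to_flags(oot_inspection["properties"])
```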
  • the prediction engine 165 can generate output data based on the inspection of the out-of-tolerance part. For example, the prediction engine 165 can extract a subset of the inspection data corresponding to the same identified features and/or properties of interest as used to generate the input data. The prediction engine 165 can in some embodiments generate the output data using binary in-or-out-of-tolerance representations of the extracted measurements rather than the actual measurement values.
  • At block 1225, the prediction engine 165 can optionally identify any additional process parameters for input into the machine learning model.
  • such process parameters can include the unique identifier associated in the MLM system 110 with one or more of a human inspection operator, a metrology device 102, or a manufacturing system 104 involved in creating/measuring the part.
  • these parameters can be used to segment the training data identified in block 1210 into data sets reflecting particular inspector/manufacturing system/metrology devices or combinations.
  • the process 1200A moves to block 1230, and the prediction engine 165 can train a machine learning model such as the neural network 900 or another suitable predictive machine learning model.
  • the input data can be provided to a statistical process control metric layer 905 and/or input layer 912 of a neural network, and the output data can be provided to the output layer 916.
  • the parameters of the network can be tuned, for example using back-propagation, to minimize the error rate in predicting the designated output data from the designated input data.
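  • The following PyTorch sketch illustrates the back-propagation tuning described above for a small fully connected network; the framework, layer sizes, and loss function are assumptions, and the statistical process control metric layer 905 is omitted for brevity.

```python
import torch
from torch import nn

WINDOW, NUM_PROPERTIES = 7, 12            # hypothetical sizes for the sketch
model = nn.Sequential(
    nn.Linear(WINDOW * NUM_PROPERTIES, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_PROPERTIES),
    nn.Sigmoid(),                         # per-property out-of-tolerance likelihood
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train_step(inputs: torch.Tensor, expected: torch.Tensor) -> float:
    """One back-propagation step on a batch of (flattened input window, expected output) pairs."""
    optimizer.zero_grad()
    predicted = model(inputs)
    loss = loss_fn(predicted, expected)   # error between predicted and designated output data
    loss.backward()                       # back-propagate the error
    optimizer.step()                      # tune the network parameters
    return loss.item()
```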
  • the process 1200A can loop back to block 1205 to identify another out-of-tolerance part in the run (or in another run of the part created by a different manufacturing system).
  • the process 1200A can be repeated using new input and output data based on the additional out-of-tolerance part in order to refine the parameters of the machine learning model. This can be repeated in some embodiments until all or a threshold number of out-of-tolerance parts are identified and used for training, or until the parameters of the model stabilize (e.g., change less than a predetermined amount).
  • the process 1200A can be repeated using new input and output data based on the additional out-of-tolerance part in order to generate an additional trained model.
  • multiple trained models each having parameters that predict a different identified out-of-tolerance part can be stored as a neural network ensemble for use in predicting future out-of-tolerance parts.
  • the trained model can be stored in the machine learning data repository 185 and accessed in process 1200B to generate manufacturing process predictions. Though discussed in the context of predicting out-of-tolerance measurements, the training data can also be selected in a similar manner to train the machine learning model to predict other manufacturing process conditions, for example near-out-of-tolerance conditions.
  • the model can be trained after accumulation of sufficient inspection data for a particular run in some embodiments. In some embodiments, the model can be re-trained based on updated inspection data relating to the run, for example periodically at predetermined intervals (e.g., nightly, weekly, etc.), in response to the accuracy of the model dropping below a predetermined acceptable threshold, or in response to process changes.
  • the process 1200A can be pre-computed, that is, performed prior to real-time analysis of manufacturing process conditions.
  • the prediction engine 165 can identify an input inspection data set based on real-time manufacturing and inspection data. For example, in robotic manufacturing cells as well as in manufacturing processes involving human inspectors and/or machinists, parts in a run can be inspected after manufacture and prior to creation of subsequent parts in the run. In such contexts, the prediction engine 165 can access inspection data of a number of previously manufactured parts in the run, where the input data follows the same sequence as the input training data sets described above. The prediction engine 165 can select new input inspection data sets in a manner consistent with the selection of the input training data so that a trained model is provided with consistent fields and/or quantities of data.
  • the prediction engine 165 does not identify any output data, as the next part has not been manufactured and is the subject of the presently described predictions.
  • the predictions can also be used in "batch inspection" type processes, for example where a certain number of parts (e.g., 10, 30, 50) are created, and then that batch is inspected.
  • the machine learning model can be trained to make predictions about a future part in a next batch.
  • the prediction engine 165 can access the trained model and input the data identified at block 1235 into the model, for example into statistical process control metric nodes and/or input nodes of the neural network.
  • the resulting output node values represent predicted measurements and/or in/out of tolerance conditions of the next part that will be manufactured in the run, and thus the network can make metrology predictions regarding yet-to-be-made parts.
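  • A minimal inference sketch, under the same assumptions as the training sketch above: a forward pass of the most recent inspection window through a single trained model, with each output node thresholded to yield a metrology prediction for the not-yet-manufactured part (the 0.5 threshold is illustrative only).

```python
import torch

def predict_next_part(model: torch.nn.Module,
                      recent_window: torch.Tensor,   # shape: (WINDOW, NUM_PROPERTIES)
                      threshold: float = 0.5):
    """Forward pass the most recent inspection window and threshold each output node."""
    model.eval()
    with torch.no_grad():
        likelihoods = model(recent_window.flatten().unsqueeze(0)).squeeze(0)
    predicted_out_of_tolerance = bool((likelihoods > threshold).any())
    return likelihoods, predicted_out_of_tolerance
```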
  • the prediction engine 165 can use an ensemble of trained models to generate one or both of the out-of-tolerance prediction and a confidence in the prediction.
  • the ensemble can include a number of models each trained to predict whether the next part manufactured will be out of tolerance based on a different set of previous inspections.
  • the outputs from the networks in the ensemble can be averaged in some examples to provide an average predicted measurement of the part properties.
  • the outputs from the networks in the ensemble can be used to generate a confidence value associated with an out-of-tolerance condition, and the confidence value can be used to determine what action (if any) should be taken to adjust the manufacturing system.
  • four neural networks can be trained to identify an out-of-tolerance condition for the next part, the part that will be manufactured two parts after the last inspected part, the part that will be manufactured three parts after the last inspected part, and the part that will be manufactured four parts after the last inspected part.
  • the target part of these different networks can be aligned to be the same part - in this example, the next part that will be manufactured.
  • Other ensembles can use different numbers of networks or different spacings of predictions (e.g., three networks that predict the next part, five parts from the last inspected part, and ten parts from the last inspected part).
  • these trained networks can output predicted measurement values, and these values can be averaged in order to calculate the predicted measurement values analyzed for purposes of determining any alerts.
  • the prediction engine 165 can determine whether each network predicts that the next manufactured part will be in or out of tolerance, and the degree of agreement between the networks can be used to generate a confidence value in the determined prediction (either specific measurement predictions or general in/out of tolerance predictions).
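  • One possible realization of the ensemble logic described above, averaging per-property predictions and treating the degree of agreement between networks as the confidence value; the data layout is hypothetical.

```python
from statistics import fmean
from typing import Sequence, Tuple

def ensemble_prediction(
    per_network_values: Sequence[Sequence[float]],     # predicted value per property, one row per network
    tolerance_ranges: Sequence[Tuple[float, float]],   # (low, high) allowed range per property
):
    """Average predicted measurements and derive a confidence value from network agreement."""
    num_props = len(tolerance_ranges)
    averaged = [fmean(values[i] for values in per_network_values) for i in range(num_props)]
    # One vote per network: does it predict any property outside its allowed range?
    votes = [
        any(not (lo <= values[i] <= hi) for i, (lo, hi) in enumerate(tolerance_ranges))
        for values in per_network_values
    ]
    out_of_tolerance = sum(votes) > len(votes) / 2      # majority call
    confidence = max(sum(votes), len(votes) - sum(votes)) / len(votes)  # degree of agreement
    return averaged, out_of_tolerance, confidence
```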
  • the prediction engine 165 can use the output values to determine whether the next part in the run will be in tolerance (and optionally to determine a confidence value in the generated prediction). As described above, this can involve determining that the output node of any property indicates a binary indication of an out-of-tolerance condition, determining that the output node of any property indicates a greater-than-threshold likelihood of an out-of-tolerance condition, or comparing the output predicted measurement value for each property to identified tolerances.
  • block 1245 can be performed by the analysis and alert module 1140 using the data structure and logic discussed with respect to Figure 10 in some embodiments.
  • the process 1200B loops back to block 1235 to continue performing real-time, in-line analysis of the manufacturing process.
  • the input inspection data can be updated with a sliding window of recent inspections based on the determined spacing of the input data part manufacture (e.g., successive, spaced apart, or a combination, as described above).
  • the process 1200B moves to block 1250 and the prediction engine 165 can generate an out-of-tolerance alert.
  • the out-of-tolerance alert can be automatically provided to the manufacturing system to cause the manufacturing system to halt or correct the manufacturing process before the out of tolerance part is made.
  • the out-of-tolerance alert can additionally or alternatively be sent to the user computing device 106 of any designated users.
  • the out-of-tolerance alert can include a simple indication that the prediction engine 165 predicts that the next part will be out of tolerance in some embodiments.
  • the out-of-tolerance alert can include additional details, for example one or more of an identified probability (confidence value) that the next part will be out of tolerance, an identified cause of the out-of-tolerance condition, and a recommended corrective action to adjust the manufacturing process to prevent the next part from exceeding tolerances.
  • the prediction engine 165 can identify one or more output nodes having a value that indicates the predicted out-of-tolerance condition.
  • the output node can be mapped to a particular feature and/or specific property of that feature.
  • the MLM system 110 or a control system 105 in communication with the MLM system 110 can store a mapping between the features/properties of a part and manufacturing tooling used to create the feature/property, as discussed with respect to the example of Figure 10.
  • the prediction engine 165 can identify specific tooling that would be responsible for the out-of-tolerance condition of a property/feature.
  • the alert can include the unique identifier of this tooling in the MLM system 110 in some examples.
  • the prediction engine 165 can use the value of the identified output node to identify a likely deviation from nominal at the predicted out-of-tolerance property/feature, and can further include instructions in the alert regarding how to adjust the identified tooling to compensate for the predicted deviation.
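  • A sketch of how such an alert could be assembled from hypothetical node-to-feature and feature-to-tooling mapping tables, with a recommended adjustment opposite in direction to the predicted deviation; the table contents and alert fields are illustrative stand-ins, not the stored mapping itself.

```python
# Hypothetical mapping tables standing in for the feature/tooling mapping described above.
NODE_TO_FEATURE = {3: ("hole_2", "diameter")}      # output node index -> (feature, property)
FEATURE_TO_TOOL = {"hole_2": "end_mill_T07"}       # feature -> tooling identifier

def build_out_of_tolerance_alert(node_index: int, predicted_deviation: float) -> dict:
    """Assemble an alert naming the likely responsible tooling and a compensating adjustment."""
    feature, prop = NODE_TO_FEATURE[node_index]
    return {
        "feature": feature,
        "property": prop,
        "tool_id": FEATURE_TO_TOOL[feature],
        "predicted_deviation": predicted_deviation,
        # Adjustment equal in magnitude and opposite in direction to the predicted deviation.
        "recommended_offset": -predicted_deviation,
    }
```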
  • the process 1200B can be suspended temporarily until a sufficient number of new inspections are performed to populate the input data set with inspections of parts manufactured after the adjustment.
  • the process 1200B can loop back to block 1235 to identify input inspection data for provision to a machine learning model.
  • This new input inspection data may differ in its composition (e.g., number and spacing of inspections) from the input inspection data identified prior to adjusting the manufacturing system, in order to match the composition of input data used to train a different model.
  • block 1240 can involve application of a different model than applied at the previous iteration of block 1240.
  • a "recalibrated" machine learning model can be trained using inspection data sets surrounding or following adjustment of the manufacturing system (e.g., change of tool offset, changing out a worn tool to a new tool, etc.) in order to predict out-of-tolerance conditions of parts manufactured shortly after the manufacturing system adjustment.
  • the recalibrated machine learning model can be used temporarily until a sufficient number of new inspections are performed to populate the original-composition input data set with inspections of parts manufactured after the adjustment.
  • the MLM system 110 can include an inspection-machining feedback system for refining manufacturing processes based on inspection data.
  • the control system 105 can send instructions to control the operations of a manufacturing system 104. Such instructions can be based on inspection data or insights derived from inspection data.
  • the control system 105 can be implemented on a computing device of the manufacturing system 104 and/or on a separate computing device in network communication with the manufacturing system 104.
  • Machine tooling can refer to the specific tool of a manufacturing system that creates a part (or a specific geometric feature of the part), either by additive or subtractive manufacturing.
  • the inspection-machining feedback system can be accomplished by the analytics engine 150 identifying such trends and providing offsets to the control system 105, and by the control system 105 instructing the manufacturing system 104 to set offsets to compensate for the wear conditions during manufacture of subsequent parts. This beneficially enables manufacturing of in-tolerance parts even as machine tooling wears over time to a point that would otherwise produce out-of-tolerance parts.
  • the offsets can be dynamically adjusted by the analytics engine 150 based on analysis of new inspections as additional parts are created by the manufacturing system 104.
  • the analytics engine 150 can identify deviations from nominal on specific properties of a previously manufactured part based on inspection data.
  • the analytics engine 150 can send these identified deviations to the control system 105.
  • the control system 105 can have a mapping between specific part features and/or properties and specific tooling and/or techniques of the manufacturing system. Using the mapping, the control system 105 can identify which tooling and manufacturing instructions relate to a particular deviation in the previous part inspection.
  • the control system 105 can set a tool tip offset (e.g., a bias in the position of the tool tip relative to the position specified by the manufacturing instructions) based on the deviation received from the analytics engine 150 in order to compensate for the amount and direction of the deviation from nominal during manufacture of the next part.
  • the control system 105 can set limits on the offsets based on known wear conditions of the associated tooling as determined by trend analysis via the analytics engine 150.
  • the control system 105 can generate instructions for halting or taking corrective action in a manufacturing system 104 based on a prediction output from a machine learning model.
  • molds used to create parts are not typically replaced until measured parts produced by the mold no longer conform to their accuracy requirements. Constructing a new mold is a time-consuming process that, if not completed when a previous mold fails, can lead to delays in the supply chain that ultimately affect the schedule of the project; however, large molds are expensive to store.
  • the analytics engine 150 can predict the endpoint of a usable lifecycle of a mold, for example when the mold is predicted to cease producing in-tolerance parts.
  • the MLM system 110 can provide alerts to designated users when the length of a remaining predicted lifecycle for a mold approaches a length of a timeline for creating a replacement mold. Beneficially, this can lead to fewer production interruptions and minimize the need to store replacement molds.
  • FIG. 13 depicts a flow diagram of an illustrative process 1310 for inspection-based manufacturing process controls as described above.
  • Process 1310 includes sub-process 1310A for setting tool offsets and sub-process 1310B for determining mold failure.
  • these sub-processes can involve similar trend analysis in the analytics engine 150 regarding detecting changing wear conditions based on deviation trends.
  • sub-processes 1310A and 1310B can be implemented independently in some implementations.
  • sub-process 1310A can be used with CNC (computer numerically controlled) machines or similar manufacturing machines, for example pneumatic or other robotically controlled systems.
  • Sub-process 1310B can be implemented to predict failure times of molds (male or female) used to manufacture parts.
  • the analytics engine 150 can access at least one inspection report 1305.
  • the at least one inspection report can relate to a part manufactured by a robotically-controlled system. A single, most recent inspection report can be used in some examples to identify the current deviations.
  • the analytics engine 150 can use a feature-based inspection report as described above to identify the mean error of a feature (e.g., the mean of the deviation of all properties of the feature). If aggregate inspection data is analyzed to identify trends, the aggregate datasets can be accessed in their feature-based form and grouped to look at features cut by a specific tool. As an example, some parts can require ten or more tools during manufacture, and such tools can be changed out of a CNC robotically.
  • Tools that can be analyzed for wear over time include CNC cutters, for example an end mill, or other tools that physically contact the manufactured parts.
  • a cutter can cut using the side or bottom of the tool, and inspection data sets can be aggregated based on which side of the tool was used to manufacture a feature, as different portions of a tool may wear differently over time.
  • the analytics engine 150 can calculate a deviation or deviation trend based on the at least one inspection report. For example, if a single most recent inspection is accessed at block 1305, then at block 1310 the analytics engine 150 can determine a deviation from nominal of one or more properties in the inspection. In some embodiments, the analytics engine 150 can access identifiers and uncertainties associated with one or more of manufacturing system, metrology device, and inspector involved in creation and inspection of a part, and can remove the uncertainties from the deviations to isolate the changes that are attributable to tool wear. These uncertainties can be generated as described above with respect to Figures 4-7 in some embodiments.
  • the analytics engine can calculate deviations (optionally with uncertainties removed) for the same property across the set of inspections, and can identify an equation that models the deviation trend over time.
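  • A simplified sketch of such a trend fit using a least-squares line, with an optional (and simplistic) subtraction of a known measurement-uncertainty magnitude before fitting; the uncertainty handling shown here is an assumption rather than the isolation technique described with respect to Figures 4-7.

```python
import numpy as np

def fit_deviation_trend(times: np.ndarray, deviations: np.ndarray,
                        measurement_uncertainty: float = 0.0):
    """Return (slope, intercept) of a least-squares line through the deviation history."""
    # Shrinking each deviation toward zero by the uncertainty is one simplistic way
    # to discount measurement noise before fitting the wear trend.
    adjusted = np.sign(deviations) * np.maximum(np.abs(deviations) - measurement_uncertainty, 0.0)
    slope, intercept = np.polyfit(times, adjusted, deg=1)
    return float(slope), float(intercept)

# Example: deviations (mm) at inspection times (machine-hours).
# fit_deviation_trend(np.array([0.0, 10.0, 20.0, 30.0]),
#                     np.array([0.002, 0.004, 0.007, 0.009]))
```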
  • the analytics engine 150 can send a determined deviation and/or deviation trend to the control system 105.
  • the control system 105 can set tool offsets based on the identified deviation. For example, the control system 105 can access a look-up table that associates specific features and/or parameters of an inspected part with specific tooling of a manufacturing system 104. Based on the tooling associated with the deviation, the control system 105 can determine a bias to apply to the position of the tool tip relative to its position specified in default machining instructions. The bias can be equal in magnitude to the identified deviation but opposite in direction in order to compensate for the deviation.
  • While various steps of process 1310A are discussed as distributed between the analytics engine 150 and control system 105, in some embodiments the process 1310A can be distributed differently between these components or performed entirely by one or the other.
  • the control system 105 can determine whether the determined tool offset exceeds determined offset limits.
  • the offset limits can be based on tool specifications (e.g., depth of cutter blades), tool wear trend analysis received from analytics engine 150, or user-specified limits.
  • the tool wear trend analysis can track the change in deviation on specific features or properties of a part over time, and can involve removing one or more of manufacturing system, metrology device, and inspector uncertainties from the deviations to isolate changes due to tool wear as described above. If the control system 105 determines that the offset is within predetermined acceptable limits, the process 1310A proceeds to block 1325 to control the manufacturing system 104 to position the tooling according to the determined offset. As such, process 1310A can feed deviations from a last inspection data set (or set of previous inspection data sets) back into the manufacturing machine to make tool tip position updates for the next part.
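  • A compact sketch of the offset-and-limit logic described above, using hypothetical look-up and limit tables: the bias is set opposite to the deviation, applied if within the limit, and otherwise escalated to a tool-change alert.

```python
FEATURE_TO_TOOL = {"slot_1_width": "cutter_T03"}   # hypothetical look-up table
OFFSET_LIMITS_MM = {"cutter_T03": 0.05}            # hypothetical per-tool offset limits

def determine_tool_action(feature: str, deviation_mm: float) -> dict:
    """Return an 'apply_offset' instruction within limits, otherwise a tool-change alert."""
    tool_id = FEATURE_TO_TOOL[feature]
    offset = -deviation_mm                         # equal magnitude, opposite direction
    if abs(offset) <= OFFSET_LIMITS_MM[tool_id]:
        return {"action": "apply_offset", "tool_id": tool_id, "offset_mm": offset}
    return {"action": "change_tool", "tool_id": tool_id,
            "reason": "required offset exceeds the determined offset limit"}
```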
  • if the control system 105 determines that the offset exceeds the limits, the process 1310 transitions to block 1335 to alert the designated user that tool wear requires the tool to be changed out.
  • the process 1310 can also halt subsequent manufacturing in some embodiments, for example if trend analysis or machine learning predictions indicate that the next part is likely to be out-of-tolerance.
  • the analytics engine 150 can access an aggregate dataset including multiple inspection reports of parts produced by the same mold.
  • the datasets can include a time of manufacture for each inspected part in order to establish a timeline with respect to usage of the mold.
  • the sampling rate of the inspections can be dynamically adjusted, for example based on rate of deviation change or proximity to out-of-tolerance conditions, in order to conserve processing resources in performing mold trend analysis.
  • the analytics engine 150 can identify specific deviations within a single inspection data set or can identify deviation trends from analysis of an aggregated data set.
  • the analytics engine 150 can remove one or more of manufacturing system, metrology device, and inspector uncertainties from the deviations to isolate changes due to wear of the mold.
  • the analytics engine 150 can determine a mold failure timeline based on the identified deviation trend. For example, for each feature and/or each property of each feature of the part created by the mold, the analytics engine 150 can model an equation representing a best fit for the trend rate or curve that characterizes the observed deviation shift over time in the aggregate inspection data. The analytics engine 150 can project these trends into a future manufacturing timeline to identify a time when at least one feature is predicted to no longer be within tolerance.
  • the analytics engine can run a separate analysis for each feature and output the time to expected out-of-tolerance conditions for each feature. These can be ranked, and the soonest time selected as the mold failure timeline.
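  • A simplified sketch of this per-feature projection and ranking, assuming linear deviation trends and symmetric tolerance bands; real mold-wear behavior and tolerance definitions may differ.

```python
from typing import Dict, Optional, Tuple

def predicted_failure_time(slope: float, intercept: float, tolerance: float) -> Optional[float]:
    """Earliest positive time at which a linear deviation trend leaves a symmetric tolerance band."""
    if slope == 0.0:
        return None
    candidates = [(tolerance - intercept) / slope, (-tolerance - intercept) / slope]
    positive = [t for t in candidates if t > 0.0]
    return min(positive) if positive else None

def mold_failure_timeline(trends: Dict[str, Tuple[float, float]],
                          tolerances: Dict[str, float]):
    """Return (feature, time) for the feature predicted to go out of tolerance soonest."""
    soonest_feature, soonest_time = None, None
    for feature, (slope, intercept) in trends.items():
        t = predicted_failure_time(slope, intercept, tolerances[feature])
        if t is not None and (soonest_time is None or t < soonest_time):
            soonest_feature, soonest_time = feature, t
    return soonest_feature, soonest_time
```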
  • block 1330 can involve generating multiple timelines based on various inspectors/metrology devices in combination with the mold to recommend a specific combination that may result in the longest in-tolerance lifetime for the mold.
  • the timeline can be based on a number of remaining in-tolerance cycles with the mold in some embodiments. This can be combined with identified throughput goals and/or actual manufacturing data rates in order to convert the remaining cycles to an estimated failure date.
  • the MLM system 110 can alert the designated user at block 1335.
  • the alert can include timeline information (e.g., predicted mold failure date and optionally replacement creation timeline if known) as well as an indication of which portion of the mold is associated with the failure.
  • the alert can include a visual depiction of the mold or a computer model of the mold with a visual change over the portion that is determined to cause the soonest out-of- tolerance condition. This can assist the alerted user with fixing the mold, if possible.
  • alerts may only be sent within a threshold time period of the new mold creation timeline, for example six months before the timeline for new mold creation.
  • the data used to guide process 1310 may originate only from metrology devices such as CMMs.
  • CMMs, while programmed by humans, run inspections independently of human inspection operators, and such devices typically have very small measurement uncertainty values, for example around +/- 2 microns, as indicated in the machine's calibration data. Accordingly, such metrology devices present highly accurate measurement data with relatively low measurement uncertainty (compared to processes in which humans are involved in actuating or positioning components of the inspection process).
  • the analytics engine 150 can determine with high confidence what tolerance deviations are due to tool (cutter or mold) wear, as the machine measurement uncertainty is a small fraction of typical measured values.
  • the process 1310 may require that the inspection uncertainty value be less than the determined tool wear in order to output manufacturing system control instructions.
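  • A minimal expression of that gating requirement, with the comparison form being an assumption of the sketch:

```python
def wear_signal_is_trustworthy(determined_wear_mm: float,
                               inspection_uncertainty_mm: float) -> bool:
    """True when the detected wear exceeds the inspection uncertainty, per the requirement above."""
    return abs(determined_wear_mm) > abs(inspection_uncertainty_mm)

# Example: a CMM uncertainty of +/- 0.002 mm versus a detected 0.010 mm wear shift.
# wear_signal_is_trustworthy(0.010, 0.002)  # -> True
```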
  • Implementations disclosed herein provide systems, methods and apparatus for electronic manufacturing quality management analysis.
  • One skilled in the art will recognize that these embodiments may be implemented in hardware, software, firmware, or any combination thereof.
  • mean refers to an arithmetic mean computed by adding values and dividing by the number of values.
  • the various components shown in Figures 1A-1B, and the various processes described above may be implemented in a computing system via an appropriate combination of computerized machinery (hardware) and executable program code.
  • the multi-tenant manager 130, standardization engine 140, analytics engine 150, model viewing engine 160, and prediction engine 165 of the MLM system 110 may each be implemented by one or more physical computing devices (e.g., servers) programmed with specific executable service code.
  • Each such computing device typically includes one or more processors capable of executing instructions, and a memory capable of storing instructions and data.
  • the executable code may be stored on any appropriate type or types of non-transitory computer storage or storage devices, such as magnetic disk drives and solid-state memory arrays.
  • the model viewing engine 160 can include one or more graphics processing units and associated memories with instructions for rendering interactive three-dimensional representations of objects.
  • Some embodiments of the prediction engine 165 can be implemented using one or more graphics processing units and associated memories with instructions for training machine learning models and for generating manufacturing process predictions using trained models.
  • the functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium.
  • the term "computer-readable medium” refers to any available medium that can be accessed by a computer or processor.
  • a medium may comprise RAM, ROM, EEPROM, flash memory, optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • a computer- readable medium may be tangible and non-transitory.
  • computer-program product refers to a computing device or processor in combination with code or instructions (e.g., a "program”) that may be executed, processed or computed by the computing device or processor.
  • code may refer to software, instructions, code or data that is/are executable by a computing device or processor.
  • Software or instructions may also be transmitted over a transmission medium.
  • For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.
  • acts, events, or functions of any of the algorithms, methods, or processes described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms).
  • acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
  • the various databases and data repositories, including the metrology data repository 170 and user data repository 180, may be implemented using relational databases, flat file systems, tables, and/or other types of storage systems that use non-transitory storage devices (disk drives, solid state memories, etc.) to store data.
  • Each such data repository may include multiple distinct databases.
  • the data provided to users, including the data presented by a user interface relating to metrology data analyses, are based on an automated analysis of many recorded events, for example part inspections.
  • the user interface may, in some embodiments, be provided to a user device from application code that runs on a remote computing resource, implemented wholly in client-side application code that runs on users' computing devices, or a combination thereof.
  • the standardization engine 140, multi-tenant manager 130, analytics engine 150, and model viewing engine 160, portions thereof, and combinations thereof may be implemented by one or more servers 120.
  • any of the standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175 may be implemented by one or more server machines distinct from the servers 120.
  • the standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175 may be implemented by one or more virtual machines implemented in a hosted computing environment.
  • the hosted computing environment may include one or more rapidly provisioned and/or released computing resources.
  • the computing resources may include hardware computing, networking and/or storage devices configured with specifically configured computer-executable instructions.
  • a hosted computing environment may also be referred to as a cloud computing environment.
  • the processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources.
  • two or more components of a system can be combined into fewer components.
  • the various systems illustrated as part of the MLM system 110 of Figures 1A and 1B can be distributed across multiple computing systems, or combined into a single computing system.
  • various components of the illustrated systems can be implemented in one or more virtual machines, rather than in dedicated computer hardware systems.
  • the data repositories shown can represent physical and/or logical data storage, including, for example, storage area networks or other distributed storage systems.
  • the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any of the subset of the components shown can communicate with any other subset of components in various implementations.
  • determining encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
  • Conditional language used herein such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment.
  • Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
  • language such as "a device configured to" is intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations.
  • a processor configured to carry out recitations A, B and C can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.

Abstract

Certain aspects relate to systems and techniques for acquiring metrology inspection data of a manufactured physical part, comparing the data to computer model metrology specifications, aggregating the metrology inspections and inspection-model comparisons, analyzing the aggregated data, and managing dissemination of the inspection data and analyses. A prediction engine can apply machine learning to aggregate inspection data and optionally manufacturing process metrics to predict and prevent out-of-tolerance parts.

Description

METROLOGY SYSTEM FOR MACHINE LEARNING-BASED MANUFACTURING
ERROR PREDICTIONS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Provisional Patent Application No. 62/501,321, filed on May 4, 2017, entitled "METROLOGY SYSTEM FOR MACHINE LEARNING-BASED MANUFACTURING ERROR PREDICTION," the contents of which is hereby incorporated by reference herein in its entirety and for all purposes.
TECHNICAL FIELD
[0002] The present disclosure relates to metrology systems, and, more particularly, to machine learning systems and methods for analyzing metrology inspection information to predict manufacturing errors including out-of-tolerance parts.
BACKGROUND
[0003] Dimensional metrology is used to measure the conformity of a physical part to its intended design. Certain industries, such as aerospace, require manufactured parts to be constructed as very close matches to the computer models or other engineering schematics from which they are generated. There are a number of different 3D measurement devices that can generate data representing measured edges and surfaces of a manufactured physical part, for example laser trackers, portable coordinate measurement machine arms, or coordinate measurement machines. There also exist a number of metrology software options that can compare the measured part data to a computer model to determine how well that particular part matches the model.
SUMMARY
[0004] One aspect relates to a system comprising a first data link to a manufacturing system configured to create a run of parts based on a common engineering schematic; a second data link to a metrology device configured to measure at least some parts in the run of parts to generate measurement data representing a physical shape of each part of the at least some parts; and a machine learning system including one or more processors in communication with a computer-readable memory storing executable instructions, wherein the one or more processors are programmed by the executable instructions to at least access a neural network trained, based on measurement data of past parts in the run of parts, to make a prediction about a future part in the run of parts; forward pass the measurement data through the neural network to generate the prediction about the future part in the run; and determine whether to output instructions for adjusting operations of the manufacturing system based on the prediction
[0005] In some implementations, the neural network includes parameters (e.g., weights of particular node to node connections) trained based on metrology inspection data of the past parts in the run of parts, and wherein the prediction represents an aspect of a metrology inspection of the future part (a "metrology prediction," e.g., a particular measurement, or whether a measurement will be in or out of tolerance). In some implementations, the neural network can be trained using approximately equal numbers of positive training cases (where the expected output is an in-tolerance inspection) and negative training cases (where the expected output is an out-of-tolerance inspection) to make predictions regarding whether future parts will be in or out of tolerance, or to make predictions regarding the specific measurements of the future parts.
[0006] In some implementations, the neural network is trained to output a predicted measurement for a feature of the future part, and wherein the one or more processors are further programmed by the executable instructions to compare the predicted measurement to a tolerance specified for the predicted measurement. The one or more processors can be further programmed by the executable instructions to determine to adjust the operations of the manufacturing system based on determining that the predicted measurement is outside of the tolerance or a predetermined percentage of the tolerance. The one or more processors can be further programmed by the executable instructions to determine an offset for a tool of the manufacturing system based on the predicted measurement, wherein the tool is configured to create the feature.
[0007] In some implementations, the neural network is trained to output a likelihood that a feature of the future part will be out of tolerance, and wherein the one or more processors are further programmed by the executable instructions to determine to adjust the operations of the manufacturing system based on the likelihood exceeding a predetermined threshold (e.g., 50% likely, 75% likely, 100% likely, or any other suitable percentage).
[0008] In some implementations, the one or more processors are further programmed by the executable instructions to determine to halt the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance (e.g., 30% of tolerance, 50% of tolerance, 100% of tolerance, or any other suitable percentage of tolerance).
[0009] In some implementations, the one or more processors are further programmed by the executable instructions to determine a modification to the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance, wherein the modification includes a tool offset.
[0010] In some implementations, the one or more processors are further programmed by the executable instructions to compute at least one statistical process control metric based on the measurement data; and provide the at least one statistical process control metric as an input into the neural network to generate the prediction. For example, the neural network can be structured to have one or more initial layers of locally connected nodes that perform computations to generate desired statistical process control metrics from input measurement data. The local connections between these nodes can be optimized to minimize or eliminate duplicate computations. The output of the initial layer(s) is the desired statistical process control metrics (described in more detail below), which can be provided (optionally together with inspection data values) to a fully connected portion of the neural network.
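As a non-limiting illustration of the statistical process control metrics referenced above, the following Python sketch computes a few common window metrics for a single measured property; the particular metric set, and computing the metrics outside the network rather than in locally connected layers, are simplifications made for the sketch.

```python
import statistics
from typing import Dict, Sequence

def spc_metrics(measurements: Sequence[float]) -> Dict[str, float]:
    """Compute simple statistical process control metrics over a window of one property's measurements."""
    mean = statistics.fmean(measurements)
    return {
        "mean": mean,
        "range": max(measurements) - min(measurements),
        "stdev": statistics.pstdev(measurements),
        "last_minus_mean": measurements[-1] - mean,   # drift of the most recent part from the window mean
    }
```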
[0011] Another aspect relates to a computer-implemented method comprising receiving, from a metrology device, measurement data representing a physical shape of each of a number of parts in a run of parts, wherein parts in the run of parts are manufactured based on a common engineering schematic; accessing a machine learning model trained, based on measurements of past parts in the run of parts, to make a prediction about a future part in the run of parts; performing a forward pass of the measurement data through the machine learning model to generate the prediction about the future part in the run; and determining whether to output an alert for adjusting operations of the manufacturing system based on the prediction.
[0012] In some implementations, the machine learning model is trained to output a predicted measurement for a feature of the future part, and the computer-implemented method further comprises comparing the predicted measurement to a tolerance specified for the predicted measurement. The computer-implemented method can further comprise determining to adjust the operations of the manufacturing system based on determining that the predicted measurement is outside of a predetermined percentage of the tolerance. The computer-implemented method can further comprise determining an offset for a tool of the manufacturing system based on the predicted measurement, wherein the tool is configured to create the feature.
[0013] In some implementations, the machine learning model is trained to output a likelihood that a feature of the future part will be out of tolerance, the computer-implemented method further comprising determining to adjust the operations of the manufacturing system based on the likelihood exceeding a predetermined threshold. The computer-implemented method can further comprise determining to halt the operations of manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance. The computer-implemented method can further comprise determining a modification to the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance, wherein the modification includes a tool offset.
[0014] The computer-implemented method can further comprise computing at least one statistical process control metric based on the measurement data; and providing the at least one statistical process control metric as an input into the machine learning model to generate the prediction. The computer-implemented method can further comprise outputting the alert to a control system configured to control operations of the manufacturing system. The computer-implemented method can further comprise, by the control system, halting or correcting operation of the manufacturing system in response to receiving the alert.
[0015] In some implementations, the machine learning model comprises a neural network including at least an input layer and an output layer, and the computer-implemented method can further comprise providing the measurement data to nodes of the input layer; and determining whether the future part will be out of tolerance based on values of nodes of the output layer. The computer-implemented method can further comprise identifying a node of the output layer having a value indicative of an out-of-tolerance measurement predicted for the future part; accessing a mapping between the identified node and a geometric feature of the common engineering schematic; accessing a mapping between the geometric feature and a tool of the manufacturing system; and including an identification of the tool in the alert. Further embodiments can comprise identifying a predicted deviation from tolerance based on the value of the identified node; calculating a position bias for controlling the tool to mitigate the predicted out-of-tolerance measurement; generating the alert to include the position bias; and outputting the alert to a control system configured to control operations of the manufacturing system. The computer-implemented method can further comprise, by the control system, controlling the manufacturing system to apply the position bias during control of the tool during manufacture of the geometric feature of the future part.
[0016] In some implementations, the machine learning model is trained to make the prediction based on a first set of inspections in the measurement data, and the computer-implemented method can further comprise accessing an additional machine learning model trained to make an additional prediction about the future part based on a second set of inspections in the measurement data, wherein the first set of inspections and the second set of inspections represent different sets of parts in the run of parts; and determining whether the future part will be out of tolerance based on the prediction of the machine learning model and on the additional prediction of the additional machine learning model.
[0017] Another aspect relates to a non-transitory computer readable medium storing computer-executable instructions that, when executed by a computing system comprising one or more computing devices, causes the computing system to perform operations comprising identifying an inspection of an out-of-tolerance part in a run of parts manufactured based on a common engineering schematic; identifying a set of inspections of in-tolerance parts manufactured prior in the run to the out-of-tolerance part; generating input data based on the set of inspections of the in-tolerance parts; generating expected output data based on the inspection of the out-of-tolerance part; training a machine learning model for predicting out-of-tolerance parts to predict the expected output data from the input data; and providing the trained machine learning model to a control system configured to control operations of a manufacturing system in manufacturing additional parts based on the common engineering schematic.
[0018] In some implementations, the machine learning model comprises a neural network comprising at least a statistical process control metric generation portion and a connected portion including an input layer, a hidden layer, and an output layer, and the operations further comprise providing the input data to nodes of the statistical process control metric generation portion, wherein the input data comprises measurement data representing measured values of physical features of the in-tolerance parts; generating at least one statistical process control metric at the nodes of the statistical process control metric generation portion; providing the at least one statistical process control metric of each node of the statistical process control metric generation portion to a corresponding node of the input layer; providing the output data to nodes of the output layer; and tuning parameters of nodes of the hidden layer based on back-propagation. In some implementations, the operations further comprise providing the measurement data to additional nodes of the input layer. In some implementations, the operations further comprise updating the training during manufacture of the run of parts based on inspections of additional parts in the run of parts.
[0019] In some implementations, the operations further comprise identifying a second set of inspections of in-tolerance parts manufactured prior in the run to the out-of- tolerance part, wherein the set of inspections and the second set of inspections represent at least one different in-tolerance part; generating second input data based on the second set of inspections of the in-tolerance parts; training a second machine learning model for predicting out-of-tolerance parts to predict the expected output data from the second input data; and providing the trained machine learning model and the second trained machine learning model as an ensemble to the control system.
[0020] In some implementations, the operations further comprise, by the control system, using the trained machine learning model to generate a metrology prediction regarding a future part in the run (e.g., a part that the manufacturing system has not yet begun to create). In some implementations, the operations further comprise, by the control system, determining whether and/or how to correct or halt operations of the manufacturing system based on the metrology prediction.
[0021] In some implementations the machine learning model comprises an artificial neural network comprising at least an input layer, a hidden layer, and an output layer, and the operations further comprise providing the input data to nodes of the input layer; providing the output data to nodes of the output layer; and tuning parameters of nodes of the hidden layer based on back-propagation. In some implementations, the operations further comprise accessing data representing one or more manufacturing process parameters including a manufacturing system used to create the in-tolerance parts and the out-of-tolerance part, a metrology device used to generate the inspection and set of inspections, and an inspection operator who operated the metrology device to generate the inspection and set of inspections; and providing an identifier of the one or more manufacturing process parameters to at least one additional node of the input layer. Using such an identifier, in training and inference, can enable generation of predictions that are specific to a certain machine or human inspector involved in creating the run of parts.
[0022] These manufacturing process insights and the resulting process adjustments can enable more efficient and less wasteful manufacturing processes, among other benefits described in more detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The disclosed aspects will hereinafter be described in conjunction with the appended drawings and appendices, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements.
[0024] Figure 1A illustrates a schematic block diagram of an example metrology lifecycle management system and network environment as described herein.
[0025] Figure 1B illustrates a schematic block diagram of an example of the metrology lifecycle management system of Figure 1A.
[0026] Figure 2 depicts a flowchart of an example process for generating standardized feature-based inspection reports as described herein.
[0027] Figure 3 illustrates example measurements with associated uncertainty intervals relative to a tolerance range.
[0028] Figure 4 depicts a flowchart of an example process for determining and utilizing manufacturing or inspecting uncertainty.
[0029] Figure 5 depicts a flowchart of an example process for calculating an uncertainty score for an inspection operator.
[0030] Figure 6 depicts a flowchart of an example process for calculating an uncertainty value for a manufacturing system and/or a metrology device.
[0031] Figure 7 depicts an example feedback loop for refining inspector and machine uncertainty values.
[0032] Figure 8A depicts an example view of a model of a part and GD&T data.
[0033] Figure 8B depicts a timeline of different parts in a run manufactured based on the model of Figure 8A.
[0034] Figure 8C depicts an example set of training data including inspection reports of different parts in the run of Figure 8B.
[0035] Figure 9A depicts an example topology of a neural network for predicting out-of-tolerance parts.
[0036] Figure 9B depicts an example set of statistical process control metrics for input into the machine learning layers of the network of Figure 9A.
[0037] Figure 9C depicts example sets of training data and an ensemble of networks for predicting out-of-tolerance parts.
[0038] Figure 10 depicts an example data structure for analysis of machine learning model output, for example the output of the neural network of Figures 9A and 9C.
[0039] Figure 11 depicts a schematic block diagram of an example of the prediction engine of Figure 1B.
[0040] Figure 12A depicts a flow diagram of an illustrative process for training a machine learning model, for example in the prediction engine 165 of Figure 11 and/or as discussed with respect to Figures 8C-9B.
[0041] Figure 12B depicts a flow diagram of an illustrative process for providing out-of-tolerance parts predictions in the prediction engine of Figure 1 via a model trained as described with respect to Figure 12A.
[0042] Figure 13 depicts a flow diagram of an illustrative process for inspection- based manufacturing process controls.
DETAILED DESCRIPTION
Introduction
[0043] Aspects of the disclosure relate to systems and techniques for leveraging insights gleaned from metrology inspection data to improve manufacturing and inspection processes. For example, the disclosed technology can analyze inspection data to learn the capabilities of the manufacturing systems, metrology inspection devices, and human inspection operators involved in the part creation cycle. The knowledge of these capabilities can be leveraged to create leaner, more efficient manufacturing processes that are less likely to scrap good parts (e.g., due to excessive uncertainty about the inspection values preventing approval of the part inspection) or to approve bad parts (e.g., due to an inaccurate measurement process that causes the bad part to appear in-tolerance).
[0044] The disclosed technology can acquire metrology inspection data of a number of physical, manufactured parts and standardize the inspection data in a manner that enables the system to isolate a portion of overall measurement uncertainty that is attributable to particular stages of the manufacturing process. Measurement uncertainty is a metric for the amount by which the measured shape of an inspected part may reasonably be expected to differ from its actual shape (for further details on measurement uncertainty, see Figure 3 and associated description). For example, the present technology can identify portions of the inspection data representing specific geometric features of the inspected parts and generate feature-based inspection reports for analysis, rather than relying on the native inspection report format received from a particular metrology device. This technique allows the disclosed inspection data analyses to learn device and inspector capabilities with respect to particular geometric features (e.g., a cylinder, a hole, a flat surface), regardless of whether the same metrology system was used to inspect two different parts in the inspection data or whether those parts have the same shape. The learned inspector and device capabilities can in turn be used to structure manufacturing processes that have small uncertainty compared to the allowable deviations in part shape.
[0045] In one embodiment, the system analyzes feature-based inspection reports to identify how precisely a machine can manufacture or measure a certain geometric feature, or how accurately different inspection operators are able to measure certain geometric features. Beneficially, the feature-based analysis enabled by the disclosed inspection data standardization can reveal measurement uncertainties associated with specific metrology inspectors, and such uncertainties can be used to select appropriate inspectors for specific measurement projects or for further training. This can yield more efficient manufacturing processes in some implementations, as the uncertainty attributable to a human inspector of a manual metrology device is typically orders of magnitude larger than the uncertainty due to the device itself.
[0046] The described systems can further manage dissemination of the inspections, analyses, and/or other manufacturing and inspection data within a company or supply chain. As described in more detail below, trends in inspection data can reveal problem areas or inefficiencies in manufacturing and inspection processes that may otherwise be undiscovered, and a centralized electronic inspection database can provide for heightened data usability, integrity, and traceability.
Certain Benefits
[0047] In many industries, a company selling an end product (referred to herein as an "OEM," which stands for original equipment manufacturer) may request that a portion of a project be manufactured by an outside company (referred to herein as a "supplier"). For example, the OEM company can outsource different portions of the project to a number of different suppliers, and in some cases the same portion can be outsourced to a number of different suppliers. In turn, suppliers may outsource smaller components of their portion of the project to other companies specializing in those components ("sub-suppliers"). As such, the supply chain for the project can extend through many levels of different companies.
[0048] Before a part is shipped from a supplier to the requesting company it must undergo a quality inspection, and, especially in industries requiring tight tolerances and/or with mandated inspection processes, each successive recipient is likely to perform their own quality- related measurements of that same part. The measurements of a part can be manually reviewed or compared by metrology software to designated tolerances in order to provide a conformance or non-conformance report. These reports are often provided as print-outs shipped out together with conforming parts, while non-conforming parts cannot be used and thus are typically scrapped at the expense of the supplier.
[0049] Existing 3D metrology software is desktop software installed locally on user devices, providing single part analysis to pass or fail one manufactured part at a time based on whether the metrology measurements conform to specified tolerances. Such software can receive, from metrology devices, inspection data representing measurements of a manufactured part and can compare the inspection data to predetermined metrology specifications to determine whether the part has been manufactured within required tolerances. However, the inspection data is typically not utilized beyond determining conformance or nonconformance of the manufactured part, in part due to the differences in the outputs of different metrology devices and software options. For example, a metrology device like a laser scanner can acquire millions of data points representing the surfaces of an inspected part, while a portable coordinate measurement machine arm ("PCMM arm") can measure a much smaller number of designated points on the surface. These data sets have greatly varying densities and are not easily combinable or comparable in existing systems. Further, each metrology software package has its own way of storing this data in inspection reports, such that there are differences even between two reports from different metrology software package options that represent the same inspections of the same physical part. Different metrology software options are optimized for use with different metrology hardware, and both within a single company and throughout a supply chain there are commonly inspections performed using a wide variety of metrology devices and software options.
[0050] Conventional statistical process control (SPC) systems require a direct point-to-point match between different inspections in order to identify trends in measurements of that point over time. This, in turn, requires that all parts inspected as part of the SPC process be made from the same engineering schematics (e.g., a CAD model, blueprint, or other engineering design). These points are designated in an inspection instruction file so that each time a part is inspected the same location on the part is measured. Thus, traditional SPC systems require adherence to specific inspection protocols and are limited to analyses of inspections of production runs of the same part, typically implemented with a single measurement device to achieve the precise point-to-point match.
[0051] Therefore, a need exists for a quality assessment system that can standardize metrology data from different source formats, aggregate metrology data over time and/or across multiple formats, and provide meaningful analyses of the aggregated data. In addition, there is a need for a quality assessment system that allows an OEM company, among others, to stay informed about the inspection quality and time schedule status of parts in varying stages of the supply chain.
[0052] The above-described problems are addressed, in some embodiments, by the electronic metrology lifecycle management systems ("MLM system") and techniques described herein. The MLM system includes a standardization engine for importing measurement data from a number of different metrology hardware and/or software sources and standardizing the measurement data into a feature-based format for data storage and/or display. The MLM system can apply machine learning to glean actionable insights from measurement and manufacturing process data, for example by predicting that a part will be scrapped before its manufacturing is begun. As used herein, measurements and measurement data can describe metrology inspection data of points located on a measurand - a physical, manufactured part being inspected. The standardized format can cluster inspection data by geometric features present in the measured part. Geometric features include cylinders, holes, flat or contoured surfaces, spheres, threads, slots, and toroids, to name just a few examples.
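One possible in-memory representation of the feature-based standardized format described above is sketched below; the field names and grouping are illustrative assumptions, not the MLM system's actual schema.

```python
# A minimal sketch of a feature-based report: inspection data points are grouped
# by the geometric feature they measure, independent of the source metrology
# hardware or software. All field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MeasuredPoint:
    x: float
    y: float
    z: float
    measured_value: float
    nominal_value: float

@dataclass
class FeatureInspection:
    feature_type: str          # e.g., "cylinder", "hole", "flat_surface", "thread"
    feature_id: str            # identifier taken from GD&T, the CAD model, or a template
    points: List[MeasuredPoint] = field(default_factory=list)

@dataclass
class FeatureBasedReport:
    part_id: str
    metrology_device_id: str
    inspector_id: str
    manufacturing_system_id: str
    features: List[FeatureInspection] = field(default_factory=list)
```

Because every inspection is reduced to the same feature-keyed structure, data of differing densities (for example, a handful of touch points from a PCMM arm and thousands of scanner points) can be aggregated simply by filtering on the feature type.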
[0053] As such, inspection data of varying densities and formats acquired from varying metrology hardware and software sources can be aggregated together by feature for analysis. The disclosed feature-based standardization removes the restrictive point-matching limitation from SPC analysis, allowing aggregation and analysis of measurements of the same feature from different parts (for example, two or more parts manufactured based on different models), measurements of the same feature taken by different metrology devices, measurements of the same feature at different locations on the feature, and the like. Thus, analysis based on the disclosed feature-based reports can be used flexibly across a wide range of manufacturing processes and provide new insights into process capabilities compared to existing SPC systems.
[0054] The feature-based analysis enabled by the disclosed inspection data standardization can reveal measurement uncertainties associated with specific metrology inspectors, manufacturing systems, and/or metrology devices. Measurement uncertainty can represent the precision or accuracy of a device or inspector in creating or measuring a part. For example, a manufacturing uncertainty interval can represent a range of part feature shapes that are likely produced by a given manufacturing system when trying to produce a feature according to a given nominal value. A measurement uncertainty interval can represent, for a given metrology device, a range around a measured value in which the possible actual measurement of the inspected feature can reasonably be expected to fall. Calibration data for manufacturing systems and metrology devices typically includes an approximation of the uncertainty that may be attributed to the device; however, this is a static value that does not account for wear and changing conditions over time. The disclosed techniques for feature-based inspection data analysis can be used to adjust these values dynamically as further inspection data is collected over time, such that the machine-specific uncertainty scores of the MLM system stay up-to-date without requiring any additional testing and calibration processes.
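The dynamic adjustment of device uncertainty values could, for example, start from the static calibration value and blend in newly observed deviations as inspections accumulate. The exponentially weighted update below is an assumed illustration; the disclosure does not prescribe a particular formula.

```python
# A hedged sketch of refreshing a device-specific uncertainty score from
# incoming feature measurements, so the score drifts with wear instead of
# remaining fixed at the calibration value.
import math

class DeviceUncertaintyTracker:
    def __init__(self, calibration_uncertainty: float, alpha: float = 0.05):
        self.variance = calibration_uncertainty ** 2
        self.alpha = alpha  # weight given to each new observation

    def update(self, measured_value: float, reference_value: float) -> float:
        """Blend the squared deviation of a new measurement (against a reference
        such as nominal or a higher-accuracy check measurement) into the running
        variance, then return the updated uncertainty."""
        deviation_sq = (measured_value - reference_value) ** 2
        self.variance = (1 - self.alpha) * self.variance + self.alpha * deviation_sq
        return self.uncertainty

    @property
    def uncertainty(self) -> float:
        return math.sqrt(self.variance)
```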
[0055] An inspector uncertainty interval can represent a range of possible actual measurements of the feature when inspected by a particular human inspection operator. Human inspectors may undergo gage repeatability and reproducibility ("gage r&r") testing to approximate the amount of uncertainty they introduce into the measurement process. In a typical gage r&r test, a number of different inspectors take turns using the same metrology device to repeatedly measure the exact same part. Because the inspector is the only major variability between measurements, this type of testing can reveal measurement differences between the participating inspectors. This testing involves significant time and resources, and the resulting uncertainty measure has limited applicability as it (1) relates only to the particular test part subjected to the gage r&r, and (2) is a high-level metric of generally how much uncertainty may be attributable to the inspector relative to other inspectors. The disclosed techniques for feature-based inspection data analysis beneficially avoid these limitations, as they operate on inspection data that is already gathered by the inspector during the course of his or her typical employment and thus require no additional testing procedures. Further, the inspection data can involve a diverse set of parts having different shapes and sizes. As such, the resulting uncertainty measures represent inspector capabilities for measuring particular geometric features. These can be used to identify the capability of the inspector with respect to new part shapes (sharing those geometric features) that were not included in the inspection data set from which the uncertainty measures were derived.
[0056] With this heightened understanding of the actual uncertainties attributable to different points in the manufacturing and inspection cycle, the MLM system can guide manufacturers to select appropriate equipment/device/inspector combinations for specific parts to improve efficiency and reduce the number of scrapped parts.
[0057] Headings are included herein for reference and to aid in locating various sections. These headings are not intended to limit the scope of the concepts described with respect thereto. Such concepts may have applicability throughout the entire specification. Various examples will now be described with reference to the figures for purposes of illustrating and not limiting the disclosed manufacturing quality management systems and techniques.
Overview of Example MLM System
[0058] Referring to Figures 1A and 1B, an embodiment of a network environment 100 is depicted that can provide users with access to a metrology lifecycle management ("MLM") system 110 that can provide the described metrology-based manufacturing and inspection management features, among other features. Figure 1A illustrates a schematic block diagram of the MLM system 110 in a network environment 100, and Figure 1B illustrates a schematic block diagram of an example of the MLM system 110. The MLM system 110 can be implemented on a single computer or with one or more physical servers or computing machines, including the example illustrated group of servers. The MLM system 110 includes one or more processor(s) and a memory 125 storing modules of computer-readable instructions that configure the processor(s) to perform the functions described herein.
[0059] As shown in Figure 1A, the network environment 100 includes MLM system 110, network 108, metrology devices 102, manufacturing systems 104, control system 105, and user devices 106. The MLM system 110 can include data links to any or all of the metrology devices 102, manufacturing systems 104, control system 105, and user devices 106, for example using the network 108 and suitable communications protocols/buses. The MLM system 110 can acquire part measurement data in any of a number of different formats, for example from various metrology devices 102A-C including a coordinate measurement machine 102A, PCMM arm 102B, and laser tracker 102C, to name a few. Other examples of metrology devices not illustrated include profilometers, optical comparators, laser scanners, interferometers, LiDAR devices, computed tomography metrology devices, and other devices capable of obtaining measurements representing surfaces of manufactured parts. The metrology devices 102A-102C can cooperate with one or more different metrology software platforms to generate measurement data representing coordinate points along surfaces of physical, manufactured parts. The measurement data can be provided automatically from the metrology devices 102 and/or metrology software to the MLM system 110 via network 108 in some implementations. The MLM system 110 can additionally or alternatively include functionality for users to upload measurement data into the MLM system 110, for example via a browser-based user interface on a user device 106.
[0060] The MLM system 110 can also receive and/or send information from/to one or more manufacturing systems 104. Manufacturing systems can include mills, lathes, Swiss turn machines, mold-based manufacturing systems, cutting systems implementing water jets, plasma, or electronic cutting means, 3D printers, routers, and the like. The MLM system 110 can receive, for example, machining plans, machine operation parameters, data from sensors positioned to observe a manufacturing system, and the like. The MLM system 110 can send, for example, machining plans and/or machine operation parameters that have been input or updated by users of the MLM system 110 and/or based on automated analyses of the MLM system 110. Control system 105 can represent a locally-installed module, for example a local component of the MLM system 110, that can generate and send instructions for controlling operation of a manufacturing system 104. As such, control system 105 can be connected to both the network 108 and the manufacturing system 104.
[0061] In the environment 100, users can access the MLM system 110 with user devices 106. The user devices 106 that access the MLM system 110 can include computing devices, such as desktop computers, laptop computers, tablets, personal digital assistants (PDAs), mobile phones (including smartphones), electronic book readers, media players, game platforms, and electronic communication devices incorporated into vehicles or machinery, among others. The user devices 106 can access the MLM system 110 over a network 108, for example through a browser-based portal.
[0062] The network 108 may be any wired network, wireless network, or combination thereof. In addition, the network 108 may be a personal area network, local area network, wide area network, over-the-air broadcast network, cable network, satellite network, cellular telephone network, or combination thereof. For example, the communication network 108 may be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some embodiments, the communication network 108 may be a private or semi-private network, such as a corporate or university intranet. The communication network 108 may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art of electronic communications and thus, need not be described in more detail herein.
[0063] Referring now to Figure 1B, the MLM system 110 includes one or more servers 120, a number of data repositories 170, 180, 185, and a working memory 125 storing standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175. The MLM system 110 can be implemented with one or more physical servers or computing machines, including the servers 120 shown (among possibly others). Server(s) 120 can include one or more processor(s) 122 and a memory 124 storing computer-readable instructions that configure the processor(s) 122 to perform the functions described herein. In some embodiments, the standardization engine 140, multi-tenant manager 130, analytics engine 150, and model viewing engine 160 can be modules of computer-readable instructions stored in the memory 124 of the server(s). In other embodiments the standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175 can be stored and/or executed elsewhere, for example on a local computer 106 networked with the MLM system 110, in a virtual machine, or either of the above in combination with the servers 120. These servers 120 can access back-end computing devices, which may implement some of the described functionality of the MLM system 110. Other computing arrangements and configurations are also possible. Thus, each of the components depicted in the MLM system 110 can include hardware and/or hardware and software for performing various features. In one embodiment, the MLM system 110 is a network site (such as a web site) or a collection of network sites, which serve network pages (such as web pages) to users. In another embodiment, the MLM system 110 hosts content for one or more mobile applications or other applications executed by connected metrology devices 102A-C, manufacturing systems 104, and/or user devices 106. For ease of illustration, this specification often refers to the MLM system 110 in the web site context as being accessed through a browser-based portal. However, this is just one example of how users can be provided with a suitable graphical user interface and the MLM system 110 can be adapted for presentation in desktop applications, mobile applications, or other suitable applications. The processing of the various components of the MLM system 110 can be distributed across multiple machines, networks, or other computing resources. The various components of the MLM system 110 can also be implemented in one or more virtual machines or hosted computing environment (a.k.a., "cloud") resources, rather than in dedicated servers. Likewise, the data repositories shown can represent physical and/or logical data storage, including, for example, storage area networks or other distributed storage systems. Moreover, in some embodiments the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. Executable code modules that implement various functionalities of the MLM system 110 can be stored in the memories of the servers 120 and/or on other types of non-transitory computer-readable storage media. While some examples of possible connections are shown, any subset of the components shown can communicate with any other subset of components in various implementations.
[0064] Standardization engine 140 can receive measurement data from one or more metrology devices 102 and convert the measurement data to a standardized format. As described herein, the standardization engine 140 can identify geometric features of a part based on its associated GD&T, computer model, or a user-provided template for parts without GD&T or computer models. Inspection data associated with each geometric feature can be stored in association with the geometric feature. Thus, the standardized, feature-based reports of the MLM system 110 are independent of any format specific to a certain metrology hardware or software, and inspection data from a variety of sources can be collected and/or compared by feature.
[0065] By standardizing data from different metrology sources, the standardization engine 140 beneficially enables aggregation of data from different sources for analysis. The standardized data can be stored in association with one or more other values, for example in association with an engineering schematic from which a part was created, a manufacturing system used to create the part, an assembly or project including the part, a type of metrology hardware and/or software used to generate the measurement data of the part, an inspection operator user who performed machining and/or metrology inspection of the part, a supplier responsible for manufacturing the part, an OEM requesting manufacture of the part from the supplier, and the like. Such additional parameters can be used to identify specific trends in inspection data and/or to provide alerts to designated users.
[0066] Metrology data repository 170 is a data storage device that stores such feature-based aggregated standardized measurement data. In some embodiments, the original inspection file can also be stored, for example for provision to a requesting company or OEM. Aggregated data sets in data repository 170 can include, for example, measurements of a number of different parts, measurements of a number of the same part (that is, parts in a run that are manufactured based on the same model or blueprint), or different measurements of the same part at different times and/or by different metrology equipment. As another example, standardized inspection data can be aggregated by feature and associated with a specific manufacturing system used to manufacture the inspected part, metrology device used to perform the inspection, and/or inspection operator performing the inspection.
[0067] The data repository 170 can be used to log the quality of a particular part and/or assembly at different points in time. For example, the measurement data can include initial inspections of parts performed in the manufacturing pipeline, inspections of these parts and/or assemblies including these parts performed throughout a manufacturing supply chain, and inspections of these parts and/or assemblies after the inception of product use (e.g., periodic quality inspections, maintenance inspections, etc.). As described below, by aggregating and analyzing data representing part and/or assembly quality across the lifetime of a product (e.g., from manufacturing through assembly through use), the MLM system 110 can provide maintenance predictions and preventative alerts.
[0068] The data repository 170 includes additional storage storing engineering schematics (including blueprints, CAD models, assemblies, GD&T information, and inspection plans). In this manner, both manufacturing and inspection data are centrally located and can be analyzed together as appropriate. Geometric Dimensioning and Tolerancing (GD&T) is a system for defining and communicating engineering tolerances, and uses a symbolic language on engineering drawings and computer-generated three-dimensional solid models that explicitly describes nominal geometry and its allowable variation. Tolerance limits, as used herein, refer to these allowable variations of a part measurement from nominal. Beneficially, the MLM data repository 170 can be structured to comply with any regulatory requirements within specific industries or supply chains and to provide increased data integrity and traceability.
[0069] The analytics engine 150 can provide meaningful statistical analyses of aggregated subsets of the standardized measurement data. Appropriate analysis of aggregated metrology inspection data can provide valuable insights about manufacturing and inspection processes that a single measurement report or part pass/fail indication cannot. For example, the analytics engine can analyze aggregated standardized measurement data of parts manufactured by one or more machines, parts measured by one or more metrology inspectors, parts manufactured by one or more entities in a supply chain relationship, of the same part measured at different points in a supply chain, and the like. Trends identified through analysis of aggregate subsets of measurement data can provide insights into manufacturing and inspection process efficiencies and capabilities, as well as predict potential quality problems. As one example, the feature-based analysis enabled by the disclosed inspection data standardization can reveal measurement uncertainties associated with specific metrology inspectors, and such uncertainties can be used to select appropriate inspectors for specific projects or for further training. As another example, the feature-based analysis can reveal actual machine tolerance capabilities, beneficially providing users with real performance abilities rather than relying on potentially inaccurate default machine specifications. The MLM system can implement rule-based reporting for providing results of data analysis to appropriate user(s), and can provide recommendations to improve manufacturing and inspection processes.
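One conventional way to express the actual machine tolerance capabilities mentioned above from aggregated feature measurements is a process capability index such as Cpk; the index is not named in this disclosure and is shown here only as an assumed illustration of the kind of statistic the analytics engine could compute.

```python
# Process capability (Cpk) computed from aggregated measurements of one feature.
# Values well above 1 suggest the machine comfortably holds the tolerance;
# values near or below 1 flag a capability problem.
from statistics import mean, stdev
from typing import Sequence

def cpk(measurements: Sequence[float], lower_limit: float, upper_limit: float) -> float:
    mu = mean(measurements)
    sigma = stdev(measurements)
    return min(upper_limit - mu, mu - lower_limit) / (3 * sigma)

# Example: aggregated diameter measurements of a cylinder feature across a run.
diameters = [10.002, 9.998, 10.001, 10.003, 9.999, 10.000, 10.004, 9.997]
print(cpk(diameters, lower_limit=9.990, upper_limit=10.010))
```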
[0070] Prediction engine 165 trains and implements machine learning models to improve manufacturing process ROI. For example, prediction engine 165 can train a machine learning model to predict out-of-tolerance parts based on the inspection data (and any associated process metrics such as machines and/or users) of previously-created, in-tolerance parts. The prediction engine 165 can then implement a trained model in a manufacturing pipeline using inspection data of parts in a run in order to predict out-of-tolerance parts before their manufacturing is begun. As described in more detail below, the prediction engine 165 can create aggregate subsets of inspection data in real-time (e.g., as it is generated during production of a run of parts) and can input this data into a trained model. If the output of the model indicates a probable out-of-tolerance part, the prediction engine 165 can identify a likely source of exceeding the tolerance(s) and recommend corrective action.
[0071] Machine learning data repository 185 is a data storage device storing data relating to the training and implementation of machine learning models for prediction of manufacturing conditions. For example, machine learning data repository 185 can store training data sets, trained model parameters, and new input data sets for real-time predictive manufacturing analysis. In some implementations, the machine learning data repository 185 can store multiple trained machine learning models associated with a specific part, with each trained model associated with a different inspector, manufacturing system, and/or metrology device. The prediction engine 165 can identify a suitable model based on current process setup (e.g., specifically which inspector, manufacturing system, and/or metrology device is being used for creation of a part) and can use the identified model to make predictions about out-of-tolerance conditions. The prediction engine 165 can monitor the process parameters to determine whether a different model is more suitable as the process changes (e.g., when inspector shifts change over, when manufacturing tooling is replaced, etc.). In other implementations the unique identifier associated with the inspector, manufacturing system, and/or metrology device may be incorporated in the machine learning model as process metrics.
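The model-selection behavior described for the prediction engine 165 could be sketched as a registry keyed by the current process setup, with a fallback when no model exists for the exact combination of inspector, manufacturing system, and metrology device. The key structure and fallback policy below are illustrative assumptions.

```python
# A minimal sketch of selecting a trained model by process setup, so predictions
# continue when shifts change over or tooling is replaced.
from typing import Dict, NamedTuple, Optional

class ProcessSetup(NamedTuple):
    part_id: str
    inspector_id: Optional[str]
    manufacturing_system_id: Optional[str]
    metrology_device_id: Optional[str]

class ModelRegistry:
    def __init__(self) -> None:
        self._models: Dict[ProcessSetup, object] = {}

    def register(self, setup: ProcessSetup, model: object) -> None:
        self._models[setup] = model

    def select(self, setup: ProcessSetup) -> Optional[object]:
        """Prefer a model trained for the exact setup; otherwise fall back to a
        part-level model trained without setup-specific identifiers."""
        if setup in self._models:
            return self._models[setup]
        part_level = ProcessSetup(setup.part_id, None, None, None)
        return self._models.get(part_level)
```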
[0072] In addition or alternatively to standardizing measurement data, the MLM system 110 can compare measurement data of a part to nominal measurements as indicated by an associated GD&T file to identify deviations. In some implementations the metrology system providing the measurement data can additionally or alternatively provide deviation values. The MLM system 110 can compare identified deviations to tolerance values specified at the measurement locations and output conformance or non-conformance reports in any of a number of formats. Such reports can be automatically provided to users associated within the MLM system 110 with a part or project.
[0073] Some implementations of the MLM system (e.g., distributed networked implementations or cloud implementations) can include a multi-tenant management component 130. The multi-tenant manager 130 can manage information flow between users at different levels of a supply chain, for example allowing users to receive quality analyses and manufacturing scheduling updates from users at other levels of the supply chain. This can be accomplished in some examples through automated reporting. Beneficially, the multi-tenant structure can provide important alerts to designated users in real time, for example by providing alerts when inspections reveal non-conforming parts and/or when equipment is nearing or has reached the end of its lifecycle of creating conforming parts. As another example, the system can provide alerts to suppliers as models, blueprints, and/or inspection plans are revisioned by an OEM. Further, the MLM system can provide recommendations of manufacturing systems, metrology devices, and/or inspectors for specific parts based on determined measurement uncertainties as described in more detail below. The multi-tenant structure of the disclosed MLM system can enable aggregation of measurements from throughout the supply chain into a single repository.
[0074] In one embodiment, the MLM system 110 is a multi-tenant system operated as a network site (such as a web site) or a collection of network sites, which serve network pages (such as web pages) to user devices 106 via the user interface 126. The MLM system 110 can additionally or alternatively host content for one or more mobile applications running on user devices 106. The MLM system 110 can additionally or alternatively host other applications executed by connected metrology devices 102A-102C, machining systems 104, and/or user devices 106. Such applications can be networked with one another to perform the data transfer functions described herein.
[0075] Though not illustrated, the multi-tenant manager 130 can include a number of supply chain management and user interface components including an alert management module, an inspection display module, and various modules for managing permissions and/or for inputting the described data types into the MLM system 110, to name a few. An alert management module can provide alerts to designated users, for example, when part inspections are completed, when non-conforming parts are produced, and/or when a model is revisioned. As another example, the multi-tenant manager 130 can include an inspection package reporting engine that can automatically disseminate inspection results and any associated data (photos of inspected parts, original and/or standardized inspection reports, non-conformance reports, and the like) to designated users within a company or supply chain. Companies requesting parts in a supply chain can be provided with the benefits of increased transparency and faster data delivery throughout their supply chain. Suppliers of parts in a supply chain can be provided with the benefit of automated provision of quality assessment to their customers, translating to less personnel time spent generating reports and interacting with the customer to provide reports for each supplied part.
[0076] The user data repository 180 stores data representing user profiles for companies and/or individuals as well as manufacturing relationships between companies and/or individuals. Each can have a unique identifier in the MLM system 110. The user data repository 180 can store specified rules for providing access and alerts to users regarding parts, assemblies, or projects. As such, another aspect of the MLM system 110 relates to its multi-tenant functionality as a supply chain relationship management resource. OEM companies (and others using the software) can specify the particular suppliers that are providing requested parts and also see the suppliers of their suppliers. When an initial part inspection is performed by the manufacturer of a part, the measurement data or a resulting report can automatically be made available in the MLM system 110 to authorized users. In this way, users at varying levels of a supply chain who are involved in the production or supervision of a part can be notified of when the part is inspected, the results of the inspection, and/or when conforming parts are shipped between levels of the supply chain. The MLM system 110 can provide functionality for users to browse or search for information relating to parts, assemblies, inspections, and the like for which the user has an associated permission to access such data in the user data repository 180.
[0077] Certain users in the user data repository can be identified as inspection operators (e.g., human operators of metrology inspection devices). Such inspection operators can have an associated measurement uncertainty, for example as determined via processes 400, 500, and/or 600 described below.
[0078] User data repository 180 can also store asset portfolios. An asset portfolio can include the machines of an individual or company using the MLM system 110 that operate within a manufacturing-inspection environment, for example manufacturing systems and/or metrology devices. The asset portfolio of a particular company may include manufacturing machines and tooling used with specific machines, each associated with a unique identifier in the MLM system 110. These machines can each have an associated measurement uncertainty in the asset portfolio.
[0079] The metrology data repository 170 and/or user data repository 180 can, in some embodiments, be stored remotely on one or more servers in network communication with the metrology devices 102, manufacturing systems 104, and/or user devices 106. Though shown as two separate data repositories, the metrology data repository 170 and user data repository 180 can be combined into a single data repository or split into a number of different data repositories in various implementations.
[0080] The MLM system 110 can provide users of the user devices 106 with access to an electronic repository of metrology inspection data, 3D CAD models and/or blueprints of parts and assemblies, machining instructions, inspection plans, and/or analysis results provided by the MLM system, to name a few examples. In some implementations, a user interface of the MLM system 110 can provide content via a web browsing application such that the functionality of the MLM system 110 can be accessed by a number of different user devices 106. The browser-based user interface can provide functionality for a user to, from any computing device, view and interact with a 2D or 3D representation of the part, to view part measurement data, and/or to view a comparison of deviations between the part measurement data and nominal with predefined tolerance(s). As such, the described MLM system 110 can provide increased access to metrology and manufacturing data compared to existing systems, which typically require specialized software installed locally on a computer in a manufacturing environment. Beneficially, the MLM system extends accessibility of manufacturing and inspection information from the shop floor to anywhere a user may wish to view their process data. Further, the user-friendly, feature-based report format and computer model access available through the model viewing engine 160 enable any person in a company to view and understand models and inspection data, regardless of any specialized training in complicated CAD or metrology software. Another benefit of the MLM system is a reduced need for companies to purchase expensive seats of CAD and metrology software just to be able to view and access their own data.
[0081] The model viewing engine 160 of the MLM system can be implemented via a rendering engine of the server(s) 120 for delivering interactive, three-dimensional representations of models and/or measured parts to connected user devices 106. In some examples, a user can view an interactive 3D model of a part with visualization of inspection data overlying the nominal model, even if the graphics processing capabilities of their device are not sufficient to perform three-dimensional graphics rendering. The inspection data overlying the model can include GD&T callouts specifying tolerances, deviations identified from inspections, and/or a heat map showing different colors for inspection measurements based on how well those measurements match the specified tolerances. This can provide an accessible and user-friendly means for presenting models, inspection parameters, and inspection results to users. The rendering engine can provide functionality that enables users to rotate, zoom, or otherwise manipulate the interactive 3D models. The rendering engine can in some embodiments provide users with functionality to specify a manufacturing timeframe, and then can generate an interactive and dynamic heat map that shows the change in the heat map over time for a particular part run.
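The tolerance-based heat map coloring mentioned above could, as one assumed illustration, normalize each deviation against its tolerance limits and map the result to a color ramp from green (at nominal) to red (at or beyond a limit); the specific ramp is not specified in this disclosure.

```python
# A hedged sketch of coloring an inspection measurement by how much of the
# allowable tolerance band its deviation from nominal consumes.
def deviation_color(deviation: float, lower_limit: float, upper_limit: float) -> str:
    """Return an RGB hex color: green near nominal, red at or beyond a limit."""
    limit = upper_limit if deviation >= 0 else abs(lower_limit)
    used = min(abs(deviation) / limit, 1.0) if limit else 1.0  # fraction of band consumed
    red = int(255 * used)
    green = int(255 * (1 - used))
    return f"#{red:02x}{green:02x}00"

print(deviation_color(0.002, lower_limit=-0.005, upper_limit=0.005))  # well within tolerance
print(deviation_color(0.006, lower_limit=-0.005, upper_limit=0.005))  # out of tolerance
```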
[0082] Though described as having multi-tenant capabilities, it will be appreciated that the features of the disclosed MLM system 110 can also be used by a number of users within a single company to manage their internal manufacturing and quality data. Implementations of the MLM system can also be adapted to function without any network capabilities, for example for use in locally managing manufacturing and quality data of restricted projects. Restricted projects can include projects requiring a certain level of governmental clearance or any other project that a company may desire to keep private. As such, in some examples no information may flow out of the MLM system 110 to other sources and all information can be used locally. In such embodiments the server(s) 120 may be omitted and the processor(s) 122, data repositories 170, 180, standardization engine 140, multi-tenant manager 130, analytics engine 150, and model viewing engine 160 can be stored and/or executed by a local computing device hosting the non-networked implementation of the MLM system 110.
[0083] Machine controller 175 can control the operations of one or more manufacturing systems and metrology devices, for example as implemented in a robotic manufacturing cell. Machine controller 175 can send instructions to the manufacturing systems 104 to begin or halt production, to change out tooling, or to adjust tooling position during manufacturing. These instructions can be based on the output of the analytics engine 150 and/or prediction engine 165 as described herein. For example, the machine controller 175 can receive an alert from the prediction engine 165 that a next part in a run is predicted to be out-of-tolerance. In some embodiments, the machine controller 175 can halt the operation of the manufacturing system 104 identified for creating the predicted out-of-tolerance part, and can alert associated users regarding the prediction and the halt. In some embodiments, the machine controller 175 can also receive information from the analytics engine 150 and/or prediction engine 165 regarding corrective action that will mitigate the chances of the identified part being out-of-tolerance, for example (1) changing/replacing tooling of a manufacturing system 104 or (2) a compensation or bias to apply to machine/tool position instructions in order to compensate for identified wear. In such embodiments, the machine controller 175 can send instructions to the manufacturing system 104 to cause the manufacturing system 104 to automatically take the identified corrective action. Beneficially, this can be done without requiring human intervention in real time and in-line with the manufacturing process so that scrap can be avoided while at the same time efficiently continuing manufacturing operations.
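The decision logic of the machine controller 175 described above, halting production or applying a recommended compensation when a probable out-of-tolerance part is predicted, might be sketched as follows. The controller and alerting interfaces are hypothetical placeholders, not APIs defined in this disclosure.

```python
# A hedged sketch of acting on an out-of-tolerance prediction: apply an in-line
# tool-position compensation when one is recommended, otherwise halt and alert.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OutOfTolerancePrediction:
    machine_id: str
    probability: float
    recommended_offset_mm: Optional[float] = None  # compensation for tool wear, if known

def handle_prediction(pred: OutOfTolerancePrediction, controller, alerter,
                      threshold: float = 0.8) -> None:
    if pred.probability < threshold:
        return  # no action; keep manufacturing
    if pred.recommended_offset_mm is not None:
        # Corrective action can be applied in-line, without stopping production.
        controller.apply_tool_offset(pred.machine_id, pred.recommended_offset_mm)
        alerter.notify(f"Applied {pred.recommended_offset_mm} mm compensation on {pred.machine_id}")
    else:
        controller.halt(pred.machine_id)
        alerter.notify(f"Halted {pred.machine_id}: predicted out-of-tolerance part "
                       f"(p={pred.probability:.2f})")
```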
Overview of Example Inspection Standardization
[0084] The MLM system 110 can open inspection data files across a range of available formats and standardize the inspection data. Figure 2 illustrates an example process 200 for standardizing inspection data and can be implemented by the standardization engine 140 in some embodiments. The process 200 can standardize different formats and/or densities of metrology inspection data based on geometric features for aggregation and analysis and/or display in a unified format.
[0085] At block 205, the standardization engine 140 can obtain inspection data representing measurement of a physical part. For example, in some embodiments the MLM system 110 can provide a user interface for users to upload inspection reports. In some embodiments, the MLM system 110 can be in direct communication with a metrology device and/or computing device hosting metrology software such that inspection data is automatically imported into the MLM system as it is acquired. Though the process 200 will be described in the context of part inspection, the process 200 can also be applied to assembly inspections. A part refers to a single, unitary part while an assembly refers to the coupling of two or more parts.
[0086] At block 210, the standardization engine 140 can identify geometric features of the part represented by the inspection. In some embodiments, inspection data obtained by the standardization engine 140 can be associated with a specific part in the data repository 170. Preferably, the part will also be associated with a GD&T file specifying nominal measurements and tolerances at a number of locations on the part. As indicated by block 210A, the standardization engine 140 can, in some embodiments, first check whether a GD&T file is associated with the inspected part. If so, at block 210B the standardization engine 140 can parse through the GD&T file to identify reference to any geometric features of the part. Some parts may not have an associated GD&T file. In such cases, as shown by block 210C, the standardization engine 140 can parse through the data representing the model to identify reference to geometric features, for example in the element IDs of a three-dimensional CAD file. Blocks 210A-210C represent some of the options available to the standardization engine 140 for identifying part features. In some implementations no GD&T or model may be available in the MLM system for the inspected part, and the standardization engine 140 can parse through headers of the inspection data file to identify reference to any geometric features. The described parsing can be implemented in some embodiments via fuzzy logic. Some parts may be created based off of two-dimensional blueprints, and for such parts a user may upload a mapping of the geometric features of the part usable by the standardization engine 140 to identify the geometric features. In some embodiments, the standardization engine 140 can output a listing of the geometric features of the part for storage in association with the part in the data repository 170 such that the listing is generated once for an initial inspection of the part and then accessed for subsequent inspections of the part.
[0087] At block 215, the standardization engine 140 can generate a feature-based report by identifying, for each geometric feature, portions of the inspection data representing measurement of the feature. For example, the standardization engine 140 can parse through headers of the inspection file, map the headers to the identified geometric features, and store all data points under the inspection file headers with the associated identified feature. The data points from the inspection file can include a measurement value and x,y,z coordinates of the data point.
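Block 215 could be sketched as follows: inspection-file column headers are matched to the part's identified geometric features (here using simple fuzzy string matching from the Python standard library as a stand-in for the fuzzy logic mentioned above), and each data point, with its measurement value and x, y, z coordinates, is filed under the feature it measures. The input row format and feature names are illustrative assumptions.

```python
# A hedged sketch of grouping inspection data points by geometric feature based
# on fuzzy matching of inspection-file headers to known feature identifiers.
import difflib
from collections import defaultdict
from typing import Dict, List, Tuple

def group_points_by_feature(
    rows: List[Tuple[str, float, float, float, float]],  # (header, x, y, z, measured value)
    known_features: List[str],
) -> Dict[str, List[Tuple[float, float, float, float]]]:
    report: Dict[str, List[Tuple[float, float, float, float]]] = defaultdict(list)
    for header, x, y, z, value in rows:
        match = difflib.get_close_matches(header.lower(), known_features, n=1, cutoff=0.4)
        feature = match[0] if match else "unassigned"
        report[feature].append((x, y, z, value))
    return dict(report)

rows = [("CYL_1_DIA", 1.0, 2.0, 3.0, 10.002), ("Cylinder 1", 1.1, 2.0, 3.0, 10.001)]
print(group_points_by_feature(rows, known_features=["cylinder_1", "hole_2"]))
```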
[0088] As an example, a first feature-based report can include six data points representing a cylinder measured by a PCMM arm, and a second feature-based report can include thousands of points representing the cylinder measured by a laser tracker. However, both reports are formatted to specify that the data points represent a cylinder. Thus, the feature- based report formatting provides the ability to aggregate inspection data from different formats based on features and/or to present a single way of viewing inspection data from any source.
[0089] At block 220, the standardization engine 140 can identify any hardware, software, and/or inspection operator associated with creation and/or inspection of the part or assembly. Hardware can include a manufacturing system used to make the part and/or a metrology device used to inspect the part. Software can include a program used to run the manufacturing system, a program used to run the metrology device, or a metrology software option used together with the metrology device. In some embodiments, a user uploading an inspection file can specify this information, and/or the data repositories 170, 180 of the MLM system 110 may include this information. In some embodiments the standardization engine 140 can implement fuzzy logic to search inside of headers in the inspection file for metrology hardware and/or software information.
[0090] At block 225, the identified information regarding hardware, software, and/or inspection operator is stored in association with the feature-based report, for example in data repository 170.
[0091] Standardizing inspection data as illustrated in Figure 2 can provide several advantages. For example, prior manufacturing statistical process control (SPC) systems are limited to analyzing direct point-to-point matches on inspected parts and thus require development of and adherence to specific inspection plans that outline the points needed for SPC analysis. In addition to requiring detailed inspection plans, existing SPC systems are limited to analyzing data from the same type of measurement hardware and software due to relying on the point-to-point comparison. In contrast to these existing systems, the disclosed MLM system can aggregate part measurements by feature from any measurement device and without inspection plans.
[0092] One benefit of the standardized data is that the MLM system 110 is capable of analysis and comparison of inspection data from different sources. For instance, before a part is initially shipped from its original manufacturer it typically must be measured for quality assessment to ensure that it meets the required level of accuracy. This same part is often measured by the recipient to double check the accuracy before using the part. Using the standardization engine 140 and analytics engine 150, these two measurements (and any further measurements of the part as it continues to travel down the supply chain) can be automatically standardized and aggregated, despite being taken at different locations, at different times, and possibly using different measurement hardware and/or software.
[0093] Over time, trends in this data can reveal where in a supply chain inaccurate measurements are occurring, enabling downstream customers to make more informed decisions for regulating their sources and to save money by reducing the number of parts scrapped for inaccuracy. Further, part recipients can use aggregate quality data (or rankings generated therefrom) to make judgments of their suppliers regarding how close to tolerance their parts are generally, and how well-controlled their manufacturing process is. Currently the part recipient would have to manually compare the individual data sets provided as a printed report with each part to make this kind of judgment, so the MLM system both saves the part recipient time as well as provides them with a depth of analysis not possible from visually comparing printed reports. In one aspect, aggregated inspection data can be analyzed to rank specific individuals or companies in a supply chain or industry based on metrics such as measurement data accuracy, delivery time, and/or part conformance to tolerances. Such rankings may be used to provide specific supplier recommendations to OEM companies (or to other tiers of a supply chain) based on performance track record. Knowledge gained through such rankings can enable companies to increase efficiency in their supply chain relationships.
[0094] Another advantage of the standardization process of Figure 2 relates to presentation of inspection reports to users. Existing metrology software is both expensive and involves a steep learning curve, which often limits the number of employees at a given company who are able to view and understand inspection reports. In contrast, the MLM system can open files across a range of available formats and standardize the inspection data as described with respect to Figure 2. These standardized inspection reports can be displayed to users, for example in a browser-based interface accessible via any connected and authorized device. The inspection reports can also be mapped to the geometric features of models used to generate the inspected parts and overlaid onto the models, for example as a heat map showing where in-tolerance and out-of-tolerance measurements occurred. Regardless of what metrology hardware and/or software was used to generate a metrology inspection and/or surface analysis of a physical part, a user can open that file using the MLM interface to view the quality analysis in a single report format or overlaid onto the associated model, eliminating the need for users to familiarize themselves with the reporting formats of a number of different metrology software. Such display techniques can increase the availability and understandability of inspection data.
Overview of Measurement Uncertainty
[0095] In metrology, measurement uncertainty is the quantitative evaluation of the reasonable values that are associated with a measurement result. It is generally accepted that no measurement is exact. When a manufactured part is measured during inspection, the measurement value depends on factors including the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the object were to be measured several times, by the same operator and in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming that the measuring system has sufficient resolution to distinguish between the values. Thus, a measured value may not correspond to the actual value of the measured part, and measurement uncertainty is a probabilistic expression of this margin of doubt in a particular measurement value.
[0096] Measurement uncertainty can include two values, an interval and a confidence level. The interval characterizes the range or probabilistic distribution representing the possible actual value of the measurand based on the measurement value, where measurand as used herein refers to a physical, manufactured part or assembly of parts that is being measured. The interval is sometimes expressed as the measured value plus or minus a value, though the positive and negative distances from the measured value defining the interval do not have to be the same. The confidence level characterizes the level of certainty that the actual value of the measurand is within the interval. As such, measurement uncertainty is an indicator of the quality and reliability of measurement results. All measurements are subject to uncertainty and a measurement result is complete only when accompanied by the associated uncertainty.
[0097] With respect to determining whether a measurand falls within tolerance, the uncertainty must be known before it can be determined whether tolerance is met. Turning to Figure 3, dots representing measurement values of five example measurements A, B, C, D, E are shown relative to nominal (ideal or desired) and upper/lower tolerance limit values for a measurand. The vertical arrows extending from the values represent the associated uncertainty intervals. For purposes of simplicity with the example of Figure 3 the upper and lower tolerance limits are depicted as being equal distances from nominal and the upper and lower portions of the uncertainty intervals are depicted as being equal distances from the measurement value. However, in practice the upper and lower tolerance limits and/or portions of uncertainty intervals can differ from one another.
[0098] Based on the measurement value and uncertainty interval, nonconformity of measurement A with the designated tolerances can be proven. Because the uncertainty intervals for measurements B, C, and D overlap with values both within the tolerance limits and outside of these limits, conformity or nonconformity of measurements B, C, and D cannot be proven, even though measurement B is outside of the upper tolerance limit, measurement C is between nominal and the upper tolerance limit, and measurement D is equal to nominal. Based on the measurement value and uncertainty interval, conformity of measurement E with the designated tolerances can be proven.
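The conformance logic illustrated by Figure 3 can be expressed directly in code: conformity is proven only when the entire uncertainty interval lies inside the tolerance limits, nonconformity is proven only when the entire interval lies outside them, and otherwise neither can be proven. The sketch below supplies the interval halves and limits explicitly, consistent with the note that they can differ from one another in practice; the function and parameter names are illustrative.

```python
# A minimal sketch of uncertainty-aware conformance assessment per Figure 3.
def assess_conformance(measured: float, u_minus: float, u_plus: float,
                       lower_limit: float, upper_limit: float) -> str:
    interval_low = measured - u_minus
    interval_high = measured + u_plus
    if lower_limit <= interval_low and interval_high <= upper_limit:
        return "conformity proven"          # e.g., measurement E
    if interval_high < lower_limit or interval_low > upper_limit:
        return "nonconformity proven"       # e.g., measurement A
    return "cannot be proven"               # e.g., measurements B, C, D

print(assess_conformance(0.0, 0.002, 0.002, -0.005, 0.005))    # conformity proven
print(assess_conformance(0.009, 0.002, 0.002, -0.005, 0.005))  # nonconformity proven
print(assess_conformance(0.004, 0.003, 0.003, -0.005, 0.005))  # cannot be proven
```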
[0099] As shown in Figure 3, measurements D and E have the same measurement value but the uncertainty interval of measurement D is larger than the uncertainty interval of measurement E. As illustrated by measurements D and E, it is more likely that the measurand will be determined to be out of tolerance as the uncertainty interval increases, even if the (unknown) actual measurand value is within tolerance.
[0100] Existing metrology practices typically base uncertainty intervals on two factors: (1) the uncertainty interval of the manufacturing system used to make a part and (2) the uncertainty interval of the metrology device used to measure the part. Such intervals are typically based on calibration specifications provided by the equipment or device manufacturers. However, human inspection operators add an additional layer of uncertainty that cannot be accounted for by the equipment or device uncertainty. Due to the limitations of conventional inspection report formatting, existing systems do not calculate inspector-specific uncertainty. Consider the example that measurements A-E were each taken by one of inspectors A-E, and that the uncertainty intervals are part of inspector uncertainty scores (as discussed in more detail below) representing the amount of uncertainty in a measurement that is attributable to the inspector. Because the uncertainty interval of inspector D exceeds the interval between the lower and upper tolerance limits, no measurement performed by inspector D for this particular measurand can conclusively be proven as in-tolerance.
[0101] Additionally, the calibration data for manufacturing systems and metrology devices may or may not be accurate, and may change over time. Consider a different example where measurements D and E are each taken by metrology device F. If the smaller uncertainty interval E represents the uncertainty interval of the device provided by the calibration data but the larger uncertainty interval D represents the actual uncertainty of the device in operation, measurement E may be erroneously proven in-tolerance based on interval E while the actual measurement lies along interval D above the upper tolerance limit or below the lower tolerance limit. This can result in shipment and/or assembly of a "conforming" part that is, in reality, nonconforming. Conversely, if the larger uncertainty interval D represents the uncertainty interval of the device provided by the calibration data but the smaller uncertainty interval E represents the actual uncertainty of the device in operation, the device may be excluded from measuring parts that it is actually capable of measuring within tolerance.
[0102] In manufacturing, reworking or scrapping out-of-tolerance parts can be costly, particularly in industries such as aerospace where a single titanium part can cost thousands of dollars to manufacture. Further, parts are inspected both before shipping to a customer and upon receipt by the customer. If a supplier provides a part to a customer with a conformance report, but the receiving inspection produces a non-conformance report, then either (1) the part is shipped back to the supplier or (2) repeated measurements must be taken to identify whether the part is actually conforming and whether the error occurs at the level of the supplier or the customer. This generates inefficiency and potential strain on relationships within the supply chain.
[0103] As such, there is a need for tools to more accurately identify and utilize measurement uncertainty to improve measurement processes.
Overview of Example Uncertainty Analysis
[0104] The MLM system 110 can analyze feature-based inspection reports to determine an overall interval of measurement uncertainty in a particular set of measurements. This interval reflects a collective uncertainty that is attributable to a manufacturing system used to manufacture the inspected part, a metrology device used to inspect the part, and the inspection operator who carried out the inspection. The MLM system 110 can then isolate a particular portion of that interval attributable to the human inspection operator, for example by removing the uncertainties attributable to the manufacturing system used to manufacture the inspected part and the metrology device used to inspect the part. These device-related uncertainty scores may initially be based on calibration data provided by manufacturers or testers of the devices; however, the MLM system 110 can update these values to reflect device wear after learning the uncertainties attributable to particular human inspectors involved in the process. Figure 4 illustrates an example process 400 for determining and utilizing uncertainty calculations to improve manufacturing process efficiency, and can be implemented by the analytics engine 150 in some embodiments.
[0105] At block 405, the analytics engine 150 can obtain or generate a number of feature-based inspection reports each associated with an inspection operator, measurement device, and/or manufacturing system. In preferred embodiments, feature-based inspection reports are associated with each of a manufacturing system used to manufacture the inspected part, a metrology device used to inspect the part, and the inspection operator who carried out the inspection. However, in some embodiments a feature-based inspection report may be associated with only one or two of these pieces of information. In some embodiments, the process 400 may use inspections of a number of geometric features conducted within a particular tolerance range (e.g., 1/1000th of an inch) to determine uncertainty within a particular tolerance range rather than for a particular geometric feature.
[0106] At block 410, the analytics engine 150 can aggregate information from feature-based reports based on geometric feature and/or associated parameters, as discussed in more detail with respect to the examples of Table 1. The data can be aggregated appropriately for identifying a target measurement uncertainty associated with one or more of the manufacturing system, metrology device, inspector, and feature size range. It will be appreciated that inspection data sets can be generated using different units of measurement, for example inches and millimeters, and aggregated data sets are standardized so that all measurements are converted to the same unit of measurement.
[0107] At block 415, the analytics engine 150 can identify or calculate measurement uncertainty associated with the manufacturing system, metrology hardware, and/or inspection operator, across all feature sizes or at a determined size range. An example of a process for calculating the uncertainty associated with an inspector is discussed in more detail with respect to Figure 5, and an example process for calculating the uncertainty associated with a manufacturing system and/or a metrology device is discussed in more detail with respect to Figure 6. In some implementations, machine uncertainty can be obtained from default calibration data provided by the machine manufacturer. The uncertainty can be represented, in some embodiments, as an interval representing a range of measurements above and/or below the actual measurement value, where the (unknowable) real measurement of the part is likely to fall within the range.
[0108] Example aggregated data sets and example meanings or significances of the resulting calculated measurement uncertainties are illustrated in Table 1, below. These examples are meant to provide an overview of how different sets of data can be aggregated so that analysis will provide specific insights into manufacturing and inspection process accuracies and capabilities, and it will be appreciated that other data sets can be aggregated and analyzed as desired.
[Examples 1-4 of Table 1 appear as an image in the original publication and are not reproduced here.]

Example 5
Data sets aggregated: (1) All cylinder measurements performed by inspector A on parts manufactured by CNC A using laser tracker A; (2) All cylinder measurements performed by inspector A on parts manufactured by CNC B using laser tracker A.
Significance: The difference between the uncertainty scores resulting from analysis of data sets (1) and (2) can indicate uncertainty that is actually attributable to CNC A and CNC B when manufacturing cylinders.

Example 6
Data sets aggregated: (1) All cylinder measurements performed by inspector A on parts manufactured by mold A using laser tracker A; (2) All cylinder measurements performed by inspector A on parts manufactured by mold A using laser tracker B.
Significance: The difference between the uncertainty scores resulting from analysis of data sets (1) and (2) can indicate uncertainty that is actually attributable to laser tracker A and laser tracker B when measuring cylinders.

Example 7
Data sets aggregated: (1) All cylinder measurements performed by inspector A on parts manufactured by mold A using laser tracker A, where the nominal value of the cylinder diameter is 0.5 mm and below; (2) All cylinder measurements performed by inspector A on parts manufactured by mold A using laser tracker A, where the nominal value of the cylinder diameter is above 0.5 mm.
Significance: The difference between the uncertainty scores resulting from analysis of data sets (1) and (2) can indicate specific uncertainty scores of inspector A when measuring small (data set (1)) or large (data set (2)) cylinders.

Table 1
[0109] Referring specifically to example 3, the process 400 can generate uncertainty scores for specific equipment/metrology device/inspector combinations in order to provide recommendations to manufacturers for meeting specific tolerances on specific geometric features, beneficially enabling reduction of parts scrapped due to usage of manufacturing systems, metrology devices, or inspectors that cannot with certainty manufacture or measure that feature at the specified tolerance.
[0110] As shown via examples 4-6, the process 400 can generate two or more data sets in which one of the three associated process variables (manufacturing system, metrology device, and inspector) is varied in order to isolate the particular quantity of measurement uncertainty that is attributable to the varied process variable. As shown via example 7, the three process variables can be kept the same but multiple data sets can be generated based on the nominal size of the measured feature in order to identify inspector (or, in other examples, manufacturing systems and/or metrology device) uncertainty at different size ranges. Although such analyses can be performed using two data sets as shown in Table 1, use of more data sets can provide more accurate or refined uncertainty calculations.
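As a rough illustration of how the Table 1 data sets could be assembled and compared, the following Python sketch holds two of the three process variables fixed and varies the third. The record fields and the uncertainty_score() helper (shown here as a simple mean absolute deviation from nominal) are hypothetical placeholders for the calculations of Figures 5 and 6, not the disclosed implementation.

```python
# Hypothetical sketch of the Table 1 aggregation idea: hold two of the three
# process variables (inspector, manufacturing system, metrology device) fixed,
# vary the third, and compare the resulting uncertainty scores.

from statistics import mean

def aggregate(inspections, **fixed):
    """Select inspection records whose fields match the fixed variables."""
    return [r for r in inspections
            if all(r.get(k) == v for k, v in fixed.items())]

def uncertainty_score(records):
    """Placeholder for the feature-based uncertainty calculation (e.g., Figure 5)."""
    return mean(abs(r["measured"] - r["nominal"]) for r in records)

def attribute_uncertainty(inspections, varied_key, value_1, value_2, **fixed):
    """Difference of scores between two data sets that differ only in varied_key."""
    set_1 = aggregate(inspections, **fixed, **{varied_key: value_1})
    set_2 = aggregate(inspections, **fixed, **{varied_key: value_2})
    return uncertainty_score(set_1) - uncertainty_score(set_2)

# Example 5 of Table 1: same inspector and laser tracker, CNC A vs. CNC B.
# delta = attribute_uncertainty(all_cylinder_inspections, "manufacturing_system",
#                               "CNC A", "CNC B",
#                               inspector="inspector A",
#                               metrology_device="laser tracker A")
```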
[0111] Turning now to optional blocks 420-440, an example application of the described uncertainty score will be described. In alternative processes, the MLM system can use the results of blocks 405-415 of the process 400 for other purposes, for example to recommend training for specific inspectors, to exclude specific inspectors from performing certain measurements, to exclude specific manufacturing systems from manufacturing certain parts, to exclude specific metrology hardware from measuring certain parts, to alert designated users that manufacturing system or metrology device uncertainty deviates from the calibration value provided by the machine manufacturer, or to provide an output comparison report of all inspectors for review by inspection management personnel.
[0112] At block 420, the analytics engine 150 can generate an uncertainty report representing feature-based uncertainty scores attributable to particular manufacturing systems, metrology devices, and/or inspectors or of combinations of manufacturing systems, metrology devices, and/or inspectors. These reports can include uncertainty scores attributable to the equipment and/or personnel of a single company or from a number of different suppliers in a supply chain relationship. These feature-based uncertainty scores can be sorted in some embodiments so that the report can be easily assessed by a user to identify the most precise manufacturing systems, metrology devices, and/or inspectors. For example, in some implementations block 420 can be performed by iterating between blocks 422 (example implementation discussed with respect to Figure 5) and 424 (example implementation discussed with respect to Figure 6) to solve for the identified variables. At block 422, the analytics engine 150 can identify inspector-specific uncertainty values using fixed machine uncertainty values. At block 424, the analytics engine 150 can identify machine-specific uncertainty values using fixed inspector uncertainty values.
[0113] At block 425, the analytics engine 150 can identify the geometric features and associated tolerances of a part. At decision block 430, the analytics engine 150 can compare the tolerances of the geometric features to feature-based uncertainty scores in one or more uncertainty reports to provide one or more recommendations regarding manufacturing setups capable of manufacturing and inspecting the part. A manufacturing setup can include specific manufacturing system, metrology device, and inspector combinations to implement the manufacturing and inspection lifecycle of the part. If any uncertainty score in a report exceeds a threshold percentage of a tolerance, the process 400 can move to block 435 to provide an indication regarding one or more inspection operators and/or combinations to exclude from manufacture and/or inspection of the part. This can beneficially stop parts from being scrapped due to being manufactured and/or inspected by machines or personnel who are not capable of creating or measuring within the designated tolerances. If no uncertainty score in a report exceeds a threshold percentage of a tolerance, the process 400 can move to block 440 to provide a recommendation regarding one or more inspection operators and/or combinations to manufacture and/or inspect the part. Block 440 can involve filtering, from the recommendation set, any manufacturing setups where the manufacturing system or metrology device is not suitable for manufacturing or inspecting the material of the part (e.g., certain systems may not be rigid enough to manufacture Invar parts, or optical metrology systems may not be suitable for inspecting highly reflective materials such as Kapton). The indications and recommendations can be managed by the multi-tenant manager 130 in some embodiments. The percentage of tolerance can be 10%, 30%, 50%, 70%, or 100%, to name a few examples, and can be varied based on balancing the specific needs of a particular manufacturing-inspection cycle. In some preferred examples, the combined uncertainty score of the manufacturing system, metrology device, and human inspector involved in making a particular geometric feature is 10% or less of the smallest tolerance specified for that feature in engineering schematics (e.g., where a certain part has three cylinders each associated with a different tolerance, the uncertainty score should be 10% or less of the smallest tolerance).
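A minimal sketch of the block 430/435/440 comparison follows, assuming pre-computed uncertainty scores and a 10% threshold. The setup dictionaries and the choice to sum the three uncertainty intervals into a combined score are illustrative assumptions; the disclosure does not prescribe a particular formula for combining the scores.

```python
# Illustrative sketch: recommend a manufacturing setup only if its combined
# uncertainty score is at most a threshold percentage (here 10%) of the
# smallest tolerance on the part. Names and values are assumptions.

def combined_uncertainty(setup):
    """Sum (an assumed combination rule) of the uncertainty intervals attributable
    to the manufacturing system, metrology device, and inspector in a setup."""
    return (setup["manufacturing_uncertainty"]
            + setup["metrology_uncertainty"]
            + setup["inspector_uncertainty"])

def recommend_setups(setups, feature_tolerances, threshold=0.10):
    smallest_tolerance = min(feature_tolerances)
    recommended, excluded = [], []
    for setup in setups:
        if combined_uncertainty(setup) <= threshold * smallest_tolerance:
            recommended.append(setup)   # block 440
        else:
            excluded.append(setup)      # block 435
    return recommended, excluded

# Example: three cylinder tolerances on a part (mm); recommend setups whose
# combined uncertainty is 10% or less of the tightest (0.010 mm) tolerance.
setups = [
    {"name": "CNC A / tracker A / inspector A", "manufacturing_uncertainty": 0.0004,
     "metrology_uncertainty": 0.0002, "inspector_uncertainty": 0.0003},
    {"name": "CNC B / tracker B / inspector D", "manufacturing_uncertainty": 0.002,
     "metrology_uncertainty": 0.001, "inspector_uncertainty": 0.004},
]
good, bad = recommend_setups(setups, feature_tolerances=[0.010, 0.020, 0.050])
```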
[0114] For example, when an OEM company creates or uploads engineering schematics of a part, the analytics engine can implement block 425 to analyze its geometric features and associated tolerances. In some implementations, the user of the OEM who uploads the part may or may not also be the user designated to manage the OEM's supply chain, e.g., by selecting suppliers (or in-house equipment and personnel) to manufacture particular parts. When the engineering schematics are uploaded, the MLM system can send a notification to the user(s) designated by the OEM as managing supply chain and/or part creation, wherein the notification includes an indication that new engineering schematics were uploaded and a user-selectable option to identify any in-house manufacturing setups or suppliers capable of manufacturing and inspecting the part. Upon user selection of this option, the analytics engine can determine capabilities of the in-house and supplier manufacturing setups relating to manufacturing and inspecting the geometric features of the part. In other examples, the MLM system may generate the described recommendations automatically upon detection of a new uploaded engineering schematic, or may automatically generate recommendations per user-specified rules, e.g., wanting to receive recommendations for all parts including cylinders.
[0115] To determine manufacturing setup capabilities, the analytics engine can access the user data repository 180 to identify known suppliers of the OEM (or a subset who have known capability for producing this type of part) and/or any in-house assets of the OEM. The analytics engine can implement blocks 405-420 to generate an uncertainty report representing the capability of inspectors, metrology devices, and/or manufacturing systems of the suppliers with respect to the identified features of the part. From there, the analytics engine can implement blocks 425, 430, and 440 to recommend one or more suppliers (and particular equipment and/or personnel at these suppliers) that can both manufacture and inspect the geometric features of the part within the designated tolerances.
[0116] As used herein, a manufacturing setup includes at least one manufacturing system, at least one metrology device, and at least one human inspector that will take raw materials through the stages of manufacturing and inspection. In some examples, a manufacturing setup includes a single manufacturing system, a single metrology device, and a single human inspector. In other examples, two or more manufacturing systems, metrology devices, or inspectors may be needed for different geometric features of a part. For example, some parts may require several different types of metrology devices to measure various geometric features - e.g., surface contours vs. thickness - and thus a manufacturing setup recommendation can include a complete set of the devices and associated inspectors that would be needed to manufacture and inspect the part. If the MLM system determines (e.g., based on predetermined inspection rules relating to particular metrology devices) that one metrology device cannot measure everything on a part, the MLM system can look for the devices that would best suit individual features.
[0117] As another example, when a manufacturing company (e.g., a supplier) is tasked with creating a part the analytics engine can implement block 425 to analyze its geometric features and associated tolerances, can access the user data repository 180 to identify manufacturing systems, metrology devices, and inspectors of the manufacturing company, can implement blocks 405-420 to generate an uncertainty report for the identified inspectors, metrology devices, and manufacturing systems with respect to the identified features, and can implement blocks 425, 430, and 440 to recommend one or more combinations of manufacturing systems, metrology devices, and inspectors at the manufacturing company that can both manufacture and inspect the geometric features of the part within the designated tolerances.
[0118] Turning now to Figure 5, an example process 500 for determining inspector capabilities by isolating the portion of measurement uncertainty attributable to the human inspector will be described in greater detail. The process 500 can be implemented by the analytics engine 150 in some embodiments. The feature-specific measurements in the data described with respect to Figure 5 can be aggregated, in some embodiments, from inspection data standardized via process 200. In some embodiments, the process 500 may use inspections of a number of geometric features conducted within a particular tolerance range (e.g., 1/1000th of an inch) to determine uncertainty within a particular tolerance range rather than for a particular geometric feature.
[0119] Process 500 can be used to identify human inspector capabilities per geometric feature, and some embodiments can get even more granular, for example identifying capabilities per feature in combination with specific metrology device(s) and/or manufacturing system(s). Preferably, each inspection in the MLM system 110 can be tied to a human inspector (user), metrology device used to inspect the part, and manufacturing system used to make the part. For a given inspector, all inspections they have taken across all parts and assemblies are tied to that inspector in the MLM system databases.
[0120] At block 505, the analytics engine 150 can identify an aggregated data set associated with an inspector for a geometric feature. The aggregated data set can include a number of measurements taken by the inspector of the geometric feature. The geometric feature measurements can involve the same part, different parts in a run, and/or parts manufactured based on different models. Each measurement can be associated with a measured value, a nominal value for that part surface or portion, and one or more tolerance limits. Block 505 can be performed, in some embodiments, after the analytics engine 150 identifies that a threshold number of inspections of the feature have been obtained by the inspector. In one example the threshold can require the aggregated data set to include at least 50-100 inspections of the feature.
[0121] At block 510, the analytics engine 150 can perform accuracy calculations to determine a first portion of the uncertainty score of the inspector.
[0122] At block 510A, for each measurement in the aggregated data, the analytics engine 150 can calculate the deviation of the measurement from the nominal value and can then calculate the percentage of that deviation from the associated tolerance. In instances where there is both an upper tolerance limit and a lower tolerance limit, the deviation can be calculated as a percentage of the upper tolerance limit if the deviation is above nominal or can be calculated as a percentage of the lower tolerance limit if the deviation is below nominal.
[0123] At block 510B, the analytics engine 150 can calculate the mean of the percentage deviations of tolerance calculated at block 510A. Blocks 510A and 510B can be programmatically combined into a single function in some embodiments. As used herein, the calculation of a "mean" represents one way to determine an aggregate value for a particular variable. In alternate implementations, root mean squared or standard deviation can be used instead of mean.
[0124] At block 510C, the analytics engine 150 can calculate the mean tolerance based on the absolute values of the tolerances factored into the calculation of the mean percentage deviation of tolerance.
[0125] At block 510D, the analytics engine 150 can store the mean percentage deviation of tolerance and the mean tolerance in association with the inspector in user data repository 180.
[0126] To illustrate block 510, consider the following example. Inspector A measures cylinder 1 at 7/1000 of an inch deviation from nominal, and the associated tolerance for cylinder 1 is +/- 10/1000 inches (plus or minus ten thousandths of an inch). Inspector A also measures cylinder 2 at -6/1000 of an inch deviation from nominal, and the associated tolerance for cylinder 2 is +/- 20/1000 inches. The mean percentage deviation of tolerance can be calculated as follows: .007/.01=70%; -.006/-.02=30%; (70%+30%)/2=50%. The mean tolerance can be calculated as (| 10/1000|+|-20/1000|)/2= 15/1000 of an inch.
[0127] Thus, in this example, the mean percentage deviation of tolerance of inspector A for cylinders is 50% and the mean tolerance is 15/1000 of an inch. Inspector A's accuracy score can be stated as 50% of a 15/1000 inch tolerance, meaning that, on average, inspector A can be expected to measure at 50% of a 15/1000 inch tolerance.
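The block 510 calculations of paragraphs [0122]-[0127] can be expressed in a few lines. The sketch below simply reproduces the worked example for inspector A; the data layout and variable names are assumptions.

```python
# A minimal sketch of the block 510 accuracy calculation, reproducing the
# worked example for inspector A above. Field names are assumptions.

from statistics import mean

measurements = [
    {"deviation": 0.007, "tolerance": 0.010},   # cylinder 1: +7/1000 in, +/- 10/1000 in
    {"deviation": -0.006, "tolerance": 0.020},  # cylinder 2: -6/1000 in, +/- 20/1000 in
]

# Blocks 510A/510B: each deviation as a percentage of its tolerance, then the mean.
pct_deviation_of_tolerance = mean(abs(m["deviation"]) / m["tolerance"] for m in measurements)

# Block 510C: mean of the absolute tolerances factored into the calculation above.
mean_tolerance = mean(abs(m["tolerance"]) for m in measurements)

print(pct_deviation_of_tolerance)  # 0.5   -> 50%
print(mean_tolerance)              # 0.015 -> 15/1000 of an inch
```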
[0128] At block 515, the analytics engine 150 can calculate an uncertainty score for inspector A associated with each geometric feature in the aggregated data. The uncertainty score can be represented as an interval of measurements in which inspector A can be expected to measure.
[0129] At block 515A, the analytics engine 150 can calculate a mean upper deviation value and a mean lower deviation value for measurements taken of the feature.
[0130] In GD&T, features of size are specified with both upper and lower tolerance values relative to a nominal value. For such features, at block 515A the analytics engine 150 can calculate both (1) mean deviation above nominal based on the values and number of deviations above nominal and (2) mean deviation below nominal based on the values and number of deviations below nominal.
[0131] However, in some circumstances a data set associated with an inspector and a feature of size may include only measurements above nominal or measurements below nominal. Further, in GD&T, features of position or positional features are specified with only a single unsigned tolerance value. For example, the feature position can be a vector axis of the geometric feature, and the associated tolerance provides a cylindrical tolerance zone around the feature axis. As another example, the feature position can be a point within the geometric feature, and the associated tolerance provides a spherical tolerance zone around the feature point.
[0132] In both of these circumstances, at block 515A the analytics engine 150 can first calculate the mean "upper" deviation based on the mean of the measurement deviations. Analytics engine 150 can then calculate the mean "lower" deviation based on the mean of the measurement deviations for measurements falling between nominal and the mean "upper" deviation. In the circumstance where a feature of size is only associated with measurement deviations below nominal, the mean "upper" and "lower" deviations can be negative values.
[0133] At block 515B, the analytics engine 150 can set an initial uncertainty range between the calculated mean upper and lower deviation values.
[0134] At block 515C, the analytics engine 150 can calculate mean machine uncertainty scores. The mean machine uncertainty scores include both (1) a mean manufacturing uncertainty score generated based on the uncertainty score of the manufacturing system associated with each feature measurement, and (2) a mean metrology uncertainty score generated based on the uncertainty score of the metrology device associated with each feature measurement. It will be understood that the associated manufacturing system was used to manufacture the feature and the associated metrology device was used to measure the feature. In some circumstances different manufacturing systems can be used to manufacture different geometric features of the same part, for example by swapping out different cutters in a CNC. These uncertainty scores can be represented as an interval of probable part measurements associated with the machine's manufacture/inspection.
[0135] At block 515D, the analytics engine 150 can adjust the initial range by subtracting the mean manufacturing system interval and the mean metrology device interval from the initial range. Block 515D operates to remove "known" machine uncertainty from the estimated measurement uncertainty attributable to the inspector. This adjusted mean range - the span of the interval between the mean upper and mean lower deviations minus estimated machine uncertainties - is the range in which the inspector can be expected to measure that feature in the future. As an example, consider that the initial mean range for inspector A is 0.011 mm, the mean range for the manufacturing systems is 0.004 mm, and the mean range for the metrology devices is 0.002 mm. The portion of the interval that is likely attributable to inspector A is 0.005 mm.
[0136] In some embodiments, block 515D can optionally include adjusting the mean range (minus machine uncertainty) to account for environmental variables including CTE (coefficient of thermal expansion), machining/inspecting setup, and machining/inspecting temperatures. One embodiment of the process 500 can attribute 50% of the remaining uncertainty to environmental variables. Considering the above example, the portion of the interval attributed to inspector A would be reduced to 0.0025 mm. In some circumstances, data associated with the inspections in the MLM system 110 can indicate that machining and/or inspection of parts in the aggregated data sets were performed in temperature-controlled environments, for example at recommended shop temperatures of 68 degrees Fahrenheit, and the percentage attributed to environmental factors may be reduced. In other circumstances temperature variations during and/or between manufacture and inspection can be known and the percentage attributed to environmental factors may be increased accordingly.
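The arithmetic of blocks 515B-515D, including the optional environmental adjustment, is summarized in the following sketch using the example values from the two preceding paragraphs. Variable names are assumptions; the 50% environmental share is the one embodiment stated above, not a required value.

```python
# Illustrative sketch of blocks 515B-515D using the example figures above (mm).

initial_range = 0.011            # block 515B: span between mean upper and mean lower deviations
mean_manufacturing_range = 0.004 # mean manufacturing system uncertainty interval
mean_metrology_range = 0.002     # mean metrology device uncertainty interval

# Block 515D: remove the "known" machine uncertainty from the interval.
inspector_range = initial_range - mean_manufacturing_range - mean_metrology_range  # 0.005 mm

# Optional adjustment: attribute 50% of the remainder to environmental variables
# (CTE, machining/inspecting setup, temperatures) in this embodiment.
environmental_share = 0.5
inspector_range_adjusted = inspector_range * (1 - environmental_share)  # 0.0025 mm
```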
[0137] At block 515E, the analytics engine 150 can store the adjusted range in association with the inspector and the feature in the user data repository 180 as the inspector's uncertainty score for the specific geometric feature.
[0138] Blocks 515A-515E can be repeated for each geometric feature. The calculations of block 515 do not account for whether the features are from the same part or different parts, as each feature measurement is treated as a separate piece of data. In some embodiments, the aggregated data set can be partitioned based on feature and further based on size of feature, associated manufacturing systems and/or associated measurement devices, and blocks 515A-515E can be repeated for each partitioned data set.
[0139] In some embodiments, the analytics engine 150 can calculate a mean of the uncertainty scores generated by block 515 for a number of different geometric features. This mean can represent a global uncertainty score for the inspector and can be used in the MLM system 110 as a metric of the inspector's capabilities, for example to recommend the inspector for certain projects or rank the inspector relative to other inspectors. Preferably, this global uncertainty score would be calculated across all geometric features, manufacturing systems, and metrology devices.
[0140] Such inspector-specific uncertainty calculations are not present in existing manufacturing-inspection analysis systems. In existing systems, decisions are made based only on inspecting device uncertainty and manufacturing machine uncertainty.
[0141] Optional block 520 can be performed to utilize the determined uncertainty scores of the inspector within a manufacturing-inspection environment. As described above, the uncertainty score can include at least three values: mean percentage deviation of tolerance, mean tolerance, and at least one uncertainty interval. As described herein, instead of the mean, other metrics may be used, for example root mean squared or standard deviation.

[0142] In some implementations, the feature-specific uncertainty measures for the human inspectors (and any updated uncertainty measures for machining systems or metrology devices) may be pre-computed in advance of determining to generate a capability recommendation for a particular part at block 520. Beneficially, this can enable the system to deliver the recommendation to the designated user while avoiding excessive latency that may result from performing the disclosed uncertainty-measure-isolation calculations in real time. For example, if an OEM has a hundred in-house manufacturing setups and thousands of suppliers, with each supplier having multiple manufacturing setups, attempting to calculate the feature-specific measurement uncertainties for each component of each manufacturing setup in real time (e.g., responsive to a manufacturing setup recommendation request) can result in severe delays in delivering the recommendation. In contrast, accessing pre-computed uncertainty measures enables a vast range of manufacturing setups to be evaluated in real time in comparison to the engineering specifications of a particular part to be manufactured and/or inspected. In some implementations, however, at least some uncertainty measures may be calculated during recommendation generation.
[0143] At block 520A, the analytics engine 150 can identify the tolerance range associated with a feature to be measured. This can be a range between upper and lower tolerance values for features of size or a range between nominal and tolerance for features of position.
[0144] At block 520B, the analytics engine 150 can multiply the tolerance range by the inspector percentage deviation of tolerance calculated at block 510B.
[0145] At block 520C, the analytics engine 150 can take the multiplied tolerance range and adjust the range by the inspector uncertainty range for the feature as calculated at block 515D. This can represent an expected range of measurements that the inspector could obtain when measuring the feature.
[0146] At block 520D, the analytics engine 150 can determine whether the expected range is within the specified tolerance. If so, the process 500 moves to block 520E and the inspector can be recommended for measuring the feature, as the inspector is capable of measuring the part in tolerance assuming that the part is, in fact, within tolerance. If not, the process 500 moves to block 520F and the inspector can be recommended to not measure the feature, as measurements obtained by the inspector are likely to be out of tolerance even if the actual part is within tolerance. Such recommendations can be output by the multi-tenant manager 130 to one or more designated users.
[0147] To illustrate the calculations of block 520, consider the following example. Suppose the tolerance associated with a cylinder is 0.010 mm, the inspector has a 70% deviation of tolerance percentage, and the uncertainty range of the inspector for cylinders is 0.002 mm. By multiplying 70% and the 0.010 mm tolerance, the inspector is expected to measure the cylinder at 0.007 mm. The inspector's uncertainty range, 0.002 mm, is divided by 2 to yield an uncertainty range of plus or minus 0.001 mm from the expected measurement. Thus, the expected range of measurements by this inspector for this cylinder is 0.006 mm - 0.008 mm. The entirety of this range falls within the 0.010 mm tolerance specification, so this inspector would be recommended as capable of measuring this cylinder.
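A short sketch of the block 520A-520D capability check, reproducing the cylinder example above, is shown below. Centering the inspector's interval on the expected measurement (dividing the range by two) follows the example; comparing the expected deviation against a single tolerance value is a simplification for illustration, and the variable names are assumptions.

```python
# Sketch of the block 520A-520D capability check, using the example values above (mm).

tolerance = 0.010                  # feature tolerance, block 520A
pct_deviation_of_tolerance = 0.70  # inspector score from block 510B
inspector_range = 0.002            # inspector uncertainty interval from block 515D

expected_measurement = pct_deviation_of_tolerance * tolerance      # 0.007 mm, block 520B
half_range = inspector_range / 2                                   # +/- 0.001 mm
expected_low = expected_measurement - half_range                   # 0.006 mm
expected_high = expected_measurement + half_range                  # 0.008 mm

# Block 520D: recommend the inspector only if the whole expected range is in tolerance
# (here the values are deviations from nominal compared against a single tolerance).
capable = 0 <= expected_low and expected_high <= tolerance   # True -> recommend (block 520E)
```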
[0148] Consider another example showing an alternate embodiment of block 520 to identify an uncertainty associated with an existing measurement. Suppose inspector A measures a cylinder at 1.005 inches, nominal for the measurement is 1.000 inches, and tolerance is plus or minus 0.006 inches. The analytics engine 150 could attach the uncertainty score to the measured deviation, so that in this example the uncertainty for this measurement is 1.005 inches +/- .00125 inches, and the measurement could reasonably be anywhere between 1.00375 inches and 1.00625 inches. This measurement cannot be determined with certainty to be in tolerance, because everything in the range of 1.006-1.00625 is above the upper tolerance limit.
[0149] Some embodiments of the process 500 can use inspector uncertainty scores to recommend training of one or more inspectors using specific hardware and/or measuring specific geometric features in order to help inspectors improve their measurement accuracy. For example, if inspectors A, B, and C measure the same part and inspector A is at 30% deviation of tolerance, B is at 50% deviation of tolerance, and C is at 75% deviation of tolerance (on the same manufactured part or generally), then their relative uncertainty scores reveal who needs more training.
[0150] Another use of the inspector uncertainty scores includes the machine uncertainty refinement described with respect to Figure 6. In one embodiment, inspector scores can first be calculated using equipment/device uncertainty data from manufacturer-provided calibration data to get preliminary inspector scores. Once the inspector scores stabilize in the system, the analytics engine can use the stabilized scores to identify whether equipment/device uncertainty corresponds to that provided by the manufacturer, and can use actual equipment/device uncertainty to refine inspector uncertainties. This feedback loop is discussed more with respect to Figure 7.
[0151] It will be appreciated that blocks 505-515 of the process 500 can be updated based on newly acquired inspection data. In some embodiments, the process 500 can be repeated in real time as new inspection data is acquired. In some embodiments, the process 500 can be repeated at predetermined intervals, for example once per day or once per week. In some embodiments, the process 500 can be repeated once a threshold amount of new inspection data is acquired, for example after every 50 or 100 new inspections for a feature. In some embodiments, the process 500 can be executed in response to a user request that requires generating an uncertainty score.
[0152] Figure 6 illustrates an example process 600 for generating uncertainty scores associated with a target machine in a manufacturing-inspection environment, where the target machine can be a manufacturing system or a metrology device. The process 600 can be implemented by the analytics engine 150 in some embodiments. The feature-specific measurements in the data described with respect to Figure 6 can be aggregated, in some embodiments, from inspection data standardized via process 200.
[0153] At block 605, the analytics engine 150 can identify inspection data associated with a target machine. For a manufacturing system, this can include inspection data sets of parts manufactured using the equipment. For a metrology device, this can include inspection data sets of parts inspected by the metrology device.
[0154] Analytics engine 150 can perform block 610 for each inspection in the inspection data set to determine whether or not to include the inspection in the aggregated data used in block 615. At decision block 610A, the analytics engine 150 can determine whether there is an uncertainty score associated with the inspector of the inspection data set. The uncertainty score for the inspector can be calculated as described with respect to the process 500 or by any other suitable calculations. In some embodiments, decision block 610A can include simply identifying whether there is an uncertainty score associated with the inspector for that feature in the user data repository 180. Inspectors may not have associated uncertainty scores in circumstances in which insufficient inspection data has been collected by the inspector for that feature.
[0155] In some embodiments, block 610A can include determining whether the inspector's uncertainty score has stabilized and including the inspection data in the aggregated data set only for stabilized scores. For example, analytics engine can determine that the uncertainty score varies less than a threshold percentage over a period of time or over a number of updated calculations. As another example, analytics engine can determine that the uncertainty score stops varying beyond a predetermined decimal place, for example the fifth decimal place for measurements in inches or the third decimal place for measurements in millimeters.
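Either stabilization criterion described above can be checked with a simple helper such as the following hypothetical sketch; the window size, threshold, and function names are assumptions rather than values from the disclosure.

```python
# Hypothetical sketch of the block 610A stabilization test.

def is_stabilized(score_history, rel_threshold=0.01, window=5):
    """True if the last `window` scores vary by less than rel_threshold
    relative to the most recent score."""
    if len(score_history) < window:
        return False
    recent = score_history[-window:]
    latest = recent[-1]
    if latest == 0:
        return max(recent) == min(recent)
    return (max(recent) - min(recent)) / abs(latest) < rel_threshold

# Alternative criterion from the text: scores stop varying beyond a fixed
# decimal place (e.g., the fifth decimal for inches, the third for millimeters).
def is_stabilized_decimal(score_history, decimals=5, window=5):
    recent = [round(s, decimals) for s in score_history[-window:]]
    return len(score_history) >= window and len(set(recent)) == 1
```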
[0156] In some embodiments, process 600 can use and/or be limited to data from inspections performed by coordinate measurement machines (CMMs) operated without human inspectors, as such devices typically have known negligible uncertainty intervals of only a few microns. As such, some implementations of block 610A can identify whether the inspector of an inspection is a CMM, and if so may include the inspection in the aggregated data.
[0157] Though not illustrated, some embodiments may partition aggregated data into multiple sets to provide a number of uncertainty scores for the target machine. For example, the aggregated data can be partitioned into two or more sets based on temperature at which the parts were manufactured and/or inspected in order to provide temperature-specific uncertainty scores for the target machine. In other examples, the aggregated data can be partitioned into two or more sets based on part material, machine tooling (diamond cutters, carbide cutters, high speed steel cutters, and the like), machine programming, proximity to massive objects (such as mountains, as the gravitational field can pull a non-digital gauge in a metrology device towards it thus affecting the resultant measurements), and the like.
[0158] Once all inspections have passed through block 610, the analytics engine 150 can move to block 615. In some embodiments the measurements in the aggregated data can be associated with a specific geometric feature, and block 615 can be repeated using the measurements for each feature in the aggregated data. At block 615, the analytics engine 150 can calculate a deviation of each measurement from the associated nominal value and calculate a mean value of these deviations. Analytics engine 150 can further calculate a mean inspector uncertainty based on the uncertainty scores of the inspectors who contributed to the measurements of the geometric feature under consideration and the number of inspectors who contributed. Analytics engine can then subtract the mean inspector uncertainty from the mean deviation to calculate the uncertainty score of the target machine for that geometric feature. If process 600 is determining a manufacturing uncertainty interval of a manufacturing system, block 615 can also include calculating a mean of values representing default and/or calculated uncertainty for metrology devices associated with the aggregated data and subtracting this value together with the mean inspector uncertainty score. If process 600 is determining the measurement uncertainty interval of a metrology device, block 615 can also include calculating a mean of values representing default and/or calculated uncertainty for manufacturing systems associated with the aggregated data and subtracting this value together with the mean inspector uncertainty score.
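The block 615 calculation for a metrology device as the target machine might be sketched as follows. The record fields are assumptions; the arithmetic follows the text, subtracting the mean inspector uncertainty and the mean uncertainty of the non-target machine class from the mean deviation from nominal.

```python
# Illustrative sketch of block 615 for a metrology device as the target machine.

from statistics import mean

def metrology_device_uncertainty(records):
    """records: inspections of one geometric feature taken with the target device.
    Each record carries measured/nominal values, the inspector's uncertainty
    score, and the manufacturing system's default or calculated uncertainty."""
    mean_deviation = mean(abs(r["measured"] - r["nominal"]) for r in records)
    mean_inspector_unc = mean(r["inspector_uncertainty"] for r in records)
    mean_manufacturing_unc = mean(r["manufacturing_uncertainty"] for r in records)
    return mean_deviation - mean_inspector_unc - mean_manufacturing_unc
```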
[0159] After calculating the uncertainty score of the target machine for each geometric feature, the analytics engine 150 can move to block 620 to calculate the mean of the uncertainty scores.
[0160] At block 625, this mean of uncertainty scores is stored in association with the machine (manufacturing system or metrology device) in the asset portfolio of the user data repository 180.
[0161] Optionally, at block 630, the analytics engine 150 can cooperate with the multi-tenant manager to provide an alert to a designated user based on the mean uncertainty. For example, the alert can include an alert that the machine requires recalibration and/or that operation of the machine should be halted. Additionally or alternatively, the analytics engine 150 can output a command through the network 108 to the machine or to a computing device operating the machine, where the command halts operation of the machine. This can include ceasing manufacture using a manufacturing system or disabling a metrology device for use in further inspections. Such an alert or command can be based on the analytics engine 150 (1) comparing the mean of uncertainty scores to the default uncertainty score in the calibration data of the machine and (2) identifying that the mean of uncertainty scores is 30% or more greater than the default uncertainty score. Additionally or alternatively, such an alert or command can be based on the analytics engine 150 (1) performing outlier detection on a data set including a number of uncertainty scores calculated for the machine and (2) determining that the mean of uncertainty scores is an outlier of the data set.

[0162] Another use of a machine uncertainty score obtained via the process 600 includes making determinations about manufacturing system capability to produce in-tolerance parts. For example, before manufacturing a part the analytics engine 150 can determine, for any feature of the part, whether the uncertainty range of any equipment in the asset portfolio associated with a user exceeds the specified feature tolerance range. If the tolerance range is exceeded by machine uncertainty, the MLM system 110 can alert a designated user to not use that equipment for manufacturing the part.
[0163] Another use of a machine uncertainty score obtained via the process 600 includes making determinations about metrology device capability to generate in-tolerance measurements of parts. For example, before inspecting a part the analytics engine 150 can determine, for any feature of the part, whether the uncertainty range of any metrology device in the asset portfolio associated with a user exceeds the specified feature tolerance range. If the tolerance range is exceeded by metrology device uncertainty and/or the uncertainty of a combination of metrology device and a specific inspector, the MLM system 110 can alert a designated user to not use that device and/or inspector for measuring the part.
[0164] The MLM system 110 can use the uncertainty scores output from the processes 500, 600 to provide recommendations for specific manufacturing system, metrology device, and inspector combinations for handling the manufacturing-inspection lifecycle of a part. In some embodiments, the MLM system can recommend combinations having a combined uncertainty range that is at most 30% of tolerance for a part or of any feature of the part.
[0165] Figure 7 depicts an example feedback loop for refining inspector and machine uncertainty scores. The feedback loop can be implemented by the analytics engine 150 in some embodiments, and may be pre-computed in advance of preparing particular manufacturing setup recommendations. Data described with respect to Figure 7 can be aggregated, in some embodiments, from inspection data standardized via process 200.
[0166] At block 705, the analytics engine 150 can calculate inspector uncertainty scores representing an interval of likely actual measurement values for parts measured by each of a number of inspectors. The uncertainty score for each inspector can be calculated as described with respect to blocks 505-515 of process 500 in some embodiments. As described above, the uncertainty score associated with an inspector can be adjusted to remove machine uncertainty scores. The machine uncertainty scores represent the measurement uncertainty likely attributable to the manufacturing systems used to manufacture the measured parts and the metrology devices used to perform the inspection, which can initially be obtained from the machines' calibration data or can be updated based on obtained metrology data via process 600. Thus, the inspector uncertainty scores are generated based on current machine uncertainty scores.
[0167] The analytics engine can take the uncertainty scores calculated at block 705 and feed this data 710 into block 715. At block 715, the analytics engine 150 can calculate machine uncertainty scores representing an interval of likely actual measurement values for parts manufactured by each of a number of machines and/or measured by each of a number of metrology devices. The uncertainty score for each machine can be calculated as described with respect to blocks 605-625 of process 600 in some embodiments. As described above, machine uncertainty scores can be calculated based on mean deviation from nominal minus inspector uncertainty or a fraction of inspector uncertainty. Thus, the machine uncertainty scores are generated based on current inspector uncertainty scores.
[0168] The analytics engine can take the machine uncertainty scores calculated at block 715 and feed this data 720 back into block 705. As machine uncertainty scores are refined, the uncertainty scores of inspectors whose inspection data sets involve these machines are refined by re-calculating the inspector uncertainty based on the updated machine uncertainty scores.
[0169] The analytics engine can take the updated inspector uncertainty scores calculated at block 705 and feed this data 710 back into block 715. As inspector uncertainty scores are refined, the uncertainty scores of machines having inspection data sets involving these inspectors are refined by re-calculating the machine uncertainty based on the updated inspector uncertainty scores.
[0170] The feedback loop can be initiated by the MLM system periodically, for example once per day, or can be executed as new inspection data is obtained. The feedback loop can continue in some embodiments until convergence. Convergence can be defined as a cessation of change, within a threshold level, in uncertainty scores between successive iterations of blocks 705 and 715. The order of magnitude of the threshold level can be set based on metrology device resolution in some implementations, for example at the fifth decimal place of measurement values in inches or the third decimal place of measurement values in millimeters. In other embodiments the feedback loop can run for a number of iterations of blocks 705 and 715, for example 10 iterations, 50 iterations, 100 iterations, or more. The number of iterations can also be dynamically determined based on the amount of new inspection data relative to a previous run of the feedback loop, for example by performing one iteration for every 10, 50, or 100 new inspections.
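A schematic Python version of the Figure 7 feedback loop is shown below. The two update functions stand in for the Figure 5 and Figure 6 processes, and the convergence threshold and iteration cap are illustrative; none of the names come from the disclosure.

```python
# A minimal sketch of the Figure 7 feedback loop: alternate between the inspector
# score calculation (block 705) and the machine score calculation (block 715)
# until neither changes beyond a threshold or an iteration cap is reached.

def run_feedback_loop(inspections, inspector_scores, machine_scores,
                      update_inspectors, update_machines,
                      threshold=1e-5, max_iterations=100):
    for _ in range(max_iterations):
        new_inspector_scores = update_inspectors(inspections, machine_scores)    # block 705
        new_machine_scores = update_machines(inspections, new_inspector_scores)  # block 715
        converged = all(
            abs(new_inspector_scores[k] - inspector_scores.get(k, 0.0)) < threshold
            for k in new_inspector_scores
        ) and all(
            abs(new_machine_scores[k] - machine_scores.get(k, 0.0)) < threshold
            for k in new_machine_scores
        )
        inspector_scores, machine_scores = new_inspector_scores, new_machine_scores
        if converged:
            break
    return inspector_scores, machine_scores
```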
[0171] In some embodiments, the feedback loop can be feature-specific, and multiple versions of the feedback loop can be performed sequentially or in parallel to refine feature-specific inspector and machine uncertainty scores. Some embodiments of the feedback loop may use global uncertainty scores for inspectors and machines, for example a mean uncertainty score across all features, and can adjust known feature-specific scores based on the finalized global scores output from the feedback loop.
Overview of Example Machine Learning Systems and Techniques
[0172] Some embodiments of the MLM system 110 can use machine learning to be able to automatically predict certain conditions based on relationships in manufacturing data. Computing devices can use machine learning models representing data relationships and patterns, such as functions, algorithms, systems, and the like, to process input (sometimes referred to as an input vector), and produce output (sometimes referred to as an output vector) that corresponds to the input in some way. In some implementations, a model is used to generate a likelihood or set of likelihoods that the input corresponds to a particular value. For example, artificial neural networks, including deep neural networks, may be used to solve pattern-recognition problems that are difficult to solve using rule-based models.
[0173] Artificial neural networks are artificial in the sense that they are computational entities, analogous to biological neural networks in animals, but implemented by computing devices. A neural network typically includes an input layer, one or more intermediate layers, and an output layer, with each layer including a number of nodes. The nodes in each layer connect to some or all nodes in the subsequent layer and the weights of these connections are typically learnt from data during the training process. Each individual node may have a summation function which combines the values of all its inputs together. A node may be thought of as a computational unit that computes an output value as a function of a plurality of different input values. Nodes may be considered to be "connected" when the input values to the function associated with a current node include the output of functions associated with nodes in a previous layer, multiplied by weights associated with the individual "connections" between the current node and the nodes in the previous layer.
[0174] Specifically, nodes of adjacent layers may be logically connected to each other, and each logical connection between the various nodes of adjacent layers may be associated with a respective weight. The weighting allows certain inputs to have a larger magnitude than others (e.g., an input value weighted by a 3x multiplier may be larger than if the input value was weighted by a 2x multiplier). This allows the model to evolve by adjusting the weight values for inputs to the node, thereby affecting the output for one or more hidden nodes. In training the model, an optimal set of weight values is identified for each node that provides a model having, for example, a desired level of accuracy in generating expected outputs for a given set of inputs. When processing input data in the form of a vector such as one or more feature vectors containing information extracted from portions of the input data, a neural network may multiply each input vector by a matrix representing the weights associated with connections between the input layer and the next layer, and then repeat the process for each subsequent layer of the neural network.
[0175] A neural network is a type of feed-forward machine learning model. The parameters of a neural network can be set in a process referred to as training. For example, a neural network can be trained using training data that includes input data and the correct or preferred output of the model for the corresponding input data. Sets of individual input vectors ("mini-batches") may be processed at the same time by using an input matrix instead of a single input vector. The neural network can repeatedly process the input data, and the parameters of the network (e.g., the weight matrices) can be modified in what amounts to a trial-and-error process until the neural network produces (or "converges" on) the correct or preferred output. The modification of weight values may be performed through a process referred to as "back propagation." Back propagation includes comparing the obtained model output with the expected model output and then traversing the model to determine the difference between the expected node output that produces the expected model output and the actual node output. An amount of change for one or more of the weight values may be identified using this difference to reduce the difference between the expected model output and the obtained model output. For example, back-propagation compares the output produced by a node (e.g., by applying a forward pass to input data) with an expected output from the node (e.g., the expected output defined during training). This difference can generally be referred to as a metric of "error." The difference of these two values may be used to identify weights that can be further updated for a node to more closely align the model result with the expected result.
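For readers unfamiliar with the mechanics described above, the following generic sketch (not the disclosed model) shows a forward pass and a back-propagation update for a single hidden layer using numpy; the layer sizes, tanh activation, and learning rate are arbitrary choices for illustration only.

```python
# Generic illustration of a forward pass and back-propagation for one hidden layer.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2):
    """Multiply the input by the weight matrices of successive layers."""
    hidden = np.tanh(x @ w1)
    output = hidden @ w2
    return hidden, output

def train_step(x, y, w1, w2, lr=0.01):
    """One trial-and-error update: compare the obtained output to the expected
    output and adjust the connection weights to reduce the difference (the error)."""
    hidden, output = forward(x, w1, w2)
    error = output - y
    grad_w2 = hidden.T @ error
    grad_hidden = (error @ w2.T) * (1 - hidden ** 2)   # back-propagate through tanh
    grad_w1 = x.T @ grad_hidden
    return w1 - lr * grad_w1, w2 - lr * grad_w2

# A mini-batch of input vectors processed together as a matrix.
x = rng.normal(size=(8, 4))        # 8 examples, 4 input features
y = rng.normal(size=(8, 1))        # expected outputs
w1 = rng.normal(size=(4, 16)) * 0.1
w2 = rng.normal(size=(16, 1)) * 0.1
for _ in range(100):
    w1, w2 = train_step(x, y, w1, w2)
```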
[0176] A predicted output can be obtained by performing a forward pass using new input data provided to a trained model. The forward pass involves multiplying the large weight matrices representing the connection weights between nodes of adjacent layers by vectors corresponding to one or more feature vectors (from the input layer) or hidden representations (from the subsequent hidden node layers). A neural network according to the present disclosure is trained to recognize patterns in the inspection data of parts in a run in order to predict whether a future, yet-to-be-manufactured part in the run will be out-of-tolerance. Beneficially, such techniques allow manufacturers to adjust their manufacturing systems to avoid the financial, material, time, and energy costs of waste resulting from parts that require reworking or scrapping. Previous statistical process control systems identify problems through trends in past inspections and therefore lack the tools to enable manufacturers to predict and prevent scrap in-line with real-time manufacturing processes.
[0177] As used herein, "real-time" analysis of a manufacturing process refers to analysis of inspections of parts as the inspections are generated. The inspected parts are typically inspected shortly after their manufacture, for example before creation of a next part based on the same model as the inspected part. Some processes may begin manufacture of the next part during inspection of the previous part. Some processes may inspect every part created, while other processes may inspect a regular or periodic sampling of parts created (e.g., every five parts, every ten parts, etc.). For purposes of the discussion below, the generation of the training and input data can omit consideration of non-inspected parts.
[0178] Turning now to Figure 8A, depicted is an example wireframe view 801 of a computer model of a part together with GD&T data for the part, which is provided to illustrate and not limit the examples presented herein regarding training a neural network to predict out-of-tolerance parts. It will be appreciated that the features, properties, and tolerances depicted are for purposes of example, and the disclosed machine learning can be applied to manufacturing processes for any type of part.

[0179] The part shown in Figure 8A comprises a sphere, a rectangular base having two holes, and a frustoconical support member connecting the sphere to the base. The GD&T or other inspection data for determining whether this part conforms to tolerances is depicted for a number of geometric features of the part, including the sphere, three planes, one cylinder's diameter, and the cylinder's center position. Each of these features can be measured, for example, using a datum having three properties - location of a point in x, y, z space (where each of the x, y, and z coordinates is a property). Other properties can include radius, diameter, position, profile, and maximum material condition (MMC), to name a few examples. The inspection data can specify a tolerance for each feature, or can specify separate tolerances for the properties of each feature.
[0180] Figure 8B depicts an example timeline 805 of different parts A1-A8 in a run that were manufactured based on the model shown in Figure 8A at different times T1-T8. A "run" is a set of sequentially-manufactured parts that are based on the same model and made by the same manufacturing system. As used herein with respect to parts in a run, "sequential" and "sequentially" refer to parts that were manufactured in a particular temporal order. A "lot" can be considered as a set of parts manufactured based on the same model, but not necessarily by the same manufacturing system. In the illustrated example, part A1 was produced first, followed sequentially by A2-A8. Sequential production refers to the fact that time T8 follows (is temporally after) time T7, time T7 follows time T6, time T6 follows time T5, and so on.
[0181] The inspection reports of parts A1-A8 can be used to train a machine learning model as described below. In one example, parts A1-A8 can be selected for use in training data because they were manufactured successively, that is, with manufacture of part A2 consecutively following manufacture of part A1, manufacture of part A3 consecutively following manufacture of part A2, and so on. In another example, other parts in the run may have been manufactured between various pairs of parts in the A1-A8 sequence. For instance, part A7 may have been manufactured consecutively before part A8, part A6 may have been manufactured five parts before part A7, part A5 may have been manufactured ten parts before part A6, and so on. Other spacings between parts in the dataset can be used in other examples.
[0182] In some examples, a first subset of parts A1-A7 was manufactured successively while, for a second subset, other parts in the run were manufactured between sequentially adjacent parts of the subset. For example, parts A5-A7 may be successively manufactured before part A8, while part A4 was manufactured five parts before part A5, part A3 was manufactured five parts before part A4, part A2 was manufactured ten parts before part A3, and part A1 was manufactured fifteen parts before part A2. By having a blend of successively-manufactured parts preceding part A8 as well as increasingly spaced apart parts manufactured before the successively-manufactured subset, a training or input data set can reflect both recent manufacturing conditions as well as more distant manufacturing conditions.
[0183] Figure 8C depicts a visual representation of an example set of training inspection data 810 that can be used to train a machine learning model to predict out-of-tolerance parts before their manufacture. Training inspection data 810 can include inspection reports of different parts A1-A8 in the run 805. An input data set 815 includes a sequence of the inspection reports of in-tolerance parts A1-A7, and output data 820 includes the inspection report of out-of-tolerance part A8.
[0184] An inspection report can have a number of different features corresponding to geometric features of the part, illustrated as surface 1 and surface 2. Each feature can have at least one property, illustrated as dimensions in the x, y, and z directions. The properties can be stored in association with a measurement value (the actual measured value of an inspected part) and with GD&T information including lower tolerance, nominal value, and upper tolerance. For example, each property can be stored in association with a tuple of the measurement value and GD&T values. Some embodiments can omit the nominal value, or can include the nominal value and have allowable deviations in the place of upper and lower tolerance values in the tuple. The features and properties, and the numbers of features and properties, can vary based on the geometry and GD&T of a particular part. Further, for purposes of the disclosed machine learning, some or all features in an inspection report may be used for model training and prediction. For example, key features relating to out-of-tolerance conditions can be identified manually by a user or by automated trend analysis (for example by the analytics engine 150), and the training data can include such features extracted from larger inspection reports.
[0185] Comparison of the measurement values to the upper and lower tolerance values is used to determine whether a part is in or out of tolerance, as described herein. If a single property is out of tolerance, the entire part can be considered out of tolerance. As illustrated, in-tolerance measurements can be displayed in green (or using a first visual representation) and out-of-tolerance measurements can be displayed in red (or using a second visual representation that is different from the first visual representation). For simplicity of illustration, units have been removed from the measurements and tolerances, and it will be appreciated that these can represent any suitable standard or metric units.
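As a concrete illustration of the tuple representation and the tolerance comparison described above, the following Python sketch stores each property as a (measurement, lower tolerance, nominal, upper tolerance) tuple and flags the part as out of tolerance if any property falls outside its limits. The field names and example values are illustrative assumptions, not values taken from Figure 8C.

```python
from typing import Dict, Tuple

# Each property maps to (measurement, lower_tolerance, nominal, upper_tolerance).
PropertyTuple = Tuple[float, float, float, float]
InspectionReport = Dict[str, Dict[str, PropertyTuple]]  # feature -> property -> tuple

report: InspectionReport = {
    "surface_1": {
        "x": (10.02, 9.95, 10.00, 10.05),
        "y": (4.99, 4.90, 5.00, 5.10),
        "z": (2.07, 1.95, 2.00, 2.05),   # out of tolerance: 2.07 > 2.05
    },
}

def property_out_of_tolerance(p: PropertyTuple) -> bool:
    measurement, lower, _nominal, upper = p
    return measurement < lower or measurement > upper

def part_out_of_tolerance(rep: InspectionReport) -> bool:
    # A single out-of-tolerance property makes the whole part out of tolerance.
    return any(property_out_of_tolerance(p)
               for props in rep.values() for p in props.values())

# Binary encoding (0 = in tolerance, 1 = out of tolerance) used by some embodiments:
binary = {f: {name: int(property_out_of_tolerance(p)) for name, p in props.items()}
          for f, props in report.items()}
```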
[0186] The training inspection data 810 can be provided to a machine learning model, for example a neural network, in order to train the parameters of the model to predict out-of-tolerance part A8 from in-tolerance parts A1-A7. Although seven in-tolerance parts are shown in the training inspection data 810, the training inspection data 810 can include greater or fewer numbers of in-tolerance parts in other examples. Parts A1-A7 were manufactured sequentially prior to manufacture of part A8. In one example, parts A1-A8 were manufactured successively, however as described above other parts in the run may have been manufactured between various pairs of parts in the A1-A8 sequence.
[0187] In some implementations, training can use actual measurement values and/or a binary representation for in or out of tolerance measurements (e.g., 0 for in tolerance and 1 for out of tolerance). In some implementations the GD&T data may additionally be used for model training. Further, during training a number of training inspection data sets 810 can be used to refine the parameters of the machine learning model based on manufacturing conditions leading up to production of a number of out-of-tolerance parts. A model trained using a training inspection data set 810 can be used in-line with a manufacturing pipeline, such that predictions regarding a not-yet-manufactured part can be generated from the inspections of previously manufactured parts as those inspections are completed.
[0188] Some embodiments of such models may be used only with a particular manufacturing system that was used to manufacture the parts in the training data set. Other embodiments of such models can be more generally applicable to generate predictions for a number of, or any, manufacturing systems used to create parts based on a specific model. For example, the training inspection data sets 810 can include runs of parts based on the same model but with some runs created by different manufacturing systems.
[0189] Figure 9A depicts an example topology of a neural network 900 for predicting out-of-tolerance parts (or for predicting measurements that can be compared to tolerances), for example using the prediction engine 165 of Figure 1B. The neural network has a preliminary statistical process control metric portion 905 and a neural network portion 910. The statistical process control metric portion 905 includes a number of nodes that receive inspection data 810 and calculate statistical process control metrics, as described in more detail with respect to Figure 9B. Other implementations can omit the statistical process control metric portion 905 and feed inspection data directly into the neural network portion 910. The neural network portion 910 includes a number of connected layers 912, 914, 916. The output layer 916 can be provided with the inspection data 820 of an out-of-tolerance part or with binary representations of in/out of tolerance conditions of properties or features of the out-of-tolerance part, such that the neural network 900 learns to predict the parameters of the out-of-tolerance part (or generally the out-of-tolerance condition) from the input inspection data 815. As illustrated, one embodiment of the neural network 900 can be a feedforward artificial neural network, for example a multilayer perceptron, having a preliminary statistical process control metric portion designed to generate statistical process control metrics from input inspection data.
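To make the two-stage topology more concrete, the following PyTorch sketch computes per-property statistical process control metrics as a preliminary stage and feeds them into a small fully connected network. The layer sizes, the particular metrics computed, and the use of PyTorch are assumptions for illustration rather than a prescribed implementation of portion 905 and portion 910.

```python
import torch
import torch.nn as nn

def spc_features(measurements: torch.Tensor) -> torch.Tensor:
    """Preliminary portion 905: simple per-property SPC metrics over a window of inspections.

    measurements: shape (num_parts, num_properties); returns a flat vector of the
    mean, standard deviation, and +/- 3 sigma limits per property (an illustrative subset).
    """
    mean = measurements.mean(dim=0)
    std = measurements.std(dim=0)
    return torch.cat([mean, std, mean + 3 * std, mean - 3 * std])

class TolerancePredictor(nn.Module):
    """Neural network portion 910: input layer 912, hidden layer 914, output layer 916."""

    def __init__(self, num_properties: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_properties * 4, hidden),  # input layer 912 fed by the SPC metrics
            nn.ReLU(),                              # nonlinear activation in hidden layer 914
            nn.Linear(hidden, num_properties),      # output layer 916: one value per property
        )

    def forward(self, measurements: torch.Tensor) -> torch.Tensor:
        return self.net(spc_features(measurements))

# Example: a window of 7 prior inspections of 6 properties yields predicted
# measurements (or, with a sigmoid on the output, likelihoods) for a future part.
model = TolerancePredictor(num_properties=6)
window = torch.randn(7, 6)     # placeholder for input inspection data 815
predicted = model(window)      # shape (6,)
```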
[0190] Generally, an artificial neural network 900 represents a network of interconnected layers of nodes, where weights of connections between nodes can be learned through the training process. Illustratively, a neural network may include an input layer, an output layer, and any number of intermediate, internal, or "hidden" layers between the input and output layers. The first layer (referred to as the "input layer" herein) has input nodes which send data via connections to the second layer of nodes. Each hidden layer can transform the received data and output the transformed data for use by a subsequent layer. The computations of these hidden layers can be considered as an encoding of patterns that enable the network to identify significant features of the inputs (e.g., how the inputs relate to the output). The final layer (referred to as the "output layer") outputs values representing the prediction of the neural network. As described herein, depending upon how the network is trained, the output values may represent predicted measurements, or a likelihood that a given measurement or feature will be in tolerance. As described herein, a "layer" of a neural network or other machine learning model can be considered as computer-executable code that receives a set of inputs, implements a set of computations on the inputs, and provides the computationally-transformed inputs as an output. The set of computations of a given node can be considered as any weighted input connections with nodes of the previous layer and any activation function (e.g., rectified linear activation, sigmoid, hyperbolic tangent). As described herein, the weights in a given set of computations are learned during training using a training data set, for example training inspection data set 810.
[0191] The individual layers may include any number of separate nodes. Each node can be considered as a computer-implemented simulation of a biological neuron, and each connection can be considered as a link between the output of one node and the input of another. Nodes of adjacent layers may be logically connected to each other by connections, represented in Figures 9A and 9B by the lines between nodes of adjacent layers. These connections may store parameters called weights that can manipulate the data in the calculations. Each individual node may have a summation function which combines the values of all its weighted inputs together, and an activation function that operates on the summed weighted input to transform the summed weighted input into the output of that node. Thus, a node may be thought of as a computational unit that computes an output value as a function of a number of different input values. The number of nodes depicted, represented by circles in Figures 9A and 9B, is for purposes of illustration, and network 900 can have greater or fewer nodes depending upon the structure of the training data.
[0192] Nodes may be considered to be connected when the input values to the function associated with a current node include the output of functions associated with nodes in a previous layer, multiplied by weights associated with the individual connections between the current node and the nodes in the previous layer. When the nodes in a layer are each connected to each node in a previous layer, the layers are termed to be "fully connected." For example, turning to the depiction of Figure 9A, layer 912 is fully connected to layer 914, and layer 914 is fully connected to layer 916. In various implementations the layers 912, 914, 916 can be fully connected or partially connected. For example, in one type of neural network, the convolutional neural network, the nodes in a layer are only connected to a small region of the layer before it, called a receptive field. A convolutional kernel, which can be thought of as a matrix of weights, is applied to the receptive field of each node of the convolutional neural network layer. A convolutional kernel can be shared among the nodes of a convolutional neural network layer such that the convolutional filter of a given layer is replicated across the entire input. Network 900 can include one or more convolutional layers, for example to have inputs for various properties of a geometric feature passed through a convolutional layer with the receptive field corresponding to the number of parameters of a feature. [0193] When a neural network is used to process input data (e.g., inspection data 810 from a run of in-tolerance parts), the neural network may perform a "forward pass" of the data through the layers to generate output values. Each data element may be a value, such as a floating point number or integer. The forward pass includes multiplying the input values by learned weights associated with connections between the nodes of the input layer and nodes of the next layer, and applying an activation function to the results. The process is then repeated for each subsequent neural network layer. During training, the outputs 820 of the neural network can be compared to an expected output, and error rates identified based on the comparison can be fed back into the neural network via back propagation to adjust the weights of the neural network such that the output more accurately matches the expected output. Thus, the artificial neural network 900 is an adaptive system that is configured to change its structure (e.g., the connection configuration and/or weights) based on information that flows through the network during training, and the weights of the hidden layers can be considered as an encoding of meaningful patterns in the data.
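A minimal NumPy sketch of the forward pass just described, assuming a single fully connected hidden layer with a rectified linear activation; the weight shapes and random values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)                           # input values (e.g., metrics for one feature)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)    # learned weights/biases: input -> hidden layer
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)    # learned weights/biases: hidden -> output layer

def relu(v):
    return np.maximum(0.0, v)                    # activation function f(x) = max(0, x)

hidden = relu(W1 @ x + b1)   # each hidden node sums its weighted inputs, then applies the activation
output = W2 @ hidden + b2    # output nodes: the network's predicted values
```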
[0194] Returning to the example of Figure 9A, input data 815 from the inspection reports of a run of in-tolerance parts can be provided to the statistical process control metric portion 905. This portion 905 can generate one or more statistical process control metrics based on the input inspection data 815. Such metrics include sigma, 3 sigma, and 6 sigma values to name a few examples, as will be discussed in more detail with respect to Figure 9B. Each output can be provided to a node of the next layer. As illustrated, the measurement values for properties of specific geometric features of a measured part can be grouped together and provided to feature-specific nodes of the statistical process control metric portion 905.
[0195] A desired subset of the generated statistical process control metrics and, optionally, the inspection data values can be provided to nodes of an input layer 912 of the neural network portion 910. In one example, the desired metrics and the inspection data values of a property can be processed into a single value and input into a single node of the input layer 912. In another example, the desired metrics and the inspection data values can each be provided to a separate node of the input layer 912. In another example, the desired metrics and pass/fail encodings of the inspection data values can be provided together to a single node or individually to separate nodes. Some embodiments can alternatively forgo the statistical process control metric generation and directly feed inspection data into the input layer 912.

[0196] Some embodiments can optionally provide one or more process parameters to the input layer 912. These process parameters can include the unique identifier associated in the MLM system 110 with one or more of a human inspection operator, a metrology device 102, or a manufacturing system 104 involved in creating/measuring the part. Alternatively, training data can be segmented into subsets based on one or more of these process parameters and used to train a number of different versions of the network 900, with the resulting trained networks used specifically for manufacturing processes later involving the same combination of inspector, metrology device, and manufacturing system as the training data.
[0197] Each of the input nodes of the input layer 912 may be mapped to a corresponding one of the nodes of the statistical process control metric portion 905. The input nodes are fully interconnected to the hidden nodes, and the hidden nodes are fully connected to the output nodes. As used herein, an interconnection may represent a piece of information learned about the two interconnected nodes. In comparison a connection between a hidden node and an output node may represent a piece of information learned that is specific to the output node. The interconnection may be assigned a numeric weight that can be tuned (e.g., based on a training dataset), rendering the artificial neural network 900 adaptive to inputs and capable of learning.
[0198] The nodes of the hidden layer 914 can retain information (e.g., specific variable values and/or transformative functions) for a set of input values and output values used to train the artificial neural network 900, referred to herein as parameters of the hidden layer. This retained information may be applied to a new set of input inspection data in order to predict whether a next manufactured part will be in or out of tolerance. Generally, the hidden layer 914 allows knowledge about the input nodes of the input layer 912 to be shared amongst the output nodes of the output layer 916. To do so, an activation function f is applied to the input nodes through the hidden layer. In an example, the activation function f may be nonlinear. Different non-linear activation functions f are available including, for instance, a rectifier function f(x) = max(0, x). In an example, a particular non-linear activation function is selected based on cross-validation. For example, given known example pairs (x, y), where x ∈ X and y ∈ Y, a function f: X → Y is selected when such a function results in the best matches (e.g., the best representations of actual correlation data). Though one hidden layer 914 is shown, the network 900 can have two or more hidden layers in other implementations.

[0199] Each of the output nodes in the output layer 916 can be mapped to a particular portion of the inspection data 810 of an out-of-tolerance part. For example, actual measurement values or in-or-out-of-tolerance indicators can be fed into the output nodes. If actual measurement values are fed to the output nodes, in use the values of the output nodes of the neural network 900 can represent likely measurement values for a particular property of the next manufactured part. These values can be compared to known tolerances to determine whether the next part will be in or out of tolerance. For example, each output predicted measurement can be compared to a low tolerance and/or high tolerance value. Beneficially, this can create a measurement prediction neural network that remains applicable even if the tolerance for a given measurement changes. If in-or-out-of-tolerance indicators (e.g., 0 for in, 1 for out) are fed into the output nodes, in use the values of the output nodes of the neural network 900 can indicate whether (1) or not (0), or a likelihood that (a value between 0 and 1), a particular property will be out of tolerance for the next manufactured part. In order to make such a prediction, this type of neural network can have a representation of the tolerance values encoded in its learned parameters, and beneficially it may bypass the extra step of having to compare output predicted measurements to given tolerances. In some embodiments, during training a pass/fail output node can be provided with a binary representation of the fail condition of the out-of-tolerance part in order to identify correlations between input inspection data and process parameters and the overall pass or fail of the part inspection.
[0200] In one embodiment, in response to determining that the output node of any property indicates (for example by showing a binary value of "1") an out-of-tolerance condition, the prediction engine 165 can generate an out of tolerance alert. In another embodiment, the prediction engine 165 can generate an out of tolerance alert in response to determining that the output node of any property indicates a greater-than-threshold likelihood of an out-of-tolerance condition. Such a threshold can be 30%, 50%, 80%, or another percentage of likelihood, and reflects a tradeoff between the desire to avoid scrap and the desire to keep a manufacturing process moving if there is a possibility that the next part will be in tolerance. This threshold can be determined automatically for example based on the cost of each part (where higher cost parts would be associated with lower acceptable thresholds for likely out-of-tolerance conditions), the nature of the product including the part (e.g., a vehicle, consumer product, medical device), or known process parameters (e.g., desired throughput, maximum allowable scrap, etc.). In other examples the threshold can be user-specified based on user preferences. As described herein, the out-of-tolerance alert can be used to halt or correct the manufacturing process before a scrap part is made.
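A hedged sketch of the alert logic in the preceding paragraph: if any output node reports an out-of-tolerance likelihood at or above a configurable threshold, an alert is raised. The function name, threshold value, and alert payload are illustrative assumptions.

```python
from typing import Dict

def check_for_alert(likelihoods: Dict[str, float], threshold: float = 0.5) -> Dict[str, float]:
    """Return the properties whose predicted out-of-tolerance likelihood meets the threshold.

    likelihoods: property name -> output-node value in [0, 1]
    (a hard binary output is the special case of values that are exactly 0 or 1).
    """
    return {prop: p for prop, p in likelihoods.items() if p >= threshold}

# Example: the threshold trades scrap avoidance against keeping the process moving.
predictions = {"surface_1.z": 0.82, "surface_2.x": 0.10}
alert = check_for_alert(predictions, threshold=0.5)
if alert:
    print(f"Out-of-tolerance alert for properties: {sorted(alert)}")
```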
[0201] The specific number of layers and the specific number of nodes per layer shown in Figure 9A are illustrative only, and are not intended to be limiting. In some neural networks, different numbers of internal layers and/or different numbers of nodes in the input, hidden, and/or output layers may be used. For example, in some neural networks the layers may have hundreds, thousands, or even millions of nodes based on the number of data points in the input and/or output training data. Although Figure 9A depicts a fully connected neural network portion 910 having fully connected layers 912, 914, and 916, in variations of the disclosed neural networks there may be two or more partially connected layers and/or different numbers of fully connected layers. For example, although only one hidden layer 914 is shown in Figure 9A, in some neural networks there may be 2, 4, 5, 10, or more internal "hidden" layers. In some implementations, each layer may have the same number or different numbers of nodes. For example, the input layer 912 and/or the output layer 916 can each include more nodes than the hidden layer(s). The input layer 912 and the output layer 916 can include the same number or different number of nodes. The input vectors, or the output vectors, used to train the neural network may each include n separate data elements or "dimensions," corresponding to the n nodes of the input layer 912 (where n is some positive integer).
[0202] The artificial neural network 900 may also use a cost function to find an optimal solution (e.g., an optimal activation function). The optimal solution represents the situation where no solution has a cost less than the cost of the optimal solution. In an example, the cost function includes a mean-squared error function that minimizes the average squared error between an output f(x) and a target value y over the example pairs (x, y). In some embodiments, a backpropagation algorithm that uses gradient descent to minimize the cost function may be used to train the artificial neural network 900.
[0203] It will be appreciated that the granularity of the network 900 can vary depending upon its desired prediction specificity. For example, at a most granular level measurement values for specific properties can be provided to the network 900 as depicted in the illustrated example. Another embodiment of the network 900 can have a similar topology but can instead receive pass/fail representations of the property measurements. A less granular version of the network 900 can receive pass/fail representations of part features instead of feature properties. The reduction in granularity can achieve faster processing times for both training and implementation of the network 900 at the expense of a more granular understanding of process trends. The output of the network can be correspondingly granular or less granular.
[0204] The network 900 can be trained to make predictions for a specific part (e.g., a number of manufactured parts based on the same engineering specification), and optionally for that specific part using a specific manufacturing setup. The resulting trained version of network 900 may be applied to inspection data from parts in a run in order to predict whether a next manufactured part will be in or out of tolerance. The part order and selection of part spacing in the analyzed run can be structured to match the part order spacing of the training data. Beneficially, such a trained network 900 can be used to predict out-of-tolerance conditions with greater accuracy than simple trend analysis alone. In some implementations, the trained version of network 900 can be used in a system that has in-line metrology, for example where a part (or set of parts) in a run are measured prior to manufacture of another part in the run. Such in-line systems may optionally include automated manufacturing cells where robotic arms, conveyor belts, or other transportation means take parts from the manufacturing system where they are created to the metrology device that measures them. Beneficially, in such automated manufacturing cells the disclosed machine learning techniques can trigger automated process interruptions or adjustments based on predicted measurement values.
[0205] Figure 9B depicts an example node 905 for generating a set of statistical process control metrics for input into the machine learning layers of the network of Figure 9A. In the illustrated example, depicted based on the example training data shown in Figure 8, the node 905 is specific to an individual feature (for example surface 1). The node 905 receives x, y, and z measurement values from seven inspection reports (dataset 815), where X1-X7 are assembled into an input matrix and provided to an "x" input node, Y1-Y7 are assembled into an input matrix and provided to a "y" input node, and Z1-Z7 are assembled into an input matrix and provided to a "z" input node.

[0206] Using these aggregated sets of inspection data, the node 905 calculates a number of different statistical process control metrics, shown as sigma values (-3 sigma, sigma, 3 sigma, and 6 sigma), standard deviation, mean, range, process capability metrics (Cp, Cpk, Cr), and process performance metrics (Pp, Ppk, Pr). In other embodiments a smaller subset of metrics, different statistical process control metrics (for example, Cpm), or only one of these metrics may be calculated. The computational structure of the node 905 can be optimized so that discrete functions (e.g., summations, calculation of standard deviation) are performed only once and then reused across a number of later nodes that require the output of the function.
[0207] The statistical process control metrics generated by the node 905 are reflective of various process capabilities and performances. Statistical process control is a method of quality control in which statistical methods are employed in order to (1) evaluate the process, and (2) control the process to make as much conforming product as possible with a minimum of waste (parts that require reworking or scrapping). As will be appreciated, nominal and tolerance values from the input inspections can be provided to the node 905 for calculation of the statistical process control metrics.
[0208] As used in statistical process control, Cp represents process capability to meet two-sided tolerance limits, Cpk represents the process capability index, which is an adjustment of Cp for the effect of a non-centered distribution, and Cr represents the capability ratio used to summarize the estimated spread of the system compared to the spread of the tolerance limits. Larger Cp index values indicate smaller likelihoods that the next manufactured part will be out of tolerance. Cpk reflects a measurement of how close the manufacturing process is to its targets (e.g., nominal values) and how consistent the process is around its average performance. With Cr, lower values indicate smaller output spreads, and multiplying the Cr value by 100 shows the percent of the tolerances that are being used by the variation in the process.
[0209] Pp represents process performance in meeting two-sided tolerance limits, Ppk represents the process performance index, which is an adjustment of Pp for the effect of a non-centered distribution, and Pr represents the performance ratio used to summarize the actual spread of the system compared to the spread of the tolerance limits. The higher the Pp value, the smaller the spread of the process output. Pp is a measure of spread only, and a process with a narrow spread (a high Pp) may not meet tolerance requirements if it is not centered within the tolerance range. Accordingly, Pp should be used in conjunction with Ppk to account for both spread and centering. Pp and Ppk will be equal when the process is centered on its target value. If they are not equal, the smaller the difference between these indices, the more centered the process is. Lower Pr values indicate smaller output spreads, and multiplying the Pr value by 100 shows the percent of the tolerances that are being used by the variation in the process.
[0210] Six Sigma is a set of techniques and tools within statistical process control for identifying and removing the causes of defects (e.g., out-of-tolerance parts that require reworking or must be scrapped) and minimizing variability in manufacturing. The various sigma values illustrated in node 905 represent limits relative to a mean. For example, sigma represents the limit of data within one standard deviation of the mean, 3 sigma is the limit of data within three standard deviations above the mean, -3 sigma is the limit of data within three standard deviations below the mean, and 6 sigma is the limit of data within six standard deviations of the mean.
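The following Python sketch computes the metrics named above for one property across a run of inspections, using the conventional textbook formulas (Cp = (USL - LSL)/6σ, Cpk = min(USL - μ, μ - LSL)/3σ, Cr = 1/Cp, with the Pp family computed analogously from the overall standard deviation). The sample values and the choice of sigma estimator are assumptions for illustration, not values from the specification.

```python
import numpy as np

def spc_metrics(values, lsl, usl, sigma=None):
    """Statistical process control metrics for one property over a run of inspections.

    values: measured values for the property (e.g., X1-X7)
    lsl, usl: lower / upper tolerance (specification) limits
    sigma: optional within-process sigma estimate. Conventionally Cp/Cpk use a
           within-subgroup estimate while Pp/Ppk use the overall sample standard
           deviation; here one estimate is reused for both as a simplification.
    """
    x = np.asarray(values, dtype=float)
    mean = x.mean()
    overall_std = x.std(ddof=1)
    s = overall_std if sigma is None else sigma

    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    pp = (usl - lsl) / (6 * overall_std)
    ppk = min(usl - mean, mean - lsl) / (3 * overall_std)

    return {
        "mean": mean, "std": overall_std, "range": x.max() - x.min(),
        "-3sigma": mean - 3 * s, "3sigma": mean + 3 * s, "6sigma": mean + 6 * s,
        "Cp": cp, "Cpk": cpk, "Cr": 1.0 / cp, "Pp": pp, "Ppk": ppk, "Pr": 1.0 / pp,
    }

# Example for one property of surface 1 (illustrative measurements and tolerances):
print(spc_metrics([10.01, 10.02, 9.99, 10.00, 10.03, 10.02, 10.04], lsl=9.95, usl=10.05))
```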
[0211] By analyzing the described statistical process control metrics together with inspection data results, the disclosed neural network 900 can be trained such that its parameters represent relationships between inspection values, process capabilities and performance, and the ultimate creation of a next in or out of tolerance part.
[0212] The disclosed neural network 900 can be trained to make predictions about a next part, or another future part in the run (e.g., the 5th part out, 10th part out, or the like, depending upon the rate at which parts are manufactured and measured). Figure 9C depicts example timelines 840A-840D of part runs that can be used to generate different sets of training data for training different neural networks 900A-900D in an ensemble 950 for predicting out-of-tolerance parts. The neural networks 900A-900D can have any of the topologies discussed with respect to Figures 9A and 9B. The timelines 840A-840D use a sliding window 825A-825D through past inspections to generate different timings of part tolerance predictions.
[0213] The timelines 840A-840D depict a run of parts ending in part AN manufactured at time TN. Moving sequentially backwards along the timelines, part AN-1 was manufactured before part AN at time TN-1, part AN-2 was manufactured before part AN-1 at time TN-2, part AN-3 was manufactured before part AN-2 at time TN-3, part AN-4 was manufactured before part AN-3 at time TN-4, part AN-5 was manufactured before part AN-4 at time TN-5, and part AN-6 was manufactured before part AN-5 at time TN-6.
[0214] With respect to each timeline, the inspection of part AN is the identified output. During training, the measured values of the features/properties of part AN are set to the output nodes of the neural networks 900A-900D. During use of trained neural networks 900A-900D for prediction, the values of the output nodes of the neural networks 900A-900D represent predictions (e.g., predicted measurements or predicted out-of-tolerance conditions) for the features/properties of part AN.
[0215] The sliding windows 825A-825D represent the varying sets of input data that are provided to the corresponding one of networks 900A-900D. The spacing of the last inspection in the input inspection data relative to the inspection of part AN varies between the networks 900A-900D as shown by the sliding windows 825A-825D. Turning to timeline 840A, the last inspection in the sliding window 825A is of part AN-1, manufactured one part in the run before part AN. Accordingly, the parameters of network 900A are tuned to predict whether, based on input data from window 825A, the next manufactured part will be in or out of tolerance. It will be appreciated that, similar to the training discussed above, multiple out-of-tolerance parts can be identified, and a window 825A of preceding inspections can be used with the inspection of each out-of-tolerance part such that a training data set includes a number of sets of "passed" inspections leading up to respective "failed" inspections.
[0216] Turning to timeline 840B, the last inspection in the sliding window 825B is of part AN-2, manufactured two parts in the run before part AN. Accordingly, the parameters of network 900B are tuned to predict whether, based on input data from window 825B, the part manufactured two parts down the run will be in or out of tolerance.
[0217] With respect to timeline 840C, the last inspection in the sliding window 825C is of part AN-3, manufactured three parts in the run before part AN. Accordingly, the parameters of network 900C are tuned to predict whether, based on input data from window 825C, the part manufactured three parts down the run will be in or out of tolerance.
[0218] Lastly, turning to timeline 840D, the last inspection in the sliding window 825D is of part AN-4, manufactured four parts in the run before part AN. Accordingly, the parameters of network 900D are tuned to predict whether, based on input data from window 825D, the part manufactured four parts down the run will be in or out of tolerance. In this manner, the parameters of networks 900A-900D may differ in order to provide predictions of increasingly distant, future parts in the run.
[0219] The outputs of the networks 900A-900D can be used in combination to predict whether part AN will be out of tolerance. For example, if the output nodes of the networks 900A-900D provide predicted measurement values, these values can be averaged to determine the final predicted measurement values for part AN. As another example, corresponding output nodes of the networks 900A-900D can be mapped to the same feature or property of the part, and agreement between corresponding output nodes of various combinations of the networks 900A-900D regarding whether that feature or property will be out of tolerance can be used to increase a confidence value associated with the prediction.
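A minimal sketch of the two combination strategies described above (averaging predicted measurements, and raising a confidence value when multiple ensemble members agree); the array shapes and the simple agreement-fraction confidence measure are assumptions for illustration.

```python
import numpy as np

def combine_ensemble(predictions, lower_tol, upper_tol):
    """Combine per-network predicted measurements for the same future part.

    predictions: shape (num_networks, num_properties), one row per network 900A-900D;
    lower_tol / upper_tol: per-property tolerance limits.
    """
    preds = np.asarray(predictions, dtype=float)
    lo, hi = np.asarray(lower_tol, dtype=float), np.asarray(upper_tol, dtype=float)
    final = preds.mean(axis=0)                              # averaged predicted measurements
    out_votes = (preds < lo) | (preds > hi)                 # per-network out-of-tolerance votes
    confidence = out_votes.mean(axis=0)                     # fraction of networks in agreement
    return final, confidence

# Example: four networks, two properties; the second property is flagged by 3 of 4 networks.
preds = [[10.01, 2.06], [10.00, 2.07], [10.02, 2.04], [10.01, 2.08]]
final, confidence = combine_ensemble(preds, lower_tol=[9.95, 1.95], upper_tol=[10.05, 2.05])
```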
[0220] The networks 900A-900D in the ensemble 950 can generate their results in parallel in some embodiments, that is, the entire ensemble 950 can be used at once to generate the prediction for part AN when part AN is the next part scheduled for manufacture. In other embodiments, the networks 900A-900D in the ensemble 950 can generate their results in real-time as the inspection sets in the corresponding sliding windows 825A-825D are completed. In such embodiments, the prediction engine 165 can provide alerts that the fourth part out is predicted to be out-of-tolerance (e.g., using network 900D), and then can follow up that alert with further alerts indicating agreement or disagreement with the prediction from subsequent uses of networks 900C, 900B, and 900A. Further, as the number of networks 900A-900D in agreement with an out-of-tolerance prediction increases, the prediction engine 165 can increase the confidence value associated with the predicted out-of-tolerance condition. The agreement can be determined at a high level (e.g., generally that the part is predicted to be out of tolerance) or more granularly (e.g., that the same identified feature or property is predicted to be out of tolerance). More granular agreement can result in a higher confidence value. In some embodiments, an alert that includes instructions to adjust the manufacturing system can be generated after a threshold confidence value is determined based on agreement between two or more of the networks in the ensemble 950.
[0221] The inspection composition of the sliding windows 825A-825D can involve the same number and sequence (e.g., position and spacing of the inspected parts in the run relative to the last inspected part) of inspections in some embodiments, and in other embodiments can involve different numbers and sequences. Though the illustrated example shows an ensemble to predict whether part AN will be out of tolerance as the next part, second part out, third part out, and fourth part out, it will be appreciated that the ensemble 950 can be modified to generate such predictions at other timings than in the illustrated example (e.g., next part, five parts out, ten parts out, twenty parts out, etc.). Further, although the ensemble 950 is depicted as using four networks 900A-900D, in other implementations the ensemble 950 can include any number of two or more networks.
[0222] Figure 10 depicts an example data structure 1000 for analysis of machine learning model output, for example the output of the neural network of Figures 9A and 9C. As discussed above, the output of the trained network can provide data representing whether a future part will be in or out of tolerance. An analysis module 1140 (discussed in more detail with respect to Figure 11) can receive this data from the output of the neural network and analyze the data to determine whether the part will be in or out of tolerance, and optionally to determine what actions should be taken when an out-of-tolerance part is predicted.
[0223] In some embodiments, the output of the neural network can be written to a database and queried by the analysis module for analysis. However, this writing and querying of data can add additional processing steps, thereby increasing the amount of time between inspection of a part and prediction of whether the next part will be in or out of tolerance. Accordingly, some embodiments can generate a data structure 1000 like the example shown in Figure 10 in working memory and pass this data structure 1000 directly to the analysis module. This can increase efficiency and reduce processing time relative to writing the model output to a database. The type of data structure can vary depending upon the programming language used for the neural network. As an example, the data structure 1000 can be a tree data structure such as an octree if the neural network is written in C++. As another example, the data structure 1000 can be a JavaScript Object Notation (JSON) object if the neural network is written in JavaScript. JSON is a lightweight data-interchange format that can be generated and parsed quickly by machines.
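For illustration only, the following Python sketch shows one possible in-memory form of such a structure as a list of per-property records, using the column labels detailed in the paragraphs that follow (P, Mp, N, Tu, TL, D, TID, TMO, TRO); the field names and values are assumptions, and a JSON serialization of the same rows is equally possible.

```python
import json

# One row per property mapped to an output node of the neural network 900.
data_structure_1000 = [
    {"P": "surface_1.z", "Mp": 2.07, "N": 2.00, "Tu": 2.05, "TL": 1.95,
     "D": None, "TID": "tool_12", "TMO": 0.05, "TRO": None},
    {"P": "surface_2.x", "Mp": 9.99, "N": 10.00, "Tu": 10.05, "TL": 9.95,
     "D": None, "TID": "tool_07", "TMO": 0.03, "TRO": None},
]

# The structure can be passed directly in working memory, or serialized as JSON
# when the surrounding components are implemented in JavaScript.
as_json = json.dumps(data_structure_1000)
```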
[0224] The example data structure 1000 includes nine columns, and can include as many rows as there are features or properties mapped to the output nodes of the neural network 900. The first column (labeled P1-PN) represents the properties mapped to the output nodes of the neural network. These values can be populated (e.g., based on the model, on GD&T data, or another format of inspection data) when the structure of the neural network is set, and may not change from prediction to prediction. If the model is revisioned or the inspection plan is updated then the first column can be updated accordingly.
[0225] The second column (labeled Mp) represents the measurement predicted by the neural network for each property P1-PN. These values can be dynamically populated after each prediction using the values of the corresponding output nodes of the neural network. The third column (labeled N) represents the nominal value for each property P1-PN, the fourth column (labeled Tu) represents the upper tolerance limit for each property P1-PN, and the fifth column (labeled TL) represents the lower tolerance limit for each property P1-PN. The values in the third through fifth columns can be populated from GD&T data (or blueprint data, or another format of tolerance data) associated with the model, and may not change from prediction to prediction. If the model is revisioned or the inspection plan is updated then the third through fifth columns can be updated accordingly.
[0226] The sixth column (labeled D) represents the deviation of the predicted measurement from the nominal measurement and can be dynamically and selectively populated by the analysis module after each prediction. In some embodiments, the analysis module can compare the predicted measurement to the upper and lower tolerance limits. If the predicted measurement is above the upper tolerance limit or below the lower tolerance limit, this represents an out-of-tolerance property and the analysis module can compute the deviation value. In addition, the analysis module can set a bit (not shown) indicating that the part is predicted to be out of tolerance. If the predicted measurement is less than or equal to the upper tolerance limit and greater than or equal to the lower tolerance limit, this represents an in-tolerance property and the analysis module may enter a null value or no value and continue analyzing remaining data in the structure 1000.
[0227] The seventh column (labeled TID) represents an identifier of the tool used to manufacture the property. Different properties of the part can be manufactured with different tooling, so the data structure 1000 can include a number of different tool IDs. In response to identifying an out-of-tolerance deviation in the sixth column for a particular property, the analysis module can access the ID of the associated tooling and output this identifier in an alert. The eighth column (labeled TMO) represents a maximum offset of the tool in column seven. This represents a maximum allowable value that the tool position can be offset from the positions specified in the default machining instructions in order to attempt compensation for a predicted manufacturing error. The tool IDs and maximum offset values can be populated based on a look-up table specifying which tools create the various properties and what the maximum offset is for each tool, and may not change from prediction to prediction. If the look-up table is updated then the seventh and eighth columns can be updated accordingly.
[0228] The ninth column (labeled TRO) represents an offset amount recommended by the analysis module for the tool associated with a particular property. If the analysis module identifies an out-of-tolerance deviation in the sixth column for a particular property, then the analysis module can proceed to compare the deviation to the maximum tool offset value in column eight. If the deviation is less than or equal to the maximum tool offset value, then the analysis module can generate the recommended tool offset value equal to the magnitude of the deviation. If the deviation is greater than the maximum tool offset value, then the analysis module can (1) determine the difference between the deviation and the maximum tool offset value, (2) subtract the determined difference from the predicted measurement value, and (3) determine whether the adjusted predicted measurement value is above the upper tolerance limit or below the lower tolerance limit. If so, then this represents a manufacturing defect condition that cannot be corrected using the maximum allowable tool offset, and the analysis module can output a recommendation to change the tooling identified in the seventh column. If not, then the adjusted predicted measurement value is in tolerance and thus the predicted defect can be corrected using the maximum allowable tool offset. In such a situation, the analysis module can set the recommended tool offset to the maximum allowable value and output this information in an alert to a user and/or a control system 105 of the manufacturing system 104.
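A hedged sketch of the recommended-offset logic following the steps enumerated in the paragraph above, assuming the deviation in column six is stored as a magnitude and that the adjustment mirrors in sign for measurements below nominal; the function and return labels are illustrative assumptions.

```python
def recommend_action(mp, nominal, t_lo, t_hi, max_offset):
    """Column-nine logic: decide whether to offset the tool or change tooling.

    mp: predicted measurement; t_lo / t_hi: lower / upper tolerance limits;
    max_offset: maximum allowable tool offset (column eight).
    Returns (action, recommended_offset).
    """
    if t_lo <= mp <= t_hi:
        return ("in_tolerance", 0.0)

    deviation = abs(mp - nominal)                    # column D, stored as a magnitude
    if deviation <= max_offset:
        return ("offset_tool", deviation)            # recommended offset equal to the deviation

    # Deviation exceeds the maximum offset: subtract the excess from the predicted
    # measurement and check whether the adjusted value would still be out of tolerance.
    excess = deviation - max_offset
    direction = 1.0 if mp > nominal else -1.0
    adjusted = mp - direction * excess
    if adjusted > t_hi or adjusted < t_lo:
        return ("change_tooling", 0.0)               # cannot be corrected by offsetting
    return ("offset_tool", max_offset)               # correctable using the maximum offset

# Example: predicted 2.09 against nominal 2.00, tolerances [1.95, 2.05], max offset 0.05.
print(recommend_action(2.09, 2.00, 1.95, 2.05, 0.05))
```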
[0229] In some embodiments the data structure 1000 can include greater or fewer columns depending upon the type of property, the type of output of the neural network, and the manufacturing process setup. For example, if a CMM or other low-uncertainty metrology device (e.g., operated programmatically without human involvement in each inspection) is used to generate the inspections of the parts, then inspection-related uncertainties may be negligible (e.g., around 2 microns). In such circumstances, adding another column to remove inspection-related uncertainty from the identified deviation to isolate the amount attributable to the manufacturing system can involve additional processing time without providing substantial benefits due to subtracting a negligible uncertainty value. However, for metrology devices having larger uncertainties and/or for metrology devices requiring human inspector operation, the data structure 1000 can include additional columns storing the uncertainties associated with the devices/inspectors. The analysis module can subtract these uncertainties from the deviation to isolate the portion of the deviation attributable to the manufacturing system. In some embodiments the number of columns of the data structure 1000 can be determined programmatically based on identifying the process and analyzing the associated uncertainty values.
[0230] Figure 11 depicts a schematic block diagram of an example of the prediction engine 165 of Figure 1B. The prediction engine 165 includes a number of processing modules including model parameter module 1105, training module 1110, notification handler 1115, in-line prediction module 1135, analysis and alert module 1140, and machine instruction module 1145. Each module can represent a set of computer-readable instructions, stored in a memory, and one or more processors configured by the instructions to together perform the features described below.
[0231] As illustrated, the modules can be part of a distributed architecture with the model parameter module 1105, training module 1110, and notification handler 1115 included in the MLM system 110 and the in-line prediction module 1135, analysis and alert module 1140, and machine instruction module 1145 included in the control system 105, with communications sent between the MLM system 110 and the control system 105 over network 108. In other embodiments, the modules can be implemented entirely in either the MLM system 110 or the control system 105, or can be distributed in a different configuration than illustrated.
[0232] The model parameter module 1105 is configured to receive data representing a CAD model, blueprint, GD&T, other inspection or part specification documentation, and any user-specified settings and analyze such data to identify the structure of the input and output nodes of the neural network 900. For example, the model parameter module 1105 can automatically identify the features and/or properties of a physical part based on the received CAD model, blueprint, or inspection documentation and can map the input and output nodes to the identified features and/or properties. In some examples, this can include all identified features and/or properties. In other examples, the model parameter module 1105 can select a subset of the features and/or properties. For example, the model parameter module 1105 can analyze an assembly model including the part model and identify any features of the part that mate with features of other parts in the assembly. The input and output nodes can be mapped to these features (and/or the properties of these features). As another example, a user of the MLM system 110 can specify one or more features desired for analysis by the prediction engine 165, and the input and output nodes can be mapped to these features (and/or the properties of these features). The MLM system 110 can provide a user interface that enables the user to select these features from a list generated based on analyzing the CAD model, blueprint, or inspection documentation of the part.
[0233] Beneficially the model parameter module 1105 can automatically and dynamically generate a structure of the neural network 900 that is specific to a certain part, thus initializing the training process without requiring a human operator to determine the model structure. This can be advantageous in circumstances where the user of the MLM system 110 is not familiar with programming of machine learning models, but still desires to implement in-line machine learning predictions in their manufacturing process. When the user begins a manufacturing project involving a new part or modifies an existing manufacturing project to include machine learning predictions, the model parameter module 1105 can create the structure of the neural network 900 that matches the specifications of the part based on data already provided to the MLM system 110 as described above. This is done without the user having to create this structure himself or having to contact the provider of the MLM system 110 regarding setup of a new neural network. For certain projects having restricted access to part models, this automated setup allows the user to implement machine learning when the user would otherwise be unable to do so. It will be appreciated that in other scenarios a human operator can be involved in the creation of the structure of the neural network 900.
[0234] The model parameter module 1105 can send the structure of the neural network 900 to the training module 1110. The training module 1110 is configured to use inspection report data 1120 received from the control system 105 and/or accessed from data repository 170 to train the parameters of the neural network 900 as described above and/or as discussed below with respect to Figure 12A. The training module 1110 can train the neural network 900 initially (e.g., before its first use in in-line manufacturing predictions) and in some embodiments can periodically or intermittently re-train the neural network, for example based on updated inspection report data 1120 received from the control system 105. The training module 1110 can also monitor accuracy of the trained neural network 900 and perform re-training on an as-needed basis to maintain a threshold (e.g., user-specified or default) level of accuracy in the generated predictions.
[0235] The training module 1110 can send trained models 1125 to the control system 105 via network 108. In other implementations the model parameter module 1105 and training module 1110 can be implemented in the control system 105.
[0236] The in-line prediction module 1135 is configured to identify new input data for provision to the trained model 1125 based on real-time manufacturing inspection data, and to provide the new input data to the trained model 1125 to generate part inspection predictions. For example, after a part is created by a manufacturing system 104 it can be inspected by a metrology device 102. The control system 105 can control the manufacturing system 104 to either wait to create the next part, or to create a part based on a different computer model, while the created part is inspected and the inspection input into a trained model 1125.
[0237] In some implementations, in-line prediction module 1135 can be used within the context of an automated robotic manufacturing cell, for example in a control system 105 (or multiple control systems) that control operations of the cell. A robotic manufacturing cell can implement a number of robotic arms to move materials to one or more manufacturing systems in the cell, for example a CNC. The CNC can use tooling to create parts, and can have a number of different tooling options and a robotic system for changing out tooling. The manufacturing systems in the cell can create parts from the materials based on machine-readable manufacturing instructions that operate to control the manufacturing systems. The robotic arm(s) of the cell can retrieve created parts from the manufacturing systems and move the created parts to an automated metrology device, for example a CMM. The automated metrology device can inspect the part based on predetermined inspection programming and output the part inspection report. The robotic arm(s) of the cell can then move an inspected part from the metrology device to a predetermined location, for example a conveyor belt or storage area, depending upon the results of the inspection. Such cells can be configured to create a number of different runs of parts each based on one of a corresponding number of different models, and can be configured to cycle through creating parts based on the different models. The in-line prediction module 1135 can implement a number of different trained neural networks 900 (or other suitable machine learning models) in such a cell, with at least one machine learning model corresponding to each of the different part models.

[0238] The in-line prediction module 1135 is configured to provide the output of a neural network 900 to the analysis and alert module 1140, for example as a data structure 1000 stored in working memory as described above. The analysis and alert module 1140 can be configured to implement the logic described above with respect to the data structure 1000 in order to (1) determine whether a future manufactured part will be in or out of tolerance, (2) generate a confidence value in such a prediction, (3) identify specific tooling associated with a predicted out-of-tolerance property or feature, and (4) identify any recommended corrective adjustments for the manufacturing system 104. The analysis and alert module 1140 can generate an alert including such determinations for provision to the machine instruction module 1145 and/or notification handler 1115.
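Purely as an illustration of the in-line flow described above, the following Python-style loop shows one way a control system might interleave manufacturing, inspection, and prediction in an automated cell; every interface name (make_part, inspect_part, predict_next, apply_offsets, halt) is a hypothetical placeholder and not part of the disclosed system.

```python
def run_cell(models, parts_queue, make_part, inspect_part, predict_next, apply_offsets, halt):
    """Hypothetical in-line loop: manufacture, inspect, predict, then act on the prediction."""
    recent_inspections = {}                      # per part-model sliding window of inspections
    for part_model in parts_queue:
        part = make_part(part_model)             # manufacturing system creates the part
        report = inspect_part(part)              # automated metrology device (e.g., a CMM)
        window = recent_inspections.setdefault(part_model, [])
        window.append(report)
        prediction = predict_next(models[part_model], window)  # trained network for this model
        if prediction.get("change_tooling"):
            halt(part_model, reason="predicted uncorrectable out-of-tolerance condition")
        elif prediction.get("tool_offsets"):
            apply_offsets(part_model, prediction["tool_offsets"])
```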
[0239] The machine instruction module 1145 is configured to control the operation of a manufacturing system 104 regarding halting manufacture of a particular part, changing out identified tooling, or continuing manufacture with identified tooling using a determined tool offset relative to the default machining instructions.
[0240] The notification handler 1115 is a module configured to identify any users of the MLM system 110 associated with a part, identify a subset of those users designated to receive alerts regarding the part, create a graphical user interface presenting the alert generated by the analysis and alert module 1140, and transmit data representing the graphical user interface to a user device 106 and/or electronic messaging account associated with each user of the designated subset. The notification handler 1115 can access the identifiers and permissions of such users in the user data repository 180 in some embodiments.
[0241] Although not illustrated in Figure 11, the machine learning data repository 185 can be distributed between the MLM system 110 and the control system 105 as needed for storage of training data sets, trained machine learning models, new input data sets, model outputs, and any alerts generated based on the model outputs. Although Figure 11 is discussed in the example context of the neural network 900 described above, it will be appreciated that the prediction engine 165 can implement other suitable machine learning models for predicting manufacturing performance and/or manufactured part conditions in other examples.
[0242] Figure 12A depicts a flow diagram of an illustrative process 1200A for training a machine learning model, for example as discussed with respect to Figures 8-9B. Figure 12B depicts a flow diagram of an illustrative process for providing out-of-tolerance part predictions in the prediction engine 165 of Figure 1B via a model trained as described with respect to FIG. 12A. The processes 1200A, 1200B can be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations of hardware and software. For example, the processes 1200A, 1200B can be implemented by the prediction engine 165 of the MLM system 110 in some embodiments. In some implementations, the process 1200B can be implemented on a prediction engine running locally in the control computer of a manufacturing system or cell. In such embodiments the prediction engine can generate the trained models locally and/or can be provided with trained models via network 108. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory.
[0243] Turning to Figure 12A, at block 1205 the prediction engine 165 can identify an out-of-tolerance part in a run. As discussed herein, a part can be considered out-of-tolerance if the measured value of any of its properties is outside of the range defined based on nominal and tolerance values. An out-of-tolerance part may be scrapped or reworked, and either scenario represents inefficiency and waste in a manufacturing process. Although the process 1200A is described in terms of using an out-of-tolerance part inspection to generate the expected output data during training, in some implementations, the neural network can be trained using positive training cases (where the expected output is generated based on an in-tolerance inspection), in addition to negative training cases (where the expected output is generated based on an out-of-tolerance inspection). Some implementations can use approximately equal numbers of positive and negative cases.
[0244] Next, the process 1200A moves to block 1210, and the prediction engine 165 can identify inspection data representing the inspection of the out-of-tolerance part. The prediction engine 165 can also identify inspection data for a set of in-tolerance parts manufactured prior to the identified out-of-tolerance part. As discussed above, this can include a set of parts manufactured successively before the out-of-tolerance part, or manufactured at predetermined intervals leading up to the production of the out-of-tolerance part. For example, to identify the set of parts, the prediction engine 165 can access a predetermined sequence specifying the spacing between parts in the run, and may also access data specifying a desired spacing between the last (most recent) part in the set and the future part for which the prediction will be generated. After identifying the last part, the prediction engine 165 can use the predetermined sequence to identify the other parts in the input data relative to the last part. In some cases the sequence of in-tolerance parts can include both a recent, successively-prior subset as well as a less recent subset, for example manufactured at increasingly distant past times or at increasingly earlier positions in the run sequence in order to capture both long-term and short-term data in the training set. The prediction engine 165 can then access inspections of the identified parts in the set and can create the input data with the inspections in the same sequential order as the parts in the set. One implementation can include the inspections of the seven parts preceding the manufacture of the out-of-tolerance part. Inspection data can follow the requirements of GD&T specifications, blueprints, or other tolerance requirements, for example as set forth in an inspection plan. As such, the inspections of the parts in the runs can be expected to correspond to one another, that is, to have the same fields of data (features, properties, etc.), and corresponding portions of each inspection data set can be included in the training data.
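As a sketch of the part-selection step, the following Python helper picks the in-tolerance inspections for the input data given a predetermined spacing sequence relative to the target part; the example spacing mirrors the blend described earlier (three successive parts followed by increasingly distant parts), and the function name and spacing values are illustrative assumptions.

```python
def select_window(run_inspections, target_index, spacings=(1, 2, 3, 8, 13, 23, 38)):
    """Pick input inspections for the part at target_index.

    run_inspections: inspections ordered by manufacture time;
    spacings: offsets (in parts) back from the target, most recent first.
    Returns the selected inspections in manufacture order (oldest first).
    """
    indices = [target_index - s for s in spacings]
    if min(indices) < 0:
        raise ValueError("run does not contain enough prior parts for this spacing")
    return [run_inspections[i] for i in sorted(indices)]

# During training, target_index is the position of an identified out-of-tolerance part;
# during prediction, it is the position of the not-yet-manufactured part.
```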
[0245] Next, the process 1200A moves to blocks 1215, 1220, and 1225, which can be performed sequentially in any order or in parallel. At block 1215, the prediction engine 165 can generate input data based on the inspections of the in-tolerance parts. For example, the prediction engine 165 can extract a subset of the inspection data corresponding to identified features and/or properties of interest, can correlate values in the data across the in-tolerance data sets, and can create input feature matrices or tuples using the correlated values. Prediction engine 165 can in some embodiments generate the tuples using in-or-out-of-tolerance representations of the extracted measurements rather than the actual measurement values.
[0246] At block 1220, the prediction engine 165 can generate output data based on the inspection of the out-of-tolerance part. For example, the prediction engine 165 can extract a subset of the inspection data corresponding to the same identified features and/or properties of interest as used to generate the input data. Prediction engine 165 can in some embodiments generate the output data using binary in-or-out-of-tolerance representations of the extracted measurements rather than the actual measurement values.

[0247] At block 1225, the prediction engine 165 can optionally identify any additional process parameters for input into the machine learning model. As described with respect to Figure 9A, such process parameters can include the unique identifier associated in the MLM system 110 with one or more of a human inspection operator, a metrology device 102, or a manufacturing system 104 involved in creating/measuring the part. Alternatively, these parameters can be used to segment the training data identified in block 1210 into data sets reflecting particular inspector/manufacturing system/metrology devices or combinations.
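The encoding performed at blocks 1215 and 1220 could, for example, resemble the following sketch, which assumes symmetric numeric tolerances (nominal plus or minus a single value); GD&T tolerances can be asymmetric or geometric, so the encode_inspection and build_training_example helpers shown here are illustrative assumptions only.

```python
# Illustrative encoding of inspections into model input/output (blocks 1215-1220).
from typing import Dict, List, Sequence


def encode_inspection(
    measurements: Dict[str, float],
    nominals: Dict[str, float],
    tolerances: Dict[str, float],
    properties: Sequence[str],
    binary: bool = True,
) -> List[float]:
    """Encode one inspection as an ordered feature vector.

    If `binary` is True, each property becomes 1.0 when out of tolerance and
    0.0 when in tolerance; otherwise the signed deviation from nominal is used.
    """
    features = []
    for prop in properties:
        deviation = measurements[prop] - nominals[prop]
        if binary:
            features.append(1.0 if abs(deviation) > tolerances[prop] else 0.0)
        else:
            features.append(deviation)
    return features


def build_training_example(prior_measurement_sets, out_of_tol_measurements,
                           nominals, tolerances, properties):
    # Input: encoded prior in-tolerance inspections, in sequential order.
    x = [v for m in prior_measurement_sets
         for v in encode_inspection(m, nominals, tolerances, properties)]
    # Expected output: binary out-of-tolerance flags for the failed inspection.
    y = encode_inspection(out_of_tol_measurements, nominals, tolerances,
                          properties, binary=True)
    return x, y
```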
[0248] Next, the process 1200A moves to block 1230, and the prediction engine 165 can train a machine learning model such as the neural network 900 or another suitable predictive machine learning model. For example, the input data can be provided to a statistical process control metric layer 905 and/or input layer 912 of a neural network, and the output data can be provided to the output layer 916. The parameters of the network can be tuned, for example using back-propagation, to minimize the error rate in predicting the designated output data from the designated input data.
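A minimal training sketch for block 1230, written here with the PyTorch library, might look like the following. The layer sizes, the sigmoid output activation, the binary cross-entropy loss, and the learning rate are placeholder assumptions rather than values from the disclosure, and the statistical process control metric layer is omitted; the input vector is fed directly to a small fully connected network.

```python
# Minimal back-propagation training sketch (block 1230), using PyTorch.
import torch
import torch.nn as nn

n_inputs, n_hidden, n_outputs = 28, 16, 4  # e.g. 7 inspections x 4 properties

model = nn.Sequential(
    nn.Linear(n_inputs, n_hidden),
    nn.ReLU(),
    nn.Linear(n_hidden, n_outputs),
    nn.Sigmoid(),  # per-property likelihood of an out-of-tolerance condition
)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)


def train_step(x_batch: torch.Tensor, y_batch: torch.Tensor) -> float:
    """One back-propagation update on a batch of (input, expected output)."""
    optimizer.zero_grad()
    predictions = model(x_batch)
    loss = loss_fn(predictions, y_batch)
    loss.backward()   # back-propagate the prediction error
    optimizer.step()  # tune the network parameters
    return loss.item()
```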
[0249] In some embodiments, after an initial training the process 1200A can loop back to block 1205 to identify another out-of-tolerance part in the run (or in another run of the part created by a different manufacturing system). The process 1200A can be repeated using new input and output data based on the additional out-of-tolerance part in order to refine the parameters of the machine learning model. This can be repeated in some embodiments until all or a threshold number of out-of-tolerance parts are identified and used for training, or until the parameters of the model stabilize (e.g., change less than a predetermined amount). In some embodiments, the process 1200A can be repeated using new input and output data based on the additional out-of-tolerance part in order to generate an additional trained model. In some embodiments, multiple trained models, each having parameters that predict a different identified out-of-tolerance part, can be stored as a neural network ensemble for use in predicting future out-of-tolerance parts.
[0250] The trained model can be stored in the machine learning data repository 185 and accessed in process 1200B to generate manufacturing process predictions. Though discussed in the context of predicting out-of-tolerance measurements, the training data can also be selected in a similar manner to train the machine learning model to predict other manufacturing process conditions, for example near-out-of-tolerance conditions. The model can be trained after accumulation of sufficient inspection data for a particular run in some embodiments. In some embodiments, the model can be re-trained based on updated inspection data relating to the run, for example periodically at predetermined intervals (e.g., nightly, weekly, etc.), in response to the accuracy of the model dropping below a predetermined acceptable threshold, or in response to process changes. Thus, in some embodiments the process 1200A can be pre-computed, that is, performed prior to real-time analysis of manufacturing process conditions.
[0251] Turning to Figure 12B, at block 1235 the prediction engine 165 can identify an input inspection data set based on real-time manufacturing and inspection data. For example, in robotic manufacturing cells as well as in manufacturing processes involving human inspectors and/or machinists, parts in a run can be inspected after manufacture and prior to creation of subsequent parts in the run. In such contexts, the prediction engine 165 can access inspection data of a number of previously manufactured parts in the run, where the input data follows the same sequence as the input training data sets described above. The prediction engine 165 can select new input inspection data sets in a manner consistent with the selection of the input training data so that a trained model is provided with consistent fields and/or quantities of data. However, in contrast to the training process described above, at block 1235 the prediction engine 165 does not identify any output data, as the next part has not been manufactured and is the subject of the presently described predictions. The predictions can also be used in "batch inspection" type processes, for example where a certain number of parts (e.g., 10, 30, 50) are created, and then that batch is inspected. In such implementations, the machine learning model can be trained to make predictions about a future part in a next batch.
[0252] Next, at block 1240, the prediction engine 165 can access the trained model and input the data identified at block 1235 into the model, for example into statistical process control metric nodes and/or input nodes of the neural network. The resulting output node values represent predicted measurements and/or in/out of tolerance conditions of the next part that will be manufactured in the run, and thus the network can make metrology predictions regarding yet-to-be-made parts.
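One possible arrangement of the real-time loop of blocks 1235 and 1240 is sketched below, assuming the encode_inspection helper and the trained model from the earlier sketches, and a model trained on a window of seven consecutive inspections; these assumptions are illustrative only.

```python
# Sketch of the real-time prediction loop (blocks 1235-1240): maintain a
# sliding window of recent inspections and run a forward pass as parts arrive.
from collections import deque

import torch

WINDOW = 7  # number of prior inspections the model was trained on

recent = deque(maxlen=WINDOW)


def on_new_inspection(measurements, nominals, tolerances, properties):
    """Call after each part is inspected; returns a prediction once the
    window is full, otherwise None."""
    recent.append(encode_inspection(measurements, nominals, tolerances,
                                    properties))
    if len(recent) < WINDOW:
        return None
    x = torch.tensor([v for insp in recent for v in insp],
                     dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        # Predicted per-property values or out-of-tolerance likelihoods.
        return model(x).squeeze(0)
```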
[0253] In some embodiments, at block 1240 the prediction engine 165 can use an ensemble of trained models to generate one or both of the out-of-tolerance prediction and a confidence in the prediction. For example, the ensemble can include a number of models each trained to predict whether the next part manufactured will be out of tolerance based on a different set of previous inspections. The outputs from the networks in the ensemble can be averaged in some examples to provide an average predicted measurement of the part properties. In other embodiments, the outputs from the networks in the ensemble can be used to generate a confidence value associated with an out-of-tolerance condition, and the confidence value can be used to determine what action (if any) should be taken to adjust the manufacturing system.
[0254] To illustrate, as discussed with respect to Figure 9C, four neural networks can be trained to identify an out-of-tolerance condition for the next part, the part that will be manufactured two parts after the last inspected part, the part that will be manufactured three parts after the last inspected part, and the part that will be manufactured four parts after the last inspected part. By using a sliding window of input inspection data, the target part of these different networks can be aligned to be the same part - in this example, the next part that will be manufactured. Other ensembles can use different numbers of networks or different spacings of predictions (e.g., three networks that predict the next part, five parts from the last inspected part, and ten parts from the last inspected part).
[0255] In some embodiments, these trained networks can output predicted measurement values, and these values can be averaged in order to calculate the predicted measurement values analyzed for purposes of determining any alerts. In some embodiments, the prediction engine 165 can determine whether each network predicts that the next manufactured part will be in or out of tolerance, and the degree of agreement between the networks can be used to generate a confidence value in the determined prediction (either specific measurement predictions or general in/out of tolerance predictions).
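The following sketch illustrates one way such an ensemble could be combined, assuming networks that output predicted measurement values; it returns the averaged prediction together with an agreement-based confidence per property. The function name and the vote-counting scheme are illustrative assumptions rather than features of the disclosure.

```python
# Sketch of ensemble averaging and agreement-based confidence (paragraphs 0253-0255).
import torch


def ensemble_predict(models, x, nominals, tolerances, properties):
    outputs = []
    with torch.no_grad():
        for m in models:
            outputs.append(m(x).squeeze(0))
    stacked = torch.stack(outputs)         # shape: (n_models, n_properties)
    mean_prediction = stacked.mean(dim=0)  # averaged predicted measurements

    # Per-model binary votes: does each network predict out-of-tolerance?
    votes = []
    for out in outputs:
        votes.append([
            abs(out[i].item() - nominals[p]) > tolerances[p]
            for i, p in enumerate(properties)
        ])

    # Confidence = fraction of networks agreeing with the majority, per property.
    confidences = []
    for i, _ in enumerate(properties):
        yes = sum(v[i] for v in votes)
        confidences.append(max(yes, len(models) - yes) / len(models))
    return mean_prediction, confidences
```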
[0256] Next, at decision block 1245, the prediction engine 165 can use the output values to determine whether the next part in the run will be in tolerance (and optionally to determine a confidence value in the generated prediction). As described above, this can involve determining that the output node of any property indicates a binary indication of an out-of-tolerance condition, determining that the output node of any property indicates a greater-than-threshold likelihood of an out-of-tolerance condition, or comparing the output predicted measurement value for each property to identified tolerances. For example, block 1245 can be performed by the alert creation module 1140 using the data structure and logic discussed with respect to Figure 10 in some embodiments.
[0257] If the prediction engine 165 determines at block 1245 that the next part will be in tolerance, the process 1200B loops back to block 1235 to continue performing real-time, in-line analysis of the manufacturing process. The input inspection data can be updated with a sliding window of recent inspections based on the determined spacing of part manufacture for the input data (e.g., successive, spaced apart, or a combination, as described above).
[0258] If the prediction engine 165 determines at block 1245 that the next part will be out of tolerance (optionally with an above-threshold confidence value), the process 1200B moves to block 1250 and the prediction engine 165 can generate an out-of-tolerance alert. In some embodiments, the out-of-tolerance alert can be automatically provided to the manufacturing system to cause the manufacturing system to halt or correct the manufacturing process before the out-of-tolerance part is made. In some embodiments, the out-of-tolerance alert can additionally or alternatively be sent to the user computing device 106 of any designated users. The out-of-tolerance alert can include a simple indication that the prediction engine 165 predicts that the next part will be out of tolerance in some embodiments. Other embodiments can include additional details, for example one or more of an identified probability (confidence value) that the next part will be out of tolerance, an identified cause of the out-of-tolerance condition, and a recommended corrective action to adjust the manufacturing process to prevent the next part from exceeding tolerances.
[0259] For example, in order to generate a detailed out-of-tolerance alert, the prediction engine 165 can identify one or more output nodes having a value that indicates the predicted out-of-tolerance condition. The output node can be mapped to a particular feature and/or specific property of that feature. The MLM system 110 or a control system 105 in communication with the MLM system 110 can store a mapping between the features/properties of a part and manufacturing tooling used to create the feature/property, as discussed with respect to the example of Figure 10. Using the identified output node and the mapping, the prediction engine 165 can identify specific tooling that would be responsible for the out-of-tolerance condition of a property/feature. The alert can include the unique identifier of this tooling in the MLM system 110 in some examples. In some examples, the prediction engine 165 can use the value of the identified output node to identify a likely deviation from nominal at the predicted out-of-tolerance property/feature, and can further include instructions in the alert regarding how to adjust the identified tooling to compensate for the predicted deviation.
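A detailed alert of the kind described in this paragraph could be assembled roughly as follows; the OutOfToleranceAlert structure, the mapping dictionaries, and the build_alert helper are hypothetical names introduced only for illustration.

```python
# Sketch of detailed alert generation: map a flagged output node to a
# property, look up the responsible tooling, and suggest an opposing offset.
from dataclasses import dataclass
from typing import Dict


@dataclass
class OutOfToleranceAlert:
    property_name: str
    tool_id: str
    predicted_deviation: float
    suggested_offset: float
    confidence: float


def build_alert(node_index: int,
                predicted_value: float,
                confidence: float,
                node_to_property: Dict[int, str],
                property_to_tool: Dict[str, str],
                nominals: Dict[str, float]) -> OutOfToleranceAlert:
    prop = node_to_property[node_index]
    deviation = predicted_value - nominals[prop]
    return OutOfToleranceAlert(
        property_name=prop,
        tool_id=property_to_tool[prop],
        predicted_deviation=deviation,
        # Bias the tool opposite to the predicted deviation from nominal.
        suggested_offset=-deviation,
        confidence=confidence,
    )
```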
[0260] In some embodiments, after outputting an out-of-tolerance alert that involves adjusting the manufacturing system to compensate for predicted out-of-tolerance conditions, the process 1200B can be suspended temporarily until a sufficient number of new inspections are performed to populate the input data set with inspections of parts manufactured after the adjustment.
[0261] In some embodiments, after outputting an out-of-tolerance alert that involves adjusting the manufacturing system to compensate for predicted out-of-tolerance conditions, the process 1200B can loop back to block 1235 to identify input inspection data for provision to a machine learning model. This new input inspection data may differ in its composition (e.g., number and spacing of inspections) from the input inspection data identified prior to adjusting the manufacturing system, in order to match the composition of input data used to train a different model. In such embodiments, block 1240 can involve application of a different model than the one applied at the previous iteration of block 1240. For example, a "recalibrated" machine learning model can be trained using inspection data sets surrounding or following adjustment of the manufacturing system (e.g., change of tool offset, changing out a worn tool for a new tool, etc.) in order to predict out-of-tolerance conditions of parts manufactured shortly after the manufacturing system adjustment. The recalibrated machine learning model can be used temporarily until a sufficient number of new inspections are performed to populate the original-composition input data set with inspections of parts manufactured after the adjustment.
Overview of Example Inspection-Guided Manufacturing
[0262] The MLM system 110 can include an inspection-machining feedback system for refining manufacturing processes based on inspection data. Returning to Figure 1A, as described above, the control system 105 can send instructions to control the operations of a manufacturing system 104. Such instructions can be based on inspection data or insights derived from inspection data. The control system 105 can be implemented on a computing device of the manufacturing system 104 and/or on a separate computing device in network communication with the manufacturing system 104.
[0263] For example, statistical analysis of aggregate data representing a number of parts manufactured by the same machine can identify trends in part deviations from nominal over time. Such trends can be indicative of changing wear conditions of the machine tooling. Machine tooling can refer to the specific tool of a manufacturing system that creates a part (or a specific geometric feature of the part), either by additive or subtractive manufacturing. The inspection-machining feedback system can be accomplished by the analytics engine 150 identifying such trends and providing offsets to the control system 105, and by the control system 105 instructing the manufacturing system 104 to set offsets to compensate for the wear conditions during manufacture of subsequent parts. This beneficially enables manufacturing of in-tolerance parts even as machine tooling wears over time to a point that would otherwise produce out-of-tolerance parts.
[0264] In some implementations, the offsets can be dynamically adjusted by the analytics engine 150 based on analysis of new inspections as additional parts are created by the manufacturing system 104. For example, the analytics engine 150 can identify deviations from nominal on specific properties of a previously manufactured part based on inspection data. The analytics engine 150 can send these identified deviations to the control system 105. The control system 105 can have a mapping between specific part features and/or properties and specific tooling and/or techniques of the manufacturing system. Using the mapping, the control system 105 can identify which tooling and manufacturing instructions relate to a particular deviation in the previous part inspection. The control system 105 can set a tool tip offset (e.g., a bias in the position of the tool tip relative to the position specified by the manufacturing instructions) based on the deviation received from the analytics engine 150 in order to compensate for the amount and direction of the deviation from nominal during manufacture of the next part. In some embodiments, the control system 105 can set limits on the offsets based on known wear conditions of the associated tooling as determined by trend analysis via the analytics engine 150.

[0265] As described above, in some implementations the control system 105 can generate instructions for halting or taking corrective action in a manufacturing system 104 based on a prediction output from a machine learning model.
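The tool-tip compensation of paragraph [0264] could be sketched as follows; accumulating the bias into a running offset and the clamping limit shown here are assumptions introduced for illustration, not requirements of the disclosure.

```python
# Sketch of dynamic tool-tip offsetting: bias the next part opposite to the
# deviation measured on the previous part, within limits.
def tool_tip_offset(measured: float, nominal: float,
                    current_offset: float = 0.0,
                    max_offset: float = 0.05) -> float:
    """Return an updated tool-tip offset (same units as the measurement)."""
    deviation = measured - nominal
    # One possible accumulation policy: adjust the existing offset by the
    # new deviation, equal in magnitude and opposite in direction.
    proposed = current_offset - deviation
    # Clamp to limits derived from tool specifications or wear-trend analysis.
    return max(-max_offset, min(max_offset, proposed))
```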
[0266] As another example, molds used to create parts are not typically replaced until measured parts produced by the mold no longer conform to their accuracy requirements. Constructing a new mold is a time-consuming process that, if not completed when a previous mold fails, can lead to delays in the supply chain that ultimately affect the schedule of the project; however, large molds are expensive to store. By identifying trends in part deviations from nominal over time, the analytics engine 150 can predict the endpoint of a usable lifecycle of a mold, for example when the mold is predicted to cease producing in-tolerance parts. The MLM system 110 can provide alerts to designated users when the length of a remaining predicted lifecycle for a mold approaches the length of a timeline for creating a replacement mold. Beneficially, this can lead to fewer production interruptions and minimize the need for storing replacement molds.
[0267] The inspection-machining feedback system accordingly can provide users with currently unavailable analytics for managing quality control and increasing manufacturing efficiency. Figure 13 depicts a flow diagram of an illustrative process 1310 for inspection-based manufacturing process controls as described above. Process 1310 includes sub-process 1310A for setting tool offsets and sub-process 1310B for determining mold failure. In some embodiments, these sub-processes can involve similar trend analysis in the analytics engine 150 regarding detecting changing wear conditions based on deviation trends. However, sub-processes 1310A and 1310B can be implemented independently in some implementations. In some implementations, sub-process 1310A can be used on CNC (computer numerically controlled) machines or similar manufacturing machines, for example pneumatic or other robotically controlled systems. Sub-process 1310B can be implemented to predict failure times of molds (male or female) used to manufacture parts.
[0268] At block 1305, the analytics engine 150 can access at least one inspection report. With respect now to sub-process 1310A, the at least one inspection report can relate to a part manufactured by a robotically-controlled system. A single, most recent inspection report can be used in some examples to identify the current deviations. The analytics engine 150 can use a feature-based inspection report as described above to identify the mean error of a feature (e.g., the mean of the deviation of all properties of the feature). If aggregate inspection data is analyzed to identify trends, the aggregate datasets can be accessed in their feature-based form and grouped to look at features cut by a specific tool. As an example, some parts can require ten or more tools during manufacture, and such tools can be changed out of a CNC robotically. It can be desirable to aggregate datasets based on tooling to include the various features created by each tool in order to identify how that tool is wearing over time. For example, if a tool cuts three different features using the same portion of the tool, then deviations based on these features should follow the same wear pattern over time. Tools that can be analyzed for wear over time include CNC cutters, for example an end mill, or other tools that physically contact the manufactured parts. As another example, a cutter can cut using the side or bottom of the tool, and inspection data sets can be aggregated based on which side of the tool was used to manufacture a feature, as different portions of a tool may wear differently over time.
[0269] At block 1310, the analytics engine 150 can calculate a deviation or deviation trend based on the at least one inspection report. For example, if a single most recent inspection is accessed at block 1305, then at block 1310 the analytics engine 150 can determine a deviation from nominal of one or more properties in the inspection. In some embodiments, the analytics engine 150 can access identifiers and uncertainties associated with one or more of the manufacturing system, metrology device, and inspector involved in creation and inspection of a part, and can remove the uncertainties from the deviations to isolate the changes that are attributable to tool wear. These uncertainties can be generated as described above with respect to Figures 4-7 in some embodiments. As another example, if multiple inspections are accessed at block 1305, then at block 1310 the analytics engine 150 can calculate deviations (optionally with uncertainties removed) for the same property across the set of inspections, and can identify an equation that models the deviation trend over time. The analytics engine 150 can send a determined deviation and/or deviation trend to the control system 105.
[0270] At block 1315, the control system 105 can set tool offsets based on the identified deviation. For example, the control system 105 can access a look-up table that associates specific features and/or parameters of an inspected part with specific tooling of a manufacturing system 104. Based on the tooling associated with the deviation, the control system 105 can determine a bias to apply to the position of the tool tip relative to its position specified in default machining instructions. The bias can be equal in magnitude to the identified deviation but opposite in direction in order to compensate for the deviation. Although various steps of process 1310A are discussed as distributed between the analytics engine 150 and control system 105, in some embodiments the process 1310A can be distributed differently between these components or performed entirely by one or the other.
[0271] At block 1320, the control system 105 can determine whether the determined tool offset exceeds determined offset limits. For example, the offset limits can be based on tool specifications (e.g., depth of cutter blades), tool wear trend analysis received from analytics engine 150, or user-specified limits. The tool wear trend analysis can track the change in deviation on specific features or properties of a part over time, and can involve removing one or more of manufacturing system, metrology device, and inspector uncertainties from the deviations to isolate changes due to tool wear as described above. If the control system 105 determines that the offset is within predetermined acceptable limits, the process 1310A proceeds to block 1325 to control the manufacturing system 104 to position the tooling according to the determined offset. As such, process 1310A can feed deviations from a last inspection data set (or set of previous inspection data sets) back into the manufacturing machine to make tool tip position updates for the next part.
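Blocks 1315 through 1335 of sub-process 1310A might be tied together as in the following sketch; send_offset_to_machine and send_alert stand in for whatever interface the control system 105 exposes and are purely illustrative.

```python
# Sketch of the offset-limit decision of sub-process 1310A: apply the offset
# when within limits (block 1325), otherwise alert for tool change (block 1335).
def apply_or_alert(deviation: float,
                   offset_limit: float,
                   tool_id: str,
                   send_offset_to_machine,
                   send_alert) -> None:
    offset = -deviation  # oppose the measured deviation from nominal
    if abs(offset) <= offset_limit:
        # Within limits: position the tooling according to the offset.
        send_offset_to_machine(tool_id, offset)
    else:
        # Beyond limits: tool wear exceeds what offsets can compensate.
        send_alert(f"Tool {tool_id} requires replacement: required offset "
                   f"{offset:+.4f} exceeds limit {offset_limit:.4f}")
```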
[0272] If the control system 105 determines that the offset exceeds the limits, the process 1310A transitions to block 1335 to alert the designated user that the tool has worn to the point that it needs to be changed out. The process 1310A can also halt subsequent manufacturing in some embodiments, for example if trend analysis or machine learning predictions indicate that the next part is likely to be out-of-tolerance.
[0273] Returning to block 1305, with respect now to sub-process 1310B, the analytics engine 150 can access an aggregate dataset including multiple inspection reports of parts produced by the same mold. The datasets can include a time of manufacture for each inspected part in order to establish a timeline with respect to usage of the mold. The sampling rate of the inspections can be dynamically adjusted, for example based on the rate of deviation change or proximity to out-of-tolerance conditions, in order to conserve processing resources in performing mold trend analysis.
[0274] At block 1320, the analytics engine 150 can identify specific deviations within a single inspection data set or can identify deviation trends from analysis of an aggregated data set. In some examples, the analytics engine 150 can remove one or more of manufacturing system, metrology device, and inspector uncertainties from the deviations to isolate changes due to wear of the mold.
[0275] Next, at block 1330, the analytics engine 150 can determine a mold failure timeline based on the identified deviation trend. For example, for each feature and/or each property of each feature of the part created by the mold, the analytics engine 150 can model an equation representing a best fit for the trend rate or curve that characterizes the observed deviation shift over time in the aggregate inspection data. The analytics engine 150 can project these trends into a future manufacturing timeline to identify a time when at least one feature is predicted to no longer be within tolerance.
[0276] For example, in one embodiment the analytics engine can run a separate analysis for each feature and output the time to expected out-of-tolerance conditions for each feature. These can be ranked, and the soonest time selected as the mold failure timeline. In some embodiments, block 1330 can involve generating multiple timelines based on various inspectors/metrology devices in combination with the mold to recommend a specific combination that may result in the longest in-tolerance lifetime for the mold. The timeline can be based on a number of remaining in-tolerance cycles with the mold in some embodiments. This can be combined with identified throughput goals and/or actual manufacturing data rates in order to convert the remaining cycles to an estimated failure date.
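One straightforward realization of blocks 1320 and 1330 of sub-process 1310B is a per-feature linear trend fit projected forward to the first tolerance crossing, as sketched below; a straight-line model is an assumption, and other trend equations could equally be fitted.

```python
# Sketch of mold failure prediction: fit a linear deviation trend per feature,
# project the first tolerance crossing, and take the earliest as the estimate.
import numpy as np


def mold_failure_time(times, deviations_by_feature, tolerances):
    """times: manufacture times; deviations_by_feature: feature name -> array
    of deviations from nominal; tolerances: feature name -> tolerance limit."""
    times = np.asarray(times, dtype=float)
    failure_times = {}
    for feature, devs in deviations_by_feature.items():
        slope, intercept = np.polyfit(times, np.asarray(devs, dtype=float), 1)
        if abs(slope) < 1e-12:
            continue  # no measurable drift for this feature
        tol = tolerances[feature]
        # Solve |slope * t + intercept| = tol for the earliest future crossing.
        candidates = [(tol - intercept) / slope, (-tol - intercept) / slope]
        future = [t for t in candidates if t > times[-1]]
        if future:
            failure_times[feature] = min(future)
    if not failure_times:
        return None, None
    feature = min(failure_times, key=failure_times.get)
    return feature, failure_times[feature]
```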
[0277] Once the mold failure timeline is determined at block 1330, the MLM system 110 can alert the designated user at block 1335. The alert can include timeline information (e.g., predicted mold failure date and optionally a replacement creation timeline if known) as well as an indication of which portion of the mold is associated with the failure. For example, the alert can include a visual depiction of the mold or a computer model of the mold with a visual change over the portion that is determined to cause the soonest out-of-tolerance condition. This can assist the alerted user with fixing the mold, if possible. In some embodiments such alerts may only be sent within a threshold time period of the new mold creation timeline, for example six months before the timeline for new mold creation.
[0278] In some implementations, the data used to guide process 1310 may originate only from metrology devices such as CMMs. CMMs, while programmed by humans, run inspections independently of human inspection operators and such devices typically have very small measurement uncertainty values, for example around +/- 2 microns, as indicated in the machine's calibration data. Accordingly, such metrology devices present highly accurate measurement data with relatively low measurement uncertainty (compared to processes in which humans are involved in actuating or positioning components of the inspection process). In analyzing the measurement data from a CMM, the analytics engine 150 can determine with high confidence what tolerance deviations are due to tool (cutter or mold) wear, as the machine measurement uncertainty is a small fraction of typical measured values. In contrast, with an inspector-operated passive robotic measurement arm, it can be difficult to pinpoint exactly what tolerance deviations are caused by inspector uncertainty and what tolerance deviations are caused by tool wear. In some implementations, the process 1310 may require that the inspection uncertainty value be less than the determined tool wear in order to output manufacturing system control instructions.
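The gating condition suggested in this paragraph could be expressed as a simple check that the observed deviation exceeds the combined inspection uncertainty, as in the sketch below; the root-sum-square combination of the uncertainty contributions is an assumption made for illustration.

```python
# Sketch of the uncertainty gate: only attribute a deviation to tool or mold
# wear (and feed it back to the manufacturing system) when it exceeds the
# combined inspection uncertainty, so measurement noise is not compensated.
import math


def wear_exceeds_uncertainty(deviation: float, uncertainties) -> bool:
    """uncertainties: iterable of contributions, e.g. from the metrology
    device, the inspector, and the manufacturing system."""
    combined = math.sqrt(sum(u ** 2 for u in uncertainties))
    return abs(deviation) > combined
```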
Implementing Systems and Terminology
[0279] Implementations disclosed herein provide systems, methods and apparatus for electronic manufacturing quality management analysis. One skilled in the art will recognize that these embodiments may be implemented in hardware, software, firmware, or any combination thereof.
[0280] As used herein, "mean" refers to an arithmetic mean computed by adding values and dividing by the number of values.
[0281] The foregoing embodiments have been presented by way of illustration, and not limitation. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic or step is essential to the invention. For example, although portions of this disclosure refer to a web site that provides electronic manufacturing quality management analysis functionality, the invention is not limited either to web site based implementations or to electronic manufacturing quality management analysis systems. For example, some implementations may be hosted locally rather than through a networked web site.
[0282] The various components shown in Figures 1A-1B, and the various processes described above may be implemented in a computing system via an appropriate combination of computerized machinery (hardware) and executable program code. For example, the multi-tenant manager 130, standardization engine 140, analytics engine 150, model viewing engine 160, and prediction engine 165 of the MLM system 110 may each be implemented by one or more physical computing devices (e.g., servers) programmed with specific executable service code. Each such computing device typically includes one or more processors capable of executing instructions, and a memory capable of storing instructions and data. The executable code may be stored on any appropriate type or types of non-transitory computer storage or storage devices, such as magnetic disk drives and solid-state memory arrays. Some of the services and functions may alternatively be implemented in application- specific circuitry (e.g., ASICs or FPGAs). The model viewing engine 160 can include one or more graphics processing units and associated memories with instructions for rendering interactive three-dimensional representations of objects. Some embodiments of the prediction engine 165 can be implemented using one or more graphics processing units and associated memories with instructions for training machine learning models and for generating manufacturing process predictions using trained models.
[0283] The functions described herein may be stored as one or more instructions on a processor-readable or computer-readable medium. The term "computer-readable medium" refers to any available medium that can be accessed by a computer or processor. By way of example, and not limitation, such a medium may comprise RAM, ROM, EEPROM, flash memory, optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. It should be noted that a computer-readable medium may be tangible and non-transitory. The term "computer-program product" refers to a computing device or processor in combination with code or instructions (e.g., a "program") that may be executed, processed or computed by the computing device or processor. As used herein, the term "code" may refer to software, instructions, code or data that is/are executable by a computing device or processor.
[0284] Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.
[0285] Depending on the embodiment, certain acts, events, or functions of any of the algorithms, methods, or processes described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
[0286] The various databases and data repositories, such as the metrology data repository 170 and user data repository 180, may be implemented using relational databases, flat file systems, tables, and/or other types of storage systems that use non-transitory storage devices (disk drives, solid state memories, etc.) to store data. Each such data repository may include multiple distinct databases. In a typical implementation, the data provided to users, including the data presented by a user interface relating to metrology data analyses, are based on an automated analysis of many recorded events, for example part inspections. The user interface may, in some embodiments, be provided to a user device from application code that runs on a remote computing resource, implemented wholly in client-side application code that runs on users' computing devices, or a combination thereof.
[0287] The standardization engine 140, multi-tenant manager 130, analytics engine 150, and model viewing engine 160, portions thereof, and combinations thereof may be implemented by one or more servers 120. In other embodiments, any of the standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175 may be implemented by one or more server machines distinct from the servers 120. In yet other embodiments, the standardization engine 140, multi-tenant manager 130, analytics engine 150, model viewing engine 160, prediction engine 165, and machine controller 175 may be implemented by one or more virtual machines implemented in a hosted computing environment. The hosted computing environment may include one or more rapidly provisioned and/or released computing resources. The computing resources may include hardware computing, networking and/or storage devices configured with specifically configured computer-executable instructions. A hosted computing environment may also be referred to as a cloud computing environment.
[0288] Further, the processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources. In addition, two or more components of a system can be combined into fewer components. For example, the various systems illustrated as part of the MLM system 110 of Figures 1 A and IB can be distributed across multiple computing systems, or combined into a single computing system. Further, various components of the illustrated systems can be implemented in one or more virtual machines, rather than in dedicated computer hardware systems. Likewise, the data repositories shown can represent physical and/or logical data storage, including, for example, storage area networks or other distributed storage systems. Moreover, in some embodiments the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any of the subset of the components shown can communicate with any other subset of components in various implementations.
[0289] The term "determining" encompasses a wide variety of actions and, therefore, "determining" can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" can include resolving, selecting, choosing, establishing and the like.
[0290] The phrase "based on" does not mean "based only on," unless expressly specified otherwise. In other words, the phrase "based on" describes both "based only on" and "based at least on."
[0291] Conditional language used herein, such as, among others, "can," "might," "may," "e.g.," and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term "or" is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list. In addition, the articles "a" and "an" are to be construed to mean "one or more" or "at least one" unless specified otherwise.
[0292] Disjunctive language such as the phrase "at least one of X, Y, or Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0293] Unless otherwise explicitly stated, articles such as "a" or "an" should generally be interpreted to include one or more described items. Accordingly, phrases such as "a device configured to" are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, "a processor configured to carry out recitations A, B and C" can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
[0294] While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the described MLM systems. Thus, nothing in the foregoing description is intended to imply that any particular feature, characteristic, step, operation, module, or block is necessary or indispensable. As will be recognized, the processes described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of protection is defined by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

WHAT IS CLAIMED IS:
1. A system comprising:
a first data link to a manufacturing system configured to create a run of parts based on a common engineering schematic;
a second data link to a metrology device configured to measure at least some parts in the run of parts to generate measurement data representing a physical shape of each part of the at least some parts; and
a machine learning system including one or more processors in communication with a computer-readable memory storing executable instructions, wherein the one or more processors are programmed by the executable instructions to at least:
access a neural network trained, based on measurement data of past parts in the run of parts, to make a prediction about a future part in the run of parts;
forward pass the measurement data through the neural network to generate the prediction about the future part in the run; and
determine whether to output instructions for adjusting operations of the manufacturing system based on the prediction.
2. The system of claim 1, wherein the neural network includes parameters trained based on metrology inspection data of the past parts in the run of parts, and wherein the prediction represents an aspect of a metrology inspection of the future part.
3. The system of claim 1, wherein the neural network is trained to output a predicted measurement for a feature of the future part, and wherein the one or more processors are further programmed by the executable instructions to compare the predicted measurement to a tolerance specified for the predicted measurement.
4. The system of claim 3, wherein the one or more processors are further programmed by the executable instructions to determine to adjust the operations of the manufacturing system based on determining that the predicted measurement is outside of the tolerance or a predetermined percentage of the tolerance.
5. The system of claim 4, wherein the one or more processors are further programmed by the executable instructions to determine an offset for a tool of the manufacturing system based on the predicted measurement, wherein the tool is configured to create the feature.
6. The system of claim 1, wherein the neural network is trained to output a likelihood that a feature of the future part will be out of tolerance, and wherein the one or more processors are further programmed by the executable instructions to determine to adjust the operations of the manufacturing system based on the likelihood exceeding a predetermined threshold.
7. The system of claim 1, wherein the one or more processors are further programmed by the executable instructions to determine to halt the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance.
8. The system of claim 1, wherein the one or more processors are further programmed by the executable instructions to determine a modification to the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance, wherein the modification includes a tool offset.
9. The system of claim 1, wherein the one or more processors are further programmed by the executable instructions to:
compute at least one statistical process control metric based on the measurement data; and
provide the at least one statistical process control metric as an input into the neural network to generate the prediction.
10. A computer-implemented method comprising:
receiving, from a metrology device, measurement data representing a physical shape of each of a number of parts in a run of parts, wherein parts in the run of parts are manufactured based on a common engineering schematic;
accessing a machine learning model trained, based on measurements of past parts in the run of parts, to make a prediction about a future part in the run of parts;
performing a forward pass of the measurement data through the machine learning model to generate the prediction about the future part in the run; and
determining whether to output an alert for adjusting operations of the manufacturing system based on the prediction.
11. The computer-implemented method of claim 10, wherein the machine learning model is trained to output a predicted measurement for a feature of the future part, the computer-implemented method further comprising comparing the predicted measurement to a tolerance specified for the predicted measurement.
12. The computer-implemented method of claim 11, further comprising determining to adjust the operations of the manufacturing system based on determining that the predicted measurement is outside of a predetermined percentage of the tolerance.
13. The computer-implemented method of claim 12, further comprising determining an offset for a tool of the manufacturing system based on the predicted measurement, wherein the tool is configured to create the feature.
14. The computer-implemented method of claim 10, wherein the machine learning model is trained to output a likelihood that a feature of the future part will be out of tolerance, the computer-implemented method further comprising determining to adjust the operations of the manufacturing system based on the likelihood exceeding a predetermined threshold.
15. The computer-implemented method of claim 10, further comprising determining to halt the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance.
16. The computer-implemented method of claim 10, further comprising determining a modification to the operations of the manufacturing system in response to the prediction indicating that the future part will be out of a predetermined percentage of tolerance, wherein the modification includes a tool offset.
17. The computer-implemented method of claim 10, further comprising:
computing at least one statistical process control metric based on the measurement data; and
providing the at least one statistical process control metric as an input into the machine learning model to generate the prediction.
18. The computer-implemented method of Claim 10, further comprising outputting the alert to a control system configured to control operations of the manufacturing system.
19. The computer-implemented method of Claim 18, further comprising, by the control system, halting or correcting operation of the manufacturing system in response to receiving the alert.
20. The computer-implemented method of Claim 10, wherein the machine learning model comprises a neural network including at least an input layer and an output layer, the computer-implemented method further comprising:
providing the measurement data to nodes of the input layer; and
determining whether the future part will be out of tolerance based on values of nodes of the output layer.
21. The computer-implemented method of Claim 20, further comprising:
identifying a node of the output layer having a value indicative of an out-of- tolerance measurement predicted for the future part;
accessing a mapping between the identified node and a geometric feature of the common engineering schematic;
accessing a mapping between the geometric feature and a tool of the manufacturing system; and
including an identification of the tool in the alert.
22. The computer-implemented method of Claim 21, further comprising:
identifying a predicted deviation from tolerance based on the value of the identified node;
calculating a position bias for controlling the tool to mitigate the predicted out-of-tolerance measurement;
generating the alert to include the position bias; and
outputting the alert to a control system configured to control operations of the manufacturing system.
23. The computer-implemented method of Claim 22, further comprising, by the control system, controlling the manufacturing system to apply the position bias during control of the tool during manufacture of the geometric feature of the future part.
24. The computer-implemented method of Claim 10, wherein the machine learning model is trained to make the prediction based on a first set of inspections in the measurement data, the computer-implemented method further comprising:
accessing an additional machine learning model trained to make an additional prediction about the future part based on a second set of inspections in the measurement data, wherein the first set of inspections and the second set of inspections represent different sets of parts in the run of parts; and
determining whether the future part will be out of tolerance based on the prediction of the machine learning model and on the additional prediction of the additional machine learning model.
25. A non-transitory computer readable medium storing computer-executable instructions that, when executed by a computing system comprising one or more computing devices, causes the computing system to perform operations comprising:
identifying an inspection of an out-of-tolerance part in a run of parts manufactured based on a common engineering schematic;
identifying a set of inspections of in-tolerance parts manufactured prior in the run to the out-of-tolerance part;
generating input data based on the set of inspections of the in-tolerance parts;
generating expected output data based on the inspection of the out-of-tolerance part;
training a machine learning model for predicting out-of-tolerance parts to predict the expected output data from the input data; and
providing the trained machine learning model to a control system configured to control operations of a manufacturing system in manufacturing additional parts based on the common engineering schematic.
26. The non-transitory computer readable medium of Claim 25, wherein the machine learning model comprises a neural network comprising at least a statistical process control metric generation portion and a connected portion including an input layer, a hidden layer, and an output layer, the operations further comprising:
providing the input data to nodes of the statistical process control metric generation portion, wherein the input data comprises measurement data representing measured values of physical features of the in-tolerance parts;
generating at least one statistical process control metric at the nodes of the statistical process control metric generation portion;
providing the at least one statistical process control metric of each node of the statistical process control metric generation portion to a corresponding node of the input layer;
providing the output data to nodes of the output layer; and
tuning parameters of nodes of the hidden layer based on back-propagation.
27. The non-transitory computer readable medium of Claim 26, the operations further comprising providing the measurement data to additional nodes of the input layer.
28. The non-transitory computer readable medium of Claim 27, the operations further comprising updating the training during manufacture of the run of parts based on inspections of additional parts in the run of parts.
29. The non-transitory computer readable medium of Claim 26, the operations further comprising:
identifying a second set of inspections of in-tolerance parts manufactured prior in the run to the out-of-tolerance part, wherein the set of inspections and the second set of inspections represent at least one different in-tolerance part;
generating second input data based on the second set of inspections of the in-tolerance parts;
training a second machine learning model for predicting out-of-tolerance parts to predict the expected output data from the second input data; and
providing the trained machine learning model and the second trained machine learning model as an ensemble to the control system.
30. The non-transitory computer readable medium of Claim 26, the operations further comprising, by the control system, using the trained machine learning model to generate a metrology prediction regarding a future part in the run of parts.
31. The non-transitory computer readable medium of Claim 30, the operations further comprising, by the control system, determining whether to correct or halt operations of the manufacturing system based on the metrology prediction.
PCT/US2018/030523 2017-05-04 2018-05-01 Metrology system for machine learning-based manufacturing error predictions WO2018204410A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762501321P 2017-05-04 2017-05-04
US62/501,321 2017-05-04

Publications (1)

Publication Number Publication Date
WO2018204410A1 true WO2018204410A1 (en) 2018-11-08

Family

ID=64016558

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/030523 WO2018204410A1 (en) 2017-05-04 2018-05-01 Metrology system for machine learning-based manufacturing error predictions

Country Status (1)

Country Link
WO (1) WO2018204410A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7449348B1 (en) * 2004-06-02 2008-11-11 Advanced Micro Devices, Inc. Feedback control of imprint mask feature profile using scatterometry and spacer etchback
US7235414B1 (en) * 2005-03-01 2007-06-26 Advanced Micro Devices, Inc. Using scatterometry to verify contact hole opening during tapered bilayer etch
US20080058978A1 (en) * 2006-08-31 2008-03-06 Advanced Micro Devices, Inc. Transistor gate shape metrology using multiple data sources
US20140293023A1 (en) * 2013-04-01 2014-10-02 The Boeing Company Laser videogrammetry

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10755215B2 (en) * 2018-03-22 2020-08-25 International Business Machines Corporation Generating wastage estimation using multiple orientation views of a selected product
US20210208568A1 (en) * 2018-05-29 2021-07-08 Siemens Aktiengesellschaft System, Apparatus, Manufacturing Machine, Measuring Device and Method for Manufacturing a Product
DE102019105061A1 (en) * 2018-11-14 2020-05-14 Jenoptik Industrial Metrology Germany Gmbh Method for measuring the surface of workpieces
CN111199286A (en) * 2018-11-20 2020-05-26 皇家飞利浦有限公司 User customizable machine learning model
US11592812B2 (en) * 2019-02-19 2023-02-28 Applied Materials, Inc. Sensor metrology data integration
CN113544604A (en) * 2019-04-19 2021-10-22 纳米电子成像有限公司 Assembly error correction for an assembly line
CN113924596B (en) * 2019-06-18 2023-11-14 利乐拉瓦尔集团及财务有限公司 Detecting deviations in packaging containers for liquid foods
CN113924596A (en) * 2019-06-18 2022-01-11 利乐拉瓦尔集团及财务有限公司 Detecting deviations in packaging containers for liquid food products
CN110688722A (en) * 2019-10-17 2020-01-14 深制科技(苏州)有限公司 Automatic generation method of part attribute matrix based on deep learning
CN110688722B (en) * 2019-10-17 2023-08-08 深制科技(苏州)有限公司 Automatic generation method of part attribute matrix based on deep learning
US11360835B2 (en) 2019-11-27 2022-06-14 Tata Consultancy Services Limited Method and system for recommender model selection
WO2021154747A1 (en) * 2020-01-27 2021-08-05 Lam Research Corporation Performance predictors for semiconductor-manufacturing processes
US20210233129A1 (en) * 2020-01-27 2021-07-29 Dell Products L. P. Machine learning based intelligent parts catalog
CN112036073A (en) * 2020-07-16 2020-12-04 成都飞机工业(集团)有限责任公司 3D printing part measurement result correction method
WO2022189050A1 (en) * 2021-03-10 2022-09-15 Robert Bosch Gmbh Production of cameras with reduced reject rate
US11892507B2 (en) 2021-08-27 2024-02-06 Exfo Inc. Early detection of quality control test failures for manufacturing end-to-end testing optimization
US20230111750A1 (en) * 2021-10-08 2023-04-13 Pratt & Whitney Canada Corp. Component inspection system and method
EP4213060A1 (en) * 2021-10-08 2023-07-19 Pratt & Whitney Canada Corp. Component inspection system and method
US20230161314A1 (en) * 2021-11-23 2023-05-25 Pratt & Whitney Canada Corp. Computer-implemented method of controlling a manufacturing machine, associated system and computer readable instructions
EP4246265A1 (en) * 2022-03-16 2023-09-20 Claritrics Inc d.b.a Buddi AI System and method for predictive analytics for fitness of test plan
WO2024037769A1 (en) 2022-08-18 2024-02-22 Carl Zeiss Ag Method and manufacturing installation for producing a plurality of workpieces
CN115951123A (en) * 2023-02-28 2023-04-11 国网山东省电力公司营销服务中心(计量中心) Electric energy metering method and system based on wireless communication
CN115951123B (en) * 2023-02-28 2023-06-30 国网山东省电力公司营销服务中心(计量中心) Electric energy metering method and system based on wireless communication

Similar Documents

Publication Publication Date Title
WO2018204410A1 (en) Metrology system for machine learning-based manufacturing error predictions
Lade et al. Manufacturing analytics and industrial internet of things
US20190354915A1 (en) Metrology system for measurement uncertainty analysis
Wang Towards zero-defect manufacturing (ZDM)—a data mining approach
US11656614B2 (en) In-process digital twinning
US11507052B2 (en) System and method of voxel based parametric specification for manufacturing a part
Kostenko et al. Digital twin applications: diagnostics, optimisation and prediction
Papananias et al. A Bayesian framework to estimate part quality and associated uncertainties in multistage manufacturing
Dai et al. Reliability modelling and verification of manufacturing processes based on process knowledge management
Beyerer et al. Machine Learning for Cyber Physical Systems: Selected Papers from the International Conference ML4CPS 2018
Liu et al. Optimal coordinate sensor placements for estimating mean and variance components of variation sources
Williams et al. Augmented reality assisted calibration of digital twins of mobile robots
CN112712314A (en) Logistics data recommendation method based on sensor of Internet of things
Shagluf et al. Adaptive decision support for suggesting a machine tool maintenance strategy: from reactive to preventative
US10699035B2 (en) Part management system
Proteau et al. Predicting the quality of a machined workpiece with a variational autoencoder approach
Zhuang et al. Digital twin-based quality management method for the assembly process of aerospace products with the grey-markov model and apriori algorithm
Vishnu et al. A data-driven digital twin framework for key performance indicators in CNC machining processes
Martínez-Arellano et al. A data analytics model for improving process control in flexible manufacturing cells
Eyring et al. Analysis of a closed-loop digital twin using discrete event simulation
Bakhtadze et al. Digital ecosystem situational control based on a predictive model
Riano et al. A Closed-Loop Inspection Architecture for Additive Manufacturing Based on STEP Standard
Li et al. Module-based similarity measurement for commercial aircraft tooling design
Efeoğlu et al. Machine Learning for Predictive Maintenance: Support Vector Machines and Different Kernel Functions
Pipiya et al. Optimization and decision-making strategies with respect to product quality in the presence of several objective functions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18794548
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.03.2020)
122 Ep: pct application non-entry in european phase
    Ref document number: 18794548
    Country of ref document: EP
    Kind code of ref document: A1