CN113544604A - Assembly error correction for a flow line - Google Patents

Info

Publication number
CN113544604A
Authority
CN
China
Prior art keywords
assembly, target object, operator, steps, sequence
Legal status
Pending
Application number
CN202080016336.0A
Other languages
Chinese (zh)
Inventor
Matthew C. Putman
Vadim Pinskiy
Eun-Sol Kim
Andrew Sundstrom
Current Assignee
Nano Electronic Imaging Co ltd
Nanotronics Imaging Inc
Original Assignee
Nano Electronic Imaging Co ltd
Priority date
Filing date
Publication date
Priority claimed from US 16/587,366 (now US 11,156,982 B2)
Application filed by Nano Electronic Imaging Co ltd filed Critical Nano Electronic Imaging Co ltd
Priority claimed from PCT/US2020/029022 (published as WO 2020/176908 A1)
Publication of CN113544604A


Classifications

    • G05B19/41805: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM], characterised by assembly
    • G05B19/406: Numerical control [NC], i.e. automatically operating machines so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form, characterised by monitoring or safety
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G05B2219/31027: Computer assisted manual assembly CAA; display operation, tool, result
    • G05B2219/31046: Aid for assembly; show display on screen next workpiece, task, position to be assembled, executed
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

Aspects of the disclosed technology provide a computational model that utilizes machine learning to detect errors during a manual assembly process and to determine a sequence of steps for completing the manual assembly process that mitigates the detected errors. In some embodiments, the disclosed techniques evaluate the target object, at the assembly step where an error is detected, against a nominal object to obtain a comparison. Based on the comparison, a sequence of steps for completing the assembly process of the target object is determined, and the assembly instructions for creating the target object are adjusted according to this sequence of steps.

Description

Assembly error correction for a flow line
Cross Reference to Related Applications
This application is a continuation-in-part of U.S. patent application No. 16/587,366, entitled "DYNAMIC TRAINING FOR ASSEMBLY LINES," filed on September 30, 2019, which is a continuation of U.S. patent application No. 16/289,422, entitled "DYNAMIC TRAINING FOR ASSEMBLY LINES," filed on February 28, 2019, now U.S. patent No. 10,481,579. In addition, this application claims the benefit of U.S. provisional application No. 62/836,192, entitled "A COMPUTATIONAL MODEL FOR DECISION MAKING AND ASSEMBLY OPTIMIZATION," filed on April 19, 2019; U.S. provisional application No. 62/931,448, entitled "A COMPUTATIONAL MODEL FOR DECISION MAKING AND ASSEMBLY OPTIMIZATION," filed on November 6, 2019; and U.S. provisional application No. 62/932,063, entitled "DEEP LEARNING QUALITY PREDICTION FOR MANUAL ASSEMBLY," filed on November 7, 2019. The entire contents of the above applications and patents are incorporated herein by reference.
Technical Field
The subject technology provides improvements to pipeline workflows, and in particular, includes systems and methods for adaptively updating pipeline operator instructions based on feedback and feedforward error propagation predictions made using machine learning models. As discussed in further detail below, some aspects of the technology include systems and methods for automatically adjusting instructional videos provided at one or more operator stations based on inferences made regarding manufacturing or assembly deviations.
Background
In conventional assembly line workflows, detecting manufacturing errors and determining how to correct them through modifications in downstream processes requires manual (operator) monitoring and expertise. Note that the terms assembly and manufacturing, as well as assembly line, production line, and pipeline, are used interchangeably herein. By relying on manual detection of assembly errors, errors are likely to be ignored (or go unreported) and subsequently propagated downstream during assembly. Furthermore, many assembly workers are trained to perform only limited tasks and therefore may not know how to modify their own workflows to best correct an error originating upstream in the assembly workflow.
In conventional manufacturing workflows, human error in a manual inspection process is typically handled by taking corrective action on the human node. If that person continues to have problems, she is often replaced by another person who, like all of us, is subject to many of the same limitations. It is difficult to repeat an action day after day for years without error, and most assembly workers do not have the authority to take corrective action. Even where such authority is granted, it may be applied inconsistently and learned only from one person's experience at a single flow node. Furthermore, there is no mechanism for learning from errors, nor even for taking proactive corrective measures.
Further, electronic monitoring of the assembly line is limited and does not include a robust mechanism for making immediate adjustments to downstream steps to compensate for errors occurring in upstream steps. Moreover, new mechanisms are needed to assess how changes in operator motions and/or variations in assembly patterns affect the final manufactured product, and to provide corrective measures that improve the performance and/or characteristics of that product.
Disclosure of Invention
In some aspects, the disclosed technology relates to a method for optimizing a workflow in an assembly line, the method comprising: detecting an error in the assembly of a target object at a step of the assembly process; evaluating the target object against a nominal object at that step of the assembly process to obtain a comparison; and determining, based on the comparison, a sequence of steps required to minimize the deviation between the target object and the nominal object. In some aspects, the method may further comprise adjusting the assembly instructions for the target object based on the sequence of steps.
In another aspect, the disclosed technology includes a system for optimizing an assembly line workflow, the system including a plurality of image capture devices, each disposed at a different location to capture an operator's motion during assembly of a target object, and an assembly instruction module configured to automatically modify the guidance and instructions provided to the operator, wherein the assembly instruction module is coupled to the plurality of image capture devices. The assembly instruction module may be configured to perform operations including: receiving motion data from the plurality of image capture devices, wherein the motion data corresponds to a set of steps performed by the operator to assemble the target object, and determining an error in the assembly of the target object based on the motion data and one of the set of steps. In some embodiments, the assembly instruction module may be further configured to perform operations including evaluating the target object against a nominal object at one of the set of steps to obtain a comparison, determining, based on the comparison, a sequence of steps required to minimize the deviation between the target object and the nominal object, and adjusting the assembly instructions provided to the operator according to the sequence of steps. The modified assembly instructions may take the form of, but are not limited to, video generated or edited from the motion data, text-based instructions produced by natural language processing (NLP) of the identified deviations, or other feedback mechanisms provided to the operator.
In another aspect, the disclosed technology relates to a non-transitory computer-readable medium comprising instructions stored thereon that, when executed by one or more processors, are configured to cause the processors to perform operations comprising: detecting an error in the assembly of a target object at a step of the assembly process for the target object, evaluating the target object against a nominal object at that step of the assembly process to obtain a comparison, and determining, based on the comparison, a sequence of steps required to minimize the deviation between the target object and the nominal object. In some embodiments, the instructions may be further configured to cause the processors to perform operations for adjusting the assembly instructions for the target object based on the sequence of steps.
Drawings
Certain features of the subject technology are set forth in the appended claims. However, the accompanying drawings are included to provide a further understanding of the disclosed aspects and, together with the description, serve to explain the principles of the subject technology. In the drawings:
FIG. 1 conceptually illustrates a flow diagram of an exemplary production line deployment, in accordance with some aspects of the disclosed technology.
FIG. 2 illustrates an example of a process for performing assembly error correction at a given operator station in accordance with aspects of the disclosed technology.
FIG. 3 illustrates an example of an electronic system with which some aspects of the subject technology may be implemented.
Detailed Description
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The accompanying drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject technology. It will be clear and apparent, however, that the subject technology is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Aspects of the disclosed technology address the aforementioned limitations of traditional assembly line process flows by providing methods for tracking, training, and incrementally improving production line assembly and the resulting manufactured product. Improvements are realized by providing dynamic visual or other feedback and instructions to each assembly operator. In some embodiments, operator feedback is based on errors, which may include, but are not limited to, assembly errors detected at one or more points in the production line, inefficient processes and/or motions, and poor-quality products.
By implementing the disclosed techniques, the speed of error correction can be significantly increased over manually implemented methods, for example by quickly altering or changing the reference/instruction information provided at each station (or all stations) based on near-real-time error detection. Although some embodiments described herein discuss the use of reference/instruction information in the form of video, other formats are contemplated. For example, assembly/manufacturing instructions may be provided to an assembly operator as audible, visual, textual, and/or tactile cues or other forms of reference. Audible instructional information may include voice instructions or other audible indicators. Visual assembly instruction information may be provided in a video or animation format, for example using an augmented reality (A/R) or virtual reality (V/R) system. In some aspects, visual assembly instructions may be provided as an animation showing how an operator at a given station in the assembly line manipulates a workpiece (or tool). Further, in some aspects, the assembly/manufacturing instructions may include machine instructions, e.g., instructions that can be received and implemented by a robotic or machine assembly operator. The term operator, as used herein, may refer to a human, a robot, or a machine that uses motion to assemble an article of manufacture. The term also covers human-assisted manufacturing implementations, for example where a human operator works in conjunction with, or is assisted by, a robot or machine tool.
In the case where assembly/manufacturing instructions are provided as reference/instruction video, this video is sometimes referred to as Standard Operating Protocol (SOP). Due to minimal hardware requirements, e.g., the use of cameras and displays for each operator, the system of the disclosed technology can be efficiently deployed, while machine learning training, updates, and error propagation can be performed at a centralized computing resource (e.g., in a computing cluster or cloud environment).
In some aspects, the video instructional information may be provided to one or more operators as part of an augmented reality display. That is, augmented reality may be used to convey instructions or deviations from standard assembly/manufacturing methods to an operator, where the display is provided in a hybrid form of augmented video, animated graphics, and/or video data representing a recorded scene. For example, the augmented reality display may provide instructions or guidance provided as animation or graphical overlays to a real-time feed of the workpiece being assembled and/or the tool being used in the assembly/manufacturing process.
In some embodiments, the system of the disclosed technology includes one or more video or motion capture devices disposed at various operating stations in the production line. The capture devices are configured to record the operator's actions/interactions with the parts, equipment, materials, or other tools ("components") at that particular station. In some aspects, operator actions may be captured using video recording; however, other motion capture formats are also contemplated, such as three-dimensional (3D) point clouds representing the operator's actions and/or the operator's interactions with a tool or the article of manufacture. Further, a reference video for each station may be created by recording the actions of one or more experts at a particular station and the experts' interactions with the components at that station. The video may be created from a single instance or multiple instances of expert operation. A motion path may be extracted for each expert, and in embodiments using multiple experts or multiple instances, a calculation (e.g., an average) may be performed on the set of extracted motion paths to create the reference video for the station. The reference video may take the form of a digital or animated rendering of the motion path to be performed at the station. Note that an expert may refer to anyone skilled or knowledgeable about the particular assembly steps for which guidance is provided.
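As an illustrative sketch only, and assuming each expert demonstration has been extracted as an array of hand keypoints over time (the function names, array shapes, and sample counts below are hypothetical), the motion paths of multiple experts might be aligned and averaged as follows:

```python
import numpy as np

def resample_path(path: np.ndarray, num_samples: int) -> np.ndarray:
    """Linearly resample a (T, K, 3) motion path (T frames, K keypoints,
    3D coordinates) onto a common number of time steps."""
    t_src = np.linspace(0.0, 1.0, len(path))
    t_dst = np.linspace(0.0, 1.0, num_samples)
    flat = path.reshape(len(path), -1)  # (T, K*3)
    resampled = np.stack(
        [np.interp(t_dst, t_src, flat[:, d]) for d in range(flat.shape[1])],
        axis=1,
    )
    return resampled.reshape(num_samples, *path.shape[1:])

def reference_motion_path(expert_paths, num_samples=300):
    """Average several expert demonstrations into a single reference path,
    which can then drive a rendered/animated reference video."""
    aligned = [resample_path(p, num_samples) for p in expert_paths]
    return np.mean(aligned, axis=0)
```

Simple time-normalized averaging assumes the demonstrations follow roughly the same trajectory; a production system might instead use a more robust alignment such as dynamic time warping.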
In some embodiments, video or motion capture devices disposed at various operating stations in the production line may also capture properties (e.g., mass, tensile strength, number of defects) of the workpiece/component/tool at the respective station, which may be used to calculate assembly errors.
By capturing the interactions of the operators at their respective workstations, operator errors may be detected by comparing the captured interactions with a benchmark (ground truth) model representing ideal/expert operator interactions/workflows. That is, operator deviations from the idealized interaction model may be used to calculate assembly errors that may be repaired at different locations in the assembly chain, for example, by modifying operator instructions/guidance provided at different workstations. In addition, the quality of the assembly can be captured at each station and compared to a baseline assembly for that station. Deviations of the components from the baseline components may also be used to assign quality levels to the components at particular stations or to calculate operator/assembly errors that may be repaired by altering the operator instructions/guidance provided to each station.
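A minimal sketch of such a deviation measure, assuming the captured and baseline interactions are both available as (T, K, 3) keypoint arrays and that the tolerance value is purely illustrative:

```python
import numpy as np

def motion_deviation(captured: np.ndarray, baseline: np.ndarray):
    """Mean per-frame Euclidean deviation between a captured operator motion
    path and the benchmark (ground truth) path, both shaped (T, K, 3)."""
    per_frame = np.linalg.norm(captured - baseline, axis=-1).mean(axis=-1)  # (T,)
    return float(per_frame.mean()), per_frame

def flag_assembly_error(captured, baseline, tolerance=0.05):
    """Flag an operator deviation as an assembly error when it exceeds a
    station-specific tolerance (units follow the keypoint coordinates)."""
    score, _ = motion_deviation(captured, baseline)
    return score > tolerance
```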
Assembly correction may be performed in various ways, depending on the desired implementation. In some aspects, operator variance/error may be used to perform sorting, for example by sorting parts into quality grades (e.g., A, B, C, etc.) and then directing those parts to the appropriate production lines. In further aspects, detected assembly errors can be used to alter the process at a given station to improve quality and reduce variance. That is, a detected assembly error may be used to automatically provide instructions or guidance at the same station, for example to correct an error caused at that station (e.g., in-station rework). NLP may be used to process the instructions or guidance given to the operator; for example, NLP can translate spoken instructions into textual form, or textual instructions into spoken form.
Assembly error detection may also be used to drive updates/changes to the operator instructions or video provided at a given station where errors are known to occur. For example, if an error/deviation is identified as originating with a first operator working at a first station, the error variance associated with articles exiting the first station can be reduced by modifying the assembly instructions provided to the first operator, e.g., via a display device at the first station.
In a further aspect, detected assembly errors can be used to alter subsequent station assembly to overcome station variance. That is, error detection may be used to automatically trigger downstream propagation of new/updated assembly guidelines based on errors caused by upstream operators. For example, the error variance of the action performed by the first operator may be used to adjust the assembly instructions provided to a second operator associated with a second workstation downstream from the first workstation.
In yet another aspect, the error variances detected across all workstations may be propagated forward to ensure that all or part of the rework may be performed throughout the remainder of the downstream assembly chain. That is, errors generated across one or more workstations may be repaired/reduced by adjusting assembly instructions provided to one or more downstream operators. In one example, the error variance in the article caused by the first operator at the first station may be repaired by operations performed sequentially by the second operator at the second station and the third operator at the third station, i.e., by adjusting the assembly instructions provided at the second station and the third station.
In further examples, the error variance accumulated across multiple workstations may be reduced by one or more subsequent workstations. For example, by adjusting the assembly instructions provided to the third and fourth stations (e.g., to the third and fourth operators, respectively), the error variance in the articles accumulated across the first and second stations may be subsequently repaired.
By treating each operator/operator station in the assembly flow as a network node, a machine learning model can be used to optimize the assembly process by reducing the assembly variance at each node (station) to minimize errors. By minimizing individual node (operator) variances, and performing real-time updates to mitigate forward error propagation, the system of the disclosed technology can greatly reduce the manufacturing variances of the final product. Further, by accurately quantifying and tracking error contributions from specific parts of the assembly workflow, products can be ranked and classified by product quality or amount of deviation. Thus, certain quality classified products may be diverted to different manufacturing processes or to different customers, i.e. depending on the product quality.
Machine learning/Artificial Intelligence (AI) models can be used to perform error detection and/or to perform modifications needed to optimize station assembly variations. For example, a machine learning model may be trained using a variety of training data sources, including but not limited to: end product grade, end product variance statistics, desired end product characteristics (e.g., assembly time, amount of material used, physical characteristics, number of defects, etc.), workstation-specific component grade, workstation-specific component variance, desired workstation component characteristics. Further, the deployed machine learning model may be trained or initialized based on input provided from an expert or "master designer" so that institutional knowledge may be represented in an idealized model for performing error detection and error quantification calculations.
As will be appreciated by those skilled in the art, machine learning-based classification techniques may vary depending on the desired implementation without departing from the disclosed technology. For example, the machine learning classification scheme may apply one or more of the following, alone or in combination: hidden Markov models, recurrent neural networks, convolutional neural networks (CNNs), reinforcement learning, deep learning, Bayesian symbolic methods, generative adversarial networks (GANs), support vector machines, image registration methods, applicable rule-based systems, and/or any other suitable artificial intelligence algorithm. Where regression algorithms are used, these may include, but are not limited to, stochastic gradient descent regressors and/or passive aggressive regressors, among others.
The machine-learned classification model may also be based on a clustering algorithm (e.g., mini-batch K-means clustering), a recommendation algorithm (e.g., MinHash, or a Euclidean locality-sensitive hashing (LSH) algorithm), and/or an anomaly detection algorithm such as the local outlier factor. Further, the machine learning model may employ dimensionality reduction methods, for example one or more of: mini-batch dictionary learning, incremental principal component analysis (PCA), latent Dirichlet allocation, mini-batch K-means, and/or the like.
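For illustration only, several of the algorithms named above have off-the-shelf implementations in scikit-learn; the following sketch (the feature matrix and all parameter values are stand-ins) shows how they might be combined on recorded assembly-motion features:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import IncrementalPCA
from sklearn.neighbors import LocalOutlierFactor

# Toy feature matrix: one row per recorded assembly action,
# e.g., a flattened hand-keypoint trajectory.
X = np.random.rand(500, 60)

# Dimensionality reduction via incremental PCA.
X_reduced = IncrementalPCA(n_components=10).fit_transform(X)

# Mini-batch K-means to group similar assembly motions.
clusters = MiniBatchKMeans(n_clusters=4, n_init=3).fit_predict(X_reduced)

# Local outlier factor to flag anomalous motions (-1 marks an anomaly).
outliers = LocalOutlierFactor(n_neighbors=20).fit_predict(X_reduced)
```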
In some embodiments, a variety of different types of machine learning training/artificial intelligence models may be deployed. For example, general forms of machine learning may be used to dynamically adjust the assembly line process to optimize the manufactured product. As will be appreciated by those skilled in the art, the selected machine learning/artificial intelligence model does not simply comprise a set of assembly/manufacturing instructions; rather, it is a means of providing feedback on the entire assembly line process and its impact on the final manufactured product, as well as a means of providing dynamic adjustments to downstream operating stations to compensate for actions occurring at upstream operating stations. Such artificial-intelligence-based feedback and feed-forward models are referred to herein as Artificial Intelligence Process Control (AIPC). In some embodiments, machine learning may be based on a deep learning model, operating in a simulated environment, that uses learning based on a targeted gated recurrent unit (GRU) model and Hausdorff distance minimization to efficiently search the space of possible recovery paths and find the best path for correcting errors in the assembly process. In further embodiments, a machine learning algorithm may analyze video input of the assembly process and predict the final quality output based on a long short-term memory (LSTM) model. In addition, machine learning models in the form of NLP algorithms may be used to adjust the feedback given at an operator workstation, for example converting text to speech or speech to text, to maximize operator compliance with and understanding of the adjusted instructions.
In some embodiments, errors during the manual assembly process may be corrected using a computational model that utilizes machine learning.
The target object may be assembled by a sequence of steps defined by a process. In this process, irreversible errors may occur at a particular step k, where any remaining operations need to be altered to obtain the final configuration of the nominal object. In some embodiments, a method of correcting errors may include: the defective target object of step k is compared with the nominal object of the same step k or the defective target object of step k is compared with the nominal object in its final configuration. These comparisons may be used to determine the sequence of steps necessary to minimize the deviation between the final configuration of the defective target object and the final configuration of the nominal object. In some embodiments, the quality metric of the target object may also be used to guide the correction of defective target objects.
A general numerical approach can address this problem by using a Hausdorff distance algorithm to determine how similar the sequence of k steps used to assemble a defective target object is to the sequence of steps used to assemble the nominal object into its final configuration. Several ways to computationally minimize the Hausdorff distance between the sequence of k steps for the defective assembled object and the sequence of steps for the finally assembled nominal object are to optimize a Markov decision process (MDP) with an instantaneous reward formulation, a multiple reward formulation, or a delayed reward formulation. However, the search spaces associated with these formulations require significant computational resources.
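A minimal sketch of the distance computation, using SciPy's directed Hausdorff distance and assuming each assembly step is represented as a 3D point set (this step representation is an assumption, not the patent's prescribed encoding):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (N, 3)
    and (M, 3), e.g., surface points of the in-process target object and
    the corresponding nominal object."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def stepwise_hausdorff(target_steps, nominal_steps):
    """Per-step Hausdorff distances between the target assembly sequence
    and the nominal sequence; the sum summarizes their divergence."""
    dists = [hausdorff(t, n) for t, n in zip(target_steps, nominal_steps)]
    return dists, float(np.sum(dists))
```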
Alternatively, a machine learning framework may be developed with a delayed reward policy agent using reinforcement learning. The reinforcement learning framework may be designed to allow the policy agent to determine the appropriate steps required to correct errors in defective target objects and to obtain a final configuration with performance indicators that match those of the nominal objects. The reward given to the policy agent is delayed, wherein the policy agent gets the reward only when the last step has been performed.
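A minimal sketch of a delayed-reward agent, using tabular Q-learning as a stand-in for the policy agent; the environment interface (`reset`/`step`) and the terminal-only reward convention are hypothetical:

```python
import numpy as np

def train_delayed_reward_agent(env, n_states, n_actions,
                               episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning with a delayed reward: `env` returns zero reward
    until the final step, where the reward might be, e.g., the negative
    Hausdorff distance between the corrected and nominal final configurations.
    `env` is a hypothetical interface with reset() -> state and
    step(action) -> (next_state, reward, done)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if np.random.rand() < eps:                 # explore
                action = np.random.randint(n_actions)
            else:                                      # exploit
                action = int(Q[state].argmax())
            next_state, reward, done = env.step(action)
            target = reward if done else reward + gamma * Q[next_state].max()
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q
```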
In some embodiments, a design of an optimal/desired manufactured product may be selected, and a skilled operator may be deployed to perform each step performed at each operating station for assembling the manufactured product according to the selected design. The optimization may be based on desired performance and/or characteristics of the final product (e.g., if the manufactured product is a paper aircraft, then the optimal paper aircraft may be the aircraft that achieves the desired flight goals), minimizing errors in the final manufactured product, or some other criteria. Multiple imaging devices may be used to capture the actions of an operator and its interactions with the article of manufacture being assembled to generate video, images, and/or 3D point cloud data. The captured data may provide granular information such as: operator hand coordinates with respect to the manufactured product at the time the manufactured product is assembled, the relationship of one hand to the other hand, and the relationship of the fingers (in some embodiments, the joints in the fingers) to the manufactured product at the time the manufactured product is assembled. Data collected from skilled operators may be used as a ground truth for optimal/desired assembly of manufactured products. The ground truth from a single instance may be sufficient in itself to create the initial machine learning model, or additional data may be collected. For example, to understand how variations in operator actions or errors affect the final manufactured product, many operators may be deployed to perform one or more steps in the assembly of the best manufactured product. This can be done for each operating station in the pipeline. The resulting end products and their respective assembly processes may be compared to each other and to ground truth to determine how errors and/or variations in operator actions affect the characteristics and/or performance of the manufactured products (e.g., operator speed may result in a poor quality aircraft). Data collected during the actual assembly process (i.e., the process of a human, robot, or machine performing actions at one or more workstations) based on an operator will be referred to herein as "actual training data". The actual training data can be supplemented with simulation data to obtain a richer data set and provide additional variation for achieving an optimal manufactured product. Note that the terms "optimal" and "desired" are used interchangeably herein.
In some embodiments, the different AI/machine learning/deep learning models discussed herein may be deployed in a particular order as described below to implement Artificial Intelligence Process Control (AIPC) to optimize the assembly of an article of manufacture. An exemplary process in which the AIPC deep learning model may be implemented is discussed in more detail with reference to FIG. 1 (e.g., in connection with the AIPC deep learning model 112) and FIG. 2. Examples of hardware systems and/or devices that may be used to implement the AIPC deep learning model are provided in fig. 3 and the corresponding description below.
First, a CNN can be used in the assembly line process to classify features of the operator's hands and of the article in its different configurations at each operating station.
Second, reinforcement learning (RL) and RL agents can be used to act on the CNN classifications and predefined desired outcomes, with the agents rewarded for achieving those desired outcomes. The RL agents may be supervised or unsupervised.
Third, a generative adversarial network (GAN) may be used to choose between conflicting RL agents. A GAN may involve minimal human supervision, relying on humans only to select which RL agents to enter as nodes into the GAN.
Fourth, an RNN can create a feedback and feed-forward system with the winning RL agents as input nodes, so that learning can be continuous and unsupervised.
The implementation of these four AI/machine learning models is discussed in more detail below:
In some embodiments, actual training data may be input into the CNN to classify relevant data in the assembly process, for example: which fingers/hands were used at each operating station during each step of the assembly, which portions of the product under assembly were contacted by the operator's fingers at any point in time and space, and the shape or configuration of the product under assembly at any point in time and space.
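An illustrative PyTorch sketch of such a per-station classifier; the architecture, input resolution, and number of configuration classes are all assumptions:

```python
import torch
import torch.nn as nn

class StationCNN(nn.Module):
    """Minimal CNN that classifies a station image into one of n_classes
    hand/article configurations."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)  # for 224x224 input

    def forward(self, x):                     # x: (batch, 3, 224, 224)
        f = self.features(x)
        return self.classifier(f.flatten(1))  # (batch, n_classes) logits

logits = StationCNN()(torch.randn(4, 3, 224, 224))
```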
In further embodiments, data may also be collected that does not track hand motion but instead represents different variations in the assembly pattern of the manufactured product (e.g., if the manufactured product is a folded paper airplane, data may be collected by altering the folding sequence, implementing folding variations, and/or introducing potential errors; if the manufactured product is an article of clothing, data may be collected based on the sewing sequence, sewing variations, and/or potential errors). This data may be simulated and/or collected from actual training data. The final manufactured products and their respective assembly processes may be compared to determine how errors or variations in assembly patterns affect the characteristics and/or performance of the manufactured product.
In some embodiments, the captured data (e.g., video of the assembly process, hand tracking, etc.) is used to predict the quality of the final output. Such quality prediction allows the captured data to be used to group products into quality bins without manual quality inspection during manufacturing, and enables downstream corrective measures.
In some embodiments, the system may focus on manual assembly of the target object, where the assembly process includes a plurality of discrete steps where the operator performs different operations on the target object according to a set of instructions. The system can be built with a machine learning framework using a deep learning model that establishes a correlation between the time series of operator hand positions and the final quality of the target object (the sum of all operator actions). In some embodiments, the model may include two neural networks, the first for extracting hand position data of the operator in the 3D environment, and the second for refining the hand position data into a correlation with the final performance quality of the target object.
In some embodiments, the first neural network may use a video capture system to record video of the operator's hands during the assembly process as separate node videos, each corresponding to a discrete step performed by the operator in assembling the target object. For example, an operator may perform the assembly process while multiple cameras, located at different positions and configured to record simultaneously, capture pre-designated locations on the operator's hands. These videos may then be processed to extract a plurality of images, or landmark frames, representing the entire assembly process of the target object. The landmark frames may be used to extract hand tracking information that helps define the positions, or keypoints, of the operator's hands and fingers during the assembly process.
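An illustrative OpenCV sketch of landmark-frame extraction; sampling every Nth frame is an assumption, since the text does not specify how landmark frames are selected:

```python
import cv2

def extract_landmark_frames(video_path: str, stride: int = 30):
    """Sample every `stride`-th frame of a recorded assembly video as a
    candidate landmark frame. A production system might instead segment
    on detected step boundaries."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```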
In some embodiments, to extract hand tracking information, a bounding box estimation algorithm and a hand keypoint detector algorithm may be applied. In particular, the bounding box estimation algorithm may process landmark frames from the assembly process using threshold image segmentation to obtain a mask image of the operator's hands. The hands may be located on the mask using blob detection. The bounding box estimation then forms a box around each of the operator's hands using the mask image, such that the box encloses the hand shape from its highest point down at least to the wrist. The bounding boxes and their corresponding landmark frames are then input into the hand keypoint detector algorithm.
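A minimal OpenCV sketch of the bounding box estimation described above; the threshold and minimum blob area are illustrative values that would depend on lighting and camera placement:

```python
import cv2

def hand_bounding_boxes(landmark_frame, thresh=60, min_area=2000):
    """Threshold segmentation to obtain a hand mask, blob detection via
    contours, and one bounding box per sufficiently large blob."""
    gray = cv2.cvtColor(landmark_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c)               # (x, y, w, h) per blob
             for c in contours if cv2.contourArea(c) >= min_area]
    return boxes, mask
```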
The hand keypoint detector algorithm may include a machine learning model capable of detecting specific keypoints on the operator's hands. The algorithm can estimate not only the keypoints visible in a landmark frame but also keypoints that are not visible in the frame due to occlusion by joint articulation, viewpoint, objects, or hand-to-hand interactions. Because different hand positions produce different occlusions in different frames, keypoints occluded in one frame may be visible in others. The hand keypoint detector estimates the positions of occluded keypoints with some confidence; however, estimating these positions may cause the same keypoint location to be recorded for different hand positions. The hand keypoints defining the operator's hands at each step of the manual assembly process are then provided to the second neural network.
In some embodiments, the second neural network is used to predict the quality of the final state of the assembled object. In some embodiments, this neural network may be based on a long short-term memory (LSTM) model. The LSTM has a number of ordered units that together represent the overall assembly process of the final object. The input to each LSTM unit may be the hand keypoint data corresponding to the operator's actions at the particular assembly step that the unit represents. Each unit in the LSTM determines whether information from the previous unit should be stored, selects which values to update, performs the update, selects which values to output, and then filters the values so that the unit outputs only those selected. The LSTM may be a sequence-to-one model trained using the Adam optimizer or another adaptive-learning-rate optimization algorithm. Using the LSTM framework, the neural network correlates the input data extracted from the manual assembly process to determine a quality measure for the final product.
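A minimal PyTorch sketch of such a sequence-to-one LSTM trained with the Adam optimizer; the keypoint dimensionality, hidden size, and the random stand-in data are assumptions:

```python
import torch
import torch.nn as nn

class QualityLSTM(nn.Module):
    """Sequence-to-one LSTM: a sequence of per-step hand-keypoint vectors
    goes in, a single predicted quality score comes out."""
    def __init__(self, keypoint_dim=63, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(keypoint_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):              # x: (batch, steps, keypoint_dim)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])      # (batch, 1) predicted quality

model = QualityLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data.
x = torch.randn(8, 12, 63)             # 8 assemblies, 12 steps each
y = torch.rand(8, 1)                   # measured final quality in [0, 1]
loss = loss_fn(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```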
In some embodiments, video and hand tracking information or input data representing the assembly process of a target object for training a model may be collected from multiple operators performing the assembly process to assemble multiple target objects using a set of assembly instructions. The operator-assembled target objects may be used in a controlled environment to collect corresponding quality measurements or output data of the performance of the assembled objects required to train the model.
In some aspects, the training data used to generate the machine learning model may be derived from simulation data, from actual training data, and/or from ground truth records of experts, either in combination or separately. In some embodiments, the simulation data results may be used to build a machine learning model, such as (but not limited to) a Reinforcement Learning (RL) agent. In other embodiments, a machine learning model, such as, but not limited to, a Reinforcement Learning (RL) agent, may be built using actual training data. RL agents are rewarded for achieving good/desired results and are penalized for poor results.
In some cases, many RL agents (some based on actual training data and some based on simulated data) may be deployed to work in concert, configured to maximize a cumulative reward: for example, assembling a manufactured product with minimal deviation from an ideal model/example. Exemplary outcomes for which an RL agent may be rewarded include completing the manufactured product in as few steps as possible and reducing the amount of material or time required to produce it. RL agents based on simulated data and RL agents based on actual training data can be used to determine the optimal motion patterns and/or optimal assembly patterns that produce the optimal/desired manufactured product.
These two sets of RL agents (e.g., RL agents created from actual training data and RL agents created from simulated data) can now collaborate, and even compete, since both receive rewards for actions that produce the optimal/desired manufactured product. In some embodiments, data obtained from the simulation-based RL agents (which results in an optimal assembly pattern for the optimal manufactured product) may be used to reduce the space of possibilities for the actual training data set. For example, the simulated RL agents may be used to determine the optimal assembly patterns, and actual training data may then be collected only for those optimal assembly patterns rather than for non-optimal ones. By focusing collection on the optimal assembly patterns, less training data needs to be collected, and/or greater capacity becomes available to collect more actual training data, but only for the optimal assembly patterns.
Relying on reinforcement learning alone to optimize an assembly line is limited, because rewards sometimes conflict. For example, in the assembly of a product, some RL agents may be rewarded for the fewest spurious movements (e.g., making a fold and immediately undoing it, or adding a stitch and immediately removing it), while other RL agents may be rewarded for speed. The RL agents rewarded for speed may determine that more spurious movements lead to faster assembly times, because fewer corrections are needed downstream in the assembly process. Making such trade-off decisions is not something humans can readily compute. Even with experience and extensive examples, humans lack the computational capacity to understand the subtleties of how the end result emerges from different operators working in different ways.
To resolve these conflicting RL agent optimizations, a GAN may be deployed to act as an arbitrator. Conflicts may occur among RL agents based on actual training data, among RL agents based on simulated data, and/or between RL agents based on actual training data and RL agents based on simulated data.
In some embodiments, the GAN may test each RL agent and store the results in order to create a more robust neural network. The GAN works by pitting the RL agents against one another in a zero-sum game that produces winners and losers. In a GAN, there is a "generator" and a "discriminator". In this case, the generator stores the reward data from the conflicting RL agents, and the discriminator evaluates which of these are most relevant to the task of creating the desired manufactured product. The GAN uses deep network nodes (or neurons) to decide how to weight the nodes. Since each RL agent believes it has made the optimal decision, the GAN serves to determine which of the conflicting RL agents actually made the most appropriate choices, and the discriminator adjusts the weights accordingly. When a zero-sum game is played between conflicting RL agents, a set of winners emerges, and only those winners are used to optimize the machine learning model for the assembly line workflow. Although a large amount of data may be generated to determine the winning RL agents, the result is far sparser than the data used to create and find those winners, which then serve as input nodes.
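A heavily simplified sketch of the discriminator side of such an arbitration, in PyTorch: outcome features from the competing RL agents are scored against examples of desired outcomes, and the highest-scoring agents are kept as winners. The feature size, scoring scheme, and all names are illustrative assumptions, not the patent's prescribed design:

```python
import torch
import torch.nn as nn

# Discriminator scoring an agent's outcome features (e.g., reward statistics,
# deviation measures) for closeness to desired manufactured-product outcomes.
disc = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(desired_feats, agent_feats):
    """One discriminator update: desired outcomes labeled 1, the conflicting
    agents' outcomes labeled 0 (a zero-sum scoring of the competitors)."""
    logits = disc(torch.cat([desired_feats, agent_feats]))
    labels = torch.cat([torch.ones(len(desired_feats), 1),
                        torch.zeros(len(agent_feats), 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def pick_winners(agent_feats_by_id, top_k=2):
    """Keep only the agents whose outcomes the discriminator scores highest;
    these winners become input nodes for the downstream RNN."""
    scores = {aid: disc(f).mean().item() for aid, f in agent_feats_by_id.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```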
In some embodiments, once it is determined which RL agents have survived the GAN's zero-sum game and received the correct rewards, they may be imported into another AI system known as a recurrent neural network (RNN). An RNN has many similarities to a CNN in that it is a deep learning neural network that optimizes a final result through various forms of weighting of the input data. One difference is that a CNN is a linear process from input to output, whereas an RNN is a loop that feeds the resulting outputs, and even internal node states, back in as new training information. An RNN is thus both a feedback and a feed-forward system; the GRU is one example.
In some embodiments, a machine learning framework may be built upon learning based on a targeted GRU model. A GRU model may be selected over reinforcement learning because of its predictive capabilities and relatively short training times. In an RNN, GRUs are used to distinguish observations that should be stored in memory (the update state) from observations that should be forgotten (the reset state).
In some embodiments, a GRU model may include a number of GRU units corresponding to the number of assembly steps required to build the target object. Each GRU unit, representing one of the assembly steps, may take a plurality of input parameters and produce a hidden-state output. The GRU unit representing the last step of the assembly process outputs the target object, and the output of the model is the deviation of that target object from the nominal object. The deviation may be calculated using the stepwise Hausdorff distance from the target object to the nominal object, together with the performance metrics of the nominal object's final configuration. Each GRU unit is defined by reset, update, and new gates. The GRU neural network is trained iteratively to bias it toward solving specific sub-problems and to determine a set of weights for the GRUs. For example, at each iteration a number of predictions (one for each possible error in a particular step) are generated for completing the assembly process in the subsequent steps, together with corresponding predicted distance measures for correcting the assembly process. These predicted assembly-process completions may be rendered in a virtual representation system and their stepwise Hausdorff distances calculated to obtain "ground truth" distance measures. The differences between the "ground truth" and predicted distance measures can be computed and fed back into the model, whose network weights are adjusted by back-propagation to produce the next iteration. The process may continue until a set of GRU weights is identified. In some embodiments, a stochastic gradient descent method may be used to correct defective target objects and derive the steps necessary to reach a satisfactory final configuration.
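A minimal PyTorch sketch of such a GRU model, with the predicted deviation trained by back-propagation against "ground truth" stepwise-Hausdorff deviations; the step-parameter dimensionality, hidden size, and stand-in data are assumptions:

```python
import torch
import torch.nn as nn

class AssemblyGRU(nn.Module):
    """One GRU unit per assembly step; the final hidden state is mapped to a
    predicted deviation of the completed object from the nominal object."""
    def __init__(self, step_param_dim=8, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(step_param_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, step_params):    # (batch, n_steps, step_param_dim)
        _, h_n = self.gru(step_params)
        return self.head(h_n[-1])      # (batch, 1) predicted deviation

model = AssemblyGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training iteration: predicted deviations vs. "ground truth" deviations
# obtained by rendering predicted completions and measuring their stepwise
# Hausdorff distances (random stand-ins here).
params = torch.randn(16, 10, 8)        # 16 candidate completions, 10 steps
ground_truth = torch.rand(16, 1)
loss = nn.functional.mse_loss(model(params), ground_truth)
opt.zero_grad(); loss.backward(); opt.step()  # back-propagation adjusts weights
```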
In some embodiments, simulations, such as parameterized computer-aided design (CAD) models of the in-process target object, may be generated to develop and validate the machine learning models. The CAD system may use a local coordinate system corresponding to the current state of the in-process target object, together with input parameters representing each assembly step. Using these, the CAD system can determine dimensional information for each assembly step and generate a three-dimensional CAD model representing the output configuration of each step. The CAD system may continue this process until all steps in the assembly process have been performed, at which point a CAD model of the final configuration of the assembled object can be output. CAD models of different configurations may be generated by providing various input parameters to the CAD system. To obtain a set of CAD models spanning a particular range of input criteria (e.g., length or width), a statistical sample of the input criteria may be provided to the CAD system.
The detail and complexity of the CAD models may vary, but the trained models and systems described here are designed to work with lower-detail CAD systems, which allow a large number of examples to be generated at low computational cost while still providing sufficient surface-morphology detail for model training and analysis. In some embodiments, the referenced CAD system may be paired with finite element analysis (FEA) or basic surface modeling tools to generate a structural analysis of the surface. This data can serve as an additional quality score for model training and analysis.
In some embodiments, the CAD system may be incorporated into model training so that additional surface models can be generated on request from the model, or as additional exploration data is needed. Combined with physical observation, this approach allows pre-trained models to be deployed without requiring large numbers of physical samples.
In some embodiments, a CAD model of the final configuration of the assembled object may be used in simulation, utilizing numerical and computational methods, to generate performance metrics for that configuration.
A practical application of Artificial Intelligence Process Control (AIPC) involves providing feedback (e.g., by automatically modifying video instructions) to operators in the assembly line who have completed their tasks, and providing instructions (e.g., by automatically modifying video instructions) to operators downstream who have not yet completed their tasks ("feed forward"). This feedback/feed-forward system, or AIPC, may be implemented using the AI methods described herein and, in some embodiments, in the particular order described herein, so that operators in the assembly line can make selections that optimize the final manufactured product, without additional human supervision.
In some embodiments, this involves condensing the system described above into an RNN alone and evaluating each movement in the process of creating one or more manufactured products in one of two ways: successful or unsuccessful. Each movement serves as a training example. If the output node of the RNN is not optimal, the network can feed this back to the actual individuals in the assembly line so that different selections are made, and along the path through the RNN's many nodes and layers, the weights can be re-weighted and the output labeled as successful or unsuccessful. As the process iterates, the accuracy of the weights improves. Furthermore, the network can learn what works and what does not, even when the individuals performing the assembly do not know; this adds to the training set. It also allows adjustments at different stages of the assembly process. In some cases it may be found that, at any given moment, the best way to produce an article with particular characteristics is not to start over from the beginning, but to adjust the instructions as the process progresses. The RNN is then always optimizing for the best manufactured product, learning to provide feedback to each operator at a station that has already performed its task and to feed information forward to operators at stations that have not yet performed theirs.
FIG. 1 conceptually illustrates a flow diagram of an exemplary process 100 for implementing a production line deployment, in accordance with some aspects of the disclosed technology. The process of FIG. 1 begins at step 102, in which the production deployment is started. The workflow of an exemplary production line typically includes a plurality of operating stations (nodes) at which workpieces (products) are assembled or manufactured. The nodes may be organized sequentially so that work at each subsequent node begins only after the operation at the previous node has been completed.
In step 104, one or more reference videos are generated and/or updated. The video, as described above, may be used to provide manufacturing/assembly instructions to a particular node (also referred to herein as an operation station). That is, each node in the workflow may be provided with a reference video that provides guidance on how to complete the steps in the manufacturing workflow that correspond to that particular node.
In step 106, each video generated in step 104 is deployed to a respective workstation/node. For example, a given workflow may include ten nodes, each node having a respective and different/unique reference video. In other embodiments, the number of videos may be less than the total number of nodes. Depending on the implementation, the reference video deployed at each workstation/node may be unique or may provide similar guidance/instructions. As discussed in further detail below, the content of the reference video may be dynamic and may be updated/enhanced over time.
In step 108, a continuous record of the action is captured at each workstation/node. The action data generated by the action record may describe the operator's interaction with the workpiece/component/tool at its node/workstation in the workflow. That is, the motion data captured at each node may represent one or more operator actions corresponding to a particular portion of the product assembly or manufacture, and may correspond to instructions provided by a reference video associated with that node. In some cases, the action capture may include the capture of video data, i.e., capturing a record of all or part of the operator's actions at the workstation. In other embodiments, the motion capture may include a recording of a 3D point cloud, e.g., recording the motion of one or more specific points in the field of view of the image capture device. The actions of the operator, as well as the properties of the component (e.g., component quality, tensile strength, number of defects) may be captured at each node/station in the workflow.
In step 110, process deviations may be computed, wherein the motion data captured for one or more stations in step 108 is analyzed to identify any deviations from a comparison model, e.g., a comparison model comprising (or representing) idealized motion curves for the respective station. As shown in FIG. 1, step 110 may utilize an AIPC deep learning model (step 112), which may be configured, for example, to identify/classify motion deviations from the comparison model and to make inferences about how the assembly or manufacturing process may be affected. The comparison may be performed at the level of each station and/or at the level of the overall process. The analysis may also take into account the component attributes at each station, or the deviation of a component from its baseline, and how a station's motion deviations affect the quality of the component.
The AIPC deep learning model invoked in step 112 may be based on a collection of various types of training data, which may include, for example, examples of ideal or quality-controlled assembly/manufacturing interactions for a given workstation/node. The AIPC deep learning model may also be enhanced (or adjusted) using data provided by the domain/industry information 115, data provided by customer feedback (step 111) on a particular product manufactured using the process 100, and data provided by feedback on quality control checks of a particular product manufactured using the process 100 (step 113). It should be appreciated that the AIPC deep learning model may be implemented using a variety of computing systems, including distributed hardware and/or software modules. For example, the AIPC deep learning model may be implemented using a distributed system including a plurality of image capture devices and a plurality of display devices deployed at the pipeline and coupled to one or more systems configured to implement various AI/machine learning models and/or classifiers.
Once deviations from the comparison model are detected/identified in step 110, the automatic adjustments of step 114 may be generated using the AIPC deep learning model 112. As discussed above, video adjustments may be directed at improving manufacturing/assembly quality at one or more stations in the workflow. For example, video adjustments may be applied to a given node/workstation that is known (or predicted) to produce errors, e.g., to change the instructions or guidance provided to the operator in a manner that reduces or corrects errors at their origin. In other embodiments, video adjustments may be applied downstream of the station where the error occurred, for example, to correct the error before the manufacturing workflow is complete. In further embodiments, once the workflow is complete, the entire workflow may be analyzed and adjustments may be made to one or more workstations in the workflow.
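A hypothetical sketch of that routing choice — correcting at the origin station versus at downstream stations — where the station numbering and the adjust() callback are assumptions for illustration:

    from typing import Callable

    def route_adjustment(error_station: int, num_stations: int,
                         adjust: Callable[[int], None], downstream: bool = False) -> None:
        if downstream and error_station < num_stations - 1:
            # Correct the error later in the workflow, before assembly completes.
            targets = range(error_station + 1, num_stations)
        else:
            # Change the guidance at the station where the error originated.
            targets = [error_station]
        for station in targets:
            adjust(station)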
In some embodiments, the adjustment is made in real time immediately after the error is detected. In other embodiments, the adjustment is made at fixed intervals or after the workflow is complete.
In some aspects, the automatic adjustment determined at step 114 may be summarized at step 117 and/or provided as a production quality report. For example, adjustments resulting from the analysis of motion deviations (step 110) may be used to generate one or more quality reports describing various quality aspects of the workpiece based on the identified deviations from the idealized model of the assembly/manufacturing process.
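For illustration only, a per-station deviation summary of the kind described might be reduced to a simple report structure (the field names are assumptions, not taken from the disclosure):

    def quality_report(station_deviations: dict) -> dict:
        # station_deviations: {station_id: deviation score from step 110}.
        worst = max(station_deviations, key=station_deviations.get)
        return {
            "stations": station_deviations,
            "mean_deviation": sum(station_deviations.values()) / len(station_deviations),
            "worst_station": worst,
        }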
FIG. 2 illustrates an exemplary process 200 for performing error detection analysis that may be used to facilitate assembly error correction in accordance with aspects of the present technique.
Beginning at step 210, a process for improving manufacturing/assembly using idealized visual guidance can be implemented. In step 215, video tracking of one or more assembly stations is performed. Video tracking may include recording a human operator at a given workstation/node. In some embodiments, video tracking may further include capturing component attributes at the given workstation/node.
In steps 220 through 224, a process is performed to analyze the recorded video from the assembly station. For example, in some embodiments, background extraction may be performed to isolate movements/components in the recorded video. In some aspects, once background extraction is complete, the processed video contains only motion/video data relating to the assembly operator (step 224) and the components used in the respective assembly step (step 220). In step 220, additional processing may be performed to isolate the part/assembly. As shown in the schematic diagram of process 200, step 220 may include additional processing operations, including anomaly detection (step 221), detection of surface variations (step 222), and part classification and/or quality scoring (step 223). It should be understood that any of the video processing steps may be performed using various signal and/or image processing techniques, including but not limited to one or more AI/machine learning algorithms and/or classifiers, e.g., to perform anomaly detection (step 221), detect surface variations (step 222), and/or perform scoring/classification (step 223).
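One common way to implement the background-extraction step is with a background subtractor such as OpenCV's MOG2; the sketch below is an assumption about implementation, not the method the disclosure prescribes, and the median-blur cleanup is an illustrative post-processing choice:

    import cv2

    def extract_foreground(video_path: str):
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        capture = cv2.VideoCapture(video_path)
        masks = []
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            mask = subtractor.apply(frame)   # foreground (operator/part) vs. background
            mask = cv2.medianBlur(mask, 5)   # suppress speckle noise
            masks.append(mask)
        capture.release()
        return masks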
After process steps 220 through 224 are complete, process 200 may proceed to step 226, where an action comparison is performed. The action comparison (step 226) may include a comparison of the processed assembly station video data relating to one or more station operators at one or more stations/nodes with the corresponding idealized video/motion data. Comparison of actions performed across multiple stations/nodes can be used to infer/predict changes in the quality of the final part/assembly.
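Because an operator's recording and the idealized recording are rarely time-aligned, one plausible comparison technique (an assumption here, not named by the disclosure) is dynamic time warping; a pure-Python sketch for 1-D feature sequences:

    import math

    def dtw_distance(a, b) -> float:
        # Classic O(len(a) * len(b)) dynamic-time-warping cost between two sequences.
        n, m = len(a), len(b)
        cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]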
In step 228, variance/quality classification of various parts/components may be performed. For example, parts/assemblies may be classified into different quality classes and/or may be disassembled or repaired according to their associated classifications/differences.
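Purely as an illustration of such binning — the class names and thresholds below are assumptions, not values from the disclosure:

    def classify_part(deviation: float) -> str:
        if deviation < 1.0:
            return "accept"
        if deviation < 3.0:
            return "rework"       # candidate for repair
        return "disassemble"      # outside repairable tolerance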
After determining the classifications/differences, the process 200 may proceed to step 230 where an analysis of the overall process/workflow is performed, e.g., based on the classifications/differences for each workstation/node determined in steps 226 and 228. By analyzing the entire workflow, the video may be automatically adjusted to account for detected deviations/defects as described above.
FIG. 3 illustrates an exemplary processing device that can be used to implement the disclosed technology. Processing device 300 includes a master Central Processing Unit (CPU) 362, interfaces 368, and a bus 315 (e.g., a PCI bus). When acting under the control of appropriate software or firmware, the CPU 362 is responsible for performing the various error detection, monitoring, and process adjustment steps of the disclosed technology. CPU 362 preferably accomplishes all these functions under the control of software, including an operating system and any appropriate application software. CPU 362 may include one or more processors 363, such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, the processor 363 is specially designed hardware for controlling the operations of the AIPC system 310. In a particular embodiment, a memory 361 (e.g., non-volatile RAM and/or ROM) also forms part of CPU 362. However, there are many different ways in which memory may be coupled to the system.
In some aspects, the processing device 300 may include, or may be coupled with, an image processing system 370. The image processing system 370 may include a variety of image capture devices, such as cameras that can monitor operator movements and generate motion data. For example, the image processing system 370 may be configured to capture video data and/or output/generate a 3D point cloud.
The interfaces 368 are typically provided as interface cards (sometimes referred to as "line cards"). Typically, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the router. Interfaces that may be provided include ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various ultra high speed interfaces may be provided, such as fast token ring interfaces, wireless interfaces, ethernet interfaces, gigabit ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI, and the like. In general, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some cases, volatile RAM. The independent processors may control such communications-intensive tasks as packet switching, media control, and management. By providing separate processors for the communications-intensive tasks, these interfaces allow the master microprocessor 362 to efficiently perform routing computations, network diagnostics, security functions, etc.
Although the system shown in FIG. 3 is a specific processing device of the present invention, it is by no means the only network device architecture on which the present invention can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., is often used. Further, other types of interfaces and media could also be used.
Regardless of the network device's configuration, one or more memories or memory modules (including memory 361) configured to store program instructions for the general-purpose network operations and the mechanisms for roaming, route optimization and routing functions described herein may be employed. For example, the program instructions may control the operation of an operating system and/or the operation of one or more application programs. The one or more memories may also be configured to store tables such as mobility binding, registration and association tables, and the like.
The logical operations of the various embodiments are implemented as: (1) a series of computer-implemented steps, operations, or processes running on programmable circuitry within a general-purpose computer; (2) a series of computer-implemented steps, operations, or processes running on special-purpose programmable circuitry; and/or (3) interconnected machine modules or program engines within programmable circuitry. System 300 may practice all or part of the described methods, may be part of the described systems, and/or may operate according to instructions in a non-transitory computer-readable storage medium. Such logical operations may be implemented as modules configured to control the processor 363 to perform particular functions in accordance with the programming of the module.
It should be understood that any particular order or hierarchy of steps in the processes disclosed is an illustration of exemplary methods. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged or only portions of the illustrated steps may be performed. Some steps may be performed simultaneously. For example, in some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more."
Phrases such as "aspect" do not imply that this aspect is essential to the subject technology or that this aspect applies to all configurations of the subject technology. The disclosure relating to an aspect may apply to all configurations, or may apply to one or more configurations. A phrase such as an aspect may refer to one or more aspects and vice versa. A phrase such as a "configuration" does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. The disclosure relating to a configuration may apply to all configurations, or may apply to one or more configurations. A phrase such as a configuration may refer to one or more configurations and vice versa.
The word "exemplary" is used herein to mean "serving as an example or illustration." Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs.
Disclosure statement
Statement 1: A method for optimizing a pipeline workflow, the method comprising: detecting an assembly error of a target object in a step of an assembly process of the target object; evaluating the target object and a nominal object in the step of the assembly process to obtain a comparison; determining, based on the comparison, a sequence of steps required to minimize a deviation between the target object and the nominal object; and adjusting the assembly instructions of the target object according to the sequence of steps.
Statement 2: the method of statement 1, wherein the target object is evaluated against the nominal object in the step of the assembly process.
Statement 3: The method of any of statements 1-2, wherein the target object is evaluated against a final configuration of the nominal object.
Statement 4: the method of any of statements 1-3, wherein the sequence of steps is determined using a machine learning model configured to minimize the deviation.
Statement 5: the method of any of statements 1-4, wherein the deviation is determined based on a similarity between a sequence of steps of completing an assembly process of the target object and another sequence of steps of completing an assembly process of the nominal object.
Statement 6: The method of any of statements 1-5, wherein the deviation is minimized using a Markov Decision Process (MDP) with a reward formulation.
Statement 7: the method of any of statements 1-6, wherein a sequence of steps to complete an assembly process of the target object is derived using a stochastic gradient descent method.
Statement 8: A system for optimizing a pipeline workflow, the system comprising: a plurality of image capture devices and an assembly instruction module, wherein each of the image capture devices is arranged at a different position to capture movement of an operator during assembly of a target object, and the assembly instruction module is configured to automatically modify guidance and instructions provided to the operator, wherein the assembly instruction module is coupled to the plurality of image capture devices, and wherein the assembly instruction module is configured to perform operations comprising: receiving, by the assembly instruction module, motion data from the plurality of image capture devices, wherein the motion data corresponds to a set of steps performed by the operator to assemble the target object; determining an error in the assembly of the target object based on the motion data and one of the set of steps; evaluating the target object and a nominal object in said one of the set of steps to obtain a comparison; determining, based on the comparison, a sequence of steps required to minimize a deviation between the target object and the nominal object; and adjusting the assembly instructions provided to the operator according to the sequence of steps.
Statement 9: the system of statement 8, wherein the motion data comprises a digital record of the operator's hand motion during assembly of the target object.
Statement 10: The system of any of statements 8-9, wherein the assembly instruction module is further configured to apply a stochastic gradient descent method to derive the sequence of steps.
Statement 11: The system of any of statements 8-10, wherein the assembly instruction module is further configured to determine the sequence of steps using a machine learning model, wherein the machine learning model is configured to minimize the deviation.
Statement 12: The system of any of statements 8-11, wherein the deviation is determined based on a similarity between a sequence of steps to complete the assembly of the target object and another sequence of steps to complete the assembly of the nominal object.
Statement 13: The system of any of statements 8-12, wherein the deviation is minimized using a Markov Decision Process (MDP) with a reward formulation.
Statement 14: The system of any of statements 8-13, wherein the assembly instruction module is further configured to: extract a set of images representing the assembly of the target object from the motion data; and evaluate the set of images to assess the operator's performance of the set of steps to assemble the target object.
Statement 15: A non-transitory computer-readable medium comprising instructions stored thereon which, when executed by one or more processors, cause the processors to perform operations comprising: detecting an error in the assembly of a target object in a step of an assembly process of the target object; evaluating the target object and a nominal object in the step of the assembly process to obtain a comparison; determining, based on the comparison, a sequence of steps required to minimize a deviation between the target object and the nominal object; and adjusting the assembly instructions of the target object based on the sequence of steps.
Statement 16: the non-transitory computer readable medium of statement 15, wherein the instructions are further configured to cause the processor to derive the sequence of steps using a stochastic gradient descent method.
Statement 17: The non-transitory computer-readable medium of any of statements 15-16, wherein the target object is evaluated against the nominal object in the step of the assembly process.
Statement 18: The non-transitory computer-readable medium of any of statements 15-17, wherein the target object is evaluated against a final configuration of the nominal object.
Statement 19: the non-transitory computer readable medium of any of statements 15-18, wherein the instructions are further configured to cause the processor to use a machine learning model configured to minimize the deviation to determine the sequence of steps.
Statement 20: The non-transitory computer-readable medium of any of statements 15-19, wherein the deviation is minimized using a Markov Decision Process (MDP) with a reward formulation.
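To make the MDP-with-rewards idea of statements 6 and 20 concrete, the following toy value-iteration sketch treats assembly stages as states, candidate corrective steps as actions, and rewards as negative deviation penalties; the tiny state space and all numbers are illustrative assumptions, not values from the disclosure:

    import numpy as np

    n_states, n_actions = 4, 2
    # transition[s, a] -> next state; reward[s, a] -> negative deviation penalty.
    transition = np.array([[1, 2], [2, 3], [3, 3], [3, 3]])
    reward = np.array([[-1.0, -0.2], [-0.5, -1.5], [-0.1, -0.1], [0.0, 0.0]])

    def best_corrective_steps(gamma: float = 0.9, iters: int = 100) -> np.ndarray:
        v = np.zeros(n_states)
        for _ in range(iters):
            q = reward + gamma * v[transition]   # Q(s, a)
            v = q.max(axis=1)
        return q.argmax(axis=1)                  # greedy corrective step per stage

    print(best_corrective_steps())  # -> array([1, 0, 0, 0]) for these toy numbers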

Claims (15)

1. A method for optimizing a pipeline workflow, the method comprising:
detecting an assembly error of a target object in a step of an assembly process of the target object;
evaluating the target object and a nominal object in the step of the assembly process to obtain a comparison;
determining, based on the comparison, a sequence of steps required to minimize a deviation between the target object and the nominal object; and
adjusting the assembly instructions of the target object according to the sequence of steps, thereby generating adjusted assembly instructions.
2. The method of claim 1, wherein the target object is evaluated against the nominal object in the step of the assembly process.
3. The method according to claim 1 or 2, wherein the target object is evaluated against a final configuration of the nominal object.
4. The method of any of claims 1-3, wherein the sequence of steps is determined using a machine learning model configured to minimize the deviation.
5. The method of claim 4, wherein the machine learning model is a Gated Recurrent Unit (GRU) model.
6. The method of claim 4, wherein the machine learning model is based on a Long Short Term Memory (LSTM) model.
7. The method of claim 4, wherein the machine learning model is trained using tracking information representative of an assembly process of the target object, wherein the tracking information is collected from a set of operators.
8. The method of any of claims 1 to 7, further comprising capturing motion data corresponding to the assembly process, wherein the motion data comprises a digital record of hand motion during assembly of the target object.
9. The method of claim 8, further comprising:
extracting a set of images representing an assembly of the target object from the motion data; and
evaluating the set of images to assess the operator's performance of the sequence of steps for assembling the target object.
10. The method of any of claims 1 to 9, wherein one or more machine learning models are used to detect errors in the assembly of the target object.
11. The method of any one of claims 1 to 10, wherein the adjusted assembly instructions are configured to provide instructions to an operator for minimizing deviation of the target object.
12. The method according to any one of claims 1 to 11, wherein the deviation is determined based on a similarity between a sequence of steps of completing the assembly process of the target object and another sequence of steps of completing the assembly process of the nominal object.
13. The method of any one of claims 1 to 12, wherein the deviation is minimized using a Markov Decision Process (MDP) with a reward formulation.
14. The method of any of claims 1-13, wherein the adjusted assembly instructions are translated using one or more Natural Language Processing (NLP) algorithms to provide the adjusted assembly instructions to an operator.
15. The method of any one of claims 1 to 14, wherein a sequence of steps to complete the assembly process of the target object is derived using a stochastic gradient descent method.
CN202080016336.0A 2019-04-19 2020-04-20 Assembly error correction for a flow line Pending CN113544604A (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US201962836192P 2019-04-19 2019-04-19
US62/836,192 2019-04-19
US16/587,366 US11156982B2 (en) 2019-02-28 2019-09-30 Dynamic training for assembly lines
US16/587,366 2019-09-30
US201962931448P 2019-11-06 2019-11-06
US62/931,448 2019-11-06
US201962932063P 2019-11-07 2019-11-07
US62/932,063 2019-11-07
PCT/US2020/029022 WO2020176908A1 (en) 2019-02-28 2020-04-20 Assembly error correction for assembly lines

Publications (1)

Publication Number Publication Date
CN113544604A true CN113544604A (en) 2021-10-22

Family

ID=78094438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080016336.0A Pending CN113544604A (en) 2019-04-19 2020-04-20 Assembly error correction for a flow line

Country Status (3)

Country Link
JP (2) JP7207790B2 (en)
KR (1) KR20220005434A (en)
CN (1) CN113544604A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024005068A1 (en) * 2022-06-30 2024-01-04 コニカミノルタ株式会社 Prediction device, prediction system, and prediction program
KR20240040951A (en) 2022-09-22 2024-03-29 (주)아이준 Optimizing method for manual assembly process


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004104576A (en) * 2002-09-11 2004-04-02 Mitsubishi Heavy Ind Ltd Wearable device for working, and device, method and program for remotely indicating working
JP2005250990A (en) * 2004-03-05 2005-09-15 Mitsubishi Electric Corp Operation support apparatus
JP4784752B2 (en) * 2006-06-30 2011-10-05 サクサ株式会社 Image processing device
JP6113631B2 (en) * 2013-11-18 2017-04-12 東芝三菱電機産業システム株式会社 Work confirmation system
JP2017091091A (en) * 2015-11-06 2017-05-25 三菱電機株式会社 Work information generation device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4056716A (en) * 1976-06-30 1977-11-01 International Business Machines Corporation Defect inspection of objects such as electronic circuits
US6757571B1 (en) * 2000-06-13 2004-06-29 Microsoft Corporation System and process for bootstrap initialization of vision-based tracking systems
JP2003167613A (en) * 2001-11-30 2003-06-13 Sharp Corp Operation management system and method and recording medium with its program for realizing the same method stored
US20090198464A1 (en) * 2008-01-31 2009-08-06 Caterpillar Inc. System and method for assembly inspection
CN108604393A (en) * 2016-03-04 2018-09-28 新日铁住金系统集成株式会社 Information processing system, information processing unit, information processing method and program
US20180033130A1 (en) * 2016-08-01 2018-02-01 Hitachi, Ltd. Action instruction apparatus
WO2018204410A1 (en) * 2017-05-04 2018-11-08 Minds Mechanical, Llc Metrology system for machine learning-based manufacturing error predictions
US10061300B1 (en) * 2017-09-29 2018-08-28 Xometry, Inc. Methods and apparatus for machine learning predictions and multi-objective optimization of manufacturing processes

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024065189A1 (en) * 2022-09-27 2024-04-04 Siemens Aktiengesellschaft Method, system, apparatus, electronic device, and storage medium for evaluating work task
CN117272425A (en) * 2023-11-22 2023-12-22 卡奥斯工业智能研究院(青岛)有限公司 Assembly method, assembly device, electronic equipment and storage medium
CN117272425B (en) * 2023-11-22 2024-04-09 卡奥斯工业智能研究院(青岛)有限公司 Assembly method, assembly device, electronic equipment and storage medium
CN117635605A (en) * 2024-01-23 2024-03-01 宁德时代新能源科技股份有限公司 Battery visual inspection confirmation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
JP2022522159A (en) 2022-04-14
JP2023040079A (en) 2023-03-22
KR20220005434A (en) 2022-01-13
JP7207790B2 (en) 2023-01-18

Similar Documents

Publication Publication Date Title
US11675330B2 (en) System and method for improving assembly line processes
US11703824B2 (en) Assembly error correction for assembly lines
CN113544604A (en) Assembly error correction for a flow line
US20230391016A1 (en) Systems, methods, and media for artificial intelligence process control in additive manufacturing
US10518357B2 (en) Machine learning device and robot system to learn processing order of laser processing robot and machine learning method thereof
Martinez-Cantin et al. Practical Bayesian optimization in the presence of outliers
CN111656373A (en) Training neural network model
CN110059528B (en) Inter-object relationship recognition apparatus, learning model, recognition method, and computer-readable medium
CN110648305A (en) Industrial image detection method, system and computer readable recording medium
WO2020176908A1 (en) Assembly error correction for assembly lines
Sandhu et al. A comparative analysis of conjugate gradient algorithms & PSO based neural network approaches for reusability evaluation of procedure based software systems
US20210311440A1 (en) Systems, Methods, and Media for Manufacturing Processes
CN110766086B (en) Method and device for fusing multiple classification models based on reinforcement learning model
Schmitz et al. Enabling rewards for reinforcement learning in laser beam welding processes through deep learning
TWI830791B (en) Method and system for optimizing workflow in an assembly line, and non-transitory computer-readable media
TWM592123U (en) Intelligent system for inferring system or product quality abnormality
TWI801820B (en) Systems and methods for manufacturing processes
Trinks et al. Image mining for real time quality assurance in rapid prototyping
EP4075221A1 (en) Simulation process of a manufacturing system
KR102636461B1 (en) Automated labeling method, device, and system for learning artificial intelligence models
EP4075330A1 (en) Monitoring process of a manufacturing system
Chernyshev et al. Digital Object Detection of Construction Site Based on Building Information Modeling and Artificial Intelligence Systems.
KR20230165997A (en) Method for Providing Time Series Data and Artificial Intelligence Optimized for Analysis/Prediction
Li Adversarial Learning-Assisted Data Analytics in Manufacturing and Healthcare Systems
CN117726217A (en) Cigarette label paper on-machine adaptability prediction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination