Method of monitoring the assembly of a product from a workpiece
Field of the Invention
This invention relates to assembly lines used for the manufacture of one or more products. The assembly lines may be manual, partly automated or fully automated. More particularly, the invention relates to computer-implemented methods for the real time, asynchronous monitoring and analysis of test measurements during the assembly of the product.
Background to the Invention
In many manufacturing processes it is necessary and advantageous to apply a test to the worked product to determine that the worked product has been made within the desired specification. Examples of such products include consumer electronics, such as video players and cameras, DVD players and computers. However it will be appreciated that the invention may be applied to the manufacture of any product in any manufacturing process requiring assembly of two or more parts, including packaging, pharmaceuticals, and consumer goods of all kinds.
Worked products which fail the test are either rejected and retested, meaning that they are subjected to a repeat test, or reworked, meaning that they are repaired and returned to the assembly line to be retested later. Those which pass are allowed to continue to the next stage of the manufacturing process. This is advantageous because it prevents products which are out of specification from being delivered. Furthermore, the earlier a fault can be detected the better, since subsequent operations are wasted if the worked product is in any event a reject. Thus it is advantageous to test the worked product at as many stages of the manufacturing process as required, both to avoid faulty products undergoing further operations unnecessarily and to identify the manufacturing stages at which the worked product is falling out of specification.
Numerous tests exist and typically these will be either optical or electrical, but may be more sophisticated, including for example visual, laser, ultrasonic, X-ray or AOI (Automatic Optical Inspection), or any other test based on any measurable property of the article being assembled.
Typically each testing station will result in a number of rejects, which are visible in a reject bin, and comparison of the number of rejects with the number of items which have passed through the test station successfully gives the yield at that test station, usually expressed as the proportion or percentage of items entering the test station which successfully exit therefrom. A yield of 90% therefore means that 10% of those items entering the test station were rejected.
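By way of illustration only, the yield calculation described above may be sketched as follows (Python is used purely for illustration; the function name is an assumption of this sketch and forms no part of the system itself):

```python
def yield_percent(entered: int, rejected: int) -> float:
    """Yield: the proportion of items entering the test station
    which successfully exit it, expressed as a percentage."""
    if entered == 0:
        raise ValueError("no items have entered the test station")
    return 100.0 * (entered - rejected) / entered
```

Thus a station at which 10 of 100 entering items are rejected reports a yield of 90%, in agreement with the example above.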
Typically the yield and other data are recorded over time and historical analysis of the test results to measure the performance of the previous manufacturing stages is undertaken.
The number of rejects can typically be automatically counted and the yield can be calculated and displayed at the test station. Thus a manager of the factory may inspect the test station periodically and use the information of the yield at a particular test station to take remedial action to improve the yield. This monitoring or inspection may take the form of viewing a particular readout from a test station or analyzing a historical yield profile, or may even be simply physically observing an increase in the reject rate.
All of these methods entail the disadvantage that there is a delay in seeing the test information. If a particular test shows a sudden increase in failures and a decrease in yield, the manager may happen to see this during a periodic inspection of the test results, or may be warned of it by an operator of the test station or of a proximate manufacturing stage. In either case a significant amount of time will have passed before the matter is brought to the manager's attention. During this time a significant amount of production may have been lost.
Similarly, historical yield profiles, which are useful in determining where improvements in the manufacturing process can be made, are produced only some time after the last test result has been recorded. During this delay significant production may be lost.
In the process industry, systems are known for analysing and displaying in real time the continuous output of analogue or digital data from sensors which monitor a production process. For example, US 4,718,025 to Minor and Matheny discloses a system for displaying the output of process sensors as a graphic representation in real time. This enables the conditions of the production process, such as the temperature and volume of a liquid in a boiler, to be continuously monitored, controlled and recorded.
In the assembly industry, it has similarly been proposed to monitor the output of an assembly line in real time so as to address the problem discussed above by identifying faults in the assembled product in time to save lost production.
However, in contrast to the process industry, monitoring of the manufacturing process is carried out in the assembly industry by means of repeated active testing of each assembled item. Each workpiece is subjected to one or more discrete, active tests at each testing stage, and each test generates a discrete test result.
Complex assembly processes can require a large number of tests, and the results of these tests must be collected and analysed in real time in order to identify faults as they emerge.
Various systems have therefore been developed for this purpose. For example WO 91/01528 to Intaq discloses a system for identifying faulty components or
assembly operations on an assembly line in real time. Manual and automatic testing stations are linked to a real time data capture and analysis system, and faults in components or assembly operations are identified by inspectors and recorded on interactive screens by means of light pens.
Due to the increasing sophistication of assembled products, and hence the increasing complexity of assembly operations, the test data which are generated are growing in volume and complexity. As systems such as those discussed above are developed to monitor these data so as to identify faults in the assembled product, testing systems are therefore coming to play an increasingly significant role in the assembly industry. The efficient operation of automatic and manual testing stations is therefore essential to the efficiency of the assembly process.
There is therefore an increasing need to find a way of monitoring the operation of the testing system itself, and it is accordingly the object of the present invention to provide an improved means of monitoring the results of tests carried out during product assembly.
According to the invention therefore there is provided a method of monitoring the assembly of a product from a workpiece at at least one test station comprising:
a. receiving a first test result from a first measurement at a first test station;
b. receiving at least one second test result from a corresponding second measurement;
c. communicating the first and second test results to a central computer processor;
d. processing the test results to calculate the value of a specific property of the test results, and
e. providing an output display of the value of the specific property in real-time.
In this specification, a specific property is defined as an aggregate property of the assembly process, derived from the aggregation of a plurality of discrete test results. A specific property therefore relates to the process of testing and manufacture, as distinct from the properties of the product which is being assembled and tested. However the present invention also provides for the nature of each test to be identified, enabling the production manager to correlate any change in a specific property with changes in the product under test.
The specific properties which may be calculated and displayed include yield, test station utilisation, retest or rework, average test time, average tested per hour, and failed to process values, as described hereafter. The value of any specific property may be calculated from a selected subset of test results, comprising for example the results of all tests carried out on a particular product. The invention also provides for the specification of system parameters such as bin size, no data timeout, and other parameters as discussed below, allowing the time period and data population size over which the yield or other specific property value is calculated to be varied by the user.
The invention further provides for the specific property values to be calculated for different levels of data aggregation. Preferably the test results from each of the test stations are aggregated to produce a combined specific property for a group of test stations. The group test specific properties may be aggregated to produce a line test specific property for a number of groups of test stations and the line test specific properties may be aggregated to produce a site test specific property for a number of lines of test stations. The site test specific properties may be aggregated to produce a multi-site specific property for a number of sites of test stations. In this way the invention makes possible the efficient and responsive management of complex and extensive testing systems and assembly operations. The lines or sites may be in mutually remote locations.
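By way of illustration only, the hierarchical aggregation described above may be sketched as follows. Raw pass/entry counts are summed at each level, so that the aggregate yield is weighted by throughput rather than being a simple average of percentages (all names here are illustrative assumptions of this sketch):

```python
from dataclasses import dataclass

@dataclass
class Counts:
    entered: int
    passed: int

def aggregate(counts):
    """Sum raw entered/passed counts from the level below, so the
    aggregate yield is throughput-weighted, not an average of yields."""
    return Counts(sum(c.entered for c in counts),
                  sum(c.passed for c in counts))

def yield_pct(c: Counts) -> float:
    return 100.0 * c.passed / c.entered

# Station counts roll up to a group; group counts roll up to a line,
# line counts to a site, and site counts to a multi-site figure.
stations = [Counts(entered=100, passed=90), Counts(entered=50, passed=40)]
group = aggregate(stations)
```

The same `aggregate` step is applied unchanged at each successive level of the hierarchy.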
Preferably the test results can be processed to calculate and display a breakdown of failed tests, with the most frequently failed tests displayed first. Preferably for each test station or number of test stations, a change in the visual indication of the yield occurs when the test results change compared with a previous test result by more than a predetermined threshold amount, which may be determined by the user.
Brief Description of the Drawings
The invention will best be understood from the claims when read in conjunction with the detailed description and drawings wherein:
Fig. 1 shows a flowchart representing the general arrangement of an assembly process.
Fig. 2 shows a diagrammatic representation of a computer network for communicating and processing test results.
Fig. 3 shows a flow diagram representing the flow of information in the network.
Figures 4 to 14 show respectively eleven user interfaces presenting information relating to specific properties of the assembly process and the products, at different levels of detail.
Figure 4 shows information relating to three lines.
Figure 5 shows information relating to one line.
Figure 6 shows information relating to one line as a time series.
Figure 7 shows information relating to a number of test stages in one line.
Figures 8 and 9 show information relating to a number of test stations in one test stage.
Figure 10 shows information relating to a number of test stations in one test stage as a time series.
Figure 11 shows information relating to a number of tests at a number of test stations in one test stage.
Figure 12 shows information relating to one test station.
Figure 13 shows information relating to a number of tests at one test station.
Figure 14 shows information relating to a number of tests at one test station as a time series.
Figures 15 to 18 show respectively a further four user interfaces allowing an operator to predefine system parameters.
Figure 19 shows a sixteenth user interface presenting information relating to failed tests.
Figure 20 shows a seventeenth user interface allowing individuals to be designated to receive automatic alarm calls in response to the value of a specific property falling outside a predefined limit.
Figure 21 shows an eighteenth user interface allowing information relating to the assembly process and the user interfaces to be entered into the computer network.
Figure 22 shows a nineteenth user interface allowing information relating to the status of different parts of the assembly process to be produced and presented.
Figure 23 shows a twentieth user interface wherein information is presented as a statistical distribution.
Referring to Figure 1, a flowchart shows the general arrangement of a typical assembly process acting on workpieces which during the process are assembled into products. In this specification the term workpiece shall be used to describe any partly or fully assembled product at any point in the manufacturing process which is subject to a test of any kind.
The manufacturing process in this embodiment is shown as comprising a company 1 carrying out assembly operations at two sites 2, 2', each site having a number of lines 3 for assembling one or more products, each line having a number of test stages 4 at which specific properties of the products are tested either manually or by automatic testing equipment (referred to hereinafter as "ATE"), each test stage comprising one or more test stations, each test station carrying out measurements associated with one or more tests. For simplicity, the test stages are shown in only one line. Each test comprises the measurement of one or more specific properties of the workpiece or of the process; the nature of the test will depend on the product being manufactured and will be understood by the person skilled in the art.
For example, optical or electrical tests may be carried out during the assembly of products such as video players and cameras, DVD players, computers and other consumer electronics items. Numerous other tests may be used, using techniques such as laser, ultrasonic, x-ray or AOI (Automated Optical Inspection) testing, and many different types of product may be tested - for example, packaging, pharmaceuticals, and consumer goods of all kinds.
Referring to Figures 2 and 3, a computer network 5 collects and processes the test results and provides output information, alarms and actions as discussed hereinafter. For simplicity, only a representative selection of the parts of the network is shown. Almost any general purpose digital computer - for example, a commercially available network server - can be adapted for use in the present system.
The results of manual tests are input into a manual data entry means 6, such as a box with pushbuttons, by the person performing the test. The manual data entry means and the ATEs 7 produce test station data representing the test results. Other information may be included in the test station data, such as the identity of the individual workpiece, the type of product being assembled, the time and date of the test and the identity of the testing station. The test results and other information may be processed by the test station and the test station data presented as test log files containing information aggregated over time and otherwise manipulated. The test station data from different test stations may also comprise information encoded according to the different standards of data encoding and communication adopted by the manufacturer of each test station. Once running in real time the system captures the test station data and passes them 8 to import directories for real time processing by a yield server computer 9 which processes the test station data in real time and aggregates them according to a program to produce output information. Data link 8 may be a conventional cable link, or any other convenient means such as the Internet, and yield server computer 9 may be located at the company or site, or remotely therefrom. For example, yield server computer 9 may be a remote data processing facility, linked to manual data entry means 6 and ATEs 7 via an Internet link 8.
The output information is stored and manipulated to produce both real time and time series information, which is communicated 10 to a number of output computers 11, and also used to trigger alarms and actions as discussed hereinafter. The output computers may be located remote from each other and from the site,
for example, in a manager's home, enabling the manager at all times to receive information relating to the assembly process. Again, data links 10 may be conventional cable links, Internet connections, or any other convenient means.
Referring to Figures 4 to 14, the output information is displayed on the screen of each output computer by means of a number of user interfaces which contain links to one another and commands facilitating the display of information in graphical, time series and other configurations. Successive figures show user interfaces displaying output information in real time at increasingly high levels of detail, corresponding respectively to a site, lines within a site, test stages within a line, test stations within a test stage, and tests within a test station.
Output information may also be processed in real time by means of statistical techniques and algorithms to produce, for example, the real time statistical distribution of test results shown in Figure 23. The specific property may be calculated from a subset of test results relating to a particular product under test. This information will help for example to identify which of a large number of altered components has caused a change in the value of a specific property.
It can be seen how by means of the invention the total yield can be monitored in real time by the user, who would typically be the production manager. The total yield can be investigated further by the user to investigate group data, that is site or line data, and further still to investigate individual test data. The user can thus take whatever remedial action is necessary. Thus from a single location, the entire manufacturing process can be monitored in real time. It will be appreciated that there may be more than one user interface so that a number of users may have access to the yield data for the manufacturing process. This may be useful if consultation with individuals with particular expertise is required before particular action is taken. Also it may be desirable to monitor the whole process from different locations depending on the time of day by means of a connection to an external network. For example the process may be monitored by managers
located in different parts of the world in different time zones so that the 24-hour management of the process can be maintained without the user working unsociable hours. The different sites may correspondingly be located in the different time zones, but this need not necessarily be the case. The user interface may also be located remotely at the home of the manager so that the system may be managed on an "on-call" basis without the requirement for the user/production manager to travel to the location of the site concerned.
The output information represents specific properties of the product and process, including the following:
1. Yield.
The yield at a test station is the proportion of the total number of workpieces entering the test station which pass the test, expressed as a percentage. The range of yield values for one test station is shown 121 in Figure 12, and the yield for each test station in a test stage is shown 91 in Figure 9. The product type being tested is shown for each test station at 94 in Figure 9. The yields for each test station are aggregated to produce the yield for each test stage, shown as percentages 71 in Figure 7. The yields for each test stage within a line are aggregated to produce the yield for the line, shown as a percentage 51 in Figure 5. The yields for each line may similarly be aggregated to produce the yield for the site. This interface is configured so that a click on the percentage display 51 will reveal the yield information for the test stages at the next level of detail. Clicking the Yield History button 52 will show the line yield as a time series 62 as shown in Figure 6. The yield may be calculated separately for each product type.
The yield indicators in each interface — for example, box 51 in Figure 5, boxes 101 and 102 in Figure 10, and boxes 131, 133, and 134 in Figure 13 — are continuously updated in real time.
The yield will also have associated with it a range within which one would normally expect the yield value for the particular process or part of the process to lie, failing which one would know there was a problem which needed remedial action. The range for each test station, stage and line can be set directly by the process manager using the windows 151, 152, 153 provided in the user interfaces shown in Figures 15 to 18.
Information representing the yield may alternatively be presented as the proportion of workpieces which pass or fail each of the tests which are performed on them, calculated and presented if desired by product type. This information is presented in Figure 14 as the most commonly failed tests at one test station during each period of production at the site. The information is aggregated to produce the most commonly failed tests at each test station in a test stage, shown in Figure 11.
Clicking the Compare Failures button 95 in Figure 9 will show the top n reasons for failure as shown in Figure 11. The number n can be determined by the user; in the embodiment shown the user has selected the top 5 reasons. Clicking the failure button 93 in Figure 9 will show the top reasons for failure at that test station as shown in Figure 13. The information in Figure 13 can be displayed for different product types by selecting the required product type from the menu 132. Clicking anywhere on the graph in Figure 13 gives the display shown in Figure 14.
It will be appreciated that at each test station there may be a number of different tests. At the test station of this embodiment the tests that result in the most failures are shown. In the example shown the top failure test is the test for Rx Acoustic Level. A yield display 131 shows the current yield percentage for the selected test station in real time, continuously updated as the test station performs the tests on the products being manufactured.
The population over which the most frequently occurring failures are calculated can be preset and reset by the user in order to achieve the most accurate results. Clicking on the graph itself will produce a time series plot of the top five failures, as shown in Figure 14.
The interfaces of Figures 4, 5, 7 and 9 include time series buttons 41, 52, 72, 92 for each of the yield displays, which when clicked will reveal the historic yield for that display over a previous time period, in the form of a graph of the percentage yield concerned over time, so that the user can see the evolution of the yield for the test station(s) concerned.
The yield, or other specific property values relating to one product type, test station, group, line or site may be displayed and compared with the corresponding specific property values relating to other product types, test stations, groups, lines or sites.
2. Test Station Utilisation.
The Test Station Utilisation of a given test station is the proportion of the test station's capacity for carrying out tests which is used in any given time period. The Test Station Utilisation for one test station is shown 122 in Figure 12, and for each test station in a test stage as 81 in Figure 8. Alternatively, this information may be presented graphically as a time series.
Similarly, the utilisation of other production equipment such as SMD, flash programmers and rework equipment may be calculated and presented.
3. Re-Test or Re-Work Value.
The Re-Test Value is that proportion of the workpieces entering a test station which pass the tests at that test station after passing through the test station more
than once in a predetermined time period, which time period is defined by the user. The Re-Test Value for one test station is shown 123 in Figure 12, and for each test station in a test stage as 82, 82' in Figure 8.
The Re-Work Value is that proportion of the workpieces entering a test station which pass the tests at that test station after passing through the test station more than once outside the predetermined time period.
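By way of illustration only, the distinction between the Re-Test and Re-Work classifications may be sketched as follows (the one-hour default window and the function name are assumptions of this sketch; in the system the time period is defined by the user):

```python
from datetime import datetime, timedelta

def classify_repeat_pass(first_entry: datetime, passed_at: datetime,
                         window: timedelta = timedelta(hours=1)) -> str:
    """Classify a workpiece that passes only after more than one trip
    through the test station: a Re-Test if the repeat pass falls within
    the user-defined time period, a Re-Work otherwise."""
    return "re-test" if passed_at - first_entry <= window else "re-work"
```

Each classified workpiece then contributes to the corresponding proportion for its test station.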
4. Average Test Time
The Average Test Time is the time taken to test a predefined population of workpieces at a test station, divided by the size of the population. The Average Test Time for one test station is shown at 124 in Figure 12.
5. Average Tested per Hour
The Average Tested per Hour is the number of workpieces tested at a test station in a given one-hour period, shown at 125 in Figure 12.
6. Failed to Process
The Failed to Process figure is the number of test results from a given population which the central computer processor failed to process, and is shown for a single test station both as a total and as a percentage at 126 in Figure 12. This figure provides a means of indicating problems such as electrical faults in data carrying cables which might otherwise go unrecognised.
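By way of illustration only, the Average Test Time and Failed to Process figures may be sketched as follows (the names are illustrative, and a `None` entry stands in here for a record the central processor could not interpret):

```python
def average_test_time(total_test_seconds: float, population: int) -> float:
    """Time taken to test a predefined population of workpieces,
    divided by the size of the population."""
    return total_test_seconds / population

def failed_to_process(results: list) -> tuple:
    """Return the Failed to Process figure both as a total and as a
    percentage of the given population of test results."""
    bad = sum(1 for r in results if r is None)
    return bad, 100.0 * bad / len(results)
```

A persistently non-zero Failed to Process figure would point to problems such as the faulty data-carrying cables mentioned above.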
The performance of each test station may be monitored by testing a sample workpiece having precisely determined properties, and comparing the test station data with a previously stored sample of values relating to that workpiece.
Referring to Figures 15 to 18, configuration interfaces enable an operator to configure the system and to predefine system parameters which determine the way in which the test results are processed, output information is presented, and alarms and actions are produced as discussed hereinafter.
System parameters include the following:
1. Bin size.
Fig. 15 shows a window 154 wherein the bin size may be specified, being the number of test results or aggregated test results forming a sample population over which a further aggregated test result is computed.
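By way of illustration only, computation of a specific property over a bin of the most recent results may be sketched as follows (the class name is an assumption of this sketch; the bin size corresponds to the value set in window 154):

```python
from collections import deque

class BinnedYield:
    """Yield computed over a sample population of the most recent
    `bin_size` test results; older results fall out of the bin."""
    def __init__(self, bin_size: int):
        self.results = deque(maxlen=bin_size)

    def record(self, passed: bool) -> None:
        self.results.append(passed)

    def value(self) -> float:
        return 100.0 * sum(self.results) / len(self.results)
```

Varying the bin size thus varies the data population over which the displayed value is calculated, as described above.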
2. No Data Timeout
Window 172 in Figure 17 allows the maximum time interval between consecutive test station data transmissions to be set; if this interval is exceeded, the yield display (91 in Figure 9) for that test station changes colour to indicate that the test station is not working.
3. Test Station Exclusion
The data from a given test station or group of test stations may be excluded from the calculation of aggregated specific properties by placing a tick in box 173 of Figure 17 or box 183 of Figure 18. This facilitates, for example, the use of specified test stations for temporary operations which are not to be included in the overall calculations.
4. Alert values.
Fig. 17 shows a user interface providing windows 174, 174' wherein there may be defined a given proportion of failing test results out of a given population, which will result in an alert condition. Any aspect of the output information may be configured so as to trigger an alert. The alert is triggered by the value of a specific property of the process or product falling outside a predefined limit or range.
The alert condition may be indicated by a change in colour of a particular part of a user interface, indicating the part of the process causing the alert. Figure 21 shows a window wherein a display colour may be specified which will indicate an alert. A user interface may also be configured to automatically display information indicating an alert as soon as an alert occurs.
Alerts may be triggered by yield variations outside the threshold values set by the operator in windows 155, 155' in Figure 15, 161, 161' in Figure 16, 171, 171' in Figure 17 and 181, 181' in Figure 18. For example, where the yield rises above the threshold value, the yield display may turn green; where the yield falls below the threshold value, the yield display may turn red. A range of threshold values provides for a range of alert responses.
Clicking the Failure button 93 in Figure 9 or 73 in Figure 7 gives a top failures screen as shown in Figure 13, which shows all the failures at a given test station over a given population. A given frequency of failure in any test may be configured to trigger an alert response by setting the triggering number of failures and the size of the population within which they must occur in order to trigger an alert (for example, 5 failures out of any 10 consecutive workpieces passing through any one test) in windows 182, 182' in Figure 18.
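By way of illustration only, the failure-count trigger described above may be sketched as follows (the names are assumptions of this sketch; the triggering number and population correspond to the values set in windows 182, 182'):

```python
def alert_triggered(history, n_failures: int, population: int) -> bool:
    """True when at least `n_failures` of the last `population`
    workpieces failed the test, e.g. 5 failures out of any 10
    consecutive workpieces passing through one test."""
    window = list(history)[-population:]
    return sum(1 for passed in window if not passed) >= n_failures
```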
Alerts may also be triggered by a comparison facility. For example, the value of a given specific property at one test stage may be compared with the value of the given specific property at a second test stage, and an alert triggered if there is a difference of for example more than 5% between the two values. Similarly, the
yield for a first line may be compared with the yield for a second line, and an alarm triggered where the two values differ by more than a predefined percentage.
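By way of illustration only, the comparison facility may be sketched as follows (the 5% default threshold follows the example above; the function name is an assumption of this sketch):

```python
def comparison_alert(value_a: float, value_b: float,
                     threshold_pct: float = 5.0) -> bool:
    """Trigger when two specific-property values (e.g. the yields of
    two lines or two test stages) differ by more than a predefined
    percentage-point threshold."""
    return abs(value_a - value_b) > threshold_pct
```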
Referring to Figure 6, a given range of values for a given specific property may be configured to trigger a visual indicator such as a product change label 61, which appears on the time series information to indicate the point at which the predefined value change occurred. The content of the label is determined by the user when defining the values by which it is triggered. In this way a change such as, for example, the use of a new component in the assembly process, may be identified in a test by the range of values of a specific property associated with that component, and the point at which the new component was introduced will then be clearly indicated on all relevant time series information. Alternatively a change in the assembly process or in an ambient environmental condition, for example, humidity, may be similarly identified and labelled.
An alert may also trigger an alarm, which may be an audible or visible warning device. Alternatively the computer network may be configured to carry out an action, such as intervening in the assembly process. Figure 20 shows a user interface wherein contact information may be specified enabling the computer network to telephone, email or otherwise contact a designated person in the event of an alert.
An alert may also trigger the production of a management report showing details of the situation triggering the alert. Alternatively management reports may be generated on demand, or automatically in predefined circumstances, such as at a particular time of the day.
The specific properties measured may include non product related data, including environmental conditions such as, for example, measurements of ambient temperature, humidity, or external RF interference, and these data may be
correlated with changes in the measured values of the specific properties of the product or products under test.
In a further embodiment of a further aspect of the invention, Figure 22 shows a further user interface indicating the status of various parts of the assembly process, including the test stations. An indicator 220 for each part shows whether it is functioning or not. The time series buttons enable an operator to demand and view a time series presentation of the status information, presenting the operator with a historical record of the functioning of each part of the assembly process. The operator may thus instantly assess the status and downtime of each critical stage in the assembly process, and immediately identify any problems as soon as they occur.
Several embodiments of the invention have now been described in detail. It is to be noted, however, that these descriptions of specific embodiments are merely illustrative of the principles underlying the inventive concept. It is contemplated that various modifications of the disclosed embodiments, as well as other embodiments of the invention will, without departing from the spirit and scope of the invention, be apparent to persons skilled in the art.