EP3883715A1 - Monitoring a laser machining process using deep convolutional neural networks - Google Patents
Monitoring a laser machining process using deep convolutional neural networks

Info
- Publication number
- EP3883715A1 (application EP19789615.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- laser
- machining
- tensor
- laser processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000003754 machining Methods 0.000 title claims abstract description 147
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 68
- 238000012544 monitoring process Methods 0.000 title claims abstract description 32
- 238000012545 processing Methods 0.000 claims description 194
- 238000000034 method Methods 0.000 claims description 120
- 230000008569 process Effects 0.000 claims description 106
- 238000012546 transfer Methods 0.000 claims description 42
- 238000012549 training Methods 0.000 claims description 20
- 238000012360 testing method Methods 0.000 claims description 9
- 230000005855 radiation Effects 0.000 claims description 5
- 230000001678 irradiating effect Effects 0.000 claims description 2
- 230000005540 biological transmission Effects 0.000 abstract description 3
- 230000006870 function Effects 0.000 description 29
- 230000000875 corresponding effect Effects 0.000 description 15
- 238000013527 convolutional neural network Methods 0.000 description 11
- 238000001514 detection method Methods 0.000 description 10
- 238000010586 diagram Methods 0.000 description 9
- 239000000155 melt Substances 0.000 description 9
- 239000000463 material Substances 0.000 description 8
- 238000003466 welding Methods 0.000 description 8
- 230000008859 change Effects 0.000 description 7
- 238000010606 normalization Methods 0.000 description 6
- 238000004458 analytical method Methods 0.000 description 5
- 230000005484 gravity Effects 0.000 description 5
- 238000010978 in-process monitoring Methods 0.000 description 5
- 238000013507 mapping Methods 0.000 description 5
- 238000005476 soldering Methods 0.000 description 5
- 230000004913 activation Effects 0.000 description 4
- 230000008901 benefit Effects 0.000 description 4
- 238000005259 measurement Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 238000012014 optical coherence tomography Methods 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 230000015572 biosynthetic process Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 230000007547 defect Effects 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 238000003698 laser cutting Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 239000011148 porous material Substances 0.000 description 2
- 238000001454 recorded image Methods 0.000 description 2
- 229910000679 solder Inorganic materials 0.000 description 2
- 230000006399 behavior Effects 0.000 description 1
- 238000009529 body temperature measurement Methods 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 238000007635 classification algorithm Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000011109 contamination Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000005520 cutting process Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000002844 melting Methods 0.000 description 1
- 230000008018 melting Effects 0.000 description 1
- 230000000877 morphologic effect Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000035515 penetration Effects 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000011144 upstream manufacturing Methods 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K26/00—Working by laser beam, e.g. welding, cutting or boring
- B23K26/02—Positioning or observing the workpiece, e.g. with respect to the point of impact; Aligning, aiming or focusing the laser beam
- B23K26/03—Observing, e.g. monitoring, the workpiece
- B23K26/032—Observing, e.g. monitoring, the workpiece using optical means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K1/00—Soldering, e.g. brazing, or unsoldering
- B23K1/005—Soldering by means of radiant energy
- B23K1/0056—Soldering by means of radiant energy soldering by means of beams, e.g. lasers, E.B.
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K26/00—Working by laser beam, e.g. welding, cutting or boring
- B23K26/20—Bonding
- B23K26/21—Bonding by welding
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K26/00—Working by laser beam, e.g. welding, cutting or boring
- B23K26/36—Removing material
- B23K26/38—Removing material by boring or cutting
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K31/00—Processes relevant to this subclass, specially adapted for particular articles or purposes, but not covered by only one of the preceding main groups
- B23K31/006—Processes relevant to this subclass, specially adapted for particular articles or purposes, but not covered by only one of the preceding main groups relating to using of neural networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K31/00—Processes relevant to this subclass, specially adapted for particular articles or purposes, but not covered by only one of the preceding main groups
- B23K31/12—Processes relevant to this subclass, specially adapted for particular articles or purposes, but not covered by only one of the preceding main groups relating to investigating the properties, e.g. the weldability, of materials
- B23K31/125—Weld quality monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
Definitions
- the present disclosure relates to a system for monitoring a laser machining process for machining a workpiece, and to a laser machining system for machining a workpiece by means of a laser beam which comprises such a monitoring system. Furthermore, the present disclosure relates to a method for monitoring a laser machining process for machining a workpiece.
- in a processing system for machining a workpiece by means of a laser beam, the laser beam emerging from a laser light source or from the end of a laser guide fiber is focused or bundled onto the workpiece to be machined with the aid of beam guidance and focusing optics.
- the processing can include laser cutting, soldering or welding, for example.
- the laser processing system can comprise, for example, a laser processing head.
- the in-process monitoring of a laser processing process typically takes place in that certain signals or parameters of the laser processing process, such as temperature values, the plasma radiation, the laser power of the laser processing head, the amount and type of back-scattered laser power, etc., are recorded and assessed independently of one another. For example, measured values of a signal or parameter are continuously measured or recorded over a certain period of time in order to obtain a signal curve corresponding to the parameter.
- furthermore, the geometry of the vapor capillary (also called the keyhole) and of the melt pool surrounding it are monitored by means of image processing and evaluation during the laser processing process. This is followed by processing and classification of the individual signals, for which a multitude of setting values for filter processes, median or average calculations, envelope curves, threshold values, etc. must be set by an experienced specialist.
- the signal is checked to determine whether the signal fulfills certain error criteria. For example, it is checked whether the signal falls below or exceeds predefined threshold values.
- the individual signals are compared with predefined reference curves around which so-called envelope curves have been placed.
- Another criterion can be, for example, the integral of the signal over the envelope.
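The conventional envelope-curve check described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; all function names, the envelope values and the threshold are assumptions:

```python
import numpy as np

def envelope_excess(signal, upper_env, lower_env, dt=1.0):
    # integral of the signal excursion outside the envelope band
    above = np.clip(signal - upper_env, 0.0, None)
    below = np.clip(lower_env - signal, 0.0, None)
    return float(np.sum(above + below) * dt)

def classify_signal(signal, upper_env, lower_env, threshold, dt=1.0):
    # classified as "good" while the integral over the envelope
    # stays below the parameterized threshold
    if envelope_excess(signal, upper_env, lower_env, dt) < threshold:
        return "good"
    return "faulty"
```

Each signal is checked separately against its own envelope in this scheme, which is exactly the limitation the patent addresses.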
- if an error criterion is fulfilled, an error is output by the in-process monitoring, i.e. the in-process monitoring generates a notification that a processing error has occurred.
- the classification of the signals and the monitoring of the keyhole and weld pool geometry thus describe the quality of the laser processing process. Based on the classification of the signal or parameter profiles and the monitoring of the keyhole and melt pool geometry, machining errors are detected and classified, and depending on this the machined workpiece is marked or classified, for example, as "good" (i.e. suitable for further processing or sale) or "bad" (i.e. as scrap).
- the control parameters of the process can be influenced during the ongoing laser processing process by monitoring the signals or parameters or the geometry of the keyhole and weld pool geometry. The laser processing process can thus be controlled.
- both the signal processing and the classification of the signals take place separately for each signal or parameter, ie independently of the other signals or parameters.
- the definition of the parameters both for the signal processing and for the classification of the signals, for example the formation of the envelope curves, etc., as well as the image processing must therefore be carried out by specialists.
- the laser processing system is therefore only monitored on the basis of individual signal or parameter profiles and the monitoring of the keyhole and weld pool geometry. Monitoring taking into account all signals or parameters of the laser processing process and with simultaneous inclusion of the keyhole and weld pool geometry is therefore not carried out.
- Fig. 3A shows an exemplary signal which represents the plasma radiation of a laser machining process, for example a welding process, and the signal of a temperature of the laser machining process within the respective envelopes. Both signals lie within the envelopes, so the weld is classified as correct, since the integral of each signal over its envelope is less than a defined threshold.
- in Fig. 3B, the signals are clearly above the respective envelopes. If the threshold value for the integral over the envelope curve is parameterized accordingly, the signal is classified as faulty and a machining error is determined.
- Fig. 3C shows courses of three different signals. Each course in itself is classified as proper; in fact, however, there is a faulty weld. To recognize it, the courses of several signals would have to be considered together and, if necessary, assessed by a specialist.
- the invention is based on the idea that the determination of the machining result of the laser machining process, in particular the detection of machining errors and the determination or characterization of the machining area, is carried out using a deep neural network whose input data are current sensor data, control data and/or image data of the laser machining process, preferably as raw data.
- a system for monitoring a laser machining process for machining a workpiece comprises: a computing unit which is set up to determine an input tensor based on current data of the laser processing process and to determine, by means of a transfer function, an output tensor based on this input tensor, the output tensor containing information about a current processing result, the transfer function between the input tensor and the output tensor being formed by a trained neural network.
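The mapping of an input tensor to an output tensor by a trained network can be sketched minimally as below. This is a toy dense network with random stand-in weights; the shapes, the single hidden layer and the interpretation of the output as error-class probabilities are assumptions, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical shapes: m channels x 512 samples in, a small
# output tensor of machining-error-class probabilities out
m, n_samples, n_out = 4, 512, 3

# stand-ins for the *trained* weights forming the transfer
# function (random here; a real system would load them)
W1 = rng.standard_normal((64, m * n_samples)) * 0.01
b1 = np.zeros(64)
W2 = rng.standard_normal((n_out, 64)) * 0.01
b2 = np.zeros(n_out)

def transfer_function(input_tensor):
    """Map an input tensor of current process data to an output tensor."""
    x = input_tensor.reshape(-1)          # flatten the m x 512 tensor
    h = np.maximum(W1 @ x + b1, 0.0)      # hidden layer with ReLU
    z = W2 @ h + b2
    p = np.exp(z - z.max())               # softmax: one probability
    return p / p.sum()                    # per error class

output_tensor = transfer_function(rng.standard_normal((m, n_samples)))
```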
- the machining result can include information about a machining error and/or a machining area of the workpiece.
- the system is able to independently and directly determine the processing result of the laser processing process. For example, it can be determined whether there is a machining error in a workpiece machined by the laser machining system. In addition, it can be determined whether a machining area of the workpiece has predetermined features or a predetermined geometry, for example whether a keyhole has been formed or what extent a weld pool has. Based on this, parameters of the laser processing process can be set in order to avoid further errors.
- the system for monitoring can thus be used to control a laser processing process or a laser processing system.
- the use of a neural network which forms the transfer function, has the advantage that the system can independently recognize whether and which processing errors are present. Accordingly, it is no longer necessary for the acquired sensor data to be preprocessed in order to be accessible for error detection. Furthermore, it is not necessary to define error criteria that characterize the processing quality or indicate any processing errors. It is also not necessary to specify or adapt a parameterization of the error criteria. This simplifies the monitoring of a laser processing process. The steps mentioned do not have to be carried out or accompanied by experts in laser processing.
- the system for monitoring a laser machining process in accordance with the aspects disclosed herein thus performs the detection of machining errors and the determination of the keyhole and weld pool geometry independently, i.e. in an automated manner, and can easily be adapted.
- the output tensor can therefore contain information about a current processing result of the current processing area, for example about a condition of the processing area itself, such as an extent of the processing area, the presence of a so-called keyhole or vapor capillary, the presence of a weld pool, a position and/or depth of the keyhole within the weld pool, an extent or shape of the weld pool, etc.
- the system may also be able to recognize a processing error and indicate its type.
- the output tensor can contain, for example, at least one of the following information: presence of at least one machining error, the type of machining error, probability of a machining error of a specific type, position of the machining error on a surface of a machined workpiece.
- the type of machining error can be at least one of the following: pore, hole, incomplete weld-through of the workpiece, lack of fusion (a so-called "false friend"), spatter or gap.
- the computing unit can therefore be set up to determine the output tensor for the current processing area of the laser processing process while the laser processing process is still being carried out.
- the laser processing process can be monitored in real time through the direct determination of the machining result.
- the computing unit can be set up to form the output tensor in real time and to output control data to a laser processing system performing the laser processing process.
- the output tensor can contain the information as to whether machining a workpiece is good or bad. Based on this information, the laser machining process can be controlled accordingly, for example by adapting process parameters. For example, a laser power can be increased or reduced, the focus position of the laser can be changed, or a distance between the processing head of a laser processing system and the workpiece can be changed.
- the transfer function between the input tensor and the output tensor is formed by a trained (learned) neural network.
- the computing unit can comprise a neural network.
- the neural network can have been trained by error feedback or back propagation.
- the neural network can be a trained deep neural network, for example a trained deep convolutional neural network (CNN).
- the convolutional network can have between 10 and 40 convolutional layers, preferably 34 convolutional layers.
- the convolutional network can have at least one so-called "fully connected" layer.
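A convolutional network of the kind named above (stacked convolutional layers followed by a fully connected layer) can be sketched with plain NumPy. Only four layers stand in for the 10-40 (e.g. 34) layers, and the channel counts, kernel size and stride are assumptions:

```python
import numpy as np

def conv1d(x, kernels, stride=2):
    # minimal "valid" 1D convolution with ReLU; x: (in_ch, length),
    # kernels: (out_ch, in_ch, k) -- illustrative, not optimized
    out_ch, in_ch, k = kernels.shape
    length = (x.shape[1] - k) // stride + 1
    out = np.zeros((out_ch, length))
    for o in range(out_ch):
        for i in range(length):
            out[o, i] = np.sum(kernels[o] * x[:, i * stride:i * stride + k])
    return np.maximum(out, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 512))     # e.g. m = 4 channels, 512 samples

# stack of convolutional layers (random stand-in kernels)
for _ in range(4):
    x = conv1d(x, rng.standard_normal((8, x.shape[0], 3)) * 0.1)

features = x.reshape(-1)              # flattened feature map
W_fc = rng.standard_normal((2, features.size)) * 0.01
logits = W_fc @ features              # the "fully connected" layer
```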
- the neural network can be set up for transfer learning.
- the neural network can be adaptable to changed requirements of a changed laser processing process.
- the computing unit can in particular be set up to adapt the neural network to a changed laser processing process by means of transfer learning, for example based on training data.
- the training data can include test data of the changed laser processing process for determining a corresponding input tensor and a predetermined output tensor which is assigned to the test data and which contains information about a corresponding, previously determined processing result of the changed laser processing process.
- the processing result can contain information about e.g. machining errors identified by an expert.
- the training data can comprise several sets of such test data and associated output tensors.
- the test data can be based on values of a sensor parameter that were detected by at least one sensor unit during a previous laser machining process and / or on values of a control parameter that were used during a previous laser machining process.
- the neural network that forms the transfer function can be adapted to a changed situation or a changed laser processing process.
- the transfer function is modified for this.
- the changed situation can consist, for example, that the workpieces to be machined have different materials, different degrees of contamination and / or thicknesses, or that the parameters of the laser machining change.
- a training data set used for the training or learning of the neural network, or a reduced training data set, can be supplemented with new examples.
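The transfer-learning adaptation described above can be sketched by freezing the already-trained part of the network and re-training only the final layer on examples of the changed process. All shapes, the single frozen layer and the training loop are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# frozen feature extractor stands in for the already-trained network
W_frozen = rng.standard_normal((16, 64)) * 0.1

def features(x):
    return np.maximum(W_frozen @ x, 0.0)

# test data of the changed process with expert-assigned output labels
X = rng.standard_normal((20, 64))
y = rng.integers(0, 2, size=20)

W_head = np.zeros((2, 16))
for _ in range(200):                  # gradient descent on the head only
    for xi, yi in zip(X, y):
        f = features(xi)
        z = W_head @ f
        p = np.exp(z - z.max())
        p /= p.sum()
        # softmax cross-entropy gradient w.r.t. the head weights
        W_head -= 0.1 * np.outer(p - np.eye(2)[yi], f)
```

Because only the head is updated, far fewer examples of the changed process are needed than for training the whole network from scratch.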
- the input tensor can include or consist of current data from the laser processing process as raw data. Accordingly, the current data need not be processed before the input tensor is formed. A data processing step upstream of the formation of the input tensor can thus be omitted.
- the neural network therefore determines the output tensor directly on the basis of the raw data.
- the computing unit can be set up to collect a large number of current data of the processing process, which correspond to the same current point in time, in the input tensor and to map them together to the output tensor by means of the transfer function.
- the simultaneous processing of all relevant current data of the laser processing process enables the processing result to be determined more reliably and more quickly. This enables the laser processing process to be monitored more reliably and more precisely.
- the input tensor can include current data of the laser processing process, which include acquired sensor data and / or control data, which for example comprise 512 samples, each sample being assigned to a point in time.
- the sensor data and control data are also referred to below as process data.
- the input tensor is formed from the current data by placing a window over 512 samples every 256 samples. This ensures that the samples of two successively formed input tensors overlap.
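The overlapping windowing described above (a 512-sample window advanced every 256 samples, i.e. 50 % overlap) can be sketched as follows; the channel count and stream length are illustrative:

```python
import numpy as np

def make_input_tensors(process_data, window=512, hop=256):
    # slice the stream of process data (m channels x T samples) into
    # overlapping m x window input tensors, hopping every `hop` samples
    m, T = process_data.shape
    return [process_data[:, s:s + window]
            for s in range(0, T - window + 1, hop)]

stream = np.zeros((4, 2048))          # e.g. m = 4 data types, 2048 samples
tensors = make_input_tensors(stream)  # consecutive tensors overlap by 50 %
```

At a sampling rate such that 512 samples span 5.12 ms, a new tensor would be formed every 2.56 ms, matching the figures quoted below.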
- An image can be recorded for each sample, which can be assigned to the respective sample of the sensor data and / or control data via the time of the recording.
- each input tensor can accordingly contain data such as sensor data, image data and/or control data of the processing process which correspond to the respective point in time.
- that is, the input tensor generated in this way can include, for a given point in time during the execution of the laser processing process, the last n acquired sensor data and/or the last n acquired image data and/or the last n used control data.
- here, n ≥ 1.
- for example, n = 512, and a first input tensor comprises current sensor and/or control data (in other words, current process data), while a second input tensor comprises current image data.
- the first input tensor then comprises the last 512 samples or values of the respective sensor data, control data and/or image data.
- a data record of dimension m × 512 is thus generated every 5.12 ms.
- here, m stands for the number of different types of data comprised by the (recorded) sensor data and the (received or used) control data.
- the first input tensor of dimension m × 512 is formed from these data records every 2.56 ms.
- a second input tensor of the image data is generated. For example, an image with 512 × 512 pixels corresponding to an input tensor of the current process data is acquired. Accordingly, in this case the correspondingly generated input tensor of the image data has the dimension 512 × 512.
- the current sensor data can include one or more temperatures, a plasma radiation, intensities of reflected or backscattered laser light at different wavelengths, a keyhole depth and / or a distance from a laser processing head executing the laser processing process to the workpiece.
- the control data can include an output power of a laser on the laser processing head, a focus position, a focus diameter, a position of the laser processing head, a processing speed and / or a path signal.
- the image data can include an image of a surface of the workpiece, for example an image of a machining area of the workpiece.
- the processing area can include a weld pool and / or a keyhole.
- the path signal can be a control signal of a laser processing system which carries out the laser processing process and which controls a movement of a laser processing head relative to the workpiece.
- a machining error that has occurred can, for example, be quickly and easily localized on the workpiece, since it is known at what point in time which area of the workpiece is or has been machined by the laser machining system.
- the system can thus be able to indicate the point in time during the laser processing process at which an error occurred.
- the system can calculate the point in time solely on the basis of the known processing speed, a known point in time as a defined starting point, and the temporal assignment of the input tensors. The temporal assignment of the input tensors results from the generation rate of the input tensors and the number of input tensors generated since the starting point.
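The temporal and spatial localization of an error described above reduces to simple arithmetic on the tensor generation rate and the running tensor index; the constant-speed assumption and all names are illustrative:

```python
def tensor_time(start_time_s, tensor_index, tensor_rate_hz):
    # point in time of a flagged input tensor, from a defined starting
    # point, the generation rate and the tensor's running index
    return start_time_s + tensor_index / tensor_rate_hz

def error_position_mm(start_time_s, tensor_index, tensor_rate_hz, speed_mm_s):
    # position along the machining path at that time, assuming a
    # known, constant processing speed (illustrative simplification)
    t = tensor_time(start_time_s, tensor_index, tensor_rate_hz)
    return speed_mm_s * (t - start_time_s)
```

For example, with one tensor every 2.56 ms (rate ≈ 390.6 Hz), the 100th tensor corresponds to 0.256 s after the start, i.e. 25.6 mm along the seam at 100 mm/s.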
- the system can further comprise at least one sensor unit for acquiring current sensor data of the laser processing process during the laser processing process.
- the sensor data acquired by the sensor unit thus represent values of a parameter detected or measured by the sensor unit, for example a physical parameter such as a temperature.
- the at least one sensor unit can comprise a temperature sensor, a light sensor or a plasma sensor.
- the sensor unit can also comprise a distance sensor, for example a triangulation system and/or an OCT ("optical coherence tomography") system.
- a distance to a surface of the workpiece can be determined by means of the distance sensor, for example a distance between a laser processing head of the laser processing system and the workpiece surface.
- the system can further comprise at least one image capture unit for capturing current image data of a machining area of the workpiece during the laser machining process.
- the image acquisition unit can comprise a camera or a camera system, in particular a 2D and / or a 3D camera system, preferably with incident light LED lighting.
- the image acquisition unit can comprise a stereo camera system.
- the image data preferably correspond to a two-dimensional image or mapping of a section of the workpiece surface which comprises the processing area of the laser processing process.
- the processing area can include a so-called melt pool and keyhole. In other words, the image data can include an image of the weld pool and the keyhole.
- the acquisition rates of the image acquisition unit and the sensor unit can be the same.
- the data of the image acquisition unit and the sensor unit can be correlated per predetermined period of time.
- the image acquisition unit and the sensor unit can always acquire the respective data at the same times.
- the image acquisition unit can record an image of the workpiece at those times at which a temperature sensor takes a temperature measurement.
- the current data of the laser processing process can include current sensor data and / or current image data and / or current control data of the laser processing process.
- the sensor data and control data are also called process data below.
- the sensor data represent values of at least one parameter detected or measured by a sensor unit.
- the control data represent values of at least one control parameter of the laser processing process or laser processing system.
- the computing unit preferably has at least one interface which is set up to receive this current data.
- the at least one interface can, for example, be set up to receive training data for training or adapting the neural network, control data of a laser processing system and / or sensor data of a sensor unit and / or image data from an image acquisition unit.
- the system can therefore be set up to obtain values from at least one control parameter, for example via an interface, from a controller of a laser processing system which carries out the laser processing process.
- the network architecture for classifying the sensor data is different from the network architecture for classifying the image data.
- the respective neural networks which classify process data and image data, are interconnected after the last or penultimate hidden layer.
- the last hidden layer of the respective network represents the features of the input tensors of the image and process data. The classification of these combined features takes place in the subsequent fully connected layers.
- this procedure has the advantage that only a few layers need to be trained when training the entire network, and that, when use is restricted to process data only, the network that was trained for the process data can be reused.
- the networks for image and process data are trained separately. The networks have thus learned to map the input tensor consisting of the image data and the input tensor consisting of the process data onto feature vectors.
- the input tensor(s) may contain current data for x past points in time during an execution of the laser machining process, where x is a natural number, and for each of the x points in time the input tensor may contain image data corresponding to that time as well as sensor or control data, i.e. process data.
- the x points in time can be spaced equally far apart, for example by 256 ms, 512 ms or 1024 ms.
- the input tensor(s) can be mapped by the transfer function to the output tensor, i.e. image data and control or sensor data are processed by a common transfer function.
- two branches of the network are run through and the features of the respective input tensor are connected in one layer.
- one branch of the network has the image data as its input tensor, the other branch the process data. This procedure is called "feature-level fusion". Both networks can easily be decoupled again and used individually, since the combination of image and process data in one tensor may not be useful in some situations.
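The feature-level fusion of the two branches can be sketched as below: each branch is reduced here to a single dense layer producing a feature vector, and the two vectors are concatenated before a common classification layer. Branch sizes and weights are stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def branch(x, W):
    # one network branch reduced to a single dense layer + ReLU; stands
    # in for the separately trained image / process-data networks
    return np.maximum(W @ x.reshape(-1), 0.0)

image_tensor = rng.standard_normal((32, 32))    # image branch input
process_tensor = rng.standard_normal((4, 128))  # process-data branch input

W_img = rng.standard_normal((16, image_tensor.size)) * 0.01
W_proc = rng.standard_normal((16, process_tensor.size)) * 0.01

# feature-level fusion: concatenate both branches' feature vectors
# and classify them in a common fully connected layer
features = np.concatenate([branch(image_tensor, W_img),
                           branch(process_tensor, W_proc)])
W_head = rng.standard_normal((2, features.size)) * 0.01
logits = W_head @ features
```

Dropping one branch (e.g. using process data only) simply means feeding the remaining feature vector to a head of matching size, which is why the branches are easy to decouple.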
- a laser machining system for machining a workpiece by means of a laser beam
- the machining system comprising a laser machining head for irradiating a laser beam onto a workpiece to be machined and a system for detecting a machining error in accordance with one of the aspects described herein.
- the image acquisition unit is preferably arranged on the laser processing head.
- a method for monitoring a laser machining process for machining a workpiece comprises the following steps: determining one or more input tensors based on current data of the laser machining process, and determining an output tensor based on the one or more input tensors by means of a transfer function, the output tensor containing information about a current processing result, the transfer function between the input tensor and the output tensor being formed by a trained neural network.
- FIG. 1 shows a schematic illustration of a laser machining system for machining a workpiece by means of a laser beam and a system for monitoring a laser machining process according to one embodiment
- FIG. 2 shows a block diagram of a system for monitoring a laser machining process according to one embodiment
- FIGS. 3A, 3B and 3C show representations of exemplary courses of sensor values
- FIG. 4 shows a representation of image data according to an embodiment
- FIG. 5 shows a block diagram of a deep convolutional neural network according to an embodiment
- FIG. 6 shows a block diagram of a deep convolutional neural network for classifying process data according to an embodiment
- FIG. 7 shows a block diagram of a deep convolutional neural network for classifying image data according to an embodiment
- FIG. 8 shows a block diagram of a deep convolutional neural network for classifying image and process data according to one embodiment.
- FIG. 9 shows a method for monitoring a laser machining process for machining a workpiece according to one embodiment.
- FIG. 1 shows a schematic illustration of a laser machining system 100 for machining a workpiece by means of a laser beam according to embodiments of the present disclosure.
- the laser processing system 100 is configured to perform a laser processing process in accordance with embodiments of the present disclosure.
- the laser processing system 100 comprises a laser processing head 101, in particular a laser cutting, laser soldering or laser welding head, and a system 300 for detecting processing errors.
- the laser processing system 100 comprises a laser device 110 for providing a laser beam 10 (also referred to as a “processing beam” or “processing laser beam”).
- the laser processing system 100 or parts thereof, such as the processing head 101, can be movable along a processing direction 20 according to embodiments.
- the machining direction 20 can be a cutting, soldering or welding direction and / or a direction of movement of the laser machining system 100, such as the machining head 101, with respect to the workpiece 1.
- the processing direction 20 can be a horizontal direction.
- the machining direction 20 can also be referred to as the “feed direction”.
- the laser processing system 100 is controlled by a control unit 140, which is set up to control the processing head 101 and / or the laser device 110.
- the system 300 for monitoring a laser processing process comprises a computing unit 320.
- the computing unit 320 is set up to determine an input tensor based on current data of the laser processing process and to determine, by means of a transfer function, an output tensor based on the input tensor, which contains information about a current processing result of the laser processing process.
- the output tensor can be the result of one or more arithmetic operations and contain information as to whether and which errors occurred during the processing of the workpiece 1 by the laser processing system 100. Furthermore, the output tensor can contain information about the type, position and size of the defect(s) on the workpiece surface 2.
- the output tensor can also include information about a machining area of the workpiece 1, for example a size, shape or extent of a keyhole and / or a molten pool.
- the computing unit 320 is combined with the control unit 140 (not shown). In other words, the functionality of the computing unit 320 can be combined with that of the control unit 140 in a common processing unit.
- the system 300 further comprises at least one sensor unit 330 and one image acquisition unit 310.
- the at least one sensor unit 330 is set up to detect the value of a parameter of a laser machining process that is carried out by the laser machining system 100, to generate sensor data from the detected values and to transmit these to the computing unit 320.
- the acquisition can take place continuously or in real time.
- the sensor unit 330 can be set up to acquire values from a plurality of parameters and to forward these to the computing unit 320. The values can be recorded at the same time.
- the image capture unit 310 is designed to capture image data of a machined surface 2 of the workpiece 1 and / or a machining area of the laser machining process.
- the machining area can be defined as an area of the workpiece surface at which the laser beam 10 strikes the workpiece surface at a current time and the material of the workpiece surface has melted and / or where a puncture or puncture hole is present in the material.
- the machining area can in particular be defined as an area of the workpiece surface in which a weld pool and / or a keyhole is formed.
- the image acquisition unit 310 is arranged on the processing head 101.
- the image acquisition unit 310 can be arranged on the processing head 101 so as to trail it with respect to the processing direction 20.
- the image acquisition unit 310 can also be arranged coaxially with a laser beam 10 and / or a measurement beam 13 described later.
- the computing unit 320 is set up to receive the image data acquired by the image acquisition unit 310 and the sensor data acquired by the sensor unit 330 and to form the input tensor on the basis of the current image data and the current sensor data.
- the laser processing system 100 or the system 300 comprises a measuring device 120 for measuring a distance between an end section of the processing head 101 and a workpiece 1 to be processed.
- the measuring device can comprise an optical coherence tomograph, in particular an optical short-coherence tomograph.
- the laser device 110 can have a collimator lens 112 for collimating the laser beam 10.
- the coherence tomograph can have collimator optics 122 that are set up to collimate an optical measurement beam 13 and focusing optics 124 that are set up to focus the optical measurement beam 13 onto the workpiece 1.
- FIG. 2 shows a block diagram of the system 300 for monitoring a laser machining process according to one embodiment.
- the system 300 comprises the computing unit 320, at least one sensor unit 330 and an image capturing unit 310.
- the computing unit 320 is connected to the sensor unit 330 and the image capturing unit 310, so that the computing unit 320 can receive the image data captured by the image capturing unit 310 and the sensor data captured by the sensor unit 330.
- computing unit 320 contains a processor for determining the output tensor.
- the transfer function is typically stored in a memory (not shown) of the computing unit 320 or realized as a circuit, for example as an FPGA.
- the memory can be designed to store further data, for example the determined output tensor.
- the computing unit 320 can comprise an input / output unit 322, which in particular can have a graphical user interface for interaction with a user.
- the computing unit 320 can have a data interface 321 via which the computing unit can transfer the output tensor to an external location, e.g. another computing unit, a computer, a PC, or an external storage unit such as a database, a memory card or a hard drive.
- Computing unit 320 may further include a communication interface (not shown) with which the computing unit can communicate with a network.
- the computing unit 320 can graphically display the output tensor on the output unit 322.
- the computing unit 320 can be connected to a control unit 140 of a laser processing system 100 in order to transmit the output tensor to the control unit 140.
- the computing unit 320 can also be set up to receive control data from the control unit 140 of the laser processing system 100 via the interface 321 and also to include the control data in the input tensor.
- the control data can include, for example, the output power of the laser device 110, the distance of the machining head 101 from the surface of the workpiece 1, the feed direction and speed, in each case at a given time.
- computing unit 320 forms one or more input tensors for a transfer function from the current data. According to one embodiment, the one or more input tensors are formed from current raw data, i.e. no preceding processing of the current data by the computing unit 320, the sensor unit 330 or the image acquisition unit 310 takes place.
- the transfer function is formed by a learned, i.e. pre-trained, neural network.
- the computing unit contains the deep convolutional neural network.
- the output tensor is formed by applying the transfer function to the one or more input tensors; the output tensor is thus determined from the one or more input tensors by means of the transfer function.
- the output tensor contains information or data about a current processing result of the laser processing process.
- the processing result can be, for example, processing errors that have occurred and / or the information about a processing area of the workpiece.
- This information about a current machining error can include: whether there is at least one machining error, the type of at least one machining error, the position of the machining error on the surface of the machined workpiece 1 and / or the size or extent of the machining error.
- the information about the processing area can be: location and / or size of the keyhole, location and / or size and / or geometry of the weld pool.
- the output tensor can also contain the probability with which a processing error of a certain type has occurred, or with what certainty the system has recognized a processing error of a specific type.
- image acquisition unit 310 may include a camera system or a stereo camera system, e.g. with incident-light LED illumination.
- the image data correspond to a two-dimensional image of a section of the workpiece surface.
- the captured or recorded image data represent a two-dimensional image of the workpiece surface, as shown by way of example in FIG. 4 and described in detail below.
- the computing unit 320 can be configured to graphically display the input tensor and / or the output tensor on the output unit 322.
- the computing unit 320 can graphically display the sensor data and/or image data contained in the input tensor, as curve profiles as shown in FIGS. 3A to 3C, or as two-dimensional images of the surface of the workpiece 1 as shown in FIG. 4, and overlay them with the information contained in the output tensor.
- FIG. 4 shows a representation of image data according to an embodiment, namely an exemplary image of the weld pool and the keyhole at 850 nm with superimposed geometry data.
- the cross 2a shows the center of the keyhole
- the cross 2b shows the center of the melt pool
- the line 2c shows the outline of the melt pool
- the line 2d shows the outline of the keyhole; these geometry data are determined by the computing unit 320 and included as information in the output tensor.
- the bordering rectangle 2e (“bounding box”) shows the calculated size of the weld pool.
- if the output tensor contains, for example, the information that the processing result of a laser processing process is classified as “bad”, the system 300 for monitoring the laser machining process can output an error.
- for this purpose, target specifications or reference values would have to be specified or stored for the size of the bounding rectangle 2e.
- a two-stage morphological operation is carried out for the calculation (“blob analysis”).
- the parameters required for this, such as the binary thresholds, must be specified by experts in conventional systems. Changes to the welding process with this procedure require changes to the parameters by experienced experts.
- FIG. 5 shows a block diagram of a deep convolutional neural network 400 according to a first embodiment.
- the input tensors 405, 415 contain various types of sensor data, control data and image data of the laser processing system.
- the input tensor 405 for the image data has the dimensions “image height in pixels” x “image width in pixels” and the input tensor 415 for the sensor and/or control data has the dimensions “number of types of sensor and/or control data” x “number of samples”.
- the image data thus form the input tensor 405 for the “branch” of the neural network 400, which reduces the image data to the significant features.
- the sensor data form the input tensor 415 for the branch of the neural network 400, which calculates the significant features from the sensor data.
- the sensor data can be, for example, temperatures measured by one or more temperature sensors, plasma radiation measured by a corresponding sensor, an intensity of laser light reflected or backscattered by a workpiece surface, a wavelength of reflected or backscattered laser light, or a distance between a laser processing head and the workpiece measured by a distance sensor.
- the control data may be control signals generated by a controller to cause a laser processing system to perform the laser processing process.
- the control data can be a focus position and a focus diameter of a laser beam or a path signal, the path signal representing a position signal which specifies the relative position of a laser processing head of the laser processing system relative to the workpiece.
- the sensor data and/or control data directly form the input tensor 415 of the deep convolutional neural network.
- the image data directly form the input tensor 405 of the deep convolutional neural network. This means that a so-called “end-to-end” mapping or analysis takes place between the input tensors 405, 415 and the output tensor. Since image data and process data are combined in one network in this deep convolutional neural network, one speaks of a so-called “feature level fusion”.
- the computing unit can be set up, for each of the n times, to summarize the set of sensor data, control data and/or image data corresponding to the respective time in the respective input tensor 405, 415 and to map them as a whole onto the output tensor by means of the transfer function 420.
- the acquisition rates of the image acquisition unit for acquiring the image data and the sensor unit for acquiring the sensor data can be the same, and the image acquisition unit and the sensor unit carry out the acquisition at the same times in each case.
- the output tensor 430 and accordingly the output layer has a dimension corresponding to the information contained.
- the output tensor 430 contains, for example, at least one of the following information: presence of at least one machining error, type of machining error, position of the machining error on a surface of a machined workpiece, probability of a machining error of a specific type, spatial and / or areal extent of the machining error on the surface of the machined workpiece, location and / or size of the keyhole, location and / or size and / or geometry of the weld pool.
- the output tensor 430 can be forwarded to a control unit of the respective laser processing process (not shown).
- the control unit can use the information contained in the output tensor 430 to adapt the laser machining process, for example by adapting various parameters of the laser machining process.
- FIG. 6 shows a block diagram of a deep convolutional neural network 600 according to a further embodiment, which is suitable for mapping an input tensor comprising current sensor and/or control data onto an output tensor.
- the input tensor 630 contains 512 measurement values or samples of 4 different types of sensor data and/or control data, i.e. process data, of the laser processing system.
- the sensor data and control data directly form the input tensor 630 of the deep convolutional neural network, i.e. a so-called “end-to-end” mapping or analysis takes place between the input tensor 630 and the output tensor 640.
- the input layer or input tensor 630 therefore has the dimension 4 x 512.
- the transfer function, which is formed by the deep convolutional neural network, is intended to provide information about a current processing error, that is to say a processing error that occurred at the time when the samples were recorded.
- the output tensor 640 should contain, for example, the information “error yes/no”, the presence or probability of the error “hole”, the presence or probability of the error “splash”, the presence or probability of the error “gap”, and the presence or probability of the error “wrong friend / lack of penetration”.
- the output tensor 640 or the output layer thus has the dimension 1 x 5.
- the deep convolutional neural network 600 therefore maps an input tensor 630 of dimension 4 x 512 onto the output tensor 640 of dimension 1 x 5: R^2048 → R^5.
- the deep convolutional neural network 600 (“Deep Convolutional Neural Net”), hereinafter abbreviated to “CNN”, can comprise a plurality of convolutional layers 610 which carry out convolutions with a plurality of kernels. Furthermore, the CNN 600 can have a “fully connected” layer or block 620 and/or a “leaky ReLU” block or layer 650. As shown in FIG. 6, the CNN comprises, for example, 21 convolutional layers, at least some of which comprise a normalization (“batch normalization”) and residual blocks.
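The mapping R^2048 → R^5 can be illustrated with a toy stand-in using random, untrained weights (the layer count and kernel sizes here are deliberately far smaller than the 21 layers of the figure; all shapes besides 4 x 512 in and 5 out are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=2, alpha=0.01):
    """Valid 1-D convolution followed by a leaky-ReLU activation.

    x: (in_channels, length); kernels: (out_channels, in_channels, k).
    """
    out_ch, in_ch, k = kernels.shape
    length = (x.shape[1] - k) // stride + 1
    out = np.zeros((out_ch, length))
    for o in range(out_ch):
        for i in range(length):
            out[o, i] = np.sum(x[:, i * stride:i * stride + k] * kernels[o])
    return np.where(out > 0, out, alpha * out)

x = rng.normal(size=(4, 512))              # 4 signal types x 512 samples
x = conv1d(x, rng.normal(size=(8, 4, 3)))  # toy stack of two conv layers
x = conv1d(x, rng.normal(size=(8, 8, 3)))
w = rng.normal(size=(5, x.size))           # "fully connected" output head
probs = 1 / (1 + np.exp(-(w @ x.reshape(-1))))  # five error outputs
print(probs.shape)  # (5,)
```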
- FIG. 7 shows a block diagram of a deep convolutional neural network 700 according to an embodiment, which is suitable for mapping an input tensor comprising current image data onto an output tensor.
- the input tensor 730 comprises an image of the workpiece, for example of the machining area of the workpiece, with a size of 512 x 512 pixels, i.e. the input layer 730 has the dimension 512 x 512.
- the input tensor 730 contains the acquired raw image data. These raw image data directly form the input tensor of the deep convolutional neural network, i.e. a so-called “end-to-end” mapping or analysis takes place between the input tensor 730 and the output tensor 740. No characteristics of the keyhole or the melt pool are therefore calculated or parameterized in an intermediate step.
- the transfer function is intended to provide information about whether the keyhole is present and / or information about the position of a center of gravity or center of the keyhole, about a rectangle surrounding the keyhole and / or about a rectangle surrounding the weld pool.
- the output tensor 740 thus contains the values “Pkeyhole” (keyhole present/not present), “Xkeyhole” (position of the centroid or center of the keyhole in the x direction), “Ykeyhole” (position of the centroid or center of the keyhole in the y direction), “dXkeyhole” (size of the keyhole in the x direction), “dYkeyhole” (size of the keyhole in the y direction), “Xmeltpool” (position of the centroid or center of the melt pool in the x direction), “Ymeltpool” (position of the centroid or center of the melt pool in the y direction), “dXmeltpool” (size of the melt pool in the x direction), and “dYmeltpool” (size of the melt pool in the y direction).
- the output tensor 740 or the output layer thus comprises 9 values and accordingly has the dimension 1 x 9. According to the embodiment shown in FIG. 7, the neural network 700 therefore maps the input tensor 730 of dimension 512 x 512 onto the output tensor 740 of dimension 1 x 9: R^262144 → R^9.
- the CNN comprises, for example, 34 convolutional layers 710, at least some of which comprise a normalization (“batch normalization”) and so-called residual blocks.
- the convolutional network also has two so-called “fully connected” layers 720.
- the output of the neural network 700 is serialized (flattened) in the last layer 750 and mapped onto the output tensor 740 by means of a sigmoid activation function.
- the normalization is usually carried out using the mean value and the standard deviation over a “mini-batch”; this has a regularizing effect.
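As a sketch, such a batch normalization over a mini-batch can be written as follows (the learnable scale gamma and shift beta are shown with default values; the epsilon term is the usual numerical safeguard):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the mini-batch axis using the batch
    # mean and standard deviation, then rescale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

batch = np.array([[1.0, 2.0], [3.0, 4.0]])  # mini-batch of 2 samples
normed = batch_norm(batch)
```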
- these parameters are used as hyperparameters in a trained deep convolutional neural network; see “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift” (Sergey Ioffe, Christian Szegedy).
- a “convolution 32 3x3” block stands for a convolution block or a convolution layer with 32 different 3x3 convolution filter masks, i.e. the “convolution 32 3x3” block creates a tensor of dimension m x n x 32 from an input tensor of dimension m x n x c, where m stands for the height, n for the width and c for the number of channels.
- the specification “/2” in a convolution block in FIG. 7 describes a “stride” of 2, i.e. the filter kernel is shifted by 2 pixels so that the spatial dimension is halved.
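The resulting tensor dimensions can be checked with a small helper (an illustrative sketch; a padding of 1 is assumed so that a 3x3 convolution with stride 1 preserves the spatial size, as the block labels in FIG. 7 suggest):

```python
def conv_output_shape(m, n, c_out, k=3, stride=1, pad=1):
    # Output height/width of a k x k convolution with the given stride;
    # pad=1 keeps 3x3 stride-1 convolutions size-preserving.
    return ((m + 2 * pad - k) // stride + 1,
            (n + 2 * pad - k) // stride + 1,
            c_out)

print(conv_output_shape(512, 512, 32))            # (512, 512, 32)
print(conv_output_shape(512, 512, 32, stride=2))  # (256, 256, 32)
```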
- the dimensions given for the blocks, e.g. “512 x 512”, each describe the dimension m x n of the tensor without the number of channels.
- the indication “residual block” indicates that the result of a layer (l + 2) is added to the output of a previous layer (l) before the value is passed on via the activation function.
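A residual block can be sketched as follows (identity stand-ins replace the learned layers, and a “leaky ReLU” activation is assumed, matching the blocks in FIG. 6):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def residual_block(x, layer_1, layer_2):
    # The result of layer l+2 is added to the output of layer l
    # (the skip connection) before the activation is applied.
    y = leaky_relu(layer_1(x))
    y = layer_2(y)
    return leaky_relu(y + x)

identity = lambda v: v                     # stand-in for learned layers
out = residual_block(np.array([1.0, -2.0]), identity, identity)
print(out)  # approximately [2.0, -0.0202]
```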
- FIG. 8 shows a deep convolutional neural network 800 for the classification of image data and process data.
- the neural network 800 comprises a neural network 845, 855 according to each of the embodiments of FIGS. 6 and 7.
- the neural network 800 is created by appending at least one fully connected layer 830 (“fully connected layer”) and optionally further fully connected layers 820 with a so-called “leaky ReLU” activation function.
- the last fully connected layer 830 can be mapped onto the output tensor 840 by means of a sigmoidal activation function.
- the two input tensors 805, 815 of the respective neural networks 845, 855 are mapped onto the output tensor 840, the output tensor 840 having the following components: P(error), P(hole), P(splash), P(gap), P(wrong friend), “x_keyhole”, “y_keyhole”, “dx_keyhole”, “dy_keyhole”, “x_schmelzbad”, “y_schmelzbad”, “dx_schmelzbad” and “dy_schmelzbad”.
- P stands for the probability of a processing error of a certain type.
- the output tensor 840 of the neural network 800 therefore has the dimension 1 ⁇ 13.
- the neural network used in the embodiments of FIGS. 5 to 8 is a trained deep convolutional neural network.
- before delivery of the system for detecting machining errors, the CNN has learned from examples which machined workpiece surface is “good” and which is “bad”, or which weld or solder seam is “good” and which is “bad”.
- the CNN has learned to classify a machined workpiece surface as "good” or "bad”, or it has learned to identify machining errors, to localize them, classify them according to their type and determine their size.
- the system should reliably determine whether the machined workpiece surface has machining errors or which geometric properties the machining area has. It can preferably identify which defects are present (e.g. a pore, a hole, ejection, splashes, adhesion or a lack of weld-through or "wrong friend"), and can possibly also locate the machining error and indicate its size on the workpiece surface.
- for training, the CNN is provided with predefined input data sets and corresponding predefined output tensors.
- the predefined input data records contain, for example, sensor, image and/or control data of the laser processing process as described above.
- a corresponding predefined output tensor or result tensor is assigned to each predefined input data record. This output tensor contains the desired result of the CNN for the respective laser processing process for the respective input data record.
- the corresponding predefined output tensor contains information about the classification of the machining errors present on the section of the machined workpiece surface and / or about the geometric features of the machining area.
- this assignment of an output tensor to each predefined input data record is done manually (so-called “labeling” of the recorded sensor, image and control data), i.e. there is a predetermined assignment of the sensor, image and control data to the result of the transfer function.
- the output tensor indicates whether a machining error has occurred in the laser machining process on which the input data record is based; what type of error is present; at which location on the machined workpiece surface the machining error is present, for example using a two-dimensional coordinate system with x and y coordinates; the size of the machining error in the x and y directions; whether a keyhole and/or a weld pool is present; where the keyhole and/or the weld pool are located relative to one another or to a current machining point; and what area and/or which semiaxes the keyhole and/or the melt pool have, etc.
- the transfer function formed by the CNN is then determined by means of optimization methods and stored in the system 300, preferably in the memory of the computing unit 320.
- the optimization is carried out, for example, using the “backpropagation” method with Adam optimization.
- the CNN provides the assignment of the input data record to the processing result.
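The supervised optimization described above can be illustrated with a toy stand-in (plain gradient descent on a single sigmoid layer instead of backpropagation with Adam through a deep CNN; all data are synthetic and the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic labelled data: input records X and their predefined output
# tensors Y (here generated from a known weight matrix w_true).
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=(8, 5))
Y = 1 / (1 + np.exp(-X @ w_true))

# Plain gradient descent on a single sigmoid layer as a stand-in for
# the full optimization of the CNN's transfer function.
w = np.zeros((8, 5))
for _ in range(500):
    pred = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (pred - Y) / len(X)   # cross-entropy gradient

loss = np.mean((1 / (1 + np.exp(-X @ w)) - Y) ** 2)
```

After fitting, the learned mapping reproduces the predefined output tensors closely, which is the sense in which "the CNN provides the assignment of the input data record to the processing result".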
- these parameters are used as hyperparameters in the trained network; see “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift” (Sergey Ioffe, Christian Szegedy).
- the trained deep convolutional neural network is configured so that it can be adapted to a changed situation or a changed laser processing process by means of so-called transfer learning.
- the basic training of the network is carried out in advance, before the system is put into operation. If the machining process changes after commissioning, only so-called transfer learning is carried out.
- the changed situation can be, for example, that the workpieces to be machined change, for example when the material changes.
- the thickness of the workpiece surface or the material composition can also change slightly.
- other process parameters can be used to machine the workpiece. This can result in other processing errors. For example, the probability of different types of machining errors can change or the machining errors can be designed differently. This means that the neural network must be adapted to the changed situation and the processing errors that have changed as a result.
- transfer learning is similar to the initial training of the neural network. Typically, however, only a few specific convolutional layers of the deep convolutional neural network are adapted in transfer learning, in particular the last two to three convolutional layers. The number of parameters of the neural network that are changed is considerably smaller than in the initial training of the neural network. This enables transfer learning at the customer's site to be completed quickly, typically in less than an hour. In other words, with transfer learning the entire neural network is not re-trained.
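The effect on the number of changed parameters can be sketched as follows (layer names and parameter counts are made up purely for illustration):

```python
# Illustrative layer list with made-up parameter counts: 19 early
# convolutional layers plus the last two convolutional layers and a
# fully connected head.
layers = [("conv_%d" % i, 1000) for i in range(19)]
layers += [("conv_19", 1000), ("conv_20", 1000), ("head", 500)]

def transfer_learning_split(layers, n_last=3):
    # Only the last n_last layers are adapted during transfer learning;
    # all earlier layers stay frozen.
    tuned = sum(p for _, p in layers[-n_last:])
    frozen = sum(p for _, p in layers[:-n_last])
    return tuned, frozen

tuned, frozen = transfer_learning_split(layers)
print(tuned, frozen)  # 2500 19000
```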
- the system 300 can receive the training data required for transfer learning via the interface 321.
- the training data can include test data records of the changed laser processing process, from which the computing unit forms a corresponding input tensor during transfer learning.
- the training data comprise a predetermined output tensor which is assigned to the respective test data record and which contains information, previously determined by an expert, about the corresponding processing result of the modified laser processing process.
- test data sets contain sensor data that were recorded when a processing error occurred during a previous laser processing process, and the associated output tensor contains information about the error, for example the type of error, the position and the extent of the machining error on the workpiece.
- the first step 910 includes determining an input tensor based on current data from the laser machining process.
- in a second step 920, an output tensor is determined based on the input tensor using a transfer function, the output tensor containing information about a current processing result.
- the transfer function is predefined and formed by a learned neural network.
- the method for monitoring a laser machining process can be carried out while the workpiece is being machined. According to one embodiment, the method is run through once for the entire machined workpiece surface.
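Steps 910 and 920 can be sketched as a monitoring loop (stand-in functions only; in the real system the transfer function is the trained neural network):

```python
import numpy as np

def form_input_tensor(current_data):
    # Step 910: determine an input tensor from current process data.
    return np.asarray(current_data, dtype=float)

def transfer_function(tensor):
    # Step 920 stand-in: the trained network would map the input tensor
    # to an output tensor; here a single "error probability" entry.
    return 1.0 / (1.0 + np.exp(-tensor.sum()))

# Run repeatedly while the workpiece is being machined:
results = [transfer_function(form_input_tensor(d))
           for d in ([0.1, 0.2], [0.3, 0.4])]
```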
- the use of a neural network which forms the transfer function has the advantage that the system can independently recognize whether and which processing errors are present. Accordingly, it is no longer necessary for the received current data, such as the image or sensor data, to be preprocessed in order to be accessible for error detection. Furthermore, it is not necessary to extract features from the recorded data that characterize the processing quality or any processing errors. In addition, it is not necessary to decide which extracted features are necessary or relevant for the evaluation of the processing quality or the classification of the processing errors. It is also not necessary to specify or adapt a parameterization of the extracted features for the classification of the processing errors. This simplifies the determination or assessment of the processing quality or of processing errors by the laser processing system. The steps mentioned do not have to be carried out or accompanied by laser processing experts.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102018129441.7A DE102018129441B4 (en) | 2018-11-22 | 2018-11-22 | System for monitoring a laser processing process, laser processing system and method for monitoring a laser processing process |
PCT/EP2019/077485 WO2020104103A1 (en) | 2018-11-22 | 2019-10-10 | Monitoring a laser machining process using deep folding neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3883715A1 true EP3883715A1 (en) | 2021-09-29 |
Family
ID=68281408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19789615.2A Pending EP3883715A1 (en) | 2018-11-22 | 2019-10-10 | Monitoring a laser machining process using deep folding neural networks |
Country Status (7)
Country | Link |
---|---|
US (1) | US12013670B2 (en) |
EP (1) | EP3883715A1 (en) |
JP (2) | JP2022509143A (en) |
KR (1) | KR102611342B1 (en) |
CN (1) | CN113329836A (en) |
DE (1) | DE102018129441B4 (en) |
WO (1) | WO2020104103A1 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3885069A1 (en) * | 2020-03-25 | 2021-09-29 | Bystronic Laser AG | Quality control of a laser machining process by means of machine learning |
US11537111B2 (en) * | 2020-04-01 | 2022-12-27 | General Electric Company | Methods and apparatus for 2-D and 3-D scanning path visualization |
DE102020112116A1 (en) | 2020-05-05 | 2021-11-11 | Precitec Gmbh & Co. Kg | Method for analyzing a laser machining process, system for analyzing a laser machining process and laser machining system with such a system |
DE102020123479A1 (en) * | 2020-09-09 | 2022-03-10 | Precitec Gmbh & Co. Kg | Method for monitoring the condition of a laser processing head and a laser processing system for performing the same |
DE102020211343A1 (en) | 2020-09-10 | 2022-03-10 | Trumpf Laser- Und Systemtechnik Gmbh | Process for laser welding using a laser beam guided in a double-core fiber and associated laser welding machine and computer program product |
CN116133785A (en) * | 2020-09-18 | 2023-05-16 | 百超激光有限公司 | Computer-implemented method and optimization tool for improving laser cutting process parameters by the optimization tool |
DE102021107544B4 (en) | 2021-03-25 | 2023-01-05 | Precitec Gmbh & Co. Kg | Method for normalizing sensor signals for monitoring a laser machining process, method for monitoring a laser machining process and laser machining system |
DE102021111349A1 (en) | 2021-05-03 | 2022-11-03 | Precitec Gmbh & Co. Kg | Method for monitoring a laser welding process and associated laser welding system |
KR102532753B1 (en) * | 2021-07-22 | 2023-05-16 | 울산과학기술원 | Method and apparatus for monitoring metal 3D printer, and computer program for the method |
DE102021121112A1 (en) * | 2021-08-13 | 2023-02-16 | Precitec Gmbh & Co. Kg | Analyzing a laser machining process based on a spectrogram |
DE102021123038A1 (en) * | 2021-09-06 | 2023-03-09 | Trumpf Laser- Und Systemtechnik Gmbh | Spatter detection by artificial intelligence in laser processing |
DE102021127016A1 (en) | 2021-10-19 | 2023-04-20 | Precitec Gmbh & Co. Kg | Process signal reconstruction and anomaly detection in laser machining processes |
CN114453733B (en) * | 2022-01-29 | 2024-05-31 | 苏州富润泽激光科技有限公司 | Galvanometer welding control method and system |
US20230302539A1 (en) * | 2022-03-25 | 2023-09-28 | General Electric Company | Tool for scan path visualization and defect distribution prediction |
KR20240109515A (en) * | 2023-01-04 | 2024-07-11 | 에스케이온 주식회사 | Apparatus and method for predicting welding quality |
DE102023103439A1 (en) * | 2023-02-13 | 2024-08-14 | TRUMPF Laser- und Systemtechnik SE | Computer-aided procedure |
CN117884786B (en) * | 2024-03-15 | 2024-05-28 | 哈尔滨工业大学(威海) | Solder ball laser welding defect detection method |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05233585A (en) * | 1992-02-24 | 1993-09-10 | Hitachi Ltd | Device abnormality diagnostic method |
DE19957163C1 (en) * | 1999-11-27 | 2001-08-09 | Thyssenkrupp Stahl Ag | Method and device for quality control of the seam on sheets or strips butt welded with a laser |
JP2001276980A (en) | 2000-03-30 | 2001-10-09 | Matsushita Electric Ind Co Ltd | Connecting apparatus |
KR101700896B1 (en) | 2008-11-21 | 2017-01-31 | 프레시텍 게엠베하 운트 코 카게 | Method and device for monitoring a laser machining operation to be performed on a workpiece and laser machining head having such a device |
JP5172041B2 (en) | 2009-07-20 | 2013-03-27 | プレシテック カーゲー | Laser machining head and method for compensating for changes in the focal position of the laser machining head |
US9355441B2 (en) | 2010-06-28 | 2016-05-31 | Precitec Kg | Method for closed-loop controlling a laser processing operation and laser material processing head using the same |
KR101780049B1 (en) | 2013-07-01 | 2017-09-19 | 한국전자통신연구원 | Apparatus and method for monitoring laser welding |
EP3329433A1 (en) * | 2015-07-29 | 2018-06-06 | Illinois Tool Works Inc. | System and method to facilitate welding software as a service |
JP6339603B2 (en) * | 2016-01-28 | 2018-06-06 | ファナック株式会社 | Machine learning apparatus, laser apparatus, and machine learning method for learning laser processing start condition |
JP6404893B2 (en) | 2016-12-22 | 2018-10-17 | ファナック株式会社 | Tool life estimation device |
JP6487475B2 (en) | 2017-02-24 | 2019-03-20 | ファナック株式会社 | Tool state estimation device and machine tool |
JP6490124B2 (en) | 2017-03-07 | 2019-03-27 | ファナック株式会社 | Laser processing apparatus and machine learning apparatus |
CN108346151A (en) | 2018-03-12 | 2018-07-31 | 湖南大学 | A method of judging laser welding penetration |
DE102018129425B4 (en) * | 2018-11-22 | 2020-07-30 | Precitec Gmbh & Co. Kg | System for recognizing a machining error for a laser machining system for machining a workpiece, laser machining system for machining a workpiece by means of a laser beam comprising the same, and method for detecting a machining error in a laser machining system for machining a workpiece |
2018
- 2018-11-22 DE DE102018129441.7A patent/DE102018129441B4/en active Active
2019
- 2019-10-10 US US17/295,904 patent/US12013670B2/en active Active
- 2019-10-10 KR KR1020217018693A patent/KR102611342B1/en active IP Right Grant
- 2019-10-10 CN CN201980089975.7A patent/CN113329836A/en active Pending
- 2019-10-10 JP JP2021528414A patent/JP2022509143A/en active Pending
- 2019-10-10 EP EP19789615.2A patent/EP3883715A1/en active Pending
- 2019-10-10 WO PCT/EP2019/077485 patent/WO2020104103A1/en unknown
2024
- 2024-03-28 JP JP2024053110A patent/JP2024083389A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
KR20210091789A (en) | 2021-07-22 |
US20220011726A1 (en) | 2022-01-13 |
JP2024083389A (en) | 2024-06-21 |
DE102018129441B4 (en) | 2023-11-16 |
CN113329836A (en) | 2021-08-31 |
WO2020104103A1 (en) | 2020-05-28 |
DE102018129441A1 (en) | 2020-05-28 |
US12013670B2 (en) | 2024-06-18 |
JP2022509143A (en) | 2022-01-20 |
KR102611342B1 (en) | 2023-12-06 |
Similar Documents
Publication | Title |
---|---|
DE102018129441B4 (en) | System for monitoring a laser processing process, laser processing system and method for monitoring a laser processing process |
DE102018129425B4 (en) | System for recognizing a machining error for a laser machining system for machining a workpiece, laser machining system for machining a workpiece by means of a laser beam comprising the same, and method for detecting a machining error in a laser machining system for machining a workpiece |
EP2365890B1 (en) | Method and device for monitoring a laser machining operation to be performed on a workpiece and laser machining head having such a device |
DE102014208768B4 (en) | Method and device for quality assurance |
DE102012219187B4 (en) | System for binary classification of elements of interest in a reproducible process |
EP4146426A1 (en) | Method for analysing a laser machining process, system for analysing a laser machining process and laser machining system comprising a system of this type |
EP2886239B1 (en) | Method and device for monitoring and controlling the processing path of a laser joining process |
DE102019114012A1 (en) | Microscopy method, microscope and computer program with verification algorithm for image processing results |
EP3768454B1 (en) | Method for automatically determining optimal welding parameters for welding on a workpiece |
DE102013109915A1 (en) | Method and device for checking an inspection system for detecting surface defects |
DE202014010601U1 (en) | Systems for generating or modifying a welding sequence |
DE102019209088A1 (en) | Method for evaluating a laser cut edge, mobile device and system |
DE102014207095A1 (en) | Edge measurement video tool with robust edge discrimination travel |
EP3885069A1 (en) | Quality control of a laser machining process by means of machine learning |
WO2021069346A1 (en) | Laser working system for performing a working process on a workpiece by means of a laser beam and method for monitoring a working process on a workpiece by means of a laser beam |
DE102019209376A1 (en) | Device and method for monitoring a laser machining process, use of an event-based camera, computer program and storage medium |
EP4119284A1 (en) | Quality estimator calibration for a laser cutting method |
EP3961559B1 (en) | Method and device for detecting defects during surface modification method |
DE102011085677A1 (en) | Monitoring a laser machining operation to be performed on a workpiece, comprises learning a process environment and classifying a current process result, where learning step comprises monitoring a laser machining process by a camera |
WO2023017178A1 (en) | Method and system for analyzing a laser machining process on the basis of a spectrogram |
DE102021120435A1 (en) | Method and apparatus for determining the size of defects during a surface modification process |
DE102020100345B4 (en) | System and method for monitoring a laser machining process and the associated laser machining system |
DE102018220342A1 (en) | Method for monitoring a laser machining process on one or more workpieces |
DE102023103439A1 (en) | Computer-aided procedure |
EP4368330A1 (en) | Control of a laser cutting machine using airborne sound signals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20210622 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20231106 |