WO2022194960A1 - Defect detection in additive manufacturing - Google Patents
- Publication number: WO2022194960A1 (application PCT/EP2022/056878)
- Authority: WIPO (PCT)
- Prior art keywords: laser, image, manufacturing, camera system, reflected
Classifications
- B22F10/28—Powder bed fusion, e.g. selective laser melting [SLM] or electron beam melting [EBM]
- B22F10/38—Process control to achieve specific product aspects, e.g. surface smoothness, density, porosity or hollow structures
- B22F12/90—Means for process control, e.g. cameras or sensors
- B22F2999/00—Aspects linked to processes or compositions used in powder metallurgy
- B29C64/153—Processes of additive manufacturing using layers of powder being selectively joined, e.g. by selective laser sintering or melting
- B29C64/393—Data acquisition or data processing for controlling or regulating additive manufacturing processes
- B33Y10/00—Processes of additive manufacturing
- B33Y30/00—Apparatus for additive manufacturing; details thereof or accessories therefor
- B33Y50/02—Data acquisition or data processing for controlling or regulating additive manufacturing processes
- G06N3/08—Learning methods for neural networks
- G06T7/0006—Industrial image inspection using a design-rule based approach
- G06T7/0008—Industrial image inspection checking presence/absence
- G06T7/12—Edge-based segmentation
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06T7/593—Depth or shape recovery from stereo images
- G06T2207/10016—Video; image sequence
- G06T2207/10021—Stereoscopic video; stereoscopic image sequence
- G06T2207/10024—Color image
- G06T2207/10048—Infrared image
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; machine component
- Y02P10/25—Process efficiency in metal processing
Definitions
- the present invention relates to an apparatus and method for defect detection in additive manufacturing, and more particularly relates to defect detection during the additive manufacturing process.
- Metallic additive manufacturing (3D printing) describes manufacturing processes for metallic components in which material is applied layer by layer and three-dimensional objects are thus produced.
- Established processes include selective laser melting (SLM) and laser powder bed fusion (LPBF).
- This manufacturing process makes it possible to produce complex geometries as a single part that, in conventional manufacturing (e.g. CNC machining), would have to be produced as several components and joined together.
- the process is cost-effective, particularly in the production of small series down to batch size 1.
- Another method for which defect detection can also be used is directed energy deposition (DED), in which either a wire or a powder is fused to build up complex geometries.
- a challenge is the early detection of defects, as well as reproducibility and the certification of machines and components, which is particularly important in the aviation industry.
- State-of-the-art methods rely, for example, on computed tomography (CT) scans: the finished component is scanned in a CT scanner and examined for defects.
- this method is expensive, since either an in-house CT scanner is required or the component must be shipped to an external CT provider, which is time-consuming. Both cases make production more expensive.
- Other prior-art methods rely on optical monitoring by cameras mounted in the printer, combined with temperature measuring units mounted in the build chamber, in order to monitor the temperature in the melt pool.
- such optical methods are error-prone because they depend on the operator.
- a method is therefore desirable which detects and displays defects fully automatically in situ, i.e. during the manufacturing process, and which, if necessary, corrects the process parameters so that the print is either completed successfully or canceled, completely or partially, when a manufacturing defect is detected.
- Such a process can significantly reduce printing costs and at the same time shorten production time.
- Porosity and density: porosity is characterized by small indentations or holes that have formed in the component and thus reduce the material density in the section under consideration. The reduced density causes the component to fatigue more quickly and eventually break. The holes are often almost spherical and smaller than 100 µm, and they are distributed almost uniformly over a given area. Porosity has many causes; among them are gas bubbles trapped in the atomized powder which are not displaced because the laser's radiation power is insufficient, thus preventing the powder from fusing completely.
- Incomplete fusion: individual layers or hatches fuse incompletely and form irregularly shaped holes. Unlike porosity, these holes have no characteristic structure.
- Cracking, delamination and distortion: stress that arises in a component due to rapid heating and cooling can lead to defects such as cracking, i.e. fractures in the component; delamination, i.e. separation of the manufactured layers from one another; or distortion (warpage), i.e. bending of surfaces. Cracking, delamination and warping differ from the defect types above in that they spread over a large area, but they can be induced by the presence of those defects.
- FEM: finite element methods
- Surface finish: the printing process leaves a certain surface roughness on components, which depends largely on the material, the layer thickness and the printing parameters used. Sandblasting of components in post-processing is standard, but electrochemical post-processing is also used. Inadequate laser fusing can result in optically unsatisfactory surfaces that even post-processing steps cannot remove.
- a method for detecting manufacturing defects during the additive manufacturing of a manufacturing object is disclosed, comprising the steps of: providing a camera system on or in a build space of an additive manufacturing device; determining a camera system path of the camera system during manufacturing of the object; manufacturing the object in several manufacturing steps, the camera system following the camera system path during production and recording and storing images of the object at predetermined positions; determining an image depth map for at least one sequence of at least two image recordings; determining a total depth map from all image depth maps for one manufacturing step; and determining a 3-dimensional design of the manufacturing object from the total depth map.
- the method also has the steps: comparing the 3-dimensional design with a 3-dimensional target design; determining deviations between the 3-dimensional design and the 3-dimensional target design; evaluating the deviation determined in the deviation determination step; and, if deviations are determined, outputting a deviation evaluation.
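The comparison and deviation-evaluation steps can be sketched as follows; a minimal Python illustration assuming the actual and target designs are sampled as per-layer height maps, with an illustrative tolerance that is not a value from the patent:

```python
import numpy as np

def evaluate_deviation(actual, target, tolerance_mm=0.05):
    """Compare a reconstructed height map against the target geometry.

    Returns a boolean mask of out-of-tolerance pixels and the worst
    absolute deviation. `tolerance_mm` is an illustrative threshold.
    """
    deviation = actual - target                 # signed error per pixel
    mask = np.abs(deviation) > tolerance_mm     # out-of-tolerance areas
    worst = float(np.max(np.abs(deviation)))
    return mask, worst

# toy example: a 3x3 layer with one low spot of 0.10 mm
target = np.full((3, 3), 1.00)
actual = target.copy()
actual[1, 1] = 0.90
mask, worst = evaluate_deviation(actual, target)
```

A real deviation evaluation would then classify the flagged regions (e.g. by size and shape) before deciding whether to adjust parameters or cancel the print.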
- the camera system is a stereo camera system that simultaneously records two images of the production object, stores them, and determines the local depth map from the two simultaneously recorded images.
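For a calibrated, rectified stereo pair, depth follows from disparity by the standard pinhole relation depth = focal_length * baseline / disparity; the numbers below (focal length, baseline, disparity) are illustrative, not values from the patent:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Pinhole stereo model: depth = f * B / d.

    Assumes rectified images; disparity is in pixels, focal length in
    pixels, baseline in millimeters, so the result is in millimeters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# e.g. 1000 px focal length, 50 mm baseline, 25 px disparity
depth = disparity_to_depth(25, 1000, 50)
```

Applying this relation per pixel to a disparity image yields the local depth map described above.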
- the image depth map determination step, the overall depth map determination step, the 3-dimensional design determination step, the comparison step, the deviation determination step and/or the evaluation step take place after production of the object has been completed.
- alternatively, the depth map determination step, the 3-dimensional design determination step, the comparison step, the deviation determination step and/or the evaluation step take place after completion of one of the manufacturing steps of the production of the object.
- the local depth map is determined by transforming a first recorded image so that it corresponds to a second recorded image; during this transformation, the local depth map is generated by a depth map determination algorithm from the class of supervised or self-supervised machine learning methods.
- the depth map determination algorithm is obtained by training an artificial neural network using at least one known first recorded image, one known second recorded image and one known local depth map.
- during image acquisition, the speed at which the camera system moves is taken into account.
- a change in position or orientation of the camera system is detected, and the position, change in position, orientation or change in orientation of the camera system is used as an input variable in the transformation of the first image recording into the second image recording, from which the depth map is determined.
- the camera position is determined from the path information, from the relative displacement of the recorded images, from position sensor information, or from information from the encoders of the camera system's servomotors.
- the 3-dimensional design is determined using algorithms such as power crust, marching cubes, Poisson surface reconstruction or Hoppe surface reconstruction. It is also disclosed that the camera system can detect light in the infrared range, in particular in the near infrared range, and that the method also has the step of recording a temperature map of the production object during a manufacturing step.
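Before surface reconstruction, the per-image depth maps must be merged into the total depth map. The patent does not specify the merging rule; the sketch below assumes a simple per-pixel mean over valid (non-NaN) observations:

```python
import numpy as np

def fuse_depth_maps(depth_maps):
    """Fuse per-image depth maps (NaN = no measurement at that pixel)
    into one total depth map by averaging the valid observations.

    This mean fusion is a stand-in for the patent's unspecified
    merging step; weighted or confidence-based fusion would also fit.
    """
    stack = np.stack(depth_maps)
    with np.errstate(invalid="ignore"):
        fused = np.nanmean(stack, axis=0)
    return fused

# two overlapping 2x2 depth maps from different camera positions
a = np.array([[1.0, np.nan], [2.0, 4.0]])
b = np.array([[3.0, 5.0], [np.nan, 4.0]])
fused = fuse_depth_maps([a, b])
```

The fused map can then be handed to a surface-reconstruction algorithm such as marching cubes or Poisson reconstruction.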
- a method for detecting manufacturing defects during additive manufacturing of a manufacturing object is also disclosed, comprising the steps of: providing a detection laser system on or in a build space of the additive manufacturing device; manufacturing the production object in several manufacturing steps using a production laser in the build space of the additive manufacturing device; and using a laser in the build space of the additive manufacturing device to determine properties of the manufacturing object.
- the determined properties are a 3-dimensional design and/or a surface temperature.
- the method also has the steps: comparing the 3-dimensional design with a 3-dimensional target design; and determining deviations between the 3-dimensional design and the 3-dimensional target design.
- the method further comprises the steps: evaluating the deviation determined in the deviation determination step; and, if a deviation is determined, outputting a deviation evaluation.
- the 3-dimensional design is determined by means of pulsed distance measurement, AMCW (amplitude-modulated continuous-wave) distance measurement or FMCW (frequency-modulated continuous-wave) distance measurement.
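These distance-measurement principles reduce to simple range formulas; the pulsed and FMCW variants are sketched below with illustrative signal values that are not taken from the patent:

```python
C = 299_792_458.0  # speed of light in m/s

def pulsed_tof_distance(round_trip_s):
    """Pulsed time-of-flight: d = c * t / 2 for round-trip time t."""
    return C * round_trip_s / 2.0

def fmcw_distance(beat_hz, sweep_bandwidth_hz, sweep_time_s):
    """FMCW: the beat frequency between the emitted and the reflected
    chirp is proportional to range, d = c * f_beat * T / (2 * B),
    for sweep bandwidth B over sweep time T."""
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

d1 = pulsed_tof_distance(2e-9)       # 2 ns round trip
d2 = fmcw_distance(1e6, 1e9, 1e-3)   # 1 MHz beat, 1 GHz sweep over 1 ms
```

AMCW works analogously, recovering range from the phase shift of an amplitude-modulated carrier rather than from a beat frequency.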
- the detection laser is the manufacturing laser of the additive manufacturing device.
- the detection laser is a laser separate from the manufacturing laser of the additive manufacturing device.
- the method further comprises the steps of: providing a partially reflecting mirror arranged such that a beam of the laser passes through from the transparent side and a beam reflected from the production object is reflected by the partially reflecting mirror onto a fixed mirror, the surfaces of the partially reflecting mirror and the fixed mirror being designed such that a beam reflected by the production object is reflected onto a one-dimensional, preferably circular, area; and providing at least one sensor on the one-dimensional area which is set up to detect the reflected beam.
- the method also has the steps: providing a reflector rotating about a first axis; providing a fixed mirror which is shaped as a circular segment perpendicular to the first axis, is arranged such that the rotating reflector is located at the center of the circular segment, and has a surface such that a beam traveling radially to the circular segment is reflected parallel to the first axis and focused onto a build layer; providing a laser receiving device; rotating the rotating reflector about the first axis; directing a laser beam onto the rotating reflector; reflecting the laser beam by the rotating reflector toward the fixed mirror; reflecting the laser beam by the fixed mirror toward the production object; reflecting the laser beam by the production object toward the fixed mirror; reflecting the reflected laser beam by the fixed mirror toward the rotating reflector; reflecting the reflected laser beam by the rotating reflector to a laser receiver; and detecting the reflected laser beam by the laser receiving device.
- the device also has: a laser receiving device set up to receive a laser beam reflected by a workpiece; and a calculation unit set up to calculate a transit time from the orientation, intensity, frequency, modulation or reception time of the laser relative to its emission, and thus to determine the dimensions of the workpiece.
- the device also has: a laser emitter arranged and set up to emit a laser beam by means of which, in cooperation with the laser receiving device, the transit time of the laser from its emission can be determined and thus the dimensions of the workpiece can be determined.
- the device has: an arrangement of optically active elements designed such that the laser beam impinges perpendicularly on the build layer in the detection area.
- the device has: a reflector rotatable about a first axis; and a fixed mirror shaped as a circular segment perpendicular to the first axis, arranged such that the rotatable reflector is located at the center of the circular segment, and having a surface such that a beam running radially to the circular segment is reflected parallel to the first axis and focused onto a build layer.
- a method for preventing manufacturing defects during additive manufacturing of a manufacturing object is also disclosed, comprising the steps of: providing a detection system in the additive manufacturing device for detecting at least one property of the manufacturing object; manufacturing the object in several manufacturing steps, the detection system determining the at least one property of the object during production; comparing the at least one property with a target state of the property; determining deviations between the at least one property and the target state of the property; evaluating the deviation determined in the deviation determination step; if deviations are determined, determining an adjustment of at least one process parameter, based on the deviation determined in the deviation determination step, to reduce the deviation in the further course of production; and applying the adjustment of the at least one process parameter determined in the adjustment determination step.
- the at least one process parameter is one or more from the group of build chamber temperature, production laser intensity and production speed.
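One step of such a closed-loop adjustment can be sketched as a proportional correction of laser power from a melt-pool temperature deviation; the gain and power limits here are illustrative assumptions, not values from the patent:

```python
def adjust_laser_power(power_w, temp_actual_k, temp_target_k,
                       gain=0.05, p_min=100.0, p_max=400.0):
    """One proportional correction step: lower the laser power when the
    melt pool runs hot, raise it when it runs cold, then clamp the
    result to the machine's admissible power range."""
    error = temp_target_k - temp_actual_k
    new_power = power_w + gain * error
    return max(p_min, min(p_max, new_power))

# melt pool 100 K too hot: power is reduced by 5 W
p = adjust_laser_power(200.0, temp_actual_k=1900.0, temp_target_k=1800.0)
```

A real controller would add integral/derivative terms or a learned policy, but the compare-then-correct structure matches the steps listed above.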
- the detection system is a camera system and/or a projector camera system and/or a laser detection system for detecting a three-dimensional configuration of the production object or for detecting a surface temperature of the production object.
- the adjustment is determined in the adjustment determination step by means of a model-free algorithm.
- the adjustment is determined in the adjustment determination step by means of an artificial neural network for modeling transition probabilities or optimal action sequences.
- An action is understood as the adjustment of a process parameter as a result of a change in a state variable, for example the temperature in the melt pool.
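A minimal model-free scheme of this kind is tabular Q-learning; the discretisation into melt-pool temperature bands and power actions below is an illustrative assumption, not taken from the patent:

```python
def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """One tabular Q-learning update (model-free):
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next
                                 - Q[state][action])

states = ["cold", "ok", "hot"]                # melt-pool temperature bands
actions = ["power_down", "hold", "power_up"]  # process-parameter actions
Q = {s: {a: 0.0 for a in actions} for s in states}

# reward +1: raising power moved the melt pool from "cold" into "ok"
q_learning_update(Q, "cold", "power_up", reward=1.0, next_state="ok")
```

The neural-network variants mentioned above replace this table with a function approximator over continuous states, but the update structure is the same.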
- the adjustment is determined in the adjustment determination step by means of a model-based algorithm.
- the adjustment is determined in the adjustment determination step using an artificial neural network for modeling optimized sequences of changes in process parameters based on system modeling for transferring the additive manufacturing device from an actual state to a target state.
- the property is a change in the heat radiation of a layer caused by the application of a layer of raw material: in a first step, before the raw material layer is applied, a first temperature distribution on the visible surface of the layer is recorded; in a second step, carried out after the raw material layer has been deposited, a second temperature distribution on the visible surface of the raw material layer is recorded; and in a third step, the first and second temperature distributions are compared and areas are identified in which the second temperature distribution deviates from a setpoint.
- a temperature difference map is formed from the first temperature distribution and the second temperature distribution and the temperature differences determined in the temperature difference map are compared with expected temperature differences as setpoints in order to identify the areas in which the temperature differences deviate from the setpoints.
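The temperature-difference-map check can be sketched as follows; the expected temperature drop and the tolerance are illustrative assumptions, not values from the patent:

```python
import numpy as np

def temperature_difference_map(before, after, expected_drop_k, tol_k=5.0):
    """Flag areas where the temperature drop caused by recoating differs
    from the expected drop by more than `tol_k` kelvin.

    `before`/`after` are temperature distributions (K) recorded before
    and after the raw-material layer is applied.
    """
    diff = before - after                        # cooling from fresh powder
    return np.abs(diff - expected_drop_k) > tol_k

before = np.array([[500.0, 500.0], [500.0, 500.0]])
after = np.array([[450.0, 450.0], [450.0, 420.0]])  # one spot cooled too much
flags = temperature_difference_map(before, after, expected_drop_k=50.0)
```

A pixel that cools much more (or less) than expected indicates, for example, an uneven powder layer at that location.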
- a device for additive manufacturing is disclosed, which is set up to carry out a method according to the preceding method claims.
- a method for detecting manufacturing errors during the additive manufacturing of a manufacturing object, which has the steps: providing a camera system on or in a construction space of an additive manufacturing device; determining a camera system path of the camera system during the manufacturing of the manufacturing object in several manufacturing steps, wherein the camera system travels the camera system path during production and takes and saves images of the production object at predetermined positions; determining an image depth map for at least one sequence of at least two images; determining a total depth map from all image depth maps for a production step; determining a 3-dimensional design of the production object from the total depth map; using a laser to scan a construction layer in the installation space of an additive manufacturing device to determine properties of a manufacturing object.
- the determined properties are a 3-dimensional configuration and/or a surface temperature.
- the method also has the steps: comparison of the 3-dimensional configuration with a 3-dimensional target configuration, determination of deviations between the 3-dimensional configuration and the 3-dimensional target configuration, evaluation of the deviations determined in the deviation determination step and, if deviations are determined, output of a deviation evaluation.
- the camera system is a stereo camera system that simultaneously records and stores two images of the production object, and the determination of the local depth map is determined from the two simultaneously recorded images.
- the determined properties are a 3-dimensional configuration and/or a surface temperature.
- the method also has the steps: calibrating the laser by means of an image recording generated by the camera system; generating a laser disparity map from the data determined with the laser for the same image section as the image recording made by the camera system; generating an image disparity map from the data determined with the camera system; generating a total disparity map from the laser disparity map and the image disparity map; and generating a 3-dimensional design of the manufacturing object based on the total disparity map.
- the method further comprises the steps of: providing a partially reflecting mirror which is arranged such that a beam of the laser passes through it from the transparent side and a beam reflected from the production object is reflected by the partially reflecting mirror onto a fixed mirror, the surfaces of the partially reflecting mirror and of the fixed mirror being designed in such a way that a beam reflected by the production object is reflected onto a one-dimensional, preferably circular, area; and providing at least one sensor on the one-dimensional area, which sensor is set up to detect the reflected beam.
- the surfaces of the partially reflecting mirror and of the fixed mirror are designed to be quasi-parabolic. It is also disclosed that the method has the steps: providing a reflector rotating about a first axis; providing a fixed mirror which is designed in the shape of a segment of a circle perpendicular to the first axis, is arranged in such a way that the rotating reflector is at the center of the circle segment, and has a surface such that a beam running radially to the circle segment is reflected parallel to the first axis and is focused onto a building layer; providing a laser receiving device; rotating the rotating reflector about the first axis; directing a laser beam onto the rotating reflector; reflecting the laser beam by the rotating reflector towards the fixed mirror; reflecting the laser beam by the fixed mirror towards the production object; reflecting the laser beam as a reflected laser beam by the production object towards the fixed mirror; reflecting the reflected laser beam by the fixed mirror towards the rotating reflector; and reflecting the reflected laser beam by the rotating reflector towards the laser receiving device.
- the image depth map determination step, the total depth map determination step, the 3-dimensional configuration determination step, the adjustment step, the deviation determination step and/or the evaluation step take place after the production of the production object has been completed.
- the depth map determination step, the 3-dimensional design determination step, the adjustment step, the deviation determination step and/or the evaluation step take place after the completion of one of the production steps of the production of the production object.
- the detection laser is a separate laser from the manufacturing laser of the additive manufacturing apparatus.
- FIG. 1 shows a representation of a first exemplary embodiment of a device according to the invention for constructing a three-dimensional production object.
- FIG. 2 shows an installation space of the device from FIG. 1.
- FIG. 3 shows a sensor arrangement of the first exemplary embodiment.
- FIG. 4 shows a representation of a second exemplary embodiment of the device according to the invention for constructing a three-dimensional production object.
- FIG. 5 shows a representation of a third exemplary embodiment of the device according to the invention for constructing a three-dimensional production object.
- FIG. 6 shows a representation of the stereoscopic determination of depth.
- FIG. 7 shows a point cloud and an associated triangle mesh (surface representation).
- FIG. 8 shows a flow chart of a method according to the invention for 3D reconstruction.
- FIG. 9 shows an illustration of a first modification of the devices according to the invention for constructing a three-dimensional production object according to the exemplary embodiments.
- FIG. 10 shows a representation of a second modification of the devices according to the invention for constructing a three-dimensional production object according to the exemplary embodiments.
- FIG. 11 shows a sectional view of the second modification of the devices according to the invention for constructing a three-dimensional production object according to the exemplary embodiments.
- a basic structure of a first exemplary embodiment of a device for generating a three-dimensional object (hereinafter also referred to as generating device, laser sintering device, LSM or printer) by layer-by-layer solidification of a building material, which in one embodiment is designed as a laser sintering device, is described below with reference to FIG. 1.
- layers of the construction material are applied successively on top of one another and the locations in the respective layers corresponding to the object to be manufactured are each selectively solidified before the application of a subsequent layer.
- a powdered build material is used which is solidified at selected locations by exposure to an energy beam.
- the powdery building material is heated locally at the selected points by means of a laser beam, so that it is connected to neighboring components of the building material by sintering or melting.
- the generating device 1 has an optical system whose components are each fixed to the machine frame.
- a construction space 10 is provided in the machine frame.
- the optics system includes a laser 6, a deflection mirror 7 and a scanner 8.
- the laser 6 generates a beam 9 which hits the deflection mirror 7 and is deflected by it in the direction of the scanner 8.
- the scanner 8 is designed in a known manner in such a way that it can direct the incoming beam 9 to any points in a construction layer (current layer) 11 that is located in the construction space 10 .
- an entry window 12 is provided between the scanner 8 and the construction space 10 in an upper partition wall 5 of the construction space 10 , which allows the beam 9 to pass through into the construction space 10 .
- an f-θ lens 14 is placed in the path of the beam 9 and directs the beam 9 perpendicularly onto the build layer 11.
- a container 25 open at the top is provided in the installation space 10 .
- a carrier device 26 for carrying a three-dimensional object to be formed is arranged in the container 25 .
- the carrier device 26 can be moved back and forth in the vertical direction in the container 25 by means of a drive (not shown).
- a build plate 13 is provided at the upper end of the carrier device 26 and can be moved vertically with the carrier device 26 .
- the three-dimensional object is arranged on the building board 13 during its manufacture.
- in the area of the upper edge of the container 25, a coater 27 (recoater) is provided for applying build-up material to be solidified onto the surface of the carrier device 26 or a previously solidified layer.
- the coater 27 can be moved in the horizontal direction over the construction layer 11 by means of a drive indicated schematically by arrows in FIG.
- Metering devices 28 are provided on both sides of the building layer 11, which provide a predetermined quantity of the building material for the coater 27 to apply.
- a supply opening 30 is provided on the side of the dosing device 28 .
- the feed opening 30 extends across the entire width of the building layer 11 in the direction perpendicular to the plane of representation in FIG.
- the construction space in the embodiment is divided into an upper area 40 and a lower area 41 .
- the upper area 40 forms the actual work area in which the building material is applied in layers and selectively solidified.
- the layers extend in the x-direction and y-direction in a horizontal plane.
- the lower area 41 receives the container 25.
- some components are produced by a method of building up a three-dimensional element layer by layer, by selectively solidifying the locations corresponding to the object in the respective layers.
- a laser sintering method is used to manufacture it.
- the construction material is fed into the construction space 10 via the feed opening 30 and fed to the coater 27 in a predetermined quantity with the metering devices 28, 29.
- the coater 27 applies a layer of the build material onto the surface of the support device 26 or a previously solidified layer.
- the beam 9 is directed to selected positions in the building layer 11 in order to selectively solidify the building material at the locations corresponding to the three-dimensional object to be formed.
- the carrier device is then lowered by the thickness of one layer, a new layer is applied and the process is repeated until all layers of the object to be formed are produced.
- a sensor arrangement is explained with reference to FIG. 3.
- a camera system 50 is provided in the upper area 40 of the installation space.
- a first camera module 52 and a second camera module 53 are provided on a carriage 51 .
- the first camera module 52 and the second camera module 53 are aligned essentially downwards and are arranged non-rotatably relative to one another at a fixed, known distance d.
- the carriage 51 is attached to a bracket 56 in a rail system.
- the carriage 51 can be displaced along the horizontal x-axis via a third servomotor 57 .
- the carrier 56 can be displaced along the horizontal y-axis via rails 58 and servomotors 59 . Due to this arrangement, the carriage 51 and thus the first camera module 52 and the second camera module 53 can be moved freely in the plane spanned by the x and y axes and can be aligned with any point on the workpiece.
- a LiDAR system is implemented by the device for generating a three-dimensional object.
- a laser emitter is provided and a measuring laser beam is generated by the laser emitter, which is directed by a deflection device onto the object to be detected—in the present case the construction layer 11 .
- the deflection device is suitable for directing the laser beam to each point of the building layer 11 whose depth is to be measured.
- the laser beam is reflected from the surface of the object to be measured, i.e. from the surface of the building layer 11 in this case.
- a laser receiver is also provided, which detects the reflection of the laser beam.
- from the signal propagation time, the distance is determined for each measuring point, with knowledge of the arrangement of the laser emitter and the laser receiver and of the deflection of the deflection device.
- the production laser 6 is used as a measuring laser, with the deflection mirror 7 and the scanner 8 serving as a deflection device.
- the laser receiver is arranged on the outside of or in the installation space 10 and aligned with the building layer 11.
- a second exemplary embodiment of the invention is explained with reference to FIG. 4. Apart from the differences described below, the second exemplary embodiment is identical to the first exemplary embodiment.
- a camera system 50' is provided on the carriage 51.
- the camera system has a camera module 52'.
- the camera module 52′ can be rotated about the vertical z-axis via a first servomotor 54 and its alignment in the vertical plane can thus be changed.
- the camera module 52' can be rotated about a horizontal axis, i.e. an axis perpendicular to the z-axis, via a second servomotor 55, and the orientation can thus be changed downwards.
- the carriage 51 and thus the camera module 52' can be moved freely in the plane spanned by the x and y axes and the alignment of the first camera module 52' can be aligned with any point on the workpiece.
- a third exemplary embodiment of the invention is explained with reference to FIG. 5. Apart from the differences described below, the third exemplary embodiment is identical to the first exemplary embodiment.
- a circular bearing ring 61 is provided in the upper area 40 of the installation space 10 and is connected to the housing in a stationary manner. Within the bearing ring 61, a rotary ring 62 is rotatably mounted. The rotary ring 62 can be freely rotated within the bearing ring 61 via the servomotor 63.
- a first camera module 52″ is attached to the rotary ring 62. The camera module 52″ is aligned radially inwards and is inclined downwards by the angle of inclination α in the direction of the building layer 11. The angle of inclination α is preselected and fixed.
- Every area of the construction layer 11 can be detected by the camera module.
- further camera modules, for example a second camera module 53″, can be attached to the rotary ring.
- the second camera module 53″ is aligned radially inwards and is inclined downwards in the direction of the building layer 11 by an angle of inclination β.
- the angle of inclination β is preselected, fixed and different from the angle of inclination α.
- this arrangement allows stereoscopic capture in one pass, faster stereoscopic capture with one camera module each through parallel capture by multiple cameras, or larger area coverage.
- the angles of inclination α and β can each be changed by a servomotor.
- the area covered by the camera module 52″ and the camera module 53″ can thus be changed. This allows camera modules with a longer focal length to be used, which increases the resolution but reduces the area captured. Changing the angles of inclination α and β nevertheless ensures that the entire area of the building layer 11 remains detectable.
- a first method for in-situ defect detection defines a defect as a deviation (additive or reductive) between the surface of the printed component compared to the CAD model that goes beyond a specified tolerance.
- the component is continuously reconstructed in three dimensions during printing and compared with the model, typically in a CAD format such as STL or STEP, in order to detect deviations.
- a method for calculating the distance is the so-called LiDAR system, which emits a laser beam and measures (directly or indirectly) the time until the reflection of the emitted beam returns to the system. The distance to an object can then be determined from the transit time and the speed of light.
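The transit-time relation underlying such a LiDAR system can be sketched in a few lines; the function name is an illustrative assumption:

```python
# Speed of light in vacuum, m/s
C = 299_792_458.0

def lidar_distance(round_trip_time_s: float) -> float:
    """Distance from time of flight: the beam travels to the object
    and back, so the one-way distance is c * t / 2."""
    return C * round_trip_time_s / 2.0

# a reflection arriving after ~6.67 ns corresponds to roughly 1 m
d = lidar_distance(6.671e-9)
```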
- Stereo cameras are systems consisting of two or more cameras that generate two or more than two images of the same scene at the same time and determine the depth from them.
- two images of the same scene are recorded with a movably arranged camera at different times and in different locations, and the depth is determined from them. Since both systems have already determined the depth dimensions, the reconstruction problem is reduced to the segmentation of geometric objects, i.e. determining which depth point belongs to the object of interest and which depth point belongs to the surface of the unprocessed powder bed and can be ignored.
- Thermal segmentation assumes that a thermogram exists that includes the objects as well as at least part of the surrounding powder bed.
- the camera modules are set up to detect thermal radiation.
- the temperature gradient (dT) can be used to determine whether a pixel belongs to the powder bed or to the solid object. The rule is then dT > S, where S is a lower bound determined experimentally.
- the probability (T) that the material covered by a pixel has melted can be determined for each pixel. For this, the known physical properties of the material and corresponding simulation methods (finite elements or similar) are used. Once the probability is calculated, an appropriate bound (P) is chosen: if T > P, the pixel belongs to the solid object, otherwise to the powder bed. It should be noted that incomplete-fusion holes would also have a low probability. Therefore, in a last step, the surfaces marked as not melted have to be found. Areas with T < P and an area smaller than X square pixels are ignored and therefore belong to the lasered area; X is to be selected appropriately.
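The thresholding rule and the small-area exception for incomplete-fusion holes described above can be sketched as follows; the 4-connected region search and all names and thresholds are illustrative assumptions:

```python
import numpy as np
from collections import deque

def segment_melted(melt_prob, p_bound, min_area):
    """Thermal segmentation sketch: pixels with melt probability T > P
    belong to the solid object.  Unmelted regions smaller than
    `min_area` pixels are treated as belonging to the lasered area
    (candidate incomplete-fusion holes rather than powder bed)."""
    solid = melt_prob > p_bound
    unmelted = ~solid
    visited = np.zeros_like(unmelted, dtype=bool)
    h, w = unmelted.shape
    for i in range(h):
        for j in range(w):
            if unmelted[i, j] and not visited[i, j]:
                # collect the 4-connected region of unmelted pixels
                region, queue = [], deque([(i, j)])
                visited[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and unmelted[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(region) < min_area:       # small hole inside the part
                    for y, x in region:
                        solid[y, x] = True       # count it as part of the object
    return solid

prob = np.full((5, 5), 0.9)        # mostly well-melted part
prob[2, 2] = 0.1                   # single suspicious pixel inside the part
prob[:, 4] = 0.05                  # a strip of unmelted powder bed
solid = segment_melted(prob, p_bound=0.5, min_area=3)
```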
- in thermal segmentation, it is advantageous to transform the coordinates found into the image coordinates of the depth maps, the point cloud and/or the RGB image. Inaccuracies can be avoided if the resolution of the thermal image is sufficiently high relative to the optical image, and ideally the resolution of the thermal image corresponds to that of the optical image. This can be achieved, for example, by a sufficiently high resolution of the thermal image acquisition, by multiple measurements or by measurements with several offset cameras.
- another option for segmentation is optical segmentation.
- segmentation takes place via the depth map or the point cloud.
- the determination is made via the depth map gradient (dT), i.e. the difference in depth between neighbouring pixels. If the depth gradient dT > S, where S is a threshold determined experimentally and roughly given by a factor of the density ratio between the raw powder and the fused powder and the thickness of a powder layer, then an edge is assumed to exist here. The shell is determined on the basis of the points found. Otherwise, the same procedure as in thermal segmentation is followed.
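The depth-gradient rule for optical segmentation can be sketched as follows; treating the threshold S as the product of the density ratio and the layer thickness, and all names, are illustrative assumptions:

```python
import numpy as np

def depth_edges(depth, layer_thickness, density_ratio):
    """Optical segmentation sketch: mark an edge wherever the depth
    jump between neighbouring pixels exceeds a threshold S derived
    from the raw/fused powder density ratio and the layer thickness."""
    s = density_ratio * layer_thickness
    gx = np.abs(np.diff(depth, axis=1))   # horizontal neighbour differences
    gy = np.abs(np.diff(depth, axis=0))   # vertical neighbour differences
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, :-1] |= gx > s               # mark both pixels of a jump
    edges[:, 1:] |= gx > s
    edges[:-1, :] |= gy > s
    edges[1:, :] |= gy > s
    return edges

# toy layer: a 0.2 mm step between fused part and raw powder
depth = np.zeros((3, 4))
depth[:, 2:] = 0.2
edges = depth_edges(depth, layer_thickness=0.05, density_ratio=2.0)  # S = 0.1
```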
- the segmentation can also be carried out/learned using unsupervised or self-supervised methods.
- the CAD can be used as a prior to further improve the process or to achieve more precise results.
- a point network (mesh) must then be calculated on the basis of the point cloud in order to be able to compare the geometry with a CAD file, for example in STL format.
- a problem with the use of LiDAR is that presumably not all types of defects can be detected, for example locations that have not been lasered or have been insufficiently lasered (incomplete fusion).
- stereo cameras can compensate for this, since they are able to capture textures in addition to depth.
- the working hypothesis in this case is that there are differences in texture between a good and a bad section in the part.
- the challenge with stereo cameras is their accuracy, which decreases with the square of the distance to be measured: the further away the workpiece is from the camera, the larger the area covered by a single camera pixel.
- the invention proposes solutions for the use of a LiDAR and for the use of stereoscopy in defect detection in additive manufacturing. Furthermore, it is proposed to merge the information from LiDAR and stereoscopy in order to achieve the highest possible accuracy in the reconstruction.
- the sensors represent the data in different structures
- stereo cameras provide two images in RGB space
- lidar sensors generate point clouds containing the distance to the object and the orientation relative to the sensor. Therefore, it is necessary to use a multi-stage procedure, which in the first step finds a uniform representation of the input data between the different sensor modalities and then calculates the 3D representation of the object from it.
- the accurate depth is estimated, i.e. it is determined how far the object imaged in a pixel of the camera is from the camera.
- a camera system 50 is installed in the build chamber 10 for this purpose.
- the camera system 50 is preferably a stereo camera 50.
- the camera system can move freely in the x-y direction, i.e. in the horizontal plane, in a translatory manner.
- a translational movement in the z-direction (height direction) is not provided in the first exemplary embodiment, since moving in the height direction removes the camera from the working area and the error in the depth estimation of a stereo camera increases quadratically with the distance to the object. Therefore, the camera system 50 should be placed as close as possible (optically or physically) to the printed object.
- the camera system 50 can be rotated about the x-axis and the y-axis.
- multiple cameras can also be attached in order to increase the scanning speed, for example in a row, so that the aggregated field of view covers a larger part of the installation space in either the x-direction or the y-direction.
- the depth estimation method begins in a path determination step S20 with the determination of an optimal path for the camera system 50 through the upper installation space 40.
- the path depends on various factors such as the progress in the manufacturing process, lighting conditions and the speed required to complete the scan in time.
- the scanning process begins in a scanning step S30.
- the camera system 50 follows the path determined in the path determination step S20 and takes pictures of the workpiece at predetermined positions.
- the image depth map itself is determined in several stages and begins in an image depth map determination step S40 with the calculation of an individual depth map for each image pair.
- an image A which was taken from perspective A
- the depth map is generated by a depth map determination algorithm, for example an algorithm from the class of self-supervised machine learning methods. Learning is done by transforming a known image into another known image.
- the images of a stereo image pair can be used for this purpose, i.e. the algorithm learns to transform the right image into the left image and vice versa.
- consecutive images of a video sequence can also be used, since the scene to be reconstructed is static, i.e. none of the objects to be reconstructed is in motion.
- both are combined and a temporal sequence of stereo pairs (e.g. 3 images) is used for the learning.
- the transformation error, i.e. the discrepancy between the transformed image and the correct image, is used to improve the algorithm during learning.
- Several neural networks are used here, which learn the depth estimation.
- the depth map can be determined using trigonometric algorithms.
- the epipolar geometry describes with sufficient accuracy how the depth (z-axis) of the respectively imaged regions can be determined from a pair of stereo images.
- the central challenge associated with this is to find the corresponding pixels in the stereo image pair (correspondence problem), i.e. to find out which pixel from the first image depicts the same area of the imaged object as a certain pixel from the second image.
- the depth can be estimated from the disparity, i.e. the pixel shift between the left and right image.
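The relation between disparity and depth follows from the epipolar geometry of a calibrated stereo pair; a minimal sketch (parameter names are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d: depth is inversely
    proportional to the pixel shift (disparity) between the
    corresponding pixels of the left and right image."""
    return focal_px * baseline_m / disparity_px

# f = 1000 px, baseline 10 cm, 20 px disparity -> 5 m depth
z = depth_from_disparity(1000.0, 0.10, 20.0)
```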
- the invention teaches:
- Another advantage of the approach is that a mono camera can be used instead of a stereo camera.
- the estimated depth maps for the left and right images in the stereo image pair are averaged:
- the function f is estimated by minimizing the photometric reprojection error.
- the concept behind this is to reconstruct the other image from the depth map estimated for one image of the stereo pair and to minimize the error that arises in the reconstruction.
- the disparity is learned implicitly, because in order to learn one image from the other of a stereo pair, the algorithm has to learn how to move the individual pixels in the image plane in order to reconstruct the other image.
- L_p computes the reconstruction error between the original image I_t and the image Î_t reconstructed from the depth map.
- L_SSIM contains the Structural Similarity Index (SSIM), which models the perceived similarity between two images. The range of the function is [−1, 1], where 1 is reached when both images are exactly the same.
- L_norm describes the absolute pixel error between the two images.
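The loss components above can be combined as is common in the self-supervised depth literature; the weighting α = 0.85 and the single-window (global) SSIM are simplifying assumptions, not taken from the disclosure:

```python
import numpy as np

def ssim(a, b, c1=0.01**2, c2=0.03**2):
    """Global SSIM over the whole image (the usual implementation
    uses local windows; a single window keeps the sketch short)."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def photometric_loss(i_t, i_rec, alpha=0.85):
    """L_p as a weighted sum of an SSIM term and the absolute
    pixel error (L1) between original and reconstructed image."""
    l_ssim = (1.0 - ssim(i_t, i_rec)) / 2.0
    l1 = np.abs(i_t - i_rec).mean()
    return alpha * l_ssim + (1.0 - alpha) * l1
```

For identical images the loss vanishes; any reconstruction error increases it.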
- the images are indexed with t since the optimization criterion is still averaged over a context.
- part of the context is the image of the stereo pair for which no depth map estimate was determined, as well as the stereo pairs before and after the target image. That is, for the estimation based on image I_t, the context vector consists of the other image of the stereo pair together with the stereo pairs at t−1 and t+1.
- the second component L_s of the optimization criterion ensures that the transitions between the depth estimates of two neighbouring pixels are smooth. This avoids abrupt changes in the disparity (gradient) between two neighbouring pixels, which are probably artifacts.
- the underlying assumption is that there are only a few exceptional cases where a large gradient can be justified. Note that d_t is the inverse depth and d̄ is the average inverse depth, so d_t* = d_t/d̄ is the mean-normalized inverse depth.
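An edge-aware formulation of the smoothness term L_s on the mean-normalized inverse depth, as commonly used in this literature, might look as follows (an illustrative stand-in, not the exact term from the text):

```python
import numpy as np

def smoothness_loss(depth, image):
    """Penalize gradients of the mean-normalized inverse depth
    d* = d / mean(d), but less so where the image itself has strong
    gradients (likely real object edges)."""
    d = 1.0 / depth
    d_star = d / d.mean()
    ddx = np.abs(np.diff(d_star, axis=1))     # disparity gradients
    ddy = np.abs(np.diff(d_star, axis=0))
    idx = np.abs(np.diff(image, axis=1))      # image gradients (grayscale)
    idy = np.abs(np.diff(image, axis=0))
    # down-weight the penalty at image edges
    return (ddx * np.exp(-idx)).mean() + (ddy * np.exp(-idy)).mean()
```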
- the measured speed v is multiplied by the time Δt that has elapsed between the two images (I_s, I_t) in order to determine the actual translational change.
- the optimization criterion is then calculated as the difference between the distance actually covered and the estimated distance covered.
- the factor m ∈ {0, 1} is a binary mask that masks stationary pixels.
- the method described is based on the assumption that the scene for which a depth map is estimated is moving, either because the camera is actively moving or because a majority of the objects in the scene are moving. If the pixel values do not change between two consecutive images, the performance drops sharply. Therefore, it is proposed to filter stationary pixels in the optimization criterion through a binary mask.
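The velocity-based scale supervision described above can be sketched as follows; all names are illustrative:

```python
import numpy as np

def velocity_supervision_loss(t_est, v_measured, dt):
    """Scale supervision sketch: compare the norm of the estimated
    translation t_est between two frames with the distance actually
    travelled, v * dt, so the depth/pose network learns metric scale."""
    travelled = v_measured * dt
    return abs(np.linalg.norm(t_est) - travelled)

# camera moved at 0.5 m/s for 0.1 s -> 5 cm; the estimate says 6 cm
loss = velocity_supervision_loss(np.array([0.06, 0.0, 0.0]), 0.5, 0.1)
```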
- the so-called PackNet architecture is used, which is composed of several convolution layers for the encoder and decoder. Each layer in the encoder is followed by a packing operation that compresses the learned features and an unpacking operation that decompresses the features to produce the depth map.
- the depth map of the final layer is then used to reconstruct the images of the stereo pair omitted from the depth map estimation, or the same stereo image but with a time offset of ±1.
- the network architecture of the encoder is taken from V. Guizilini, R. Ambrus, S. Pillai, A. Raventos, and A. Gaidon, "3D Packing for Self-Supervised Monocular Depth Estimation," arXiv:1905.02693 [cs], Mar. 2020.
- a recurrent layer and a 1×1 convolution layer which reduces the dimension of the output to 6, i.e. 3 translational dimensions (Δx, Δy, Δz) and 3 rotational dimensions (φx, φy, φz).
- the recurrent layer makes it possible to use larger time horizons for estimating the change in configuration in order to get more stable estimates.
- in left-right reconstruction, the relative configuration change between the left and right lens of the stereo camera is known from the camera calibration.
- the correspondence between a pixel position in the left image and the right image can be established by translation and rotation.
- the pixel values for the reconstruction are then taken from the original image using bilinear sampling kernels.
- the pixel coordinates (x_s, y_s) result from the inverse transformation of the pixel coordinates (x_t, y_t) in the target image I_t with the intrinsic camera parameters and the corresponding transformation matrix A_q.
- the resolution of the depth maps that result in the intermediate layers of the decoder can be increased to the resolution at the output of the decoder by means of a nearest-neighbor interpolation.
- the depth maps scaled in this way are additionally reconstructed according to the method described above and taken into account in the optimization criterion.
- the optimization criterion is averaged over all reconstructions, pixels and the batch and can be trained using a standard method such as a gradient descent method. A distinction must be made between the initialization phase and the transfer phase.
- the coefficients of the model are estimated for the first time based on a training set that has been previously collected.
- the model is then ready for the first application.
- additional data is collected, which in turn can be used to improve the model.
- to obtain a complete point cloud from the individual scans, it is necessary to locate the measured depths in an absolute xyz reference system.
- the resolution improves with focal length. This follows from the fact that, for a given image size, the pixel density of the image plane also increases with increasing focal length.
- the depth map determined in this way is initially dimensionless.
- the speed at which a camera moves is taken into account during the transformation, i.e. in addition to the images, the speed of movement of the camera system 50 is an input parameter for the depth map determination algorithm. This allows the depth map determination algorithm to learn the correct unit of measurement. In addition, not only the translational speed of the camera system 50 is taken into account, but preferably also the rotational speed of the changes in orientation of the camera system 50. On the one hand, the depth estimation for an individual image can be improved; on the other hand, so can the accuracy of the subsequent combination of all local depth maps into a global depth map for the entire construction space.
- the accuracy of the depth estimation is significantly influenced by two types of error:
- the accuracy of the estimate decreases quadratically with the distance to the object (systematic error). This is because a single pixel covers a larger and larger spatial area in the world the further that area is from the camera. Since the depth is only ever estimated for a single pixel, depth changes that exist within a pixel are simply averaged out.
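The quadratic growth of the stereo depth error follows from propagating a disparity error through Z = f·B/d; a sketch under illustrative parameter names:

```python
def stereo_depth_error(z, focal_px, baseline_m, disparity_err_px=1.0):
    """Since z = f * B / d, a disparity error of sigma_d pixels
    propagates to sigma_z = z**2 * sigma_d / (f * B): the depth
    uncertainty grows with the square of the distance."""
    return z**2 * disparity_err_px / (focal_px * baseline_m)

# doubling the distance quadruples the depth error
e1 = stereo_depth_error(1.0, 1000.0, 0.1)
e2 = stereo_depth_error(2.0, 1000.0, 0.1)
```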
- the first error can be reduced by reducing the distance between the camera and the object, increasing the focal length of the lens, increasing the distance (baseline) between the stereo camera lenses, and keeping the distance travelled by the camera between two stereo pairs small in order to maximize the overlap.
- the overlap makes it possible to minimize the error in the global depth map generation step.
- the error introduced by the inadequacy of the model can be reduced by more data and a better model.
- the model error can be included in later calculations by assigning the corresponding uncertainty of the model with regard to the estimation to each estimated value for the depth.
- the Bayesian interpretation of probability is used here.
- the path planning with which the depth estimation started can be adjusted in such a way that areas with high uncertainty are approached again.
- a multiplicity of overlapping depth maps are created, which are converted into a uniform depth map for the entire installation space in an overall depth map determination step S50.
- a camera pose is the position and orientation of the camera.
- the aim is to align the points of the individual depth maps with each other in such a way that the most plausible global depth map is created.
- a number of methods are preferably combined in order to achieve increased accuracy.
- An estimate of the camera pose for each individual depth map has already been determined in the image depth map determination step S40. However, this estimate may still be imprecise.
- the camera is localized on the one hand based on the information from the rotary encoders of the motors that change the camera pose, from the inertial measurement unit (IMU) attached to the camera, or from an indoor localization technology (e.g. RFID, magnetometer, etc.). Furthermore, the camera can be localized based on the image information from the image recordings: individual features in the overlapping areas of the images are calculated (SIFT features). The relative poses of the images to one another can thus be determined and merged with the relative poses from the image depth map determination. The correspondences between the individual features in the image recordings are then filtered in order to reduce erroneous assignments. The depth maps are then divided into segments of n consecutive images each, and for each of these segments the camera poses are optimized so that all images are aligned with one another.
- a three-dimensional configuration of the workpiece is determined.
- the total depth map is converted into a point cloud by choosing a suitable coordinate system and calculating the xyz coordinates for each individual depth point.
- the individual points are enriched with secondary information that is necessary for further calculations, e.g. uncertainty and surface normals. If necessary, the point cloud is built up layer by layer. At the end of the print, a point cloud is obtained that includes all printed objects and all defects.
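The conversion of a depth map into xyz points can be sketched as a pinhole back-projection; the intrinsics (focal length, principal point) below are toy values, not parameters from the document:

```python
def depth_to_points(depth, f_px, cx, cy):
    """Back-project a dense depth map (list of rows) into a list of xyz points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            x = (u - cx) * z / f_px   # pinhole model: x = (u - cx) * Z / f
            y = (v - cy) * z / f_px
            points.append((x, y, z))
    return points

# 2x2 toy depth map, principal point at the image centre (assumed intrinsics)
pts = depth_to_points([[1.0, 1.0], [2.0, 2.0]], f_px=1.0, cx=0.5, cy=0.5)
```

In practice each point would additionally carry the secondary information mentioned above (uncertainty, surface normal).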
- Meshing of the point cloud from the last n layers (e.g. 20 ≤ n ≤ 100)
- subsequent defect detection is carried out in order to detect possible errors for the current layer and correct them with an appropriate laser correction.
- the number of layers n can possibly also be controlled adaptively.
- the aim is to determine a surface representation of the desired objects with the cloud of points obtained. This means that adjacent points are connected with edges and 3 edges form a triangle, which can be used to describe a surface in three-dimensional space.
- One of the most widespread formats is the so-called STL format, which is also used, among other things, in additive manufacturing and by the additive manufacturing device.
- a point cloud and the associated triangular network (mesh) are shown in FIG.
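A triangle mesh of this kind can be serialized to the STL format mentioned above; each ASCII STL facet stores a normal and three vertices. A minimal writer, with the normal computed via the cross product (no external libraries assumed):

```python
def facet_normal(a, b, c):
    """Unit normal of triangle (a, b, c) via the cross product of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    length = sum(x * x for x in n) ** 0.5 or 1.0
    return [x / length for x in n]

def to_ascii_stl(triangles, name="mesh"):
    """Write triangles as an ASCII STL string (solid/facet/vertex syntax)."""
    lines = [f"solid {name}"]
    for a, b, c in triangles:
        n = facet_normal(a, b, c)
        lines.append(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}")
        lines.append("    outer loop")
        for v in (a, b, c):
            lines.append(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = to_ascii_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])  # one facet in the xy plane
```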
- the transition can be implemented using a variety of different algorithms. The following algorithms are mentioned here as an example:
- Delaunay triangulation algorithms create a 3-dimensional object, i.e. the volume is filled with tetrahedra (volume-based); they are therefore only suitable to a limited extent because of the additional overhead.
- the surface reconstruction and the meshing could also be carried out after each lasered layer and in this way an attempt could be made to localize defects in each layer. This procedure is more complex and finding and classifying defects at the local level is more difficult.
- Each extracted area can now be compared with the model and checked for errors using the original CAD geometry. All recognized structures that lie inside a geometry (in the case of certain defects such as warping, possibly also structures outside a geometry) are to be regarded as potential sources of error and may need to be examined more closely, e.g. by calculating the volume or the associated bounding box to determine the size of the defect. At this point, a distinction must also be made between the following defect types:
- a graph is generated from the mesh; a common program library is sufficient for this. Each individual connected graph forms a separate object.
- the printed object or the type of defect can be determined with the bounding boxes and volume (size comparison). With this procedure, the defect extraction and classification can be efficiently implemented.
- a distinction can be made here between a macro and micro scale and thus appropriate countermeasures can be initiated, i.e. merging the defect (increased performance, time, parameters, etc.) or a print termination of the corresponding component.
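The bounding-box and volume comparison for the macro/micro distinction can be sketched as follows; the threshold value is purely illustrative:

```python
def bounding_box(points):
    """Axis-aligned bounding box (min corner, max corner) of a point set."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def classify_defect(points, macro_threshold_mm3=1.0):
    """Classify a connected defect by bounding-box volume (assumed threshold)."""
    lo, hi = bounding_box(points)
    volume = (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])
    return ("macro" if volume >= macro_threshold_mm3 else "micro"), volume

label, vol = classify_defect([(0, 0, 0), (0.1, 0.1, 0.1)])  # tiny defect
```

A macro classification would then trigger countermeasures such as re-melting or a print termination, as described above.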
- a defect file is created, which can be visualized with ParaView, for example.
- the individual defect types are indicated by different colors in order to be able to distinguish them from one another easily and quickly.
- the method described below makes fewer assumptions about the type of defect, only that it manifests itself robustly in the selected sensor, for example RGB or NIR/IR camera.
- the principle of the method is based on the observation that it is often easy to generate a large amount of "good" sensor data, i.e. data in which there is no defect. In this way, a generative model that explicitly or implicitly represents the (spatial) distribution of the "good" data can be learned.
- a defect is then defined as a data point to which the learned distribution assigns a low probability, i.e. the statement is made that the observed data point is probably not a "good" sensor datum.
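A minimal explicit generative model of this kind is a Gaussian fitted to "good" sensor readings; data points with low likelihood under the learned distribution are flagged. The readings and the log-probability threshold below are toy assumptions:

```python
import math

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of 1-D samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

def log_prob(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

good = [1.0, 1.1, 0.9, 1.05, 0.95]     # "good" sensor readings (toy data)
mean, var = fit_gaussian(good)

def is_defect(x, threshold=-5.0):       # assumed log-probability cut-off
    return log_prob(x, mean, var) < threshold

normal_flag = is_defect(1.0)            # within the learned distribution
defect_flag = is_defect(5.0)            # far outside it
```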
- the method has the following challenges:
- the camera system 40 is capable of detecting (near) infrared (NIR/IR).
- NIR/IR: near infrared
- a further individual camera is provided for this.
- the temperature on the surface of the component can be continuously measured during printing using a (near) infrared (NIR/IR) camera. This allows spatial and temporal temperature gradients to be determined, which provide information about the melting process and thus also about possible errors.
- the temperature is a defect characteristic that often stands at the beginning of the causal chain for the development of a defect, so that a defect can be identified earlier than with, for example, the depth-based method, for which the defect must already be present.
- A challenge faced by NIR/IR cameras is their spatial resolution, which is lower than that of RGB cameras, i.e. a pixel in an RGB image cannot directly correspond to a pixel in an IR image. Conversely, a pixel in the IR image is only the average of the implicit temperature values that would be determined if the NIR/IR camera had the same resolution as the RGB camera. This can mean that small defects cannot be detected (early). Furthermore, the resolution is often also lower than the diameter of the laser beam on the surface of the component, with the same effects as before.
- a method for detecting defects in the powder layer can be performed. This method comprises the following steps, for example: in a first step, which is carried out after a layer of the component has been completed, a first temperature distribution on the visible surface of the layer is recorded. In a second step, carried out after a powder layer of raw material has been applied, a second temperature distribution on the visible surface of the powder layer is recorded. In a third step, the first and second temperature distributions are compared with one another and areas are identified in which the second temperature distribution deviates from a target value.
- the third step can include, for example, forming a temperature difference map from the first and the second temperature distribution and comparing the temperature differences determined in the temperature difference map with expected temperature differences as setpoint values.
- the third step can include, for example, determining an expected temperature distribution from the first temperature distribution, for example using a simulation and/or using a method from the field of artificial intelligence, which forms the setpoint for the second temperature distribution. For example, any method described in this document can be used as a method from the field of artificial intelligence.
- the application of the powder can be corrected in the identified areas of the powder layer. In this case, the second and the third step can also be repeated.
- at least one process parameter can also be adjusted based on the determined deviation.
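The comparison of the two temperature distributions via a temperature difference map can be sketched as follows; the expected cooling and the tolerance are assumed values, not parameters from the document:

```python
def deviation_map(t_before, t_after, expected_delta, tolerance):
    """Flag pixels whose measured temperature change deviates from the target."""
    flagged = []
    for i, (row_b, row_a) in enumerate(zip(t_before, t_after)):
        for j, (b, a) in enumerate(zip(row_b, row_a)):
            if abs((a - b) - expected_delta) > tolerance:
                flagged.append((i, j))
    return flagged

t_layer = [[400.0, 400.0], [400.0, 400.0]]   # after the lasered layer (toy K values)
t_powder = [[320.0, 320.0], [320.0, 390.0]]  # after recoating; pixel (1,1) too warm
bad = deviation_map(t_layer, t_powder, expected_delta=-80.0, tolerance=10.0)
```

The flagged areas would then be candidates for a corrected powder application or a process-parameter adjustment.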
- an NIR/IR image can be recorded by the camera system 40, for example.
- the first and/or second temperature distribution can also be detected, for example, by detecting a radiant power emitted in the NIR/IR range on a large number of sections of the surface.
- the first and/or second temperature distribution are each a detectable result of a property of the production object.
- any other property can be used as the basis of the method.
- the camera system 40 can be provided with appropriate sensors, for example, or an additional detection device can be provided for detecting the property before and after application of the raw material.
- a laser pulse is directed onto the object to be measured, and the time difference between the emission of the pulse and the arrival of its reflection from the object is measured. Since the speed of light is constant for a given surrounding medium, the distance between the object and the laser can be determined from this.
- the geometry of the component can then be determined on the basis of the process parameters, e.g. layer index, and defects can be determined as described above.
- the distance d to the component is given as d = c · t_tof / 2, where c denotes the speed of light and t_tof the transit time of the light.
- the additive manufacturing device must be expanded to include a time measuring unit and a photodetector.
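The pulsed time-of-flight relation d = c · t_tof / 2 can be checked numerically:

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def tof_distance(t_tof_s):
    """Distance from the pulse round-trip time: d = c * t_tof / 2."""
    return C * t_tof_s / 2.0

d = tof_distance(2e-9)  # a 2 ns round trip corresponds to roughly 0.3 m
```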
- the laser in the AMCW method emits a constant wave.
- the radiant power of the laser is modulated with a reference signal.
- the phase shift of the reflection and the reference signal is then determined.
- the phase shift determined is a function of the distance, which can be calculated from this.
- the distance follows as d = c · ΔΦ / (4π · f_m), where ΔΦ describes the determined phase shift and f_m the frequency of the signal formed by the amplitudes of the reflected signal.
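Assuming the standard AMCW relation d = c · ΔΦ / (4π · f_m), the distance computation can be sketched as:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def amcw_distance(phase_shift_rad, f_mod_hz):
    """AMCW distance from phase shift: d = c * delta_phi / (4 * pi * f_m)."""
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

# pi radians of phase shift at 10 MHz modulation (assumed values)
d = amcw_distance(math.pi, 10e6)
```

Note that the phase measurement is only unambiguous within one modulation period, which bounds the usable range for a given f_m.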
- a challenge associated with this method is the continuous change in the radiant power and thus in the melting process at the powder surface. The established process parameters are therefore likely no longer valid and must be redetermined.
- In order to implement the method described, the additive manufacturing device must be expanded to include components that can modulate the laser beam on the one hand and determine the phase shift on the other.
- FMCW: frequency-modulated continuous wave
- the frequencies of the carrier signal are modulated in the FMCW method, ie the wavelength of the laser is modulated as a function of a reference signal.
- a challenge that arises with this method is changing the frequency of the laser beam and thus the wavelength.
- the wavelength affects the absorption of energy at the material surface, so the process parameters for a print may have to be adjusted.
- the additive manufacturing device must be expanded to include components similar to those in the AMCW method.
- the reflectivity of a surface changes as a function of its temperature. If the reflectivity drops, the intensity with which the reflected laser pulse arrives at the photodetector also changes. This can be described as dR = (∂R/∂T)·dT + (∂R/∂N)·dN.
- R describes the reflection factor and T the temperature.
- the second term of the equation describes the change in the reflection factor as a function of the change in the number density of the charge carriers, dN, caused by the optical excitation.
- the functional expansion of the melting laser is particularly effective when structural changes are made to the additive manufacturing device. Since the laser is very sensitive to movement, the beam is directed onto the surface using mirrors and lenses. This means that the angle of incidence and the angle of reflection change depending on the position. Furthermore, situations can arise in which the reflection goes directly back into the focusing lens, i.e. when the angle of incidence of the light is close to 90 degrees. As described above, the additive manufacturing device must be expanded to include photodetectors, among other things. Since the reflected laser beam no longer has a fixed position in the installation space due to the continuous change in the angle of reflection, the additive manufacturing device must be designed so that the individual elements are arranged in such a way that the light transit time and light intensity measurements can be carried out precisely.
- the energy source used for melting the raw material travels a melting path on a powder layer in order to melt the powder of the powder layer.
- the melt path includes a sequence of points of each layer.
- a corresponding laser power, which can also be zero, and/or other process parameters are determined for each point. This makes it possible, for example, to take into account the effects of the laser's start-up and braking processes and to compensate for them by adjusting the process parameters.
- a path planning method is used to determine the melting path.
- a path planning method includes, for example, a step for assigning at least one process parameter to a point of the fusion path, for example to all points of the fusion path.
- the path planning method may include, for example, using an artificial intelligence method or, for example, a Markov decision process.
- the method may also include the method disclosed above for detecting defects in the powder layer to match the melt path to the powder layer.
- an adaptation method is therefore provided which can be carried out, for example, by means of an additional controller.
- This controller, for example a scan controller, determines an adjustment of at least one process parameter based on a determined deviation of at least one property of the production object from a target state of the property.
- the detection system has at least one infrared camera that is mounted coaxially to the melting laser, so that, for example, a central pixel of the sensor field of the infrared camera is located in front of the melt pool generated by the melting laser in the direction of movement.
- the images of the melt pool recorded by the infrared camera are compared with a target state, for example by means of a neural network, and the deviations from this are determined and evaluated.
- An adjustment of at least one process parameter, for example the laser power, is determined from the evaluation.
- a second neural network can therefore be provided, for example, which receives the property, for example the images captured by the infrared camera, as an input signal and calculates an adjustment from it.
- This neural network can be a feature detector or a state representation network, for example. Likewise, methods of conditional neural processes and/or models based on them can be used.
- the recorded images can be converted into a single image, for example after scanning, so that the maximum radiant power for each pixel is determined along the melting path. This makes it possible to determine for each pixel, for example, whether the powder has been completely melted.
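The conversion of the recorded image sequence into a single image of the maximum radiant power per pixel can be sketched as an element-wise running maximum:

```python
def max_image(frames):
    """Element-wise maximum over a sequence of equally sized images."""
    acc = [row[:] for row in frames[0]]          # copy the first frame
    for frame in frames[1:]:
        for i, row in enumerate(frame):
            for j, value in enumerate(row):
                if value > acc[i][j]:
                    acc[i][j] = value
    return acc

frames = [[[1, 5], [2, 2]],
          [[4, 1], [0, 9]]]          # two toy radiant-power frames
peak = max_image(frames)             # per-pixel peak along the melt path
```

A pixel whose peak never reaches the melting threshold would then indicate incompletely melted powder.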
- the F-theta lens 14 can cause refraction distortions that are difficult to compensate for when the reflected beam passes through the F-theta lens 14 again.
- a fixed mirror 15 is provided instead of the F-theta lens.
- in plan view, the fixed mirror 15 has the shape of a segment of a circle, with the scanner 8 being arranged at the center of the circle.
- the mirror surface of the fixed mirror 15 is inclined downwards in such a way that a beam 9 arriving horizontally from the scanner 8 is reflected vertically downwards onto the building layer 11.
- the scanner 8 has a mirror which rotates about the vertical axis (z-axis) in such a way that the beam 9 sweeps over the entire horizontal extension of the fixed mirror 15 and is reflected vertically downwards by the fixed mirror over this area. This produces a one-dimensional area in the shape of a segment of a circle on the construction layer 11, on which the beam 9 impinges.
- the laser 6, the mirror 7, the scanner 8 and the fixed mirror 15 are moved over the construction space in a constant arrangement relative to one another in the x-direction.
- a portion of the beam 9 reflected by the building layer 13, the reflected beam 9', is reflected back onto the scanner 8 via the fixed mirror. Since the mirror of the scanner 8 has already rotated further about the z-axis by a sufficiently precisely predeterminable value when the reflected beam 9' arrives, the reflected beam is not reflected directly back to the laser 6, but arrives slightly offset from it.
- a laser receiver 16 is provided, which is fixed relative to the laser 6, the mirror 7, the scanner 8 and the fixed mirror 15 and moves together with them along the x-axis.
- since the beam 9 is always aligned perpendicular to the building layer 11, and the path length of the beam 9 and the reflected beam 9' varies exactly with the height differences on the building layer 11, an exact depth determination using LiDAR can be carried out in all detected areas of the building layer 11.
- a modified F-theta lens 14' is provided with a partially specular surface 141.
- an annular fixed mirror 15' is also provided.
- Two laser sensors 16' are attached to a ring 17 in such a way that they can move on the ring.
- Both the partially reflecting surface 141 and the ring-shaped fixed mirror 15' are configured quasi-parabolically, so that the reflected beam 9' is always reflected onto the position of the ring 17.
- the sensors 16' are moved along the revolving ring 17 in such a way that the reflected beam 9' can be detected either by one or the other sensor 16'. Since the reflection can suddenly change from one side to the other when crossing over the center of the image, it is advantageous to arrange the sensors 16' opposite one another on both sides.
- the arrangement according to the second modification does not permit as high an accuracy as the arrangement according to the first modification, because the beam alignment is not always vertical. However, since considerably fewer components have to be moved relative to the installation space here (only the sensors 16'), the complexity is lower.
- the lidar sensor is calibrated online, i.e. with each image.
- with the calibration parameters it is now possible to create a disparity map from the lidar data. This is where the calibration parameters come into play, because they allow the disparity map to be calculated for the same image section that the stereo camera sees (lidar scans in radius X, i.e. the point cloud must be aligned with the stereo image).
- a global disparity map is created from the two disparity maps, which is much more precise than the individual maps. Both steps, calibration and fusion, can be implemented with neural networks.
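Before a learned fusion, a simple baseline for combining the two disparity maps is an inverse-variance weighted average per pixel; the variances below are assumed values (the document proposes neural networks for this step):

```python
def fuse_disparity(d_stereo, d_lidar, var_stereo, var_lidar):
    """Inverse-variance weighted fusion of two disparity estimates per pixel."""
    fused = []
    for row_s, row_l in zip(d_stereo, d_lidar):
        fused.append([
            (s / var_stereo + l / var_lidar) / (1 / var_stereo + 1 / var_lidar)
            for s, l in zip(row_s, row_l)
        ])
    return fused

# lidar assumed four times more certain than stereo at this pixel (toy values)
fused = fuse_disparity([[10.0]], [[12.0]], var_stereo=4.0, var_lidar=1.0)
```

The fused value lies closer to the more certain sensor, which is the behavior a learned fusion would also have to reproduce.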
- Defects arise on the one hand from factors that already exist before printing, for example the geometry is not suitable for printing or there were quality defects during the production of the powder.
- defects also occur during printing because process parameters such as the scanning speed of the laser do not match the current state of the print.
- the process parameters are determined and set before the manufacturing process. The determination can be made by simulating the manufacturing process and/or by creating samples.
- it is necessary to dynamically change the process parameters during runtime, in response to the defect probability existing at a point in the process.
- the manufacturing process is formulated as a Markov decision problem, which can be solved with an algorithm from the class of reinforcement learning.
- the algorithm continuously receives information about the manufacturing process, for example the temperature in the powder bed, and decides whether the process parameters need to be changed with the aim of reducing the probability of defects.
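The decision loop described above can be illustrated with a minimal tabular learner; the states, actions, and reward below are toy stand-ins for temperature observations and laser-parameter changes, not the actual process model:

```python
import random

random.seed(0)
states = ["too_cold", "ok", "too_hot"]          # toy process observations
actions = ["power_down", "keep", "power_up"]    # toy parameter changes
q = {(s, a): 0.0 for s in states for a in actions}

def reward(state, action):
    """Toy reward: corrective actions reduce the defect probability."""
    good = {("too_cold", "power_up"), ("ok", "keep"), ("too_hot", "power_down")}
    return 1.0 if (state, action) in good else -1.0

alpha = 0.5
for _ in range(200):                 # bandit-style value updates per observation
    s = random.choice(states)
    a = random.choice(actions)
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

best = {s: max(actions, key=lambda a: q[(s, a)]) for s in states}
```

A full reinforcement-learning formulation would additionally model the state transitions caused by each action; this sketch only shows the value-update mechanism.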
- alternatively, a model-based approach can be used.
- the environment is explicitly modeled in order to plan the best change in the process parameters.
- Planning means that there is an actual state and a target state.
- the planning algorithm then tries to find the best sequence of changes in the process parameters (policy) with the help of the model of the environment.
- the best sequence in this context is the one that leads to the lowest defect probability.
- model-based approaches require less data (sample efficiency), since many relationships are (locally) linear in reality. This means that only a few data points from the local environment are needed to learn the connection.
- Model-free approaches need significantly more examples, since they only learn the relationship between the change in the process parameters and the success (reward).
- the described disadvantage of model-free approaches disappears if precise simulations of the additive manufacturing process are available or if the number of printers in use whose data is accessible is large, because planning in model-based approaches is essentially more complex and more constrained by simplifying assumptions and approximations than is the case with model-free approaches.
- the challenge with model-free approaches that were trained in a simulation is the successful transfer to the hardware.
- Even with highly accurate simulations there is a so-called reality gap, i.e. the learned model recommends suboptimal changes to the process parameters, since the simulation model differs too much from reality in some or all dimensions.
- the dynamic behavior of the overall system is modeled as far as possible, for example a measurement error of the infrared cameras caused, for example, by quantization, or by losses from absorption or reflection at the lens or other components.
- a thermal drift of the sensors, latencies in the transmission, friction and other factors can be included in the simulation.
- thermocouples are introduced into the powder bed.
- a path in the powder bed is scanned, i.e., for example, traveled by means of the laser and thereby, for example, at least heated and/or at least partially welded; the measured values of the thermocouples are compared with the simulation, for example with a CFD (Computational Fluid Dynamics) model, and, if necessary, the parameters of the simulation are adjusted.
- CFD model: Computational Fluid Dynamics model
- Additive manufacturing devices from different manufacturers differ in many ways, even when using the same manufacturing process. This has an impact on the dynamics of the environment in the manufacturing process. It is difficult to simulate each individual additive manufacturing device in advance and adjust the parameters of the algorithm accordingly. Therefore, two methods are proposed to use the data from each additive manufacturing device to improve the algorithms. The methods come from the field of privacy-preserving machine learning, which makes it possible to protect sensitive data of individual customers without impairing learning.
- Each additive manufacturing device on which our software is installed continuously transmits process parameters and the states of its environment to a central server.
- confidential information such as the geometry of the component is modified so that the data can still be used to learn the algorithm, but the original geometry or the confidential information can no longer be determined.
- the algorithm is improved and the improvements are transmitted back to the additive manufacturing device.
- Each additive manufacturing device on which our software is installed has an additional computing unit that makes it possible to improve the model using the data generated in the machine.
- the printer already has a generic model when it is delivered, which is then adapted to the specific properties of the printer over its runtime through the learning process. After the model has been improved locally with the data from the additive manufacturing device, the changes in the model parameters are transmitted to a central server, which improves a global model with each update. The global model is then sent back to the additive manufacturing device to replace the old model.
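The scheme described (local improvement, central aggregation of parameter changes) corresponds to federated averaging; a toy sketch with plain parameter lists in place of a real model:

```python
def local_update(global_params, local_gradient, lr=0.1):
    """One local improvement step on a device (toy gradient step)."""
    return [p - lr * g for p, g in zip(global_params, local_gradient)]

def federated_average(updates):
    """Average the locally improved parameters into a new global model."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

global_model = [1.0, 2.0]                   # toy global parameters
device_grads = [[0.5, -0.5], [1.5, 0.5]]    # toy per-device gradients
updates = [local_update(global_model, g) for g in device_grads]
new_global = federated_average(updates)
```

Only the averaged parameters leave the devices; the raw process data and component geometry stay local, which is the privacy property the document aims for.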
- the manufacturing laser of the additive manufacturing device is used to detect the surface temperature or the shape of the workpiece.
- a separate detection device with a separate detection laser can also be provided.
- a camera system consisting of two image recording devices that record images at the same time is used.
- the camera system can consist of an image recording device that records images offset in time and space.
- the camera system has a further camera for recordings in the NIR/IR spectrum.
- one of the stereo cameras or the single camera can be used for this function if it has the ability to capture the NIR/IR spectrum.
- the camera pose is localized, among other things, on the basis of information from the rotary encoders of the motors that change the camera pose, from the inertial measurement unit (IMU) attached to the camera, or from an indoor localization technology (e.g. RFID, magnetometer, etc.).
- the camera pose can also be determined using a further camera and a marking on one of the housing walls.
- the exemplary embodiment according to FIG. 4 corresponds to the first exemplary embodiment and differs from it in the points listed below.
- the camera system 50 has a further camera module 60. This is connected in a rotationally fixed manner to the first camera module 52 and the second camera module 53 and is arranged on the carriage 51.
- the third camera module 60 is oriented upwards.
- the upper housing cover 43 usually consists of a glass pane.
- a grid 44 that can be seen from below is applied to this.
- the third camera module 60 can capture the grid 44 and from this determine its position on the x-y plane and its orientation.
- the camera systems are provided within the installation space 40. However, they can also be provided outside of the installation space 40. In this case, the installation space must be equipped with appropriate windows. This has the advantage that the camera systems are not exposed to the thermal stress that prevails in the installation space.
- the camera systems can be arranged above or on one or more of the four sides of the installation space.
- the device according to the invention for constructing a three-dimensional production object is shown as a laser sintering device.
- the invention is also applicable to direct energy deposition (DED) manufacturing devices. Since there is no longer a powder bed, the object must now be separated from the background during segmentation. Because the background does not recur with the same regularity as in the powder bed process, it should be noted during segmentation that the material application apparatus (MAA), which applies the material (powder or wire), e.g. a robot arm, must be separated from the production object, and the production object from the background. A structural change could be to paint the MAA in a uniform color that is not adopted by the material.
- the background can be shielded and colored in such a way that it can be easily segmented, e.g. by printing in a separate cell.
- the sensors do not have to be attached in the upper part of the installation space, but can either be attached directly to the MAA, on the side in or on the printer, or to an independent mobile device such as a robot arm. It is important that cavities and other areas that are only visible from a few perspectives can be fully reached, with attachment close to the MAA being advantageous.
- the provision of different components around the sensor is advantageous in order to protect it against the heat, as is the provision of different filter systems in order to reduce the high radiation intensity (light intensity) as well as the influence of UV/IR radiation on the (RGB) image.
- the advantage of additional devices results from the fact that there are few or no phases in which the radiation intensity generated by the MAA is low, such as in the powder bed process when the recoater is applying new powder.
- the path planning is limited to determining the trigger moments for the image recordings and, if the camera modules can be aligned independently, planning the camera alignment.
- the camera system 50 has the first camera module 52 and the second camera module 53 in order to carry out stereoscopic depth detection using two camera images.
- depth detection is possible using the grazing light method.
- the second camera module 53 is replaced by a projector which projects a light pattern (structured light) onto the building layer 11.
- the first camera module 52, which is offset from the second camera module 53 and thus also from the projector and is aligned with the building layer 11, takes images of the building layer 11 with the light pattern according to the camera system path. From the perspective of the first camera module 52, a change in the depth of the building layer 11 results in distortions of the light pattern.
- the changes in depth of the building layer 11 can be determined in a known manner on the basis of epipolar geometry and a point cloud can be generated according to the first exemplary embodiment.
- the high accuracy in the micrometer range achieved by the grazing light method is offset by higher costs due to the need for one or more projectors.
- the depth determination by the grazing light method can also be carried out in addition to the stereoscopic depth determination.
- a depth determination can also be carried out by means of radar.
- a radar sensor is provided on (outside of) or in the upper area of the installation space 10.
- the radar sensor has a radar emitter and a radar receiver.
- the radar emitter emits electromagnetic waves in the radio frequency range as a radar signal, which are reflected by the building layer 11 .
- the reflections of the radar signal are received by the radar receiver.
- the distance is determined from the signal propagation time, i.e. the time difference between sending and receiving the radar signal, and the propagation speed of the radar signal in the corresponding surrounding medium inside and/or outside of the installation space.
- the procedure is similar to lidar depth determination.
- the results from the radar depth determination can be combined/fused with the other depth determination methods, or replace individual other depth determination methods.
- a camera system path is a sequence of positions, orientations or image acquisition initiations of a camera system with one or more camera modules.
- An additive manufacturing device within the meaning of this document is any device for material-applying manufacturing, for example by LSM, DED (direct energy deposition) or an MAA.
- a partially reflecting mirror or a partially reflecting surface is one that reflects the radiation when irradiated from one side, while radiation irradiated from the other side is at least partially transmitted, at least to a greater extent than from the first side.
- 1 creation device: device for creating a three-dimensional object
- laser sintering device (LSM)
- printer
- second camera module (third embodiment)
- α: angle of inclination of the first camera module 52''
- β: angle of inclination of the second camera module 53''
Abstract
The invention relates to a method for preventing manufacturing defects during the additive manufacturing of a manufacturing object, the method comprising the following steps: providing a detection system in the additive manufacturing device for detecting at least one property of the manufacturing object; manufacturing the object in several manufacturing steps, the detection system determining the property or properties of the object during manufacture; comparing the property or properties with a target state of the property; determining deviations between the property or properties and the target state of the property; evaluating the deviation determined in the deviation-determination step; if deviations are found, determining an adaptation of at least one process parameter, on the basis of the deviation determined in the deviation-determination step, in order to reduce the deviation in the remainder of the manufacturing process; and applying the adaptation of the process parameter(s) determined in the adaptation-determination step.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102021106432.5A DE102021106432A1 (de) | 2021-03-16 | 2021-03-16 | Defekterkennung in der additiven Fertigung |
DE102021106429.5A DE102021106429A1 (de) | 2021-03-16 | 2021-03-16 | Defekterkennung in der additiven Fertigung |
DE102021106432.5 | 2021-03-16 | ||
DE102021106430.9 | 2021-03-16 | ||
DE102021106429.5 | 2021-03-16 | ||
DE102021106431.7A DE102021106431A1 (de) | 2021-03-16 | 2021-03-16 | Defekterkennung in der additiven Fertigung |
DE102021106430.9A DE102021106430A1 (de) | 2021-03-16 | 2021-03-16 | Defekterkennung in der additiven Fertigung |
DE102021106431.7 | 2021-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022194960A1 true WO2022194960A1 (fr) | 2022-09-22 |
Family
ID=81307210
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/056874 WO2022194956A1 (fr) | 2021-03-16 | 2022-03-16 | Prévention de défauts de fabrication au cours de la fabrication additive |
PCT/EP2022/056878 WO2022194960A1 (fr) | 2021-03-16 | 2022-03-16 | Détection de défauts dans la fabrication additive |
PCT/EP2022/056839 WO2022194939A1 (fr) | 2021-03-16 | 2022-03-16 | Détection de défauts de fabrication dans l'impression 3d |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/056874 WO2022194956A1 (fr) | 2021-03-16 | 2022-03-16 | Prévention de défauts de fabrication au cours de la fabrication additive |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/056839 WO2022194939A1 (fr) | 2021-03-16 | 2022-03-16 | Détection de défauts de fabrication dans l'impression 3d |
Country Status (1)
Country | Link |
---|---|
WO (3) | WO2022194956A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230342908A1 (en) * | 2022-04-22 | 2023-10-26 | Baker Hughes Oilfield Operations Llc | Distortion prediction for additive manufacturing using image analysis |
WO2024118871A1 (fr) * | 2022-12-01 | 2024-06-06 | Vulcanforms Inc. | Systèmes et procédés de détection de défauts de revêtement ultérieur pendant des processus de fabrication additive |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2024201314A1 (en) * | 2023-02-27 | 2024-09-12 | Howmedica Osteonics Corp. | Additive manufacturing layer defect identification and analysis using mask template |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150323318A1 (en) * | 2014-05-09 | 2015-11-12 | MTU Aero Engines AG | Device and method for generative production of at least one component area of a component |
WO2018191627A1 (fr) * | 2017-04-14 | 2018-10-18 | Desktop Metal, Inc. | Étalonnage d'imprimante 3d par vision artificielle |
US20200290154A1 (en) * | 2018-02-21 | 2020-09-17 | Sigma Labs, Inc. | Systems and methods for measuring radiated thermal energy during an additive manufacturing operation |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109074660B (zh) * | 2015-12-31 | 2022-04-12 | Ml 荷兰公司 | 单目相机实时三维捕获和即时反馈的方法和系统 |
WO2018234331A1 (fr) * | 2017-06-20 | 2018-12-27 | Carl Zeiss Ag | Procédé et dispositif de fabrication additive |
EP4007691A4 (fr) * | 2019-08-02 | 2023-11-08 | Origin Laboratories, Inc. | Procédé et système de commande de rétroaction intercouche et de détection de défaillance dans un processus de fabrication additive |
- 2022
- 2022-03-16 WO PCT/EP2022/056874 patent/WO2022194956A1/fr active Application Filing
- 2022-03-16 WO PCT/EP2022/056878 patent/WO2022194960A1/fr active Application Filing
- 2022-03-16 WO PCT/EP2022/056839 patent/WO2022194939A1/fr active Application Filing
Non-Patent Citations (3)
Title |
---|
B K FOSTER ET AL: "Optical, layerwise monitoring of powder bed fusion", PROCEEDINGS: 26TH ANNUAL INTERNATIONAL SOLID FREEFORM FABRICATION SYMPOSIUM - AN ADDITIVE MANUFACTURING CONFERENCE, 12 August 2015 (2015-08-12), pages 295 - 307, XP055536185, Retrieved from the Internet <URL:http://sffsymposium.engr.utexas.edu/sites/default/files/2015/2015-24-Foster.pdf> [retrieved on 20181218] * |
DI CATALDO SANTA ET AL: "Optimizing Quality Inspection and Control in Powder Bed Metal Additive Manufacturing: Challenges and Research Directions", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 109, no. 4, 4 February 2021 (2021-02-04), pages 326 - 346, XP011845009, ISSN: 0018-9219, [retrieved on 20210322], DOI: 10.1109/JPROC.2021.3054628 * |
SARAH K. EVERTON ET AL: "Review of in-situ process monitoring and in-situ metrology for metal additive manufacturing", MATERIALS & DESIGN, vol. 95, 23 January 2016 (2016-01-23), AMSTERDAM, NL, pages 431 - 445, XP055320137, ISSN: 0264-1275, DOI: 10.1016/j.matdes.2016.01.099 * |
Also Published As
Publication number | Publication date |
---|---|
WO2022194956A1 (fr) | 2022-09-22 |
WO2022194939A1 (fr) | 2022-09-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22716172 Country of ref document: EP Kind code of ref document: A1 |
122 | Ep: pct application non-entry in european phase |
Ref document number: 22716172 Country of ref document: EP Kind code of ref document: A1 |